ChatGPT has taken the world by storm since OpenAI launched the chatbot in November 2022. Its impact is being felt in schools and universities, in workplaces, and in the law. Legal experts share the possibilities and pitfalls of this new super tool.
University of New South Wales Professor Cath Ellis described it as a chatbot interface built on generative artificial intelligence (AI).
“Basically, how it works to the layperson is you can ask it a question, or you can ask it to write something, and it will have a good go at doing it for you,” said Ellis.
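For readers curious what that looks like in practice, the same question-and-answer exchange can be scripted against OpenAI’s API. The sketch below is illustrative only: it assumes the openai Python package is installed, an OPENAI_API_KEY environment variable is set, and the gpt-3.5-turbo model (the model behind the original ChatGPT) is available.

```python
# A minimal sketch of asking a chat model a question via the OpenAI API.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model behind the original ChatGPT
    messages=[
        {"role": "user", "content": "Explain what a will is, in plain language."}
    ],
)

print(response.choices[0].message.content)
```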
Some of the things people have asked it to do include writing a song in the style of Nick Cave (the legendary musician was not impressed), writing a cover letter, a recipe or even a will.
It has been reported ChatGPT can pass the US medical licensing exam and, more recently, the Bar exam. Ellis explained, “While it won’t pass with flying colours, the fact that it can pass the Bar exam to a passable standard should be of concern”.
She described the situation as something of a double-edged sword. While it threatens assessments such as the US Bar exam, it also offers a tool that could make searching through case law cheaper, more efficient, and more comprehensive.
“On the one hand, it can already do things we need future legal professionals to be able to do, but on the other hand, we need them to graduate knowing how to use these tools and how to put them to work effectively,” she said.
NSW and Queensland have banned the use of ChatGPT in public schools; universities, however, have yet to set clear policies. How it will affect law students remains to be seen.
“I think it’s inevitable that the way we currently assess students is going to have to change. It’s probably also going to prompt some curriculum changes,” said Ellis.
“But let’s not go down the rabbit hole of getting into a full arms race with our students.
“We need to focus instead on the fact that the learning isn’t happening. How can we stop, reset, and recalibrate what we’re doing?”
Matthew Golab, Director of Legal Informatics and Research and Development at Gilbert + Tobin, said that in its current form ChatGPT is not suited to big law, where high precision is required.
“ChatGPT is still in its infancy. From a legal perspective, these are general-purpose tools and there are well-known flaws where generative AI can completely fabricate and manufacture evidence,” said Golab.
He explained there are two aspects lawyers need to be across when it comes to these tools.
The first is the provision of advice to clients in specialist areas of technology law, helping them navigate these systems. The second is the range of legal functions within a firm that could utilise AI, such as reviewing documents and drafting or negotiating contracts.
“These types of generative tools could do an amazing job in being able to summarise, in very plain language, complex things,” he said.
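As a hedged illustration of the kind of plain-language summarisation Golab describes, a system prompt can instruct the model to rewrite legalese for a non-lawyer. The clause and prompt wording below are invented for the example, and the same assumptions about the openai package and API key apply.

```python
# A hypothetical illustration of plain-language summarisation of a legal
# clause. The clause text and prompts are invented for the example.
from openai import OpenAI

client = OpenAI()

clause = (
    "The lessee shall indemnify and hold harmless the lessor against all "
    "claims, demands and liabilities arising from the lessee's occupation "
    "or use of the premises."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Rewrite legal clauses in plain English for a non-lawyer."},
        {"role": "user", "content": clause},
    ],
)

print(response.choices[0].message.content)
```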
Golab imagined a future scenario where AustLII, a database of Australian legislation and judgments, utilised AI technology.
“It would be amazing if you could really simplify very complex regulatory and legislative questions and answers,” he said.
“Obviously AustLII has commercial arrangements with legal publishers so you can’t harvest and do things with that data, but there is amazing potential in generative AI when it comes to textual things.”
Golab explained we are a long way off from being able to use machines to comprehend legal text without human training.
“For example, you can perfect a system for one type of contract or very similar contracts, but it gets very complicated when you try to use those systems for just any contract out there,” he said.
“It’s only effective if the system is able to comprehend it and in terms of unsupervised learning, we’re a long way off.”
Some of the negatives of ChatGPT are that it was only trained on material up to 2021 and that it doesn’t cite its sources. Ellis highlighted another issue: the way in which it was trained.
“ChatGPT was trained on hundreds of billions of words. To give you a sense of how big that is, the whole of Wikipedia only makes up three per cent of what it was trained on,” said Ellis.
“It’s ingested an internet’s worth of data and the internet can be a pretty toxic place.
“In its trial mode, ChatGPT had a habit of blurting out toxic, racist, and sexist responses. To teach it not to do that, they employed humans who had to look at these responses and tag them as toxic.”
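Ellis’s description maps onto a standard supervised-learning pattern: humans tag examples, and a model learns from the tags. The toy sketch below, using scikit-learn with invented data, shows the shape of that pipeline; OpenAI’s actual approach, reported to involve reinforcement learning from human feedback and dedicated moderation models, is far more sophisticated.

```python
# Toy sketch of learning from human toxicity tags (invented data).
# This only illustrates the label-then-train pattern, not OpenAI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Model outputs reviewed by human annotators: 1 = tagged toxic, 0 = acceptable.
responses = [
    "You are worthless and stupid.",
    "Here is a plain-language summary of the clause.",
    "People like you should be silenced.",
    "A will is a legal document that sets out your wishes.",
]
tags = [1, 0, 1, 0]

# Train a simple text classifier on the human tags.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(responses, tags)

# Screen a new response before it reaches a user; text resembling the
# tagged examples should be flagged as toxic (1).
print(classifier.predict(["You are stupid and worthless."])[0])
```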
Recently, it came out that OpenAI had outsourced some of this labelling work to workers in Kenya, paying them less than $2 an hour.
“This isn’t just some kind of neutral, damage-free zone. There is human harm in the training of these bots,” said Ellis.
When asked if AI technology will one day replace humans, Ellis said, “There are certain things that humans will always be better at – moral and ethical judgment, creativity, imagination and innovation”.
“AI tools can only predict and do what they have been trained to do,” said Ellis.
“One of the great paradoxes is that these tools are actually improving and increasing the value of what only humans can do.”