
In a recent online panel event, four academics from the University of NSW (UNSW) discussed the implications of ChatGPT and Artificial Intelligence (AI) on society and the future of education.

Professor Toby Walsh from the UNSW engineering faculty began by saying: “Unless you’ve been living under a stone, I imagine you might have heard about ChatGPT which is an AI tool that was released on the last day of November and has really captured people’s imaginations.”

Walsh described some of the functions ChatGPT can perform, such as writing an essay, compiling a shopping list, suggesting ideas for a children’s birthday party, or even writing computer code.

Professor Lyria Bennett Moses from UNSW Law & Justice said that ChatGPT has the potential to be beneficial in a legal context.

“Thinking about legal practice, there’s a lot of text generated from very simple things such as an [attached] letter through to legal arguments and written submissions,” said Moses.

“I think it’s going to be really interesting to explore how these kinds of tools can help people do that better.”

In terms of how AI will change the future of legal education, Moses considered that it may be akin to the use of calculators in mathematics.

“If you think about the progression of mathematics teaching and the way tools are used, when you first start doing maths in primary school you don’t get a calculator,” she said.

“It’s all mental arithmetic, learning how to do long division – kids still learn that even though they might not use it in the rest of their lives… but it’s an important part of understanding the process.”

Moses predicted schools will continue to teach kids skills such as writing even though AI tools can do it for them. Much like the calculator example, students will learn how to do tasks without the tool first and then as they become more advanced, they can use the tool to tackle more complex problems.

Professor Cath Ellis from UNSW Arts, Design & Architecture echoed Moses’ remarks and said that having specialised knowledge of an area puts a person in a better position to navigate the tool.

“One thing we’re learning about this tool is that you need to know quite a lot and have the evaluative judgment skills to know the difference between what ‘not good enough’, ‘just good enough’ and ‘good enough’ looks like in order to write [effective] prompts to get it to do what you want,” said Ellis.

“There’s a lot of talk about [ChatGPT] making up references, it’ll put references in for you. But if you say to it ‘don’t fabricate the references’, it won’t.”

Moses explained that law school assessments may need to change in the future due to AI, but that what is important is testing the right skills to ensure graduates leave university prepared.

“In law, being able to articulate an argument well is the skill that we want our graduates to have,” said Moses.

“The analysis has to be good, and the research has to be done but at the end of the day, if you’re going to be arguing a matter in court, the persuasiveness of the way you say it is also a very important professional skill.

“If analysis is what we really want to be assessing, then writing an essay might not be the best way to check that the students have the analytical skills because… it sort of biases the marking against what you’re trying to teach them.”

Moses highlighted that we need to be teaching people how to use AI tools “ethically, legally, appropriately and responsibly.”

“Anyone can go and look at ChatGPT and see what it can do but the hardest thing to learn, if you’re already working in a job is to ask – is this okay?” said Moses.

“If you’re a journalist for example, can you get the bot to write your story for you and just submit it to your editor?”

Moses said there are two qualifiers to answering that question: honesty and issues of copyright. She said failing to disclose use of the tool to an employer may put someone at risk of breaching their employment contract, breaching professional journalism ethics, or even being considered to have obtained a financial benefit through deception.

“There are all sorts of problems you can run into if you’re not careful,” said Moses.

“There are student consequences of falling foul of the student code of conduct, potentially leading to academic misconduct, potentially leading to you [not getting] admitted because you’re not a fit and proper person.”

When asked if ChatGPT will replace lawyers, Moses said that it ultimately comes down to the quality of arguments. While AI may be appropriate for simple matters in cases that come up repeatedly with only a handful of variables, she said that arguments in court required a lawyer’s creativity.

“If you’re talking about representations in court, [they’re often] far more complex matters that are sufficiently contentious to end up in court in the first place,” said Moses.

“The lack of accuracy filter is going to be fundamentally part of your downfall, as is the lack of a certain kind of creativity.

“For example, no AI would ever have written the decision in Mabo.”

Moses asserted that AI would have come to a different conclusion in Mabo, the landmark High Court case that overturned the doctrine of terra nullius and recognised the land rights of Aboriginal and Torres Strait Islander peoples, as all it can do is look at previous cases and reformulate how previous courts have ruled.

These tools do not have the creativity to consider the wider context, how it played out in history, and fundamentally rethink the way in which cases should be resolved, Moses said.

“Most of the time when a matter is going to court, there is at least an element of having to go beyond what has come previously,” she said.

Moses did not rule out the potential for ChatGPT to be useful in courtrooms. Just recently, a US start-up used ChatGPT to feed arguments to a self-represented defendant in a ticketing matter.

“You can devise tools to help litigants write their own written submissions and even potentially do a first draft for the magistrate and judge,” said Moses.

“We have to think about all the legal issues which we haven’t really done yet, copyright is a big one in that context.

“Not only ChatGPT and any copyright claim by the software provider but also because it is re-crunching things that have previously existed, there’s the possibility of copyright being owned by someone in the underlying works that have been re-coagulated for you.”

The use of ChatGPT in legal practice made headlines in Colombia in early February after a judge used it to determine whether an autistic child was covered by his insurance for medical treatment.

Justice Juan Manuel Padilla asked ChatGPT: “Is an autistic minor exonerated from paying fees for their therapies?”

ChatGPT’s response aligned with Padilla’s final decision, which he said was also based on legal precedent. The case has drawn criticism from some of his peers.

Padilla defended his use of the AI tool, arguing it has the potential to improve Colombia’s oversaturated legal system.

“By asking questions to the application, we do not stop being judges, thinking beings,” said Padilla.

In 2022, Colombia passed legislation that recommended public lawyers utilise technology where possible to improve efficiency.

Colombia’s Supreme Court Justice Octavio Tejeiro said that ChatGPT had caused a stir within the legal community as people feared robots would replace lawyers and judges.

“The justice system should make the most of technology as a tool but always while following ethics and taking into account that the administrator of justice is ultimately a human being,” said Tejeiro.

“It must be seen as an instrument that serves the judge to improve his judgment. We cannot allow the tool to become more important than the person.”

Please note: ChatGPT was not used in the creation of this article.