
Contrary to popular opinion, there was no AI in Robodebt, and a leading researcher in the field wants Australians to understand that AI can in fact be a useful tool in government decision-making. But as the technology becomes more commonly used, is administrative law ready?

Throughout the Royal Commission into the Robodebt Scheme, Australians were collectively horrified by stories of despair and trauma. The question at the end of it all was: how could this have happened?

The word ‘Robodebt’ conjures up all sorts of negative technological connotations, evoking a sense of robots lurking in the shadows, about to take over and make life worse for us all. But that image is misleading, says Professor Anton van den Hengel, Director of the Centre for Augmented Reasoning at the University of Adelaide. On the contrary, he says, automation has been overwhelmingly positive for humanity.

It was just computers

“The truth is that automation is responsible for the quality of life that we lead,” van den Hengel tells LSJ.

[Image: 'Automation is responsible for the quality of life that we lead' - Professor Anton van den Hengel, Director of the Centre for Augmented Reasoning at the University of Adelaide]

Van den Hengel cites agriculture as an example of an industry that has benefited greatly from automation, with the invention and widespread use of technology like tractors saving humans time and energy.

But misunderstandings about what machine learning and AI are have led to confusion about Robodebt, he says: “The thing with Robodebt is that the technology that was used had nothing to do with machine learning. It was just computers.”

“Computers are a tool of automation. They enable us to be far more productive. And that means that our decisions have a bigger impact.”

Automation allows humans to spend more time on higher-level decisions – and this is where the Robodebt story comes in. Ultimately, van den Hengel says, automation serves to amplify human decisions. This means that if there is an ideology or agenda behind the decisions that are made, it will also be amplified.

“Computers … were used in Robodebt to automate the application of a really bad decision. AI wouldn’t have made that decision. AI itself doesn’t have an ideology and it doesn’t have an agenda,” he says.

‘Computers … were used in Robodebt to automate the application of a really bad decision. AI wouldn’t have made that decision.’

Van den Hengel adds that, despite prevailing beliefs to the contrary, no AI was used in Robodebt.

Unfortunately, he says, the disastrous program has led to a belief that AI can’t be applied to government activities because this will result in something akin to Robodebt.

“Nothing could be further from the truth,” he says.

Ask the right questions

What it comes down to, van den Hengel stresses, is giving AI the correct instructions: ask it the right questions, and it will give the right results.

Bias creeps in when those asking the questions don’t want the answer, or are looking for a different result.

But van den Hengel can see areas within government operations where AI would be beneficial.

“The advantages for AI in government are incredible. At the moment, government is inevitably a kind of one-size-fits-all process because that’s the only technology we have. Personalising that whole process [would] get much better results,” he explains.

Take social services budgets as an example, he suggests: AI could be used to determine the best ways to spend the budget and the best interventions to apply to members of the community.

AI, van den Hengel explains, is “very good at prediction”, so it would be able to predict a person’s trajectory if they were supplied with interventions like housing and education. This would allow spending to be targeted to the interventions that would have the biggest impact. It could even be used in our efforts to reduce greenhouse gas emissions, or manage health budgets.
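
To make that concrete, here is a minimal sketch of the kind of prediction van den Hengel is gesturing at – a model that scores how much a candidate intervention might improve a person’s predicted outcome. The data, feature names and model choice are illustrative assumptions for this article, not anything drawn from a real government system, and a real deployment would need careful causal analysis rather than this naive read-off.

```python
# Illustrative sketch only: synthetic data and made-up feature names,
# not a real social-services model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features per person: [age, years_unemployed, has_housing, has_training]
X = rng.random((1000, 4))
# Synthetic outcome: stable employment a year later (1 = yes).
y = (0.4 * X[:, 3] + 0.3 * X[:, 2] - 0.2 * X[:, 1]
     + rng.normal(0, 0.2, 1000) > 0.2).astype(int)

model = LogisticRegression().fit(X, y)

# For one person, estimate the predicted outcome with and without each
# candidate intervention, and rank interventions by predicted uplift.
person = np.array([0.5, 0.8, 0.0, 0.0])
baseline = model.predict_proba(person.reshape(1, -1))[0, 1]

interventions = {"housing": 2, "training": 3}  # feature index each sets to 1
for name, idx in sorted(interventions.items()):
    treated = person.copy()
    treated[idx] = 1.0
    uplift = model.predict_proba(treated.reshape(1, -1))[0, 1] - baseline
    print(f"{name}: predicted uplift {uplift:+.3f}")
```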

Van den Hengel can also see a role for AI in the running of Centrelink, which he believes needs a complete “rethink”.

“Most of the Centrelink process is just wasted human effort,” he says.

He feels that many of Centrelink’s decisions could be automated, with human workers available for things that require human connection and interaction, like outreach or fixing any problems that arise. This would lead to “much better outcomes”, says the researcher.

‘Many of Centrelink’s decisions could be automated, with human workers available for things that require human connection and interaction.’

“It’s a long way from perfect at the moment, and automating a lot of those processes with AI would create better outcomes for everybody involved; [and] it might leave the humans more time to go and actually proactively engage with Centrelink recipients. Maybe they could go out and find people who need help; maybe they could engage with Centrelink recipients to suggest new courses or new interventions. You could enable the humans to do what the humans are good at by not making them do a whole lot of what machines are good at,” van den Hengel says.

An irrational fear

He is bemused by the antipathy towards AI.

“You’ve used AI 20 times today, probably a hundred. We’re all on Google Maps and Facebook and YouTube and Netflix. Everything you do on your phone is enabled by AI and it continually gets it right and just does a fantastic job day after day,” he says.

“We all use it all day, every day, and yet the narrative in society isn’t ‘wow, how much time has AI saved me today?’ That’s the conversation we should be having about AI.”

Referring to concerns around large language models like ChatGPT, van den Hengel says it’s not just AI that is capable of hallucinating. Humans can also hallucinate, and even Google Search can throw up an incorrect result depending on the input.

What’s important, according to van den Hengel, is what you do with the results.

“The trouble with hallucinations is that people type something into ChatGPT as though they’re speaking to God. ChatGPT is trained on a whole lot of human-generated data. You need to treat it like Google. It’s maths, not magic.

“It’s not a technical problem. If you go into this system … and expect it to be an oracle of all truth, then you’re bound to be disappointed,” he says.

Van den Hengel is a proponent of embracing the changes that AI will bring, but says he understands the immediate human response of fear. Aiming this fear at Robodebt and ChatGPT, though, is “not rational”.

“The Terminator is not coming, and the machines are no closer to human intelligence than they ever were. All they’re doing is parroting our own intelligence back at us.

‘The machines are no closer to human intelligence than they ever were. All they’re doing is parroting our own intelligence back at us.’

“This is the transformative technology of our time,” van den Hengel says. “And the question for Australia is whether we’re going to sit on the side and watch the countries that have invested in it own and use this technology, or whether we’re going to actively engage.”

That’s not to say AI is without risk, he adds; but those risks stem from human intent, and human intent can be managed.

Just like internal combustion

Van den Hengel believes the application of law to AI will continue to evolve and become steadily more sophisticated. However, the laws may not be the type you would expect: he expects the legal framework to deal with the applications of AI as a tool, rather than with AI as a concept.

“We have rafts of laws about cars and how they’re driven. Cars are an application of internal combustion. We will wind up with rafts of laws about the way that AI is applied and really very little about AI itself, because trying to legislate about AI is like trying to legislate about internal combustion,” he says.

In an interesting twist, AI could be used for legal work too, taking on manual tasks like searching case law and drafting positions. Van den Hengel can see much of this work being automated by large language models like ChatGPT, and likens it to the change the accounting profession experienced with the advent of Excel. A “whole class” of accounting jobs was eliminated by computers, he says, and yet the profession has expanded “because the accountants are far more effective than they were before” – the result, he says, of technology adding value to accountants’ work.

“The law is going through the same transition, and the winners and losers will be determined by who can reimagine what value lawyers offer in our society, and make the same transition that accountants made,” he predicts.

AI and administrative law

Concerns about the use of AI in state government activities have previously been raised in the highest legal halls of NSW.

In late 2021, NSW Ombudsman Paul Miller released a special report about machine learning technology and its uses in state government decision-making.

“… administrative law – the legal framework that controls government action – does not necessarily stand in the way of machine technology adoption, but it will significantly control the purposes to which it can be put and the ways in which it can operate in any particular context,” Miller wrote in the report.

The report stressed that the use of AI or automated decision-making is not in itself an example of maladministration, and the technology can be lawfully used by government agencies “to assist in the exercise of their functions”. The report also specifically identified “accuracy and consistency in decision-making, as well as mitigating the risk of individual human bias” as areas where AI could improve administrative activities.

Because the technology is used to apply administrative power, administrative law governs what government agencies can do with AI. Provided AI is used in accordance with established principles of good practice, it has the potential to be a benefit. If not, the result could be “legal challenges, including a risk that administrative decisions or actions may later be held by a court to have been unlawful or invalid”.

According to the report, there are four major planks of administrative law that impact on the use of AI in government: proper authorisation, appropriate procedures, appropriate assessment and adequate documentation.

‘The four major planks of administrative law that impact on the use of AI in government: proper authorisation, appropriate procedures, appropriate assessment and adequate documentation.’

Machines need human decision-making

Miller suggested “greater visibility” is needed for the use of machine learning by state government agencies.

“The use of machine technology in the public sector is increasing, and there are many potential benefits, including in terms of efficiency, accuracy and consistency,” he said in a statement accompanying the release of the report.

“As an integrity agency, our concern is that agencies act in ways that are lawful, that decisions are made reasonably and transparently, and that individuals are treated fairly. Those requirements don’t go away when machines are being used.”

The NSW Ombudsman website contains guidelines and resources for government agencies considering using AI and other automated processes to make decisions. The resources note the risks inherent in the technology, and the importance of working within administrative law.

Under administrative law, statutory functions can be carried out only by a ‘legal person’, not by AI or automated technologies.

However, those technologies can be used to help a legal person to make a decision. In the case of discretionary functions, there also needs to be “a level of genuine and active decision-making” by humans. All decisions made using AI or automated decision-making need to be documented, along with details of the systems used.
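
As a rough illustration of what that documentation requirement could look like in practice, here is a minimal sketch of a decision record. The field names and format are assumptions made for this article, not a schema prescribed by the Ombudsman.

```python
# Hypothetical decision record; field names are illustrative, not a
# prescribed format.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    applicant_id: str
    system_name: str       # which automated system assisted
    system_version: str    # exact version, so the decision can be reproduced
    system_output: str     # what the system recommended
    final_decision: str    # what was actually decided
    decision_maker: str    # the legal person who made the decision
    reasons: str           # evidence of genuine, active human decision-making
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    applicant_id="A-1042",
    system_name="eligibility-screener",
    system_version="2.3.1",
    system_output="likely eligible",
    final_decision="eligible",
    decision_maker="officer-77",
    reasons="Checked the system's recommendation against the applicant's "
            "payslips; income is under the statutory threshold.",
)
print(json.dumps(asdict(record), indent=2))
```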

Those using automated decision-making need to be aware of algorithmic bias – “systematic and repetitive errors that result in unfair outcomes” – and of how to avoid it.
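
For a sense of what checking for that kind of bias can involve, here is a minimal sketch of one common audit: comparing outcome rates across groups. The data is synthetic, and the 80 per cent threshold – a rule of thumb borrowed from employment-selection auditing – is an illustrative assumption rather than a legal standard.

```python
# Minimal sketch of one common bias check: comparing outcome rates across
# groups. The data is synthetic; the 0.8 threshold is a rule of thumb.
from collections import defaultdict

decisions = [  # (group, approved) pairs from an imagined automated system
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Flag any group whose approval rate falls below 80% of the best group's.
best = max(rates.values())
for g, r in rates.items():
    if r < 0.8 * best:
        print(f"warning: {g} rate {r:.0%} is below 80% of best rate {best:.0%}")
```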