
As the popularity of artificial intelligence (AI) grows, so does the need to legislate and develop a framework for its use.

The European Union is among the jurisdictions at the forefront of legislating artificial intelligence. The European Union reached political agreement on the Artificial Intelligence Act (AI Act) in late 2023, and the final text was published in the Official Journal of the European Union on 12 July 2024. While the AI Act came into force on 1 August 2024, its provisions are coming into effect in stages.

Prohibitions under Article 5 of the AI Act are set to come into effect on 2 February 2025. They cover a range of practices, from the use of manipulative or deceptive techniques, social scoring, and predicting criminality based on profiling, to the untargeted scraping of facial images from the internet or CCTV footage and the use of “real-time” biometric identification systems. The prohibitions will be reviewed annually, and some are subject to exceptions.

As Jose-Miguel Bello Villarino, Senior Research Fellow at the University of Sydney Law School, explains, “the regulation is extremely complex, not for what it does, but … you need something else to be done before you could actually deploy the full force of the regulation.”

He explains that a significant portion of the regulation relates to high-risk systems and prohibited systems; Article 5 covers the prohibited systems. For high-risk systems, Bello Villarino says, technical standards or common specifications need to be in place for developers to check against, especially for matters such as safety thresholds or the protection of human rights.

As for the prohibitions coming into effect shortly, Bello Villarino says real-time facial recognition is one of the “more complicated” prohibitions. He explains there was a desire to prohibit facial recognition completely, but there may be instances where real-time facial recognition is needed. “What if a kid is lost? What if there’s a kidnapping? What if there’s an imminent terrorist attack?” he says.

Another prohibition relates to the subliminal exploitation of vulnerabilities. Bello Villarino says, “these are very tricky because, even after having studied that article, after having tried to make sense of that article, I am not sure about how that is going to be implemented.”

He gives the example of a person who likes to gamble. While gambling is legal, a website or browser may be able to identify, through targeted ads or other means, that the user is a gambler. If that information is used when the person is most vulnerable, based on the person’s other searches, “that seems to fit into the subliminal or the exploitation of the vulnerability,” he says.

Bello Villarino points out that even where the wording of the Act is clear, questions remain about how the provisions will be monitored and enforced, and how the level of threat and its threshold will be assessed. “It’s going to be complicated,” he says.

There are, however, exceptions to the prohibitions and ways to work around them. When it comes to certain assessments, Bello Villarino gives the example of a judge assessing the likelihood of a person committing another offence when determining whether bail should be granted. He explains that if the judge is examining the person’s previous criminal record, or factors that directly relate to the risk of the person absconding, then that would be a high-risk system.

However, if the court uses AI to assess the person’s personality traits, in the way a machine might assess the person’s brain activity, to determine whether the person is more at risk of committing a crime, then “that is a prohibited system … especially if you use that as a predictive tool,” he says. He acknowledges there are ways to get around the prohibition, for instance if the use is “directly related to the criminal activity” and not for profiling a person or assessing their personality traits.

Is there a need for AI regulation?

According to Bello Villarino, “there is an agreement in general among advanced liberal democracies … that some degree of regulation for AI based on risk is necessary.”

“I think the learning for Australia from the EU Act should be that there are some things that should be prohibited … [for example] use [of] AI to exploit people’s vulnerabilities, regardless of [how that is drafted], that should be prohibited. It’s not up for discussion…,” he says.

He believes there is a need for a stable regulatory regime. “The best thing about the EU [AI] Act is that you know what it is. You know what to expect. So, if you are a developer and you’re thinking about developing a system … you’re going to be able to access a market of 500 million people,” he says.

In terms of what he would like to see for Australia, Bello Villarino says “what I would insist [on] is the element of interoperability. The Australian market is very small both in developing and deploying … Australia is going to need to buy systems from outside and whatever is developed for Australia, it needs to be compatible with the rules [of] somewhere else.”

It is worth noting that penalties for non-compliance with the prohibitions under Article 5 of the EU AI Act will come into effect on 2 August 2025. Fines of up to 35 million euros, or up to seven per cent of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher, may be applied.

The Law Society of NSW has a number of resources available on its website to help NSW legal practitioners and Law Society members navigate AI in legal practice. The Law Society’s AI Taskforce, formed as part of the President’s 2024 priorities, released its predictions, tips and suggestions in December last year; they are available to read here.