
The mayor of a Victorian local council has threatened to sue OpenAI if it fails to correct ChatGPT’s false claims that he pleaded guilty to conspiring to bribe a foreign official and served time in prison.

Over a decade ago, Brian Hood of Hepburn Shire Council was a whistle-blower in the high-profile Securency bribery case. Hood alerted authorities and journalists to foreign bribery by agents of Securency, which at the time was a subsidiary of the Reserve Bank of Australia, the nation’s central bank.

In November last year, several people told Hood that ChatGPT was mischaracterising his role in the case, casting him as a perpetrator rather than the whistle-blower.

“I couldn’t believe it at first, but I went in and made some enquiries myself and got this very incorrect information coming back,” Hood told ABC News.

“It told me that I’d been charged with very serious criminal offences, that I’d been convicted of them and that I’d spent 30 months in jail.”

Hood is represented by Melbourne-based law firm Gordon Legal. On 21 March 2023, the firm sent a letter of concern to OpenAI demanding it fix the errors concerning its client within 28 days or face potential legal action.

OpenAI, which is based in San Francisco, has yet to respond.

James Naughton, a partner at Gordon Legal, told Reuters that if the case proceeds, it may become a significant test case for emerging artificial intelligence technology.

“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” Naughton said.

“He’s an elected official, his reputation is central to his role.”

Naughton said that, if successful, Hood could claim more than $200,000 in damages given the seriousness of the defamatory statements. He also argued that ChatGPT gives users a “false sense of accuracy” by failing to provide footnotes.

“It’s very difficult for somebody to look behind [ChatGPT’s response] to say, ‘how does the algorithm come up with that answer?’” said Naughton.

“It’s very opaque.”

Neerav Srivastava, a PhD candidate in law at Monash University, said that defamation law is fundamentally about protecting reputations, which today are shaped largely by online activity.

“What we’re really starting to understand in the past few years is how damaging online content can be,” said Srivastava.

“We need to look at defamation law in the context of online communications and whether the law itself needs to be changed.

“Identifying sufficient control and responsibility for a defamatory publication in a time of social media and generative AI is a novel and challenging issue.”


Professor Geoff Webb, of the Department of Data Science and AI at Monash University, said the case provides a “timely illustration of some of the dangers of the emerging new technology”.

“It is important to be aware that what chatbots say may not be true,” said Webb.

“It is also important for people to be aware they will reflect back the biases and inaccuracies inherent in the texts on which they are trained.”

Associate Professor Jason Bosland, Director of the Media and Communications Law Research Network at the University of Melbourne, told ABC News that he doubted the likelihood of a successful action.

“The prospects of successfully suing them are very slim, almost zero, and that’s because the claimant might be able to obtain a judgment in their favour against OpenAI in a local court, but it would need to be enforced in the US,” said Bosland.

“Back in 2010, the US Congress passed an act called the SPEECH Act which prevents the enforcement of foreign defamation judgments by US courts.”