By Lorraine Finlay

Content warning: this article explores issues relating to self-harm. If you or anyone you know needs help, contact Lifeline on 13 11 14. The Law Society of NSW’s Solicitor Outreach Service (SOS) is a confidential counselling service for NSW solicitors and can be contacted on 1800 592 296.

It took just two minutes for ChatGPT to tell a teenage girl how to die. Just two minutes.

In a report published earlier this month by the Center for Countering Digital Hate, researchers posing as teenagers on ChatGPT found that “within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse – sometimes even composing goodbye letters for children contemplating ending their lives”.

Their testing found that 53 per cent of harmful prompts produced dangerous outputs, and that bypassing safeguards was simple. Worse still, the report concluded that these failures “aren’t random bugs; they are deliberately designed features of systems built to generate human-like responses”.

Artificial intelligence (AI) chatbots, like ChatGPT, are increasingly weaving themselves into our daily lives, but Australia is dangerously behind in recognising the significant risks that they can pose.

AI chatbots have moved rapidly from being a technological novelty to an everyday presence. ChatGPT alone has over 122 million daily users globally and processes more than 2.5 billion prompts every day. The National AI Centre reported in June 2025 that 40 per cent of Australian small and medium-sized enterprises are currently adopting AI, with customer support and chatbots among the top five favoured applications.

There are a range of potentially significant benefits that AI chatbots can deliver, including faster and more accessible customer service, improved government service delivery, personalised education and training, improved accessibility for people with disabilities, enhanced crisis communication, and cost savings for businesses.

At the same time, these systems – whether operating as general-purpose assistants or highly specialised conversational agents – can also pose profound human rights risks.

Recent tragedies and legal disputes around the world have shown that AI chatbots are not simply benign tools. They can facilitate harmful behaviour, perpetuate discrimination, distort public debate, and erode privacy. The impact on our human rights – including the rights to life, health, privacy, freedom of expression, and non-discrimination – needs to be understood and addressed.

Emerging overseas legal cases

A number of current overseas cases highlight these risks. The most high-profile is Garcia v Character Technologies Inc & Google LLC, an American case in which the mother of 14-year-old Sewell Setzer III alleges that a Game of Thrones-themed ‘Daenerys’ chatbot on the Character.AI app engaged in sexually and emotionally abusive exchanges with her son, encouraged self-harm, and ultimately contributed to his suicide in February 2024. She is seeking to hold both the firm behind the Character.AI app, Character Technologies Inc, and Google responsible for her son’s death, claiming that he would still be alive were it not for the chatbot encouraging him to take his own life.

The lawsuit alleges that the teenager’s mental health severely declined soon after he commenced using the chatbot in April 2023 and that – amongst other harmful interactions – immediately before his suicide, the ‘Daenerys’ chatbot had urged him to “come home to me as soon as possible”. In May 2025, a Florida federal judge ruled that the majority of pleadings against both Character.AI and Google could proceed, most notably finding that chatbot outputs are not protected speech under the First Amendment and that wrongful death, negligence and deceptive trade practices claims should not be summarily dismissed. Importantly, the judge also allowed claims against Google to move forward, recognising potential platform liability.

While recognising that the case is yet to go to trial, this initial ruling is significant because it frames chatbot interactions as potentially actionable harm, not shielded by blanket free speech protections. It also raises important questions about duty of care, foreseeability of harm, and the adequacy of safety safeguards – all of which are issues equally relevant in other parts of the world, including Australia.

This is not an isolated case. A second suit filed in December 2024 by parents in Texas includes allegations that a companion chatbot informed a 17-year-old “that murdering his parents was a reasonable response to their limiting of his online activity” and encouraged him to self-harm. The legal pleadings claim that the Character.AI app “is a defective and deadly product that poses a clear and present danger to public health and safety” and that both the developers and distributors (in this case Character Technologies Inc and Google respectively) should be held legally responsible for launching and facilitating products with full knowledge of their inherent dangers, but without adequate safety features.

This case remains in the pre-trial phase and both the factual and legal issues raised are contested. However, together with Garcia, it raises important questions about the boundaries of legal responsibility for AI chatbot outputs, with potential implications for both domestic and transnational regulation.

Lessons for Australia

What does this mean for Australia?

These cases highlight the need for Australia to recognise the human rights impacts of emerging technologies, such as AI chatbots. AI chatbots offer genuine benefits from a human rights perspective, including making information more accessible, breaking down language barriers, and helping individuals access education, justice and essential services more quickly and fairly. At the same time, however, we need to recognise that they can pose significant human rights risks by spreading misinformation, enabling discrimination, undermining privacy, and exposing people to harmful or inappropriate content without adequate safeguards.

The human rights risks are real, and they should not be ignored.

Australia’s legal framework does not directly regulate AI chatbots. Instead, harms are currently left to be addressed through a patchwork of existing laws (such as defamation, privacy laws, consumer law, and duty of care in negligence) that are often themselves struggling to remain fit for purpose in the digital age. The absence of proactive AI-specific obligations means that human rights protections depend on after-the-fact litigation, which is often inaccessible to the vulnerable groups that are most at risk from harm.

To be clear, the answer is not to ban AI chatbots.

Rather, the response needs to be multi-faceted and nuanced to ensure that Australia is able to harness the benefits of these emerging technologies while also protecting against the harms.

A good place to start would be the introduction of an AI-specific duty of care that requires AI developers and deployers to take reasonable steps to prevent foreseeable harm. We need to think about the complete lifecycle of the technologies we use – from design and development through to deployment and ongoing operation – ensuring that safety and human rights protections are embedded in tools like AI chatbots from the very start.

Prioritising necessary reforms to existing laws is also essential. A key example here is the need to strengthen Australia’s privacy laws. It was almost three years ago that then Attorney-General Mark Dreyfus declared that Australia’s privacy laws were ‘out of date and not fit-for-purpose in our digital age’, but there is currently no clear timeline for the delivery of the promised second tranche of Privacy Act reforms.

Australia has also rightly begun exploring mandatory guardrails for high-risk AI uses, with a public consultation on a federal government Proposals Paper occurring in late 2024. Recently, however, the Productivity Commission has called for the introduction of mandatory guardrails to be paused until an expanded regulatory gap analysis has been completed. We cannot afford to continue delaying the delivery of necessary legal protections.

While it is important to leverage existing regulatory frameworks and maintain technology-neutral approaches where possible, the prolonged delay in implementing specific AI safeguards means the technology continues to advance rapidly without adequate protections – and, with it, real harms are already unfolding.

Mandatory guardrails for high-risk AI applications do not have to hinder innovation or dampen productivity gains. In fact, clear rules can foster trust and responsible development, creating a stronger foundation for sustainable technological progress.

AI chatbots may not be inherently high-risk in all contexts. Under the European Union’s Artificial Intelligence Act, the classification of AI chatbots depends on the specific use case and the potential impact of the chatbot’s deployment. The US cases outlined above highlight, however, the real potential for serious harm.

Conclusion

The legal challenges posed by AI chatbots are not hypothetical. From a human rights perspective, Australia’s current legal protections are insufficient. A proactive, rights-based regulatory framework is essential to protect individuals and embed accountability, while also fostering responsible innovation.

The pace of AI adoption means that law reform is urgent. AI chatbots are here to stay. The question is whether our legal system will evolve quickly enough to ensure they serve, rather than undermine, human rights.


Lorraine Finlay is Australia’s Human Rights Commissioner. She will be speaking on AI and government decision making at the Law Society’s Government Solicitors Conference on 3 September.