
Current Australian regulation is inadequate to stem the dissemination of hate speech and other disinformation via social media platforms.

Last month, a panel on information, disinformation and the future of journalism at the Athens Democracy Forum examined an alarming phenomenon: the capacity of social media platforms to enable the widespread, rapid dissemination of disinformation.

Globally, 59 per cent of the world’s population uses social media, with users spending an average of 2 hours and 29 minutes on various platforms daily (according to research by Global WebIndex, published July 2022).

To put it into perspective, that’s at least 4.7 billion people worldwide using social media.

While this is heaven for advertisers and, in theory, a recipe for borderless global unity, it also creates a disturbing capacity for the widespread and rapid dissemination of false, misleading and malicious disinformation in the form of conspiracy theories, abuse, and doctored imagery and video.

Inadequate controls in Australia

In Australia, that capacity is particularly high, given heavy social media use combined with inadequate regulation.

Last year, Western Sydney University, Queensland University of Technology and the University of Canberra released a report on Australian adult media literacy. Their 2020 survey of more than 3,500 Australian adults indicated that, although 83 per cent of respondents used social media daily, they lacked confidence in their ability to identify misinformation (fake news).

In March this year, the government released the Australian Communications and Media Authority (ACMA) report on the inadequacy of digital platforms’ disinformation and news quality controls.

The report highlights that 82 per cent of Australians reported encountering misinformation about COVID-19 over the previous 18 months, and that false information is most likely to be encountered on Facebook and Twitter.

ACMA traced the typical spread of misinformation to “highly emotive and engaging posts within small online conspiracy groups”. These posts, or the information in them, are then amplified by influencers and public figures (including politicians and celebrities), and through media coverage.


The report stemmed from the Australian government’s original request for digital platforms operating in Australia to develop a voluntary code of practice to address disinformation and news quality online.

In February last year, the code was launched by not-for-profit industry advocacy association Digital Industry Group Inc (DIGI) and has subsequently been adopted by Google, Facebook, Microsoft, Twitter, TikTok, Redbubble, Apple and Adobe. ACMA is monitoring platforms’ measures and the implementation of code arrangements, with the intention of providing additional advice to government by the end of the 2022–23 financial year.

Regulation explained

Fiona Martin is Associate Professor in Online and Convergent Media at the University of Sydney.

In May, Digital Platform Regulation: Global Perspectives on Internet Governance, the collection of essays she co-authored and edited with Terry Flew, brought together multiple authors and academics to explore the impact of monopolised social media platforms on journalism, responsibility for malicious or incendiary posts, and the case for clear legislation that differs from the rules applied to traditional media and replaces voluntary codes of conduct (self-regulation).

Fiona Martin, co-author and editor of Digital Platform Regulation: Global Perspectives on Internet Governance

Martin explains: “There are a couple of different forms of regulation. First, there are the community standards which the platforms put into place to control what users put on platforms, and each is different according to the platform. Then, there are the terms of service, each peculiar to the platform, and then there’s a voluntary code on mis- and disinformation, which was put into play in February of 2021. That was developed by the major platforms together with DIGI. It was updated in October of 2021 and reviewed this year, with a lot of submissions. Essentially, the review found that there were a lot of concerns about the voluntary code of practice.”

Martin adds, “Australian law has very little purchase on global platforms. We can introduce laws, as we did with the news media bargaining code, but that wasn’t even actioned.”

She cites the recent criminal case brought by mining magnate Andrew ‘Twiggy’ Forrest against Facebook over advertising scams that used his image to sell cryptocurrency. Facebook didn’t attend the first hearing because it didn’t believe the West Australian court had jurisdiction to hear the case.


“The main regulatory lever is the Australian code of practice for disinformation and misinformation, but apart from that we’re dependent on platforms to remove misinformation or disinformation if we report it, flag it, and they accept our reports. Or if the government or the police or an MP publicly flags that content.”

Codes of practice and community managers: some solutions

The European Union brought in a code of practice in June this year, Martin explains. “It’s a voluntary code of practice so it’s very similar to the one we’ve got.”

One aspect it includes, which Martin would like to see introduced into Australian regulation, is the demonetising of disinformation: creators or influencers, on Instagram for example, who are found to be spreading disinformation can have their capacity to earn money through ads removed.

“For any action to be taken, our code requires the misinformation or disinformation to pose a ‘serious and imminent’ threat of harm. To me, that’s a problem in terms of the way that some people dribble out misinformation over time that can have cumulative impacts, fracturing communities or lessening trust in public institutions and news bodies.”

Martin says, “Research that ACMA did suggests that very few people know that they can flag content. Less than half of Australians (48 per cent) know they can ask for content to be removed, and only 7 per cent have seen or experienced it being removed.”

The police aren’t properly educated in how to deal with these cases and they have very few powers, Martin says. “We have a real problem around the extent to which we can take action when platforms do not remove defamatory content or really highly abusive disinformation or misinformation.”

She concludes, “Legal practitioners should be recommending that anyone who has a business and is publishing on social media platforms employ a professional community manager, because those are the people who understand the legal conditions of publishing on social media and can prevent and protect against the spread of disinformation or misinformation on pages, or in groups and comments on posts.”

Read more

Global statistics on social media use 2022

Report on Australian adult media literacy

Australian Communications and Media Authority (ACMA) report

European 2022 Code of Practice on Disinformation

Digital Platform Regulation: Global Perspectives on Internet Governance

Further resources

DIGI provides information on making a misinformation report to specific social media platforms here.

Mis- and disinformation voluntary code

To fact-check material, use RMIT ABC Fact Check, AAP FactCheck and AFP Fact Check.

Signatories to the voluntary DIGI code provide annual transparency reports here.