
Seeing is believing until it isn’t. As we navigate a brave new world of digital deception, fostering public awareness, promoting responsible AI use, and maintaining a firm grasp on ethical considerations will be paramount.

I am part of a generation that grew up on the cusp of an analogue and digital world. My fondest childhood memories did not involve a device; they were bike riding with friends, playing basketball at the local courts, and visiting the local video store. The internet was still a novelty, a Wild West, if you will. There were no rules; we were still figuring it out.

Movies and TV were how we consumed content. Before today’s sophisticated digital manipulations, Hollywood was already experimenting with technology that allowed actors to be inserted into real-life footage. One of the most iconic examples for me is the 1994 film Forrest Gump, in which the titular character, portrayed by Tom Hanks, shakes hands with President John F. Kennedy.

This scene was ground-breaking for its time, seamlessly blending archival footage of JFK with a newly shot performance by Hanks. While not a deepfake in the modern AI-driven sense, it employed similar techniques to achieve a convincing illusion. Hanks acted against a blue screen; his image was later composited into the historical footage. JFK’s image was subtly animated to match Hanks’ movements, and JFK’s voice was replaced by an impersonator, creating the impression of a genuine interaction.

This pioneering use of digital compositing and image manipulation in Forrest Gump marked a significant step towards the deepfake technology we know today. It demonstrated the potential for cinema to transcend the limitations of time and reality, allowing fictional characters to interact with historical figures in a seemingly authentic manner.

However, it also raised ethical questions about blurring the lines between fact and fiction. Was it acceptable to manipulate historical footage for entertainment purposes? Could this technology be misused to create misleading or even harmful content? These concerns are even more relevant today as deepfake technology becomes increasingly accessible and sophisticated, sparking an arms race: a technology war pitting malicious deepfake creators against detectors. So, how can we keep up with the challenges ahead, and can this technology be used for good?

Humble beginnings

By the late 1990s and early 2000s, Hollywood had mastered the manipulation of images and video. But for those of us who dabbled in creating websites on Geocities or Angelfire, our image tool of choice was Microsoft Paint. That was until Photoshop came along, a humble piece of image-editing software that quietly revolutionised the way we perceive and manipulate visual content.

However, it didn’t take long for users to realise Photoshop’s potential to bend the truth. With a few clicks of a mouse, anyone could alter reality, erase blemishes, slim down waistlines, or even transplant heads onto different bodies. The era of “cheap fakes” had begun.

Tabloid magazines quickly embraced this newfound power, using Photoshop to create sensationalised covers and fuel celebrity gossip. Politicians and public figures found themselves at the mercy of digital manipulators, their images distorted to serve various agendas.

But “cheap” downplays the achievement of such fakery, as without the assistance of AI, the skill level required to pull off a convincing image was immense, not to mention the patience.

We’ve come a long way since then, as the underlying technology has become far more sophisticated. The ease with which anyone can create a convincing, manipulated image or video is alarming when you consider the potentially harmful implications. What once took an entire team of professionals and expensive equipment can now be done by someone with no technical expertise using an app on their phone. Enter the era of deepfakes.

Mimi Zou, Professor and Head of the School of Private & Commercial Law at UNSW, has been working on how to regulate deepfakes for the past three years.

Zou explains to the Journal that the term “deepfake” is a combination of the words “deep learning” and “fake” and refers to artificial intelligence-based technology used to produce or alter video and audio content with a high degree of realism. 

The development of deepfake technology has been driven by the increasing availability of large datasets and advances in machine learning algorithms, particularly generative models such as Generative Adversarial Networks (GANs) and diffusion models.

Introduced in 2014 by Ian Goodfellow and his colleagues, a GAN is a framework in which two neural networks, a generator and a discriminator, are pitted against each other to create realistic and convincing images or video.

A simple analogy is to consider it a competition between two artists. The generator ‘artist’ will attempt to replicate famous paintings, while the discriminator ‘artist’ will try to spot inconsistencies and provide feedback for the generator. This back-and-forth continues until a realistic replica is produced and the discriminator can no longer tell the difference between a fake and the real deal.
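For the technically curious, the core of that contest can be sketched in a few dozen lines of code. The example below is a toy illustration only (it assumes PyTorch and uses simple 2-D points in place of images); it is not how any particular deepfake app is built, but it shows the generator-versus-discriminator loop in miniature.

```python
# Toy GAN sketch (illustrative only; PyTorch assumed). The generator learns to
# mimic a simple 2-D "real" distribution while the discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

def real_samples(n):
    # Stand-in "real" data: points drawn from a Gaussian centred at (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1. Discriminator: label real samples 1 and generated samples 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Generator: try to make the discriminator label its output as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should cluster near (2, 2)
```

Swap the 2-D points for images and scale the networks up enormously, and you have the basic recipe behind many face-generation and face-swap systems.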

Diffusion models are also used to create images or data. Imagine a picture slowly fading away until it’s just a blur. A diffusion model learns how to do this fading and then learns to reverse it—bringing a blurry image back into focus. The model can generate an image by starting with a random blur and developing a brand-new clear picture.
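The same idea can be sketched in code. The toy example below (again assuming PyTorch, with 2-D points standing in for images) trains a small network to predict the noise added at each step and then reverses the process to “develop” new samples from pure noise. It is a simplified, DDPM-style illustration, not a production image generator.

```python
# Toy denoising-diffusion sketch (illustrative only; PyTorch assumed).
import torch
import torch.nn as nn

T = 100                                  # number of diffusion steps
betas = torch.linspace(1e-4, 0.05, T)    # how much noise each step adds
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def real_samples(n):
    # Stand-in "real" data: 2-D points near (2, 2), playing the role of images.
    return torch.randn(n, 2) * 0.3 + 2.0

# Tiny network that predicts the added noise from a noisy point and its timestep.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(3000):
    x0 = real_samples(128)
    t = torch.randint(0, T, (128,))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps          # the "fading away"
    pred = model(torch.cat([xt, t.float().unsqueeze(1) / T], dim=1))
    loss = ((pred - eps) ** 2).mean()                    # learn to undo it
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                                    # bring noise back into focus
    x = torch.randn(5, 2)                                # start from pure noise
    for t in reversed(range(T)):
        t_in = torch.full((5, 1), t / T)
        eps_pred = model(torch.cat([x, t_in], dim=1))
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps_pred) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
print(x)  # generated points should land near (2, 2)
```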

While introduced in 2015, diffusion models didn’t take off until 2020, a turning point that led to subsequent advancements and applications such as OpenAI’s text-to-image generator DALL-E 2 and, more recently, its text-to-video generator, Sora.

Zou says that GANs are one of the most common technological backbones of deepfake technologies.

“We are seeing, particularly with great advancements in generative AI, that these technologies have enabled everyday users and applications that also allow for even real-time deepfakes,” Zou says.

This technology laid the groundwork for Face Swap filters on social media platforms such as Snapchat, Instagram and TikTok, as well as popular standalone apps like FaceApp, Zao, and Reface.

Zou shares her experiences using these apps, which have allowed people like her, who are into political satire, to have a bit of fun. Whether that’s face-swapping or superimposing your own face onto popular movie scenes, you can make it happen with just a smartphone.

I tried one of these apps, and with a single photo and a few taps, the app inserted me into some popular movies and TV shows. The results ranged from ridiculous to passable, but that’s fine, as ultimately, it’s just a bit of light-hearted entertainment, right?

Well, as with most things, we must draw the line somewhere. For deepfakes, that line is where the fun stops and we risk the technology falling into the hands of those with malicious intent.


A technological arms race

Zou says we’re seeing a surge in the development of detectors. The now widespread use of deepfakes amplifies the dangers of misinformation and propaganda. With elections held around the world throughout 2024, there’s no doubt that deepfakes have played a significant role in spreading both.

Earlier this year, a robocall impersonating US President Joe Biden went out to New Hampshire voters, advising them not to vote in the state’s presidential primary election, falsely asserting that a vote in the primary would prevent voters from being able to participate in the November general election. The voice, generated by artificial intelligence, sounded convincing but was confirmed to be a deepfake.

Swapping my face with Bruce Lee or Keanu Reeves for a laugh isn’t harming anyone. Still, when footage is created of a national leader saying or doing something they didn’t, the dangers are real, and the potential consequences could be disastrous.

In March 2022, three weeks into Russia’s invasion of Ukraine, a one-minute video of Ukrainian President Volodymyr Zelensky calling for his soldiers to lay down their arms and surrender to Russia appeared online. Viewers quickly pointed out that Zelensky’s accent was off and that his head movement and voice did not seem authentic when watched closely. 

Thankfully, in both instances the fakes were quickly debunked and confirmed as such. However, it’s not just politicians and celebrities at the mercy of this technology; scammers can target anyone, and some aren’t so lucky. In January this year, a Hong Kong-based finance worker at a multinational firm was tricked into paying HK$200 million to fraudsters who used deepfake technology to impersonate the company’s CFO in a video conference call. The worker was initially suspicious but was convinced and transferred the funds after seeing colleagues they recognised on the call, who were also deepfakes.

As the technology behind deepfakes becomes more sophisticated, so do the methods for detecting them. Researchers and tech companies are developing various tools and techniques to identify the subtle inconsistencies and artefacts that often betray the artificial nature of deepfakes. These methods include analysing facial movements, inconsistencies in lighting and shadows, and even detecting the digital fingerprints left by the AI algorithms used to create them.
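In practice, many detectors boil down to a classifier trained on examples of real and fake footage. The sketch below is purely illustrative (it assumes PyTorch, uses random tensors as stand-ins for labelled face crops, and does not reflect any specific commercial tool), but it shows the general shape of the approach: feed in a face crop, get back a probability that it is fake.

```python
# Illustrative deepfake-detector sketch (PyTorch assumed; not a real product).
# A small convolutional network is trained to label face crops as real or fake,
# learning to pick up subtle artefacts humans might miss.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                    # one logit: how likely the crop is fake
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: in reality these would be face crops extracted from labelled
# real and deepfaked videos, not random pixels.
images = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()   # 1 = fake, 0 = real

logits = detector(images)                # one training step
loss = loss_fn(logits, labels)
loss.backward()
opt.step()
print(torch.sigmoid(logits[:3]))         # per-crop "fake" probabilities
```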

Zou says detection technology has rapidly evolved, and it’s in society’s interest that we have powerful detection tools. But detection is also becoming more challenging; it’s a game of cat and mouse, and fakes will only get harder to spot.

While these detection methods show promise, they are not foolproof. Deepfake creators constantly adapt their techniques to evade detection, creating an arms race between creators and detectors. Additionally, the sheer volume of media content created and shared online makes it challenging to comprehensively analyse and verify every piece of content for authenticity.

“When you make any advancements in the tech detecting tools, you’re also creating the space for advancements in the use of the same technology, effectively for creating deepfakes,” Zou says.

She says this “arms race” is not just between content creators and detectors but also between nations working to become an AI superpower.

Where does the liability lie?

As deepfakes’ potential for harm grows, the question of legal liability becomes increasingly pressing: who should be held accountable?

Raymond Sun, a tech lawyer at Herbert Smith Freehills and a passionate programmer, explains that a deepfake ecosystem of four key stakeholders needs to be considered. First is the company developing the deepfake apps, then the perpetrator who uses the app, followed by the platforms that disseminate the content, and lastly, the end user who views the content.

He notes that while developers should have some accountability, the biggest challenge is balancing regulations and innovation. “It’s often a very tricky balance to decide,” Sun says.

Mimi Zou shares this view and warns that this is where regulators need to be very careful. “If you impose too much legal regulations, and in a way that’s not appropriate, you could stifle some of the positive innovation that are coming out,” Zou says.

“But I think there needs to be some degree of obligation on the actual technology providers, the services of these deepfake technology.” 

What about the perpetrator or end users who share the content?

Sun believes that while we have existing frameworks to address fraud arising from deepfakes, such as defamation and consumer protection laws, the real challenge is enforcement, because perpetrators and those sharing the content often publish it anonymously.

“This is a common issue for any digital content, often abusive or malicious content, and the ability for the content to be easily distributed online by an anonymous poster only adds to the challenge,” Sun says.

Zou acknowledges that the liability for users can be pretty significant, as the content they create might infringe copyright and privacy laws. However, she doesn’t see regulators imposing the onus on them, at least in the interim.

This leaves the “big tech” or social media platforms that facilitate and distribute deepfakes at scale. “From a regulatory perspective, just practically, they are more accessible to regulate, and the thinking is to put responsibility on those social media platforms to report malicious deepfakes and either remove them if requested or to detect them…” Sun says.

Zou acknowledges that international regulators are now looking more closely at the platforms that disseminate the content.

“It’s the social media platforms, very large online platforms, that already have a lot of legal obligations, and I think that’s been the target,” Zou says.

In the case of Joe Biden’s robocall, while the Federal Communications Commission (FCC) found that political consultant Steve Kramer was behind the fake call and proposed he pay a fine of US$6 million, it also issued a separate proposal for the telecom company Lingo Telecom to pay US$1 million for its role in transmitting the call.


How can Australia learn from international regulations?

Deepfakes have been used primarily in pornography, and the only regulation Australia has seen that targets them (while not mentioning deepfakes explicitly) is the introduction of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 in June this year, which prohibits the creation of sexually abusive deepfakes, especially those that target children. So, are there lessons we can learn from our international counterparts?

China and the European Union have taken contrasting approaches to address this issue, with China opting for strict regulations and the EU leaning towards a risk-based framework.

China is known for its stringent internet controls. If you have ever visited the country, you would know that access to Facebook, YouTube, or WhatsApp isn’t possible without workarounds like a Virtual Private Network (VPN) service or by turning on international roaming on your mobile device. So, it’s not surprising that in January 2023, the Chinese government introduced the strictest legislation in the world, taking a firm stance against the misuse of this technology. 

Key provisions include mandatory labelling of deepfake content, the requirement for explicit consent from individuals depicted in deepfakes, and a strict prohibition on content that harms national security, disrupts the social order, or infringes on individual rights.

While these measures aim to protect citizens from potential harm, they also raise concerns about their impact on freedom of expression and innovation. The broad definition of prohibited content and the potential for strict enforcement could stifle legitimate uses of deepfake technology, such as in the entertainment industry or for educational purposes.

In contrast, the EU AI Act adopts a risk-based approach to regulating AI, including deepfakes. It categorises AI systems based on their potential impact on fundamental rights and safety. High-risk AI systems, which could include those used to create deepfakes for malicious purposes, face stricter requirements, including mandatory conformity assessments and ongoing monitoring.

The Act also emphasises transparency, accountability, and human oversight in AI systems. It prohibits certain AI practices deemed unacceptable, such as social scoring by governments and the use of AI for indiscriminate surveillance.

Sun says it’s still too early to tell if the EU AI Act will be effective, but if it does well in broadly regulating AI systems, it should also work well in regulating deepfakes. He also believes that this risk-based approach would align with what could be implemented in Australia.

“Basically, the more risky the AI system, the more strict the rules that apply to it. So, the risk-based approach is something that we could look into, but that doesn’t necessarily mean we’ll copy the text of the EU AI Act,” Sun says.

Zou says that while the EU AI Act has a specific provision on deepfakes, critics say this legislation will destroy AI innovation.

She notes that there is a fine line between deepfakes and other generative AI, and both uses of the technology require careful consideration of transparency. This means that when we interact with AI-generated or manipulated content, whether a deepfake or not, we should know that the content was created through AI.

“I think transparency is something that would be very important for Australia to think about … not just limited to deepfakes, but also any sort of generative AI content, because in order to build trust, I think it’s really important that we know that the content is actually being manipulated,” Zou says.

However, Zou believes legislation can only go so far, and using technical tools like digital watermarking will further improve transparency. Digital watermarking embeds a hidden identifier within a digital file, such as an image, video, or audio file. The identifier is invisible to human senses and only detectable using specialised software.
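To make the idea concrete, here is a deliberately simple sketch of one watermarking technique, least-significant-bit embedding, which hides a short label in an image’s pixel values. It is illustrative only (it assumes NumPy and a stand-in image); real watermarking and provenance schemes used for AI-generated content are considerably more robust and harder to remove.

```python
# Minimal least-significant-bit watermark sketch (illustrative only; NumPy assumed).
import numpy as np

def embed(pixels: np.ndarray, message: str) -> np.ndarray:
    # Hide each bit of the message in the lowest bit of successive pixel values.
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = pixels.flatten()                      # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, length: int) -> str:
    # Read the lowest bit of the first length * 8 pixel values back out.
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
marked = embed(image, "AI-generated")

print(extract(marked, len("AI-generated")))        # -> "AI-generated"
print(np.abs(marked.astype(int) - image).max())    # at most 1: invisible to the eye
```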

“… It’s once these technical tools make progress, just like detection, just like the watermarking. I think we will see a better ecosystem where implementation of these laws that require transparency will come into being,” Zou says.

As the world grapples with the challenges posed by deepfakes, it remains to be seen which approach will prove more effective in curbing misuse while fostering responsible innovation. The global community will be watching closely as these regulations unfold and potentially shape the future of AI governance worldwide.

Zou says that in some parts of society, there is genuine concern about the risk posed by AI, but regulators are pretty slow to intervene.

“…then you have really powerful tech companies that either want regulation, because they’ll squeeze out the competition from smaller players, or they want self-regulation, which doesn’t quite work either,” Zou says.

She notes that it’s about accountability and clearly identifying who is legally liable when things go wrong. “I don’t think any regulators have got this right at the moment.”


Public awareness

It’s only a matter of time before deepfakes are used in mainstream advertising. We’re already seeing them in online ads using AI-generated people. But if we question what we see when it comes to politicians, how will we ever believe a manufacturer’s claim about their “revolutionary” products?

Deepfakes can be used to generate misleading representations and false testimonials, deceiving consumers, eroding trust in brands, and inviting costly legal battles. Comparative advertising, already a sensitive area, could be further complicated by using deepfakes to unfairly portray competitors or their products. The line between creative marketing and deceptive practices could become increasingly blurred. Are public awareness and education the keys to mitigating risks? And how can policymakers contribute to the effort?

Zou thinks that people are already quite wary about deepfake content, and while there is some degree of awareness, education could be strengthened.

She says that in the legal profession, law firms should focus on setting up responsible AI practices, but lawyers can only do their best to contribute to this sort of legal development and to clarifying liability. As a profession, she believes, we have the power to do this; we are in the driver’s seat.

“I think promoting not just responsible AI from an ethical point of view, but also, being kind of protectors of people’s rights, being able to also defend the general public from the harms, whether it’s in lawsuits or law reform, that’s something that our profession should really pay closer attention to.”

The leaps we have seen over the last few years, mainly in general-purpose large language models and generative AI systems like ChatGPT, have led to public distrust of the technology.

She thinks the legal profession should ultimately intervene in this area and promote trust.

“I think now we’ve reached a tipping point, and the harms that have been caused by deepfakes have really demanded the necessity of intervention.”

The Liar’s Dividend

As humans, once we’ve been fooled, we become cynical and question what we see. This brings us to the argument that deepfakes could undermine integrity, instil widespread cynicism, and erode the foundation of democracy. In 2018, after several deepfake and manipulated videos of US politicians began to surface, US Congress warned that the technology was “blurring the line between fact and fiction” and that it “could undermine public trust in recorded images and videos as objective depictions of reality.”

Since then, developers have had to race to keep up with deepfakes. While there have been many public and private efforts to increase the efficiency of deepfake detectors, there are, unfortunately, those who prey on public anxieties about the prevalence of deepfakes to erode trust in information. By making false claims and discrediting genuine information, these individuals exploit the so-called “liar’s dividend.” 

Robert Chesney, Dean of the University of Texas Law School, and Danielle Citron, Law Professor at the University of Virginia School of Law, first introduced the concept of the “liar’s dividend” in their 2018 article Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. It refers to the phenomenon where, as the public becomes more aware of the potential for convincing audio and video fakes, some individuals may attempt to evade responsibility for their actions by falsely labelling authentic content as deepfakes.

Chesney and Citron say that some politicians relentlessly encourage distrust on television and radio. The mantra “fake news” has become an instantly recognised shorthand for many propositions about the supposed corruption and bias of a wide array of journalists and a valuable substitute for argument when confronted with damaging factual assertions. “It is not difficult to see how ‘fake news’ will extend to ‘deep-fake news’ in the future,” Chesney and Citron say.


“As deepfakes become widespread, the public may have difficulty believing what their eyes or ears are telling them—even when the information is real. In turn, the spread of deepfakes threatens to erode the trust necessary for democracy to function effectively.” 

“The combination of truth decay and trust decay accordingly creates greater space for authoritarianism.”

It’s a bleak outlook, but while deepfakes have the capacity for deception, it’s also important to recognise that technology can be harnessed for creative, educational, and even therapeutic purposes for the betterment of society.

How can deepfakes be used for good?

Zou and Sun highlight the benefit of deepfakes as creative tools for everyday people, artists, and filmmakers.

Sun says that those who are embracing the technology have been able to use it to create videos, images, or whatever they need at a rapid pace, expanding the productive capabilities of creators. As a videographer and editor, I agree with this. Generative AI has made shooting and editing videos much more efficient. From stabilising shaky footage to cropping and artificially filling spaces in a scene, it has dramatically reduced the need for reshoots.

Zou says that as an educator, her students live in a wholly digital era now, and it would be beneficial for her to create an avatar of herself to produce engaging educational content that her students can access on demand.

“I think some students would really like it that they can access an AI Professor anytime, day or night,” Zou says.

There are numerous benefits for healthcare as well. GANs offer a ground-breaking approach to enhancing medical imaging, particularly for X-rays and body scans. By leveraging their ability to generate highly realistic images, GANs can aid in creating detailed and accurate representations of organs for surgical planning and simulation. This can provide surgeons with a more comprehensive understanding of a patient’s anatomy, potential complications, and surgical approaches, ultimately improving the precision and success of procedures. Furthermore, GANs can also be used to generate synthetic medical images for training purposes, addressing the scarcity of real patient data and aiding in the development of advanced diagnostic tools.

As a programmer, Sun is familiar with data scarcity and appreciates AI’s potential to generate synthetic data. He explains that synthetic data involves using AI to create fake data sets that can be used to train an AI system.

Such data sets would sidestep the privacy and copyright issues attached to the real-world data that current AI systems are trained on. He says the big obstacle when training AI systems is that finding and obtaining real-world data can be challenging.

“If you do get it [the data], they either have personal information which has privacy implications, or they contain third party IP which then has to get the consent and licences,” Sun says.

“What deepfakes have done, and generative AI have done in general, is that you can create fake data sets that don’t contain any personal information and don’t contain any third-party information, and can use that to train your AI system.”

He gives an example of creating a facial recognition system that requires a lot of facial data to train. “If I were to go onto social media and get a bunch of real people’s photos, it’ll be a huge breach of privacy law if I don’t follow the relevant procedures and steps. But with deepfakes, I can create 5000 or a million fake faces that are not reflective of real-life people. They’re non-existent, but they look real, and they’re good enough to train my facial recognition system.”
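A minimal sketch of what Sun describes might look like the following. It is purely illustrative (it assumes PyTorch, and the untrained network below is a stand-in for a properly trained face generator), but it shows how a synthetic data set can be produced without collecting a single real person’s photo.

```python
# Illustrative synthetic-data sketch (PyTorch assumed). The "face_generator"
# here is an untrained stand-in; in practice it would be a trained generative
# model capable of producing realistic faces of people who do not exist.
import torch
import torch.nn as nn

latent_dim = 64
face_generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),   # decode to a small RGB "image"
)

def synthetic_faces(n):
    # Sample random latent codes and decode each one into a synthetic face image.
    z = torch.randn(n, latent_dim)
    return face_generator(z).view(n, 3, 32, 32)

# Build a data set of 5,000 faces that correspond to no real person.
dataset = synthetic_faces(5000)
print(dataset.shape)  # torch.Size([5000, 3, 32, 32])

# These images could then be used to train a downstream system, such as a
# face-recognition model, without touching anyone's personal data.
```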

As technology advances, the line between reality and fabrication blurs, leaving us grappling with the implications for trust, communication, and democracy. Pursuing AI supremacy fuels an arms race, but we must remember that technology is a tool, not a master, and we can’t forget the underlying human effects. Deepfakes possess the potential for both harm and good, but it’s up to us to harness it and point it in the right direction.