
What do Elon Musk, Pope Francis, Mark Zuckerberg, Donald Trump and Jeff Bezos have in common? Beyond international renown, each of these men has been the subject of deepfake imagery. On 26 June, the Danish government announced amendments to its copyright law that provide a potential blueprint for Europe and beyond.

The Ministry of Culture (Kulturministeriet) secured bipartisan agreement to submit a proposal to amend the current law, strengthening the capacity of individuals to take legal action if their voice or image is depicted in a deepfake. The amendments proposed in the bill, yet to be consulted upon, address “realistic, digitally generated imitations” of an artist’s work, or of a person’s face, body or voice, that are shared online without consent. The law is designed both to enable individuals to request that online platforms remove deepfake content depicting their image, and to allow artists to seek compensation.

Caricature, parody and satirical imagery and audio remain permitted: section 24 b of the Danish Copyright Act allows works protected by copyright to be used for satire, parody and caricature.

Tech platforms such as Meta, X, Instagram, TikTok and YouTube could be liable for “severe fines”, according to Danish culture minister Jakob Engel-Schmidt. He told the Guardian: “In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI… Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”

According to a review by the identity verification service Sumsub, deepfake fraud increased more than tenfold between 2022 and 2023. Deep Media, a media intelligence company contracted to the US Department of Defense among others, reported that 500,000 video and audio deepfakes were shared on social media in 2023 alone.

Deepfakes can cause more than psychological and reputational injury. They are used to blackmail and defraud individuals, whether through the threat of publishing deepfake pornographic material or by imitating an authority or a known individual to extract payments or sensitive information. In 2024, UK engineering group Arup lost A$38.23 million after fraudsters used a digitally cloned version of a senior manager to order financial transfers during a video conference, according to the Financial Times.

The current Danish Copyright Act provides that copyright belongs to the person who creates a literary or artistic work, or to the person to whom the copyright has been transferred. The new bill, which was submitted for consultation on 7 July, introduces provisions in the Act that prohibit the sharing of realistic digital reproductions of personal characteristics without consent.

Under section 65 of the Danish Copyright Act, literary or artistic work by a performing artist is already protected, meaning that performances are prevented from being recorded or made available to the public without consent for a period of 50 years after they took place. The new bill extends the protection to include artistic performances other than of literary or artistic works.

The protection measures for artists have also been expanded through a new section 65 a of the Act, which prohibits realistic, digitally generated imitations of performances from being recorded and shared without consent; the protection runs for 50 years from the year of the performing artist’s death.

For individuals, a new section 73 a of the Danish Copyright Act addresses realistic audio and visual imitations. It consolidates elements of existing statutory and non-statutory rules and legal principles found in the Danish Criminal Code, the Danish Marketing Practices Act, and the GDPR. The protection period is set at 50 years from the year of death of the person being imitated.

The amendments proposed in this bill do not directly provide for compensation or imprisonment, but they give individuals and performing artists a legal basis to demand that illegal digital imitations be removed from social media and other platforms. Parties can then seek damages and compensation under the general rules of Danish law. Under the European Union’s Digital Services Act (EU DSA), a platform that fails to remove illegal content after receiving a notification may face financial consequences.

Critics have pointed to the limited reach of the protective measures proposed by the bill. The laws apply only in Denmark, so illegal deepfake content could remain accessible from other countries even where the same content has been made unavailable to users accessing social media platforms from within Denmark.

The timing of this bill is not coincidental. Denmark holds the Presidency of the Council of the European Union from 1 July to 31 December 2025, which the Ministry of Culture describes as “a major and significant task, with Denmark steering the political agenda and driving the negotiations in Brussels. In the field of culture, the ambition is to make media and culture cornerstones of the defence of European democracies.”

Both ambitious and agenda-setting, the proposed measures have won Denmark international attention for pioneering legal protections that build upon landmark digital legislation already enacted in the EU, such as the DSA.

In her opening statement to the Senate Standing Committee on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, eSafety Commissioner Julie Inman Grant pointed out that deepfake detection tools are not keeping pace with the technology used to create and share this material. Open-source AI apps are largely free and simple to use, and can create deepfake image-based abuse material and realistic synthetic child sexual abuse material. In Inman Grant’s view, companies could be doing more to reduce the risk of their platforms being used to generate damaging content.

Australians whose images or videos have been altered and posted online can contact eSafety for help to have them removed. According to their site, “eSafety investigates image-based abuse which means sharing, or threatening to share, an intimate photo or video of a person online without their consent. This includes intimate images that have been digitally altered like deepfakes.”