Deepfake laws: global regulation of sexually explicit and criminal uses of deepfakes in the digital age

Amba Karsondas | 12 min read

If someone had said the word ‘deepfake’ just a decade ago, nobody would have known what they were talking about. The term hadn’t been coined and the technology as we know it hadn’t yet been created. Fast forward to the present day and it seems as though deepfakes are everywhere.

However, their explosive and widespread prevalence has highlighted some serious problems, such as the use of deepfakes for criminal offences and sexually explicit content. In response, regulatory bodies are beginning to pass laws to combat these issues, but they’re competing against the rapid evolution of the technology.

This article gives a snapshot of some of the laws around the world that aim to regulate the creation and deployment of deepfakes.

 

What are deepfakes?

Deepfakes are digitally altered photos, videos or audio that appear incredibly realistic. They’re designed to make it seem as though someone is saying or doing something that they never actually said or did.

The process of creating a deepfake involves feeding a computer program with existing images, videos or audio of a particular person. The program uses this data to analyse the person’s facial expressions, movements and voice patterns. Once the program has enough data, it can create new photos or videos of that person that appear real.
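To make that process more concrete, here is a minimal, illustrative sketch of the shared-encoder, two-decoder autoencoder idea behind many face-swap deepfakes. It assumes PyTorch is available, uses random tensors as stand-ins for aligned face crops, and is not any specific tool’s implementation; a real pipeline would also need face detection, alignment, far larger models and far more data.

```python
# Minimal, illustrative sketch of the shared-encoder / two-decoder autoencoder
# idea behind many face-swap deepfakes. Random tensors stand in for aligned
# face crops of two people; this is a toy example, not a production pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimiser = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # placeholder for face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder for face crops of person B

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared encoding.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The "swap": encode person A's frames, then decode them with person B's decoder,
# so B's face appears with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design choice is the shared encoder: because both decoders read the same latent code, that code ends up capturing pose and expression, while each decoder supplies its own person’s appearance.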

 

Why are deepfakes being regulated?

In some cases, deepfakes can be harmless. A video of your dog talking is bound to be entertaining. On a more serious note, they can be used in medical research, to help deliver educational lessons or to assist people in expressing themselves through avatars.

However, they can also be used for malicious purposes such as:

  • spreading misinformation
  • manipulating political events or speeches
  • stigmatising already marginalised communities
  • creating fake videos of people engaging in unethical or illegal activities
  • harassing or demeaning individuals
  • exploiting people, such as through the creation of revenge porn
  • impersonating influential figures to spread hate speech

Since deepfakes are so realistic, they’re often very difficult to detect. And the increasing availability of deepfake technology is making it easier for individuals with harmful intentions to create convincing fake content. There have been several stories of people falling victim to deepfakes, such as an employee who was tricked into sending $25 million to fraudsters. They’re also being used to generate explicit content of people without their consent, as well as to produce child sexual abuse imagery.

This poses a serious threat to society as it becomes harder to distinguish between what’s real and what’s fake. In turn, this raises concerns about the potential impact of deepfakes on our trust in media, politics and society as a whole.

As a result, regulatory bodies around the world are racing to introduce effective legislation – but this isn’t without challenges.

 

Why are deepfakes hard to regulate?

  • Definition – It’s difficult to draw a clear distinction between editing media using photo-editing tools (such as de-aging software) and using AI to generate realistic but fictional images.
  • Technological complexity – Deepfakes are created using advanced AI algorithms that can generate highly realistic videos or images. This technology is constantly evolving, making it tough for regulators to keep up with new developments.
  • Difficulty of detection – Deepfakes are becoming increasingly difficult to detect. Often, they’re almost indistinguishable from genuine videos or images. This makes it difficult to identify and remove deepfakes from online platforms.
  • Lack of universal standards – There are currently no universal standards or guidelines for the creation and dissemination of deepfakes. This lack of regulation makes it difficult to establish clear rules for how deepfakes should be used and shared.
  • Global nature of the internet – The internet allows deepfakes to be shared and disseminated across borders. Therefore, it’s challenging for national regulators to enforce deepfake laws on a global scale.

Here are some of the deepfake laws that have been passed so far.

 

Global deepfake legislation

Deepfake laws: United States

Federal laws

In the US, there are currently no federal laws that comprehensively regulate AI or deepfakes. There are some federal laws, such as the National AI Initiative Act of 2020, which regulate the use of AI in particular industries, but these have very limited applications.

 

State laws

As a result, some, but not all, states have attempted to fill the regulatory void by enacting state-level initiatives. Since each state passes its own laws, there is significant variation in what constitutes an offence, when the use of deepfakes is prohibited and the penalties for non-compliance.

Notably, California passed AB 602, which addresses non-consensual deepfake sexual content. The law, which went into effect in 2022, states that a depicted individual can take action against a person who:

  • creates and intentionally discloses sexually explicit material where the person knows or reasonably should have known the depicted individual did not consent to its creation or disclosure
  • intentionally discloses sexually explicit material that the person did not create if the person knows the depicted individual did not consent to its creation

This includes content which has been digitally manipulated. 

Earlier this year, the Colorado AI Act was signed into law. It’s the first law in the US to comprehensively impose obligations on developers and deployers of AI systems. “High-risk AI systems” and deepfakes are among the concerns addressed by the Act.

Other states, such as Louisiana and Florida, have criminalised deepfakes that show minors engaging in sexual conduct. States such as Oregon require a disclosure of the use of synthetic media in election campaign communications. Meanwhile, Tennessee and Mississippi have legislated more broadly against “the unauthorised creation and distribution of a person’s photograph, voice, or likeness”.

Alongside these laws which explicitly mention deepfakes, some states have existing defamation and privacy laws that may apply. For example, California’s Right of Publicity Law protects against the unauthorised use of an individual’s name or likeness, but only when it is used for commercial purposes. Though this may be relevant in some cases, it does not cover non-commercial uses of deepfakes such as revenge porn or election communications.

State-level defamation laws may also provide some recourse for victims of deepfakes. However, many defamation laws in the US rely on the victim being able to prove that the false information – in this case, the deepfake – has been presented as fact. Additionally, it should be noted that many defamation laws only refer to false statements, rather than false images or videos. Defamation laws also rely on the victim being able to demonstrate that their reputation has been harmed.

 

Deepfake laws: EU

This month, the EU’s AI Act entered into force. Hailed as a landmark piece of legislation, the Act is the first of its kind in the world to address AI systems. The Act splits AI systems into four risk categories: unacceptable risk, high-risk, limited risk and minimal or no risk.

The AI Act does not ban deepfakes completely but instead places obligations on providers and users of AI systems. These include transparency obligations, which ensure that the origins of deepfakes are traceable. For example, providers must maintain records of their processes and data when generating deepfakes. Providers are also required to make users aware when they’re interacting with AI-generated content.

In certain high-risk contexts, such as those that have a “significant impact” on individuals, AI systems may be subject to more stringent legislation. One example is a prohibition on using deepfakes to facilitate illegal surveillance.

The AI Act applies to almost all sectors across the EU. It affects all parties involved in the development, usage, import, distribution or manufacturing of AI systems in the EU. However, some individual countries within the EU have also passed their own laws.

As in the UK, the GDPR plays a significant role in regulating deepfakes across the EU. If a person’s personal data, which includes images of them, is processed without their consent, this could be considered a violation of the legislation.

 

Deepfake laws: France

Alongside the various laws passed by the EU, France is one example of a country which has passed additional national legislation.

In May this year, France passed the SREN law, which supplements Article 226-8 of the French Criminal Code. The law aims to regulate the digital environment, protect children from online pornography and combat online fraud. As part of the SREN law, France explicitly prohibits the non-consensual sharing of deepfake content unless it’s obvious that the content is artificially generated.

France has also updated its Criminal Code to include clauses about deepfakes. Though revenge porn was already outlawed in France, the new provisions specifically criminalise the sharing of non-consensual pornographic deepfakes.

 

Deepfake laws: United Kingdom

As it stands, the UK does not have any specific laws about deepfakes. However, some existing laws may be applicable to resolve disputes concerning deepfakes.

Under UK GDPR and the Data Protection Act 2018, a person could claim that their personal data has been misused. ‘Personal data’ means any information relating to an identified or identifiable natural person. Therefore, an image of a person is classed as personal data because a person can be identified through it.

If this data has been ‘processed’, then a deepfake could fall foul of this data protection legislation. ‘Processing’ is defined as any operation or set of operations which is performed on personal data, including “adaptation or alteration”.

Alternatively, a person whose likeness is used to create a deepfake may feel that their reputation has been damaged. If so, they may be able to sue the person responsible under the Defamation Act 2013. The ‘person responsible’ could be the creator of the deepfake but also the editor, publisher or printer. However, since deepfakes are often posted anonymously, finding the person responsible is often a very difficult, if not impossible, task. Additionally, under the Defamation Act 2013, a claim can only be brought if “serious harm” is caused to the individual’s reputation.

The Online Safety Act, passed in 2023, contains provisions to tackle revenge porn. It makes the sharing of non-consensual intimate images an offence. This includes images which have been digitally altered.

96% of deepfake material online is pornographic, and the overwhelming majority of non-consensual pornography targets women and girls. The newly elected UK government has announced that they will “provide a stronger, specialist response to violence against women and girls”. They have also announced that they are working on legislation to regulate AI models to improve AI safety.

It’s worth noting that at the time of writing, the new government has only just been formed. Therefore, we’ll be keeping a close eye on developments and will update this blog as more information is given.

 

Deepfake laws: Australia

Australia has not passed any laws specifically about deepfakes. However, in June this year, it introduced the Criminal Code Amendment (Deepfake Sexual Material) Bill. The bill aims to make it an offence to share sexual material which depicts (or appears to depict) another person when:

  • the person knows the other person does not consent to the transmission; or
  • the person is reckless as to whether the other person consents to the transmission.

It applies to both material which has been digitally altered, such as deepfakes, and unaltered material. This is part of Australia’s wider agenda to tackle gender-based violence.

Additionally, victims of deepfakes may be able to seek redress under Australia’s defamation laws. Though many defamation laws around the world tend to focus on spoken or written words, Australia’s legislation recognises that images, including those which have been digitally altered, can also be defamatory. However, it should be noted that for defamation cases, Australian courts don’t often grant injunctions. Therefore, though victims may be able to receive compensation, it’s much more difficult to have the deepfakes removed.

 

The war against deepfakes

We’re at a crucial point where legislative bodies across the world are introducing and passing laws to address the challenges posed by deepfake technology. And the current trajectory suggests that we’ll continue to see fast-paced development of legislation globally.

It’s clear that we’ve got a long way to go to protect those who are victims of deepfakes. But by establishing clear guidelines for the creation and sharing of deepfakes, regulators can take steps to protect individuals and society as a whole against such a rapidly evolving sector.

Alongside regulation, we believe that user-generated content platforms should gain explicit consent from their users at the point that they upload a piece of content. As part of this process, platforms should ensure that the person uploading the content is the same person in the image.

If a person attempted to upload a deepfake image or video of someone else, the person depicted in the content would not have given their consent and the upload could be blocked. This would help prevent the spread of misinformation and could help limit non-consensual intimate imagery of real people.
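As a rough illustration of that idea, here is a hedged sketch of what such an upload check might look like. The helper faces_match and the UploadRequest fields are hypothetical placeholders rather than a real SDK or our actual implementation, and a production system would also need liveness detection, audit logging and human review.

```python
# Illustrative sketch only: a hypothetical upload check that requires explicit
# consent and verifies that the uploader is the person shown in the content.
# `faces_match` is a placeholder, not a real SDK call.
from dataclasses import dataclass

@dataclass
class UploadRequest:
    uploader_selfie: bytes  # live capture of the person uploading
    content_image: bytes    # the image (or a video frame) being uploaded
    consent_given: bool     # explicit consent collected at the point of upload

def faces_match(image_a: bytes, image_b: bytes) -> bool:
    """Hypothetical face-comparison helper; a real platform would call a vetted
    face-matching service and apply an appropriate similarity threshold."""
    raise NotImplementedError

def allow_upload(request: UploadRequest) -> bool:
    if not request.consent_given:
        return False  # no explicit consent, so reject the upload
    try:
        # Block the upload if the uploader's face does not match the person shown.
        return faces_match(request.uploader_selfie, request.content_image)
    except NotImplementedError:
        return False  # fail closed if the face-matching check is unavailable
```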

If you’d like to know more about how we’re helping to detect deepfakes, please get in touch or take a look at our white paper.

Please note this blog has been prepared for information purposes only. You should always seek independent legal advice.
