The world’s leading digital media and regulatory policy journal

Regulatory challenges for deepfake technologies

In his new book, Regulating the Synthetic Society, BART VAN DER SLOOT explores the rise of generative artificial intelligence, its societal effects and the need to develop new regulatory standards. Here he draws attention to the challenges presented by one of its most concerning applications: the deepfake

A deepfake is content (video, audio or otherwise) that is wholly or partially fabricated by artificial intelligence (AI) or existing content that has been manipulated using AI. In just a few years, the underlying technology has advanced rapidly in terms of quality, speed of production and cost efficiency, so much so that humans are no longer able to distinguish fake from authentic content. There are already signs that people find the faces of non-existent people more trustworthy than those of real ones.1 Many deepfakes are used for relatively trivial purposes, but because a deepfake can show real people doing and saying things they never did or said, deepfakes can have significant personal and societal repercussions. In my book, Regulating the Synthetic Society, I map three challenges that the rise of deepfakes and related technologies pose to regulators.

AI-driven content in politics

AI technologies are used by politicians, political parties and their supporters as well as by their adversaries. Politicians have toured the country by hologram,2 created deepfakes of themselves while in jail to reach their voters,3 used AI to tweak their facial features to appear more relatable4 and employed deepfake technology to give a speech in a minority language the candidate does not actually speak.5 Although these uses mostly go against the terms and conditions of various products and services, there are no legal provisions explicitly establishing whether or not they amount to voter deception. Deepfake technology can also be used against politicians, for example using voice cloning to have a candidate say something compromising or damaging6 or creating entire fake media environments: a deepfake video of a political candidate doing something outrageous; fake news websites that seem to report on it; fake X accounts that discuss the fake video; fake Instagram accounts that generate memes using frames from the video, and so forth. Harmful use of AI by political adversaries comes both from domestic rival parties and from foreign powers. Countries like Russia, China and Iran are targeting the Global South with fake and manipulated news for a variety of reasons, such as influencing concrete decision making so that a Russian state-owned company gets a contract, influencing local elections to bring a China-friendly regime to power or influencing decisions at the international level by getting a country to vote in favour of lifting sanctions against Iran. Consequently, one threat of deepfake technology is that it can be used to undermine or influence the political process.

Online manipulation

The second challenge arises from the advance of chatbots, humanoid robots, augmented reality, virtual reality, deepfakes and other applications powered by generative AI, which is increasingly used to manipulate online material. Generative AI is used not only to deliberately mislead people but also to generate beautiful atmospheric images of artificial forest cabins on Facebook or photos of non-existent children who appear to be doing something spectacular, material that cannot be distinguished from the authentic. AI-operated photo cameras increasingly work on the basis of blueprints: a burning forest might consequently still look green in the photo because the AI ‘knows’ that a forest is green, and the moon may appear crystal clear even on a foggy evening.7 In video calling, high audio registers are filtered out as standard and people’s skin tones are softened. Because deepfake technology is freely accessible to anyone, citizens are increasingly generating deepfakes for everyday and satirical purposes. Taken together, these trends (and many more) may mean that in a few years’ time, more than 90 per cent of all digital material will be AI-generated or AI-manipulated.8 This would push us towards a post-truth society, with considerable repercussions for personal and societal interests. Media companies, for example, will need to establish whether material is authentic, as will courts of law. Currently, in most jurisdictions, evidence introduced in legal proceedings is deemed authentic unless there are counter-indications. That assumption may need to be reversed: evidence has probably come into contact with AI, and the questions should be who made which manipulations, for what purpose, and whether they are relevant to the case at hand.9 AI detection programs can filter out only half of the AI-generated or -manipulated material and often give only an ‘authenticity’ percentage, such as ‘the likelihood that this material is authentic is 67 per cent’. This may create difficulties for the media, judges and other parties for whom verifiable information is quintessential.

The rise of deepfake porn

Third, the vast majority of all deepfakes, according to some reports more than 95 per cent, consist of non-consensual deepfake porn of women.10 As a result, the female body is further sexualised, unrealistic beauty ideals are reinforced and women are stigmatised. ‘Slut shaming’ and misogynistic comments are already the order of the day offline and especially online, something that deepfake technology will only exacerbate. Female celebrities and politicians are regularly targeted, undermining their credibility and causing reputational damage that has resulted in politicians resigning in order to protect themselves and their family members from harmful content.

Perhaps even more problematic, with deepfake technology democratised and available to anyone, every teenage boy can generate a fake porn video of a girl in class and distribute it privately, on social networks or specialised porn sites. This can have catastrophic effects on girls’ social status, perception of self and personal development.

Although it is clear that these three issues should be on the legislative agenda, good regulatory solutions are few and far between and often entail a choice between two evils. Regulating deepfake and other AI technologies used for political purposes, for example, raises the question of where the boundary between unproblematic and problematic manipulation should be drawn. Is faking a massive crowd at a rally so bad that it should be prohibited? Is altering one’s facial features to come across as more approachable so different from altering one’s micro-expressions in Photoshop? Even for deepfakes used against a political candidate, it is not always clear where the line lies between an innocent satirical video and election interference. Similarly, what counts as ‘fake’ and what as ‘real’ is often itself one of the dividing lines between voters of different political parties. Prohibiting ‘fake’ news thus easily becomes a power tool in the hands of the incumbent, allowing them to unduly limit freedom of speech, yet doing nothing leaves the door open to election interference.

Dilemmas for regulators

It is difficult to see how the rise of synthetic content, and with it a post-truth era, can be prevented, as so many AI-driven tools, products and services have been democratised and many have AI embedded in their functional design. Although there are several ways of addressing this issue, each raises its own dilemmas. For example, it has been suggested that hosting providers, platforms and content services should run AI detection programs to filter out AI-generated and AI-manipulated content. However, this would also block an enormous amount of legitimate and harmless content, as most content will be (albeit marginally) manipulated by AI, and even substantially AI-modified or AI-generated content is often legitimate. Another approach would be to rely only on watermarked content, at least when it is used by the media or in courtrooms; a watermark here is a logbook attached to a photograph, video or other material that shows what has been altered or manipulated, when and by whom. However, requiring a watermark could mean that potentially authentic material that could exculpate a person is declared inadmissible by a judge.

Finally, with regard to deepfake porn, stronger action may be considered given that the harm inflicted on women, and in particular young girls, can be catastrophic. However, making deepfake porn is already prohibited under criminal law. Given the enormous number of deepfake porn images and videos produced, it is almost impossible for law enforcement authorities to tackle the issue adequately, a problem aggravated by the fact that it is not always clear who made a video and that the services through which videos are published are often based in foreign jurisdictions. Even if the perpetrator is caught, which happens only seldom, the damage is already done. This is why, ideally, such material should be prevented from being produced in the first place. However, this would require a ban on technologies, apps and services, at least those explicitly advertised for producing sexual content.11 Banning technology is a radical measure, as it also disallows legitimate and positive use cases and forecloses the innovation that comes from serendipitous experimentation; moreover, prohibitions are often easily circumvented, especially in the online environment.

Bart van der Sloot’s book, Regulating the Synthetic Society: Generative AI, legal questions and societal challenges, is available in paperback or can be downloaded here.

Bart van der Sloot

Bart van der Sloot is associate professor at the Tilburg Institute for Law, Technology and Society at the University of Tilburg, Netherlands. He is editor in chief of the European Data Protection Law Review.

1 Nightingale SJ and Farid H (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 14 February.

2 Welch C (2014). Indian politician morphs into hologram to reach millions of voters. The Verge, 7 May.

3 Ray S (2023). Imran Khan – Pakistan’s jailed ex-leader – uses AI deepfake to address online election rally. Forbes, 18 December.

4 Duffy K (2024). AI in Context: Indonesian elections challenge GenAI policies. Council on Foreign Relations blogpost, 13 February.

5 Lyons K (2020). An Indian politician used AI to translate his speech into other languages to reach more voters. The Verge, 18 February.

6 Seitz-Wald A (2024). Democratic operative admits to commissioning fake Biden robocall that used AI. NBC, 25 February.

7 Vincent J and Porter J (2023). Samsung caught faking zoom photos of the Moon. The Verge, 13 March.

8 Schick N (2020). Deepfakes and the Infocalypse: What you urgently need to know. Hachette UK.

9 For example, see Cyfor Blog post: Deepfake audio evidence used in UK court to discredit father.

10 Ajder H, Patrini G, Cavalli F and Cullen L (2019). The State of Deepfakes: Landscape, threats, and impact. Deeptrace, September.

11 Cole S (2019). This horrifying app undresses a photo of any woman with a single click. 26 June.