After a flood of negative reactions, McDonald's decided last month to take its Christmas commercial offline. The commercial features only AI-generated people, which did not go down well with the public.[1] Meanwhile, the proportion of 'fake' content in our daily information flow is growing rapidly. If this trend continues, within a few years the majority of online content will be 'manipulated'.[2]

My colleague Sander has previously explained the phenomenon of "voice clones" and assessed the extent to which, and under what conditions, their processing falls under the GDPR.[3] In the next two blogs, I (Femke Algera) will discuss the issues surrounding deepfakes[4] and how Dutch and European legislators are attempting to regulate this phenomenon. In this first article, I describe the risks of deepfakes and respond to the proposal by Member of Parliament Rosemarijn Dral to extend the Neighboring Rights Act so that it can also offer protection against deepfakes.
Manipulating content has been the norm on social media for years. Instagram and TikTok users have access to filters and tools that allow them to retouch photos and videos of themselves and others. With generative AI, this editing is becoming increasingly sophisticated. A striking example is Mia Zelu, the 'model' whose Wimbledon appearance went viral: she turned out to be entirely generated by AI.[5]
These types of "deepfakes" are also often used for political influence and disinformation. Consider the manipulated video in which President Zelensky seemingly called for surrender. Equally striking is the phenomenon of AI-generated 'witnesses': videos circulated of an AI-generated National Guard soldier named Bob who threatened to gas protesters; the soldier turned out not to exist.[6] Because such deepfakes use recognizable authority markers (in this case, a uniform), recipients perceive the content as coming from an authority figure.
Financial fraud and sexual deepfakes are also a major problem: it is estimated that more than 95% of deepfakes are pornographic in nature and generated without the consent of the person involved.[7] The consequence of all this is that it is becoming increasingly difficult to determine whether something is real or fake. This applies to social media users who get their news from the internet and, apparently, to those who unsuspectingly watch a McDonald's commercial on television. This is also a challenge for journalists and even judges. Deepfakes are increasingly being used as evidence.[8]
Dutch law already provides ways to combat problematic deepfakes, but there is no coherent framework: protection is fragmented. Criminal law protection exists only when deepfakes are used to commit a criminal offense, such as extortion, defamation, or sexual exposure.[9] The portrait right[10] could offer protection, but it only covers the recognizable image of a person. Unfortunately, a person's body and voice are also often used (and abused) to create deepfakes; in most cases, these would not be covered by portrait rights.
In October, a private member's bill was therefore published to create a new neighboring right with regard to deepfakes.[11] Earlier this year, Denmark also began regulating deepfakes by extending copyright to protect a person's appearance and voice.[12]
To understand what a neighboring right is, it is useful to have a basic understanding of how copyright works. Copyright protection works as follows: a composer who composes a piece of music can, thanks to copyright, determine who may exploit the work (for example, by charging a fee for its use). When an artist decides to use the composition in a performance, that artist acquires a neighboring right over that performance. That right therefore exists alongside the composer's copyright.
A number of lawyers are critical of the bill. Creating an intellectual property right implies, as the name suggests, that you own your appearance and voice, and ownership is transferable. Critics argue this would encourage the exploitation of AI-generated content.
Neighboring rights are often sold or licensed by artists. A record company or publisher could then, for example, more easily "trade" tracks. But is this possible with someone's appearance or voice? Lawyers warn of unfair situations. For example, a young model could 'sign away' her neighboring rights to a modeling agency. The agency would then have the exclusive right to decide on deepfakes of her face.[13]
But such issues are not new: they have long been a part of copyright law. In the music industry, market forces are simply a fact of life; creators work together with producers, labels, and other intermediaries. In exchange for the rights to the music, they offer financing, distribution, and professional support. These parties are often in a stronger position than the artist when it comes to taking action against intellectual property infringement. An extension of rights in the context of deepfakes therefore places the responsibility largely with these parties. Producers, labels, and (for example) modeling agencies will have to adapt their contracts accordingly and carefully handle the acquisition of rights relating to deepfake applications.
A neighboring right for deepfakes means that you can decide for yourself whether, and if so how, your voice or appearance may be used in fake content. I think that is a desirable development. After all, it creates a clear legal remedy for anyone who suddenly sees (or hears) themselves in content for which no permission has been given. The question remains whether this extension of neighboring rights will be the breakthrough in the fight against online deception. The protection mainly works ex post: by the time someone discovers that a deepfake has been made and goes to court, the content has usually already been widely disseminated. And just try tracking down the creator of such a deepfake.
Moreover, the protection only applies to situations in which a deepfake has been made of an identifiable person. The law thus addresses individual interests, not the public interest in not being exposed to fake content. Consider the aforementioned Mia Zelu or soldier Bob: deepfakes do not always imitate an existing person. In such cases, there is no rights holder who can invoke the neighboring right, while the misleading effect on the public certainly remains.
In recent years, the European Union has developed a comprehensive regulatory framework for new technological developments. In my next blog, I will explore the European legal remedies designed to combat the spread of deepfakes and online deception. I will also discuss enforcement initiatives and what these developments mean for your organization.
1 L. Verhagen, 'McDonald's takes AI Christmas ad offline after criticism', de Volkskrant, December 10, 2025.
2 Van der Sloot, Wagensveld, and Koops, 'Deepfakes: the legal challenges of a synthetic society', Tilburg University, November 2021, accessed via https://www.tilburguniversity.edu/sites/default/files/download/Deepfakes%20NL.pdf
3 https://www.lexdigitalis.nl/adele-zingt-een-nieuw-nummer-maar-ze-weet-van-niets/
4 In this article, I limit the definition of 'deepfake' to AI-generated or manipulated image, audio, or video material. Many texts online (such as blogs, marketing copy, and product descriptions) are also generated by AI; I will not consider these here.
5 De Telegraaf, 'Viral Instagram fashion turns out to be generated by AI', 2025.
6 The Observers, 'AI-powered disinformation spreads online amid Los Angeles protests', June 12, 2025.
7 Van der Sloot, Wagensveld, and Koops, 'Deepfakes: the legal challenges of a synthetic society', Tilburg University, November 2021, accessed via https://www.tilburguniversity.edu/sites/default/files/download/Deepfakes%20NL.pdf
8 L. Marshall, 'Deepfakes and AI in the courtroom: Report calls for legal reforms to address a troubling trend', CU Boulder Today, November 17, 2025.
9 Article 254ba of the Dutch Criminal Code (Wetboek van Strafrecht).
10 Article 21 of the Dutch Copyright Act (Auteurswet).
11 The proposal for the Neighboring Rights Act on Deepfakes of Persons.
12 NOS, 'In the fight against deepfakes, Danes are granted copyright on their own faces and voices', July 13, 2025.
13 E. Valk, 'Taking action against deepfakes is a good idea, but do so via portrait rights', NRC, July 29, 2025.
