The Dutch cabinet sees no need to amend legislation to combat deepfakes following the Danish example, which would give citizens "copyright" on their own faces and voices. Outgoing Justice and Security Minister Foort van Oosten confirmed this position in answers to written questions from the House of Representatives.
The questions were submitted by members Van der Werf (D66), Six Dijkstra (New Social Contract), Kathmann (GroenLinks-PvdA) and Michon-Derkzen (VVD) following a report on the Danish approach.
According to the cabinet, Dutch law already offers a wide range of instruments, civil, criminal and administrative, to deal with unwanted deepfakes. The minister therefore sees no reason to amend legislation. Earlier findings based on WODC research had already confirmed that the existing legal framework offers sufficient possibilities to address undesirable use of deepfake technology.
Moreover, using a copyright in one's voice or image to prevent publication runs counter to the core idea of copyright, which is to enable the creator of a literary, scientific or artistic work to exploit that work.
Available legal instruments include:
- The General Data Protection Regulation (GDPR, known in the Netherlands as the AVG): it applies whenever personal data, such as facial features or a voice, are processed in deepfakes. Creating or distributing deepfakes without a valid processing basis, especially with sensitive (sexual) content, violates the GDPR in practice.
- Criminal law: Article 254ba, introduced by the Sexual Offenses Act (effective July 1, 2024), criminalizes making or distributing sexual imagery of a person without consent, regardless of whether the material was created manually or generated with AI.
- Portrait right: Section 21 of the Copyright Act prohibits publishing a portrait made without commission (as is typically the case with deepfakes) if a reasonable interest of the person portrayed opposes publication.
Big tech companies have a responsibility to stop illegal content. The Digital Services Act (DSA) aims to address the distribution of illegal content and systemic risks. Very large online platforms and search engines are required to conduct risk assessments and take appropriate measures against systemic risks, such as harmful deepfakes.
Platforms can already be held liable if they fail to act on reports of illegal content, provided certain conditions are met. If a platform systematically fails to handle reports, national regulators or the European Commission can enforce compliance, for example by imposing fines.
The government recognizes the major impact of deepfakes and online abuse on victims. The Stop Online Shaming foundation, however, criticizes that the police, the judiciary and the Dutch Data Protection Authority (AP) "apparently do not have the capacity and priority" to deal with disseminators.
The government points out that effective enforcement is hampered by the fact that providers of deepfake websites are often not transparent about their identity, or are not based in the Netherlands or even in the EU.
To support citizens and effective enforcement, the Ministry of Justice and Security funds the Helpwanted hotline run by the Offlimits foundation. Offlimits received Trusted Flagger status from the Netherlands Authority for Consumers and Markets (ACM) on July 15, formalizing its role in reporting illegal content under the DSA.
The policy response to the March 21 report "Online Sexual Violence," which reflects on the capacity and priority of, among others, the police and the Public Prosecution Service (OM), is expected in the fall.