From promise to policy: how EU plans to tackle disinformation structurally

Below is the first part of a two-part blog series written by Tijn Iserief, Consultant Privacy & Data Protection at Lex Digitalis.

May 27, 2025


Introduction

Whoever determines what you see also shapes what you think. In an online world overflowing with content, algorithms increasingly determine your view of reality. In an earlier article on this website, "From disinformation to dystopia", we showed how your picture of reality becomes dangerously distorted when false or misleading information is given free rein. The European Union is now taking steps to counter this. One example is the "Strengthened Code of Practice on Disinformation".[1] This Code was recently integrated into the Digital Services Act (DSA).

The integration of the Code into the DSA means that platforms can no longer address disinformation on a purely non-committal basis. The Code combines self-regulation with the hard legal framework of the DSA. That combination promotes transparency, freedom of choice and user control. The emphasis on explanations of, and alternatives to, recommendation algorithms in particular gives users more control, while education and collaboration with fact-checkers strengthen media literacy. Overall, Chapter V of the Code provides a concrete complement to the DSA framework.

Where the DSA addresses risks at the platform level, the Code makes clear how those risks can also be mitigated at the user level. This is done not through censorship, but by increasing users' autonomy and media literacy. Among other things, the measures seek to strengthen user awareness, choice and participation.

What does this article cover?

This article provides an overview of the legal and policy framework for addressing disinformation. It discusses what exactly the Code entails, how it is linked to the DSA, why no legal definition of disinformation is included, and how the Code compensates for this with practical measures. It shows how voluntary self-regulation is gradually evolving into verifiable digital governance.

What is the Code of Practice on Disinformation?

The original EU Code of Practice on Disinformation from 2018 is a form of self-regulation without binding legal status. The document was the result of voluntary collaboration between tech giants such as Facebook, Google and X, the former Twitter. These companies promised greater transparency about ads, cooperation with fact-checkers and action against fake accounts. X withdrew from the Code in 2023.[2] The decision to withdraw highlighted the Code's non-committal nature.

A strengthened version was published in 2022: the Strengthened Code of Practice on Disinformation. This contains more extensive and concrete measures, such as better moderation of ads, access to data for researchers and more user control over recommendation systems.

Integration into the DSA

The DSA provides for the possibility of giving legal meaning to voluntary codes of conduct.[3] In February 2025, the European Commission recognized the strengthened Code as an official code of conduct.[4] Platforms that have signed the Code not only record what measures they take against disinformation; regulators such as the European Commission and national Digital Services Coordinators can verify that the measures are actually being followed. This makes the Code no longer a voluntary code of conduct, but a verifiable part of a broader governance framework.

The Commission may even propose measures in case of structural non-compliance.[5] An important detail: the Code applies only to platforms that have signed it. Platforms that withdrew, like X, are not formally bound by it. However, withdrawal does not exempt such platforms from the other DSA obligations aimed at curbing the spread of disinformation.[6]

Lack of definition of disinformation

The DSA does not contain a legal definition of "disinformation". This is not an oversight but a deliberate choice. The core reason is the constitutional tension with freedom of expression.[7] A legally binding definition of disinformation could lead to government control of truthfulness, which is problematic in democratic societies.[8] In certain cases, disinformation itself falls under the protection of freedom of expression.

Freedom of expression does not apply only to information or ideas that are favorably received, or regarded as inoffensive or as a matter of indifference; it also protects expressions that offend, shock or disturb. Such are the demands of pluralism,[9] tolerance and broad-mindedness, without which there is no democratic society.[10]

Incorrect or misleading statements are therefore protected as long as they are not unlawful. Particularly when factual claims are intertwined with opinions or worldviews, information may be "awful but lawful". The truthfulness of value judgments, for example in a political context, is not susceptible of proof; requiring such proof infringes freedom of expression.[11]

The Code affirms that fundamental rights must be fully respected in all measures taken to combat disinformation: "The Signatories are mindful of the fundamental right to freedom of expression, freedom of information [...], and of the delicate balance that must be struck between protecting fundamental rights and taking effective action to limit the spread and impact of otherwise lawful content".[12]

No legal definition, but practical countermeasures

The Code sidesteps the absence of a conclusive legal definition of disinformation by avoiding substantive assessments of content altogether. Instead of defining what disinformation is, it targets how disinformation spreads and the systems that enable it, through practical measures around dissemination, visibility and funding. It does so along six main tracks:
  • Removing financial incentives (demonetization): preventing disinformation from generating revenue through advertising.[13]
  • Transparency in political ads: better recognition of political ads by indicating sponsor, term and cost.[14]
  • Integrity of services: addressing manipulative techniques such as fake accounts, bots and deepfakes.[15]
  • Supporting researchers: better access to data for disinformation research, with attention to privacy.[16]
  • Strengthening fact-checkers: greater coverage across all EU countries and languages, fair compensation and better access to relevant information.[17]
  • Empowering users: tools for recognizing and reporting disinformation, better access to authoritative sources and a focus on media literacy.[18]

Conclusion

The strengthened Code of Practice on Disinformation constitutes a policy tool that focuses on limiting the spread of disinformation, without a substantive assessment of what is true or false. Instead, the focus is on transparency, limiting financial incentives, collaboration with external parties and the design of digital services. The legal embedding in the DSA increases the possibilities for monitoring and enforcement, although its application remains limited to Code signatories. This makes it a hybrid instrument that combines elements of self-regulation with public control. The extent to which this model is effective ultimately depends on the commitment of platforms to Code compliance. This will be discussed in the next article.

Read part 2 of Tijn Iserief's blog series here: 'Grip on the feed? Shared responsibility, but poor execution: users are on their own'

[1] The 2022 Code of Practice on Disinformation (hereafter: the Code). Link: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.

[2] Francesca Gillett, 'Twitter pulls out of voluntary EU disinformation code', BBC News. Link: https://www.bbc.com/news/world-europe-65733969.

[3] Article 45 DSA, recitals 88 and 104 DSA.

[4] The Code of Conduct on Disinformation, European Commission, Feb. 13, 2025. Link: https://digital-strategy.ec.europa.eu/en/library/code-conduct-disinformation.

[5] Article 45(4) DSA.

[6] Article 34 DSA, recitals 80, 83 and 84 DSA.

[7] Article 11 EU Charter.

[8] Center for Democracy & Technology, 'Chilling Effects on Content Moderation Threaten Freedom of Expression for Everyone'. Link: https://cdt.org/insights/chilling-effects-on-content-moderation-threaten-freedom-of-expression-for-everyone/.

[9] Pluralism is recognizing and respecting diversity of views and groups within a society.

[10] Handyside v. United Kingdom, ECHR December 7, 1976, no. 5493/72, §49.

[11] Lingens v. Austria, ECHR July 8, 1986, no. 9815/82, §46.

[12] The Code of Conduct on Disinformation, European Commission, Feb. 13, 2025. Link: https://digital-strategy.ec.europa.eu/en/library/code-conduct-disinformation.

[13] Chapter II Code of Conduct on Disinformation.

[14] Chapter III Code of Conduct on Disinformation.

[15] Chapter IV Code of Conduct on Disinformation.

[16] Chapter VI Code of Conduct on Disinformation.

[17] Chapter VII Code of Conduct on Disinformation.

[18] Chapter V Code of Conduct on Disinformation.