This is the second part of a blog series written by Tijn Iserief, Consultant Privacy & Data Protection at Lex Digitalis. Read the first part here: "From promise to policy: how the EU plans to tackle disinformation structurally".
What does this article cover?
This second article focuses on how the Code empowers users in their role as critical recipients of online information. We discuss the Code's measures on "empowering users," which include safe design, notification and objection procedures, educational elements and the provision of authoritative context. We also look at the extent to which major platforms take these obligations seriously.
Autonomy and media literacy of users
The Code contains obligations that directly or indirectly enable users to handle online information independently and critically. To this end, the Code sets out various measures that enhance user autonomy. Accordingly, the chapter is titled "Empowering Users."[1] These include transparency, freedom of choice and education as means of promoting users' autonomy and media literacy.
These measures complement general principles in the DSA, such as transparency about recommendation systems[2] and the obligation to conduct a systemic risk assessment.[3] The Code's measures, however, go a step further in how concretely they are elaborated and implemented.
Safe design
A key tenet of Chapter V is the idea of safe design: digital services should be designed so that users are not unknowingly exposed to misleading information. Signatories design their services in such a way that users can interact with information in a transparent and safe manner.[4] Platforms should give users insight into how recommendation systems work and allow them to influence them or choose alternative options, such as chronological feeds.[5]
The DSA already requires very large online platforms (VLOPs) to provide some transparency about recommendation systems, including disclosure of key parameters and user options, and the ability to select and change preferred options at any time.[6] The Code requires not only VLOPs, but also other online platforms that have signed on to the Code, to provide users with greater transparency. This includes giving users a better understanding of how recommendation systems work. In addition, these platforms must allow users to adjust their preferences for algorithms.
Disinformation flagging
Reporting mechanisms for disinformation are also elaborated further. Where the DSA requires platforms to have reporting procedures for illegal content,[7] the Code establishes additional requirements for legal but harmful disinformation. Users have the option to specifically flag disinformation as such, with different categories to clearly distinguish between types of misleading content.[8]
This goes beyond the neutral reporting buttons required by the DSA, which, after all, target illegal content.[9] The Code thus increases users' ability to act in situations in which content, while not illegal, is misleading or manipulative.
Objection mechanisms
Furthermore, the Code provides a transparent appeal mechanism for when user content is removed or flagged. Users must be informed of the reason for the removal or demotion of content, and must have access to a system by which they can challenge this decision.[10]
The DSA already requires platforms to establish an internal complaint-handling system.[11] The DSA also requires platforms, when content is moderated, to inform users of the reason and of the opportunity to object.[12] What the Code adds is that this obligation explicitly extends to non-illegal but potentially harmful content, such as disinformation.
In addition, the DSA requires platforms to inform the user of the existence and possibility of objection.[13] The Code adds that the user should be informed of the exact steps or design of the objection process.[14] In doing so, the Code concretizes the requirements of the DSA in operational terms.
Media literacy
Another important addition in the Code concerns support for media literacy. The DSA does not explicitly name the concept of media literacy in its legal text, although some of its risk mitigation measures can indirectly contribute to it.[15] The Code goes a step further by explicitly requiring platforms to strengthen users' critical thinking skills.[16] For example, it calls for collaboration with experts to develop media literacy initiatives and educational activities such as campaigns on disinformation.[17]
In addition, the Code encourages the direct inclusion in the service of tools that promote media literacy, such as alerts for questionable content, explainer videos on how algorithms work, or interactive quizzes.[18] In doing so, the Code concretizes the commitment to actively support users in recognizing and understanding online disinformation.
Authoritative sources
Finally, the Code calls on platforms to actively direct users to authoritative sources. Services should offer users alternative or additional information when they encounter potentially misleading messages.[19] Consider displaying informational panels, source citations or references to independent fact-checkers.
The DSA does not explicitly require such corrective context to be offered. Although the DSA requires VLOPs to implement appropriate risk mitigation measures, it largely leaves the interpretation of these measures to the platforms themselves.[20] The Code fills this gap by encouraging platforms to proactively provide context to users. The goal is to support users in making informed choices.
Implementation by platforms?
Between 2022 and 2025, the four largest platforms (Google, Microsoft, Meta and TikTok) sharply scaled back their ambitions within the Code.[21] These platforms have withdrawn from commitments to varying degrees. Most withdrawals involve measures around fact-checking, political ads and media literacy.[22]
This trend is at odds with the Code, which is instead committed to user empowerment and responsible design of recommendation systems. This affects users' ability to guard against misinformation and improve their skills in reviewing digital content.
LinkedIn
LinkedIn, part of Microsoft, withdrew completely from all fact-checking measures, despite labeling election-related disinformation a "high risk" in its own risk analysis.[23] LinkedIn also dropped measures that would give users more control over the origin or reliability of digital content. According to the platform, the terms "media literacy" and "safe design" are too vague. As a result, users lack support in assessing the reliability of shared information.[24]
Ironically, LinkedIn itself became embroiled in a high-profile disinformation case last year. As the BBC reported, a LinkedIn post played a key role in spreading false information about the fatal stabbing at a children's dance class in Southport, United Kingdom, on July 29, 2024. The post falsely claimed that the suspect was an illegal immigrant. After it went viral on other platforms, the false claim contributed to riots in England and Northern Ireland.[25]
YouTube and Google Search
YouTube and Google Search have ended all their commitments to fact-checking and user education. Google argues that its existing measures are more effective, but names no concrete alternatives.
Transparency on recommendation systems was also scaled back: the company withdrew from the commitment to disclose the key parameters of its recommendation systems.[26] The verifiability of algorithmic recommendations thus decreases. The absence of clear tools for users to verify information negatively impacts their ability to make informed choices and protect themselves from being misled.
Facebook and Instagram
Facebook and Instagram, both part of Meta, retained most of their fact-checking measures, but withdrew from commitments aimed at informing users in the case of flagged disinformation. For example, Meta stopped offering contextual explanations for certain forms of flagged content. As a result, users receive less support in independently assessing the reliability of information they encounter.[27]
In addition, Meta has stopped participating in a number of initiatives that promote transparency and external review, without clarity as to whether these have been replaced by useful alternatives for users. This limits users' ability to recognize deceptive patterns and engage consciously with digital content.[28]
TikTok
TikTok has backed away from a host of measures related to fact-checking and supporting users' media literacy. Educational content, context for highlighted posts, and source verification tools have largely been removed. While some fact-checking obligations have been retained, TikTok makes them contingent on the behavior of other platforms.[29]
This leaves users without a structural means to independently review content or to adjust recommendation systems. The reliance on external factors undermines effectiveness and makes it harder for users to check for themselves whether the information they encounter is reliable.
And now? What can you do?
Where platforms are withdrawing from various commitments, others are taking the initiative to set up fact-checking websites. These offer users various opportunities to check for themselves the extent to which reporting is based on facts. In this article, we highlight some examples. We start with some fact-checkers from the United States and then turn to fact-checkers that are part of the European Fact-Checking Standards Network.
PolitiFact.com
PolitiFact is known as a leading website for checking political statements. It is affiliated with the International Fact-Checking Network (IFCN), a global partnership that promotes quality and independence in fact-checking.[30] The IFCN was founded by the Poynter Institute, a renowned American center for journalistic ethics and training. The IFCN has been nominated for the Nobel Peace Prize.[31] According to the Berkeley Library of the University of California, PolitiFact was awarded a Pulitzer Prize for the thoroughness with which it investigates facts.[32] Especially during election time in the US, PolitiFact is a frequently consulted resource. Its well-known "Truth-O-Meter" indicates at a glance how truthful a statement is. A team of experienced journalists carefully examines each claim and backs up its findings with clear sources, so readers can check for themselves whether claims are true.
Snopes
According to the library of the College of Staten Island (part of The City University of New York), Snopes is among the best-known and most reliable fact-checking websites.[33] Snopes is a member of the IFCN. The platform is known as the place to debunk bizarre stories, urban legends and dubious rumors. Whereas Snopes initially focused on puncturing modern folklore and Internet myths, today it also covers news, politics and entertainment. A dedicated editorial team carefully assesses the factual accuracy of a variety of claims and news stories.
FactCheck.org
The Annenberg Public Policy Center at the University of Pennsylvania runs FactCheck.org, a nonprofit and independent initiative.[34] FactCheck.org is also a member of the IFCN. The organization focuses on debunking viral claims, ranging from statements made in election debates to misleading advertisements on television. It also scrutinizes political speech, which is why FactCheck.org is often mentioned in the same breath as leading sites such as PolitiFact. Thanks to clear and accessible explanations on the site, FactCheck.org helps the public better recognize disinformation and contributes to media literacy. The organization reduces complex topics to understandable information for a wide audience.
Reuters Fact Check
Reuters Fact Check adheres to the guidelines of the IFCN. Like other fact-checkers, Reuters Fact Check checks statements made in the news and on social media, especially by public and political figures.[35] The fact-checkers at Reuters use thorough journalistic methods: they contact primary sources where necessary or consult experts to carefully verify claims. Reuters has an international reputation as a reliable and independent news organization.
Ground News
Ground News is an independent media platform that focuses on media literacy and exposing news bias. Instead of doing fact-checks of its own, Ground News compares how the same news story is covered by different media outlets, ranging from left to right, national and international. The platform shows what political angle a source may have, whether a story is underexposed, and whether it is largely ignored by certain media outlets. A notable feature is the so-called "Bias Distribution": a visual representation of how left-, center- and right-leaning news organizations report on the same issue. Ground News also offers a feature that shows whether you, as a reader, are mostly reading sources from one camp. It is a tool for escaping your information bubble.
While Ground News is not a fact-checker in the traditional sense, it helps combat disinformation by making context, perspective and critical source awareness central. The platform aims to make users aware of their news consumption, helping them form a more complete and balanced picture of reality.
European fact-checkers
The European Fact-Checking Standards Network (EFCSN) assesses fact-checkers for compliance with the European Code of Standards for Independent Fact-Checking Organizations. The EFCSN issues a certificate to organizations that pass an audit by two independent reviewers. The certificate is valid for two years, and fact-checkers must be re-assessed when it expires in order to retain it. Membership in the EFCSN is open to organizations with a significant focus on Council of Europe member countries.[36] The member fact-checkers operate in several languages, but their pages can easily be translated into Dutch.
Conclusion
The strengthened Code of Practice on Disinformation represents an important step toward greater transparency and control for users of major online platforms. By emphasizing autonomy, media literacy and responsible algorithms, the Code contains concrete starting points for making users more resilient to disinformation.
In practice, however, large platforms are increasingly withdrawing from precisely those parts of the Code that support users. Measures aimed at education, insight into recommendation systems and source verification are often dismissed as too vague or too burdensome. This puts the empowerment of users at risk, precisely at a time when digital information is becoming increasingly decisive for public opinion formation. In a digital democracy, responsibility is a shared task, but only if that responsibility is actually taken.
Yet citizens are not empty-handed. They can take matters into their own hands by using reliable fact-checking platforms such as PolitiFact, FactCheck.org, Reuters Fact Check, Snopes or Ground News. In this way, they can better inform themselves, recognize disinformation and become more aware of their news sources.
[1] Chapter V Code of Conduct on Disinformation.
[2] Article 27 DSA.
[3] Article 34 DSA.
[4] Commitments 18 and 19 Code of Conduct on Disinformation.
[5] Measure 19.2 Code of Conduct on Disinformation.
[6] Article 27(1) and (2) DSA.
[7] Article 16 DSA.
[8] Commitment 23 Code of Conduct on Disinformation.
[9] Article 16 DSA.
[10] Commitment 24 Code of Conduct on Disinformation.
[11] Article 20 DSA.
[12] Article 17 DSA.
[13] Article 16(5) and Article 20 DSA.
[14] Measure 24.1 Code of Conduct on Disinformation.
[15] For example, Article 35(1)(i) DSA.
[16] Commitment 17 Code of Conduct on Disinformation.
[17] Measures 17.2 and 17.3 Code of Conduct on Disinformation.
[18] Measure 17.1 Code of Conduct on Disinformation.
[19] Measure 20.1 Code of Conduct on Disinformation.
[20] Article 35 DSA.
[21] Democracy Reporting International, "DRI Statement on Platforms Reducing Commitments Ahead of Strengthened Code of Conduct on Disinformation," January 22, 2025, https://democracy-reporting.org/en/office/EU/publications/dri-statement-on-platforms-reducing-commitments-ahead-of-strengthened-code-of-conduct-on-disinformation.
[22] The number of measures to which platforms have committed in the Code of Practice (CoP) has decreased by 31%. In 2022, platforms endorsed an average of 78 out of 132 measures (59%), but by 2025 this dropped to 53 (40%). The largest decrease is in measures supporting the fact-checking community (down 64%).
[23] LinkedIn, "Systemic Risk Assessment," August 2024, pp. 34, 40, https://content.linkedin.com/content/dam/help/tns/en/2024_LinkedIn_DSA_SRA_Report_23_Aug_24.pdf.
[24] D. Alvarado Rincón & M. Meyer-Resende, "Big tech is backing out of commitments countering disinformation: What's Next for the EU's Code of Practice?", European Union, Democracy Reporting International, Feb. 7, 2025, pp. 6-7, https://democracyreporting.s3.eu-central-1.amazonaws.com/pdf/67ac7d8316ca4.pdf.
[25] Ed Thomas & Shayan Sardarizadeh, "How a deleted LinkedIn post was weaponized and seen by millions before the Southport riot," BBC, Oct. 25, 2024, https://www.bbc.com/news/articles/c99v90813j5o.
[26] D. Alvarado Rincón & M. Meyer-Resende, "Big tech is backing out of commitments countering disinformation: What's Next for the EU's Code of Practice?", European Union, Democracy Reporting International, Feb. 7, 2025, pp. 8-9.
[27] D. Alvarado Rincón & M. Meyer-Resende, "Big tech is backing out of commitments countering disinformation: What's Next for the EU's Code of Practice?", European Union, Democracy Reporting International, Feb. 7, 2025, p. 9.
[28] D. Alvarado Rincón & M. Meyer-Resende, "Big tech is backing out of commitments countering disinformation: What's Next for the EU's Code of Practice?", European Union, Democracy Reporting International, Feb. 7, 2025, p. 9.
[29] D. Alvarado Rincón & M. Meyer-Resende, "Big tech is backing out of commitments countering disinformation: What's Next for the EU's Code of Practice?", European Union, Democracy Reporting International, Feb. 7, 2025, p. 10.
[30] https://ifcncodeofprinciples.poynter.org/about.
[31] https://www.poynter.org/about/.
[32] https://guides.lib.berkeley.edu/c.php?g=620677&p=4333407.
[33] https://library.csi.cuny.edu/c.php?g=619342&p=4310783.
[34] https://www.snopes.com/about/.
[35] https://www.reuters.com/fact-check/.
[36] https://members.efcsn.com/signatories.