The Digital Services Act ("DSA"), together with the Digital Markets Act ("DMA"), aims to create a safe, predictable and reliable online environment. The DSA has a broad scope and applies to more than 10,000 intermediary services operating within the European Union ("EU"). Examples of intermediary services include hosting services, online marketplaces, social networks, app stores, content-sharing platforms and online travel platforms. In addition, the deployment of Artificial Intelligence ("AI") falls within the framework of the DSA in certain cases. Where AI is involved, additional regulations such as the General Data Protection Regulation ("GDPR") and the AI Act may also come into play.
In the first part of our three-part blog series, we provided an introduction to the DSA. This second part focuses on transparency and algorithms under the DSA and their relationship to the GDPR and the AI Act. The third and final part focuses on content moderation under the DSA.
A key theme of the DSA is transparency: the DSA sets requirements for how algorithmic processes may be deployed, how advertisements are placed, and how harmful or illegal content must be moderated. The DSA also imposes specific rules for automated AI-based (recommendation) systems, to give users insight into how certain content is presented to them.
Transparency obligations related to algorithms
Under the DSA, illegal or harmful content can be addressed or removed following a notification or order from a competent authority (Article 9) or a complaint from a user (Article 14). In addition, parties can take proactive action against illegal or harmful content. This means they need not wait for notifications, but can also, for example, deploy algorithmic systems to automatically detect and remove such content, such as deepfakes, before it spreads further.
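For the technically inclined, a minimal sketch of what such a proactive moderation step might look like in practice. This is purely illustrative: the function names, thresholds and the three-way outcome are our own assumptions, not anything prescribed by the DSA.

```python
# Illustrative sketch of proactive, algorithmic content moderation.
# All names and thresholds are assumptions, not DSA requirements.

from dataclasses import dataclass

@dataclass
class UploadedContent:
    content_id: str
    media_type: str  # e.g. "video", "image"

def deepfake_score(content: UploadedContent) -> float:
    """Placeholder for a trained detection model; returns a risk score in [0, 1]."""
    return 0.0  # a real system would run model inference here

REMOVE_THRESHOLD = 0.95   # high confidence: act automatically
REVIEW_THRESHOLD = 0.60   # medium confidence: queue for a human moderator

def moderate(content: UploadedContent) -> str:
    score = deepfake_score(content)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # automated removal, with a statement of reasons (Article 17)
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # uncertain cases are escalated to a person
    return "allow"
```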
If algorithmic systems are used in the context of content moderation, users must be informed of this through the general terms and conditions (Article 14). Intermediary services must also report annually on the content moderation performed, indicating to what extent automated tools were used (Article 15). This is also relevant to the duty of intermediary services to state reasons when making content moderation decisions (Article 17). Finally, platforms must handle complaints carefully, with automated decision-making always subject to human review (Article 20).
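The human-review requirement of Article 20 can be made concrete with a short sketch. Again, the record layout and field names below are our own illustration of how a platform might make that review auditable, not language from the DSA itself.

```python
# Sketch of complaint handling where the final decision is never taken
# solely by automated means (Article 20). Field names are illustrative.

from dataclasses import dataclass

@dataclass
class Complaint:
    complaint_id: str
    original_decision: str         # e.g. "remove"
    automated_recommendation: str  # what the automated system now suggests

def resolve_complaint(complaint: Complaint, human_decision: str) -> dict:
    """A human moderator must confirm or override the automated recommendation."""
    return {
        "complaint_id": complaint.complaint_id,
        "automated_recommendation": complaint.automated_recommendation,
        "final_decision": human_decision,  # always set by a person, never defaulted
        "reviewed_by_human": True,         # auditable flag for transparency reports (Article 15)
    }
```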
AI is not only used to detect and address harmful or illegal content. Platforms such as TikTok and Instagram commonly use algorithmic recommendation systems to bring specific information to the attention of users (Article 3(s)). Such systems are used to rank, prioritize and present content, which significantly affects how users receive and respond to information. Platforms must therefore specify in their terms and conditions the main parameters used by their recommendation systems, as well as the options for users to change them (Article 27).
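To make Article 27 tangible, here is a hedged sketch of what user-adjustable recommender parameters could look like. The parameter names and weights are invented for illustration; the DSA does not prescribe any particular parameters, only that the main ones are disclosed and, where options exist, modifiable.

```python
# Sketch of user-adjustable recommender parameters in the spirit of Article 27.
# Parameter names and weights are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RecommenderSettings:
    use_profiling: bool = True    # rank based on inferred interests
    chronological: bool = False   # alternative: plain reverse-chronological feed
    weights: dict = field(default_factory=lambda: {
        "recency": 0.4,            # newer content ranks higher
        "engagement": 0.4,         # likes and shares boost ranking
        "followed_accounts": 0.2,  # content from accounts the user follows
    })

def apply_user_choice(settings: RecommenderSettings,
                      profiling_opt_out: bool) -> RecommenderSettings:
    """Let the user switch off profiling-based ranking, as the DSA envisages."""
    if profiling_opt_out:
        settings.use_profiling = False
        settings.chronological = True
    return settings
```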
Transparency obligations related to advertisements
Similar obligations apply to online advertising. For example, the DSA requires online platforms that display advertising to ensure that, for each advertisement displayed, the recipients of the service can clearly identify the following (a brief sketch of these disclosures follows the list):
that the content shown is an advertisement,
on whose behalf the advertisement is displayed,
who paid for the advertisement, and
which parameters are used to determine to whom the advertisement is shown (Article 26).
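As promised above, the four disclosures can be pictured as a simple data structure attached to every ad impression. The field names and example values are our own assumptions, chosen only to mirror the four items of Article 26.

```python
# Illustrative record of the per-advertisement disclosures that Article 26
# requires to be visible to users. Field names are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class AdDisclosure:
    is_advertisement: bool       # the content is clearly marked as an ad
    on_behalf_of: str            # the natural or legal person in whose name it is shown
    paid_by: str                 # who paid for the ad, if different
    targeting_parameters: tuple  # main parameters used to select the audience

example = AdDisclosure(
    is_advertisement=True,
    on_behalf_of="Example Brand B.V.",
    paid_by="Example Media Agency",
    targeting_parameters=("age 25-34", "interest: running"),
)
```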
Platforms may not show advertising based on profiling that uses special categories of personal data (Article 26(3)). In addition, platforms may not show advertising based on profiling when the recipient of the service is a minor (Article 28).
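In an ad-serving pipeline, these two prohibitions amount to a hard gate before any profiling-based targeting. A minimal sketch, with function and parameter names of our own invention:

```python
# Sketch of a pre-serving check for the DSA's profiling restrictions
# (Articles 26(3) and 28). Names are illustrative assumptions.

def may_target_with_profiling(uses_special_category_data: bool,
                              recipient_is_minor: bool) -> bool:
    """Return False whenever profiling-based advertising is prohibited."""
    if uses_special_category_data:  # e.g. health, religion, sexual orientation
        return False
    if recipient_is_minor:          # no profiling-based ads for minors
        return False
    return True
```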
With respect to online advertising, VLOPs and VLOSEs are subject to additional transparency obligations. For example, VLOPs and VLOSEs must keep a publicly accessible register of all advertisements displayed on their services, to allow monitoring and investigation of risks such as illegal advertising, manipulation and disinformation (Article 39). This register must include details such as the ad content, the advertiser, the display period, the targeting parameters used and the reach achieved, without including users' personal data.
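A repository entry along those lines might be modelled as follows. This schema is an assumption of ours that simply mirrors the elements listed above; note that reach is recorded only as an aggregate number, so no personal data of users enters the register.

```python
# Illustrative schema for an entry in the public advertisement repository
# of a VLOP or VLOSE (Article 39). Field names are assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AdRepositoryEntry:
    ad_content: str              # the advertisement itself, or a reference to it
    advertiser: str              # on whose behalf it was shown
    first_shown: date
    last_shown: date
    targeting_parameters: tuple  # main parameters, including any exclusions
    total_reach: int             # aggregate numbers only: no users' personal data
```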
The DSA and the GDPR
The GDPR laid the foundation for the EU-wide legal framework on the protection of personal data. The DSA builds on the principles of the GDPR.
Whereas the GDPR focuses primarily on the protection of personal data, the DSA's focus is on creating a secure and transparent online environment. The DSA does this by requiring parties to be more transparent and accountable about their activities. Parties active in an online environment often process personal data, which is why the DSA and the GDPR frequently apply simultaneously. The DSA emphatically does not detract from the GDPR and is in some respects even stricter.
A clear point of overlap between the DSA and the GDPR is how user data (insofar as it constitutes personal data) may be processed and used. Under the GDPR, users can in certain cases consent to the use of their data, and the DSA adds that personalized advertising is no longer allowed without explicit consent. In addition, the DSA requires platforms to make their algorithms more transparent, giving users more insight into how their online experience is shaped.
The DSA and the AI Act
The DSA and the AI Act essentially regulate separate subjects. The AI Act deals primarily with AI technology, while the DSA regulates intermediary services and other providers. Nevertheless, the two complement each other through their shared focus on transparency and accountability around the use of AI systems: platform regulation and the use of AI systems are becoming increasingly intertwined, as the preamble to the AI Act recognizes.
However, the DSA does not regulate the distribution of content by AI systems as such. The DSA is built around the tripartite division of mere conduit, caching and hosting services. Stand-alone AI services, such as generative AI that creates content based on user input, do not fit neatly into any of these three categories: content creation and distribution by AI systems involve more complex processes that the DSA does not (fully) address. The DSA does, however, provide rules for platforms that use automated AI-based (recommendation) systems to bring certain content to the attention of users (Article 27). As discussed above, the DSA requires recommendation systems and advertisements to be explained.
The AI Act, by contrast, imposes broader requirements for risk assessment and user disclosure. For example, under the AI Act, deployers of AI systems that generate or manipulate images, sound and/or video must disclose that the material is artificially generated or manipulated. A deepfake is a prime example. The DSA, in turn, identifies deepfakes as a risk (Article 34). VLOPs and VLOSEs must mitigate such risks, including through measures ensuring that "generated or manipulated image, audio or video material" (such as deepfakes) is marked as inauthentic through prominent labelling (Article 35).
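In implementation terms, both regimes converge on the same practical step: attaching a prominent, user-facing marking to synthetic media. A minimal sketch, with names and label text of our own choosing:

```python
# Sketch of prominent labelling for generated or manipulated media, in the
# spirit of the AI Act's disclosure duties and DSA Article 35.
# All names and the label wording are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MediaItem:
    media_id: str
    ai_generated: bool  # set at upload time or by a detection pipeline

def render_label(item: MediaItem) -> str:
    """Return a prominent, user-facing marking for synthetic media."""
    if item.ai_generated:
        return "This image/audio/video was generated or manipulated by AI"
    return ""
```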
For more information on the AI Act, please see our whitepaper.
The DSA, combined with the GDPR and the AI Act, presents complex compliance challenges for companies in the Netherlands and the rest of the EU. Organizations that develop and deploy AI tools in particular must prepare for the confluence of these regulations. After all, AI also plays a role within the DSA: it offers content moderation solutions on the one hand, but introduces new enforcement risks on the other. This requires investment in algorithmic infrastructure.
The DSA ushers in a new era of transparency and accountability in the digital world. With rules on algorithms, advertising and content moderation, the DSA provides clear frameworks for user rights and the responsibilities of intermediary services. At the same time, the DSA reinforces existing laws, such as the GDPR, and prepares intermediary services for additional regulations, such as the AI Act.
The final part of this series, to be published Feb. 3, will take a closer look at companies' obligations and responsibilities around content moderation and dealing with illegal content under the DSA.