Autoriteit Persoonsgegevens consultation to obtain insight into banned AI systems in the Netherlands

8 October 2024

Articles

This article in brief

  • The Autoriteit Persoonsgegevens (AP) launched a consultation on banned AI systems despite not yet having a formal supervisory role under the AI Act.

  • There are concerns about possible misuse of sensitive information by the AP and a risk of "naming and shaming."

  • Opinion: the AP would have been better off delaying its consultation until the European Commission publishes guidelines and its role is clearer.

Merely a premature action without jurisdiction, or (also) a fishing expedition in disguise?

The legal landscape around artificial intelligence ("AI") has been mapped out a little further with the entry into force of the AI Act[1] on August 1, 2024, but in practice the Act still raises plenty of questions about its actual application. That the Autoriteit Persoonsgegevens ("AP") is also still finding its way is evidenced by the "first call for input" that it issued on September 27, 2024 - in its role as Directorate for Coordination of Algorithms ("DCA") - to stakeholders ("citizens, governments, companies and other organizations") in connection with prohibited AI systems.

Article 5 of the AI Act provides an exhaustive list of AI systems that - in view of the unacceptable risk they pose - are classified as prohibited and must be withdrawn from the market or taken out of use no later than February 2, 2025. One of these prohibitions concerns an:

"AI system that uses subliminal techniques of which individuals are unaware or deliberately manipulative or deceptive techniques, with the purpose or effect of materially interfering with the behavior of individuals or a group of individuals by appreciably interfering with their ability to make an informed decision, thereby causing them to make a decision that they would not otherwise have made, in a manner that causes or is reasonably likely to cause substantial harm to such or other individuals, or a group of individuals."

The AP's consultation focuses on this category of systems. Under the AI Act, it is the duty of the national market surveillance authority to take enforcement action against them. This is not permitted for every AI system that merely "seems fishy": the AI Act requires that the national market surveillance authority have "sufficient reason" to believe "that an AI system presents a risk." Only then may the system be evaluated to assess whether it meets the requirements of the AI Act (and thus is not prohibited). What constitutes a risk follows explicitly from the AI Act;[2] the national market surveillance authority is expressly not free to set the criteria for this itself. If the authority determines that the system does not meet the requirements - and no appropriate corrective measures are taken to bring the system into compliance with the AI Act - then it is up to the national market surveillance authority to order the withdrawal or recall of the system.

It is striking that the AP's consultation seems to anticipate its designation as national market surveillance authority under the AI Act for prohibited AI systems, while no such designation has yet been made: the Ministry of Economic Affairs and the Minister for Legal Protection have until August 2, 2025 to decide. Although it is not inconceivable that this task will be entrusted to the AP, it cannot be ruled out that another authority, or an entirely new authority yet to be established, will be designated for this purpose.

Also noteworthy is the short lead time the AP has attached to its call for input. Stakeholders must provide their input by November 17, 2024 at the latest. This means that from that moment on the AP expects to have access to highly sensitive (personal) data, while neither the AI Act nor its role as DCA[3] provides for this. From the AP, of all bodies, as data protection supervisor, greater care in the processing of data might therefore have been expected.

All the more so given the risk of naming and shaming (e.g. by a disgruntled customer, interest group or competitor) that the consultation invites. The AP asks stakeholders to report concrete, existing AI systems that may qualify as prohibited, through questions such as "[c]ould you provide examples of AI-based systems that (potentially) lead to manipulative or deceptive and/or exploitative practices?" and "[c]ould you provide examples of AI systems that use subliminal techniques?"

The question that arises is what the AP intends to do with the insights gained. Will they be kept in a secure, locked environment until there is clarity about the AP's possible role as national market surveillance authority? And will they then be used to detect and investigate concrete, potentially prohibited AI systems (in other words, a fishing expedition, with the AP hoping to "catch" something using its consultation as a figurative fishing rod)? Even in such a scenario, where the insights are not used until the AP is designated as a supervisory authority under the AI Act, the legitimacy of the current call for input is questionable. Is this really the most appropriate way to identify prohibited AI systems in use in a member state? The AI Act makes no mention of this method of enforcement against prohibited AI systems at the national level.

The call for input does not answer these questions. It does indicate, in generic terms, that the AP will publish a "summary and appreciation of the input," after which the input will be deleted. It is unknown, however, when it plans to publish this. In the short term, or only after its possible designation as supervisory authority? And how will function creep - the situation in which the insights obtained are nevertheless (unexpectedly) also used for enforcement purposes - be prevented? This question is all the more pressing now that the AP emphasizes in the request that, if desired, organizations or groups may be named in its report.

Moreover, the knowledge "gained" by the AP can be shared with other (European) AI supervisory authorities. Is this purely for mutual knowledge sharing, or will this information also be used more broadly for enforcement purposes? The questionnaire gives no definitive answer.

In the call for input, the AP further indicates that it will in any case use the input to establish a preliminary basis for further interpretation of the prohibitions in the AI Act. Although the AP's attempt to be transparent about its view on the enforcement of prohibited AI systems is commendable (not least from the perspective of legal certainty), establishing a general review framework is not a task the AP is entitled to perform, even should it be designated as market surveillance authority for prohibited AI systems: the AI Act explicitly assigns the drafting of guidelines on, among other things, the (qualification of) AI systems as prohibited practices to the European Commission. This is not illogical, as it creates a strong degree of harmonization across all member states on this issue and reduces the risk of divergent standards - which could require an AI system to be designed differently in one member state than in another in order to avoid being classified as prohibited.

All in all, the AP would therefore have done better to omit its "first call for input" and instead calmly await the designation process for the market surveillance authority for prohibited AI systems, giving the European Commission the opportunity in the meantime to draft guidelines on the interpretation and qualification of prohibited AI systems. By issuing the consultation, the AP has raised more questions than it intended to answer.

Footnotes

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

[2] Reference is made here to Article 3(19) of Regulation (EU) 2019/1020. In short, this concerns risks to the health, safety or fundamental rights of persons.

[3] Appendix to Parliamentary Paper 35 788, no. 77; Parliamentary Paper 26 643, no. 953.
