Cooperation, coordination and the optimal use of expertise in fundamental rights and security: these elements are at the heart of the final advice 'Supervision of AI', presented today by the Rijksinspectie Digitale Infrastructuur (RDI) and the Autoriteit Persoonsgegevens (AP). The advice describes how an integrated approach can be used to effectively supervise the use of AI.

Artificial intelligence (AI) is undergoing rapid development and is being applied everywhere and on an increasingly large scale. The applications are endless and AI offers enormous opportunities for our society. At the same time, major risks can arise.
The AI Regulation contains rules for the responsible development and use of AI by companies, governments and other organizations. Well-organized supervision of these rules gives consumers confidence and creates clarity for organizations and businesses, which can continue to deal with the (sectoral) supervisors they already know.
Because of the many different AI applications, there are also many different risks. An AI system in toys poses different risks than, for example, an AI system for recruitment and selection. The RDI and the AP therefore recommend that the supervision of AI in the various sectors and domains be aligned as much as possible with regular supervision.
Products such as machines, elevators or toys must already comply with various European (safety) regulations. Consumers can recognize this by the CE marking. When AI is used in these types of products, the AI Regulation must also be complied with. Supervision of these products can remain with the same supervisor. In this way, the knowledge, expertise and capacity of the sectoral supervisors are optimally utilized and their mandates remain intact.
When organizations integrate AI applications into products and services that do not currently require a mandatory CE marking, supervision of these can also align with existing supervisory roles. Examples include applications where AI systems are used to make decisions about people, such as in recruitment and selection, assessments in education or risk selection by government organizations.
For these cross-sector applications, it is important that supervisors collaborate on the basis of their sectoral and domain-specific expertise. For example, they should be able to share signals with one another and work together on appropriate interventions. Supervision of the AI Regulation should take place in close cooperation between supervisors, so that supervision does not become fragmented and organizations know what is expected of them.
The final advice on supervision of the AI Regulation assigns coordinating roles to the RDI and the AP. In this expert role, they support and advise other supervisors and facilitate cooperation.
This final advice is the result of intensive cooperation between all supervisors involved, with the support of the Inspection Council and the Algorithm & AI Chamber of the Digital Supervisors Cooperation Platform (SDT).
Angeline van Dijk, Inspector General of the RDI, commented: "This broad involvement underlines the joint commitment to integrating AI into Dutch society in a responsible and safe way. In doing so, we are looking not only at today, but precisely at what a safe and at the same time innovative AI landscape will require of us tomorrow."
AP president Aleid Wolfsen stresses the importance of the protective effect of the AI Regulation: "People already come into contact with AI systems on a daily basis, and it is important that citizens can trust that AI is deployed safely, fairly and with respect for fundamental rights. By working together as supervisors, we ensure that high-risk AI systems are developed and deployed safely across sectors, based on a uniform interpretation of the law across the European Union. As the European Data Protection Board also recently wrote, within such a structure the AP, drawing on its role and knowledge, can make an important contribution to product supervision where AI makes decisions or assessments about people."
