AP and RDI: Oversight of AI systems requires cooperation and must be arranged quickly

In the supervision of artificial intelligence (AI), cooperation between supervisors should be paramount, write the Dutch Data Protection Authority (AP) and the National Digital Infrastructure Inspectorate (RDI) in an advisory report to the cabinet. It must also quickly become clear which bodies will carry out the various parts of this supervision. After all, the first parts of the new European AI Regulation will already be in force in early 2025.

Personal Data Authority June 12, 2024

News press release

Moreover, the AP and the RDI stress that sufficient budget and personnel must be made available in time for all supervisors concerned, so that they can begin their tasks, such as providing information and enforcement, on time.

The advice was drafted by the AP and the RDI in cooperation with the 20 other regulators that may play a role in AI supervision. For more than a year, Dutch regulators have been jointly preparing for the new AI supervision. With this joint vision of how national AI supervision should be organized, the Dutch regulators are leading the way in Europe.

AI Regulation

Last month, European ministers approved the AI Regulation: the world's first comprehensive law on artificial intelligence. The AI Regulation stipulates that high-risk AI systems may only be offered and used if they meet strict product requirements. Those systems will then receive a so-called CE mark, as has been mandatory for years for products such as elevators, cell phones and toys.

Aligning with regular supervision

The AP and the RDI recommend that the supervision of AI in the various sectors be aligned as much as possible with the regular supervision that already exists. Supervision of high-risk AI products that already require CE marking can remain the same. For example, the Dutch Food and Consumer Product Safety Authority (NVWA) will continue to inspect toys even if they contain AI. And the Healthcare and Youth Inspectorate (IGJ) will also supervise AI in medical devices.

Angeline van Dijk, RDI inspector general: 'Cooperation is key when it comes to concentrating knowledge and coordinating implementation. Effective supervision with an eye for innovation can only come about if the supervisors involved cooperate with the developers and users of high-risk AI, and if market parties do the same among themselves. Companies and organizations can explore with the RDI whether they need to comply with AI regulations, and how they can make that happen. The RDI's commitment to regulatory sandboxes, a kind of incubator for responsible AI applications, is an excellent example. And this advice is an important milestone in that regard.'

New supervision of AI

Supervision of high-risk AI applications for which no CE marking is currently mandatory should, in addition to sectoral supervision, largely be the responsibility of the AP, the regulators write. It does not matter in which sector these systems are deployed: from education to migration and from employment to law enforcement, the AP should act as the so-called 'market supervisor' here.

AP chairman Aleid Wolfsen: 'The market supervisor will ensure that AI placed on the market actually meets requirements in areas such as AI training, transparency and human oversight. That requires a lot of specialist knowledge and expertise, and it is efficient to bundle that. It is also important that the AP then has an overview: the companies that develop such AI often do not do so for just one sector. Cooperation with sectoral regulators is crucial here, because they in turn have a good overview of AI use in, for example, education or by employers. We are going to work quickly to set up that cooperation.'

The regulators propose 2 exceptions: in the financial sector, the Dutch Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB) will undertake market supervision, and the Human Environment and Transport Inspectorate (ILT) and the RDI will do so for critical infrastructure. Furthermore, market supervision of AI systems used by the judiciary should be shaped in a way that safeguards the independence of judicial authorities.

It is important that supervisors are quickly appointed not only in the Netherlands, but also in the other member states. After all, cross-border and large AI systems require cooperation between supervisors from different member states and with the new European AI Office, which will oversee large AI models such as those underlying ChatGPT.

Urgent AI tasks

A number of issues must be settled in the short term. These include, for example, the appointment of fundamental rights supervisors, a role that the regulators foresee for the Netherlands Institute for Human Rights (CRM) and the AP. This also concerns the bodies to be appointed to assess whether AI systems meet European standards. The regulators ask the cabinet to decide quickly on designating the relevant supervisors, so that they can start in time with providing information, enforcement and practical preparation for these new tasks. For example, the ban on some forms of AI is likely to apply as early as January 2025. The regulators propose that the AP become responsible for overseeing these bans.
