AP: Shared choices needed for responsible deployment of generative AI
The Autoriteit Persoonsgegevens (AP) is consulting on its vision for the responsible development and deployment of generative AI. This vision sketches the technology, addresses trends, outlines future scenarios and includes legal interpretation. The AP has drawn up the vision in its role as personal data protection regulator and as coordinating regulator for AI and algorithms. The AP invites organizations to provide feedback on this preliminary vision and on the accompanying preliminary GDPR (AVG) preconditions for generative AI.
Autoriteit Persoonsgegevens May 23, 2025
Generative artificial intelligence (AI) has become an integral part of society. Its potential to contribute to economic and social prosperity is great. Generative AI is possible because models are trained on a staggering amount of information. In addition, these systems are fed with personal data that people enter while using them. As a result, many generative AI models presumably have an unlawful genesis.
Responsibly moving forward
Both the societal promises and the risks of generative AI are great. This presents dilemmas for regulation and for oversight. Fundamental rights and public values must therefore be guiding. As a society, we can actively steer by creating the conditions for responsible generative AI. In this way, opportunities for innovation and effective regulation can go hand in hand.
This vision of the future is central to "Responsibly moving forward: the AP's vision for generative AI". The AP is submitting this vision for consultation in the coming period. The vision offers a guiding perspective on what it will take for society to use generative AI responsibly.
AP President Aleid Wolfsen: "The possibilities of generative AI continue to surprise. The technology can bring many good things, but just as many concerns. Moreover, generative AI goes hand in hand with massive data storage. It is essential that personal data protection is in order, even when generative AI is deployed, and that we as a society keep a grip on the technology in order to protect fundamental rights and public values. Given the rapid technological developments, correcting afterwards is not enough. With this vision, the AP shows that the development and deployment of generative AI is possible in a safe and responsible way."
Future scenario: Values at Work
In this vision, the AP explores four future scenarios for 2030, taking into account the possible development of the technology and of European regulation. The AP aims for the scenario "Values at Work". To get there, the AP identifies a number of necessary starting points at the societal level, such as greater European digital autonomy, societal resilience, democratic governance, a well-functioning market and the ability to make corrections throughout the AI chain.
Additionally, the AP sees starting points for this scenario at the level of individual AI models and systems: developers and users should be transparent, identify and weigh risks, set clear goals, and maintain control over the environment in which the AI system runs and the data it processes. By taking these measures, organizations can comply with relevant legislation, including the GDPR (AVG), the AI Act and any sectoral legislation.
General-purpose AI models
This vision concerns widely deployable AI models that generate all kinds of output, as well as the systems and applications those models are part of: for example, a chatbot, an image or video generator, or a component of a search engine. Generative AI models can serve as a foundation for all kinds of purposes. Such "foundation models" or "general-purpose AI models" fall under the heading of generative AI in this vision.
GDPR preconditions for generative AI
The AP sees both opportunities and challenges for the lawful development and deployment of generative AI. It is therefore simultaneously publishing preliminary GDPR (AVG) preconditions for generative AI. Here too, the AP invites organizations to provide feedback on the preliminary version; the responses will be incorporated into the final version.
According to the AP, the creation of most foundation models to date falls short in terms of lawfulness, because of the (special category) personal data in the training data and the way it was collected. The EDPB has noted, however, that the use of these foundation models by Dutch and European parties is not a priori unlawful.
Next steps for the AP
Technological developments in the field of generative AI are advancing rapidly. Developers and organizations deploying generative AI must be mindful of the risks to end users. With the vision and the GDPR preconditions, the AP provides more clarity for organizations.
In addition to these consultation documents, the AP is taking concrete steps in the coming period to arrive at the preferred "Values at Work" scenario. In the European context, the AP is contributing to the standardization of the technology.
The AP is also opening a contact point for generative AI, where developers and users can share questions and concerns with the AP. This helps the AP keep a finger on the pulse of the biggest concerns facing organizations. In the coming period, the AP will engage in discussions about the vision and the GDPR preconditions and gather input at meetings.