To ensure that artificial intelligence (AI) is developed and used safely and reliably, the AI Regulation came into force in the European Union on Aug. 1, 2024. It sets out European requirements and frameworks for AI. This allows the EU to take full advantage of the (economic) opportunities of AI, address the risks of AI systems and applications, and encourage the development of reliable AI. The regulation will apply in phases.

The AI Regulation is a set of rules to ensure reliable AI in the EU by setting requirements for the development and use of AI systems. Different requirements apply depending on the AI system and the context in which it is used. AI applications that pose truly unacceptable risks to people will be banned; an example is systems used to unfairly rate people (so-called "social scoring"). High-risk AI, such as applications in the labor market or in financial services, will be subject to clear requirements. For certain AI applications that interact with citizens (such as chatbots) or that create or manipulate content (deepfakes), it must be made clear that AI is involved. No new requirements are imposed on AI applications that pose little or no risk; most AI systems now in use in Europe fall into this category.
Citizens, consumers, public authorities and businesses who use or come into contact with AI products can thus be confident that these systems are safe and reliable. And it creates clarity for AI developers when they want to offer their products on the European market. In addition, in 'regulatory sandboxes', companies and (government) organizations can get advice and explanations from supervisors on the rules of the regulation, thus lowering the threshold to develop and deploy AI systems responsibly. Finally, these European rules ensure a level playing field; non-European providers of AI products and services must also comply with them if they offer them here.
Minister Beljaarts (Economic Affairs): "We want to better exploit the economic opportunities of AI and continue to encourage innovation by researchers and entrepreneurs. At the same time, we need to be able to ensure the safe operation of AI. With this legislation, we therefore match the risks of various AI systems. Strict rules where necessary, but precisely no unnecessary rules for companies that develop and apply low-risk AI systems. I am therefore pleased with these European agreements that strike the right balance between the opportunities and risks of AI."
State Secretary Szabó (Digitalization and Kingdom Relations): "This moment of entry into force of the AI regulation is an important milestone for the European Union. The EU, and with it the Netherlands, is strongly committed to ensuring that we can trust AI, and that we encourage the development and use of trusted AI in government. On the front end, for example, developers must now identify risks to fundamental rights, and citizens are actively informed about the deployment of high-risk AI systems when they are used in decisions that affect them."
The regulation came into force on Aug. 1, 2024, but the various parts will apply in phases. This gives developers and providers of high-risk AI an opportunity to bring their AI systems into compliance with the new requirements.
February 2025: provisions on prohibited AI
August 2025: requirements for general-purpose AI
August 2026: requirements for high-risk AI applications and transparency obligations
August 2027: requirements for high-risk AI products; the full AI regulation then applies
August 2030: requirements for high-risk AI systems at government organizations launched before August 2026
The European Commission will oversee large AI models that can be used for many different purposes. National regulators will monitor compliance with the requirements for high-risk AI; this will be laid down in national legislation.
