Capitalizing on the economic opportunities of artificial intelligence (AI) while ensuring the technology operates safely and reliably: that is the premise of the European AI Regulation (AI Act), which has been phased in since last summer. From February 2, 2025, various bans on unwanted applications of AI apply across the EU. These cover, for example, AI systems that classify or rank people based on social behavior or personal characteristics (social scoring) where this could lead to adverse or unfavorable treatment.

The AI regulation as a whole offers opportunities for developers and entrepreneurs as well as safeguards for European consumers. It includes basic agreements on how AI may work in products and services, requirements for (potentially) risky applications, and support for developers such as SMEs. This allows Europeans to rely on AI and lets entrepreneurs work on innovation in a more focused way.
Minister Dirk Beljaarts (Economic Affairs): "We strive for AI models that work according to European standards and values, not only to ensure the safe operation of the technology, but also so we can better exploit opportunities for innovation and for entrepreneurs. As of today, we prohibit the unwanted risks of several AI systems. This fits the balance we strive for: strict rules where necessary, but no unnecessary rules for companies developing and deploying low-risk AI systems."
In addition to the ban on social scoring through technology, AI systems that use emotion recognition in the workplace and in education are no longer allowed, nor are AI systems that use manipulative or deceptive techniques to change behavior in harmful ways. The same goes for real-time remote biometric identification in public spaces for law enforcement, with limited exceptions. Likewise, using AI to assess the risk that a person will commit a crime solely on the basis of profiling is no longer tolerated in the EU.
The AI regulation entered into force on Aug. 1, 2024, but the various parts are applicable in phases. This gives developers and providers of high-risk AI an opportunity to bring their applications into compliance with the new requirements.
Feb. 2, 2025: provisions on prohibited AI;
Aug. 2, 2025: requirements for general-purpose AI models;
Aug. 2, 2026: requirements for high-risk AI applications and transparency obligations;
Aug. 2, 2027: requirements for high-risk AI products; full AI regulation applies;
Aug. 2, 2030: requirements for high-risk AI systems at government organizations that were marketed before August 2026.
National regulators will monitor compliance with the bans on certain AI systems, the requirements for high-risk AI, and the transparency obligations. The European Commission will oversee large AI models that can be used for many different purposes.
