The European Commission proposes the first-ever legal framework for AI, addressing AI risks and positioning Europe to play a leading role globally.

The Commission today (April 21) presents new rules and actions to make Europe the global hub for trusted artificial intelligence (AI). The combination of the first-ever regulatory framework for AI and a new coordinated plan by the Union and member states will ensure the safety and fundamental rights of people and businesses while strengthening support for AI, investment and innovation across the EU. That approach is complemented by new rules on machinery, which update safety requirements to increase users' confidence in the new, versatile generation of products.
Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, said: "For artificial intelligence, trust is a must, not a luxury. With these groundbreaking rules, the EU is taking the lead in developing new global standards to ensure everyone can trust AI. By setting standards, we can pave the way for global ethical technology and ensure the EU remains competitive. Our rules are future-proof and innovation-friendly. We act only where strictly necessary: when the safety and fundamental rights of EU citizens are at stake."
Internal Market Commissioner Thierry Breton said, "Artificial intelligence is a means, not an end. AI has been around for decades but the computing power of modern computers has opened up new possibilities in areas such as health, transport, energy, agriculture, tourism and cybersecurity. However, this also comes with some risks. Today's proposals aim to strengthen Europe's position as a global center of excellence in AI from the lab to the marketplace, ensure AI in Europe respects our values and rules, and harness the potential of AI for industrial use."
The new AI regulation will give Europeans confidence in the potential of AI. Proportionate and flexible rules will address the specific risks of AI systems. Europe opts for the strictest rules in the world. The coordinated plan outlines the necessary policy changes and investments that must happen in member states to strengthen Europe's leadership in the development of human-centered, sustainable, secure, inclusive and reliable AI.
The new rules will be applied directly in the same way in all member states based on a future-proof definition of AI. This will follow a risk-based approach:
Unacceptable risk: AI systems that pose a clear threat to human safety, livelihoods and rights will be banned. These include AI systems or applications that manipulate human behavior to circumvent users' free will (e.g., voice-activated toys that encourage minors to engage in dangerous behavior) and systems that enable "social scoring" by governments.
High risk: AI systems identified as high-risk include AI technology used in:
critical infrastructure networks (e.g., transportation), which could endanger the lives and health of citizens;
education or professional training, which may determine access to education and professional careers (e.g., scoring of exams);
safety components of products (e.g., AI application in robot-assisted surgery);
employment, personnel management and access to self-employment (e.g., software for screening resumes in selection procedures);
essential private and public services (e.g., credit rating on the basis of which citizens are denied loans);
law enforcement, which may affect our fundamental rights (e.g., assessing the reliability of evidence);
migration, asylum and border control management (e.g., verification of the authenticity of travel documents);
administration of justice and democratic processes (e.g., application of the law to concrete facts).
High-risk AI systems will be subject to strict obligations before they are allowed to be marketed:
adequate risk assessment and mitigation systems;
high quality of the datasets feeding the system to minimize risks and discriminatory outcomes as much as possible;
recording of activities to ensure traceability of results;
detailed documentation containing all necessary information to enable the authorities to assess the purpose and conformity of the system;
clear and adequate information for users;
appropriate human supervision to minimize risks;
a high level of robustness, security and accuracy.
In particular, all remote biometric identification systems are considered high-risk and must meet strict requirements. The live use of those systems in publicly accessible spaces for law enforcement purposes is in principle prohibited. Limited exceptions are strictly defined and regulated (e.g., to search for a missing child, to avert a specific and imminent terrorist threat, or to locate, identify or prosecute a perpetrator or suspect of a serious crime). Prior authorization must be granted by a judicial or other independent authority, valid only for a limited time and area and for specific databases.
Limited risk: AI systems subject to specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so that they can make informed decisions about whether or not they wish to continue that interaction.
Minimal risk: The legislative proposal allows the free use of AI-based video games or spam filters. The vast majority of AI systems fall into this category. The draft regulation leaves those systems untouched since the risk to citizens' rights or safety is minimal or non-existent.
Regarding governance, the Commission proposes that competent national market surveillance authorities oversee the new rules. The establishment of a European Artificial Intelligence Board will facilitate the implementation of those rules and encourage the development of standards for AI. In addition, the Commission advocates voluntary codes of conduct for AI without major risks and regulatory sandboxes to promote responsible innovation.
Through coordination, Europe will strengthen its leadership role in human-centered, sustainable, safe, inclusive and reliable AI. To remain globally competitive, the EU is committed to promoting innovation in the development and use of AI technology across industries and member states.
The Coordinated Plan on AI, first published in 2018 to define actions and funding instruments for AI development and deployment, has paved the way for a dynamic landscape of national strategies and EU funding for public-private partnerships and research and innovation networks. As part of the comprehensive update of the Coordinated Plan, concrete joint cooperation actions are proposed to align all efforts with the European Strategy on AI and the European Green Deal, taking into account the new challenges posed by the coronavirus pandemic. That plan proposes a vision to accelerate investments in AI, which can benefit the recovery. It also aims to encourage the implementation of national AI strategies, reduce fragmentation and address global challenges.
The updated coordinated plan will leverage funding through the Digital Europe and Horizon Europe programs, the Recovery and Resilience Facility, which includes a 20 percent digital spending target, and the Cohesion Policy programs to:
create favorable conditions for the development and application of AI through the exchange of policy insights, data sharing and investment in critical computing capabilities;
promote excellence in AI "from laboratory to market" by establishing a public-private partnership, building and mobilizing research, development and innovation capacity, and providing testing and experimentation facilities and digital-innovation hubs for small and medium-sized enterprises (SMEs) and public administrations;
ensure that AI serves people and benefits society by leading the development and deployment of trusted AI, by developing talent and skills through the support of internships, PhD networks and postdoctoral fellowships in digital fields, by integrating trust into AI policies, and by promoting the European vision of sustainable and trusted AI worldwide;
build strategic leadership in high-impact sectors and technologies, including the environment, by focusing on AI's contribution to sustainable manufacturing, to healthcare through greater cross-border information sharing, and to the public sector, mobility, home affairs, agriculture and robotics.
Machinery products include a wide range of products for consumers and professionals, from robots to lawnmowers, 3D printers, construction machinery and industrial production lines. The Machinery Directive, which is being replaced by the new Machinery Regulation, sets health and safety requirements for machinery. The new machinery regulation will both ensure that the new generation of machinery ensures the safety of users and consumers and encourage innovation. While the new AI regulation will address the safety risks of AI systems, the machinery regulation will ensure the safe integration of the AI system into the machine as a whole. Companies will only have to carry out one conformity assessment.
In addition, the new Machinery Regulation addresses market needs by legally clarifying current provisions and reducing administrative burdens and costs for companies. Documentation will be allowed to be submitted in a digital format, and conformity assessment costs for SMEs will be reduced. Finally, the regulation ensures consistency with the EU legislative framework for products.
The European Parliament and member states will consider the Commission's proposals for a European approach to artificial intelligence and machine products through the ordinary legislative procedure. Once adopted, the regulations will become directly applicable throughout the EU. At the same time, the Commission will continue to work with the member states in implementing the actions announced in the coordinated plan.
Background
The Commission has been promoting and strengthening AI cooperation in the EU for years to improve competitiveness and ensure trust based on EU values.
Following the publication of the European Strategy on AI in 2018 and extensive stakeholder consultations, the High-Level Expert Group on Artificial Intelligence (HLEG) developed Guidelines for Trustworthy AI in 2019 and an Assessment List for Trustworthy AI in 2020. At the same time, the first Coordinated Plan on AI was presented in December 2018 as a joint commitment of the EU and member states.
The Commission's White Paper on AI, published in 2020, outlined a clear vision for AI in Europe: an ecosystem of excellence and trust. The White Paper paved the way for today's proposal. The public consultation on the AI White Paper garnered a strong response worldwide. The White Paper was accompanied by a "Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics," which concluded that current product safety legislation contains a number of gaps that need to be addressed in the Machinery Directive.
