ANNEX III
High-risk AI systems referred to in Article 6(2)
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in the following areas:
1. Biometrics, to the extent that their use is permitted under applicable Union or national law:

a) systems for remote biometric identification. This does not apply to AI systems intended to be used for biometric verification for the sole purpose of confirming that a specific natural person is who they claim to be;

b) AI systems intended to be used for biometric categorization based on sensitive or protected characteristics or features, or on the inference of those characteristics or features;

c) AI systems intended to be used for emotion recognition.
2. Critical infrastructure: AI systems intended to be used as a safety component in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.
3. Education and vocational training:

a) AI systems intended to be used for determining access or admission to, or assigning natural persons to, educational and vocational training institutions at all levels;

b) AI systems intended to be used for evaluating learning outcomes, including when those outcomes are used to guide the learning process of natural persons in educational and vocational training institutions at all levels;

c) AI systems intended to be used for assessing the appropriate level of education that a person will receive or have access to, in the context of or within educational and vocational training institutions at all levels;

d) AI systems intended to be used for monitoring and detecting unauthorized behavior by students during tests in the context of or within educational and vocational training institutions at all levels.
4. Employment, personnel management, and access to self-employment:

a) AI systems intended to be used for recruiting or selecting natural persons, in particular for posting targeted job vacancies, analyzing and filtering applications, and assessing candidates;

b) AI systems intended to be used to make decisions affecting the terms of employment relationships or the promotion or termination of employment-related contractual relationships, to assign tasks based on individual behavior or personal characteristics or attributes, or to monitor and evaluate the performance and behavior of individuals in such relationships.
5. Access to and use of essential private and public services and benefits:

a) AI systems intended to be used by or on behalf of public authorities to assess whether natural persons are eligible for essential public benefits and services, including health services, or to grant, limit, withdraw, or recover such benefits and services;

b) AI systems intended to be used for assessing the creditworthiness of natural persons or for determining their credit score, with the exception of AI systems used to detect financial fraud;

c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;

d) AI systems intended to evaluate and classify emergency calls from natural persons, or to be used for deploying or prioritizing the deployment of emergency services, including police, fire, and ambulance services, as well as systems for triaging patients in need of urgent medical care.
6. Law enforcement, to the extent permitted by applicable Union or national law:

a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices, or agencies in support of or on behalf of law enforcement authorities, to assess the risk of a natural person becoming the victim of criminal offenses;

b) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices, or agencies in support of law enforcement authorities, as lie detectors or similar tools;

c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices, or agencies in support of law enforcement authorities, to assess the reliability of evidence during the investigation or prosecution of criminal offenses;

d) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices, or agencies in support of law enforcement authorities, to assess the likelihood of a natural person offending or reoffending, not solely on the basis of profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or previous criminal behavior of natural persons or groups;

e) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices, or agencies in support of law enforcement authorities, to profile natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 during the detection, investigation, or prosecution of criminal offenses.
7. Migration, asylum, and border control management, to the extent that their use is permitted under applicable Union or national law:

a) AI systems intended to be used by or on behalf of competent public authorities, or by institutions, bodies, offices, or agencies of the Union, as lie detectors or similar tools;

b) AI systems intended to be used by or on behalf of competent public authorities, or by institutions, bodies, offices, or agencies of the Union, to assess risks, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State;

c) AI systems intended to be used by or on behalf of competent public authorities, or by institutions, bodies, offices, or agencies of the Union, to assist competent public authorities in processing applications for asylum, visas, or residence permits and in processing related complaints concerning the eligibility of natural persons applying for a status, including related assessments of the reliability of evidence;

d) AI systems intended to be used by or on behalf of competent public authorities, or by institutions, bodies, offices, or agencies of the Union, in the context of migration, asylum, or border control management, for the purpose of detecting, recognizing, or identifying natural persons, with the exception of the verification of travel documents.
8. Administration of justice and democratic processes:

a) AI systems intended to be used by or on behalf of a judicial body to assist a judicial body in investigating and interpreting facts or the law and in applying the law to a specific set of facts, or to be used in a similar manner in the context of alternative dispute resolution;

b) AI systems intended to be used to influence the outcome of an election or referendum or the voting behavior of natural persons in the exercise of their voting rights in elections or referendums. This does not apply to AI systems whose output natural persons are not directly exposed to, such as tools used to organize, optimize, or structure political campaigns from an administrative or logistical point of view.