It proposes a risk-based approach with four levels of risk for AI systems, as well as the identification of risks specific to general-purpose models:
Minimal risk: All other AI systems can be developed and used without additional legal obligations, in accordance with existing legislation. The vast majority of AI systems currently in use or likely to be used in the EU fall into this category. Providers of these systems may voluntarily choose to apply the requirements for trustworthy AI and adhere to non-binding codes of conduct.
High risk: A limited number of AI systems defined in the proposal that may adversely affect people's safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights) are considered high-risk. The regulation is accompanied by a list of high-risk AI systems, which may be revised to reflect how AI applications develop in practice.
High-risk systems also include safety components of products covered by sectoral Union legislation. By definition, such components are considered high-risk when they are subject to third-party conformity assessment under that sectoral legislation.
Unacceptable risk: A very limited number of particularly harmful uses of AI that contravene EU values because they violate fundamental rights, and that will therefore be banned:
social scoring for public and private purposes;
exploitation of people's vulnerabilities and the use of subliminal techniques;
real-time remote biometric identification in public places by law enforcement agencies, subject to limited exceptions (see below);
biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. Filtering datasets based on biometric data will still be possible in the area of law enforcement;
predictive policing targeting individuals;
emotion recognition in the workplace and educational settings, unless for medical or safety reasons (e.g., monitoring a pilot's fatigue levels);
untargeted scraping of facial images from the internet or CCTV footage to create or expand databases.
Specific transparency risk: Certain AI systems are subject to specific transparency obligations, for example where there is a clear risk of manipulation (e.g., through the use of chatbots): users must be made aware that they are interacting with a machine.
In addition, the AI Act takes into account systemic risks that may arise from general-purpose AI models, including large generative AI models. These models can be used for a wide variety of tasks and are now the basis for many AI systems in the EU. Some of them may pose systemic risks if they are very capable or widely used. For example, powerful models could cause serious accidents or be misused for far-reaching cyberattacks, and many individuals could be affected if a model propagates harmful biases across numerous applications.