PONT Data&Privacy
Prohibited uses under the AI Act

The rise of AI offers countless opportunities to advance society. As with many emerging technologies, however, there is a downside: in the wrong hands, AI becomes a powerful new tool for manipulation, exploitation and social control. To prevent this as far as possible, the European Union has intervened in the AI sector by banning certain uses of AI that would grossly violate EU (fundamental) rights. According to the Union, the right to non-discrimination, the right to privacy and the rights of the child in particular would otherwise be under threat.

27 January 2025

To protect these rights, eight prohibited uses of AI have been formulated. These prohibitions, along with possible exceptions, are discussed below. Not all prohibitions are absolute, and it is therefore important to consider whether an intended application of AI falls into an exception category and is therefore still permitted under conditions.

These prohibitions apply not only to parties who place such an AI system on the market (providers), but in principle also to parties who use such a system (deployers). It is therefore advisable to check, before using an AI system, whether it falls into one of these prohibited categories. Indeed, the heaviest sanction in the AI Regulation is reserved for violations of these prohibitions: a fine of up to €35,000,000 or 7% of a company's total worldwide annual turnover, whichever is higher.

1. Manipulative techniques

Under no circumstances may manipulative AI techniques be used to distort the behavior of individuals in such a way that their ability to make independent choices is significantly impaired. This is the case when individuals make choices they would not have made without the AI system, and their behavior under its influence causes, or is reasonably likely to cause, harm to themselves or others.

Examples of such manipulative techniques include audio and video stimuli that cannot be consciously perceived but nevertheless influence behavior, such as displaying an image for less than 50 milliseconds: too brief to be consciously registered, but long enough to affect behavior. The prohibition also covers less intrusive techniques, where individuals are aware that the technique is being used but still cannot make free choices.

Not every technique that influences the behavior of individuals can be classified as manipulative within the meaning of the law. For example, it has been specified that current general advertising practices do not fall under this, although it can be argued that they influence people's free choice to some extent.

To fall under this prohibition, the degree of manipulation must be significantly higher. An example is the use of "virtual reality" in such a way that a person can no longer distinguish between reality and fiction.

2. Exploitation of vulnerabilities

In addition to general manipulative techniques, an AI system may also be designed to exploit specific individuals who are in a vulnerable position. Such vulnerability may stem, for example, from old age, being a minor, a particular disability, extreme poverty, or membership of a particular minority. An AI system that exploits any of these characteristics to distort the behavior of these individuals in a way that causes, or is reasonably likely to cause, harm to themselves or others is prohibited.

However, this prohibition does not prevent certain medical applications of AI, which could actually benefit persons with mental illnesses, for example. Here it is important that the use of AI always complies with applicable medical standards and legislation, and that the explicit consent of the patient or his or her legal representative has been obtained.

3. Social scores

An AI system that assigns individuals a score based on their behavior over a certain period of time and attaches consequences to it is at odds with the EU principle of non-discrimination. This is because such systems may contain inherent biases against certain groups of persons, which may lead to unjust and adverse treatment of individuals or groups. For this reason, an AI-driven social credit system, as currently practiced in certain parts of China, is prohibited. This prohibition applies regardless of whether such an AI system is deployed by a government or a private party.

However, this does not mean that AI may never be used to assess individuals on the basis of their behavior and attach consequences to that assessment. For example, using AI to determine the amount of a benefit is in principle allowed, even though this too may have far-reaching consequences for the individuals involved. The prohibition targets general scores whose effects extend to areas other than the context in which the data was originally collected.

4. Profiling in criminal cases

When an AI system is used to estimate the likelihood that a person will commit a criminal offense, this may violate the presumption of innocence. It is therefore prohibited to deploy AI systems that assess the likelihood that a person will commit a criminal offense based solely on personal characteristics (such as place of birth, number of children or type of car) and profiling (the automated collection and processing of data about a person in order to build a picture of, and predict, their behavior).

However, AI may be used in the assessment of a person in the context of a criminal investigation when there is already a reasonable suspicion, based on objective and verifiable facts and on human assessment, that that person is involved in a criminal offense.

5. Databases for facial recognition

Large-scale, untargeted scraping of facial images from the internet or from surveillance camera footage violates the right to privacy of the individuals concerned. Using an AI system for this purpose in order to create or expand a facial recognition database is therefore prohibited.

6. Emotion recognition

Given that individuals display emotions in different ways, an AI system that recognizes emotions may have a discriminatory effect when consequences are attached to its output. For this reason, the use of AI emotion recognition systems in the workplace or in education is prohibited, unless the system is used exclusively for medical or safety purposes, for example by an occupational physician. Detecting pain or fatigue, however, does not count as emotion recognition: an AI system that detects fatigue in a truck driver is therefore not prohibited.

7. Inferring sensitive information

AI systems that categorize people into groups on the basis of biometric data (such as height, eye color or gait) in order to infer special categories of personal data about those individuals are prohibited. Special categories of personal data include political opinions, trade union membership, religious or philosophical beliefs, race, sex life and sexual orientation.

An exception to this prohibition applies when such a system is used in the field of law enforcement, for filtering or labeling lawfully obtained personal data.

8. Real-time identification

Real-time remote biometric identification systems use live camera footage of public spaces to recognize individuals on the basis of their biometric data. With sufficient camera coverage, such a system could track a person's movements through a city. The use of these systems for law enforcement purposes, such as tracking a suspect, is in principle prohibited unless a strict set of conditions is met.

First, the system may only be used for one of the following purposes:

  • The targeted search for victims of kidnapping, human trafficking or sexual exploitation, and the search for missing persons.

  • The prevention of an imminent threat to a person's life or physical safety.

  • Preventing a terrorist attack.

  • Locating a person suspected of a serious crime, such as terrorism, human trafficking, sexual exploitation of children and child pornography, drug trafficking, arms trafficking, murder, aggravated assault, trafficking in human organs, trafficking in nuclear substances or kidnapping.

If one of the above situations occurs, the police may still only use a real-time identification system after a court order has been issued, except in very urgent situations, in which case judicial review must take place after the fact.
