In a report published last week, the European Union Agency for Fundamental Rights (FRA) warns that organizations are ill-prepared to assess and mitigate fundamental rights risks when using high-risk AI. According to the FRA, this threatens to create a gap between the ambitions of the AI Act and the daily practice of developers and users of AI systems in areas such as asylum, education, employment, policing, and social security. This gap directly affects the way people and AI interact in society: if the human side of that interaction—knowledge, reflection, and critical thinking—is lacking, AI loses its basis for trust.

The FRA notes that understanding of how AI specifically affects fundamental rights is still limited. This is particularly concerning given that the EU has announced the world's largest public investment in AI and the AI Act is explicitly intended to protect fundamental rights. In the practices examined, risk assessments often focus narrowly on privacy and non-discrimination. Other rights, such as access to effective legal remedies, good administration, or children's rights, are rarely taken into account systematically. This points to a narrow understanding of what "human intervention" really means: not just after-the-fact control, but a conscious and well-informed dialogue between humans and systems.
According to the FRA, organizations are struggling with the question of whether their system falls under the AI definition and whether it should be considered high-risk. Respondents use varying interpretations and sometimes apply irrelevant criteria—such as the absence of personal data or the use of "simple" techniques—to avoid classifying a system as high-risk. The watchdog warns that the filter clause in Article 6 of the AI Act, which exempts certain Annex III systems, could lead to underclassification, particularly in sensitive areas. This underscores the need for human assessors to be better supported in these complex assessments.
Because providers themselves must determine whether their systems pose "significant risks," there is a real danger of overly optimistic assessments. Where documentation requirements and registration apply, external scrutiny is often limited, especially in areas such as law enforcement and migration. The FRA emphasizes that this model only works if human expertise outside the development organization, such as independent regulators with knowledge of fundamental rights, is firmly embedded in the process. Human-AI collaboration loses legitimacy if it takes place exclusively within the provider's walls.
In the practices examined—from AI selection tools to risk assessments in social security—technical and privacy impact analyses dominate. Organizations conduct DPIAs, test for bias, and rely on human intervention, but also recognize that this does not automatically lead to results that comply with fundamental rights. Human oversight only proves effective if people have sufficient knowledge of the functioning and limitations of the system they are working with. Without that expertise, the human role fades into a formality.
The FRA calls on the European Commission and Member States to invest in a systematic knowledge base on fundamental rights risks and effective mitigation practices. This requires not only technical innovation, but also the strengthening of human skills and responsibilities in the AI ecosystem. Specifically, the FRA advocates practical guidelines and models for fundamental rights impact assessments (FRIAs) that are more in line with existing initiatives such as the Dutch FRAIA. In addition, the agency calls for sufficient resources and mandates for supervisory authorities so that the human component in supervision and accountability can truly take shape.
The core message of the report is that the protection of fundamental rights does not depend solely on rules or algorithms, but on effective cooperation between humans and AI: critical, expert, and independent oversight that is both technical and normative.
