Why you, as General Counsel, need to start working on a Fundamental Rights Impact Assessment (FRIA) now
As General Counsel, what ethical considerations do you weigh when deploying AI? The Fundamental Rights Impact Assessment (FRIA) helps identify risks to fundamental rights early. Read why this is essential and how it contributes to compliance, transparency and trust.
June 9, 2025
The pace of technological change we find ourselves in puts considerable strain on the legal playing field. Artificial intelligence, automated decision-making and data-driven processes offer unprecedented opportunities, but also risks. New laws and regulations, such as the European AI Regulation, require not only compliance, but also a rethinking of the way risks to fundamental rights are identified and managed.
Pressure on legal departments is increasing
As General Counsel or Head of Legal Affairs, you face a multitude of forces:
- Technological: AI is being integrated into business processes, from HR to customer service, without the risks to the organization and the individuals involved always being well understood.
- Political: Governments are stepping up oversight. The AI Regulation requires organizations to proactively identify and mitigate the risks that high-risk AI systems pose to fundamental rights.
- Macroeconomic: The pressure to work more efficiently and to keep innovating is growing, while the regulatory framework that must be complied with keeps expanding.
- Ethical: Stakeholders, from regulators to customers and employees, expect organizations not only to act lawfully, but also to show moral leadership.
What is a Fundamental Rights Impact Assessment?
A Fundamental Rights Impact Assessment (abbreviated "FRIA") is a tool for organizations to identify the specific risks to the rights of (groups of) individuals likely to be affected by the use of a high-risk AI system, and to determine what mitigation measures are needed in the event those risks materialize.
The reason is that AI systems may have a significant impact on the fundamental rights of individuals, such as the right to privacy (including the right to data protection, but also the right to autonomy and self-determination), the right to equal treatment, the right to due process and the right to protection of (intellectual) property.
High-risk AI systems are often deployed in sensitive areas such as law enforcement, migration, healthcare and education, where errors, bias or a lack of transparency can lead directly to unfair treatment, exclusion or harm. A FRIA helps organizations identify, assess and mitigate in advance the risks their systems may pose, and ensures that the use of AI systems is in line with the values and fundamental rights legally protected within the EU.
Under the AI Regulation, organizations deploying an AI system may be required to conduct a FRIA in certain cases. This is particularly true if the AI system qualifies as "high-risk" and is used for credit scoring or risk assessments, or if the organization is a public law body or a private entity providing public services. But even outside that context, a FRIA is a powerful legal tool for the early identification and management of ethical risks.
By performing a FRIA, you:
- identify risks early and systematically;
- obtain concrete guidance on how to take mitigation measures;
- increase transparency toward regulators and stakeholders;
- strengthen the trust of customers, employees and society;
- meet your compliance obligations under the AI Regulation.
Why you need to get started on this now
The European AI Regulation has already entered into force and is being phased in. From August 2, 2026, organizations are expected to comply with all of the Regulation's rules. Organizations that prepare now have a clear head start, legally, ethically and strategically. At the same time, public sensitivity around AI is high: discrimination by algorithms, bias in recruitment processes or privacy breaches can result in reputational damage or even legal claims.
By conducting a timely FRIA, you demonstrate that your organization is not only compliant, but also deals consciously and responsibly with AI and the fundamental rights of the individuals involved.
Email laura.poolman@kvdl.com to request the FRIA template developed by Kennedy Van der Laan, based on the AI Regulation and the standards of international (human rights) organizations, and immediately get a practical tool to identify and manage fundamental rights risks when using AI systems.