What is the difference between a DPIA and a FRIA?

The EU legislature is currently discussing the adoption of the contested Article 29 of the AI Act. The provision outlines a new risk management framework by introducing a Fundamental Rights Impact Assessment (FRIA). Giacomo Delinavelli, legal and policy counsel at Arthur's Legal, explains the differences and intersections between the proposed instrument and the Data Protection Impact Assessment (DPIA).

September 22, 2023

As complex technologies become ever more deeply rooted in our daily lives, we want to understand and, where possible, manage the risks associated with them. To this end, European legislation has created several risk management frameworks that responsible parties can use and, in some cases, are required to use.

In 2016, Article 35 of the General Data Protection Regulation (GDPR) introduced the Data Protection Impact Assessment (DPIA). The purpose of this provision is to give data controllers (Art. 26) a tool to identify and mitigate risks to fundamental rights and freedoms arising from their data processing activities.

In June 2023, the European Parliament, in its amended version of the Artificial Intelligence Act (AI Act), proposed a similar tool called Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems. This tool, established in the new Article 29, aims to identify and assess risks to fundamental rights and freedoms related to the use of AI systems, and to deploy risk mitigation measures.

DPIA vs. FRIA

Although the two instruments have a similar objective and nature, namely risk management, they overlap and differ in at least three respects:

1) Personal data and non-personal data. Under the GDPR, conducting a DPIA is only mandatory when processing personal data, while a FRIA may be required whenever an AI system is classified as "high risk," regardless of the nature of the data processed. Two clarifications are needed here. First, the distinction between personal and non-personal data is not always clear-cut and often depends on context. Second, the EDPB (formerly A29WP) Guidelines (1) suggest that even where conducting a DPIA is not strictly required, it remains good risk management practice.

2) A FRIA should have a broader scope of analysis. Although the GDPR does not specifically state that a DPIA should only consider risks to privacy and personal data protection (Art. 7 and Art. 8 of the EU Charter of Fundamental Rights), it is common practice to limit the analysis to these two fundamental rights. A FRIA, on the other hand, should address broader questions, from non-discrimination to human dignity and environmental impact. Several regulators (3), as well as research institutions (4) and governments (5), have published guidelines and templates on how to conduct a FRIA. These list a wide range of questions, on both risk identification and mitigation measures, that include considerations regarding the data processed and the algorithms used.

3) The parties conducting the risk assessment may differ or coincide. The GDPR imposes the DPIA obligation on the controller, while the AI Act as proposed by the European Parliament imposes the FRIA obligation on the "deployers" of high-risk AI systems. Depending on the degree of control over the scope of a particular technology or process, the two roles (i.e., deployer and controller) may also coincide. In that case, the same party may be required to conduct both a DPIA and a FRIA.

As this brief comparison shows, there are several points of contact between a DPIA and a FRIA. Given the ongoing negotiations on this subject between the European Parliament, the Council and the European Commission (the trilogue), it is worth pointing out certain inconsistencies that may reduce the effectiveness of these instruments.

Performing the same exercise twice.

Given the impact and scale that a high-risk AI system can have on the rights and freedoms of individuals - and on society at large - conducting a DPIA would already be common practice when deploying such systems. The GDPR does not require a DPIA for every data processing activity that may pose risks to the rights and freedoms of individuals, but conducting one is mandatory when the processing is "likely to result in a high risk to the rights and freedoms of natural persons." In particular, Art. 35(3)(a) points to the use of "automated processing, including profiling, [on which decisions are based that] produce legal effects concerning the natural person or similarly significantly affect the natural person." For example, AI systems used for creditworthiness assessment (2) may require a double assessment: first to meet the requirements of Art. 35(3)(a) GDPR, and then the newly proposed requirements of Art. 29 of the AI Act.

A broader scope of analysis does not necessarily mean more knowledge.

Asking additional questions, such as those about the impact on the environment or on vulnerable groups, does not necessarily lead to conclusive or satisfactory answers. On the one hand, established research shows how AI systems can exacerbate societal conflicts or tensions. On the other hand, focusing on the effects of a single AI system within a broader context (of injustice) does not seem an effective approach. The same AI system may have different effects when deployed in different socio-legal contexts, and these differences may in turn affect the legality of a particular system or technology.

Who will read the FRIA and take action?

Despite the obligation to conduct a risk assessment of an impactful system, enforcement is what will ultimately make the difference. Article 29(4) of the AI Act imposes an obligation to involve "to the extent possible representatives of persons or groups of persons likely to be affected by the high-risk AI system," such as "equality bodies, consumer protection agencies, social partners and data protection authorities, for input into the impact assessment." This provision gives civil society organizations a broad role in the "social" monitoring of these systems, which is a novelty compared to conducting a DPIA.

But given the current lack of civil society organizations and authorities with specific powers in this area, the provision seems overly ambitious. Authorities and civil society organizations across Europe vary widely in supervisory readiness, which could result in piecemeal application of this requirement across member states.

All in all, a FRIA can be seen as a risk management tool of the same nature as the DPIA. According to the European Parliament, this tool is intended to expand the scope of inquiry to multiple and varied questions, such as the impact of AI systems on the environment and on vulnerable groups, even when privacy and personal data protection are not directly at stake. If the EU legislature adopts Art. 29, this will present a new challenge and an interesting opportunity for legal professionals.

Resources

  1. https://ec.europa.eu/newsroom/article29/items/611236

  2. In this case, the Norwegian Data Protection Authority (Datatilsynet) noted that the credit scoring systems under consideration did not have an adequate system of internal control. https://gdprhub.eu/index.php?title=Datatilsynet_(Norway)_-_20/02172

  3. https://www.cnil.fr/en/artificial-intelligence-cnil-publishes-set-resources-professionals

  4. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4315823

  5. https://www.government.nl/documents/reports/2021/07/31/impact-assessment-fundamental-rights-and-algorithms

