In this article Janneke Gerards describes the results of the research report 'Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination', which she wrote for the European Commission together with Raphaële Xenidis (Edinburgh University Law School and iCourts, Copenhagen University).* The report was recently published by the European Commission.

In early 2020, the European Commission acknowledged in its White Paper on Artificial Intelligence (AI) that AI poses a number of potential risks. In doing so, the Commission specifically mentioned the risk of unequal treatment based on gender and other forms of discrimination. Because of these risks, the Commission considered it important to investigate whether EU equal treatment law can actually provide an adequate answer to the particular questions raised by the enormous expansion of algorithms into almost every area of social life. That question was therefore the focus of the recently published study 'Algorithmic Discrimination in Europe', which was prepared in cooperation with the European network of legal experts in gender equality and non-discrimination.
Our study first confirmed that algorithms are indeed playing an ever greater role in our lives. Until now, many of the examples have come from the United States, but the European expert reports show that algorithmic applications are increasingly visible in Europe as well. Those applications are found first and foremost in the public sector, for example in support of labor market policies and policies in the fields of social welfare, education, crime control and detection, justice, and the regulation of (classical and social) media. In addition, many examples can be seen in the private sector: in companies' recruitment and selection policies, the organization of platform work, risk assessment in banking and insurance, the matching of supply and demand in retail, and the personalization of online advertisements. All of these applications carry risks of unequal treatment and bias in the deployment of algorithms. Not surprisingly, the European Commission has characterized the risks of algorithmic discrimination as widespread and deep-rooted.
The pervasiveness of these risks makes good EU non-discrimination policies and strong regulation all the more important. At the same time, our study shows that current EU law is sorely lacking when it comes to responding to the existing challenges. This is especially true for machine-learning algorithms, which, after a learning process, are able to independently generate relevant outcomes from large amounts of input data. Below, we discuss four key problems identified in our study, and we conclude with some opportunities and possibilities for addressing them.
The first problem has to do with the fact that EU equal treatment law currently resembles a Swiss cheese: it is full of holes. EU law prohibits discrimination based on gender, race, ethnic origin, disability, religion or belief, sexual orientation and age, but really only in the field of employment. When it comes to subjects such as the supply and use of goods and services, only discrimination based on gender and race is prohibited. In education, the media and online advertising, in principle only discrimination based on race and ethnicity is prohibited.(1) These restrictions are problematic because self-learning algorithms are particularly widely used in the market for goods and services. There, for example, they are used to personalize offers and adjust prices to users' wants and needs, they can help determine the risks when someone applies for a loan, and they play an important role in tailoring online advertisements to specific audiences. Thus, when discrimination based on disability, age, belief or sexual orientation occurs in these fields, EU equal treatment law has little grip on it. Moreover, this means that a proportion of EU citizens currently cannot obtain legal protection against such cases of discrimination.(2)
A second core problem is that the kind of discrimination associated with algorithmic applications does not mesh well with the grounds of discrimination central to EU law and the way the EU Court of Justice deals with them. All sorts of profiling techniques can be combined to search and analyze large amounts of personal and behavioral data, resulting in very sophisticated profiles. Those profiles comprise a multitude of personal characteristics, ranging from a preference for red shoes to the kind of hairstyle someone has. Such profiles and characteristics by no means always overlap with the rather crude and general grounds central to the EU Equal Treatment Directives, such as "gender" or "age".
In practice, this leads to problems. First, algorithmic profiling can result in all kinds of "intersectional" discrimination: discrimination that is located at the intersection of recognized grounds, such as religion and ethnicity, or that arises from a unique combination of those grounds. What is tricky here is that the EU Court of Justice, in its Parris ruling, rejected the notion of intersectionality: unequal treatment must always be concretely traceable to one of the prohibited grounds. Precisely because of the multifaceted nature of algorithmic discrimination, this is often not easily possible.
On top of this, it is very difficult to completely eliminate factors such as gender, ethnic origin, disability, religion, belief, sexual orientation and age as input variables in an algorithm. And even if an algorithm can be made "blind" to these grounds, machine-learning algorithms prove very adept at recognizing "proxies" for them, such as body height, movie preferences or buying behavior. EU equal treatment law can currently only be applied if such a proxy can be clearly associated with one of the named grounds. That will probably not be easy, because here too the ECJ applies high standards that do not translate readily to the context of algorithmic discrimination. Notably, in the Jyske Finans case (2017), the ECJ refused to recognize a person's country of birth as a proxy for their ethnic origin.
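To see why "blinding" an algorithm offers little protection, consider the following minimal sketch (not taken from the report; it uses Python with numpy and scikit-learn on purely synthetic, hypothetical data). Even when the protected ground is withheld from the model, correlated proxy features can allow it to be reconstructed with high accuracy.

```python
# Minimal sketch on synthetic data: a protected attribute (e.g. gender)
# is deliberately excluded from the features, yet innocuous-looking
# proxies that correlate with it let a plain classifier recover it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Protected ground, excluded from training inputs.
protected = rng.integers(0, 2, size=n)

# Hypothetical proxy features that happen to correlate with it,
# e.g. body height and a shopping-behavior score.
height = rng.normal(170 + 8 * protected, 6, size=n)
shopping = rng.normal(0.3 * protected, 1.0, size=n)
X = np.column_stack([height, shopping])

X_train, X_test, y_train, y_test = train_test_split(
    X, protected, random_state=0)

# The classifier reconstructs the "removed" attribute from proxies alone.
clf = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recovered with "
      f"{clf.score(X_test, y_test):.0%} accuracy")
```

In this toy setting the protected attribute is typically recovered well above chance level, which illustrates why simply deleting a sensitive input variable does not make a model's outputs independent of that ground.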
A third problem is that algorithmic discrimination is not easy to fit into the types of discrimination recognized in EU equal treatment law. For example, "direct" discrimination requires a person to prove that unequal treatment is wholly motivated by a protected ground and, following the classic formula, that they have been "treated less favorably than someone else has been or would be treated in a similar situation". This is already difficult to prove because of the proxy problem mentioned above, and it is made even harder by the fact that algorithms are often inscrutable to the people affected by them. As a result, those people can hardly establish how large and decisive a role a ground such as "gender" or "ethnicity" played.
In itself, this problem of proof can be addressed by relying on the concept of "indirect" discrimination. This covers the situation where an apparently neutral rule or practice (such as an algorithm-driven decision) causes a disproportionate disadvantage to persons belonging to a protected category (for example, women or the elderly). Yet given the high evidentiary requirements, such disadvantage is often hard to demonstrate. In addition, the concept of indirect discrimination clearly offers weaker legal protection than that of direct discrimination, since any indirect discrimination can be justified by invoking an objective and legitimate aim. In practice, this assessment often amounts to a cost-benefit analysis, in which the precision of the algorithmic results must be weighed against general notions of social justification.
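By way of illustration, one common statistical way to surface a "disproportionate disadvantage" is a disparate-impact ratio, sketched below in plain Python on hypothetical toy data. The 0.8 threshold in the example is the US "four-fifths" rule of thumb, used here purely for illustration; EU law sets no such fixed numeric threshold, which is part of why the evidentiary question remains hard.

```python
# Minimal sketch (hypothetical data): the disparate-impact ratio is the
# favorable-outcome rate of the protected group divided by that of the
# comparator group. Values well below 1 suggest disadvantage.
def disparate_impact_ratio(outcomes: list[int], group: list[int]) -> float:
    """Selection rate of group 1 divided by selection rate of group 0."""
    def rate(g: int) -> float:
        members = [o for o, grp in zip(outcomes, group) if grp == g]
        return sum(members) / len(members)
    return rate(1) / rate(0)

# Toy example: 1 = favorable decision (e.g. loan approved).
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # 1 = protected category

ratio = disparate_impact_ratio(outcomes, group)
flag = "  (below 0.8: possible disproportionate disadvantage)" if ratio < 0.8 else ""
print(f"disparate impact ratio: {ratio:.2f}{flag}")
```

Here the comparator group is approved 80% of the time and the protected group only 20%, giving a ratio of 0.25. A court would still have to decide whether such a disparity meets the legal standard and whether it is objectively justified.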
A fourth and final key problem is that enforcing equality rights in an algorithmic context is very difficult. It is often hard to ascertain whether discrimination has occurred, because the underlying algorithms are incomprehensible to ordinary people, or because the operation of the algorithm is kept secret on grounds of trade secrets, intellectual property rights or the interest of fighting crime. On top of that, it is not always easy to know whom to hold accountable for the outcomes of an algorithmic decision-making process. This is especially true where humans and machines cooperate closely (with humans, for example, inclined to accept an algorithm's output too readily), or where composite systems involve different algorithms working together. And since this field often involves collaboration between international and foreign actors, equality rights are difficult to enforce effectively. Legal protection against algorithmic discrimination is therefore under great pressure.
While the discussion of these four key problems does not paint an overly positive picture, our study shows that algorithms also offer many opportunities, including for identifying, preventing and mitigating discrimination. For example, the report shows that algorithms can make it easier to detect discriminatory ad copy and can help ensure more equal opportunities for men and women in recruitment. Encouragingly, the study also shows that many good practices already exist in European countries in this regard, ranging from monitoring the discriminatory effects of algorithms to attempts to make the relevant professional communities more diverse.
To take advantage of these opportunities, however, it is necessary to implement sound equal treatment legislation and robust non-discrimination policies. In the study, we propose an integrated framework called "PROTECT," with the main goal of increasing societal awareness of and providing robust legal safeguards against structural inequality and discrimination. This is essential to ensure the fundamental right to equality in the algorithmic society.
Footnotes
(1) See Article 19 of the Treaty on the Functioning of the EU and EU Directives 2000/43/EC, 2000/78/EC, 2004/113/EC and 2006/54/EC.
(2) While this risk can be mitigated at the national level, as EU law sets only minimum requirements, the study shows that a number of countries do not make use of this possibility.
Read the full report here.
*NB: An English-language version of this piece was previously published on the European Futures Blog and the Montaigne Blog.
