Deployment of AI in recruitment and selection

On May 21, the European Council approved the AI Regulation, which will enter into force shortly, subject to a number of transitional periods that delay certain obligations. The regulation sets out requirements and frameworks for the development and use of AI systems by, among others, employers. It is intended to leave room for innovation and economic development while protecting important values such as privacy.

July 2, 2024

Employers are increasingly using AI in recruitment and selection. Think of tools that screen candidates' resumes and rank them by suitability, automated video interviews that analyze a candidate's body language or facial expressions, or Unilever's use of Pymetrics: a program that evaluates online job applicants based on an hour of playing games and on demonstrated character traits such as curiosity, adaptability and risk-taking. AI often allows employers to recruit more cost-efficiently.

Regulations on AI

The deployment of such systems may be covered by the new AI Regulation; more on that below. However, the Netherlands already has many rules that govern the deployment of AI by employers. AI systems use enormous amounts of data, process this data in real time and make decisions based on it in the form of recommendations or predictions. In recruitment and selection, this poses risks in terms of privacy and unequal treatment (discrimination).

Anti-discrimination

The algorithm's original (training) data may give rise to discriminatory bias, for example if the AI system uses a data set that is not representative. Consider Amazon's job application tool, which could automatically analyze the resumes of many candidates. It later turned out that the tool discriminated against women when selecting candidates. The tool had learned which candidates to select by analyzing all the resumes Amazon had received since 2004. Most of these came from men, so the tool learned that men should be given preference. Resumes containing the word "women," resumes from graduates of women-only schools, and resumes with typically female traits such as perfectionism, empathy and helpfulness were therefore disadvantaged.

Discrimination based on gender is prohibited in the Netherlands. Indirect discrimination is also prohibited: unequal treatment based on a criterion that is not discriminatory in itself (for example, the criterion of 'continuous employment') but that affects one specific group more than another. In this case, women.

This risk arises not only with discrimination based on gender, but also with discrimination based on migration background and disability. As mentioned above, the issue with AI - unlike other selection methods - is that not only the search criteria can be indirectly discriminatory, but also the data against which the information searched for is compared; consider Amazon's existing, largely male workforce. This could be avoided if the output of AI tools were continuously tested so that the tools could be improved. But this is precisely what is often lacking in practice: on the one hand because it is not a priority, and on the other because privacy laws prohibit the processing of certain data (and thus the auditing of systems). Think of special categories of personal data, such as data on race or on the applicant's health. This means that an employer often does not and cannot know whether a candidate has a migration background, whether remarkably few candidates with a migration background passed the selection, and thus whether the system has indirectly discriminated against this group.

Opportunities for applicants and employees

Violation of anti-discrimination rules when deploying AI can lead to liability claims from candidates and to higher severance payments in dismissal proceedings. Candidates often first turn to the Human Rights Board, which then issues a ruling; with that ruling in hand, candidates can more easily claim damages. Based on its rulings, the Board has developed guidelines for the use of AI in recruitment and selection, against which it tests complaints. Most importantly, employers must be able to explain how the algorithms work and provide any objective justification for indirect discrimination. Selection procedures must be transparent, verifiable and systematic, and the employer - not the provider of the software - is legally responsible for the AI tools. As an employer faced with a complaint to the Board, make sure you not only act in line with anti-discrimination laws but also take the Board's specific guidelines into account.

The Human Rights Board promotes and protects human rights in the Netherlands through education, research, advice, cooperation and the monitoring of equal treatment issues. Digitization offers opportunities for better protection of human rights, such as access to information and care, but it also brings risks, such as discrimination by automated processes and decisions by algorithms that cannot be traced. This is why the Board launched its Digitization and Human Rights strategic program in 2020.

Privacy

Then privacy. The AVG (the GDPR) and the UAVG (the Dutch GDPR Implementation Act) prohibit the processing of special categories of personal data, such as data on racial or ethnic origin, health data and biometric data, unless a legal exception applies. Employers must take this into account when implementing AI tools. There is also a wide range of rules on the use of monitoring of job applicants. Monitoring is broader than AI and refers to the deployment of digital systems that track applicants' behavior and performance. European human rights legislation, the Dutch constitution, and the AVG and UAVG contain rules on the deployment of monitoring, as does the Works Councils Act, which requires the works council's consent for a proposed decision to introduce a personnel tracking system. In practice, we see courts scrutinizing the use of monitoring ever more strictly when its results are used to support a dismissal case; in the event of unlawful monitoring, fair compensation is awarded with some regularity, or the evidence is even set aside. It is to be expected that courts will take the same critical attitude toward decisions based on AI.

Equal Opportunities in Recruitment and Selection Monitoring Act

On the AI front, it is interesting that the Equal Opportunities in Recruitment and Selection Monitoring Act was rejected (by a slim majority) in the Senate in March 2024. This bill contained specific obligations for employers using AI in recruitment and selection, such as the introduction of an anti-discrimination policy, an obligation to verify that AI systems do not cause discrimination, and a documentation requirement. The Labor Inspectorate would have become responsible for enforcement, meaning that violations could have led not only to liability and high severance payments but also to fines for employers. Given the current government, such a law is not to be expected anytime soon.

AI Regulation

We do find similar obligations in the AI Regulation, which applies to the deployment of high-risk AI systems. AI systems intended for the recruitment or selection of candidates - in particular for advertising job vacancies, screening or filtering applications, and evaluating candidates during interviews or tests - qualify as high-risk AI systems. It would take us too far afield to cover all of the regulation's obligations for employers, but the following rules are important to know:

(i) employers must follow the AI software's instructions for use, which must provide for human oversight;

(ii) the input data must be relevant to the intended purpose;

(iii) in the event of an increased risk, the employer must report this to the AP (the Dutch Data Protection Authority);

(iv) there is an obligation to conduct a DPIA (data protection impact assessment); and

(v) employees must be informed.


Failure to comply with the regulation's obligations can be costly for employers: a maximum fine of EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.

Conclusion

Although the Netherlands already has relevant legislation, the AI Regulation seems a welcome addition for the protection of workers. The common denominator of its obligations is increased transparency about the use and consequences of AI systems - something employers can already start working on.
