It was recently revealed that the Autoriteit Persoonsgegevens ("AP") has called on Minister Wopke Hoekstra to explain an algorithm used within the Ministry of Foreign Affairs in assessing visa applications.

Research conducted by the NRC, in collaboration with the journalistic organization Lighthouse Reports, has revealed that the digitization of the issuance of Schengen visas has been problematic due to an algorithm that may incorrectly assign a negative score to applicants of certain nationalities (1).
Previously, visas were granted by Dutch consulates and embassies abroad. As part of the desired digitization of visa applications, the government has transferred most of the document processing to VFS Global, a multinational company headquartered in Dubai. Travelers must go to a VFS Global office to submit their visa application. VFS Global digitizes the visa application files and then forwards them to so-called decision officers.
According to the NRC, these decision officers initially worked at embassies but moved to an office in The Hague in 2015. They use what is known as Information Supported Decision Making ("IOB"): an algorithm that assigns risk scores to visa applications. The NRC explains it as follows: a high risk score indicates a less promising application, while a low risk score indicates that the application looks good. The higher the score, the more intensively the responsible official must study the application, which may require an interview or additional documents.
The risk score that the IOB algorithm calculates for a visa applicant is based on, among other things, previous applications, personal data, and hits in databases of the military police and the immigration service (IND). Based on factors such as the applicant's nationality, gender, age, and place of application, the IOB algorithm determines whether an application should be studied "briefly" or "intensively". Applications in the latter category are granted far less often than those in the former.
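The mechanism described above — summing weights for matching risk factors and routing the file to "brief" or "intensive" review based on a threshold — can be sketched in a few lines. Note that this is a purely hypothetical illustration: the real IOB model, its factors, weights, and thresholds are not public, and everything below is invented for explanatory purposes.

```python
# Hypothetical sketch of a profiling-style risk score as described in the
# NRC reporting. All factor names, weights, and thresholds are invented;
# the actual IOB model is not public.

def risk_score(applicant: dict, weights: dict) -> int:
    """Sum the weights of every risk factor the applicant matches."""
    return sum(
        weight
        for (field, value), weight in weights.items()
        if applicant.get(field) == value
    )

def triage(score: int, threshold: int = 50) -> str:
    """High scores route the file to 'intensive' review; low ones to 'brief'."""
    return "intensive" if score >= threshold else "brief"

# Invented example weights, keyed on (attribute, value) pairs.
WEIGHTS = {
    ("nationality", "X"): 40,
    ("age_band", "18-30"): 20,
    ("application_site", "Y"): 10,
}

applicant = {"nationality": "X", "age_band": "18-30", "application_site": "Z"}
score = risk_score(applicant, WEIGHTS)   # 40 + 20 = 60
print(score, triage(score))              # → 60 intensive
```

The sketch also makes the discrimination concern concrete: because attributes such as nationality feed directly into the score, entire groups can be routed to the "intensive" pile regardless of the merits of the individual application.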
The NRC reports that in 2022, the then Data Protection Officer of the Ministry of Foreign Affairs advised that the ministry immediately stop profiling visa applicants because it could encourage discrimination. The NRC gives the example of visa applications from young Surinamese men, which automatically end up in the pile of applications to be assessed "intensively".
As a result of the investigation conducted by NRC and Lighthouse Reports, Minister Wopke Hoekstra must appear in person at the AP to defend the use of the IOB software, the NRC reports.
Besides data trading and digital government, algorithms & AI is one of the AP's focus areas for the 2020-2023 period, according to Focus AP 2020-2023 (2). Beyond the benefits that the use of algorithms and AI can offer, the AP points out that their deployment also carries risks and harmful effects.
For example, the AP notes that decisions can reflect bias when self-learning algorithms are trained on datasets that are factually correct but inherently biased. Furthermore, the AP points out that the design choices of algorithm creators may contain (unintended) biases.
There is currently no legislation specifically focused on algorithms and AI. However, specific legislation is in the works at the EU level: the AI Act. The European Commission published its proposal for the AI Act on April 21, 2021, and the European telecom ministers in the Council reached agreement on a common position late last year. Following this, the European Parliament and the Council entered into negotiations on the AI Act. The aim is to reach a final agreement by autumn 2023; as a regulation, the AI Act will then apply directly in the member states after a transition period.
The current text of the AI Act follows a risk-based approach, distinguishing between AI applications that involve (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. Recital 39 of the proposed AI Act states that the accuracy, non-discriminatory nature, and transparency of AI systems used in the context of migration, asylum, and border management are particularly important to safeguard the fundamental rights of the persons concerned, in particular their rights to free movement, non-discrimination, protection of private life, and protection of personal data. In that light, the recital indicates that it is appropriate to classify AI systems as high-risk when they are intended to be used by public authorities in carrying out migration, asylum, and border management tasks, such as assessing visa or asylum applications.
Given the foregoing, the IOB algorithm is likely to qualify as a high-risk AI system under Annex III(7)(d) of the AI Act, within the meaning of Article 6(2) of the AI Act. AI systems that qualify as high-risk under the AI Act are not necessarily prohibited, but they are subject to more stringent requirements and obligations, including a conformity assessment procedure based on internal control and strict transparency requirements.
Under the AI Act, member states will be required to designate a national supervisory authority. In anticipation of this, an algorithm supervisor has already been designated in the Netherlands: since the beginning of this year, despite the lack of specific legislation, the AP has been supervising algorithms from a data protection perspective. For this purpose, a new organizational unit has been created within the AP: the Algorithms Coordination Directorate. According to the AP, this contributes to the ambition of the House of Representatives and the Cabinet to prevent discrimination and arbitrariness and to promote transparency in algorithms that process personal data.
For 2023, the AP has indicated the following activities to be specifically addressed:
Gather, analyze and share (cross-sector and cross-domain) signals and insights about the risks and effects of algorithm use
Strengthen and facilitate existing collaborations in algorithm surveillance
Promote joint and cross-sectoral standards and 'guidance'
In addition, the AP has indicated that, over the course of this year, it will work with other regulators and relevant ministries to determine what else is needed to strengthen algorithm oversight in the Netherlands. In that context, the AP will publish a joint report at the end of this year, providing insight into developments in algorithm supervision and the risks of algorithms.
(1) https://www.nrc.nl/nieuws/2023/04/23/beslisambtenarenblijven-profileren-met-risicoscores-a4162837
(2) https://www.autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/focus_ap_202-2023_groot.pdf
