The Dutch government is doing too little to protect people in the Netherlands from ethnic profiling by algorithms. Various practical examples show that this is a "structural and government-wide problem". It is high time for the government to take effective measures to combat ethnic profiling.

That is the conclusion of Amnesty International(1) in the report "Ethnic profiling is a government-wide problem - Dutch government must protect citizens from discriminatory checks". The report was published today, on the International Day for the Elimination of Racial Discrimination.
For years, Amnesty International has campaigned against ethnic profiling in the Netherlands. While various government organizations have taken steps to protect citizens from ethnic profiling, those steps are too noncommittal and make no real difference. "Citizens in the Netherlands are not effectively protected against government discrimination," says Gwen van Eijk, a researcher at Amnesty International.
In the report, the human rights organization writes that although the government is taking measures against ethnic profiling, the "real risks of ethnic profiling" remain. The "discriminatory practices" at the police, the Royal Netherlands Marechaussee (KMar), the Tax and Customs Administration (Belastingdienst) and the Education Executive Agency (DUO) are cited as examples.
In all these cases, Dutch nationals with a migration background were subjected to extra checks compared with Dutch nationals without one. According to Amnesty, this is a clear signal that the government bases checks on risk profiles and risk assessments, whether or not driven by algorithms.
Citizens have a right to transparency so they can understand how government decisions about them are made, Amnesty argues, especially when algorithms are involved. "The use of algorithms, especially self-learning algorithms and black-box systems, is at odds with the principle of transparent government action," the human rights organization wrote in its report 'Xenophobic Machines'(2).
Amnesty still stands firmly behind that position. "Government organizations should be aware that there is a risk of 'automation bias' when using algorithms to support decisions. That is, officials may rely too heavily on the outcome of the algorithm, in this case risk scores and categorizations. This is fundamentally at odds with the principle of due care," the organization wrote in the report.
Amnesty International believes that all government organizations must account for how they monitor citizens and what they do to prevent ethnic profiling. An important condition for this is that regulators such as the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the National Ombudsman and the Netherlands Institute for Human Rights have sufficient resources and capacity to investigate and take enforcement action.
However, oversight by these bodies falls short, Amnesty believes. To begin with, the regulators' rulings are not binding, which undermines the effectiveness of external monitoring. As a result, citizens' legal protection currently falls short: they can file a complaint about discrimination or ethnic profiling, but they often come away empty-handed. Amnesty therefore wants supervisory bodies to be given adequate, binding powers and complaint handling to be significantly improved.
Amnesty is positive about the algorithm register, which was launched in late 2022 so that the government can be open about which government organizations use algorithms and what data those algorithms use. Still, the human rights organization notes that the algorithm register has three shortcomings: registration is not mandatory, the information in the register is incomplete and insufficient, and the cabinet wants to include an exception for security services engaged in law enforcement, investigation, defense and intelligence gathering.
"Make the algorithm register mandatory for algorithms used for selection decisions and risk profiling in crime and fraud prevention, and in support of migration and counterterrorism policy," Amnesty advises. Finally, the organization wants self-learning and black-box algorithms to be banned for selection decisions and risk profiling.
(1) https://www.amnesty.nl/actueel/het-kabinet-moet-burgers-beschermen-tegen-etnisch-profileren
(2) Xenophobic machines: Discrimination through unregulated use of algorithms in Dutch benefits scandal
