Following recent critical reporting by Follow the Money on the use of the Risk Assessment Instrument Violence (RTI-G), the police will immediately stop using this algorithm. Marc Schuilenburg, professor of Digital Surveillance at Erasmus School of Law, was granted access to the documents substantiating the algorithm and expressed his concerns about this investigative tool to Follow the Money. Not only has the police never evaluated its effectiveness, but the risks associated with its use are also very high. Ethical and legal objections are further major points of criticism, according to Schuilenburg.

Since 2015, the Dutch police have been using an algorithm to predict at the individual level whether someone will use violence in the future. Until mid-2017, individuals with an Antillean, Moroccan or Somali background automatically received a higher score than individuals of other origins, according to research by Follow the Money. Although the police say they have since removed this setting from the algorithm, such an algorithm can only lead to problems, Schuilenburg says.
The professor is highly critical of RTI-G: "There are enormous risks involved." First, it is already very difficult to arrive at reliable crime predictions at the area level, so at the individual level the chance of success is even lower, Schuilenburg argues: "This is many times more complicated. An infinite number of factors play a role. Every person is different."
Marking someone with a risk profile gives the police all kinds of sweeping investigative powers, such as preventively frisking that person or searching their car. If the police want to restrict a citizen's freedom this severely, it must be properly justified. According to Schuilenburg, the substantiation in the algorithm's accountability document is inadequate: "The input, the processing and the output: it all falls short." Proper information about the data used, the selection of risk factors, their weighting, how the model was validated and how bias was controlled for is missing. The document also contains no interim evaluation: "The risk factors in this instrument were devised ten years ago, but they were never looked at again. It really can't be done this way."
Schuilenburg examines why the government uses these kinds of predictive algorithms and how they work: "In politics and society, it's about preventing potential risks. There used to be suspicion first and then surveillance. Now there is surveillance first and then suspicion."
Instruments like this police algorithm can arise, according to Schuilenburg, because safety, effectiveness and efficiency outweigh transparency, non-discrimination and algorithmic accountability in our society. The result is something like RTI-G: "You shouldn't want this. Not just ethically, but legally. This risk model falls in every respect under the definition of 'high-risk' in the European Union's forthcoming Artificial Intelligence Regulation. It clashes with the requirements set out there."
Schuilenburg is pleased that the police will stop using RTI-G. There are doubts about whether the tool is useful, and it is not clear whether officers were still using it at all, a police spokeswoman said in response to Follow the Money's reporting.
