The line between tools and AI in decision-making

Artificial intelligence (AI) is increasingly being used in administrative decision-making, from permit applications to asylum procedures. The key question is when a digital tool qualifies as AI, and what responsibilities that classification entails in terms of transparency, assessment, and documentation.

18 December 2025

On October 29, 2025, the Groningen District Court heard two asylum cases in which precisely these questions were addressed. The debate focused on an internal tool used by the IND: the Case Matcher, a system that compares files to find previous, similar cases. What started as a technical detail grew during the hearing into a fundamental discussion about the legal status of algorithmic support in government decisions.

Borderline case between a Ctrl+F search function and AI

Attorney Meijering argues that the Case Matcher (which uses text mining techniques such as TF-IDF) should be classified as AI because it automatically ranks documents by relevance and can therefore influence decision-making. In the opinion of the IND's state attorney, it is merely a classic search function with statistical relevance calculation: no autonomy and no adaptability. 
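To make the technique at issue concrete, the sketch below shows how TF-IDF based relevance ranking works in general: documents are turned into weighted word vectors and ranked by similarity to a query file. The corpus, the example texts, and the use of scikit-learn are purely illustrative assumptions; this is not the IND's actual Case Matcher implementation.

```python
# Minimal sketch of TF-IDF document ranking (illustrative only;
# not the IND's Case Matcher). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of earlier case summaries and a new case file.
previous_cases = [
    "applicant cites political persecution after attending protests",
    "applicant fears religious persecution by local authorities",
    "applicant requests family reunification with a recognized refugee",
]
new_case = "applicant reports persecution for taking part in political protests"

# Fit TF-IDF on the corpus and project the new case into the same vector space.
vectorizer = TfidfVectorizer()
case_vectors = vectorizer.fit_transform(previous_cases)
query_vector = vectorizer.transform([new_case])

# Rank earlier cases by cosine similarity to the new file.
scores = cosine_similarity(query_vector, case_vectors).ravel()
for score, text in sorted(zip(scores, previous_cases), reverse=True):
    print(f"{score:.2f}  {text}")
```

The example illustrates the crux of the classification debate: the system computes statistical word-weight similarity and returns a ranking, but it does not learn from new data or decide anything on its own.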

The Case Matcher works with sensitive data from many previous asylum cases. If such a tool is classified as AI, stricter requirements apply: its use must be reported, its operation documented, and risks assessed in advance. If it is considered ordinary search technology, these obligations largely disappear. That is precisely what makes classification so important—and at the same time so complex. 

Transparency in the use of AI

During the hearing, it became apparent that it could not be established with certainty whether the Case Matcher had actually been used in any of the cases discussed. This was partly because the use of the tool is not routinely recorded in case files. As a result, it remains unclear to those involved whether and, if so, how the tool played a role in their case. 

The broader discussion about the nature of the Case Matcher also raised the issue of documentation. Attorney Meijering argued that the available information about how the tool works is limited, making it difficult to assess whether it constitutes artificial intelligence within the meaning of the European AI Regulation.

These findings are in line with a broader development in case law, in which there is an increasing emphasis on transparency: citizens must be able to know how algorithmic tools play a role in decisions that affect them. 

What organizations can learn from this

This issue does not only affect governments. Private organizations are also increasingly using data-driven or algorithmic tools for analysis and decision-making. The most important lessons: 

  1. Legally map out functionality. Describe specifically what a system does and whether it has autonomy or adaptability.
  2. Define usage and permissions. Who is permitted to use the tool, and with what training or authorization?
  3. Perform risk analyses at the appropriate level. Not only for the platform, but also for individual modules that may influence decisions.
  4. Be proactively transparent. Inform users, customers, or citizens in advance when algorithms or AI-like tools are being used.

In conclusion

The court has yet to rule, but the case underscores one thing: the line between AI and a "regular" tool is not only technical, but also legal and ethical in nature. For the responsible use of data-driven technology, it is therefore not only important what a system does, but also how organizations account for it. 

This article was written in collaboration with Melody Jansen.
