Artificial intelligence (AI) is increasingly being used in administrative decision-making, from permit applications to asylum procedures. This raises two questions: when does a digital tool count as AI, and what responsibilities does that classification entail in terms of transparency, assessment, and documentation?

On October 29, 2025, the Groningen District Court heard two asylum cases that raised precisely these questions. The debate centered on an internal tool used by the Immigration and Naturalisation Service (IND): the Case Matcher, a system that compares files to find earlier, similar cases. What began as a technical detail grew during the hearing into a fundamental discussion about the legal status of algorithmic support in government decision-making.
Attorney Meijering argued that the Case Matcher (which uses text-mining techniques such as TF-IDF, term frequency-inverse document frequency) should be classified as AI because it automatically ranks documents by relevance and can therefore influence decision-making. According to the state attorney representing the IND, it is merely a classic search function with statistical relevance calculation: no autonomy, no adaptability.
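The IND has not published technical details of the Case Matcher, but a minimal sketch of TF-IDF ranking illustrates the kind of statistical relevance calculation both sides were describing. Everything below (the corpus, the query, the function names) is invented for illustration and says nothing about the actual implementation:

```python
# Minimal sketch of TF-IDF relevance ranking, the text-mining technique
# attributed to the Case Matcher. The case texts are invented; the IND's
# actual implementation is not public.
import math
from collections import Counter

def build_idf(docs):
    """Inverse document frequency per term, computed over a tokenized corpus."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return {term: math.log(n / count) for term, count in df.items()}

def tfidf(doc, idf):
    """Sparse TF-IDF vector: term frequency weighted by corpus IDF."""
    tf = Counter(doc)
    return {t: (c / len(doc)) * idf.get(t, 0.0) for t, c in tf.items()}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-corpus of prior case summaries (tokenized).
corpus = [
    "asylum appeal rejected country of origin safe".split(),
    "permit application approved after documentation review".split(),
    "asylum granted credible fear country of origin unsafe".split(),
]
idf = build_idf(corpus)
query = tfidf("asylum appeal country of origin".split(), idf)

# Rank prior cases by similarity to the new case.
ranking = sorted(
    ((cosine(query, tfidf(doc, idf)), " ".join(doc)) for doc in corpus),
    reverse=True,
)
for score, text in ranking:
    print(f"{score:.3f}  {text}")
```

Note what the sketch does not do: it learns nothing from use, and the same inputs always produce the same ranking. Whether such a deterministic statistical calculation exhibits the autonomy and adaptiveness that the AI Regulation's definition of an AI system turns on is precisely the question the court must answer.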
The Case Matcher works with sensitive data from many previous asylum cases. If such a tool is classified as AI, stricter requirements apply: its use must be reported, its operation documented, and its risks assessed in advance. If it counts as ordinary search technology, those obligations largely fall away. That is precisely what makes the classification so important, and at the same time so complex.
During the hearing, it emerged that it could not be established with certainty whether the Case Matcher had actually been used in either of the cases at hand, partly because use of the tool is not routinely recorded in case files. As a result, it remains unclear to those involved whether, and if so how, the tool played a role in their case.
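That gap is, in principle, straightforward to close. As a hedged sketch (the field names, file path, and record_tool_use helper are hypothetical, not a description of any IND system), routine recording could be as simple as an append-only log entry written whenever an analytical tool is invoked on a case:

```python
# Hypothetical sketch of per-case audit logging for algorithmic tools.
# All names and fields are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("audit") / "tool_usage.jsonl"  # append-only JSON Lines file

def record_tool_use(case_id: str, tool: str, version: str, purpose: str) -> None:
    """Append one immutable record each time a tool touches a case file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "tool": tool,
        "tool_version": version,
        "purpose": purpose,
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log that a matching tool ranked prior cases for case "2025-001".
record_tool_use("2025-001", "case-matcher", "1.4.2", "rank similar prior cases")
```

Because the log is append-only and keyed by case identifier, anyone reviewing a decision later can reconstruct whether, when, and in which version a tool touched the file.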
The broader discussion about the nature of the Case Matcher also raised the issue of documentation. Attorney Meijering argued that the available information about how the tool works is so limited that it is difficult to assess whether it constitutes artificial intelligence within the meaning of the European AI Regulation (the AI Act).
These findings fit a broader development in case law that places increasing emphasis on transparency: citizens must be able to know whether and how algorithmic tools play a role in decisions that affect them.
This issue does not only affect governments. Private organizations, too, increasingly use data-driven or algorithmic tools for analysis and decision-making. The most important lessons:

- Assess in advance whether a tool falls within the AI Regulation's definition of AI; that classification determines which obligations apply.
- Document how the tool works, so that the assessment can be made and independently reviewed.
- Record when and how the tool is used in individual cases, so that those affected can trace what role it played.
The court has yet to rule, but the case already underscores one thing: the line between AI and a "regular" tool is not only technical but also legal and ethical. For the responsible use of data-driven technology, what matters is therefore not only what a system does, but also how organizations account for it.
This article was written in collaboration with Melody Jansen.
