The July 2025 parliamentary letter on the reflection paper and the internet consultation 'Algorithmic Decision-Making and the Awb' shows that more specific standards regulating algorithmic decision-making are still lacking for the time being; new research will follow. Given the call for more clarity and safeguards, this blog explores whether, in the meantime, the Bill on Strengthening the Safeguarding Function of the Awb already offers valuable intermediate steps towards better legal protection.
Government agencies increasingly use algorithms in making decisions. This raises questions about the implications: how does this use affect government transparency, and how is the protection of citizens' rights guaranteed? Because algorithmic decision-making can put both under pressure, the legislator has for some time been investigating whether regulation is necessary and, if so, what form it should take. The latest step is the parliamentary letter (Kamerbrief) of July 2025, which presents the results of the internet consultation on the reflection paper 'Algorithmic Decision-Making and the General Administrative Law Act (Awb)'. The letter shows that the government is starting trajectories aimed at improving transparency and legal protection in the use of algorithms, but does not yet propose specific rules; more research is needed first.
Practitioners, however, have a strong need for further guidance and regulation. According to practitioners from both interest groups and government bodies, the current rules offer insufficient guarantees and clarity. In that context, the question arises, in our opinion, to what extent the pending Bill on Strengthening the Safeguarding Function of the Awb can already provide additional protection. While this proposal serves to strengthen the legal position of citizens vis-à-vis the government in general, it can also be a valuable intermediate step towards better legal protection against algorithmic decision-making.
This blog explores that potential. First, we briefly outline the context of algorithmic decision-making. We then discuss the need in practice for further regulation, as evidenced by the aforementioned reflection paper and the results of the internet consultation on it. This is followed by an analysis of the proposed amendments: to what extent does the Bill on Strengthening the Safeguarding Function of the Awb, once enacted, improve protection against the risks of algorithmic decision-making?
A general definition of algorithms and algorithmic decision-making is as yet lacking in the Awb. By algorithmic decision-making we mean Awb decision-making that is fully or partially based on the outcome of algorithms. An algorithm is a set of rules and instructions that, when solving a problem or answering a question, processes input into output. Algorithms vary in form: from self-learning (case-based) to fixed, rule-based models.
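To make that distinction concrete, the sketch below shows what a fixed, rule-based model looks like in code: the rule is spelled out in advance, so the same input always yields the same, traceable output. All names, criteria and thresholds here are our own hypothetical illustration, not actual benefit rules.

```python
# Minimal sketch of a fixed, rule-based decision algorithm.
# Criteria and thresholds are hypothetical, purely for illustration;
# real benefit criteria follow from statute, not from this example.

def benefit_decision(monthly_income: float, household_size: int) -> str:
    """Process input (applicant data) into output (a draft decision)."""
    threshold = 1500.0 + 250.0 * (household_size - 1)  # assumed threshold
    return "grant" if monthly_income <= threshold else "reject"

print(benefit_decision(monthly_income=1200.0, household_size=2))  # -> grant
```

A self-learning, case-based model, by contrast, derives its decision rule from earlier cases, so the rule is not written down anywhere in this explicit form.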
Algorithms are used by the government, for example, in assessing benefit applications, tax assessments and licensing. This has significant advantages, such as speeding up processes, reducing administrative burdens, and promoting consistency and accuracy. At the same time, the application of algorithmic decision-making carries potential risks, especially when there is no (meaningful) human intervention and decision-making is thus fully automated. Below we highlight some aspects:
Lack of explainability/clarity: Complex algorithms carry the risk that it is difficult to ascertain why the algorithm arrives at a particular output (they then form a 'black box'; see the sketch after this list). This makes identifying errors or biases difficult for citizens, administrative bodies and judges alike.
Evidential position of citizens: The lack of insight and the resulting untraceability of decisions, together with the supposed neutrality of algorithms, can lead to the assumption that an algorithmic decision is a correct decision (automation bias). This makes it difficult for citizens to successfully challenge decisions.
Human rights concerns: Algorithms can infringe on the right to equal treatment, the right to privacy and other fundamental rights, as shown, inter alia, by the SyRI ruling (ECLI:NL:RBDHA:2020:865). These risks carry through into decisions based on algorithms.
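To illustrate the explainability point, the hypothetical sketch below (our own, with assumed thresholds and weights) contrasts a rule-based system, which can cite the exact rule it applied, with an opaque scoring model, which yields only a number and no intrinsic reasons:

```python
# Illustrative contrast between an explainable rule and an opaque score.
# Thresholds and weights are assumed values for demonstration only.

def rule_based(monthly_income: float) -> tuple[str, str]:
    # The applied rule can be quoted verbatim in the statement of reasons.
    if monthly_income <= 1500.0:
        return "grant", "monthly income <= 1500 (assumed threshold)"
    return "reject", "monthly income > 1500 (assumed threshold)"

def opaque_score(features: list[float]) -> float:
    # Stand-in for a trained model: the weights encode no human-readable rule.
    weights = [0.4, -0.2, 0.1]  # assumed learned weights
    return sum(w * x for w, x in zip(weights, features))

decision, reason = rule_based(1200.0)  # a decision plus a citable reason
score = opaque_score([1.0, 0.5, 2.0])  # a score only; the 'why' stays hidden
print(decision, reason, round(score, 2))
```

The contrast also shows why the evidential position differs: against the first system a citizen can dispute a concrete rule, against the second only an unexplained number.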
Given these risks, much attention is being paid to safeguarding the interests of citizens and businesses in algorithmic decision-making. In the 2024 reflection paper, the government identifies several safeguards offered by the current legal framework, in particular:
The GDPR (AVG): Insofar as personal data are involved in decision-making, the GDPR prohibits fully automated decision-making (Art. 22 GDPR) and the government must (be able to) provide access to the personal data (Art. 15 GDPR). However, transparency about personal data is not the same as transparency about the decision. Moreover, the GDPR does not apply to all administrative decisions (for example, not to decisions concerning companies, or where the law provides that fully automated decision-making is allowed and also provides safeguards for it).
The AI Regulation: Governments using high-risk AI systems are required by the AI Regulation to, among other things, ensure human oversight of the algorithm and conduct a fundamental rights impact assessment (Arts. 14 and 27). These rules, too, do not cover all administrative decisions, because not all AI systems are high-risk AI systems. For example, a system that decides whether a natural person is entitled to a social assistance benefit is a high-risk system, while a system that decides whether a company is entitled to a license or a subsidy is not necessarily one. In the latter case, less stringent rules apply.
Fundamental rights and general principles of good governance: Other rights and principles, such as the right to equal treatment, the right to protection of private life, and the principles of due care, justification and proportionality, also regulate algorithmic decision-making by governments. Although they do not specifically address algorithmic decision-making, judges use them as a necessary safety net to achieve customization (tailored treatment) and human intervention. Notable rulings in which this happens are the AERIUS ruling, on due care and transparency, and the SyRI ruling, on the right to privacy; in implementation practice, the Impact Assessment Human Rights and Algorithms plays a comparable role.
The current legal framework is thus fragmented and incomplete. The internet consultation 'Algorithmic Decision-Making and the Awb' provided more insight into practice and into the need for additional safeguards. An analysis of the results was presented in the July 2025 parliamentary letter. According to the Cabinet, four main needs emerge from the internet consultation: i) more transparency about the use and role of algorithms in individual decision-making, ii) transparency at the system level, for example through registers, iii) more expertise in the judiciary, for example through a specialized magistrate judge, and iv) concrete, enforceable standards for government agencies (instead of open standards). Notably, not only litigants have these needs, but also government organizations, as is evident from need iv). In response, the Cabinet has announced further research into how Awb standards can be adjusted for algorithmic decision-making, with a focus on greater transparency and better reasons for decisions.
All in all, specific standards for algorithmic decision-making in the Awb still seem a long way off. Moreover, it is unclear whether those standards will also address elements that, in our view, are essential for effective legal protection against algorithmic decision-making but that the Cabinet mentions less explicitly in the July 2025 parliamentary letter, such as the right to customization, remedies and comprehensible reasons. Because the Bill on Strengthening the Safeguarding Function of the Awb builds on rules that apply to all administrative decisions, it could possibly play a role here. In the remainder of this contribution, we therefore consider whether and, if so, how this bill offers reinforcement on these points, discussing the provisions that, in our opinion, can contribute to this to a greater or lesser extent.
The proposed Article 2:4a Awb contains a new principle: the service principle. This principle guarantees citizen-oriented treatment and requires administrative bodies to put the citizen first: they must organize their policy and implementation in such a way that citizens' interests are optimally served. According to the explanatory memorandum to the consultation version, non-compliance can result in careless preparation (Art. 3:2 Awb) or an incomplete weighing of interests (Art. 3:4(1) Awb).
Although the exact scope of this principle is still unclear due to its abstract wording, it at least seems to have potential as a 'safety net' principle, which can be invoked when no more specific provision offers relief. In the context of algorithmic decision-making, for example, it is conceivable that the principle could require abandoning this method of decision-making if, in a specific case, it does not do justice to the situation of the citizen concerned.
The proposed amendment to Section 3:4(2) of the Awb means that the application of formal laws can be tested against the principle of proportionality. In situations where a decision in line with a bound power ('mechanical rule application') would conflict with the principle of proportionality, administrative bodies must therefore make a different decision. This is a significant change from the current situation: an administrative body must consider whether the outcome of applying the rule is disproportionate to the objective intended by the legislator, and depart from it if the legislator's intention can be realized with a different decision.
It is noteworthy that the explanatory memorandum to the bill also explicitly addresses the consequences of Art. 3:4(2) Awb for automated decisions. Automated decision-making remains allowed, but if the administrative body knows of special circumstances that may make the outcome disproportionate, these must be taken into account under the interplay of Article 3:4(1) and (2). This also applies at the objection stage. Interests (of third parties) that are not protected by the applicable framework are not taken into account.
The bill thus underscores, following the Council of State's unsolicited 2018 advisory opinion on digitization and constitutional relations, the need for customization in objection proceedings, including where algorithms are deployed. In the same vein, the newly proposed Art. 4:84 Awb also codifies case law entailing that disproportionate policy rules must be set aside.
The obligation for administrative bodies to attach contact information to decisions (the proposed Art. 3:45 Awb) is also linked to automated decision-making in the explanatory memorandum: in the case of fully automated decision-making, a citizen or organization must be able to contact a knowledgeable official with access to the file.
This is a valuable step toward human intervention. While this intervention does not take place beforehand, it at least creates a point of contact where a citizen or organization can ask questions about the operation or reasoning behind the decision. This supports the right to an explanation and can potentially lead to faster correction of errors. For administrative bodies, it means they will have to designate specific employees with knowledge of the algorithms used and of the citizen's situation, which also increases internal awareness and expertise.
The proposed Section 3:45b Awb means that a person wishing to object to a decision should already have access to the relevant documents when the objection is drafted.
It is not yet entirely clear which documents must be provided under this provision and to what extent this supplements the statement of reasons for a decision. If the documents relate to the algorithm used, this provision could potentially open up part of the 'black box'. Although it remains to be seen to what extent such documents would actually enable citizens to fathom the algorithm, access to them ensures a fairer process and could enable citizens to identify errors or develop a targeted legal strategy.
The proposed Article 3:47 Awb provides that the reasons given for decisions must be comprehensible. According to the explanatory memorandum to the bill, a comprehensible statement of reasons means that: (1) it is written in comprehensible language, and (2) the reader is able to understand how the administrative body arrived at the decision and what the decision means.
The question arises whether a decision made with the use of an algorithm must also provide information about how that algorithm works. In our view, this is well defensible: otherwise the statement of reasons is not actually comprehensible, but rather inscrutable. This provision therefore has the potential to improve the comprehensibility of algorithms. It may also strengthen the evidentiary position of litigants, who will then be better able to assess (or have assessed) the choices, data and assumptions in algorithmic decision-making. Furthermore, the amendment of Section 3:47 Awb may contribute to a better implementation practice: if the administrative body cannot explain the decision intelligibly, the decision is unlawful.
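As a thought experiment (our own hypothetical sketch, not something the bill prescribes), a comprehensible algorithmic statement of reasons could record, alongside the outcome, the inputs used and the rule applied:

```python
# Hypothetical sketch of a machine-generated yet comprehensible statement of
# reasons: the decision records which inputs and which rule produced it.

def motivated_decision(monthly_income: float) -> dict:
    threshold = 1500.0  # assumed threshold, for illustration only
    outcome = "grant" if monthly_income <= threshold else "reject"
    return {
        "outcome": outcome,
        "inputs": {"monthly_income": monthly_income},
        "rule_applied": f"grant if monthly income <= {threshold}",
    }

print(motivated_decision(1200.0))
```

A statement of reasons along these lines would let the reader retrace both the data and the rule behind the outcome, which is precisely what comprehensibility requires.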
In our opinion, it is a missed opportunity that algorithmic decision-making is, for the time being, not addressed in the explanatory materials to this provision.
The proposed Articles 3:51-3:53 Awb give administrative bodies more room to rectify manifest inaccuracies in decisions, upon request or on their own initiative. It follows from the explanatory memorandum to the bill that the legislator expects this amendment to result in faster rectification and fewer appeals against decisions.
In the context of algorithmic decision-making, we believe these provisions can be of great added value, for example where errors arise from technical malfunctions, bugs or incorrect data links. The provisions thus contribute to a 'learning' government with room for correction.
The obligation to hear interested parties (laid down in Arts. 4:7 and 4:8 Awb) is a guarantee for the careful preparation of decisions. Many financial decisions in tax and social security law, in which algorithms are used, were exempted from this duty (Art. 4:12 Awb) because hearing was considered too burdensome for administrative bodies. The bill proposes to remove this exception and to provide that a hearing must be held when a decision "is not foreseen by the addressee and of which the administrative body can reasonably suspect that the decision has a significant impact on his direct spending power." This will help prevent excesses.
If this provision leads to hearings actually taking place more often, it can contribute to customization where necessary, for example by bringing to light circumstances that justify deviating from or adjusting the algorithm. The main question will be how this provision works out in implementation practice.
With the proposed Section 8.2.2b Awb, the bill strengthens the power of the administrative judge to allow interested parties to supplement their grounds of appeal during proceedings. The judge can thus give them the opportunity to strengthen their procedural position, including by providing evidentiary information.
This "citizen loop" creates room for correction of an unequal litigation position: if the citizen does not understand and/or insufficiently refutes the algorithm outcome, the judge can indicate what is required. The condition is that the judge has sufficient expertise to recognize and interpret algorithm use. In our opinion, it is advisable for the judge to engage an expert for this purpose where necessary (via Section 8:47 of the Awb). Another good option for this is the judge commissioner mentioned in the Internet consultation by the Council for the Judiciary.
In our opinion, enactment of the Bill on Strengthening the Safeguarding Function of the Awb will strengthen the legal position of persons seeking justice when confronted with algorithmic decision-making in several respects. For several articles, the legislator already makes a link between the proposal and algorithmic decision-making in the explanatory memorandum to the bill itself. It is striking, however, that this is not yet the case for precisely the obligation to provide comprehensible reasons, while in our opinion there is much to be gained there as well. Precisely because it concerns a general rule, it would be valuable to explore now how this standard relates to algorithmic decision-making; that this has not been done is a missed opportunity. The precise effect of some of the proposed amendments, such as the new obligation to hear, is also still unclear, which makes it difficult to estimate their added value for the legal position of persons seeking justice in algorithmic decision-making.
Because of the general nature of the Awb, some elements that are especially problematic in algorithmic decision-making remain out of the picture for the time being: general registration and transparency obligations for algorithmic decision-making (and with them a definition of what constitutes such decision-making) remain a matter for the future. In certain cases, other laws, such as the AI Regulation, fortunately (partially) compensate for this. Still, further elaboration is important, as is evident not only from the case law cited but also from the call for more specific standards. With the bill, the Awb therefore logically offers not a final destination, but a legally solid intermediate stop on the way to responsible and verifiable use of algorithms in administrative law.