The use of algorithms within government offers opportunities to create and implement policies based on more accurate analysis and more sophisticated tools. For a variety of reasons, however, Artificial Intelligence (AI) is now in the dock. The Court of Audit's recent report Aandacht voor Algoritmes (Attention to Algorithms) offers starting points for positioning the Netherlands as a progressive AI country, with a realistic outlook and responsible experimentation, say Bram Klievink and Frank Zwetsloot.
When it comes to algorithms, positive terms are hard to find. "Weapons of Math Destruction" or a proposed "notification requirement for algorithms" are examples of the often alarmist jargon used to cast suspicion on algorithms, behind which in fact lies a wide variety of applications. Algorithms are said to make inscrutable automated decisions, influence elections and prop up totalitarian regimes.
Behind these claims lie legitimate concerns, but also mystifications, which the debate generalizes across a wide range of applications. Much attention is paid to the complexity of algorithms, while the complexity of human and administrative behavior is unnecessarily simplified. This stands in the way of a more realistic exploration of the opportunities and limits of algorithm use within government. The Court of Audit has now published a report that clearly maps the current state of affairs. It found no algorithms within central government over which the government has lost control. The applications it examined are relatively simple, often well thought out, and backed by various safeguards. Perhaps this is precisely the moment to take a more constructive look at the opportunities for AI within government, because such an outlook also opens the way to more far-reaching and complex applications.
In the United States, there is considerable room for private parties, especially homegrown Big Tech, in the development and deployment of algorithms. In China, by contrast, development and deployment are much more government-driven. In Europe, the attitude toward the use of AI is somewhat more defensive, with much attention to the human side. This is reflected in legislation such as the GDPR (in Dutch: AVG) and in (ethical) guidelines for the responsible use of AI. This provides a good basis for exploring the positive opportunities AI has to offer while taking the risks into account.
In doing so, be realistic about the limitations of the data, about what an algorithm is capable of, about the policy context, and about how trade-offs around proportionality and legitimacy are made. An algorithm registry such as the one Amsterdam, together with Helsinki, has established could provide a basis for stripping algorithms of their mystique. After all, such a registry challenges the municipality to explain, for each algorithm, what the benefits are for citizen or government and how citizens are protected from arbitrariness. It allows the government to show the diversity of applications and the choices and safeguards behind them. This is a path toward a more rational relationship with a tool that the government considers a key technology.
The Court of Audit examined dozens of algorithms and concluded that those currently deployed are relatively simple and easily explained. Some do fall under the heading of AI, but none are variants that have become uncontrollable through their intelligence and complexity. This is not to say that all stakeholders can always fathom what an algorithm does, and how. An algorithms assessment framework can further limit the risks for citizens, partly because it forces a discussion between different interests and areas of expertise about the use of algorithms. A register can help establish unambiguous terminology. It provides a platform for giving citizens a transparent and understandable overview of the consequences, and can also give further substance to the concept of Human-Centred AI that the EU so strongly advocates. Furthermore, technical developments in the area of explainability can provide insight into how an outcome is arrived at.
To prevent all these good principles from working only in theory, it is important to operationalize them as well. This requires room to experiment with AI. Of course, this must be done responsibly, but only by trying do we experience the bottlenecks, learn about the limits, and make guidelines and requirements regarding transparency and explainability "actionable". All this, however, requires that we take algorithms out of the suspect box and distinguish between different types of policy applications. The new law on combining data files that is currently before the Senate (Wet Gegevensverwerking door Samenwerkingsverbanden) offers starting points for well-considered interdepartmental experiments. Private parties can, if desired, be involved in experiments aimed at learning. Scientific research and university data science centers are also essential for developing new possibilities. In doing so, the government must teach itself to designate technology a bit more often as a force for good. This requires realism and clear choices in its own use of it. By designing data-driven policy responsibly and using well-understood technology, more focused policy can thus be created, and implemented, with better results.
Source: Platform O