The algorithm registry: a useful solution in practice?

The City of Amsterdam has announced an algorithm registry to increase transparency about the algorithms it uses and to build public support for them. But what exactly is the "problem" with the use of algorithms? And does such a registry (sufficiently) address the public debate and the rights of individuals in practice?

October 20, 2020

The City of Amsterdam, together with the Finnish capital Helsinki, has launched an algorithm registry. The aim is to increase openness and transparency about algorithms and public support for their use. The registry allows everyone to see which algorithms are used and to help identify, or address, their potential risks.

That there is increasing concern about the use of algorithms within business and government is common knowledge. The topic is often equated with artificial intelligence (AI). In late 2019, State Secretary Keijzer informed the House of Representatives about the Strategic Action Plan for AI, and in early 2020 a white paper from the European Commission on AI was leaked. Both documents focus on the deployment of AI, or rather algorithms, in practice and the need for more far-reaching laws and regulations to manage it. But what are we talking about?

Algorithms are fundamentally nothing more than a predetermined set of instructions that tell a computer how to handle information or data. Depending on the algorithm used, the results can be (re)used to train or improve it, so that certain algorithms become "smarter" or better over time at the purpose for which they are used. In addition, the computer does not distinguish between individual cases and will apply the same set of instructions to every situation. That sounds promising. The City of Amsterdam, for example, was already using algorithms for parking controls and for combating illegal housing rentals.
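To make the idea of a fixed set of instructions concrete, a purely illustrative sketch could look as follows. It is not drawn from the Amsterdam registry or any actual municipal system; the field names and the 30-night threshold are invented for the example.

```python
# Illustrative only: a toy, rule-based "algorithm" in the sense described above.
# The fields and threshold are assumptions for this example, not taken from any
# real municipal system.

def flag_for_inspection(listing: dict) -> bool:
    """Apply the same fixed set of instructions to every listing."""
    nights_rented = listing.get("nights_rented_per_year", 0)
    has_permit = listing.get("has_rental_permit", False)
    # Fixed rule: flag listings rented out more than 30 nights without a permit.
    return nights_rented > 30 and not has_permit

listings = [
    {"nights_rented_per_year": 45, "has_rental_permit": False},
    {"nights_rented_per_year": 12, "has_rental_permit": False},
]
print([flag_for_inspection(l) for l in listings])  # [True, False]
```

The point of the sketch is simply that every case is run through exactly the same instructions; whether the outcome is reasonable depends entirely on the rules and the data fed into them.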

However, this also has a downside. The lack of customization, especially in the relationship between government and citizen, can lead to unreasonable or unfair outcomes. That risk may stem, for example, from the prejudice, or bias, of the party that develops, or has developed, the algorithm. Alternatively, an unsound outcome may be attributable to the data itself. The use of algorithms therefore requires careful consideration of both the factors involved in the decision-making process and the data on which the analysis is performed.

The problem with algorithms, however, is that it is far from always possible to identify their flaws. The so-called "black box" phenomenon, where it can no longer be ascertained how an outcome was reached, lurks whenever algorithms are used. This is precisely why, for example, the General Data Protection Regulation (GDPR) seeks to guarantee the right to human intervention in fully automated, algorithm-based decision-making. The aim is to prevent data subjects from being subject to decisions, or being significantly affected by outcomes, that are based on an algorithm without any human intervention having taken place.

Another safeguard provided by the GDPR is that data subjects must be meaningfully informed about, for example, the underlying logic and expected consequences of the use of an algorithm. There is much debate about how to implement these obligations in practice. It is generally agreed that both the human intervention and the information must be "meaningful" to the individual involved, but what exactly that means is, of course, itself up for debate.

Given the social and academic debate on the subject, the City of Amsterdam's algorithm registry seems a valuable move. In any case, it offers (concerned) citizens and companies the opportunity to look behind the scenes and to ask critical questions about the policies being implemented.

Its practical usefulness, however, remains to be seen. It is doubtful whether the information provided about the algorithms will actually offer sufficient insight to mitigate their most impactful risks, and whether the average citizen will be able to understand it. There is also the question of what happens next with an identified risk. Can human intervention or modification of the algorithm be enforced in such a case, or is that a choice left to the municipality? Wouldn't the affected individual simply be better served by a complete reconsideration of the outcome if the algorithm turns out to be flawed, and shouldn't the process also be adjusted to prevent future errors? And if no one examines an algorithm's (un)soundness, can the municipality hide behind the transparency it has provided? Is it then up to citizens and businesses to prevent the municipality from making mistakes when applying algorithms in practice?

In any case, the registry and its use can yield valuable insights, particularly because the use of algorithms has become indispensable in practice and the call for their further regulation persists. The initiative for the registry and the commitment to transparency and openness are therefore commendable.
