PONT Data&Privacy
Human-centric AI, how do you actually do it?

With this series of articles, we aim to provide a realistic perspective on the application of Artificial Intelligence (AI) within government. In the news, we see concerns and problems surrounding AI: how do we safeguard the human touch? How do we ensure, for example, that a system does not discriminate? This third article focuses on human-centered AI and non-discrimination. Bart van der Sloot of the Tilburg Institute for Law, Technology, and Society examined how AI algorithms can comply with non-discrimination law. First, we explain which non-discrimination principles can help with this. Then we discuss with Bart how the government can use the lessons from his research when deploying AI. "Actually, these insights are not new at all; we just need to relearn how to apply them to AI," Bart explains. A disconcerting or rather a hopeful conclusion?

Dutch Digital Delta November 8, 2021

Everyone knows that discrimination is prohibited by law. So when we use algorithms to prepare decisions, we do not want to discriminate unintentionally. But how do we ensure that? Bart van der Sloot researched this and published the Handreiking AI System Principles for Non-Discrimination, a practical guideline. Its essence is that in the preparation and development of an algorithm, every step involves very consciously considering what the goal is, what exactly is happening, and with which data. The same care is applied during implementation and evaluation.

This procedure is called *non-discrimination by design. Its six steps, explained in the box at the end of this article, lead to an AI application that prevents discrimination. We asked Bart van der Sloot to tell us a bit more about it.

Human-centered AI does not exist

What exactly is human-centric AI? "Algorithms, by definition, are precisely NOT human," explains Bart van der Sloot. "With the term human-centric AI, we have to be careful not to disguise the fact that algorithms are essentially not human. What we mean by it is that we have to keep a good eye on what an algorithm does and what interests we want to serve with it, in which context. For that, all stakeholders need to be involved." That is also actually the core of non-discrimination principles. But how do we do that concretely?

Discriminating is allowed

"The power of AI lies in finding correlations in groups of data, without having to consider each case separately. It is faster, more efficient and more consistent. By law, we may not discriminate, not even indirectly. But we know that data are not objective (they are biased) and that the correlations we find can be discriminatory. That cannot be avoided. The law, however, says something about the outcome, not about the process leading up to it. So it still does not tell you how to build an AI system, which data you can and cannot collect, and how to train the algorithm. That is exactly the translation we made with a team of lawyers, technologists and ethicists from three universities and the Netherlands Institute for Human Rights. The Handreiking Non-Discrimination by Design indicates, step by step, the frameworks within which choices must be made."

We will use AI more

Bart argues that a world with AI is not necessarily better than a world without AI. AI can have great advantages, but it also has intrinsic disadvantages. So the question that should always be asked is: why AI? Or conversely: why not? "Suppose you have an algorithm prepare a decision, but as a precaution have a human make the final decision. Why deploy AI at all then? There is a risk that the human simply adopts the algorithm's decision anyway, for example due to time pressure, or to avoid having to explain a deviation from the algorithm's outcome. But AI can also offer an advantage over human choices: you can follow very closely how decisions are made and on the basis of which considerations, whereas with humans subconscious processes often come into play. You can also see whether and to what extent there are discriminatory biases and take corrective action where necessary, whereas human decisions are often no less discriminatory. AI is also very consistent, which can increase legal certainty and equality for citizens."
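The point that you can check to what extent algorithmic decisions contain discriminatory biases can be made concrete with a simple audit. The sketch below is not from the Handreiking; the group labels, the toy decisions and the 0.8 "four-fifths" rule of thumb are assumptions chosen purely for illustration. It computes per-group approval rates for a batch of decisions and their disparate-impact ratio:

```python
# Illustrative sketch (not from the Handreiking): auditing a batch of
# algorithmic decisions for group disparities. Groups, decisions and the
# 0.8 threshold below are assumptions for the example.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)            # {'A': 0.75, 'B': 0.25}
print(round(ratio, 2))  # 0.33 -- far below the common 0.8 rule of thumb
```

Such a check is only a signal, not a legal judgment: whether an observed disparity amounts to indirect discrimination depends on justification and context, which is exactly why involving all stakeholders matters.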

How does it work in practice?

According to Bart, the failures with SyRI in the courts and with predictive policing systems, which use algorithms to predict where crime will occur, could have been avoided. "There simply wasn't enough thinking in advance," he says. Still, according to Bart, government agencies are doing better and better, and there was a lot of interest in the advice from his research. "You have to realize that legislation and case law cannot simply be applied to AI in black and white. Programmers would like that, because they need clarity. AI is basically just statistics, so we already have a lot of knowledge about the reliability of correlations and decisions. We need to use that knowledge more in AI development. However, there is still often ignorance about AI among policymakers and administrators, so the right assignments are not given. That is why the involvement of all stakeholders is so important." What about the unpredictability of self-learning systems? "If it is really a black box whose workings you cannot figure out, you should not use it within government. Government agencies must always be able to demonstrate how they work and on what basis they make decisions. Because these systems are self-learning, continuous evaluation is necessary. So also listen carefully to signals from citizens and take them seriously."

Communicating, listening, communicating

Listening is a hugely important soft skill in AI development. Bart: "We have long known that, when developing systems, communication between clients, users, other stakeholders and developers is crucial. Not just for support, but first and foremost to ensure that a system works and does what it is supposed to do. AI is in some ways more complex and involves many different disciplines, but that is precisely why communication remains the key! This applies to algorithm development, but also to assessing the quality of the data you use and the weight you give them. It is very tempting to use all kinds of available data whose accuracy and timeliness are not at all clear. You can start a project based on incomplete and biased data, but then you often have to stop it after a year or two, because the system produces nothing useful and the predictions are not accurate. Then you have wasted time, manpower and resources."

In Europe, we know how to do it

Bart welcomes the ambitions of the Netherlands and the EU to lead in human-centered AI. The EU repeatedly comes up with regulatory proposals. "That is also the strength of Europe," he believes. "We have human rights, norms and values here. If you want to do business with AI here, we have clear frameworks for that. It just could have been done 20 years earlier. In America, you see that the tide is turning: distrust of AI has increased so much that strong regulation is now being called for there as well, especially by the AI companies." Bart explains that predictive policing has also damaged the image of the police there, because officers only show up where the system says intervention is needed. As a result, the police are less visible as a familiar face in the neighborhood. "If the Dutch police apply AI, the strength of the neighborhood officer, who knows the area and has contact with residents, should not be lost. The neighborhood officer often sees much more than an AI system."

That's what we do it for!

Communication, stakeholder engagement and careful thought before you begin. It sounds simple. Does Bart have anything to add to that advice? Of course: "Make very clear what you are doing it for! Have a good story and communicate it openly. Be open to signals of unexpected effects and deal with them quickly. Specify precisely what your benchmarks and success criteria are, and check after a year whether you have met them; if not, stop the project rather than muddling through indefinitely. AI and data-driven work are not an end in themselves, but a means. Use them for rationalization within government: make policies more transparent and accountable, eliminate human bias and (unconscious) discrimination, and make the application of rules consistent. That way AI can really help us, and government can use AI with integrity and transparency for our society." And that is a hopeful conclusion.

*Non-discrimination by design
1. Problem definition
What is the problem and how will AI help solve it? What percentage of false negatives and false positives is acceptable?
2. Data collection
To what extent are the needed data available within the organization, and to what extent must they be obtained externally? What bias is in the data?
3. Data preparation
On what criteria is the choice made to use or not use certain data for the model, and how does this choice affect the distinction between groups?
4. Modeling
How are criteria of explainability and fairness translated into the model-selection strategy? How does the model perform against the chosen benchmark of false positives and false negatives?
5. Implementation
Select a defined and representative application to test the system. Adjust the model based on the results and engage stakeholders.
6. Evaluation
Select an implementation strategy and formulate an evaluation and exit strategy. Assess how the system functions and how it would function with a different model, fairness definition and/or algorithm.

Source: Handreiking AI System Principles for Non-Discrimination, Bart van der Sloot et al.
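Steps 1 and 4 of the box above both hinge on a benchmark for acceptable false positives and false negatives. A minimal sketch of such a check follows; the 10% thresholds and the toy data are assumptions for illustration only, not figures prescribed by the Handreiking:

```python
# Minimal sketch of the benchmark check named in steps 1 and 4: compare a
# model's false-positive and false-negative rates against the acceptable
# percentages fixed in the problem definition. Thresholds and data are
# illustrative assumptions.

def error_rates(labels, predictions):
    """Return (false_positive_rate, false_negative_rate) for binary outcomes."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    return fp / labels.count(0), fn / labels.count(1)

def meets_benchmark(labels, predictions, max_fpr=0.10, max_fnr=0.10):
    """True only if both error rates stay within the agreed benchmark."""
    fpr, fnr = error_rates(labels, predictions)
    return fpr <= max_fpr and fnr <= max_fnr

labels      = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # ground truth
predictions = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]   # model output
fpr, fnr = error_rates(labels, predictions)    # ~0.17 and 0.25
print(meets_benchmark(labels, predictions))    # False: the benchmark fails
```

Per step 6, such a check would be rerun during evaluation, ideally broken down per group as well, so that a model meeting the overall benchmark while failing it for one group is still caught.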

This article originally appeared on the website of the Dutch AI Coalition.
