PONT Data&Privacy
AI: building together a realistic perspective for societal challenges

Artificial intelligence (AI) is often used within the public sector, and knowledge about its capabilities is becoming more widespread. Technological knowledge is developing rapidly, but how can we harness its potential? At the mini-symposium of the Public Services Working Group during the ECP Annual Festival 2021, the exchange of experiences proved to be an essential step in finding a realistic perspective on AI.

January 17, 2022

Background articles

Demystification

An important first move in recognizing the potential of AI is to demystify it. The term AI conjures up all kinds of big ideas, but it is "just" a tool that, under certain conditions, helps create added value. "You hear all kinds of stories about AI," says Barbara Visser of the Public Services Working Group. "It's the solution to everything, or it's a gigantic threat." The reality is more nuanced.

Founder of Aigency Jim Stolze is passionate about spreading this realistic perspective. He briefly discusses the National AI Course, which teaches people what AI is all about - and exactly what it is not. "What people call AI are actually different things," Jim explains. "With knowledge-driven AI, people input the rules, which are then executed by the computer. But you also have machine learning. In that, we build algorithms that look for rules on their own within a data set. So this is actually more statistics than programming."
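Jim's distinction can be sketched in a few lines of code. This is a hedged illustration, not material from the National AI Course: the approval scenario, the numbers, and the halfway-between-means "learning" rule are all invented for the example.

```python
from statistics import fmean

# Knowledge-driven AI: a person writes the rule, the computer executes it.
def rule_based_approval(income: float) -> bool:
    return income >= 30_000  # threshold chosen by a human expert

# Machine learning: the "rule" (here a threshold) is estimated from
# labelled examples instead of being written down by a person.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    approved = [x for x, label in examples if label]
    rejected = [x for x, label in examples if not label]
    # Simple statistics: split halfway between the two group means.
    return (fmean(approved) + fmean(rejected)) / 2

data = [(20_000, False), (25_000, False), (40_000, True), (50_000, True)]
threshold = learn_threshold(data)  # 33750.0: found in the data, not hand-written
```

The second function is, as Jim puts it, "more statistics than programming": nobody typed the threshold in, it falls out of the data set.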

Complementary intelligence

Paul Iske, Chief Failure Officer of the Institute for Brilliant Failures, likewise talks about AI as complementary intelligence. Ultimately, we are in control, Jim emphasizes: "There are always people responsible who wrote the code, and it's people who interpret the results. In the end, a flesh-and-blood human makes the decisions and is assisted by an algorithm."

Demystifying AI reveals how powerful it can be, as well as the challenges the public sector faces in using it. Devin Diran of TNO discusses the dilemmas public agencies can face when they want to use AI for the energy transition. AI is expected to be a new technology that can ensure a fast, fair and inclusive energy transition, but it still has many knowledge gaps and requires room to experiment. TNO is learning about this from projects from Rotterdam to Zoetermeer, which raise questions such as: how do you translate informal local knowledge generated through AI into formal knowledge? How do you create trust in AI in decision-making? How do you guarantee that AI contributes to empowered citizens and stakeholders? How do you test the reliability of AI? For an agency that wants to use AI, room to experiment is necessary to answer these questions.

Multidisciplinary team

This is also emphasized by Guido Hobeijn, founder of the Computer Vision Team (CVT) of the City of Amsterdam. A proof of concept often turns out nicely, but that is not the same as actually putting a solution into practice. That is why the CVT was created: a team in which experts in various fields work together to implement projects successfully. With a project that uses images from public space, for example, you have to take privacy legislation and public support into account, while at the same time you want the right government agencies to have access to the right data.

Preconditions

The main challenges are usually not technical, but rather concern the ethical, legal and social preconditions for embedding and scaling up AI in the public process. The social environment within which a project is set up is often understudied. Citizen participation is crucial for a fair and inclusive energy transition, for example, Devin points out, but the best way to involve citizens from the start is still unclear: information evenings only reach certain people, and kitchen-table talks are very labor-intensive. Data-driven solutions could help precisely here, but because they quickly involve resident data, the ethical, legal and social preconditions are essential here too. As Barbara adds later: it is essential to involve citizens from the beginning, in one way or another, in the question of whether, and if so under what conditions, an AI application can create added value. After all, in the Netherlands and within Europe we stand for AI in which the human being is central at all times.

The legal and ethical frameworks surrounding a project also often receive too little attention. Government agencies quickly find themselves in a legal jungle in which laws often contradict one another. Working in multidisciplinary teams is therefore important, Jim says. "AI is too important to leave to technologists. As Guido said, it's okay to have a lawyer on your team."

Cold feet

"Before you start putting a solution into practice, you have to look at what values might be compromised," says Bert Kroese, guest speaker and deputy director general of CBS. "You have to have the ethical discussion from the beginning and look at how to represent those values in the design. We see that AI can be very important in big challenges, and we want it to be used responsibly." Properly informing all stakeholders, including citizens, and giving them a voice is proving difficult but necessary. For many agencies, it feels risky to fall short in an ambitious project, even though others could learn a great deal from it. "I see cold feet in many governments," says Jim. "Before you know it, you get punished when things don't go well right away, when in fact we can learn the most from each other."

Ethical AI

People would always rather see a project succeed than fail, especially if it was funded with public money. In that regard, public support for AI-based solutions is another big challenge. "Everybody believes that an AI is objective," says Jim, "but nobody wants a robot judge." Demystification of AI also plays an important role in this process. "It starts with transparency," says Bert. "First, clarify what you are doing, express your ambitions, and share your experiences."

In the area of ethical use of AI, there is still plenty to learn from each other. What are effective communication strategies? And what about when algorithms can make better diagnoses than doctors? Here Barbara notes that it can also be unethical not to use the possibilities of AI. After all, being ethical with AI is not only about protecting citizens from privacy risks, but also about allowing citizens to take full advantage of what AI can do. "We really need to exploit the opportunities," Bert emphasizes, "but we need to do so responsibly. Talking to each other, like at this symposium, can help find the balance between going too far and not going far enough."

Failing successfully and learning from each other

To find that balance together, it is important to share even the failed projects, Paul argues. We have yet to discover exactly what using AI entails, so it would be strange to assume everything will go right the first time. "You're going to run into failures anyway, and then you lose money, time and effort. Those are your failure costs," Paul explains. "But if you don't discuss what happened, you also lose your potential returns: new knowledge, new experience, and a better starting point for future projects." Paul therefore calls on us to see the potential of this learning return as well, and to pay attention to the "failure return."

This is why it is so important to realize that business cases are almost never fully realized. The value of a project lies not only in its success, but also in the effort put into it. Often a project is a good idea and everyone has prepared well, but things still turn out differently. An important next step, then, is to share experiences - both successful projects and brilliant failures.

A realistic perspective

"AI has long since ceased to be a distant prospect," Bert tells us in his apt conclusion to the symposium. "To get a realistic perspective, it is important to show that a technique like machine learning is not very different from ordinary statistics. Patterns are estimated from data, and those patterns can be used to make predictions. This is very powerful but also has serious limitations. The biggest challenges are not in the technology but in the preconditions: how do you get the right data, how do you deal with ethical issues, and how do you ensure public acceptance? There are risks involved in using AI, but it also offers many opportunities. To exploit them, we are all still experimenting. That's why we need to share our experiences together. Success stories, but especially also failures - even if they are not brilliant."
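Bert's point that machine learning is close to ordinary statistics can be illustrated with the textbook case of ordinary least squares: a linear pattern is estimated from data and then used to make a prediction. The data points below are invented for the sketch.

```python
# Fit y = a*x + b by ordinary least squares: "estimate a pattern from
# data", then use that pattern to predict an unseen point.
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]      # roughly y = 2x, with noise
a, b = fit_line(xs, ys)
prediction = a * 5 + b          # the estimated pattern makes a prediction
```

The same few lines are both "statistics" (a regression) and "machine learning" (a model trained on data) - which is exactly the demystification the speakers argue for. The limitation is equally visible: the prediction is only as good as the data and the assumed shape of the pattern.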


Knowledge partner

Martin Hemmer