PONT Data&Privacy
The new Artificial Intelligence Strategic Action Plan: is privacy still viable?

The Artificial Intelligence (AI) Strategic Action Plan released in October by the Ministry of Economic Affairs and Climate (EZK) shows that it is mainly the market that needs to act. The action plan contains three action points: exploiting social and economic opportunities, creating the right conditions and strengthening foundations.

January 10, 2020

Author: Rob van den Hoven van Genderen

The AI action plan is intended to accelerate and strengthen research into, and application of, AI in the Netherlands.(1) It strongly reminds me of the Informatics Stimulation Plan (Informatica Stimuleringsplan) of 1983. At that time it was deemed necessary to develop government policy to stimulate the use of informatics, out of fear that the market would fail to take advantage of the new technology and thus be left behind. Attention to privacy was secondary to economic development; in fact, privacy played no role at all. The market had to be helped along by the government, and computing was the key to the future. This AI action plan takes the same route, with the platitudes repeated in every similar plan: AI is the key technology for the future.

Social and economic opportunities

As in 1983, the first action point, exploiting social and economic opportunities, relies heavily on public-private cooperation. This time it is with the Dutch AI Coalition (NLAIC) and European partners, with an emphasis on people and society, not to mention inclusiveness.

The orientation on the application of AI in government seems to be mainly focused on efficiency and effectiveness; privacy and transparency are of later concern, while cooperation in the exchange and linking of data and databases is of greater importance. Tellingly, the plan states that "Government makes optimal use of AI in public task performance." A reference to the FAIR principles for data use reinforces this suspicion by indicating that "data are made suitable, or their suitability can be established, for reuse (sharing) by both humans and machines under clearly described conditions."(2) Fortunately, it is regularly reiterated that all solutions must be transparent and accessible to citizens. In addition, research into smart solutions is everywhere, from the energy transition to research on autonomous vehicles, where problems need to be identified on a multidisciplinary basis:

"Autonomous driving will increase safety, but it also leads to questions, for example about dealing with uncertainties within autonomous systems. Where does responsibility for control lie? How can AI algorithms for autonomous driving be controlled and explained to the various stakeholders for acceptance and safety, etc.? The CWI, TNO, the Faculty of Law at UvA and the Institute of Computer Science at UvA are working together to answer these questions under the heading "Meaningful Control of Autonomous Systems" (MCAS).(3)

Yet we must hurry because, as the action plan describes, a "winner takes all" (or "takes most") dynamic may emerge, with the possibility of the Netherlands becoming dependent on other parties. That is detrimental not only to autonomy, but also to economic security and our well-being.(4)

Multi-year AI programs are being rolled out on all fronts: for education, government and industry in research labs, including a collaboration between TNO and defense. Cooperation extends into sensitive domains:

"Studies are currently being conducted, often in cooperation with scientific institutions, on the opportunities for application of AI, for example for cybersecurity, police tasks and defense. This includes explicit attention to ethical aspects and proportionality." A report on "the effectiveness of the application of AI for the police task and the ethical aspects of AI" will be released in 2020.

The regular repetition of ethical aspects suggests the worst. As examples, others refer to "the selection of relevant imagery for investigative research."(5) In defense, these include data analysis and "further development of algorithms, command, and the interaction between different unmanned systems." But also in healthcare, agriculture and the energy transition, studies are being prepared and new reports are expected. The government is also going to work on AI in public spaces. It will not invent this itself; it prefers to leave this to the market, which is thus challenged to "come up with innovative solutions for better task performance." In order to realize the application of AI in public space, an appeal is again made primarily to the aforementioned Dutch AI Coalition.(6) Nevertheless, the government has also started working on its own, no doubt with the help of the Coalition, noting that "in various places within the government, experiments are already taking place with AI applications such as chatbots, decision algorithms and translation algorithms."(7) And of course, smart AI startups are being stimulated, which "strengthen the knowledge sharing and valorization of AI applications."(8)

Education and awareness

The second action point focuses on education and awareness. The Netherlands wants to be a forerunner in the development and application of AI because, after all, with an educated population and an 'open mind' for new opportunities, we have the capabilities. A bold claim: "The Netherlands already has a vanguard position in Europe in high-quality digital and intelligent connectivity for effective AI applications."(9) Consulting firm McKinsey also indicates that the Netherlands scores above average on all fronts in terms of AI readiness. Moreover, the Netherlands already holds a top position in fundamental and applied research. The emphasis is on fundamental research that is linked to applied and practice-oriented research, which (probably) means that fundamental research must also lend itself to practical use. This emphasis is reflected in the science agenda of the Netherlands Organisation for Scientific Research (NWO), which focuses on responsible, human-centric, trustworthy, transparent and explainable AI research.(10) Multidisciplinary research on AI is of course 'leading'; after all, AI applies to all disciplines and all sectors of society. This refers to the 'almost religious mission-driven innovation policy initiated by the Top Team Dutch digital delta'.(11)

But there is not only good news. Due to the popularity of AI studies, there is a growing teacher shortage, making it difficult to train the students who must provide society with sufficient knowledge of the application and consequences of AI. But there are solutions to this as well: share as much information as possible, and then everyone will understand what it is all about. Apparently, the lack of well-trained teachers is not so relevant after all.

Data sharing is a magic formula that comes up several times:

"Combining different types of data from different parties can create valuable new datasets that enable new AI applications. To capitalize on this potential, it is necessary for public, private and civil society organizations to be able to share more data among themselves. Obviously, this must be done responsibly and in compliance with privacy rules, among other things."

However, the data should be of high quality and without bias.(12)

And that is where the safety valve of protecting privacy and transparency comes in again. It is just a shame that no concrete research proposal is attached to it.

Strengthening foundations

Specifically, the third action item focuses on fundamental rights, privacy issues and ethics. Fortunately, it is recognized that fundamental rights play an important role in the applications of AI. The government has considered this issue in depth in this report. It finds that "privacy may be violated if the processing of personal data does not meet the requirements of propriety and transparency under the General Data Protection Regulation (GDPR)."(13) The dangers come into clear focus when using facial recognition technology and big data. It also addresses how AI can jeopardize freedom of expression, how human dignity and autonomy can be compromised, and how the right to due process can be harmed by over-reliance on AI.

Fortunately, policy letters on AI, public values and human rights, and on AI and justice were sent to the House of Representatives. In these letters, the cabinet announces policies to safeguard public values and human rights in AI developments. The letters and policy proposals are based on research by Utrecht University, among others, as well as a position paper by Mireille Hildebrandt and a report by the Rathenau Institute on AI.(14) Incidentally, my confidence in the government is not exactly strengthened by the minister's answers, for instance with regard to the transparency principle:

"In other cases where the government uses algorithms, it is not required in advance to make them insightful and verifiable or to provide useful information about the underlying logic. One might think of cases in which the government conducts surveillance and in that context uses algorithms to assess risks that individuals will not comply with the law. The use of such risk assessment tools can contribute to a more efficient and effective use of capacity."(15)

The distinction from the Chinese "social credit system" is not so great!

Fortunately, later in the report it is indicated that AI policy should comply with accepted ethical frameworks and that AI can be used to avoid discrimination and bias, thus enhancing public trust in AI. Social actors are either already working on this or are encouraged to realize ethical application, transparency in the sense of explainability for those involved, and standardization of it. And fear not: there is already a NEN standards committee on AI developing best practices and frameworks for reliable and ethical AI applications. In addition, further investment in research into responsible AI use, transparency and explainability, and the monitoring of algorithms is envisioned. To this end, the agenda includes a research call by NWO on explainable, socially conscious and responsible AI.

Will this lead to concrete proposals to promote and regulate the transparency of AI and its underlying algorithms? Is that necessary, and is it even feasible? Can we control self-learning algorithms at all, and (why) would data subjects always want to know? The rules in the GDPR are such that the feasibility and the required level of control cannot be established. It seems more important to me that AI is applied in a lawful and just manner. AI technology will undeniably become an ever larger part of our society. Ethics and privacy are dynamic principles that flexibly shape themselves to practical application, sociocultural attitudes and politics, both positive and negative. The question is whether an action plan therefore actually makes sense.

Footnotes

(1) https://www.privacy-web.nl/publicaties/strategisch-actieplan-voor-artificiele-intelligentie
(2) Strategic Action Plan for Artificial Intelligence, p. 34.
(3) Strategic Action Plan for Artificial Intelligence, p. 18.
(4) Strategic Action Plan for Artificial Intelligence, p. 10.
(5) Strategic Action Plan for Artificial Intelligence, p. 15.
(6) Strategic Action Plan for Artificial Intelligence, p. 19.
(7) Strategic Action Plan for Artificial Intelligence, p. 20.
(8) Strategic Action Plan for Artificial Intelligence, p. 24.
(9) Strategic Action Plan for Artificial Intelligence, p. 9.
(10) Strategic Action Plan for Artificial Intelligence, p. 27.
(11) Strategic Action Plan for Artificial Intelligence, p. 29.
(12) Strategic Action Plan for Artificial Intelligence, p. 34.
(13) Strategic Action Plan for Artificial Intelligence, p. 41.
(14) Mireille Hildebrandt, position paper for the roundtable discussion on "AI in Law" in the House of Representatives, March 29, 2018; Rathenau Institute, Upgrading: Securing public values in the digital society (2017).
(15) Parliamentary Paper 26643, no. 570.

This article can also be found in the Digital Transformation dossier
