PONT Data&Privacy

AI Act in final stage: more grip on ChatGPT?

On June 14, the European Parliament adopted by a large majority its position on the AI Act. This opens the way for the trilogue between Parliament, Council and Commission to finalize the text of the world's first comprehensive AI Act. Given the rapid developments in the field of AI, it is interesting to analyze the parliamentarians' proposed amendments. Both in a general sense and specifically regarding the regulation of ChatGPT and equivalents, writes Bart Schellekens, data and tech lawyer at Eindhoven University.

17 July 2023

Articles

For example, the European Parliament wants the definition of "AI system" to align better with international standards; it therefore proposes adopting the OECD definition. Like the Commission's and the Council's texts, the Parliament's definition is broad (1). So one way or another, the material scope of the law will be very wide.

Prohibited AI systems

To keep this broad scope workable, however, a risk-based approach is used: unacceptable risks are banned, strict rules apply to high-risk systems, and low-risk systems must comply with transparency obligations. MEPs proposed a number of improvements in these three categories. For example, they want emotion recognition software banned in certain cases, as well as a ban on the indiscriminate collection of biometric data to feed facial recognition databases. With a far-reaching amendment, parliamentarians also propose banning AI systems that estimate the risk of criminal or administrative offenses based on personality traits; think of the Dutch childcare benefits scandal or predictive policing.

Additional condition for qualifying as high risk

Highly relevant is the additional condition for the high-risk qualification proposed by Parliament. It makes the article defining high-risk systems less scattershot and reduces the regulatory burden. Under the amendment in question, an AI system that falls into a high-risk category is not automatically high risk: there must actually be a significant risk of harm to human health, safety or fundamental rights (and, in a specific case, the environment). Parliament shifts the question of when a risk is "significant" to the Commission, which must issue guidelines on this before the AI Act enters into force. I wish them every wisdom in doing so.

Foundation models

The most discussed topic during the parliamentary negotiations, however, was how to deal with generative AI such as ChatGPT; in other words, AI systems that are not built for a specific purpose. It is ironic that the European Commission's original proposal, which explicitly aimed to be technology-neutral, has become outdated even before the legislative process is complete. Fortunately, the European Parliament's text still makes it possible to regulate generally deployable models and systems, and that is much needed. The European Parliament focuses on so-called foundation models: very large language models that can be used for a wide range of different tasks.

The European Parliament's proposed amendments create a whole raft of requirements that foundation models must meet before they may be placed on the market. These requirements follow the same approach and structure as those for high-risk systems. They aim to enforce transparency, accountability, quality management, risk management and standards, so that risks to "health, safety, fundamental rights, the environment, democracy and the rule of law" are reduced and mitigated as much as possible.

That is a challenging task, especially considering the development and implementation of well-known models such as ChatGPT and Stable Diffusion. Researchers at Stanford University tested ten well-known foundation models against some of the proposed requirements. The results are not surprising: the best-performing model achieved a score of only 75%, and some models did not even meet a quarter of the requirements. Intellectual property, energy, risk management and evaluation proved the biggest challenges (2).

The Parliament's proposed requirements, if enforced (!), can therefore contribute positively to the current ecosystem of generative AI, according to the researchers. Hopefully, AI developers will take this call to heart and fully commit to developing methods and standards for deploying generative AI as responsibly as possible, rather than the reflex already on display of simply lobbying away as many requirements as possible (3).

Continued

What's next? The trilogue between the Commission, the Council and Parliament has now begun, and dealing with generative AI such as ChatGPT will certainly be a prominent topic on the agenda. The trilogue will probably be concluded before the end of the year, and the entire process in early 2024, or at least before the European Parliament elections in June 2024. In the meantime, a comprehensive set of standards and guidelines, to which many articles in the AI Act refer, must also be developed. The regulation will apply two years after publication of the AI Act, and some parts after only three months (or after three years and twelve months respectively, if it is up to the Council; these dates are also still part of the upcoming negotiations). So there is plenty of reason to keep a close eye on this file.

1. The European Parliament proposes to define an AI system as follows: "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments".

2. https://crfm.stanford.edu/2023/06/15/eu-ai-act.html

3. https://www.ft.com/content/9b72a5f4-a6d8-41aa-95b8-c75f0bc92465
