Interview with Martin Hemmer: "Using AI tools responsibly is useful"

The editors of Data&Privacyweb spoke with Martin Hemmer, partner and attorney at AKD and a specialist in Artificial Intelligence, among other fields. In his opinion, what can we expect now that ChatGPT and other AI tools are on the rise? And are there any positives to this development?

Editorial Data&Privacyweb 2 May 2023

Articles

It has been argued that ChatGPT poses a threat to jobs. For example, it was recently in the news that the latest AI tool could take a lot of work off the hands of lawyers, but could also potentially replace the profession altogether. What's your take on that? Or will it not go that far?

"In recent years, it has often been all too easy to call a technology 'disruptive.' With ChatGPT, I think it really is.

While AI will certainly not completely replace judges and lawyers in the short term, I expect it will substantially change their work. ChatGPT, for example, is good at a skill that was long considered typically human and essential to a lawyer: turning information into a persuasive story. So ChatGPT could undoubtedly help with tasks previously performed by humans. Parties might also choose (if they can agree on the facts) to have a dispute resolved by AI arbitration instead of by a judge. But as long as the law is written by humans and full of texts open to subjective interpretation, judges and lawyers will still have a role to play, it seems to me."

When we talk about ChatGPT, we mainly talk about the dangers. But are there also positive sides to this development that can help us move forward, for example in the healthcare and education sectors? If so, do you have any concrete examples or expectations?

"In many areas there is a labor shortage. Without technological advances, that shortage will only get worse. So the responsible use of AI tools seems to me to be useful. Moreover, AI can in itself contribute to technical progress by coming up with solutions to problems."

All sorts of regulation and legislation are coming out of "Brussels" that are meant to curb AI and, where necessary, strengthen it. To what extent is that legislative package sufficient, and where does it conflict with other laws and regulations (think of the DSA and the DMA)?

"In Brussels, hard work is being done on the AI Act. The hope is to finalize it in late 2023 or early 2024. However, the advent of ChatGPT has also sparked renewed discussion among European legislators. The AI Act introduces rules based on risk category, and the big question is which risk category a technology like ChatGPT should fall into. Opinions are divided on that."

How do you view the ban on ChatGPT in Italy and the investigation by the French regulator into the AI tool? Based on that "case law," can we expect something similar in the Netherlands?

"I think these bans are a bit premature, at least from a privacy perspective. The use of ChatGPT can certainly lead to the processing of sensitive data, depending in particular on the information shared with ChatGPT. That, however, is determined by the input, the prompts, given by the users. Users will have to be made aware of that, and that is what ChatGPT is trying to do. To my knowledge, for example, ChatGPT does not build profiles of users for commercial purposes based on the prompts entered."

ChatGPT is seen as something of a panacea, but where are the limitations?

"ChatGPT can generate incorrect answers with a lot of conviction."

Want to learn more about ChatGPT and other developments in AI and legislation? Then check out Martin Hemmer's e-college at this link.
