Caution: use of AI chatbot could lead to data breach

The Autoriteit Persoonsgegevens (AP) has recently received several reports of data breaches due to employees sharing personal data of, for example, patients or customers with a chatbot that uses artificial intelligence (AI). By entering personal data into AI chatbots, the companies offering the chatbot could gain unauthorized access to that personal data.

Autoriteit Persoonsgegevens August 9, 2024

News press release

The AP sees many people in the workplace using digital assistants, such as ChatGPT and Copilot, for example to answer customer questions or summarize large files. This can save time and spare employees tedious work, but it also carries major risks.

A data breach means that personal data is accessed without authorization or without intent. Employees often use these chatbots on their own initiative and contrary to agreements with their employer: if personal data was entered in the process, that is a data breach. Sometimes the use of AI chatbots is part of an organization's policy: then it is not a data breach, but it is often still not legally permitted. Organizations should avoid both situations.

Most companies behind chatbots store all data that is entered. As a result, that data ends up on the servers of those tech companies, often without the person who entered it realizing this, and without them knowing exactly what the company will do with it. The person the data belongs to will not know either.

Customer medical records and addresses

In one of the data breaches reported to the AP, an employee at a general practitioner's practice had entered patients' medical data into an AI chatbot, contrary to agreements. Medical data is highly sensitive and receives extra legal protection for a reason. Merely sharing such data with a tech company is a major violation of the privacy of the people involved.

The AP also received a report from a telecommunications company, where an employee had entered a file including customer addresses into an AI chatbot.

Make arrangements

It is important that organizations make clear agreements with their employees about the use of AI chatbots. Are employees allowed to use chatbots, or is their use prohibited? If organizations do allow it, they should make clear to employees which data they may and may not enter. Organizations could also agree with the chatbot provider that it will not store the data entered.

Report data breach

If things do go wrong and an employee leaks personal data by using a chatbot contrary to the agreements made, then a report to the AP and to the victims is mandatory in many cases.
