Organizations that use chatbots in their services should always offer people the opportunity to speak with an employee. Organizations should also make it clear when people are dealing with a chatbot and ensure that the chatbot does not give incorrect, evasive or misleading information.
The Dutch Data Protection Authority (AP) and the Netherlands Authority for Consumers and Markets (ACM) are calling on organizations to take responsibility if they choose to deploy chatbots. The regulators will be paying extra attention to this in the coming period.
Consumer law already requires companies to communicate directly, effectively and accurately with their customers, yet this does not appear to be clear enough for many businesses. For so-called "intermediary services" such as social media, marketplaces and online platforms, this clarity already exists thanks to the Digital Services Act (DSA), which states that people must be able to choose a means of communication that is not fully automated. The European Commission is currently preparing amendments to consumer legislation; the forthcoming Digital Fairness Act should soon provide the same clarity for other companies as well.
In addition, the AI Act will introduce new transparency obligations for the deployment of AI chatbots. Among other things, companies must then inform users that they are communicating with an automated system.
The regulators are asking the European Commission to supplement these transparency obligations with clear rules for the design of AI chatbots: they should be honest, recognizable and accessible, and they should not mislead people.
The AP (coordinating regulator of algorithms and AI) and the ACM (consumer rights regulator) are seeing a rapid increase in reports of problems with chatbots. Recent consumer research by the ACM shows that the lack of human contact in customer service is one of the biggest annoyances.
It also appears that chatbots sometimes give poor or even incorrect answers, or fob people off without really solving their problem. Furthermore, it is often unclear to people whether they are interacting with a chatbot or an employee, and they do not always manage to reach an employee when they want to.
Greater oversight of chatbot use is also needed because of information security and privacy risks. Chatbots are a form of generative AI, trained on large amounts of information and data, which may include confidential data and documents. This can make it possible, for example for malicious parties, to get a chatbot to reveal more information than is necessary to answer "normal" customer questions and thereby extract that confidential information. This compromises data security and may even lead to data breaches.