A "friend" in your pocket?

Last summer, it was reported that OpenAI, the company behind ChatGPT, was sued by the parents of a 16-year-old American boy who committed suicide after lengthy conversations with the chatbot. In the Netherlands, too, young people are increasingly turning to AI chatbots such as ChatGPT, Snapchat My AI, or other chatbots to talk about mental and physical health issues, to ask for financial and political (voting) advice, or to seek virtual friendships and relationships. The conversations with the chatbot are about school stress, arguments at home, insecurity, and mental or physical complaints. For many children, AI feels like a safe place: always available, non-judgmental, and discreet. However, its use raises important (privacy) questions about the role of AI in the daily lives of young people, how they can use this technology safely, and what role the parties involved can play in this.

Privacy First, December 10, 2025

Young people told NOS Stories why they use AI in their daily lives.[1] One of the users said that he does not share much with other people, but he was able to get things off his chest with ChatGPT: it was much more accessible, easier, and less confrontational.

In addition, young people indicate that AI is always available. This is in contrast to healthcare, for example, where there are sometimes (long) waiting lists. Psychologists also emphasize that people with early symptoms or who are on a waiting list for treatment can benefit from conversations with a chatbot, but they certainly also see the risks.[2]

The risks

AI chatbots pose various risks, especially for vulnerable users such as young people. After all, the output of an AI chatbot is based on a statistical prediction model, which means that its answers are not necessarily correct or truthful. It is therefore inherent in the way these chatbots work that you cannot simply trust the answers and should always check them.

However, because the answers sound so convincing, young people are quick to accept them as true. In addition, AI chatbots often contain addictive elements to keep users chatting longer, for example by ending each reply with a question to the user in order to prolong the interaction and keep users engaged. Some AI apps also pretend to be real people by offering virtual characters to chat with, ranging from "dream partners" and movie characters to psychologists. The design can be hyper-realistic, such as the appearance of a phone call screen when using the voice option, which makes it seem as if the user can 'call' the virtual conversation partner. This feeling is only reinforced if the AI bot also sounds like a real person and the chatbot provider is not transparent about the fact that the user is not communicating with one.

Finally, the use of AI chatbots in crisis situations can be dangerous. First of all, because AI chatbots are not able to draw up a personalized treatment plan, as a professional would. AI chatbots also often fail to refer users to support services, or do so incorrectly, because a chatbot cannot recognize emotions and nuances in conversations. It does not understand context in the way a support worker does, and it has no empathy or sense of responsibility. As a result, users may rely on answers that do not help them and ultimately fail to receive the care they need.

Commercial purposes

The providers of many of these apps are commercial companies. These parties are profit-oriented and gain access to a lot of personal information through these conversations.[3] This often includes special categories of personal data within the meaning of the GDPR: data about health, family situation, or emotional well-being. The GDPR imposes strict requirements on the processing of such data. According to ChatGPT's Privacy Statement, personal data is shared with third parties and affiliates of OpenAI.[4] This may create the risk that personal data will be used outside its original context or processed for purposes other than those for which it was provided. This can lead to a loss of control over one's own data. In addition, a lower level of security or transfer to countries outside the EU may increase the risk of misuse or unauthorized access. Personal data is also used for the further development of AI models. This has an indirect commercial purpose, as better models make the product more valuable and competitive.

Parties involved

Parents

It is important for parents to stay involved when their child uses an AI chatbot or app that is always available. Digiwijzer emphasizes that it is important to discuss with young people how AI works, what it can and cannot do, and why it is important to use AI technology consciously.[5] This requires parents to actively engage in conversation about their child's use of an AI chatbot and explain that a chatbot is not a human being, does not treat what a child shares confidentially, and can give incorrect advice. Parents should also check the settings of AI chatbots so that as little data as possible is shared with the provider(s) of such AI technologies.

Educational institutions

Schools are increasingly confronted with AI in the classroom, both as a learning tool and in the lives of students. Education has a dual role in this regard: providing information and protection. According to an article on Kennisnet, "a good AI strategy is a necessity for every school."[6] Without its own vision on AI, the school loses control over the use of AI. A prerequisite is that schools understand how AI works, where the boundaries lie, and what it means for young people. This requires digital literacy, critical thinking about technology, and insight into how young people use AI.

Government

The government plays an important role in ensuring the responsible use of AI technology by young people. On the one hand, it has a duty to protect young people from risks such as privacy violations, manipulation, discrimination, and exposure to inappropriate content. The government does this by establishing laws and guidelines, monitoring compliance, and promoting awareness about the safe use of AI. On the other hand, the government plays a supporting role by helping young people develop the knowledge and skills they need to use AI critically and responsibly. This can be done, for example, by promoting digital literacy in education or providing information through media channels. In this way, the government seeks to strike a balance between protecting young people and encouraging their development in an increasingly digital society.

Providers

Developers and providers of AI systems must realize that children are not ordinary users. OpenAI acknowledges this and states in its Privacy Policy that its services are not aimed at or intended for children under the age of 13, and that users under the age of 18 must have permission from a parent or guardian to use OpenAI's services.[7] In response to some troubling experiences, OpenAI has also introduced new features specifically for parents. One of these allows parents to link their own account to their child's account, so that they receive a notification if ChatGPT detects that their child is in acute distress.[8] This is a step in the right direction, but an AI chatbot cannot reliably assess whether someone is really in distress. The new features give parents more control, but they do not solve the core problem.

Design choices must be based on safety and comprehensibility. Important considerations include data minimization, transparency, privacy-friendly age verification options, and ethics by design. The latter focuses on the privacy, protection, and development of the child. The European AI Regulation (the AI Act) also introduces new obligations for developers and providers of AI systems, especially when AI interacts with children or exploits the vulnerabilities of minors.

Privacy First calls on the providers of such AI chatbots to take responsibility.

[1] See NOS Stories, September 26, 2025. For the above examples, see also Volkskrant, Parents sue ChatGPT over their son's suicide, August 27, 2025; NOS News, ChatGPT as an aid for mental health issues: "Better AI than therapy", September 26, 2025; Telegraaf, 10 percent of young people prefer to ask AI for financial advice rather than their parents, October 14, 2025; Autoriteit Persoonsgegevens, AP warns: chatbots give biased voting advice, October 21, 2025; Autoriteit Persoonsgegevens, AI chatbot apps for friendship and mental health are simplistic and harmful, February 12, 2025.

[2] NOS News, ChatGPT refers people too quickly, notes suicide helpline 113, September 26, 2025.

[3] Autoriteit Persoonsgegevens, AI chatbot apps for friendship and mental health are simplistic and harmful, February 12, 2025.

[4] OpenAI Privacy Policy, updated June 27, 2025.

[5] Digiwijzer, AI lets children talk to fictional characters: innovation or risk?

[6] Kennisnet, 'A good AI strategy is a necessity for every school', October 1, 2025.

[7] OpenAI Privacy Policy, updated June 27, 2025.

[8] OpenAI, Creating Better ChatGPT Experiences for Everyone, September 2, 2025.
