Privacy Risks and Measures in Large Language Models (LLMs) in AI

The report presents a comprehensive risk management methodology for LLM systems, together with practical measures to mitigate common privacy risks in such systems.

European Data Protection Board, April 10, 2025

In addition, the report includes three example use cases showing how the risk management framework can be applied in real-world situations:

  • a virtual assistant (chatbot) for customer questions,

  • an LLM system for monitoring and supporting student progress,

  • an AI assistant for travel and calendar management.

Large Language Models (LLMs) represent a groundbreaking development within artificial intelligence. They are deep learning models, trained on large data sets, designed to process and generate human language. Their applications are versatile, ranging from text generation and summarization to assistance with coding, sentiment analysis, and more.

The EDPB initiated this project as part of its Support Pool of Experts program, at the request of the Croatian Data Protection Authority (DPA).

The project was completed in April 2025 by external expert Isabel Barbera.

Objective

The report proposes a comprehensive risk management methodology to systematically identify, assess and mitigate privacy and data protection risks.

The report helps data protection authorities (DPAs) gain a complete and up-to-date understanding of LLM system operations and associated risks.

Download the report here.
