AI offers many opportunities for innovation, and its social impact is significant. More and more organizations are using AI to analyze (big) data and improve processes. At the same time, this innovation is sometimes at odds with existing legislation, such as the General Data Protection Regulation (GDPR), which imposes strict requirements on the processing of personal data and the transparency of algorithms. In recent years, enforcement and application of the GDPR have often lagged behind the rapid development of AI.
In addition to legal challenges, AI also raises ethical dilemmas. Algorithms can lead to discrimination, biased information provision, and misuse of personal data. The question of how AI can be used responsibly to create social value while protecting fundamental rights and privacy is central to European policy.
In recent years, the European Commission has been firmly committed to trustworthy, human-centered AI. With strategies, coordinated plans, and ethical guidelines, it laid the groundwork for regulation. This path led to the AI Act (the AI Regulation): the world's first comprehensive legal framework for artificial intelligence.
The AI Act was finally adopted in April 2024 and has been partially in effect since February 2025. It aims to ensure that AI systems are safe, that fundamental rights remain protected, and that monitoring and enforcement are possible. In doing so, it sets the global example for regulating AI.
The AI Act takes a risk-based approach (illustrated in the sketch after this list):
Unacceptable risk: some applications are prohibited outright, such as social scoring, manipulative AI systems, and emotion recognition in the workplace or at school.
High risk: systems used in, for example, recruitment, law enforcement, or healthcare must meet strict requirements for transparency, data quality, logging, and human oversight.
Limited risk: applications must inform people that they are dealing with AI, for example by labeling AI-generated content.
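To make the tiering concrete, here is a minimal Python sketch that models these categories as an enum and maps a few of the use cases named above onto them. The mapping, the names, and the added minimal-risk tier (the Act's catch-all for everything not otherwise regulated) are illustrative assumptions, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk tiers as summarized in this article, plus
    the minimal-risk catch-all for systems not otherwise regulated."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping for illustration only; a real assessment
# requires a legal review of the system and its context of use.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition at work": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "chatbot producing generated content": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Look up the illustrative tier; default to minimal risk."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

for case in ["social scoring", "CV screening for recruitment", "spam filter"]:
    tier = classify(case)
    print(f"{case}: {tier.name} ({tier.value})")
```

The point of the sketch is that the obligation follows from the use case, not from the underlying technology: the same model can be high-risk in recruitment and minimal-risk in a spam filter.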
The AI Act is being phased in and will be fully effective from August 2, 2027. Key intermediate steps:
Since February 2, 2025: ban on certain AI systems (including social scoring and manipulative AI).
Since August 2, 2025: rules for providers of general-purpose AI models (such as the language models behind chatbots).
From August 2, 2026: additional obligations for high-risk systems, such as registration in a European database and fundamental rights impact assessments when used by public authorities.
The AI Office was created to oversee compliance at the European level, with national regulators bearing additional responsibility. Violations can result in hefty fines and the mandatory withdrawal of systems from the market.
The AI Act applies to both providers (developers) and deployers (users) of AI. From government bodies and healthcare institutions to SMEs, anyone deploying AI must determine which risk category a system falls into. Organizations are required to take measures such as the following (see the sketch after this list):
ensuring human oversight,
providing representative training data,
maintaining documentation and transparency,
reporting incidents to providers and regulators.
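As one illustration of how the logging and human-oversight measures above might look in practice, below is a minimal Python sketch of an audit record for AI-assisted decisions. All names and fields (DecisionRecord, log_decision, and so on) are hypothetical assumptions; the AI Act prescribes the obligations, not a specific record format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One logged AI-assisted decision (illustrative fields only)."""
    system_name: str
    model_version: str
    input_summary: str               # avoid logging raw personal data here
    output_summary: str              # what the system recommended
    reviewed_by: str | None = None   # the human overseer, if any
    overridden: bool = False         # whether the human changed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[DecisionRecord] = []

def log_decision(record: DecisionRecord) -> None:
    """Append to an in-memory log; a real system needs durable storage."""
    audit_log.append(record)

# Example: a recruitment screening decision reviewed (and overridden) by a human
log_decision(DecisionRecord(
    system_name="cv-screener",
    model_version="2025-06",
    input_summary="candidate 4211, role: data analyst",
    output_summary="recommend reject",
    reviewed_by="hr.officer@example.org",
    overridden=True,
))
print(f"{len(audit_log)} decision(s) logged")
```

A record like this supports several obligations at once: it documents the decision, makes human oversight visible, and gives regulators something concrete to inspect when an incident is reported.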
In addition, AI literacy among employees is a legal requirement: organizations must train their staff to use AI responsibly.