The long-awaited "guardrail" for AI, the European rules on artificial intelligence, was endorsed by the EU Council of Ministers this week. However, the rules will not take effect until 2025, even though the original intention was for them to apply, at least in part, this year.

The AI Act is the first legal framework for AI and addresses the risks posed by artificial intelligence. The goal of the rules is to promote trustworthy AI in Europe and worldwide. They aim to ensure that AI systems respect the fundamental rights and safety of citizens on the one hand, and to mitigate the risks of powerful AI models on the other.
Above all, the goal is to prevent citizens from becoming victims of companies and governments working with big data and algorithms.
The rules take a risk-based approach, assigning forms of artificial intelligence to a risk level ranging from "minimal risk" to "unacceptable risk". The first set of rules, targeting the most dangerous applications of AI, will now take effect in early 2025.
These rules specifically target artificial intelligence built to exploit people's weaknesses or manipulate them. Techniques used to track or steer people, as China does with its population, will also be banned.
Part of the AI Act is a set of rules for generative AI, such as chatbots; OpenAI's ChatGPT and Google Gemini are well-known examples. These rules will take effect around the summer of 2025.
