PONT Data&Privacy


How are general purpose AI models regulated?

European Commission September 25, 2025

Question & Answer

ANSWER

General purpose AI models, including large generative AI models, can be used for a variety of tasks. Individual models can be integrated into a wide range of AI systems.

It is important that a provider seeking to build on a general purpose AI model has all the necessary information to ensure that its system is safe and compliant with the AI Act.

Therefore, the AI Act requires providers of such models to disclose certain information to downstream system providers. This transparency enables a better understanding of these models.

Model providers must also have policies in place to ensure that they respect copyright law when training their models.

Moreover, some of these models may pose systemic risks if they are very powerful or widely used.

Currently, general purpose AI models trained using a total amount of compute exceeding 10^25 floating point operations (FLOPs) are considered to pose systemic risks, since models trained with more compute tend to be more capable. The AI Office, established within the Commission, may update this threshold in light of technological advances and may additionally designate other models as posing systemic risks in specific cases, based on further criteria such as the number of users or the degree of autonomy of the model.
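
To make the threshold concrete, the short Python sketch below estimates training compute with the widely used rule of thumb of roughly 6 FLOPs per parameter per training token and compares the result against the 10^25 FLOPs figure. The approximation, the function name, and the model figures are illustrative assumptions, not part of the AI Act or the Commission's guidance.

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Rule-of-thumb approximation for dense transformer training compute:
    # roughly 6 floating point operations per parameter per training token.
    # This approximation is an illustrative assumption, not an official method.
    return 6 * n_parameters * n_training_tokens

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute threshold stated in the AI Act

# Hypothetical model: 175 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(175e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above the systemic-risk threshold:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)

Under these assumed figures the estimate comes out at about 1.6 x 10^25 FLOPs, which would place the model above the threshold; a smaller or less extensively trained model would fall below it.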

Providers of models with systemic risks are therefore required to assess and mitigate those risks, report serious incidents, conduct state-of-the-art testing and model evaluations, ensure cybersecurity, and provide information on the energy consumption of their models.

To this end, they are asked to work with the European AI Office to draw up codes of practice as the central tool for detailing the rules in collaboration with other experts. A scientific panel will play a central role in monitoring general purpose AI models.

This is a translated article.

KNOWLEDGE PARTNER

Martin Hemmer