General-Purpose AI Code of Practice: Europe's new rules for AI models

On July 10, 2025, the European Commission published the final General-Purpose AI Code of Practice (GPAI CoP), a voluntary but influential code of conduct that gives model providers a clear path to compliance with the EU AI Act's obligations for general-purpose AI models, which apply from August 2, 2025. Executive Vice-President Henna Virkkunen called the Code "a clear, shared pathway to compliance" in the accompanying press release from Brussels.

July 31, 2025

The Code groups the key obligations for providers into three thematic chapters. The essentials are summarized below.

Chapter: Transparency

For whom: all GPAI providers.

Essence of obligations: a standardized Model Documentation Form covering architecture, compute and energy consumption, data provenance and distribution channels [1].

Chapter: Copyright

For whom: all GPAI providers.

Essence of obligations: an internal copyright policy, crawlers that comply with robots.txt and other rights reservations, filters against infringement in model output, and complaint handling for rights holders [1].

Chapter: Safety and Security

For whom: high-impact models (those posing systemic risk) only.

Essence of obligations: life-cycle risk analysis, red teaming, security of model weights, serious-incident reporting within 2-15 days, and semi-annual reports to the AI Office [1].

What does this mean in practice?

The Transparency Chapter requires each provider to have a fully completed Model Documentation Form ready at launch. That form records in detail how a model was built, trained and distributed, what data was used and how much energy was consumed in the process. Downstream developers thus get the information they need for their own AI Act obligations, while regulators can access the full documentation upon request [1].
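To make the scope of that documentation concrete, here is a minimal sketch of how the categories named above (architecture, compute, energy, data provenance, distribution channels) might be captured as a structured record. The field names are illustrative assumptions; the authoritative template is the Commission's own Model Documentation Form.

```python
# Illustrative sketch only: field names are assumptions, not the official form.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    model_name: str
    architecture: str                  # e.g. "decoder-only transformer, 70B parameters"
    training_compute_flop: float       # total compute used for training
    energy_consumption_kwh: float      # energy consumed during training
    data_sources: list[str] = field(default_factory=list)          # provenance of training data
    distribution_channels: list[str] = field(default_factory=list) # e.g. hosted API, open weights

# Example record a provider might keep ready at launch (values are hypothetical).
doc = ModelDocumentation(
    model_name="example-model",
    architecture="decoder-only transformer",
    training_compute_flop=1e25,
    energy_consumption_kwh=2.5e6,
    data_sources=["licensed corpus", "public web crawl"],
    distribution_channels=["hosted API"],
)
```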

The Copyright Chapter establishes that web crawlers must respect not only technological access barriers (paywalls, DRM) but also machine-readable rights reservations such as robots.txt. In addition, it requires providers to put technical and contractual safeguards in place that prevent their models from producing plagiarism or other infringing material [1].
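As a minimal sketch of the robots.txt part of that obligation, the example below checks a site's robots.txt before fetching a page, using Python's standard library. The user-agent name is a hypothetical placeholder, and the Code does not prescribe this implementation; real crawling pipelines also need to honor other machine-readable reservations (such as TDM opt-outs), not just robots.txt.

```python
# Sketch: consult robots.txt before crawling a URL (illustrative only).
from urllib import robotparser
from urllib.parse import urlparse

def may_crawl(url: str, user_agent: str = "ExampleGPAIBot") -> bool:
    """Return True only if the site's robots.txt allows this user agent to fetch the URL."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse robots.txt
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Only fetch the page if the rights reservation in robots.txt permits it.
    print(may_crawl("https://example.com/some/page"))
```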

For the most powerful models, those posing potential "systemic risk", an additional layer applies. Providers of these models must continuously identify, test and mitigate risks, from the first training run until well after launch. Serious incidents, such as large-scale data breaches or damage to public health, must be reported quickly: in the case of a cyber breach, for example, within five days [1].

Relevance to supervisors

  • AI Office (Brussels) - Through the semi-annual Safety and Security Model Reports, the AI Office receives a consistent stream of data on systemic risks, red-teaming results and incident reports. That standardization makes it easier to compare risks across the market and plan targeted enforcement actions.

  • Autoriteit Persoonsgegevens (NL) - The transparency form reveals in detail which data sources were used, how they were filtered and which bias-detection measures were applied. This allows the AP to assess whether the processing of (special categories of) personal data in training and validation sets is lawful.

  • Sectoral regulators (e.g. ACM, DNB) - They gain visibility into the underlying models that are being integrated into critical services. The incident-reporting regime and mandatory risk analyses provide early signals of potential financial or consumer risks.

These shared information flows create a "regulatory backbone": providers supply one set of standardized documents, onto which different authorities can graft their own supervisory tasks.

A new standard of trust

With the GPAI CoP, Europe for the first time has a unified, public framework that brings transparency, copyright protection and security standards together in one package. For providers, it is no longer a question of whether they set up documentation and risk processes, but how quickly they can reach the required level. For regulators, the Code offers a clear, harmonized basis for supervising a technology sector that is developing at lightning speed.

Anyone who brings a general-purpose AI model to the European market from now on will find that this voluntary code becomes the de facto minimum standard for trust, just in time for the AI Act's GPAI obligations taking effect in August 2025.

Source: Zahed Ashkara, Embed AI.

Resources

[1] European Commission (2025). Commission welcomes finalization of Code of Practice on General-Purpose AI.
