The European Union’s General-Purpose AI Code of Practice: The Key Points at a Glance

On the 10th of July 2025, the European Union’s newly established AI Office officially released the Code of Practice for Providers of General-Purpose AI Models (GPAI). This Code, promised under Article 56 of the EU AI Act (a landmark law regulating AI through a risk-based framework), is a crucial guiding tool for compliance with the AI Act obligations pertaining to GPAI.
As of February 2025, the AI Act has begun taking effect gradually, starting with the ban on prohibited practices. The provisions for GPAI models apply from the 2nd of August 2025 and become enforceable from the 2nd of August 2026 onwards. The GPAI Code of Practice is intended to help providers prepare for compliance. It is important to note that the Code is subject to regular updates, with additional guidelines from the European Commission expected to be published later in July. In this way, the Code serves as the regulatory vehicle through which the EU keeps its rules in line with technological advances in AI systems.
The Code is structured into three main chapters: Transparency, Copyright, and Safety and Security. Each chapter consists of commitments, accompanied by corresponding implementation measures. The Safety and Security chapter only applies to GPAI models classified as posing systemic risk. With the release of the Code, the EU addresses several ambiguities in the AI Act where instructions for compliance had previously been left vague.
In this article, we present a clear and comprehensive summary of the key points from each chapter of the GPAI Code of Practice, giving you the insights you need at a glance.
General Information at a Glance:
- The Code of Practice is voluntary and not legally binding. However, this does not mean it lacks legal significance. While it may not be mandatory from a regulatory standpoint, adherence to the Code can help demonstrate that a provider has exercised a duty of care. As such, it may serve as evidence of responsible conduct, potentially reducing the provider's exposure to legal liability.
- It applies to all providers of GPAI models affected by the provisions coming into effect on the 2nd of August 2025 and 2026, respectively. (Some of the obligations do not apply to GPAI models released under a free and open-source license, unless the model presents systemic risk.)
- Voluntarily signing the Code allows GPAI providers to demonstrate their commitment to responsible AI practices; however, signing the Code does not in itself constitute evidence of compliance with the obligations of the AI Act.
- In the Safety and Security Chapter, the Code includes a glossary clarifying terminology around GPAI and systemic risk within the AI Act.
- Considering the limited capabilities and resources of SMEs and start-ups, the outlined measures are to be interpreted in proportion to the size of the organisation implementing them.
Chapter 1: Transparency
This chapter supports compliance with the documentation obligations under Article 53(1)(a) and (b) and Annexes XI and XII. Additional documentation requirements for models with systemic risk are covered separately in the Safety and Security Chapter. The commitments involve:
- Diligent Documentation: Signatories commit to preparing and maintaining up-to-date model documentation for downstream providers and, upon request, for the AI Office, using the provided Model Documentation Form enclosed within the Code as a baseline.
- Transparency: Signatories must ensure contact information is accessible and share relevant documentation upon request, while also being encouraged to publish parts of it publicly to support transparency.
- Data Integrity: They must guarantee the quality, integrity, and security of the model information provided, retain it as evidence of compliance, and protect it against unintended changes.
Chapter 2: Copyright
This chapter supports providers of GPAI in complying with the obligations described in Article 53(1)(c) of the AI Act, which pertain to copyright and related rights within the EU. It lays out how GPAI providers specifically can demonstrate compliance with the relevant EU copyright laws. Specific points are:
- Creating a Copyright Policy: Signatories must publish an up-to-date, EU-compliant copyright policy and provide clear information about their web crawlers, including data collection practices and copyright compliance.
- Legal Compliance and Lawful Crawling Practices: Signatories recognise that they remain fully subject to EU copyright law and must respect rightsholders’ control over their content. When using web crawlers to collect data, providers commit to accessing only legally available content and respecting access controls like robots.txt files.
- Safeguards and Redress: Providers must mitigate copyright risks through technical safeguards and offer a reliable contact point and redress mechanism for copyright-related complaints.
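The robots.txt convention referenced above can be honoured programmatically. Below is a minimal sketch using Python's standard-library `urllib.robotparser`; the crawler name `ExampleGPAIBot` and the robots.txt content are purely illustrative, and a real crawler would fetch the file from the target site rather than parse a hard-coded string:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content; in practice a crawler would
# download this from https://<site>/robots.txt before collecting data.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler identity

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_crawl(url: str) -> bool:
    """Return True only if the parsed robots.txt permits USER_AGENT to fetch url."""
    return parser.can_fetch(USER_AGENT, url)
```

A crawler built this way would call `may_crawl` before every request and simply skip any URL the site has marked off-limits, which is one straightforward way to operationalise the "respecting access controls" commitment.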
Chapter 3: Safety and Security
This chapter is the most detailed, as it applies exclusively to GPAI models assessed as posing systemic risk. It outlines eleven principles and nine commitments, offering guidance on how to safely operate such models within the EU market. Below, we summarise the EU’s key recommendations for ensuring the safety and security of GPAI models with systemic risk. The main provisions relate to:
- Providing Safety Frameworks: GPAI providers are required to draw up structured safety and security frameworks, following the templates provided in the Code, that detail how risks will be identified and mitigated throughout the entire AI lifecycle. They are encouraged to share these frameworks publicly and to collaborate with different stakeholders on improving existing safety measures.
- Updating Frameworks: Existing frameworks may become inadequate as AI technologies advance. Signatories therefore commit to adjusting their safety and security frameworks as needed to maintain and improve safety outcomes.
- Incident Monitoring: Signatories commit to continuously monitor their AI models and to diligently record and report incidents.
Conclusion
The GPAI Code of Practice, developed through a multistakeholder effort involving experts from academia, industry, and the public sector, offers the first concrete look at how the EU intends to support practical compliance with the AI Act. It codifies best practices for human-centered, responsible AI through clear commitments and corresponding implementation measures.
Although the Code has already faced criticism, with Meta refusing to sign over concerns about overregulation and broader industry calls to delay the AI Act, it still marks an important step forward. Despite the tensions, we see the GPAI Code as a foundational move toward industry-wide responsible AI. Paired with the AI Act, there is hope it will spark a Brussels Effect akin to what we have seen with GDPR. Encouragingly, major players like OpenAI have already pledged to sign the Code of Practice. As access to the EU market increasingly hinges on compliance with the AI Act, these standards may well have the chance to shape the global AI landscape.
In a Nutshell: What is the GPAI Code of Practice?
The European Union’s General Purpose AI Code of Practice, released on the 10th of July 2025, is a non-binding framework developed through a multistakeholder process to guide providers of general-purpose AI models in meeting AI Act obligations. It codifies established best practices for responsible, human-centered AI through commitments and corresponding measures across three chapters: Transparency, Copyright, and Safety and Security.
What does the Transparency Chapter require?
Signatories must maintain accurate, up-to-date documentation for their models, ensure accessibility and transparency of information, and safeguard the integrity and security of shared data.
What does the Copyright Chapter require?
Providers are required to publish a compliant copyright policy, use lawful data collection methods, respect rightsholders’ content, and implement copyright safeguards along with a reliable complaints mechanism.
What does the Safety and Security Chapter require?
Signatories providing GPAI models classified as posing systemic risk must establish and regularly update safety frameworks, continuously monitor their models, and report incidents to ensure ongoing compliance and risk mitigation.
Which AI systems does the Code of Practice apply to?
General-purpose AI models are broadly defined in Article 3(63) of the AI Act as models capable of competently performing a wide range of distinct tasks (as opposed to being designed for a narrow, specific function). This definition is squarely aimed at large language models such as ChatGPT, Llama, or Gemini, as well as multi-modal foundation models capable of handling and reasoning across multiple data types (text, image, audio). In practice, these provisions therefore apply primarily to firms developing foundation models, and less so to those merely using them. They may, however, also apply to deployers who offer substantially modified foundation models to EU customers.