Who Is the EU AI Act For and What Are Their Obligations?
On 1 August 2024, the European Union Artificial Intelligence Act (EU AI Act) entered into force, marking a key moment in AI regulation. The legislation introduces a tiered, risk-based approach to AI governance for any AI system placed on the market or put into service in the EU.
The EU AI Act’s implementation will be gradual, allowing organisations time to adapt. However, it is important to note that from 2 February 2025, all AI practices classified as “prohibited” under the Act will be illegal and fully sanctionable. It is therefore urgent that all organisations developing or using AI systems on the EU market ensure they comply with the new rules to avoid hefty sanctions.
The first step towards compliance with the AI Act is identifying your organisation’s role in the AI value chain. The Act defines these roles as “operator classes”, each carrying specific legal obligations. This article provides an overview of these roles and outlines their responsibilities under the new legislation.
The Operator Classes
Legislators recognise that the value chain for AI systems is complex and that responsibility for compliance should be distributed fairly. In general, providers, deployers, and any other third parties involved with AI systems are obliged to cooperate on compliance at every step along the AI value chain. They are expected to hold other operators accountable through compliance assessments, continuous documentation, and the reporting of malfunctions or regulatory breaches.
Article 3 of the EU AI Act defines six classes of operators: providers, deployers, distributors, importers, authorised representatives, and product manufacturers. Operators may fall into more than one category. For example, an organisation that acts as both provider and distributor of an AI system must fulfil the responsibilities attributed to both roles.
Provider
The AI Act defines a provider as…
“(…) a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”
Put simply, a provider is any person or organisation that develops an AI system and places it on the market. Examples include product manufacturers who incorporate AI systems into their products; developers who create and directly offer AI systems to users or deploy them for their own use; and organisations that modify the intended purpose of an AI system already on the market.
The provider is the main operator subject to the controls enacted by the AI Act. The Act establishes a risk-based approach, with provider responsibilities varying according to the risk level of the AI system. These responsibilities extend beyond the initial market introduction: providers must continuously monitor their system, report serious incidents, and take corrective measures to ensure ongoing compliance and safety. For systems classified as “prohibited”, providers must take immediate action: these systems must be decommissioned by 2 February 2025, as they become illegal after that date.
For AI systems with minimal or limited risk, provider responsibilities are less stringent but still important. These typically include ensuring transparency by clearly informing users that they are interacting with an AI system. Providers must also comply with existing EU regulations on data protection, consumer protection, and product safety. Furthermore, they are encouraged to implement voluntary codes of conduct to maintain ethical standards.
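The Act does not prescribe how such a disclosure must be implemented. Purely as an illustration, the following Python sketch shows one minimal way a chatbot operator might surface the notice; the wording, function name, and placement are assumptions made for this example, not requirements of the Act.

```python
# Minimal sketch: prepend an AI disclosure to the first reply of a chat session.
# The disclosure text and function name are illustrative assumptions only.

AI_DISCLOSURE = (
    "Please note: you are interacting with an AI system. "
    "Responses are generated automatically."
)

def respond(model_reply: str, first_turn: bool) -> str:
    """Attach a clear AI disclosure to the first reply in a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(respond("Hi, how can I help you today?", first_turn=True))
```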
High-risk AI systems face the most comprehensive requirements. While this list may appear daunting, it's designed with a crucial purpose: to preemptively address potential issues throughout the AI system's lifecycle. By adhering to these requirements, providers not only ensure compliance but also contribute to the development of trustworthy and ethical AI systems.
Specific provisions may apply, but in essence all providers of these systems must comply with a substantive list of requirements set out in Chapter III, Section 3, Article 16. In summary, the main requirements are to…
- Create, and demonstrate upon request, the corresponding documentation for the AI system in question. This includes a “conformity assessment” and a declaration of conformity in line with EU requirements. Following that, the CE marking must be affixed to the AI system to indicate successful assessment and compliance with EU standards.
- Put a technical monitoring system in place that logs the performance of the AI system and demonstrates adherence to the regulations pre- and post-market (a minimal illustrative sketch follows this list).
- Implement a risk management system covering the AI’s life cycle.
- Ensure data governance and high-quality data standards.
- Establish a quality management system across all stages of development, training, distribution, and usage of an AI system, proportionate to the size of the provider’s organisation.
- Ensure appropriate human oversight and transparency to those interacting with the system.
- Register their AI system, whether classified as high-risk or otherwise, in the publicly available EU database maintained by the European Commission.
- Report any post-market malfunction or breach of regulations.
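The Act leaves the technical design of such monitoring and logging to the provider. As a minimal, purely illustrative sketch (the system identifier, metric, and threshold below are invented assumptions, not values from the Act), a provider might keep timestamped, machine-readable audit records and flag performance drops as potential malfunctions like this:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log: one JSON record per line for machine-readable evidence.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event_type: str, payload: dict) -> None:
    """Write a timestamped audit record for the monitored AI system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": "example-system-v1",  # hypothetical identifier
        "event_type": event_type,          # e.g. "prediction", "malfunction_suspected"
        "payload": payload,
    }
    logging.info(json.dumps(record))

def check_performance(window_accuracy: float, threshold: float = 0.90) -> None:
    """Flag a potential malfunction when monitored accuracy drops below a threshold."""
    if window_accuracy < threshold:
        log_event("malfunction_suspected",
                  {"metric": "accuracy", "observed": window_accuracy,
                   "threshold": threshold})

# Example usage: record an output, then run a post-market performance check.
log_event("prediction", {"input_ref": "case-0042", "output": "approved"})
check_performance(window_accuracy=0.84)
```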
Deployer
A deployer is defined as follows in the AI Act:
“(…) a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
Put simply, in contrast to a provider, a deployer, i.e. the user of a given AI system, has the chief obligation to operate it according to the provider’s instructions and to enter only data relevant to the system’s original purpose.
However, the Act introduces a crucial caveat: if a deployer modifies the AI system in a way that alters its intended purpose or represents a substantial change, they effectively assume the same obligations as a provider. This provision ensures that significant alterations to AI systems do not escape regulatory oversight. If no such alterations are made, a deployer is obligated to…
- Perform an assessment of the impact of their system usage on fundamental rights, detailing the purpose, extent, and time frame of usage, the affected parties, and the risks of harm. They may also rely on previous assessments carried out by the provider and are called to cooperate on any necessary updates to them. (This obligation applies only under specific circumstances, for example to high-risk AI systems that meet the criteria in 5(b) and (c).)
- Keep automated and detailed logs of usage, recording potential malfunctions, especially as a user of a high-risk AI system (a minimal sketch follows this list).
- Contact the provider and relevant authorities should malfunctions occur or suspicions of non-compliance arise and provide them with all information necessary to facilitate investigation and corrective measures.
- Maintain full transparency towards employees and their representatives about their being subject to a high-risk AI system, where the deployer is an employer.
- Provide human oversight and ensure competence in the use of the AI system through training and support measures. Records of the implementation of human oversight measures should also be kept and included in any assessment. Furthermore, deployers must have governance and risk-mitigation arrangements in place should internal or external complaints or malfunctions arise.
- Ensure the AI system has been properly registered in the appropriate databases and refrain from using a system should there be any suspicion that it is not registered.
- Ensure that their usage of the AI system remains within ethical standards and the permitted scope.
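Again, the Act mandates the record-keeping, not any particular format. A purely illustrative sketch of an automated deployer-side usage trail might look as follows; the field names and file path are assumptions made for this example.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class UsageRecord:
    """One deployer-side usage entry for a high-risk AI system (illustrative)."""
    operator: str                  # trained staff member providing human oversight
    input_ref: str                 # reference to the data entered, not the data itself
    within_intended_purpose: bool  # checked against the provider's instructions
    human_review: bool             # whether a human reviewed the system's output
    malfunction_noted: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(record: UsageRecord, path: str = "deployer_usage.jsonl") -> None:
    """Append the record as one JSON line, producing an automated, auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a trained operator uses the system for its intended purpose
# and reviews the output before acting on it.
append_record(UsageRecord(operator="j.doe", input_ref="applicant-7731",
                          within_intended_purpose=True, human_review=True))
```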
Distributor or Importer
The terms distributor and importer are defined as follows:
“(…) a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.” (Distributor)
“(…) a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.” (Importer)
These two roles will likely not apply to the majority of AI operators, but they are nonetheless critical given the global nature and portability of AI technology.
As intermediaries between the provider and the market, distributors and importers largely share the same obligations under the EU AI Act. Both are principally held to ensure that the provider of the AI system they distribute or import complies with the regulations. They must not release the AI system onto the market until full compliance with the EU AI Act has been verified. In particular, distributors and importers are under the following obligations, namely to…
- Confirm that the AI system bears the required CE marking, is accompanied by an EU declaration of conformity, and has thus been subjected to the conformity assessment procedure.
- Ensure that the provider has fulfilled its obligations under the EU AI Act, such as having drawn up the technical documentation and having appointed an authorised representative.
- Ensure that storage and transport do not affect compliance.
- Initiate corrective actions should the AI system not comply with the regulations. They may take corrective measures themselves and must report malfunctions and non-compliance to the provider and the relevant authorities, providing details and evidence of the breach and of any corrective measures taken. As soon as a breach of the regulations is suspected, the distributor or importer shall withdraw the AI system from the market.
- Provide all necessary and available information and documentation to authorities to prove conformity with regulations.
For importers, the AI Act adds the following responsibilities, namely to…
- Ensure that their name or trademark and contact information are attached to the high-risk AI system, on its packaging, or in its documentation.
- Keep records demonstrating the conformity of the AI system on the EU market for 10 years, such as the technical documentation, the declaration of conformity, and the instructions for use.
In short, distributors and importers are obliged to cooperate with authorities on mitigating the risks posed by the AI system on the market be it through provision of information or through actions required for conformity.
Authorised Representative
The AI Act defines an authorised representative as follows:
“(…) a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.”
Providers established outside the Union must appoint an EU-based authorised representative before making their high-risk AI systems available on the EU market. This ensures that all systems entering the EU market are held to the same standards and comply with the same legal and regulatory obligations. The authorised representative’s role is primarily to…
- Fulfil the tasks outlined in the mandate received from the provider, such as verifying that all necessary documentation has been issued and drawn up, and that all necessary procedures, such as assessments and registration, have been performed. If mandated by the provider, the representative may also be responsible for complying with registration obligations.
- Act as a point of contact, in possession of all necessary documentation and information, for distributors, importers, deployers, and authorities cooperating on full compliance with the regulations.
- Terminate their mandate should they suspect non-compliance with the regulations. Connected to this, the representative is also obliged to report such suspicions and the termination of the mandate to the provider and all relevant authorities, including documentation of their reasoning.
Protection of Competitiveness
The responsibilities listed above entail considerable administrative and technical effort. A common concern with safety regulations such as the EU AI Act is that increased bureaucracy and organisational overhead may decrease competitiveness or hinder the development of emergent technologies altogether.
However, the EU AI Act explicitly includes the objective of supporting innovation and the development of AI, albeit within a coherent ethical framework. Legislators therefore aim to enable fair competition and progress within the market. For example, for providers that qualify as small or medium-sized enterprises, the European Commission plans to reduce the financial and administrative burden by developing simplified guidelines for quality management systems. A provider’s resources and market share will thus also determine the relative effort required to meet its obligations.
In A Nutshell
All actors in the AI value chain are obliged to continuously monitor and ensure compliance with the EU AI Act. While the main responsibilities fall on the provider and the deployer, other parties will also be held accountable for failure to report or act on regulatory breaches.
The extent of the compliance mandate is, in essence, a function of the risk level the AI system poses and of the organisation’s role within the AI value chain. The riskier the AI system, and the greater the responsibility within its development, the greater the compliance effort will be. In turn, for most AI systems that do not pose any risks to fundamental rights, there is no compliance mandate at all. In this way, the EU seeks to tailor compliance efforts to strike a balance between innovation and safety.
No matter the role you play, at oxethica we can effectively support you in meeting your obligations. As an AI Governance Platform we provide expertise and a practical web tool, AI-conform, to assist in auditing and maintaining the compliance of your AI system.