Systemic Risk

February 24, 2026
4 min read
Delve into the Systemic Risk category within AI regulation: what it means and how to ensure compliance.

Definition

Systemic risk in the context of AI regulation refers to the potential for AI systems or technologies to cause widespread, interconnected, or cascading negative effects on society, the economy, or specific industries, particularly when those systems fail, behave unpredictably, or lead to unintended consequences. These risks are not just isolated incidents but have the potential to disrupt large-scale systems, affecting multiple stakeholders across various domains.

Systemic risk has emerged as a central concept under the EU Artificial Intelligence Act (EU AI Act). Systemic risks are those that have the potential to cause large-scale harm. Unlike ordinary risks, which might affect a single organisation or sector, systemic risks can impact entire communities, markets, or even society at large. In the context of AI, systemic risks are primarily associated with the most advanced general-purpose AI models. These models push the state of the art and can introduce novel hazards.

Examples of systemic risks include:

  • Lowering barriers to chemical or biological weapons development
  • Loss of control over autonomous AI models
  • Disruptions to critical infrastructure or public health
  • Spreading illegal, false, or discriminatory content
  • Threats to democratic processes, privacy, or economic security

Notably, some AI models below the cutting-edge threshold may also pose systemic risks due to their reach, scalability, or ability to act as a foundation for other models.

How the AI Act Identifies Systemic Risk

The AI Act takes a risk-based approach to regulation. For general-purpose AI models, systemic risk is determined in two main ways:

1. High-impact capabilities

Models are considered high-impact if their abilities match or exceed those of the most advanced AI available. Factors include:

  • Quantity and quality of training data
  • Computational resources used (measured in floating-point operations, or FLOP)
  • Level of autonomy and scalability
  • Access to tools and modalities (text, image, or speech processing)
  • Number of users or businesses affected
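The factors above can be pictured as a simple internal triage record. This is an illustrative sketch only: the field names and the tally are bookkeeping assumptions for an organisation's own review, not criteria or weightings defined by the AI Act.

```python
# Illustrative only: a minimal internal-triage record mirroring the
# high-impact capability factors listed above. Field names and the
# simple tally are assumptions, not a legal test under the AI Act.
from dataclasses import dataclass

@dataclass
class CapabilityFactors:
    quality_training_data: bool   # large, high-quality training data set
    frontier_scale_compute: bool  # compute comparable to state-of-the-art models
    high_autonomy: bool           # autonomous or highly scalable behaviour
    multimodal_tools: bool        # text/image/speech modalities, tool access
    broad_user_reach: bool        # many users or downstream businesses affected

    def factors_present(self) -> int:
        """Count how many high-impact factors apply (triage aid, not a legal test)."""
        return sum(vars(self).values())

# Hypothetical model profile: four of the five factors apply.
profile = CapabilityFactors(True, True, False, True, True)
print(profile.factors_present())  # prints 4
```

A flat count like this is deliberately naive; in practice each factor would need qualitative assessment and documentation rather than a boolean.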

2. Computational threshold

Models that exceed a threshold of 10^25 FLOP during training are presumed to pose systemic risk. This threshold reflects the scale of computation needed to reach cutting-edge capabilities, currently estimated to cost tens of millions of euros to train a single model.

The Commission can adjust this threshold over time, reflecting changes in technology and ensuring that the most advanced models continue to be monitored.
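To make the threshold concrete, a rough back-of-the-envelope check is possible. The 6 × N × D approximation (training FLOP ≈ 6 × parameters × training tokens) is a common community heuristic, not part of the Act, and the model figures below are hypothetical.

```python
# Rough sketch: checking whether estimated training compute crosses the
# AI Act's 10^25 FLOP presumption threshold. Uses the 6*N*D heuristic
# (a community rule of thumb, not the Act's method); figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption threshold under the AI Act

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flop(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Hypothetical frontier model: 1.8 trillion parameters, 13 trillion tokens.
flop = estimated_training_flop(1.8e12, 13e12)
print(f"{flop:.2e} FLOP -> presumed systemic risk: {presumed_systemic_risk(1.8e12, 13e12)}")
```

Under these assumed figures the estimate lands above 10^25 FLOP, so the model would fall under the presumption; a smaller model (say, a few billion parameters on a trillion tokens) comes out orders of magnitude below it.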

Challenges of Systemic Risk

Despite clear definitions, systemic risk is inherently uncertain. The AI Act’s approach introduces several issues:

Institutionalised uncertainty: Uncertainty about systemic risk is embedded within the law itself, leaving regulators and companies to interpret and operationalise it.

Delegation to private actors: Providers of AI models are responsible for defining whether their models present systemic risks, creating potential gaps in oversight.

Model vs system distinction: Systemic risk is assessed at the model level, but the interaction of models within wider AI systems can also create risks that go unaddressed.

In essence, systemic risk acts as a proxy for uncertainty about the broader societal impact of AI, rather than a precise, quantifiable category.

Managing Systemic and High Risks

For organisations, understanding systemic risk is vital to governance and compliance. Steps include:

Building governance structures: Governance committees should include representatives from technology, cybersecurity, legal, compliance, HR, and any other teams that interact with AI.

Risk assessment: Identify systemic and high-risk AI applications, considering potential societal harm and model capabilities.

Controls and compliance: Implement testing, documentation, transparency, and reporting requirements in line with EU or local laws.

High-risk AI systems, a category distinct from general-purpose models posing systemic risk, are classified by their function. These include AI used in critical infrastructure, employment decisions, education, biometric identification, law enforcement, and access to essential services.
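The functional classification can be sketched as a simple membership check. The set below paraphrases the areas named in this article; the Act's actual Annex III wording is considerably more detailed, so this is a triage aid, not a legal determination.

```python
# Sketch: flagging whether an AI system's function falls in an area this
# article lists as high-risk. The set paraphrases the article's list; the
# AI Act's real classification is more granular and context-dependent.

HIGH_RISK_AREAS = {
    "critical infrastructure",
    "employment decisions",
    "education",
    "biometric identification",
    "law enforcement",
    "access to essential services",
}

def is_high_risk(function_area: str) -> bool:
    """Case-insensitive membership check against the areas listed above."""
    return function_area.strip().lower() in HIGH_RISK_AREAS

print(is_high_risk("Biometric identification"))  # True
print(is_high_risk("video game NPC dialogue"))   # False
```

A real compliance workflow would map each deployed system to the precise Annex III categories and record the reasoning, rather than rely on string matching.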

Looking Ahead

The AI Act represents the EU’s first major attempt to regulate AI in a proportionate, risk-based manner. Systemic risk highlights the complexities of governing rapidly evolving technologies. While the concept introduces uncertainty, it also ensures that regulators and businesses remain alert to the potential widespread consequences of advanced AI.

Key takeaways

  • Systemic risk refers to the potential for large-scale harm caused by the most advanced or widely impactful AI models.
  • General-purpose AI models with high-impact capabilities or exceeding a computational threshold are presumed to pose systemic risk.
  • The AI Act embeds uncertainty into its rules, delegating risk assessment to model providers and regulators.
  • Governance requires cross-functional teams, risk assessments, and clear policies for testing, documentation, and reporting.
  • High-risk AI systems are defined by their function and potential societal impact, covering areas such as critical infrastructure, employment, education, and law enforcement.
