Policy Updates
4 min read · 10 April 2026

The EU's AI Rule Has a Threshold Problem — And It's About to Catch Up With Foundation Model Providers

The EU AI Act uses a 2023 compute threshold to flag AI models as "systemic risk." As of April 2026, dozens of models are expected to blow past it — yet the Commission has not committed to updating the number. Here's why that matters for every business that uses or builds on top of foundation models.

The Number Nobody Is Talking About

Somewhere in the EU AI Act is a number that determines whether a foundation AI model faces extra scrutiny. That number is 10²⁵ FLOPs — a measure of how much computing power was used to train the model. Cross that line, and your model is classified as posing "systemic risk" under the Act, triggering obligations around adversarial testing, incident reporting, and cybersecurity documentation.[1]
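To get a feel for the scale, training compute is commonly approximated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens. The sketch below applies that approximation to two hypothetical training runs; the parameter and token counts are illustrative assumptions, not estimates for any real model.

```python
# Back-of-the-envelope check against the AI Act's systemic-risk threshold.
# Training compute is approximated with the common 6ND rule of thumb:
# FLOPs ~= 6 * parameters * training tokens.
# Both runs below are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, as the AI Act is currently written

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6ND approximation."""
    return 6 * parameters * tokens

hypothetical_runs = {
    "large frontier run": (1.0e12, 15e12),    # 1T parameters, 15T tokens
    "efficient mid-size run": (70e9, 15e12),  # 70B parameters, 15T tokens
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimate_training_flops(params, tokens)
    verdict = "crosses" if flops >= SYSTEMIC_RISK_THRESHOLD else "stays under"
    print(f"{name}: ~{flops:.1e} FLOPs, {verdict} the threshold")
```

The second run stays under the line by more than an order of magnitude, yet a model trained that way can still be highly capable. That gap between compute and capability is the mismatch the rest of this piece is about.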

Here is the problem: that threshold was calibrated in 2023. It reflects what a demanding model looked like two and a half years ago. As of April 2026, that bar is about to be cleared by dozens of models in active development — and the European Commission has not committed to updating it.

This is one of the more technical corners of the EU AI Act, but it has real consequences. And as full enforcement kicks in from August 2026, it is becoming an urgent question for the AI companies that build the foundation models European businesses run on.

Why the Threshold Was Set Where It Was

The 10²⁵ FLOPs figure was chosen based on an analysis of the most capable models at the time the AI Act was finalised — GPT-4, Claude 3 Opus, and Gemini Ultra were the benchmarks. The idea was sensible: only the most computationally expensive training runs produced models with genuinely frontier capabilities that could pose systemic risks to the European market.

The problem is that AI capability curves do not stay still. Efficient training methods, better hardware, and larger distributed runs mean that models trained at a fraction of the compute of GPT-4 can now match or exceed its performance on many tasks. The threshold, as written, is based on a training run's cost — not on what the model can actually do.[2]

As MLex reported on April 9, 2026, experts and EU member state governments have both urged the Commission to update the threshold to reflect current capability benchmarks. The Commission's position so far: no commitment either way.[1]

What Systemic Risk Actually Means — And Why It Matters

If a GPAI model crosses the 10²⁵ FLOPs threshold, its provider faces a specific set of obligations under the AI Act:

  • Adversarial testing: The model must be tested for vulnerabilities and systemic risks through adversarial means, with results reported to the European AI Office.
  • Incident reporting: Serious incidents — cyberattacks exploiting model vulnerabilities, unexpected behaviours causing harm — must be reported to the AI Office within days.
  • Cybersecurity documentation: Providers must document the model's attack surface, summarise measures taken to prevent misuse, and share this with downstream developers.
  • Model registry: The model must be registered in the EU's public AI database before deployment in the European market.[3]

These obligations are not trivial. They require ongoing monitoring, formal reporting processes, and technical documentation that smaller AI companies may struggle to maintain. The GPAI Code of Practice — the agreed compliance framework signed by major providers including Anthropic, Google, and OpenAI — operationalises these obligations. But the Code was built with the 2023 threshold in mind.

The Stakes Are Higher Than They Look

At first glance, this looks like a debate that only affects the handful of companies training frontier-level models. In practice, it has a much wider reach.

If the threshold stays fixed at 10²⁵ FLOPs while the industry moves past it, more and more models will be swept into the systemic risk category — including some that were never designed to pose systemic risk and may not meet the capability bar that the threshold was meant to capture.

At the same time, a compute-only line loses its regulatory meaning the longer it sits still. Models that genuinely have frontier capabilities sail through because they were trained efficiently, while less capable but still risky models trained on enormous compute runs get flagged instead. The signal becomes noise.

The AI Policy Bulletin framed the issue in broader terms this week: the longer the EU's rules remain unclear or inconsistently applied, the less likely they are to shape global AI development in the way the "Brussels Effect" assumes. For the GPAI framework to work as intended, the threshold question needs to be resolved before enforcement ramps up in August.

What This Means for SMEs

If you are a small or medium business, this threshold debate probably feels distant. You are not training frontier models. But you are almost certainly using them — through ChatGPT, Claude, Gemini, open-source models via API, or AI features embedded in the SaaS tools you already use.

Here is the practical connection: the GPAI obligations that apply to the providers of those models flow downstream to you in two ways.

First, documentation. If your AI vendor's model crosses the systemic risk threshold, the vendor must provide you with technical documentation and a summary of its training data — information you may need for your own AI Act obligations as a deployer. A vendor drowning in threshold uncertainty is less likely to give you clean documentation on time.

Second, pricing and availability. Compliance costs money. If a provider suddenly faces systemic risk obligations because their new model clears an outdated threshold, those costs get passed on — or the provider limits access to certain markets. European businesses using US-based foundation models have already seen access restrictions as providers navigated regulatory uncertainty.

For SMEs building products on top of GPAI models, the threshold question is worth monitoring. If you are using a specific model family for a product, check whether the provider has signalled that their next release might cross the line — and ask them directly what their compliance plans are for that scenario.

What Comes Next

The GPAI obligations — including the systemic risk framework — become enforceable from August 2, 2026. With trilogue negotiations on the AI Omnibus also targeting an April 28 political agreement, the Commission's position on the threshold will likely need to be clarified in the coming weeks.

Two paths are plausible: the Commission updates the threshold via delegated act — a streamlined legislative process that allows faster adjustment — or it leaves the threshold fixed and lets the GPAI Code of Practice evolve to compensate through guidance and interpretative documents.

Neither option is ideal. An updated threshold requires technical expertise and political will. A static threshold with flexible interpretation creates uncertainty that cuts both ways: providers do not know their obligations, and regulators do not know what they are enforcing.

For SMEs, the practical advice mirrors what it always is under the AI Act: keep your AI inventory current, know who your model providers are, and ask them directly about their compliance timelines. The foundation model providers have largely committed to the GPAI Code of Practice. But the threshold question is an early stress test of whether the AI Act's implementation can move at the speed the market requires — and that test is happening right now.
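As a starting point for that inventory, here is a minimal sketch of the kind of record worth keeping per AI system. The fields and names are our suggestions, not a format the AI Act prescribes, and the example vendor is made up.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an SME's AI inventory. Illustrative fields, not an official schema."""
    system_name: str             # the tool or product where the model is used
    provider: str                # who supplies the underlying foundation model
    model_family: str            # which GPAI model it runs on
    systemic_risk_model: bool    # has the provider's model been designated systemic risk?
    docs_received: bool          # technical documentation / training-data summary on file?
    compliance_notes: str = ""   # what the vendor has said about its August 2026 plans

inventory = [
    AISystemRecord("support chatbot", "ExampleAI", "example-model-4",
                   systemic_risk_model=True, docs_received=False,
                   compliance_notes="asked April 2026, awaiting reply"),
]

# Surface the gaps: systemic-risk models with no documentation on file.
for record in inventory:
    if record.systemic_risk_model and not record.docs_received:
        print(f"Follow up with {record.provider} about {record.system_name}")
```

Even a list this simple makes the two downstream effects above concrete: it tells you which vendors owe you documentation, and which products are exposed if a provider's compliance costs or access decisions change.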

This article is for informational purposes only and does not constitute legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →