Policy Updates
4 min read · 6 April 2026

The AI Act Has a Loophole — And the Delay Makes It Bigger

The EU AI Act is non-retroactive. Combined with the Omnibus delay pushing high-risk deadlines to December 2027, critics warn that AI systems deployed before then may never need to comply. Here's what that means for SMEs.

The Delay Everyone Welcomed Has a Side Effect

When the European Parliament voted to push high-risk AI deadlines from August 2026 to December 2027, the reaction from industry was relief. More time to prepare. More time for standards to land. Reasonable.

But a new analysis published this week by TechPolicy.Press highlights an uncomfortable consequence that most businesses — and many policymakers — haven't fully reckoned with: the EU AI Act is non-retroactive. Combined with the delay, that creates a structural gap in the law that could leave some of the riskiest AI systems permanently outside its reach.[1]

How Article 111 Creates the Gap

Article 111 of the AI Act states that high-risk AI systems placed on the market before the compliance deadline only need to meet the new rules if they undergo "significant changes" after that date.[2] Under the original timeline, the cutoff was August 2, 2026. Under the proposed Omnibus amendments, it shifts to December 2, 2027.

In practice, that means an AI hiring tool, a credit scoring system, or a biometric identification platform that launches before December 2027 could operate indefinitely without ever needing to comply with the AI Act's high-risk requirements — no conformity assessment, no technical documentation, no mandatory human oversight — as long as the provider doesn't make substantial changes to the system.

Laura Caroli, a former co-negotiator of the AI Act, put it plainly in her interview with TechPolicy.Press: a hiring AI placed on the market before December 2027 "may remain outside the AI Act indefinitely, unless it is substantially altered after that date."

A Race to Market?

The concern isn't just theoretical. MEP Sergey Lagodinsky of the Greens described the provision as "a loophole" and warned it creates "an incentive to put things on the market before the Act enters into force, and especially put on the market AI systems which are high risk."

The logic is straightforward: the higher the compliance burden, the greater the incentive to launch before the rules apply. Companies facing the most demanding conformity assessments — biometrics, employment, credit — have the most to gain from moving early. Bram Vranken of the Corporate Europe Observatory warned that "some companies might abuse this timeline and quickly push risky AI systems onto the market without having to comply with the Act."[3]

Whether this actually happens at scale remains to be seen. Many responsible companies will comply regardless — reputational risk and customer trust aren't governed by legal cutoff dates. But the structural incentive exists, and it's worth understanding.

What This Means for SMEs

If you're an SME buying AI tools — especially in hiring, credit, or education — this has a direct practical implication. The AI vendor selling you a high-risk system may not be required to provide the compliance documentation the AI Act envisions if their system was placed on the market before December 2027.

That doesn't mean you should accept an undocumented tool. It means you need to be a smarter buyer:

  • Ask your vendor directly: Does this system comply with the EU AI Act's high-risk requirements? If not, when will it? Get the answer in writing.
  • Check your deployer obligations. Even if your vendor isn't legally required to comply yet, your obligations as a deployer — human oversight, usage logging, impact assessments — may still apply depending on how the final Omnibus text is worded.
  • Don't treat the loophole as a shield. If an AI hiring tool screens candidates unfairly, GDPR still applies. Anti-discrimination law still applies. The AI Act isn't the only regulation in play. A system that dodges the AI Act's requirements doesn't dodge accountability.

The Bigger Picture

This gap wasn't created by the Omnibus alone — Article 111 was in the original AI Act text. But every month the deadline shifts forward, the window of non-retroactive exemption grows wider. A system launched in 2025 was always going to be grandfathered. A system launched in November 2027 — after years of public debate about AI risks — being permanently exempt feels different.

The trilogue negotiations that will finalise the Omnibus text are expected to conclude before summer. One of the open questions is whether negotiators will tighten the Article 111 provisions or leave them as-is. For SMEs, the practical advice remains the same: don't rely on legal technicalities to protect you from real-world AI risk. Build your compliance foundation now — transparency disclosures, AI inventories, staff briefings — and evaluate your vendors on substance, not on whether they've found a way around the deadline.

This article is for informational purposes only and does not constitute legal advice.

Know your AI risk level in 10 minutes

Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before 2 August 2026.

Start the free audit →

⚠️ Not legal advice — for guidance purposes only