27 March 2026

EU Parliament Confirms AI Act Delay — High-Risk Deadline Pushed to December 2027

The European Parliament's plenary voted yesterday to adopt its official position on the Digital Omnibus on AI. Under that position, high-risk AI compliance shifts to December 2027. Here's what it means for SMEs.

It's Official: Parliament Votes to Delay High-Risk AI Deadline

Yesterday, 26 March 2026, the European Parliament's full plenary in Brussels voted to adopt its official negotiating position on the Digital Omnibus on AI — the package of amendments designed to simplify the EU AI Act.[1]

This is a major milestone. The vote formalises Parliament's mandate ahead of trilogue negotiations with the Council of the EU. Both institutions now broadly agree on the key changes, which means there is a credible path for a new, extended high-risk compliance deadline to become law well before August 2026.

This article is for informational purposes only and does not constitute legal advice.

What Parliament Voted For

The plenary adopted the joint committee report (A10-0073/2026) prepared by the IMCO and LIBE committees, which was already approved on 18 March by 101 votes to 9, with 8 abstentions — an overwhelming majority.[2]

The Parliament's key positions are:

  • High-risk AI systems (Annex III) — the deadline shifts from 2 August 2026 to 2 December 2027. This covers AI used in biometrics, employment decisions, education, essential services, law enforcement, and border management.
  • AI embedded in regulated products (medical devices, machinery, toys) — new deadline of 2 August 2028.
  • AI watermarking / content labelling — Parliament proposes a shorter extension than the Commission suggested: a new deadline of 2 November 2026 (not February 2027).
  • New ban on "nudifier" apps — AI systems that generate or manipulate sexually explicit images to resemble identifiable real people without their consent will be prohibited.
  • SME support extended to small mid-cap enterprises — lighter compliance guidance and regulatory sandbox access will apply beyond the current SME thresholds.

Why Did They Vote to Delay?

The reason for the extension is practical, not political. The technical standards that businesses need in order to demonstrate compliance — being developed by the European standardisation bodies CEN and CENELEC — will not be ready by August 2026. These standards define what "good" looks like in practice: how to document an AI system, how to assess performance, and what human oversight mechanisms must look like.

Forcing companies to comply against undefined standards would create compliance theatre — businesses ticking boxes without any clear benchmark. The Parliament's rapporteurs concluded that a fixed extension to December 2027 is better than artificial urgency against a moving target.

What Happens Next: Trilogue

Yesterday's vote opens the next phase of the EU legislative process: trilogue — closed-door negotiations between the European Parliament, the Council of the EU (which adopted its own mandate on 13 March), and the European Commission.[3]

Given that both institutions agree on the core elements — the December 2027 high-risk deadline, the nudifier ban, the SME support extension — there is genuine optimism that negotiations could conclude within a few months. But "a few months" in EU terms could still mean July or August 2026. The final amended text isn't law until it clears trilogue, is formally published in the Official Journal of the EU, and enters into force.

The 2 August 2026 date remains legally valid until that process is complete.

The Critical Distinction: What Is NOT Being Delayed

This is the most important thing for SMEs to understand. Not all of the EU AI Act is affected by the Omnibus amendments. Several key obligations remain firmly on the August 2026 timetable:

  • Transparency obligations (Article 50) — chatbots must disclose they're AI, emotion recognition must notify users, AI-generated content must be labelled. Still August 2026.
  • Prohibited AI practices (Article 5) — social scoring, mass biometric surveillance, subliminal manipulation. Already in force since February 2025. No change.
  • GPAI model obligations — rules for foundation models (GPT, Gemini, etc.) already in force since August 2025. No change.
  • AI literacy (Article 4) — already required since February 2025, though enforcement is expected to be proportionate.

If your main concern has been high-risk AI conformity assessment paperwork, there's genuine relief coming. If your concern has been transparency disclosures on your chatbot or AI-generated content labelling, nothing has changed for you.

What SMEs Should Do Right Now

Here's practical guidance based on where things stand today:

  • If you use customer-facing chatbots: Add an AI disclosure to the first message. That deadline is August 2026, it isn't changing, and this is a five-minute fix.
  • If you publish AI-generated marketing content: Label it. The watermarking deadline may arrive as early as November 2026 under Parliament's position.
  • If you use high-risk AI in HR, credit, or education: The full conformity assessment timeline is likely to shift to December 2027 — but start your documentation work now. Early compliance is a competitive advantage, and you don't want to scramble in 2027.
  • Don't use "the deadline is changing" as a reason to pause entirely. The transparency and prohibited practices rules are immovable and enforcement-ready.
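The chatbot disclosure in the first bullet really can be a five-minute fix. Here is a minimal sketch of the idea — the function names, disclosure wording, and label text are illustrative choices for this article, not taken from any official guidance or SDK:

```python
# Illustrative sketch: surface an AI disclosure before a chatbot's first
# reply, and prefix AI-generated marketing copy with a visible label.
# Wording and names are assumptions, not official Article 50 phrasing.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
AI_CONTENT_LABEL = "[AI-generated content]"


def first_reply(bot_message: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the bot's first message only."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{bot_message}"
    return bot_message


def label_generated(text: str) -> str:
    """Mark a piece of AI-generated copy with a visible label."""
    return f"{AI_CONTENT_LABEL} {text}"


if __name__ == "__main__":
    print(first_reply("How can I help you today?", is_first_turn=True))
    print(label_generated("Spring sale: 20% off all plans."))
```

Exactly where the disclosure appears (first message, persistent banner, or both) is a design choice; the point is that it is shown before the user relies on the conversation being with a human.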

The Bottom Line

Yesterday's plenary vote is the most significant AI Act development since the law entered into force. The EU is officially acknowledging that the August 2026 high-risk deadline was unrealistic given the state of standards development, and is building a more workable timeline.

For SMEs, this is good news — but not a reason to disengage from compliance. The transparency obligations that apply to most everyday AI use cases are unchanged. And the extended window for high-risk compliance is an opportunity to do it properly, not an invitation to do it last-minute in 2027.

Use the EU AI Audit free assessment to understand exactly where you stand — it takes under ten minutes and maps your AI systems to the obligations that actually apply to you.

