News · 5 min read · 25 March 2026

High-Risk AI Deadlines Extended — What the EU Actually Agreed

Parliament and Council have agreed new fixed deadlines: December 2027 and August 2028 for high-risk AI. Here's what changed, what didn't, and what it means on Monday morning.

Two New Dates. One Important Caveat.

The EU Parliament and Council have both agreed: high-risk AI compliance deadlines are being pushed back. The new dates are December 2, 2027 for standalone high-risk AI systems, and August 2, 2028 for AI embedded in regulated physical products like medical devices and machinery. The original deadline for both was August 2026.[3]

The caveat: this isn't law yet. Both institutions have adopted their negotiating positions, which means trilogue — the closed-door negotiations between Parliament, Council, and Commission — can begin. Given that both sides broadly agree on the core elements, the amended text could be finalised before August 2026. But "could" and "will" are different things. Until it's published in the Official Journal, August 2026 is still the legal deadline.

Why the Extension Happened

The problem was never political will — it was technical standards. The EU standardisation bodies CEN and CENELEC were supposed to publish harmonised standards that tell companies what "compliant high-risk AI" actually looks like in practice. They're running late. Standards are now expected late 2026 at the earliest. Forcing businesses to comply against standards that don't exist yet would produce compliance theatre, not compliance.[2]

So both institutions agreed to set fixed dates tied to when those standards will realistically land. This is the EU doing something unusual: admitting its own timeline was unrealistic and adjusting accordingly.

What Counts as High-Risk

Annex III of the AI Act defines the list: AI used in biometrics, hiring and HR management, credit and insurance, education, critical infrastructure, law enforcement, and border control. If you use AI to screen job applicants, score creditworthiness, or assess student performance, you're in high-risk territory.

General productivity tools — writing assistants, ChatGPT for drafting, spam filters — are not high-risk. Customer service chatbots are limited risk. The extension primarily benefits companies using AI in those Annex III categories.[4]
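The tiering described above can be sketched as a simple lookup. This is a hypothetical illustration only: the category names and the `risk_tier` function are inventions for this sketch, and real classification under the AI Act requires legal analysis of the specific system, not a string match.

```python
# Hypothetical sketch of the simplified risk tiering described above.
# Category names are illustrative, not official Annex III terminology.

HIGH_RISK_CATEGORIES = {
    "biometrics", "hiring", "hr_management", "credit", "insurance",
    "education", "critical_infrastructure", "law_enforcement",
    "border_control",
}

LIMITED_RISK_CATEGORIES = {"customer_service_chatbot"}


def risk_tier(category: str) -> str:
    """Map a use-case category to a simplified EU AI Act risk tier."""
    if category in HIGH_RISK_CATEGORIES:
        return "high-risk"      # Annex III: deadlines proposed for Dec 2027 / Aug 2028
    if category in LIMITED_RISK_CATEGORIES:
        return "limited-risk"   # Article 50 transparency: August 2026
    return "minimal-risk"       # e.g. writing assistants, spam filters


print(risk_tier("hiring"))                    # high-risk
print(risk_tier("customer_service_chatbot"))  # limited-risk
print(risk_tier("spam_filter"))               # minimal-risk
```

Note the asymmetry this sketch encodes: only the high-risk tier benefits from the extension, while the limited-risk transparency date is unchanged.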

The New Ban Worth Knowing About

Alongside the deadline changes, both Parliament and Council are adding a new prohibition: AI systems that generate non-consensual sexually explicit imagery of identifiable real people — so-called "nudifier" apps — will be explicitly banned. This is a direct legislative response to documented harms. For most businesses, it's background noise. But it signals the EU is willing to add prohibitions as harmful use cases emerge.

What Hasn't Changed

Transparency obligations under Article 50 — chatbot disclosures, AI content labelling, emotion recognition notifications — are not affected by this extension. August 2, 2026 remains the date. The prohibited AI practices ban has been in force since February 2025. Neither is part of the high-risk delay.

There's also a small addition from the Council's position worth watching: even if your AI system is self-assessed as not high-risk, you may still need to register it in the EU AI database. That's new, and it increases transparency obligations beyond what the original text required.[1]

The Practical Bottom Line

If you've been losing sleep over high-risk AI conformity assessments due in August 2026, genuine relief is coming, just not quite yet. Keep working on it. When the Omnibus package amending the Act is formally adopted and published, adjust your timeline.

If your main AI compliance concern is chatbot disclosures and basic transparency, this extension doesn't change anything for you. August 2026 is your date. The good news is that for most SMEs, that compliance work takes hours, not months.

This article is for informational purposes only and is not legal advice.

Know Your EU AI Risk Level in 10 Minutes

Our free audit walks you through the exact questions for classifying your AI systems and shows what you need to do before 2 August 2026.

Start the free audit →

⚠️ Not legal advice; for orientation only