Big News From Brussels: High-Risk AI Gets More Time
In a significant development for EU businesses, both the European Parliament committees and the Council of the EU have agreed on new, fixed compliance deadlines for high-risk AI systems — pushing back key obligations by more than a year. This is part of the broader Digital Omnibus Package, a reform effort aimed at simplifying the EU's digital regulations.
Here's the short version: if your business uses or deploys high-risk AI systems (more on what that means below), you now have until 2 December 2027 or 2 August 2028 to fully comply — depending on your situation. The August 2026 deadline that was looming for many companies has been pushed back.
This article is for informational purposes only and does not constitute legal advice.
What Changed and When
The story starts with the European Commission's Digital Omnibus Package, presented in November 2025. The proposal was designed to simplify and streamline several digital regulations, including the EU AI Act.
On 13 March 2026, the Council of the EU agreed its negotiating position on the package. Then, on 18 March 2026, the committees of the European Parliament voted in favour of the same approach — with 101 votes in favour, 9 against, and 8 abstentions.
The two institutions now agree on the new timeline:
- 2 December 2027 — for standalone high-risk AI systems (those that aren't embedded in a physical product)
- 2 August 2028 — for high-risk AI systems embedded in products covered by EU sectoral safety legislation (think medical devices, machinery, vehicles)
The reason for the extension: the key technical standards businesses need in order to meet the high-risk requirements were not going to be ready in time. Rather than forcing companies to comply with rules whose supporting standards did not yet exist, the EU decided to set fixed dates tied to when those standards will realistically be available.

The Parliament's plenary vote is expected on 26 March 2026, after which formal trilogue negotiations with the Council will begin to finalise the text.
What Counts as a High-Risk AI System?
If you're an SME wondering whether this affects you, the key question is whether you use AI in any of these areas:
- Human resources — automated CV screening, performance monitoring, or employment decisions
- Credit and insurance — AI-assisted creditworthiness assessments
- Education — AI that evaluates students or determines access to education
- Critical infrastructure — AI in energy, water, or transport systems
- Safety components in regulated products (medical devices, machinery, etc.)
General productivity tools — AI writing assistants, translation tools, basic chatbots — are typically not high-risk under the current classification.
New Bans: Deepfakes and Abusive Content
Alongside the deadline changes, both the Council and Parliament are adding a new prohibition to the AI Act: AI systems that generate non-consensual sexual and intimate content, including child sexual abuse material, will be explicitly banned. This is a direct response to widely reported abuses of so-called "nudifier" AI tools.
The Parliament goes slightly further, proposing to ban "nudifier" systems that create or manipulate sexually explicit images resembling identifiable real persons without consent — unless the system has effective safeguards built in.
These bans are not about penalising the victims of such content — they target the AI tools themselves and their providers.
Other Changes Worth Noting
Beyond deadlines and bans, the Digital Omnibus Package includes a few other changes relevant to SMEs:
- SME support extended to small mid-caps: The Parliament's position backs extending support measures (lighter compliance guidance, regulatory sandboxes) to small mid-cap enterprises, not just micro-businesses and SMEs.
- Watermarking deadline shortened: MEPs want a shorter extension for AI watermarking (labelling AI-generated content) — with a new compliance date of 2 November 2026. If your business produces AI-generated images, audio, or video at scale, this is worth monitoring.
- Registration obligations reinstated: Even if your AI system is self-assessed as not high-risk, you may still need to register it in the EU AI database under the Council's position. This is a notable addition that increases transparency obligations.
- AI regulatory sandboxes delayed: Sandbox programmes where businesses can test AI under regulatory supervision will now not be required until 2 December 2027.
What This Means for SMEs Right Now
If you were already working toward the August 2026 high-risk AI deadline: don't stop. The official text isn't final yet — trilogue negotiations still need to conclude. The August 2026 date remains on the books until the Omnibus amendments are formally adopted.
If you haven't started compliance work yet: this extension buys you time, but it does not eliminate your obligations. The requirements haven't changed — only the timeline has shifted. Use the extra time to:
- Inventory which AI systems you use and how they're classified
- Assign someone to own AI compliance internally
- Check whether transparency obligations (chatbot disclosures, AI content labelling) already apply to you under the August 2026 rules — those haven't moved
The core 2 August 2026 deadline for transparency requirements, AI literacy obligations, and General-Purpose AI model rules remains unchanged. Only the high-risk specific rules have been extended.
Bottom line: this is good news if you operate high-risk AI systems and were struggling to meet the timeline. For most SMEs using standard business AI tools, the immediate compliance picture hasn't changed significantly — August 2026 still requires action on transparency and literacy.
Sources
- [1] Ebner Stolz — EU Parliament and Council in favour of new deadlines and bans on AI regulation
- [2] EU AI Act Newsletter #98 — Parliament, Council and the Omnibus Showdown (Future of Life Institute)
- [3] European Parliament press release — MEPs support postponement of certain AI rules
- [4] Council of the EU — Position to streamline rules on artificial intelligence