The Quiet Annex Nobody Is Talking About
While most of the EU AI Act commentary in recent weeks has focused on the high-risk compliance deadline moving from August 2026 to December 2027, a different battle is playing out over something far more technical: Annex I.
Annex I of the AI Act is the list of EU sectoral product laws into which the AI Act's requirements will be integrated. It covers everything from machinery and medical devices to toys and radio equipment. Getting Annex I right matters because it determines which existing regulatory frameworks — and which experienced sectoral regulators — will handle AI compliance for products that already have safety oversight.
On April 8, 2026, CDT Europe joined 32 other civil society organisations in an open letter to the European Parliament and Council raising concerns about what Parliament's position on the AI Omnibus does to Annex I.[1] Their message: the proposed changes to Annex I could fundamentally reshape how AI is regulated across Europe's product sectors — and not in a way that benefits businesses or protects people.
What Parliament Wants to Do to Annex I
The Commission originally proposed keeping the AI Act's sectoral integration approach largely intact — AI requirements get added to existing EU product safety laws, with each sector retaining its own enforcement authority. That was the Annex I-A approach.
Parliament's position in the Omnibus goes significantly further. According to A&O Shearman's analysis of the trilogue negotiations, Parliament wants to move all Annex I-A product categories into Annex I-B and then horizontally integrate AI Act requirements across multiple sectoral laws simultaneously.[2] Rather than adding AI rules into each product regulation separately, this approach puts AI requirements into one place and lets them flow across multiple laws at once — machinery, toys, radio equipment, medical devices, pressure equipment, PPE, gas appliances, cableways.
The civil society letter calls on co-legislators to "preserve the integrity and scope of the AI Act and maintain Section A Annex I" — meaning the original structure that keeps sectoral integration specific and grounded in existing regulatory frameworks.
Why the Structure of Annex I Actually Matters
At first glance, this looks like a debate only lawyers and policy specialists would care about. But the structure of Annex I determines two things that have real practical consequences.
First, enforcement expertise. When an AI system embedded in a medical device malfunctions, the regulator who handles it should know medical devices. When a safety-critical AI in a machine causes harm, the relevant authority should understand machinery safety. Horizontal integration — applying AI rules across sectors from a single legal reference point — risks dispersing enforcement authority in ways that don't match the actual expertise on the ground. The concern from civil society is that this structural change could, over time, weaken the quality of enforcement for AI-related harms in regulated products.
Second, clarity for businesses. A horizontal integration approach sounds efficient. One set of AI rules, applicable everywhere. But in practice, when something goes wrong — when a company's AI-powered product is found to cause harm — the question of which regulatory framework applies becomes critical. Layering AI requirements across multiple sectoral laws from a single horizontal reference point, without clear sector-specific guidance, creates compliance uncertainty that could fall heaviest on smaller businesses that lack large regulatory affairs teams.
The Clock Is Ticking: 17 Days to the Political Trilogue
What makes this week's civil society letter particularly timely is the countdown. A political agreement on the consolidated AI Omnibus text is expected at the next formal trilogue meeting on April 28, 2026 — seventeen days from today.[2] If that timeline holds, the final text — including the fate of Annex I — could be locked in within weeks and published in the Official Journal by July.
The civil society letter is explicitly addressed to co-legislators in the context of the ongoing trilogue. Its purpose is to make sure that, in the rush to reach a political agreement before summer, the structural question of Annex I doesn't get traded away as a negotiating chip for something else.
What This Means for SMEs
If you manufacture or sell products in the EU that incorporate AI — machinery with AI safety controls, consumer devices with AI features, medical or health-adjacent products — the structure of Annex I affects your compliance pathway.
Under the original Annex I-A approach: your AI obligations sit inside your existing sectoral regulatory framework. You deal with the regulator you already know, using the compliance processes you already have. The AI Act adds requirements, but within a familiar structure.
Under Parliament's Annex I-B horizontal integration approach: the AI requirements you need to follow may not sit inside your sectoral law at all — they may sit in a separate horizontal layer that applies across sectors. That creates ambiguity about which authority to engage with and which standards to follow when your product crosses sectoral lines.
For SMEs specifically, that ambiguity is a cost. Larger companies have regulatory affairs teams that can navigate unclear frameworks. Smaller companies need clear, predictable rules — not structural uncertainty that requires expensive external advice to resolve.
The Bottom Line
The civil society letter on Annex I is not a broad attack on the Omnibus — the organisations signing it are not against simplification in principle. It is a specific warning about a structural change that could, in the view of 33 civil society groups, weaken the coherence of AI regulation in regulated product sectors for years to come.
For SMEs watching the trilogue from the outside: the April 28 date is worth marking. Whatever gets agreed at that meeting — including the fate of Annex I — will shape the compliance landscape you operate in through 2027 and beyond. The deadline delay for high-risk AI is the headline. The architecture of Annex I is the subtext that could matter just as much.
This article is for informational purposes only and does not constitute legal advice.