News
4 min read · 28 March 2026

Everyone Celebrated the AI Act Delay. Nobody Mentioned the November Deadline.

The EU Parliament voted to push high-risk AI rules to 2027. But the same vote set a firm November 2, 2026 deadline for labelling AI-generated content — the rule that will actually affect most small businesses.

The Vote Everyone Heard About

On Thursday 26 March, the European Parliament voted 569 to 45 to adopt its position on the AI Omnibus — the package that delays high-risk AI rules from August 2026 to December 2027. Industry groups cheered. LinkedIn filled with relief. Most of the coverage stopped there.

But the same vote contained a second deadline that got almost no attention — one that is likely to affect far more small businesses than the high-risk rules ever would.

The November Deadline Nobody Mentioned

Buried in the Parliament's position: AI-generated content labelling requirements now have a firm deadline of November 2, 2026. That's Article 50 of the AI Act — the transparency rule that says if you're generating images, videos, audio, or text using AI and publishing it publicly, you need to disclose that it's AI-made.

Think of it like the cookie banner, but for AI output. If your marketing team uses Midjourney to make product visuals, if you post AI-written blog summaries, if your customer emails are drafted by a model — you'll need to label them. The European Commission is finalising the practical Code of Practice for how to do this in early June, leaving businesses roughly five months to implement it before enforcement begins in November.

The fine print here actually matters: the rule doesn't apply to everything. Short AI-assisted text that goes through genuine human editorial review is currently exempt. But "human review" isn't yet well defined — national regulators will eventually decide what it means in practice. If you're pressing "publish" on AI content with one eye closed, that probably doesn't count.

What This Means on Monday Morning

If you run a small business that uses generative AI tools — for social posts, marketing copy, product images, customer emails — you have roughly seven months to work out a labelling approach. That's actually enough time to do it properly, which is why it's worth starting now rather than in October.

The practical steps are simpler than the legal language suggests. Audit what you're publishing that was made (or substantially assisted) by AI. Decide how you'll disclose it — a brief label, a footer, a consistent note in the content itself. The Code of Practice will give more specific guidance in June, but the basic principle won't change: if an ordinary person wouldn't know it was AI-made, you should tell them.

The high-risk rules — the ones governing AI in hiring, credit scoring, and medical devices — are delayed to 2027, and they were always aimed at larger deployments. Most SMEs will never touch Annex III. But almost every business that uses a modern AI tool creates AI-generated content. November 2 is your real deadline.

Also in the Vote: Nudifier Apps Banned

One more thing from Thursday's session: the Parliament added an explicit ban on AI systems used to generate realistic non-consensual sexual images of real people — the so-called "nudifier" apps. The ban only applies to systems that lack technical safeguards to prevent misuse. MEPs described it as a win for women's rights; the provision now heads into trilogue negotiations with the Council and Commission. It's not final law yet, but it signals where the regulation is going on prohibited AI practices.

This article is for informational purposes only and is not legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before November 2, 2026.

Start Free Audit →