Policy Updates
5 min read · 29 March 2026

The EU's Draft Rulebook for AI Content Labels Is Out. Here's What It Actually Requires.

The European Commission published a second draft Code of Practice on AI content labelling on 5 March 2026 — a practical guide to complying with Article 50's transparency rules. It's more flexible than the first draft. But the clock is ticking.

A Practical Rulebook for AI Labels

On 5 March 2026, the European Commission published the second draft of its Code of Practice on the Marking and Labelling of AI-generated content.[2] It's not the final version — that's expected in early June, about two months before 2 August 2026, when Article 50 of the EU AI Act becomes enforceable. But it's the clearest picture yet of what regulators actually expect from businesses that create or publish AI-generated content.

Here's the short version: the new draft is more flexible and less burdensome than the first draft published in December 2025. But "more flexible" still means specific obligations. If your business uses AI to generate images, video, audio, or text for public audiences, you need to know what this Code says.

What Article 50 Actually Requires

The Code of Practice is the practical handbook for complying with Article 50 of the AI Act — the transparency chapter. Two obligations sit at its core.

First, providers (companies that build AI tools) must mark their AI's outputs in a machine-readable format so that content generated by their systems can be detected programmatically.[3] Think of it as an invisible label embedded in every AI-generated image or audio file — something that detection software can read even if a human can't see it.
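To make the provider-side idea concrete, here is a minimal sketch of a machine-readable mark: a provenance record that detection software can parse even though a human viewer never sees it. This is purely illustrative — real implementations typically build on established standards such as C2PA Content Credentials or IPTC metadata, and the field names below are invented for demonstration.

```python
import base64
import json

def make_ai_mark(tool: str, model: str) -> str:
    """Build an illustrative machine-readable provenance record.

    Encoded so it could travel inside a metadata field or sidecar file.
    Field names are hypothetical, not from any real standard.
    """
    record = {"generator": tool, "model": model, "ai_generated": True}
    return base64.b64encode(json.dumps(record).encode()).decode()

def detect_ai_mark(mark: str) -> bool:
    """Return True only if the string decodes to a valid provenance record."""
    try:
        record = json.loads(base64.b64decode(mark))
    except Exception:
        return False  # not a mark at all
    return isinstance(record, dict) and record.get("ai_generated") is True

mark = make_ai_mark("example-image-tool", "v2")
print(detect_ai_mark(mark))        # a marked asset is detected
print(detect_ai_mark("caption"))   # ordinary text is not
```

The point of the sketch is the asymmetry: the provider embeds the mark once at generation time, and any downstream party can verify it programmatically without the provider's involvement.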

Second, deployers (companies using those tools) must add a human-visible label when they publish AI-generated deepfakes — videos, images, or audio designed to look or sound like real people or real events — and when they publish AI-generated text on matters of public interest. That visible label is the thing your audience actually sees: "This image was generated using AI" or a similar clear disclosure.

The Code is technically voluntary — signing up and adhering to it isn't legally required. But in practice, following the Code will be treated as the clearest way to demonstrate you've met your obligations under Article 50. The Commission has explicitly built this Code as a compliance pathway.[1]

What Changed in the Second Draft

Compared to the first draft, the second draft offers greater flexibility on how disclosures are implemented. Organisations have more latitude to choose methods that suit their technical infrastructure — the Code no longer prescribes a single technical approach for marking. It also reduces the overall compliance burden by streamlining documentation requirements.

The key exceptions have also been clarified. Human-created text that has been through genuine editorial review and published under editorial responsibility is excluded from the labelling requirement. If a journalist uses AI to draft an article, then edits it substantially and publishes it under their byline with editorial accountability, that probably falls under the exception. If a business is pressing publish on AI-generated content with minimal human review, it probably doesn't.

That distinction matters for marketing teams, newsletter publishers, and content-driven businesses. The test isn't whether AI was involved in creation — it's whether there was genuine human oversight and editorial responsibility before publication.

Why Deployers Bear the Visible Disclosure Obligation

One practical point that trips up a lot of businesses: the visible labelling obligation sits with the deployer, not the AI tool provider. OpenAI, Midjourney, and Adobe Firefly are responsible for the machine-readable marks embedded in their outputs. But if you take those outputs and publish them on your website, in your marketing, or in your communications, the visible human-readable disclosure is your responsibility.

Your AI vendor won't add a label to your published content. That's your customer-facing communication, and the obligation travels with whoever publishes it.

What This Means for SMEs Right Now

The practical implications for small businesses depend on what you're producing. Three common scenarios:

  • AI-generated product images or marketing visuals: If these are published as though they were real photographs or depict real people, you'll need a visible disclosure. "Generated with AI" in the caption or image description covers this.
  • AI-written newsletters or public commentary: If AI substantially wrote the content and it wasn't genuinely reviewed and edited by a named human author before publication, the text labelling obligation applies.
  • AI-assisted content that went through real editorial review: If a human reviewed it, edited it meaningfully, and is editorially responsible for what was published — you're likely in the exception. Document that process.
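The three scenarios above boil down to a small decision rule. The sketch below encodes it in simplified form — the function name and parameters are invented, the conditions are deliberately coarse (the editorial-review exception as drafted applies to text, and "public interest" is a judgment call), and none of this is legal advice.

```python
def needs_visible_label(ai_generated: bool,
                        deepfake_or_public_interest: bool,
                        genuine_editorial_review: bool) -> bool:
    """Simplified sketch of the deployer-side Article 50 decision.

    Hypothetical helper for illustration only; the real assessment
    depends on the final Code and the specifics of the content.
    """
    if not ai_generated:
        return False  # obligation only attaches to AI-generated content
    if genuine_editorial_review:
        return False  # editorial-responsibility exception likely applies
    return deepfake_or_public_interest

# AI marketing visual published as a realistic photo, no real review:
print(needs_visible_label(True, True, False))   # label required
# AI-drafted newsletter substantially edited under a named author:
print(needs_visible_label(True, True, True))    # likely exempt
```

The useful habit the sketch encourages: treat "was AI involved?" and "was there genuine editorial responsibility?" as separate questions, and record the answer to the second one.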

The June finalisation of the Code will bring more specific guidance. But the core principle isn't going to change: if an ordinary reader wouldn't realise AI was substantially responsible for what they're looking at, you should tell them.

The Timeline

The Code is expected to be finalised in early June 2026. Article 50 becomes enforceable on 2 August 2026. That leaves about two months between publication of the final guidance and the compliance deadline. If you wait for the final Code to start thinking about this, that's not enough time to implement labelling practices across your business.

The smart move: read the second draft now, decide which of your AI-generated content falls in scope, and start building a disclosure approach based on what you know. Adjust for any final changes in June. You'll be in a far better position than if you start from scratch in July.

This article is for informational purposes only and does not constitute legal advice.

Know Your EU AI Risk Level in 10 Minutes

Our free audit walks you through the exact questions for classifying your AI systems and shows what you need to do before 2 August 2026.

Start the Free Audit →

⚠️ Not legal advice — for orientation only