News
5 min read · 27 March 2026

What EU AI Act Article 50 Means for Your Business: Chatbot Disclosures and AI Content Labels

While everyone's focused on the high-risk AI deadline getting pushed to 2027, there's a quieter obligation arriving on August 2, 2026 that affects far more businesses: Article 50's chatbot and AI content disclosure rules.

The Deadline Everyone's Overlooking

Most businesses tracking the EU AI Act this week are relieved. The European Parliament voted on March 26 to push the high-risk AI compliance deadline to December 2027 — an extension of roughly 16 months. For companies building AI into hiring tools, credit scoring, or medical devices, that's significant breathing room.

But there's a separate deadline that isn't moving, and it affects a much wider group of businesses: August 2, 2026. That's when Article 50 — the transparency chapter — kicks in. No delay has been proposed for it. Parliament's vote this week didn't touch it. If your business runs a customer-facing chatbot, publishes AI-generated content, or uses AI to analyse how customers behave or feel, this is the clock you should be watching.


Four Rules, One Underlying Idea

Article 50 of Regulation (EU) 2024/1689 is really about one thing: people should know when they're interacting with AI. The regulation breaks that into four specific obligations, each with its own scope.

The first — and the one that catches the most businesses off guard — covers chatbots. If you deploy an AI system that talks to customers, you must tell them it's an AI. The only carve-out is when it's "obvious from context," which is a narrower exception than it sounds. A bot named "Max" on your customer support page probably doesn't qualify. A clearly labelled AI assistant probably does. When in doubt, the regulation says: disclose.

Say you're running a Shopify store with an AI chat widget handling returns and product questions. Under Article 50, you need a visible indicator — something like "You're chatting with an AI assistant" — before or at the start of the conversation. That's not a heavy lift. But it needs to happen.
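
To make concrete how light that lift is, here's a minimal sketch in TypeScript. The container ID, the wording, and the approach of prepending a notice to the widget are all illustrative assumptions; nothing in the regulation prescribes a particular mechanism.

```typescript
// Minimal sketch: prepend a visible AI disclosure to a chat widget.
// The container ID ("chat-widget") and the wording are illustrative
// assumptions, not a prescribed format.

function addAiDisclosure(containerId: string, message: string): void {
  const container = document.getElementById(containerId);
  if (!container) return; // widget not present on this page

  const notice = document.createElement("p");
  notice.setAttribute("role", "note"); // announced by assistive tech
  notice.textContent = message;
  container.prepend(notice); // visible before the first exchange
}

addAiDisclosure("chat-widget", "You're chatting with an AI assistant.");
```

Any equivalent mechanism works just as well; what matters is that the notice is visible before or at the start of the conversation.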

The second rule covers emotion recognition and biometric categorisation. AI tools that analyse facial expressions in customer feedback videos, infer sentiment from voice calls, or categorise people by demographic characteristics — all of those require disclosure to the people being analysed. This matters more than most SMEs realise; these features are increasingly bundled into customer experience and HR platforms as standard.

Third, deepfakes. AI-generated video, audio, or images designed to look real must be labelled as AI-generated. For clearly satirical or artistic works the labelling duty is softened rather than removed, and even that carve-out is narrow. Marketing content, synthetic spokesperson videos, AI-generated product demos — these all fall under the rule.

The fourth obligation is the one most likely to create headaches for content teams. AI-generated text intended to inform the public on matters of public interest must be labelled. Think newsletters, market commentary, public-facing articles, or investor updates written with AI assistance. There's a carve-out where the text has undergone human review and a person or company holds editorial responsibility for its publication — but that review has to actually happen. The regulation doesn't require you to stop using AI for content — it just requires transparency about it.

You're the Deployer. That Makes It Your Problem.

Article 50 distinguishes between providers — the companies building AI tools — and deployers — the businesses using those tools to serve customers. OpenAI and Anthropic are providers. If you're a business using their products, you're the deployer.

This distinction matters because the disclosure obligation sits with the deployer. OpenAI isn't going to tell your customers they're talking to an AI on your website. That's your interface, your customer relationship, your responsibility. Providers have their own obligations under Article 50(5) around technical labelling standards, but they can't fulfil your transparency duty for you.

The practical implication: don't wait for your AI vendor to handle this. Check what disclosures are already built into the tools you use, then decide what you need to add at the product level.

The Watermarking Question

Parliament's vote this week did include one small concession on Article 50: the technical watermarking standard — the machine-readable label that allows AI content to be verified programmatically — may be delayed from August 2 to November 2, 2026, because the European AI Office hasn't finished defining what that standard should look like.

This is a narrower point than it appears. The human-readable disclosure requirement — the label your users actually see — isn't delayed. What's potentially delayed is the underlying technical mechanism that might allow automated detection of AI content across platforms. For most SMEs, the watermarking standard is something your platform providers deal with. The disclosure to your end user remains your job, and August 2 is still the date.
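
To see how the two layers differ in practice, here's a hedged sketch. The visible paragraph is the human-readable disclosure that remains due on August 2; the meta tag stands in for the machine-readable layer, and since that standard doesn't exist yet, its name and shape below are invented purely for illustration.

```typescript
// Two layers of labelling as separate concerns. The visible note is the
// obligation due on August 2; the <meta> tag is a PLACEHOLDER, because
// the machine-readable standard is not yet defined. Its name and format
// here are invented for illustration only.

function labelAiContent(article: HTMLElement): void {
  // Layer 1: the human-readable disclosure readers actually see.
  const label = document.createElement("p");
  label.textContent = "This article was produced with AI assistance.";
  article.prepend(label);

  // Layer 2: hypothetical machine-readable marker. Swap in the real
  // mechanism once the European AI Office publishes the standard.
  const meta = document.createElement("meta");
  meta.name = "ai-generated"; // assumed name, not an official standard
  meta.content = "true";
  document.head.append(meta);
}
```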

One more caveat: Parliament's position isn't law yet. These changes still need to survive trilogue negotiations with the Council of the EU. Until an amendment is published in the Official Journal, the August 2, 2026 deadline for Article 50 stands as written.

What to Actually Do

The good news is that Article 50 compliance isn't complicated. It requires clarity, not infrastructure. A practical approach for most businesses:

  • Map your AI touchpoints. List every place customers interact with AI — chatbots, automated emails, AI-generated content — and check whether each one already makes the AI nature visible (one way to structure that inventory is sketched after this list).
  • Add disclosure language. For chatbots, a single line at the start of the conversation is enough. For published content, a label or brief note. Keep it simple and visible.
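
For the mapping step, a spreadsheet is perfectly adequate; the sketch below just expresses the same inventory as a typed structure so the gaps are easy to query. The field names and example entries are hypothetical.

```typescript
// Hypothetical inventory of AI touchpoints. Field names and entries are
// illustrative assumptions, not a prescribed compliance format.

interface AiTouchpoint {
  location: string;   // where customers encounter the AI
  kind: "chatbot" | "generated-content" | "emotion-analysis" | "other";
  disclosed: boolean; // is the AI nature already visible to users?
  owner: string;      // who adds the disclosure if it's missing
}

const touchpoints: AiTouchpoint[] = [
  { location: "Support chat widget", kind: "chatbot", disclosed: true, owner: "Web team" },
  { location: "Monthly market newsletter", kind: "generated-content", disclosed: false, owner: "Marketing" },
];

// Anything still undisclosed is a gap to close before August 2, 2026.
const gaps = touchpoints.filter((t) => !t.disclosed);
console.log(gaps.map((t) => t.location)); // ["Monthly market newsletter"]
```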

Also worth checking: your contracts with AI vendors. Make sure you understand what obligations fall on them versus you, and that your service agreements reflect that shared responsibility.

There's something almost refreshing about Article 50 compared to the rest of the EU AI Act. It's not asking businesses to prove their AI is safe, document its training data, or engage a third-party auditor. It's asking them to be honest with customers. That's a standard most businesses would want to meet anyway — and the ones that don't are probably the reason the rule exists.

This article is for informational purposes only and does not constitute legal advice.

Know your AI risk level in 10 minutes

Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before August 2, 2026.

Start the free audit →

⚠️ Not legal advice. For guidance purposes only.