News · 5 min read · 27 March 2026

What EU AI Act Article 50 Means for Your Business: Chatbot Disclosures and AI Content Labels

Article 50 of the EU AI Act applies from August 2, 2026. It requires businesses that deploy chatbots, AI content generators, and deepfake tools to tell users when they are interacting with AI or viewing AI-generated content. Here's what you need to know.

The Disclosure Deadline That's Still On

While much of the EU AI Act news this week has focused on deadline delays — the European Parliament voted on March 26 to push high-risk AI compliance to December 2027 — one key obligation is still firmly on track for August 2, 2026: Article 50 transparency rules.

If your business uses a chatbot, an AI content generator, or any AI tool that interacts with customers or produces public-facing content, Article 50 directly affects you. Here's what it requires, in plain English.

This article is for informational purposes only and does not constitute legal advice.

What Article 50 Actually Requires

Article 50 of Regulation (EU) 2024/1689 sets out four specific transparency obligations that apply from August 2, 2026:

  • Chatbot disclosure (Article 50(1)): If your business deploys an AI system that interacts directly with customers — a chatbot on your website, an AI assistant in your app — you must clearly inform users they are talking to an AI. The only exception is when it is "obvious from context." If there's any doubt, you must disclose.
  • Emotion and biometric AI (Article 50(3)): If you use AI that analyses emotions or categorises people by characteristics like age or gender, you must inform those people. This covers tools used in HR, retail analytics, and customer experience monitoring.
  • Deepfake disclosure (Article 50(4)): Anyone deploying AI to generate or manipulate video, audio, or images in a way that makes them appear real (deepfakes) must label the content as AI-generated. Limited exceptions apply for artistic or satirical works, provided the AI nature is clearly inferable.
  • AI-generated text for public information (Article 50(4), second subparagraph): If your business publishes AI-generated text intended to inform the public on matters of public interest — news articles, market updates, public communications — you must label it as AI-generated. This matters for media companies, marketing agencies, and anyone using AI to write content at scale.
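For deployers of customer-facing chatbots, the disclosure in Article 50(1) can be as simple as injecting a fixed notice at the start of every conversation. Here is a minimal illustrative sketch; the class, field names, and message wording are hypothetical examples, not text prescribed by the Act:

```python
from dataclasses import dataclass, field

# Example wording only — the Act requires a clear disclosure, not this exact text.
AI_DISCLOSURE = "You are chatting with an AI assistant."

@dataclass
class ChatSession:
    """Wraps a chat transcript and guarantees the AI disclosure appears first."""
    messages: list = field(default_factory=list)

    def add_bot_message(self, text: str) -> None:
        # Prepend the disclosure before the first bot message of the session.
        if not self.messages:
            self.messages.append(("system", AI_DISCLOSURE))
        self.messages.append(("bot", text))

session = ChatSession()
session.add_bot_message("Hi! How can I help you today?")
print(session.messages[0])  # → ('system', 'You are chatting with an AI assistant.')
```

The point of the pattern is that the disclosure is enforced in code rather than left to each conversation script, so no user ever sees a bot reply before the label.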

Who Is Affected?

Article 50 splits responsibility between providers (the companies that build the AI tool) and deployers (the businesses that use it to serve customers).

For most SMEs, the relevant role is deployer. You are buying or licensing an AI tool — a chatbot platform, a content generation tool, a customer service AI — and using it in your products or services. That makes you responsible for ensuring the disclosure requirements are met in your product interface, your content pipelines, and your terms of service.

Providers of large AI models (like the companies behind ChatGPT or Claude) carry separate obligations under Article 50(2) to ensure their systems are technically capable of machine-readable labelling. But deployers cannot simply rely on providers to handle everything: the disclosure to your end users is your responsibility.

What About the Watermarking Deadline Extension?

The European Parliament's vote this week proposed delaying the mandatory watermarking requirements — the technical standards for machine-readable AI content labels — from August 2026 to November 2, 2026. This is a narrower extension than some had hoped for, and it only covers the technical watermarking standard, not the broader disclosure obligations.

Critically, the text of the Parliament's position still requires that deployers make disclosures from August 2, 2026. The delay to November simply acknowledges that the technical standard for how to embed machine-readable labels hasn't been finalised yet. The human-readable disclosure requirement — telling your users they're talking to an AI — is not delayed.

And remember: none of these Parliament positions are law yet. Trilogue negotiations with the Council of the EU must conclude first. The August 2, 2026 deadline is legally binding until any amendment is published in the Official Journal.

Practical Steps for SMEs

If you use any customer-facing AI tool, here's what to do before August 2026:

  • Audit your AI touchpoints. List every place in your business where customers interact with AI: chatbots, email automation, content generated on your website, social media tools. For each one, ask: does the customer know they're interacting with AI?
  • Add visible disclosures. For chatbots, add a clear label such as "You are chatting with an AI assistant." For AI-generated content, add a brief label or disclosure note. These do not need to be intrusive — just clear.
  • Update your terms of service and privacy notices. Your legal documents should reflect how you use AI and what data is processed through AI tools.
  • Check your AI vendor contracts. Confirm that the AI providers you use are meeting their obligations under Article 50(2) on machine-readable marking, and that your contracts reflect shared compliance responsibilities.
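The audit step above can be sketched as a simple checklist in code: each touchpoint records whether a visible disclosure is in place, and anything unlabelled is flagged for review. The names and fields here are illustrative, not a prescribed compliance format:

```python
from dataclasses import dataclass

@dataclass
class AITouchpoint:
    """One place in the business where customers encounter AI."""
    name: str
    disclosed: bool  # does the customer see a clear AI disclosure here?

def missing_disclosures(touchpoints):
    """Return the touchpoints that still need a visible AI label."""
    return [t.name for t in touchpoints if not t.disclosed]

# Hypothetical audit of three customer-facing AI touchpoints.
audit = [
    AITouchpoint("Website chatbot", disclosed=True),
    AITouchpoint("AI-written blog posts", disclosed=False),
    AITouchpoint("Email reply assistant", disclosed=False),
]
print(missing_disclosures(audit))  # → ['AI-written blog posts', 'Email reply assistant']
```

Even a spreadsheet serves the same purpose; what matters is that the list is exhaustive and revisited whenever a new AI tool is adopted.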

The Bigger Picture

The EU AI Act's transparency rules represent one of the more straightforward parts of the regulation for most SMEs. Unlike the complex documentation and risk-assessment requirements for high-risk AI systems, Article 50 compliance largely boils down to a simple principle: be honest with your customers about when they're interacting with AI.

That principle is also good business practice. Customers who discover they were unknowingly talking to an AI — or reading AI-generated content presented as human-written — are far less likely to trust you going forward. Compliance and trust happen to align here.

With just over four months to go before August 2, 2026, Article 50 is one of the most actionable items on your compliance checklist. Start now, and you'll be ahead of most of your competitors.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →