Transparency
4 min read · 10 March 2026

The Chatbot Fix Is One Sentence. Here's the Harder Stuff.

Most businesses can make their chatbot compliant in five minutes. Then there are the edge cases — emotion detection, deepfakes, AI-generated content — that are actually more complicated.

The Simple Fix

If you have a customer service chatbot, the EU AI Act compliance fix is literally one sentence. Change the first message to: "Hi, I'm an AI assistant. How can I help?" That's it. You're compliant with Article 50's chatbot disclosure requirement. Clear, timely, done.[1]

Think of it like the cookie banner, but for AI. The law requires that people know they're interacting with an AI system before or at the start of the conversation. Not buried in a terms page. Not visible only if they look for it. Right there, upfront. The only exception is when it's genuinely obvious — like a clearly labelled "AI Assistant" widget with a robot icon. A chatbot named "Max" that chats in a friendly human tone doesn't qualify as obvious.
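For developers wiring this up, here's a minimal TypeScript sketch of what the fix looks like in practice. The widget API is hypothetical (initWidget and the option names are stand-ins for whatever your chatbot vendor actually exposes); the point is where the disclosure lives: in the very first message and on the widget itself.

  // Hypothetical widget config; substitute your vendor's real embed API.
  interface ChatbotConfig {
    botName: string;    // avoid friendly human names without a disclosure
    greeting: string;   // the first message the visitor sees
    badgeLabel: string; // persistent label on the widget button
  }

  const config: ChatbotConfig = {
    botName: "AI Assistant",
    greeting: "Hi, I'm an AI assistant. How can I help?",
    badgeLabel: "AI Assistant", // visible before the chat even opens
  };

  // Placeholder for the vendor's embed call.
  declare function initWidget(config: ChatbotConfig): void;
  initWidget(config);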

What's Actually Harder

The chatbot case is easy. Three other areas are more complicated.

Emotion recognition. AI that detects or infers emotions from faces, voices, or text in workplace or educational settings has been banned outright under Article 5 since February 2025. Not limited risk: banned. If your customer service platform analyses caller sentiment to route calls, or your HR software scores interview candidates on emotional cues, that's worth checking carefully. The line between "sentiment analysis" and "emotion recognition" is finer than vendors often admit.

Deepfakes and synthetic media. AI-generated video, audio, or images that look like real people or real events must be clearly labelled as AI-generated. That includes synthetic spokesperson videos, AI voiceovers that sound like specific individuals, and generated product images presented as real photography. The label needs to be prominent — not a footnote, not just a metadata tag.
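What "prominent" means is easier to see in markup than in prose. A browser-side sketch, where the element structure and label wording are our illustration rather than anything prescribed by the Act:

  // Render an AI-generated video with an on-screen label a viewer cannot miss.
  function renderLabelledVideo(src: string, container: HTMLElement): void {
    const figure = document.createElement("figure");

    const video = document.createElement("video");
    video.src = src;
    video.controls = true;

    // Visible caption next to the media, not just a metadata tag.
    const caption = document.createElement("figcaption");
    caption.textContent = "AI-generated video";
    caption.style.fontWeight = "bold";

    figure.append(video, caption);
    container.appendChild(figure);
  }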

AI-generated text for public interest topics. If your business publishes newsletters, market commentary, or public-facing articles written with AI assistance on topics of public interest, those need disclosure too. The watermarking standard for machine-readable labels is still being finalised, but the human-readable disclosure requirement applies from August 2026.[2]
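A human-readable disclosure can be as simple as a standing note appended to every AI-assisted piece at publication time. A minimal sketch, with example wording of our own choosing:

  // Append a visible disclosure paragraph to an AI-assisted article.
  function withAiDisclosure(articleHtml: string): string {
    const note = "<p><em>This article was produced with AI assistance.</em></p>";
    return articleHtml + note;
  }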

You're the Deployer. This Is Your Job.

One more thing worth knowing: several Article 50 obligations sit with the deployer (the business using the AI tool), not just the vendor who built it. If you've embedded a third-party chatbot on your site, OpenAI isn't going to add a disclosure on your behalf. That's your interface, your customer relationship, your responsibility. Check what your vendor's tool already shows by default, and decide what you need to add on top.
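If the vendor's default embed shows nothing, one pragmatic option is to add your own banner in the page that hosts the widget. A sketch, where the container id and the mount callback are placeholders for your vendor's real embed code:

  // Add a visible AI disclosure above a third-party chat widget.
  function mountWithDisclosure(mountVendorWidget: (el: HTMLElement) => void): void {
    const host = document.getElementById("chat-container");
    if (!host) return;

    const banner = document.createElement("p");
    banner.textContent = "You are chatting with an AI assistant.";
    host.appendChild(banner);

    mountVendorWidget(host); // vendor embed call, injected as a placeholder
  }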

What Standard Chatbots Don't Trigger

A regular customer service chatbot — answering questions, routing enquiries, handling returns — is classified as limited risk under the AI Act. The only mandatory obligation is the transparency disclosure. No conformity assessment, no technical documentation, no EU database registration. Just tell users it's AI.

The exception: if your chatbot makes decisions about customers — pricing, eligibility for services, access decisions — the risk classification may be higher and the obligations more demanding. Run a quick audit to confirm where your system sits.

This article is for informational purposes only and is not legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before 2 August 2026.

Start free audit →

⚠️ Not legal advice; for guidance purposes only