The Product Your Vendor Should Have Told You Was Already Illegal
Here is something your contact center AI vendor probably did not highlight in the sales presentation. The voice analytics platform scoring your agents' emotional state — flagging stress levels, inferring frustration from vocal tone, tracking "engagement" from how your people sound — has been illegal across the entire European Union since February 2, 2025.[2]
Not restricted. Not "subject to forthcoming regulation." Illegal. Banned outright under Article 5(1)(f) of the EU AI Act. Fines of up to €35 million or 7% of global annual turnover, whichever is higher. That is the top fine tier in the entire regulation.[3]
And here is the part that is about to make August very uncomfortable for a large number of European businesses. The customer-facing side of the same platform is something different: not banned, but about to become high-risk. And high-risk comes with its own demanding compliance regime that kicks in on August 2, 2026 — which is approximately 100 days from today.[1]
The Split-Personality Problem Nobody Is Talking About
Modern contact center voice analytics platforms do not sit cleanly on one side of the call. They listen to the agent, they listen to the customer, they produce outputs about both. Under the AI Act, that single piece of software is simultaneously two different legal objects.[1]
- The agent-facing half is prohibited. Emotion inference applied to employees in a workplace is banned under Article 5. No conformity assessment can make it legal. No amount of "but we got consent" changes this. The EU drew a line here: using AI to monitor how your employees are feeling at work is a violation of fundamental rights. It has been a violation since February 2025.[2]
- The customer-facing half is about to be high-risk. Emotion inference applied to customers is permitted under the AI Act — the EU drew a different line for consumers, reasoning that customers can theoretically walk away. But "permitted" does not mean "unregulated." From August 2, 2026, customer emotion recognition is classified as high-risk under Annex III, point 1(c) (emotion recognition, listed under biometrics). That classification brings the full weight of the AI Act's high-risk obligations.[1]
If you are running a unified platform that does both, you are running two different compliance problems in the same product. Most vendors have not been transparent about this. That is about to become a serious problem for the companies that bought on the assumption their compliance was the vendor's problem.
What High-Risk Means When the Invoice Arrives
Let us translate "high-risk" into what a contact center operator or CX leader actually has to do by August 2.[1]
- Conformity assessments. Before deploying customer emotion recognition, you must have a formal conformity assessment demonstrating the system meets the AI Act's technical requirements. Your vendor is primarily responsible, but you as deployer must verify the assessment exists and is current.
- Human oversight in the loop. No purely automated decisions about customer treatment based on inferred emotional state. If your routing engine automatically drops a flagged "frustrated" caller into a retention queue with no human judgment applied, that is an AI Act violation. A minimal sketch of a compliant gating pattern follows this list.
- Customer transparency notices. Article 50(3) requires that deployers of emotion recognition systems inform affected customers clearly, before the analysis happens. That means your IVR flow or call opening must say something like: "This call will be analyzed by an emotion recognition system that infers your emotional state from your voice." How many IVR flows currently say this? Approximately none.
- Logging and traceability. Every deployment must generate logs sufficient to trace the system's operation. Retention periods apply, and the logs are auditable by regulators.
- Post-market monitoring and incident reporting. Serious incidents involving the system must be reported to national authorities. Your vendor carries the primary obligation, but you sit in the reporting chain.
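To make the oversight and logging obligations concrete, here is a minimal sketch of a routing layer that treats the emotion score as a recommendation for a human reviewer rather than a trigger for automated action, and writes a structured trace record at each step. Every name in it (`EmotionSignal`, `route_with_oversight`, the log fields) is a hypothetical illustration of the pattern, not a real platform's API and not wording taken from the AI Act.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical names throughout: a sketch of the pattern, not a real API.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_act_trace")

@dataclass
class EmotionSignal:
    call_id: str
    inferred_state: str   # e.g. "frustrated", as output by the vendor system
    confidence: float
    model_version: str    # recorded so the decision can be traced later

def emit_trace(event: str, signal: EmotionSignal, **extra) -> None:
    """Write a structured log record so every step can be reconstructed
    in an audit. Field names are illustrative, not mandated by the Act."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **asdict(signal),
        **extra,
    }
    logger.info(json.dumps(record))

def route_with_oversight(signal: EmotionSignal, reviewer_approves) -> str:
    """The emotion score only *recommends* a queue; a human decides.

    `reviewer_approves` stands in for whatever human-review step your
    operation uses (supervisor dashboard, agent-side prompt, etc.).
    """
    recommendation = ("retention_queue" if signal.inferred_state == "frustrated"
                      else "default_queue")
    emit_trace("recommendation_generated", signal, recommendation=recommendation)

    approved = reviewer_approves(signal, recommendation)  # human in the loop
    final = recommendation if approved else "default_queue"
    emit_trace("human_decision", signal, recommendation=recommendation,
               approved=approved, final_route=final)
    return final

if __name__ == "__main__":
    sig = EmotionSignal("call-0042", "frustrated", 0.83, "vendor-model-1.9")
    # A stand-in reviewer that declines, to show the override path.
    print(route_with_oversight(sig, lambda s, r: False))
```

The shape is the point: the inferred state never routes the call by itself, and both the recommendation and the human decision land in retained, auditable logs.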
And then there are the fines. High-risk violations carry up to €15 million or 3% of global turnover. Stack that on top of GDPR exposure — emotion recognition processes biometric data, triggering GDPR's special category protections and its separate fine ceiling of 4% of turnover — and the combined theoretical exposure reaches 7% of turnover.[1]
What to Do in the Next 100 Days
If your contact center or CX operation uses any form of voice analytics or emotion inference, here is what you should be doing right now:
- Audit your stack immediately. Identify every AI system in your contact center that performs emotion inference, sentiment analysis, vocal stress scoring, or similar analysis on agents, customers, or both. Get the vendor to confirm in writing which category each output falls into. One way to structure that inventory is sketched after this list.
- For agent-facing emotion AI: act now. The Article 5 prohibition has been in force for over a year. If your platform is still running agent emotion analysis on EU workers, you are already non-compliant. Contact your vendor to understand whether they can disable this feature for EU deployments, or explore alternative platforms for your EU operations. "We did not know" is not a defence.
- For customer-facing emotion AI: start your conformity assessment process now. August 2 comes whether you are ready or not. Begin by requesting your vendor's conformity assessment documentation. If they do not have one, your system may not be deployable in the EU after August 2.
- Write your customer transparency notices. You need to inform customers before the analysis happens. Work with your legal and CX teams to draft the required disclosure for your IVR, call opening scripts, and any relevant privacy notices. Do not bury it — the AI Act requires meaningful, upfront notice.
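To support the audit step above, here is one way the inventory could be structured as data, with classification rules that mirror the split this article describes: agent-facing emotion inference prohibited, customer-facing high-risk, anything unconfirmed flagged for vendor follow-up. The field names, the `classify` helper, and the example entries are hypothetical scaffolding, not a substitute for classification by counsel.

```python
from dataclasses import dataclass
from enum import Enum

class RiskStatus(Enum):
    PROHIBITED = "prohibited (Art. 5): disable for EU deployments now"
    HIGH_RISK = "high-risk (Annex III): conformity assessment by Aug 2, 2026"
    REVIEW = "unclear: get written vendor confirmation"

@dataclass
class StackEntry:
    system: str
    capability: str         # e.g. "emotion inference", "keyword spotting"
    analyzes: str           # "agents", "customers", or "both"
    vendor_confirmed: bool  # written confirmation received?

def classify(entry: StackEntry) -> list[RiskStatus]:
    """Mirror the article's split: emotion inference on EU workers is banned;
    on customers it becomes high-risk. Unconfirmed entries need follow-up."""
    if not entry.vendor_confirmed:
        return [RiskStatus.REVIEW]
    statuses = []
    if "emotion" in entry.capability:
        if entry.analyzes in ("agents", "both"):
            statuses.append(RiskStatus.PROHIBITED)
        if entry.analyzes in ("customers", "both"):
            statuses.append(RiskStatus.HIGH_RISK)
    return statuses or [RiskStatus.REVIEW]

# Example inventory; the entries are illustrative, not real products.
stack = [
    StackEntry("VoiceAnalytics Pro", "emotion inference", "both", True),
    StackEntry("QA Transcriber", "keyword spotting", "customers", False),
]
for entry in stack:
    for status in classify(entry):
        print(f"{entry.system}: {status.value}")
```

Note that a unified platform analyzing both sides of the call comes back with two statuses, which is exactly the split-personality problem described above.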
The Bigger Picture
The EU has drawn a deliberate line: emotion inference at work is a violation of fundamental rights. Emotion inference on customers is a high-risk activity that demands serious compliance.[3] Both positions reflect the EU's view that inferring emotional state from biometric signals — voice, face, physiological data — is among the most sensitive forms of AI processing that exist.
The August 2 deadline is not the start of this story. The start was February 2025, when the prohibitions came into force. What August 2 does is activate the enforcement infrastructure for everything the prohibitions covered, plus the high-risk framework that has been waiting in the background. Regulators across Europe spent 2025 building the capacity to act on violations. That capacity is now largely in place.
For SMEs: the vendor community is not going to solve this for you. Many vendors have not disclosed the dual-compliance status of their platforms. The compliance responsibility sits with you as deployer. 100 days is enough time to get this right, but only if you start the work now.
This article is for informational purposes only and does not constitute legal advice.
Sources
- [1] CX Today — Customer Emotion AI Becomes High-Risk in August 2026. Most Contact Centers Have No Idea What's Coming. (April 24, 2026)
- [2] UC Today — EU AI Act Shock: Emotion Recognition Is Now Illegal at Work. So Why Is Your Vendor Still Selling It? (April 22, 2026)
- [3] FireTail Blog — Article 5 and the EU AI Act's Absolute Red Lines (April 22, 2026)
Know Your EU AI Risk Level in 10 Minutes
Our free audit walks you through the exact questions for classifying your AI systems and shows what you need to do before August 2, 2026.
Start the free audit →
⚠️ Not legal advice. For guidance only.