The Deadline Everyone Missed
Most EU AI Act coverage centres on August 2, 2026 — when obligations for high-risk AI systems become fully enforceable. That date is real. It is four months away. And it is not the most urgent compliance question right now.
That distinction belongs to February 2, 2025. On that date, the EU AI Act's eight prohibited AI practices under Article 5 became enforceable.[1] The prohibitions are live. Fines of up to €35 million or 7% of global annual turnover apply now. And unlike the August 2026 high-risk obligations, which the Digital Omnibus may push to December 2027, Article 5 is not part of any proposed delay. There is no trilogue negotiation that changes this. It is the law, and it has been since February 2025.
A detailed analysis from FireTail published April 21, 2026 cuts through the widespread confusion about what Article 5 actually prohibits and how enforcement is taking shape across member states.[1] The article is the most concrete published account of what Article 5 means in practice — and it arrives at a moment when most small and medium businesses have not yet integrated the prohibited practices into their AI compliance thinking at all.
What Article 5 Actually Bans
The EU AI Act's prohibited practices are not a grey area. They represent the EU's judgment that certain AI applications are incompatible with fundamental rights and democratic values — and the Commission reinforced that position in guidelines published on February 4, 2025, two days after the prohibitions took effect.[2] Those guidelines break each prohibition into cumulative conditions and provide practical examples of what falls in scope and what does not. They are the clearest signal available of how regulators will interpret borderline cases.
The eight prohibitions cover:
- Subliminal and manipulative techniques: AI systems that deploy techniques operating below conscious awareness, or that exploit psychological vulnerabilities to distort behaviour and cause significant harm.[1]
- Exploiting vulnerabilities: AI systems that target individuals based on age, disability, or socioeconomic circumstances to exploit those vulnerabilities for commercial or harmful ends. The Commission's guidelines explicitly cite AI advertising tools that identify users showing signs of financial hardship and target them with exploitative offers.[1]
- Social scoring: General-purpose social scoring of individuals, by public or private actors, based on social behaviour or personal characteristics, leading to detrimental treatment in contexts unrelated to where the data was collected.[1]
- Predictive policing based on profiling: AI systems that assess the likelihood of criminal offence based solely on profiling or personality traits, absent verifiable facts directly linked to criminal activity.[1]
- Untargeted facial recognition scraping: Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV — a provision directly targeting the data acquisition practices of controversial biometric surveillance providers.[1]
- Emotion inference in workplaces and educational settings: AI systems that infer the emotional states of individuals in employment or education contexts, except for medical or safety reasons. The same technology may remain permitted in other settings, such as driver safety monitoring in automotive applications.[1]
- Biometric categorisation by sensitive characteristics: Using biometric data to categorise individuals based on race, political opinions, religious beliefs, sex life, or sexual orientation. The narrow exceptions cover lawful dataset labelling and law enforcement categorisation under strictly controlled conditions.[1]
- Real-time remote biometric identification in public spaces: The real-time use of remote biometric identification systems in public spaces for law enforcement purposes — with narrow exceptions requiring prior judicial authorisation and EU database registration.[1]
The Enforcement Structure Is Already Taking Shape
The penalty structure for Article 5 violations reflects how seriously the EU treats these provisions. At up to €35 million or 7% of global annual turnover, whichever is higher, violations carry steeper fines than any other category of non-compliance in the Act.[1] There is no tiered fine structure here: the same ceiling applies to all Article 5 violations, regardless of company size.
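To make that ceiling concrete, here is a minimal sketch of the arithmetic, assuming the general rule that the higher of the two figures applies. The turnover figure is illustrative, not a prediction of what any regulator would actually levy.

```python
# Illustrative only: the ceiling for Article 5 violations is the higher of a
# fixed amount and a share of worldwide annual turnover. Figures are taken
# from the text above; this is a sketch, not legal advice.

FIXED_CEILING_EUR = 35_000_000
TURNOVER_SHARE = 0.07  # 7% of global annual turnover

def article5_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for an Article 5 violation."""
    return max(FIXED_CEILING_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# A company with €2 billion in global turnover faces a ceiling of €140 million,
# not €35 million, because the percentage-based figure is higher.
print(article5_fine_ceiling(2_000_000_000))  # 140000000.0
```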
The enforcement landscape varies by member state, and Ireland's proposed implementation offers a concrete example of what multi-domain enforcement looks like in practice. Ireland's proposed approach assigns prohibited practice enforcement across three different bodies: the Central Bank for financial services, the Workplace Relations Commission for employment contexts, and the Data Protection Commission for everything else.[1]
The implication is important: a single organisation with AI systems operating across multiple domains — for example, an AI hiring tool and an AI-powered financial product — could face scrutiny from more than one authority simultaneously under Article 5. The enforcement architecture is not a single channel; it is a multi-body network that reflects the breadth of the prohibited practices.
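As a rough illustration of that multi-body exposure, the sketch below maps a hypothetical AI portfolio onto the three Irish authorities named above. The domain labels and the portfolio are assumptions for illustration, not the wording of the proposed Irish legislation.

```python
# Hypothetical mapping based on Ireland's proposed assignment of prohibited
# practice enforcement; the domain labels are illustrative assumptions.
DOMAIN_TO_AUTHORITY = {
    "financial_services": "Central Bank of Ireland",
    "employment": "Workplace Relations Commission",
    "other": "Data Protection Commission",
}

# One organisation, two domains: an AI hiring tool and an AI-powered
# financial product, as in the example above.
portfolio = ["employment", "financial_services"]

authorities = {DOMAIN_TO_AUTHORITY.get(domain, DOMAIN_TO_AUTHORITY["other"])
               for domain in portfolio}
print(sorted(authorities))
# ['Central Bank of Ireland', 'Workplace Relations Commission']
```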
The Compliance Gap Is Real and Wide
Here is the uncomfortable reality the FireTail analysis surfaces: understanding the prohibitions is only the first step. The challenge for providers is ensuring that their systems do not violate Article 5 through optimisation, fine-tuning, or integration with other services — in ways that may not be obvious from the original product design.[1]
The Commission's guidelines make clear that deployers bear responsibility for how they use systems, regardless of what the provider's terms of service say. But providers who build systems capable of prohibited practices — even if they prohibit those uses — are not fully insulated from regulatory attention if those capabilities are reasonably foreseeable. Developers need to monitor how systems actually behave in deployment, not just design intent.
The compliance challenge is not just about obvious violations. A recommender system trained purely on engagement maximisation can, over time, evolve into something that exploits psychological patterns in ways that meet the prohibition's conditions — even if that was never the explicit design goal. The conditions are cumulative: the technique must be subliminal or manipulative, and it must cause or be reasonably likely to cause significant harm. That second condition is where optimisation-driven systems can drift into scope without anyone intending it.
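To make the cumulative test concrete, here is a minimal screening sketch. The field names and the review function are assumptions for illustration; answering these questions for a real system requires human and legal judgment, not a boolean flag.

```python
# A minimal sketch of the cumulative-condition logic described above.
from dataclasses import dataclass

@dataclass
class ManipulationAssessment:
    uses_subliminal_or_manipulative_technique: bool
    likely_to_cause_significant_harm: bool

def needs_article5_review(a: ManipulationAssessment) -> bool:
    """Both conditions must hold for the manipulation prohibition to apply."""
    return (a.uses_subliminal_or_manipulative_technique
            and a.likely_to_cause_significant_harm)

# An engagement-optimised recommender that has drifted into exploiting
# psychological patterns, and measurably harms users, meets both conditions.
print(needs_article5_review(ManipulationAssessment(True, True)))   # True
print(needs_article5_review(ManipulationAssessment(True, False)))  # False
```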
What This Means for Your Business Right Now
Three things that follow directly from Article 5 being live and enforced:
- Check your AI tools against the eight prohibitions before the next quarterly review. If you are using or building AI systems in any of the prohibited practice areas (emotion inference in workplace or education contexts, biometric categorisation, systems that exploit vulnerability), the clock on compliance is not counting down from August 2026. It has been running since February 2025. A minimal inventory-audit sketch follows this list.
- Watch your optimisation objectives. Systems trained on engagement maximisation, behavioural modification, or conversion optimisation deserve scrutiny under Article 5's subliminal manipulation prohibition. The question to ask: could this system's optimisation drive it toward techniques that exploit psychological vulnerabilities as a side effect? If the answer is yes, you have a compliance question to resolve now.[1]
- Read the Commission's February 2025 guidelines. They are the definitive interpretation of how regulators will apply Article 5. The guidelines break each prohibition into practical conditions and give concrete examples — including the financial hardship advertising example that clarifies the vulnerability exploitation prohibition in consumer contexts.[2] If your AI systems operate in consumer finance, healthcare, employment, or education, these guidelines are mandatory reading.
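As a starting point for that portfolio check, the sketch below walks a hypothetical AI inventory against the eight prohibited-practice categories listed earlier. The inventory format and the category tags are assumptions for illustration; a flagged system needs legal review, and an unflagged one is not automatically in the clear.

```python
# A minimal self-audit sketch: compare an AI inventory against the eight
# prohibited-practice categories from Article 5. Tags and inventory are
# illustrative assumptions, not an official taxonomy.

PROHIBITED_CATEGORIES = {
    "subliminal_or_manipulative_techniques",
    "exploiting_vulnerabilities",
    "social_scoring",
    "predictive_policing_by_profiling",
    "untargeted_facial_recognition_scraping",
    "emotion_inference_workplace_or_education",
    "biometric_categorisation_sensitive_traits",
    "realtime_remote_biometric_identification",
}

# Hypothetical inventory: system name -> capability tags from an internal review.
ai_inventory = {
    "hiring-screening-tool": {"emotion_inference_workplace_or_education"},
    "customer-support-chatbot": set(),
}

for system, tags in ai_inventory.items():
    hits = sorted(tags & PROHIBITED_CATEGORIES)
    if hits:
        print(f"{system}: escalate for Article 5 legal review -> {hits}")
    else:
        print(f"{system}: no prohibited-practice categories flagged")
```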
The Bottom Line
Article 5 is not a future concern. It is a present enforcement reality that has escaped the attention of most SMEs in part because the August 2026 date has dominated the headlines. The prohibitions represent the EU's absolute red lines — applications of AI that the Commission has judged incompatible with European values regardless of commercial benefit.
For businesses using AI in any of the eight prohibited categories: the time to act was February 2025. For businesses that have not yet reviewed their AI portfolio against Article 5, the enforcement infrastructure is in place, the guidelines have clarified the boundaries, and the fines are the highest in the Act. There is no further deadline to wait for.
This article is for informational purposes only and does not constitute legal advice.
Sources
[1] FireTail, analysis of the EU AI Act's Article 5 prohibited practices and their enforcement, published April 21, 2026.
[2] European Commission, guidelines on prohibited artificial intelligence practices under the AI Act, published February 4, 2025.