Deadlines
5 min read · 20 March 2026

The August 2026 Deadline Most Businesses Are Getting Wrong

Most businesses are planning for 2027. The real deadline is August 2026 — and it's already closer than you think.

The Date That Keeps Getting Ignored

Talk to SME owners about the EU AI Act and you'll hear a version of this: "we have time, there's a delay to 2027." Not exactly. The 2027 extension applies to high-risk AI conformity assessments — the heavy compliance machinery for things like AI hiring tools and credit scoring. For most businesses, the deadline that matters is August 2, 2026. And that one isn't moving.[1]

The law came into force August 2024. It rolls out in phases. Two of those phases have already passed.

What Already Happened

February 2025 was the first deadline. That's when banned AI practices became illegal — social scoring, emotion recognition in offices and schools, real-time facial recognition for mass surveillance. These are now prohibited outright, no grace period, no SME carve-out. If you're running a tool that categorizes employees by emotional state, you've been breaking EU law for over a year. Most businesses aren't — but it's worth knowing the line exists.

August 2025 was the second deadline. Rules for foundation AI models kicked in — the obligations on OpenAI, Google, Anthropic to document, test, and disclose their models. Unless you build AI models from scratch, this phase didn't affect you directly.

August 2, 2026: What Actually Hits You

Three things become enforceable on this date.

Transparency obligations. Every customer-facing AI — chatbots, voice bots, AI-generated emails — must disclose that it's AI. This is Article 50. It's not complicated, but it's mandatory from August 2026.
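For a chatbot, the disclosure can be as simple as a fixed notice in the first message the bot sends. A minimal sketch, with illustrative wording (the Act requires disclosure, not any particular phrasing):

```python
# Minimal Article 50 disclosure sketch: prepend a fixed AI notice to the
# first message a customer-facing bot sends. The wording below is
# illustrative, not a prescribed formula.

DISCLOSURE = "Hi! You're chatting with an AI assistant."

def greeting(first_bot_message: str) -> str:
    """Return the bot's opening message with the AI disclosure attached."""
    return f"{DISCLOSURE} {first_bot_message}"

print(greeting("How can I help you today?"))
# -> Hi! You're chatting with an AI assistant. How can I help you today?
```

The same idea applies to AI-generated emails and content: a visible, upfront label, not fine print.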

High-risk AI rules. Using AI in HR, credit decisions, education, or essential services? The full compliance package applies: documentation, human oversight, risk assessments, EU database registration. This is the complicated part — and where the proposed 2027 extension would help, once it clears trilogue.

AI literacy. Article 4. Staff who use AI tools need sufficient understanding of what those tools do and when to question them. Strictly speaking, Article 4 has applied since February 2025, but enforcement arrives with the August 2026 framework. A documented one-hour briefing per tool typically satisfies it.[3]

The 2027 Extension: What It Is and Isn't

The European Parliament voted in March 2026 to push high-risk AI conformity assessments back to December 2027. That's real relief for companies using AI in hiring, credit, or education — the most demanding compliance work just got an 18-month extension.

But it hasn't cleared trilogue yet. It's not law. And it doesn't touch transparency obligations. The chatbot disclosure rule, AI content labelling, the emotion recognition ban — all of those are still August 2026.[2]

Most SMEs only have chatbots and writing assistants anyway. For them, the 2027 extension is largely irrelevant. The August date is what matters.

What to Do Before August

First, figure out what AI your company actually uses. Then classify each tool — is it high-risk (HR decisions, credit, education), limited risk (chatbot, generated content), or minimal risk (writing assistant, spam filter)?
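The inventory-and-classify step above can be sketched as a simple lookup. The tiers and example use cases mirror the ones in this article; a real classification under the EU AI Act needs legal review, so treat this as a triage aid only:

```python
# Rough triage sketch: map each AI tool's use case to the risk tiers
# described above. Illustrative only -- the use-case labels are
# assumptions for this example, and real classification needs legal review.

HIGH_RISK = {"hr_decisions", "credit_scoring", "education", "essential_services"}
LIMITED_RISK = {"chatbot", "generated_content"}  # transparency duties (Art. 50)

def classify(use_case: str) -> str:
    """Return the compliance tier for a tool's use case."""
    if use_case in HIGH_RISK:
        return "high-risk: full compliance package"
    if use_case in LIMITED_RISK:
        return "limited risk: disclosure and labelling"
    return "minimal risk: no specific obligations"

# Hypothetical inventory for a small business:
inventory = {
    "support bot": "chatbot",
    "CV screener": "hr_decisions",
    "email drafts": "writing_assistant",
}
for tool, use in inventory.items():
    print(f"{tool}: {classify(use)}")
```

Anything landing in the high-risk tier is where the 2027 extension would matter; everything else points at the August 2026 checklist below.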

For most small businesses, the work is: add an AI disclosure to your chatbot, label any AI-generated content, brief your team on the tools they use. That's genuinely it. For companies with high-risk AI, the timeline is tighter and the work is heavier. Don't bank on the 2027 extension saving you — it's not final.

This article is for informational purposes only and is not legal advice.

Know Your EU AI Risk Level in 10 Minutes

Our free audit walks you through the exact questions for classifying your AI systems and shows what you need to do before August 2, 2026.

Start the free audit →

⚠️ Not legal advice. For orientation only.