News
5 min read · 26 March 2026

Only 8 of 27 EU States Have Designated an AI Enforcement Authority — 5 Months Before the Deadline

A new European Parliament research report reveals a major readiness gap: most EU member states still haven't named who will enforce the AI Act. Here's what that means for SMEs.

A Striking Gap at the Heart of EU AI Enforcement

The EU AI Act was supposed to have a functioning enforcement network in place by August 2, 2025. Each of the EU's 27 member states was legally required to designate a national single point of contact — the authority responsible for overseeing AI Act compliance within its borders.

More than seven months past that deadline, only 8 of 27 member states have done so, according to a March 2026 research report published by the European Parliament's research service.[1] The remaining 19 countries — nearly three-quarters of the EU — have not yet formally named who will enforce the world's most comprehensive AI law.

And the main enforcement clock — August 2, 2026 — is now less than five months away.

This article is for informational purposes only and does not constitute legal advice.

What August 2, 2026 Actually Activates

August 2, 2026 is when the bulk of the AI Act becomes enforceable. On that date:

  • High-risk AI system rules kick in for the systems listed in Annex III — including AI used in hiring, credit scoring, education access, biometric identification, and essential services.
  • Transparency rules under Article 50 take effect — meaning AI chatbots must disclose they're AI, emotion recognition systems must notify users, and AI-generated content must carry machine-readable watermarks.
  • AI regulatory sandboxes must be operational in each member state.
  • National enforcement authorities can begin levying fines — up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, and up to €15 million or 3% for other obligations.
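What a "machine-readable" marking for AI-generated content will look like in practice is expected to come from the still-pending harmonised standards, not from the Act's text itself. As a purely illustrative sketch — our own field names, not an official format — a minimal provenance record might look like this:

```python
import json
import hashlib
from datetime import datetime, timezone

def provenance_record(content: bytes, model_name: str) -> str:
    """Build a minimal machine-readable disclosure for AI-generated content.

    NOTE: illustrative only. The AI Act does not mandate this exact format;
    the official marking scheme will come from harmonised standards.
    """
    record = {
        "ai_generated": True,               # Article 50-style disclosure flag
        "generator": model_name,            # which system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the record to the content
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(provenance_record(b"example output", "acme-llm-v1"))
```

The point of the sketch is the principle, not the schema: the disclosure must be attached in a form software can parse, not just a visible "generated by AI" label.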

These are not distant theoretical consequences. They are scheduled to apply in 130 days.

Why the Readiness Gap Is a Problem — Even For You

The missing enforcement infrastructure creates what experts call regulatory arbitrage: companies may face meaningfully different enforcement pressure depending on which EU country they operate in. A business in a country without a functioning market surveillance authority faces a very different practical risk profile than one operating in a country where the regulator is already staffed and active.

This might sound like good news for businesses in slower jurisdictions. But there are two reasons it's not:

  1. The EU AI Office can step in. The European AI Office — headquartered in Brussels and operational since 2025 — has direct enforcement powers over general-purpose AI models (like GPT-4 or Gemini) and can support national enforcement where gaps exist.
  2. National authorities are being set up rapidly. The political pressure to have enforcement infrastructure in place by August 2026 is intense. Countries that haven't designated authorities yet are under increasing pressure from Brussels to move.

In other words: the absence of a named authority today does not mean enforcement won't happen. It means enforcement may arrive more suddenly — and less predictably — when it does.

The Standards Gap Makes It Worse

There's a second structural problem: the technical standards companies need to demonstrate compliance don't exist yet. The European standardisation bodies CEN and CENELEC were supposed to deliver harmonised technical standards by 2025. They missed that deadline. They are now targeting the end of 2026 — after the enforcement date.

This creates a genuine catch-22 for high-risk AI system operators: you need to comply, but the official benchmarks for measuring compliance haven't been published yet. The practical answer, according to legal experts, is to build robust internal documentation now — risk assessments, quality management processes, technical documentation — and be ready to align to standards as they emerge.

What This Means for SMEs

If you're a small or medium business using AI, here's the practical takeaway:

  • Don't assume slow enforcement means no enforcement. The infrastructure is being built and political pressure is strong. Being unprepared when your national authority becomes active is a real risk.
  • Focus on what's certain today. The AI literacy requirements (you must train staff who use AI) and prohibited AI practices have been in force since February 2025. These don't depend on national authorities and the European AI Office can enforce them directly.
  • Document everything. Even without finalised standards, building records of your AI systems — what they do, what data they use, what risks they pose — is the right first step. Good documentation protects you in any enforcement scenario.
  • The high-risk deadline may shift. Parliament is expected to formally vote today on pushing high-risk AI obligations back to December 2027. But this isn't final until trilogue concludes — and lower-risk transparency obligations remain on the August 2026 timetable.
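The "document everything" advice above can start very small: one structured inventory record per AI system you use. The field names below are our own suggestion, not terminology prescribed by the Act — a hypothetical minimal sketch:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """Minimal internal inventory entry for one AI system.

    Field names are illustrative, not prescribed by the AI Act.
    """
    name: str                      # internal name of the system
    purpose: str                   # what the system is used for
    vendor: str                    # who supplies it (or "in-house")
    data_categories: list[str]     # kinds of data it processes
    suspected_risk_level: str      # e.g. "high-risk (Annex III)", "limited", "minimal"
    known_risks: list[str] = field(default_factory=list)

records = [
    AISystemRecord(
        name="cv-screening-tool",
        purpose="Ranks incoming job applications",
        vendor="ExampleVendor GmbH",
        data_categories=["CVs", "cover letters"],
        suspected_risk_level="high-risk (Annex III: employment)",
        known_risks=["possible bias against candidates with career gaps"],
    ),
]

# Serialise the inventory so it can be versioned alongside other compliance docs
print(json.dumps([asdict(r) for r in records], indent=2))
```

A record like this is easy to keep in version control next to your other compliance documents, and it gives you something concrete to map onto the harmonised standards once they are published.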

The Bottom Line

The EU AI Act enforcement apparatus is not fully built. Standards are late. Most national authorities haven't been formally named. But the law is real, the deadline is approaching, and the political will to enforce it is clear. For SMEs, the right response is not to wait for the infrastructure to catch up — it's to build the internal compliance foundation now, so you're ready whenever enforcement arrives.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →