Policy Updates
4 min read · 18 April 2026

EU AI Act Regulatory Sandboxes: Most Member States Are Not Ready — But Yours Can Still Use Them

The August 2, 2026 deadline for AI regulatory sandboxes is approaching, but most EU Member States have not yet established theirs. An April 2026 European Parliament analysis reveals only one fully operational sandbox across all 27 Member States as of August 2025. Here is what that means for SMEs developing or deploying high-risk AI systems.

The Regulatory Lifeline Most SMEs Do Not Know About

There is a part of the EU AI Act designed specifically to help small businesses navigate compliance: AI regulatory sandboxes. They are mentioned in Article 57, and they offer something genuinely valuable — a controlled environment where you can develop and test an AI system with direct regulatory guidance, before you launch it commercially and before enforcement applies.[2]

The catch: the deadline for Member States to have these sandboxes operational is August 2, 2026.[1] And according to a European Parliament analysis published April 1, 2026, as of August 2025 — less than a year before that deadline — only one out of 27 Member States had a fully operational AI regulatory sandbox.

Here is what that gap means for you.

What a Regulatory Sandbox Actually Is

An AI regulatory sandbox is a controlled framework established by a national competent authority — a regulator — that gives AI developers or deployers the chance to test an AI system under regulatory supervision before full market release.[1] The AI Act describes it as a framework for developing, training, validating, and testing an AI system — including in real-world conditions — under a sandbox plan, for a limited time, under regulatory supervision.

The practical benefits for SMEs are concrete. Participation is voluntary, and Article 57 of the AI Act explicitly states that SMEs and startups can access sandboxes free of charge.[3] More importantly, the documentation you generate from sandbox participation can be used to demonstrate your compliance with the AI Act. And crucially — if you follow the regulator's guidance in good faith, you are protected from administrative fines for infringements of the Act during sandbox testing.[1]

You remain liable for damages to third parties caused by your AI during sandbox testing. But the fine protection while working within regulatory guidance is a meaningful reprieve for a small company that is not sure whether its system meets every technical requirement.

The State of Play: Who Has a Sandbox and Who Does Not

The April 2026 EP Think Tank analysis gives the clearest picture yet of where Member States stand on implementation.[1] As of August 2025:

  • Spain was the only Member State with a fully operational AI regulatory sandbox — it opened in 2025 and was already hosting 12 high-risk AI systems.
  • Five Member States were actively implementing their sandboxes.
  • Four Member States had declared their intention to establish a sandbox.
  • Sixteen Member States had not yet communicated any concrete plans publicly.[1]

Denmark stands apart as an early mover — it launched an AI regulatory sandbox in March 2024, run jointly by the Danish Agency for Digital Government and the Danish Data Protection Agency, giving private and public organisations access to AI-specific regulatory guidance.[1] If you are a Danish SME, your national sandbox already exists and is accepting participants.

The broader EU picture is uneven. Institutional approaches differ: in some countries, data protection authorities lead the sandbox; in others, new dedicated AI agencies are being established; some Member States are coordinating across multiple existing regulators.[1] This fragmentation is one of the three core challenges the EP Think Tank analysis identifies.

Three Problems the EP Think Tank Identified

Beyond the raw readiness gap, the analysis outlines three structural challenges that SMEs should understand — because they affect how useful the sandbox network will actually be by August.

Design challenges. A regulatory sandbox is not a single thing — it involves eligibility criteria, testing oversight, and an exit pathway. Getting that design right is technically demanding. Early-stage AI systems do not need the same kind of support as late-stage systems approaching market readiness. A well-designed sandbox needs to handle both.[1]

Fragmentation. National authorities are responsible for running their own sandboxes. Some will have more resources than others. If the capacity gap between Member State regulators is large, AI providers might gravitate toward sandboxes in countries perceived as less stringent — creating a race-to-the-bottom risk for the quality of enforcement across the EU.[1]

Time. The Commission has not yet adopted final secondary legislation specifying exactly how sandboxes should operate — the draft was published in December 2025 and feedback closed in January 2026.[4] Member States that have not started building their sandboxes are now designing them in a vacuum, without final guidance, while recruiting and training staff and building capacity under a hard deadline.

What This Means for Your Compliance Timeline

If your company is developing or deploying a high-risk AI system — an AI used in hiring, credit scoring, education, healthcare, or similar consequential applications — the sandbox question matters for your compliance timeline in two ways.

First, if your national sandbox is not yet operational by August 2, 2026, you lose the ability to use the sandbox as a pre-launch compliance tool for that deadline. The sandbox is not a mandatory pathway — participation is voluntary — but the fine protection and direct regulatory guidance it offers are most valuable precisely when you are not yet sure whether your system is compliant.

Second, the EP Think Tank notes that the EU is exploring an EU-level sandbox and stronger cross-border coordination through the Digital Omnibus legislation.[1] If your country does not have an operational sandbox, a future EU-level option could become relevant — though the Omnibus is still making its way through Parliament and is not law yet.

For SMEs in countries that do have operational or near-operational sandboxes — Denmark, Spain, and the five actively implementing — now is the time to make contact with the national competent authority and explore whether your system qualifies for participation.

What You Can Do Right Now

Three steps, depending on where you are in the AI development cycle:

  • Check your national sandbox status. The AI Act Service Desk at ai-act-service-desk.ec.europa.eu tracks which competent authorities are responsible in each Member State. If your country has an operational sandbox, the entry criteria and contact process are listed there.[3]
  • If you are developing a high-risk AI system, consider applying now. Even if your country's sandbox is not yet operational, contacting the national authority early puts you on the list for when it opens. The regulatory guidance you receive during sandbox testing is documentation you can use to demonstrate compliance — and if you follow the guidance in good faith, the fine protection is real.[1]
  • If your national sandbox is not coming in time, look at cross-border options. The AI Act allows Member States to establish joint sandboxes with other countries.[2] The European Commission is also required to develop a single interface with information on all national sandboxes — that resource should be available before August 2026.

The Bottom Line

The EU AI Act's sandbox framework is one of the most SME-friendly provisions in the entire regulation — free access, regulatory guidance, documentation you can use for compliance, and fine protection while you test. By design, it is there to lower the cost of getting compliance right.

The problem is that as of April 2026 — four months from the deadline — most Member States do not yet have an operational sandbox. The August 2, 2026 date is still the legal deadline. But the infrastructure gap means the compliance lifeline the EU promised to SMEs will not be equally available everywhere when companies need it most.

If you are an SME with a high-risk AI system in development, the lesson is clear: do not wait for your national sandbox to materialise. If it exists, use it. If it does not, get regulatory guidance now through other channels — and start building your compliance documentation with or without the sandbox.

This article is for informational purposes only and does not constitute legal advice.

Know Your AI Risk Level in 10 Minutes

Our free audit walks you through the exact questions needed to classify your AI systems and identify what you must do before August 2, 2026.

Start the free audit →
