Fines & Enforcement
5 min read · 12 March 2026

EU AI Act Fines: How Much Can You Actually Be Fined?

€35 million sounds scary. But what are the realistic enforcement risks for SMEs? Here's an honest look.

The Headline Numbers

The EU AI Act sets out a tiered penalty structure that's designed to get attention — and it does:

  • Up to €35 million or 7% of global annual turnover (whichever is higher) for violations involving prohibited AI practices — like deploying social scoring systems or biometric mass surveillance.
  • Up to €15 million or 3% of global annual turnover for violations involving high-risk AI system obligations — failing conformity assessments, missing documentation, inadequate human oversight.
  • Up to €7.5 million or 1% of global annual turnover for providing incorrect, incomplete, or misleading information to supervisory authorities.
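The "whichever is higher" mechanics are easy to misread, so here is a minimal sketch of how a penalty ceiling for one tier would be computed. The function name and structure are illustrative, not anything from the regulation's text; the SME rule it encodes is my reading of Article 99(6), which caps fines for SMEs and start-ups at the lower of the two figures rather than the higher.

```python
def penalty_ceiling(turnover_eur: float, fixed_cap_eur: float,
                    turnover_pct: float, is_sme: bool = False) -> float:
    """Illustrative maximum fine for one violation tier.

    For most undertakings the ceiling is the HIGHER of the fixed
    amount and the turnover percentage; for SMEs and start-ups,
    Article 99(6) takes the LOWER of the two instead.
    """
    turnover_based = turnover_eur * turnover_pct
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Prohibited-practice tier (EUR 35M / 7%) for a large firm with
# EUR 1bn turnover: 7% of turnover (EUR 70M) exceeds the fixed cap.
print(penalty_ceiling(1_000_000_000, 35_000_000, 0.07))           # 70000000.0

# Same tier for a small firm with EUR 2M turnover: the SME rule
# takes the lower figure, 7% of EUR 2M = EUR 140k, not EUR 35M.
print(penalty_ceiling(2_000_000, 35_000_000, 0.07, is_sme=True))  # 140000.0
```

The point of the sketch: for a large firm the percentage dominates, while for a small firm the same tier produces a ceiling orders of magnitude lower.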

These are the numbers that get quoted in media coverage. They're real — and they can apply to any EU business. But they require significant context to understand.[1]

The Critical Word: "Maximum"

Every penalty listed above is a maximum. The EU AI Act explicitly requires that national authorities apply fines that are "effective, proportionate and dissuasive" — with proportionality being a core principle. For SMEs, this matters a great deal.

The regulation specifically instructs authorities to take into account the size and economic resources of the offending organisation when setting penalties. For SMEs and start-ups it goes further: Article 99(6) caps each fine at whichever of the two figures — the fixed amount or the turnover percentage — is lower. A fine of €35 million applied to a 20-person SME would almost certainly fail the proportionality test in any legal challenge.

More practically: regulators set precedents by pursuing the most egregious cases first. That means large organisations using prohibited AI at scale, not a small business that forgot to put a chatbot disclosure on its customer service bot.

Who Actually Enforces This?

Enforcement of the EU AI Act operates at two levels:

National Competent Authorities (NCAs) handle most cases — they are responsible for supervising compliance within their member state. Each EU country is required to designate one or more NCAs, and many are still in the process of setting these up. Enforcement capacity will vary significantly by country in the early years.

The European AI Office, established within the European Commission, oversees General-Purpose AI (GPAI) models — the foundation models like GPT-4 and Gemini that power most AI products. The AI Office has already begun its work, issuing guidance and initiating consultation processes with major AI developers.[2]

The Realistic Enforcement Timeline for SMEs

Here's the honest picture: the European AI Office is focused on GPAI model providers — OpenAI, Google, Meta, Anthropic. GPAI oversight will continue to dominate regulatory attention for the next 12–18 months.

National authorities are still staffing up. Enforcement of SME compliance is likely to begin with the highest-profile, highest-risk cases: companies using banned AI practices, or using high-risk AI with no oversight whatsoever.

But "enforcement hasn't started yet" is not a compliance strategy. When GDPR launched in 2018, many SMEs assumed they'd fly under the radar. Some did. Many others faced complaints, data subject requests they weren't equipped to handle, and costly remediation — and the major fines followed 18–24 months after the law came into force. The EU AI Act trajectory looks similar.

The Risk That Matters More Than Fines for SMEs

For most small businesses, the realistic near-term risk isn't a regulatory fine. It's reputational damage.

Consider: a customer discovers your company is using AI to screen job applications without disclosure. They post about it. A journalist picks it up. You're now associated with "secret AI hiring" — which, as of 2026, is also illegal. The reputational cost of that story could far exceed any proportionate fine a regulator might issue.

Transparency disclosures aren't just legal compliance. They're trust signals. Customers who know you're using AI responsibly — and who are told so explicitly — are more likely to trust you than those who discover it accidentally.

The Honest Advice

If your business only uses minimal- or limited-risk AI (writing assistants, spam filters, recommendation engines, chatbots with appropriate disclosures), your enforcement risk is genuinely low right now. Adding transparency notices and basic documentation is all you realistically need to do before August 2026.

If you're using high-risk AI — particularly in HR, credit, or education — your risk is material. High-risk AI with no conformity assessment, no human oversight measures, and no documentation is exactly what the regulation was designed to address. Don't wait.

The cost of compliance is measured in hours. The cost of non-compliance, even in reputational terms, could be measured in customers.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →