Plain English Guide

The EU AI Act: What SMEs Actually Need to Know

No legal jargon. No 200-page PDFs. Just a clear explanation of what the EU AI Act is, who it affects, and what your business needs to do before August 2026.

Find Out Your Risk Level →

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive law specifically regulating artificial intelligence. It was adopted by the European Union in 2024, with obligations phasing in from February 2025 and the main deadline for most businesses falling on August 2, 2026.

The regulation takes a risk-based approach: the more potential harm an AI system could cause, the stricter the requirements. A recruitment AI that helps decide who gets hired has more rules than an AI that suggests your next playlist. The law covers any business that operates in the EU or provides services to EU customers — regardless of where the company is based.

For most small and medium businesses, the practical impact is straightforward: tell people when they're interacting with AI, keep basic records of what AI you use, and have a simple policy covering how you use it responsibly. For businesses using AI in hiring, credit decisions, or healthcare, there's more work involved — but it's manageable with the right guidance.

Key Deadlines

The EU AI Act rolls out in phases. Here's what matters and when.

Feb 2, 2025
In Effect

Prohibited AI Provisions

The Act's outright bans took effect — covering social scoring, subliminal manipulation, and most real-time biometric surveillance.

Aug 2, 2025
In Effect

GPAI Model Rules

Rules for general-purpose AI (GPAI) models — like GPT-4, Gemini, or Claude — came into force. These obligations fall mainly on the providers of the models; most rules affecting businesses that merely use them arrive in 2026.

Aug 2, 2026
Main Deadline

Transparency & General Provisions ⭐

THE main deadline for most businesses. From this date, transparency obligations apply (disclose when customers interact with AI), along with general governance requirements and the obligations for most high-risk systems listed in Annex III.

Aug 2, 2027

High-Risk Systems (Annex I)

Extended deadline for high-risk AI systems listed in Annex I — AI used as a safety component in products already covered by EU product legislation, such as machinery, medical devices, and toys.

Aug 2, 2030

High-Risk Systems (Legacy)

Final compliance deadline for legacy high-risk AI systems — those already placed on the market or in service before the earlier deadlines — most notably systems used by public authorities.

Who Does It Apply To?

The EU AI Act classifies AI use into four risk levels. Your obligations depend on where your AI use lands.

🚫 Prohibited

These AI uses are banned entirely under the EU AI Act.

Social scoring systems (rating people based on behaviour)
Subliminal manipulation of behaviour
Real-time biometric surveillance in public spaces (with narrow exceptions)
AI that exploits vulnerabilities of specific groups
Emotion recognition in workplaces and schools (in most cases)

⚠️ High Risk

These AI uses are allowed but require significant compliance work before deployment.

AI in recruitment and HR screening
AI in credit scoring and financial decisions
AI in healthcare diagnostics
AI in critical infrastructure (energy, water, transport)
AI used in education to evaluate students
Biometric identification systems

⚡ Limited Risk

These AI uses are allowed but require specific transparency disclosures.

Customer service chatbots (must disclose AI interaction)
AI-generated content (must label as AI-made)
Deepfakes and synthetic media
Emotion recognition systems (outside the prohibited settings above)

✅ Minimal Risk

Most AI uses fall here. No specific EU AI Act requirements — but good practice still applies.

AI-powered spam filters
AI in games and entertainment
AI for internal productivity (summarising, drafting emails)
Recommendation engines (Netflix, Spotify-style)
Marketing optimisation tools

What SMEs Need to Do

Prioritised by effort level. Start with the quick wins.

🟢 Low effort: quick wins
  • Add AI disclosure notices to any chatbot ('This chat uses AI')
  • Label AI-generated content on your website
  • Brief your team on what AI they're using
🟡 Medium effort: documentation basics
  • Write a short AI policy (what AI you use, how, and governance rules)
  • Create an AI system register (list all AI tools and their purpose)
  • Review your privacy notices to include AI data processing
  • Ensure GDPR compliance for any AI processing personal data
🔴 High effort: high-risk requirements (high-risk systems only)
  • Conduct a conformity assessment for each high-risk AI system
  • Implement and document human oversight processes
  • Register high-risk systems with national authorities
  • Maintain technical documentation as required by the Act
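The AI system register from the checklist above can start as something very simple — a spreadsheet or even a short script. Here's a minimal sketch in Python; the tool names, column headings, and risk labels are illustrative examples, not an official template from the Act:

```python
import csv
from io import StringIO

# Illustrative entries only — your own register should list every AI tool
# your business actually uses, its purpose, and its risk level.
register = [
    {"system": "Support chatbot", "purpose": "Customer service",
     "personal_data": "yes", "risk_level": "limited"},
    {"system": "Spam filter", "purpose": "Email filtering",
     "personal_data": "yes", "risk_level": "minimal"},
    {"system": "CV screening tool", "purpose": "Recruitment",
     "personal_data": "yes", "risk_level": "high"},
]

def to_csv(rows):
    """Serialise the register to CSV so it can be shared and version-controlled."""
    buf = StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def high_risk_systems(rows):
    """Systems needing conformity assessment, human oversight, and registration."""
    return [r["system"] for r in rows if r["risk_level"] == "high"]

print(high_risk_systems(register))  # → ['CV screening tool']
```

A plain spreadsheet works just as well — the point is having one place that lists every AI tool, what it does, whether it touches personal data, and where it sits in the risk pyramid.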

Glossary

Key terms from the EU AI Act, explained plainly.

GPAI (General-Purpose AI)
AI models trained on large datasets that can be used for many different tasks — like ChatGPT, Claude, or Gemini. The EU AI Act has specific rules for companies providing GPAI models.
High-risk AI system
An AI system used in a context where errors could significantly harm people — like hiring, credit decisions, healthcare, or critical infrastructure. These require the most compliance work.
Conformity assessment
A mandatory process for high-risk AI systems where you document how the system works, test it, and verify it meets EU AI Act requirements — before deploying it.
Transparency obligation
The requirement to tell people when they're interacting with AI. For example: disclosing that a chatbot is AI, or that content was AI-generated. Applies to most Limited Risk AI uses.
AI system register
A document listing all AI systems your company uses — including what they do, what data they process, and how decisions are made. Required for high-risk systems; good practice for all.
Prohibited AI
AI uses that are banned entirely under the EU AI Act, including social scoring systems, subliminal manipulation, and most real-time biometric surveillance.
DPIA (Data Protection Impact Assessment)
A GDPR requirement for high-risk data processing. If your AI processes personal data in a way that could affect people's rights, you likely need a DPIA.
Technical documentation
Detailed records required for high-risk AI systems: what the system does, how it was trained, its limitations, testing results, and how it makes decisions.
Deployer
Under the EU AI Act, any company that uses an AI system in a professional context is a 'deployer' — even if they didn't build it. Most SMEs are deployers.
Provider
A company that builds or sells an AI system. If you've custom-built AI, you may be both a provider and a deployer, with additional obligations.

Find out your risk level in 5 minutes

Take the free audit. Answer 5 questions about your AI use and get a personalised risk assessment and action plan.

Start Free Audit →

⚠️ This guide is for educational purposes only — not legal advice.