The EU AI Act: What SMEs Actually Need to Know
No legal jargon. No 200-page PDFs. Just a clear explanation of what the EU AI Act is, who it affects, and what your business needs to do before August 2026.
What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive law specifically regulating artificial intelligence. It was adopted by the European Union in 2024 and entered into force in August 2024, with obligations phasing in from February 2025 and the main deadline for most businesses falling on August 2, 2026.
The regulation takes a risk-based approach: the more potential harm an AI system could cause, the stricter the requirements. A recruitment AI that helps decide who gets hired has more rules than an AI that suggests your next playlist. The law covers any business that operates in the EU or provides services to EU customers — regardless of where the company is based.
For most small and medium businesses, the practical impact is straightforward: tell people when they're interacting with AI, keep basic records of what AI you use, and have a simple policy covering how you use it responsibly. For businesses using AI in hiring, credit decisions, or healthcare, there's more work involved — but it's manageable with the right guidance.
Key Deadlines
The EU AI Act rolls out in phases. Here's what matters and when.
Prohibited AI Practices
February 2, 2025: Bans on prohibited AI practices took effect — including social scoring, subliminal manipulation, and most real-time biometric surveillance.
GPAI Model Rules
August 2, 2025: Rules for general-purpose AI (GPAI) models — such as GPT-4, Gemini, and Claude — came into force. Providers of these models must comply; businesses using them face new transparency rules.
Transparency & General Provisions ⭐
August 2, 2026: The main deadline for most businesses. Transparency obligations for AI systems (disclose when customers interact with AI), general governance requirements, and full GPAI user-facing rules all apply.
High-Risk Systems (Annex I)
August 2, 2027: Extended deadline for high-risk AI systems listed in Annex I — AI used as a safety component in products already regulated by EU law, such as machinery, medical devices, and toys.
High-Risk Systems (Annex III)
Final phase: full compliance for the remaining high-risk AI systems listed in Annex III — including AI systems that were already in service before the Act applied.
Who Does It Apply To?
The EU AI Act classifies AI use into four risk levels. Your obligations depend on where your AI use lands.
Unacceptable risk: these AI uses are banned entirely under the EU AI Act.
High risk: these AI uses are allowed but require significant compliance work before deployment.
Limited risk: these AI uses are allowed but require specific transparency disclosures.
Minimal risk: most AI uses fall here. No specific EU AI Act requirements, but good practice still applies.
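As a rough illustration, the four tiers can be thought of as a lookup from AI use to obligations. A minimal sketch in Python — the example categorisations are illustrative, not legal determinations:

```python
# Illustrative mapping of AI uses to EU AI Act risk tiers.
# These categorisations are examples only, not legal determinations.
RISK_TIERS = {
    "social scoring of citizens": "prohibited",
    "cv screening for hiring": "high",
    "customer support chatbot": "limited",
    "spam filtering": "minimal",
}

OBLIGATIONS = {
    "prohibited": "banned entirely",
    "high": "conformity assessment, human oversight, registration",
    "limited": "transparency disclosures",
    "minimal": "no specific EU AI Act requirements",
}

def obligations_for(use: str) -> str:
    # Most AI uses fall into the minimal tier by default.
    tier = RISK_TIERS.get(use.lower(), "minimal")
    return f"{tier}: {OBLIGATIONS[tier]}"

print(obligations_for("CV screening for hiring"))
# -> high: conformity assessment, human oversight, registration
```

In practice the classification depends on the specific deployment context, not just the tool — the same chatbot can be limited-risk in customer support and high-risk if it screens job applicants.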
What SMEs Need to Do
Prioritised by effort level. Start with the quick wins.
- ☐ Add AI disclosure notices to any chatbot ('This chat uses AI')
- ☐ Label AI-generated content on your website
- ☐ Brief your team on what AI they're using
- ☐ Write a short AI policy (what AI you use, how, and governance rules)
- ☐ Create an AI system register (list all AI tools and their purpose)
- ☐ Review your privacy notices to include AI data processing
- ☐ Ensure GDPR compliance for any AI processing personal data
- ☐ Conduct a conformity assessment for each high-risk AI system
- ☐ Implement and document human oversight processes
- ☐ Register high-risk systems with national authorities
- ☐ Maintain technical documentation as required by the Act
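The AI system register can start as something as simple as a spreadsheet. A minimal sketch in Python that writes one as a CSV file — the fields and example entries are illustrative, not prescribed by the Act:

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AISystem:
    """One row in a lightweight AI system register (illustrative fields)."""
    name: str           # the tool or product name
    purpose: str        # what your business uses it for
    risk_level: str     # your own assessment: minimal / limited / high
    discloses_ai: bool  # do users know they're interacting with AI?
    owner: str          # who in the team is responsible

# Example entries -- not real compliance assessments
register = [
    AISystem("Website chatbot", "Customer support FAQ", "limited", True, "Ops"),
    AISystem("CV screening tool", "Shortlisting applicants", "high", True, "HR"),
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(register[0])))
    writer.writeheader()
    for system in register:
        writer.writerow(asdict(system))
```

Whatever format you choose, the point is a single, current list of every AI tool in use, its purpose, and who owns it — that one document answers most of the questions a regulator (or a nervous customer) will ask first.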
Glossary
Key terms from the EU AI Act, explained plainly.
Find out your risk level in 5 minutes
Take the free audit. Answer 5 questions about your AI use and get a personalised risk assessment and action plan.
Start Free Audit →
⚠️ This guide is for educational purposes only — not legal advice.