Risk Categories
7 min read · 18 March 2026

Is Your AI System "High Risk" Under the EU AI Act? A Plain-Language Guide

HR software, credit tools, chatbots — understanding which AI systems require the most compliance work.

Four Categories, Very Different Consequences

The EU AI Act doesn't treat all AI the same way. It uses a tiered risk approach, classifying every AI system into one of four categories: Prohibited, High Risk, Limited Risk, and Minimal Risk. Where your system falls determines what you have to do.[1]

For most SMEs, the critical question is: are any of my AI systems high-risk? If so, the compliance workload is significantly heavier. Here's how to figure out where you stand.

High-Risk AI: The Full List

Annex III of the EU AI Act defines the categories of AI systems considered high-risk. These aren't edge cases — they include tools used by thousands of ordinary businesses every day:

Employment and HR

AI used in hiring and workforce management is explicitly listed as high-risk. This includes:[1]

  • CV and application screening AI — any system that automatically filters or ranks candidates
  • AI-assisted interview tools — including platforms that analyse facial expressions or speech patterns during video interviews
  • Performance monitoring AI — systems that automatically assess worker output or behaviour
  • Termination decision support tools

If you're using an HR platform with built-in AI — even one from a major vendor like Workday, SAP, or Greenhouse — check whether it's doing any of these things. If it is, it's likely high-risk.

Credit and Finance

AI that influences financial decisions about individuals is high-risk:

  • Credit scoring models
  • Loan approval or pricing AI
  • Insurance risk pricing using AI-derived scores

If you operate in financial services and any part of your decisioning involves an AI component, this almost certainly applies to you.

Education

  • AI that determines access to educational opportunities
  • Automated exam scoring systems
  • AI monitoring student behaviour during exams

Law Enforcement and Critical Infrastructure

These categories matter primarily for larger organisations: AI tools used by police or courts, and AI embedded in energy grids, water systems, or transport networks.

Why "High Risk" Matters: The Compliance Requirements

If you're a provider of a high-risk AI system (i.e., you built it), you face the most demanding obligations:[2]

  • Conformity assessment — a formal process to verify the system meets EU AI Act standards
  • Technical documentation — detailed records of how the system works, the data it was trained on, and its performance metrics
  • Human oversight measures — the ability for humans to monitor, intervene, and override
  • Registration — high-risk AI systems must be registered in the EU AI database before deployment
  • Post-market monitoring — ongoing tracking of the system's performance in real-world use

If you're a deployer (you're using someone else's high-risk AI), your obligations are lighter but still real: you must use it as instructed, implement human oversight, conduct a data protection impact assessment where required, and keep logs of how it's being used.

Provider vs. Deployer: Which Are You?

This distinction is crucial. Here are two examples to illustrate:

Scenario A: You use an AI-powered CV screening tool purchased from a third-party HR software vendor. → You are a deployer. The vendor is the provider. You have deployer obligations (correct use, oversight, logs) but not the full provider burden.

Scenario B: Your engineering team built a machine learning model that scores customer creditworthiness. → You are the provider. All high-risk obligations apply, including conformity assessment and EU database registration.

Limited Risk: Chatbots and Content AI

Most customer-facing AI chatbots fall into the limited risk category. The obligation here is primarily about transparency: users must be informed they're talking to an AI system.[3] AI that generates synthetic content (text, images, audio) also falls here — it must be labelled as AI-generated in most contexts.

This is manageable. A short disclosure in your chatbot's first message is often all that's needed.
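As a sketch of what that disclosure can look like in practice (the wording, constant, and function names here are illustrative, not prescribed by the Act):

```python
# Illustrative only: the EU AI Act requires that users be informed they are
# interacting with an AI system, but does not prescribe exact wording.
AI_DISCLOSURE = (
    "You're chatting with an AI assistant. "
    "You can ask to speak with a human at any time."
)

def first_message(greeting: str) -> str:
    """Prepend the AI disclosure to the chatbot's opening message."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"

print(first_message("Hi! How can I help you today?"))
```

The point is simply that the disclosure should appear before any substantive interaction, not buried in a terms-of-service page.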

Minimal Risk: Most Everyday AI

The good news: the majority of AI tools used by SMEs are minimal risk. Spam filters, email writing assistants, product recommendation engines, SEO tools, image editing — none of these carry specific legal obligations under the EU AI Act. There's no rule saying you must do anything special for them, though documenting your use of them is good practice.

The key action is classification. Don't assume you're in the minimal-risk bucket without checking. Our free audit walks you through the exact questions to determine your risk level in under 10 minutes.
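The classification logic described above can be sketched as a simple lookup. This is a simplification of the article's summary, not a legal test: the use-case names are invented for illustration, and a real assessment must work from the Act's actual Annex III wording and its exemptions.

```python
# Illustrative sketch only. Real classification requires reading Annex III
# and the Act's exemptions; these use-case labels are this article's
# simplification, not legal definitions.

HIGH_RISK_USES = {
    "cv_screening",            # automatically filters or ranks candidates
    "interview_analysis",      # analyses speech/expressions in interviews
    "performance_monitoring",  # automatically assesses worker output
    "credit_scoring",          # scores individual creditworthiness
    "loan_pricing",
    "insurance_risk_pricing",
    "exam_scoring",
    "education_access",
}

LIMITED_RISK_USES = {
    "customer_chatbot",        # must disclose users are talking to an AI
    "synthetic_content",       # generated text/images/audio must be labelled
}

def classify(use_case: str) -> str:
    """Map a use case to one of the Act's tiers (simplified).

    Prohibited practices (e.g. social scoring) are out of scope here.
    """
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal"

print(classify("cv_screening"))      # → high
print(classify("customer_chatbot"))  # → limited
print(classify("spam_filter"))       # → minimal
```

Even a crude inventory like this, listing every AI system you use alongside its tier, is a useful first artifact to produce before a fuller assessment.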

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →