Risk Categories
6 min read · 18 March 2026

High-Risk AI: The Distinction Most Businesses Get Wrong

You bought an AI hiring tool. You didn't build it. But you're still on the hook. Here's how provider vs. deployer actually works — and which category your business falls into.

The Thing People Keep Getting Wrong

"We didn't build it, so it's not our problem." This is the most common misconception about high-risk AI compliance. The EU AI Act disagrees. It splits the world into providers (companies that build AI systems) and deployers (companies that use them). If you license a screening tool from HireVue to filter job applications, you're a deployer. You still have obligations.[1]

The obligations are different — lighter than what HireVue faces as the provider — but real. You must use the tool within its intended purpose, implement meaningful human oversight, conduct a data protection impact assessment if required, and keep logs of how it's used. "We just bought it off the shelf" is not a compliance strategy.

What Makes an AI System High-Risk

Annex III of the AI Act lists the categories. These aren't hypothetical edge cases. They're tools that thousands of ordinary businesses use every day.[2]

HR and hiring is the big one for SMEs. Any AI that screens CVs, ranks candidates, analyses interview performance, monitors employee output, or informs termination decisions is high-risk. Workday has AI-powered talent tools. So does SAP SuccessFactors. So does Greenhouse. If you're using the AI features in your HR platform — not just the admin interface, but the AI-driven screening or scoring — check whether it's doing something on this list. It probably is.

Credit and financial decisions also land here. AI that contributes to loan approvals, creditworthiness scores, insurance pricing, or investment eligibility is high-risk. If you work in fintech or financial services and AI is anywhere in your decisioning process, assume it applies.

Education is the less obvious one: AI that determines who gets admitted to programmes, automates exam scoring, or monitors student behaviour during exams falls under Annex III. EdTech platforms increasingly bundle this in by default.
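If you want to make that check systematic, here is a minimal sketch of a self-audit you could run over your own tool inventory. The category keywords are illustrative assumptions pulled from the examples above, not an official or exhaustive Annex III taxonomy.

```python
# A minimal sketch of an Annex III self-check for a tool inventory.
# The categories and use-case keywords below are illustrative assumptions
# drawn from this article, not an official or exhaustive taxonomy.

HIGH_RISK_USES = {
    "hr": {"cv screening", "candidate ranking", "interview analysis",
           "employee monitoring", "termination decisions"},
    "credit": {"loan approval", "creditworthiness scoring",
               "insurance pricing", "investment eligibility"},
    "education": {"admissions decisions", "automated exam scoring",
                  "exam proctoring"},
}

def annex_iii_flags(uses: set[str]) -> list[str]:
    """Return the Annex III categories these uses appear to touch."""
    return [cat for cat, risky in HIGH_RISK_USES.items() if uses & risky]

# Example: a hypothetical HR platform with its AI features switched on.
flags = annex_iii_flags({"cv screening", "candidate ranking"})
if flags:
    print("Likely high-risk under:", ", ".join(flags))
```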

What High-Risk Actually Requires

If you're the provider — meaning you built the system — the compliance burden is substantial. Conformity assessment. Technical documentation covering training data, architecture, performance metrics. Human oversight mechanisms. EU database registration before deployment. Post-market monitoring after launch.

If you're the deployer — meaning you bought it — the requirements are lighter but not optional. Use it as instructed by the provider. Maintain human oversight. Do a data protection impact assessment where required. Keep usage logs. That's the core of deployer duty.

The fine print here is actually important: if your vendor hasn't given you the documentation you need to fulfil your deployer obligations, that's a problem with your contract, not a compliance exemption. Chase your vendors.[3]
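Of those deployer duties, the usage log is the easiest to start on today. Here is a minimal sketch of what one log record might capture for an AI-assisted hiring decision; the fields are assumptions based on the duties above (intended purpose, human oversight, traceability), not a format the Act prescribes.

```python
# A minimal sketch of a deployer usage log for an AI-assisted decision.
# The fields are assumptions based on the deployer duties discussed above;
# the Act does not prescribe this exact format.

import json
from datetime import datetime, timezone

def log_ai_decision(path: str, *, system: str, purpose: str,
                    ai_output: str, human_reviewer: str,
                    final_decision: str) -> None:
    """Append one audit record showing the AI output and the human call."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # which AI tool produced the output
        "intended_purpose": purpose,       # should match the provider's stated use
        "ai_output": ai_output,            # what the tool recommended
        "human_reviewer": human_reviewer,  # evidence of human oversight
        "final_decision": final_decision,  # the decision a person actually made
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_usage.jsonl",
                system="screening-tool-v2", purpose="CV pre-screening",
                ai_output="rank: 14/120", human_reviewer="j.doe",
                final_decision="invited to interview")
```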

The Other Categories (Most SMEs Live Here)

Not everything is high-risk. Customer service chatbots are limited risk — you just need to tell users they're talking to AI. Writing assistants, spam filters, product recommendation engines, SEO tools: these are minimal risk. No specific legal obligations, though documenting what you use is sensible practice.

The majority of SMEs using standard SaaS tools are somewhere between limited and minimal risk. The compliance work is manageable: add a disclosure to your chatbot, label AI-generated content, brief your team. Not trivial, but far from the full conformity assessment world of high-risk AI.
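For the chatbot disclosure, the fix really can be one line prepended to the bot's first reply. A minimal sketch, assuming a hypothetical generate_reply function standing in for whatever backend you actually use:

```python
# A sketch of the limited-risk transparency duty: tell users up front that
# they are talking to AI. `generate_reply` is a hypothetical stand-in for
# your real chatbot backend.

AI_DISCLOSURE = "You're chatting with an AI assistant, not a human agent."

def generate_reply(message: str) -> str:
    return f"Thanks for your message: {message!r}"  # placeholder backend

def chatbot_reply(message: str, is_first_message: bool) -> str:
    reply = generate_reply(message)
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(chatbot_reply("Where is my order?", is_first_message=True))
```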

The Honest Question to Ask Yourself

Does any AI in your business influence decisions about individual people — who gets hired, who gets credit, who gets access to services? That's the clearest signal you're in high-risk territory. Start your compliance work now. The proposed 2027 extension for high-risk obligations isn't final yet, and even if it passes, you don't want to be scrambling in late 2027.

This article is for informational purposes only and is not legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before August 2, 2026.

Start Free Audit →
