The Tool Nobody Told You to Worry About
Somewhere in your company right now, an AI agent is doing something without a human checking the output first. Maybe it is routing support tickets. Maybe it is flagging unusual transactions. Maybe it is updating records in your CRM after reading an email. Whatever it does, it is probably acting autonomously — making decisions, triggering workflows, moving things forward without a person in the loop.
That is increasingly normal in modern businesses. It is also, under the EU AI Act, a meaningfully different compliance situation from running a chatbot or an AI-powered report generator. Autonomous agents are categorised differently, face harder obligations, and by August 2026 will be enforcement priorities, not because regulators announced it loudly but because the law has always said so.
What Makes an AI Agent "High Risk"
The EU AI Act uses a risk-tiered framework. Simple chatbots face light transparency requirements. AI that recommends a hiring decision faces high-risk obligations. Autonomous agents — systems that take actions, approve transactions, and trigger real-world outcomes without a human reviewing every output — sit firmly in the high-risk category for any consequential application.[2]
The reason is structural. The AI Act requires that high-risk AI systems be designed so that their outputs can be understood by the humans who operate them.[4] For a chatbot, that is relatively straightforward: a person reads the response and decides whether to share it. For an autonomous agent that approves credit decisions, escalates support tickets, or flags transactions as suspicious without human review, that requirement is much harder to meet. The system is not just generating content; it is taking consequential actions.
Enterprise compliance teams have been aware of this distinction for some time. What is newer is that SMEs are now using agentic tools as well — not because they built them, but because they are embedded in the SaaS platforms they already pay for. Your ERP system might be running an AI agent. Your accounting software might have an autonomous workflow engine. Your HR platform might include agents that screen and score candidates without human review. The question is not whether you built an AI agent — it is whether you are running one.
The August 2026 Countdown for Agents Is Real
The EU AI Act's enforcement timeline does not distinguish between startups and enterprises. High-risk obligations for AI systems — including autonomous agents used in employment, credit, education, and critical infrastructure — apply from August 2, 2026 under the original text. The Digital Omnibus may push the conformity assessment deadline for high-risk systems to December 2027, but it has never been in question that August 2026 is when enforcement activity begins.[2]
For autonomous agents specifically, enforcement from August 2026 means two things. First, national market surveillance authorities (the bodies responsible for AI Act compliance in each EU country) will be able to investigate AI systems that appear to operate without adequate human oversight, documentation, or risk controls. Second, the EU AI Office in Brussels has cross-border enforcement authority for general-purpose AI (GPAI) models and systemic risks, and is actively building its supervisory capacity.
The Covasant analysis published April 13 frames the stakes plainly: organisations running autonomous agents across fraud detection, credit decisioning, HR automation, or regulatory reporting need to treat the August deadline as an active countdown, not a future consideration.[2] That framing applies to SMEs as much as to enterprises, if not more so — smaller companies have less regulatory capacity and fewer resources to retrofit governance into already-deployed agentic systems.
What Article 9 and Article 13 Actually Require for Agents
Two parts of the AI Act are particularly relevant for anyone running autonomous agents.
Article 9, the risk management provision, requires that high-risk AI systems have an ongoing, evidence-based risk management process built into every stage of the system's lifecycle, from development through deployment into production, and that this process be kept under continuous review.[3] For an autonomous agent, this means being able to demonstrate that you identified the risks the agent could pose before you deployed it, that you have controls in place to mitigate those risks, and that you are actively monitoring whether those controls are working.
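To make that concrete, here is a minimal sketch of a risk-register entry in Python. Everything in it is illustrative rather than drawn from the Act's text or any specific tool: the record type, the field names, the example agent, and the 90-day review window are all assumptions about what a reasonable register might track.

```python
# A sketch of one risk-register entry for an autonomous agent. The schema
# is an assumption about what useful Article 9 evidence looks like, not a
# format the Act prescribes.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    agent_id: str          # which agent this risk belongs to
    risk: str              # identified before deployment
    control: str           # the mitigation actually in place
    monitoring_check: str  # how you verify the control still works
    last_reviewed: date    # Article 9 expects ongoing review, not a one-off

register = [
    RiskEntry(
        agent_id="credit-prescreen-agent",  # hypothetical agent
        risk="declines applicants on features that proxy protected traits",
        control="feature allowlist; borderline declines routed to a human",
        monitoring_check="monthly decline-rate report segmented by cohort",
        last_reviewed=date(2026, 1, 15),
    ),
]

# "Under continuous review" implies the register itself needs a heartbeat:
# flag entries nobody has looked at in the last quarter.
for entry in register:
    if date.today() - entry.last_reviewed > timedelta(days=90):
        print(f"STALE REVIEW: {entry.agent_id} - {entry.risk}")
```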
Article 13, the transparency provision, requires that high-risk AI systems be designed so that deployers can understand the system's output. This is sometimes framed as a 'glass box' requirement, in deliberate contrast to the 'black box' systems it is meant to rule out. An AI agent from a third-party vendor must be interpretable by its users rather than opaque about its reasoning, and it must ship with enough documentation to ensure safe and lawful use.[4]
These two requirements together create a practical challenge for autonomous agents: can you explain what the agent decided and why, and can you show that you managed the risk of the decision being wrong? If the answer to either is no — if the agent operates in a closed loop with no human oversight and no audit trail — you have a compliance gap.
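One way to see what closing that gap looks like in practice is a decision audit trail. The sketch below is illustrative: the record type, field names, and logging function are assumptions about what "explain what the agent decided and why" means in code, not an API from any real agent framework.

```python
# A minimal decision audit trail for an autonomous agent, written against
# an assumed schema rather than any vendor's API. One record per action,
# answering the Article 13 question (what was decided, and why) and the
# Article 9 question (was the risk of a wrong decision controlled?).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AgentDecision:
    agent_id: str         # which agent acted (ties back to your inventory)
    action: str           # e.g. "flag_transaction", "escalate_ticket"
    inputs_summary: str   # what the agent saw, in human-reviewable form
    rationale: str        # the agent's stated basis for the action
    human_reviewed: bool  # did a person check this before it took effect?
    reviewer: str | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(decision: AgentDecision, path: str = "agent_audit.jsonl") -> None:
    """Append one decision record to an append-only JSONL log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

# A hypothetical unreviewed action: precisely the kind of record you want
# to be able to produce when a regulator or auditor asks.
log_decision(AgentDecision(
    agent_id="fraud-screener-v2",
    action="flag_transaction",
    inputs_summary="txn 48812: 9,400 EUR, new payee, 02:14 local time",
    rationale="amount and hour outside the customer's 90-day pattern",
    human_reviewed=False,
))
```

The design choice worth noting: the record is written at decision time, by the system itself, so the audit trail exists before anyone asks for it. Reconstructing rationales after the fact is a much weaker position under either article.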
Why Shadow AI and Agent Sprawl Are the Real Problem for SMEs
The compliance risk around autonomous agents is amplified by a pattern that compliance analysts call 'shadow AI' or 'agent sprawl': individual teams or departments within a company deploying AI agents without centralised oversight, creating a situation where the business does not fully know what its own AI systems are doing.[1]
This is not an enterprise-specific problem. A 50-person company using three different SaaS platforms, each with embedded AI agents, may have agents acting on its behalf in ways nobody has documented. The EU AI Act does not exempt small businesses from the requirement to know what AI is operating in the organisation and to maintain appropriate oversight over it.
The governance issue starts with a basic inventory problem: you cannot manage what you cannot see. The AI Act's Article 9 risk management requirement is practically impossible to satisfy if you do not know which autonomous agents are operating in your business and what they are doing.[3]
What You Should Do Right Now
Three steps that move you toward AI Act compliance and are genuinely good business practice:
- Map your AI agents. Go through your key business processes (customer service, finance, HR, operations) and identify where autonomous or semi-autonomous AI tools are making decisions without human review at the point of decision. This is your agent inventory and the starting point for everything else; a minimal sketch of what one can look like follows this list.
- Identify your human oversight points. For each agent in your inventory, confirm whether there is a meaningful human checkpoint in the workflow. If an agent is approving, denying, escalating, or flagging something without a person reviewing the output before it takes effect, that is a potential high-risk AI system under the Act.
- Ask your vendors for documentation. If you are using third-party SaaS tools with embedded agents, ask the vendor specifically: how does this system comply with Articles 9 and 13? What technical documentation can they provide? A vendor that cannot answer those questions is a risk you need to understand.
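As a starting point for step one, here is a minimal sketch of an agent inventory. The record type and example entries are hypothetical; the fields simply mirror the three steps above: what the agent is, whether a human checkpoint exists, and whether vendor documentation is on file.

```python
# A minimal agent inventory, one record per agent or embedded agentic
# feature. The schema and the example vendors are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str                  # what the agent does, in your own words
    vendor: str                # who supplies it ("internal" if you built it)
    business_process: str      # customer service, finance, HR, operations
    decision_type: str         # what it approves, denies, flags, or routes
    human_checkpoint: bool     # does a person review before it takes effect?
    vendor_docs_on_file: bool  # Article 9/13 documentation received?

inventory = [
    AgentRecord("candidate screener", "hr-saas-vendor", "HR",
                "scores and filters job applicants",
                human_checkpoint=False, vendor_docs_on_file=False),
    AgentRecord("ticket triage agent", "helpdesk-vendor", "customer service",
                "routes and escalates support tickets",
                human_checkpoint=True, vendor_docs_on_file=True),
]

# Surface the likely high-risk gaps first: consequential decisions with
# neither a human checkpoint nor vendor documentation.
for rec in inventory:
    if not rec.human_checkpoint and not rec.vendor_docs_on_file:
        print(f"REVIEW FIRST: {rec.name} ({rec.vendor}) - {rec.decision_type}")
```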
The Distinction That Matters
GDPR governs how you handle data. The EU AI Act governs how your AI systems make decisions, and for autonomous agents those decisions happen at machine speed and at scale, often without a human reviewing the output before it propagates. That is a qualitatively different compliance challenge.
For businesses that have been treating AI governance as a future concern, the arrival of enforcement in August 2026 is the moment to make it a present concern. The good news is that the first step — knowing what agents you are running — is also the step that gives you the most operational insight into your own business. Compliance and management discipline overlap here more than they do in most regulatory contexts.
This article is for informational purposes only and does not constitute legal advice.
Sources