Policy Updates
4 min read · 17 April 2026

Your AI Agents Are Generating Decisions Right Now. Can You Prove It? Article 12 and the Logging Obligation SMEs Can't Afford to Ignore

A fresh technical analysis published April 16, 2026 breaks down exactly what the EU AI Act requires from AI agent logging — and the answer is more specific and more demanding than most businesses expect. Automatic recording, tamper-evidence, six-month retention, and no finalised technical standard yet. Here is what every SME running AI agents needs to understand before August.

The Obligation Nobody Is Talking About

Of all the EU AI Act requirements landing on small and medium businesses this year, the one that gets the least attention is also one of the most technically specific: the obligation to generate and retain automatic logs of your AI systems' decisions. It is Article 12 of the AI Act, and if you are running an AI agent — a semi-autonomous system that makes calls, takes actions, or triggers workflows without human review at every step — this obligation applies to you from August 2, 2026.

A detailed technical analysis published by Help Net Security on April 16, 2026 cuts through the confusion around what Article 12 actually requires.[1] The article is worth reading in full if you have a technical background, but the practical implications for SMEs deserve a plain-language treatment — because most small businesses running AI agents today almost certainly do not satisfy these requirements, and the August deadline is less than four months away.

What Article 12 Actually Requires

The regulation says high-risk AI systems "shall technically allow for the automatic recording of events (logs) over the lifetime of the system."[2] Two words in that sentence do the heavy lifting.

Automatic means the system generates logs on its own. You cannot satisfy this requirement with a spreadsheet that someone fills in manually after the fact. If a regulator asks to see your AI agent's decision log for the past month, and you produce a PDF that someone typed up from memory, that is not Article 12-compliant documentation. The system has to record itself.

Lifetime means from the moment the system is deployed to the moment it is decommissioned — not just the current release, not just when someone remembers to turn on logging. If you decommission a high-risk AI agent and a regulator asks about a decision it made two years earlier, you need to be able to produce that log, and to retain it for at least six months after the system goes offline.[1]

The logs must cover three categories of information: situations in which the system may present a risk or has undergone a substantial modification; data needed for post-market monitoring; and data for operational monitoring by deployers. The EU AI Act does not prescribe a specific format, log level, or field structure — only that these three purposes are served.
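Because the Act prescribes purposes rather than a schema, what a compliant entry looks like is up to you. A minimal sketch of one automatically generated, structured record — field names here are illustrative, not anything Article 12 mandates — might look like this:

```python
import json
import datetime

def make_log_entry(event_type: str, payload: dict) -> dict:
    """Build one structured log record.

    Field names are illustrative: the AI Act prescribes the purposes
    the log must serve, not a schema. 'event_type' can distinguish
    routine decisions from risk flags or substantial modifications.
    """
    return {
        # When and what happened: the basis for any later audit
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "decision", "risk_flag", "substantial_modification"
        "payload": payload,        # inputs, outputs, model/version identifiers
    }

# One entry per consequential action, serialised as a JSON line
entry = make_log_entry("decision", {"model": "agent-v3", "action": "approve", "score": 0.91})
print(json.dumps(entry))
```

The point of the sketch is that the record is emitted by the system at the moment of the action, in a machine-readable form you could later filter by event type or time window.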

The AI Agent Problem Nobody Expected

Here is where it gets interesting for businesses running AI agents. Standard application logging captures what an agent does perfectly well: a tool call, an LLM response, a final output. Your existing application logs probably already record those events.

The problem is not capturing the data. The problem is proving the log has not been altered.

The Help Net Security analysis puts it plainly: "A regulator asks you to prove the log wasn't modified. Application logs live on infrastructure someone controls. They can be edited or replaced without anyone noticing."[1] Article 12 does not use the word "tamper-proof." But if your logs can be silently altered and you cannot demonstrate otherwise, their evidentiary value is zero — and a regulator who asks to see your AI agent's decision history is going to form a view about whether what you show them is genuine.

This is the technical challenge that has compliance engineers paying close attention to cryptographic signing of AI logs: signing each agent action with a key the agent does not hold, chaining each signature to the previous one, and storing the receipt somewhere the agent cannot reach. Change one entry and the chain breaks visibly. Whether a company uses post-quantum signatures or another scheme matters less than whether the underlying principle is implemented: the signing key lives outside the agent's trust boundary, every action gets a receipt, and the receipts form a verifiable chain.[1]
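The chaining principle itself is simple enough to sketch in a few lines. The example below uses an HMAC over each entry plus the previous signature — a stand-in for whatever signature scheme a real deployment chooses (the analysis mentions post-quantum signatures; the principle is the same). The key variable name and entry layout are illustrative; the one non-negotiable, as noted above, is that the key lives outside the agent's trust boundary:

```python
import hashlib
import hmac
import json

# Illustrative only: in practice this key is held by a separate
# logging service or HSM, never by the agent itself.
SIGNING_KEY = b"held-outside-the-agent-trust-boundary"

def sign_entry(entry: dict, prev_signature: str) -> str:
    """Chain each entry to the previous one: the MAC covers both the
    entry and the prior signature, so editing any entry invalidates
    every signature after it."""
    message = prev_signature.encode() + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_chain(entries, signatures) -> bool:
    """Recompute the chain; any silent edit surfaces as a mismatch."""
    prev = ""
    for entry, sig in zip(entries, signatures):
        if not hmac.compare_digest(sign_entry(entry, prev), sig):
            return False
        prev = sig
    return True

# Sign two actions as they happen
entries = [{"action": "approve", "id": 1}, {"action": "flag", "id": 2}]
sigs, prev = [], ""
for e in entries:
    prev = sign_entry(e, prev)
    sigs.append(prev)

assert verify_chain(entries, sigs)       # untouched log verifies
entries[0]["action"] = "deny"            # tamper with one entry...
assert not verify_chain(entries, sigs)   # ...and the chain breaks visibly
```

A real system would also store the signatures (the "receipts") on infrastructure the agent cannot write to; the sketch only shows why a chained signature makes tampering visible rather than silent.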

The Six-Month Rule and Sector Variations

Articles 19 and 26 set a six-month minimum for log retention. If you are a financial services company, you can fold AI logs into your existing regulatory paperwork — MiFID II, DORA, and similar frameworks already mandate record-keeping that covers much of what Article 12 requires. For everyone else, six months is the floor, though sector-specific regulations may extend that window.[1]

This matters practically. When you evaluate AI tools from vendors, you need to ask not just whether the system generates logs, but whether it generates logs that survive six months of storage in a form you can actually read — and whether you have a credible story for how those logs are protected against post-hoc modification.

The Standards Gap Is Real

Two draft technical standards are relevant here, but neither has been finalised: prEN 18229-1, covering AI logging and human oversight, and ISO/IEC DIS 24970, focused on AI system logging.[1] Neither provides a definitive technical specification that companies can simply follow.

This creates a familiar regulatory paradox: companies are building to outcomes defined by the law without a finalised standard that tells them precisely how to achieve those outcomes. The Help Net Security analysis frames this pragmatically: "Teams that get logging right now will be ahead when the standards land. Teams that wait risk retrofitting under pressure."[1]

The August 2026 Deadline Is Real — For Now

The Digital Omnibus negotiations are underway, with trilogues ongoing and a political agreement expected around April 28. The Commission proposed delaying the high-risk system obligations — including Article 12 — possibly to December 2027. But nothing has passed into law yet.[1]

The August 2, 2026 date is still the enforceable deadline unless and until the Omnibus legislation is adopted. The European Parliament Think Tank reported in March 2026 that the enforcement architecture is not yet fully in place — only 8 of 27 member states had designated their single points of contact for AI Act enforcement by mid-March. But the law applies regardless of whether member states have their infrastructure ready. SMEs cannot use incomplete enforcement preparation as a reason to delay compliance.

Penalties for non-compliance with Annex III obligations — which include the logging requirements — are capped at EUR 15 million or 3% of global annual turnover, whichever is higher.[3] For SMEs and start-ups, Article 99 applies the lower of those two amounts instead, so a small company with EUR 8 million in revenue faces a maximum of EUR 240,000. Article 99 also requires penalties to be proportionate and dissuasive, taking into account company size and economic viability — so fines in practice should sit below even that adjusted ceiling.[3]
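The cap arithmetic is worth making concrete. A small helper — treating the SME case as the lower of the two amounts, which is how the EUR 240,000 figure above is derived — looks like this:

```python
def max_fine_eur(turnover_eur: float, is_sme: bool) -> float:
    """Annex III penalty cap sketch: EUR 15 million or 3% of global
    annual turnover. The general rule takes the higher of the two;
    for SMEs the lower amount applies (Article 99)."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(max_fine_eur(8_000_000, is_sme=True))  # SME with EUR 8M revenue -> 240000.0
```

Note this is the statutory ceiling, not a prediction of actual fines, which Article 99 requires to be proportionate to the circumstances.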

Meanwhile, GDPR enforcement runs in parallel. AI systems that process personal data — which includes most agents operating in hiring, CRM, customer service, or financial contexts — may face additional penalties under GDPR for inadequate logging of automated decisions. A single compliance gap can trigger simultaneous action under two different regulations.[4]

The Three Questions to Answer Right Now

Whether you are building AI agent logging in-house or evaluating vendor tools, there are three questions you need to be able to answer before August:[1]

  • Can your system generate logs automatically at every decision point? Not just when someone remembers to enable debugging. Every consequential AI action — a recommendation, a classification, an approval, a flag — needs a log entry generated by the system itself.
  • Can those logs survive tampering? Can you demonstrate to a regulator that the log you are showing them is the log that was generated at the time of the decision? If the answer requires trust rather than evidence, you have a gap.
  • Can you keep them for six months in a format a regulator can read? Storage is the easy part. The question is whether you have a retrieval system that can produce a readable, auditable log chain on demand — and whether that storage is on infrastructure the AI agent itself cannot access.
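The third question — producing a readable, auditable slice of the log on demand — is mostly a filtering problem once the entries exist. A minimal sketch, assuming a JSON-lines log whose entries carry an ISO-8601 `timestamp` field (an illustrative layout, not one the Act prescribes):

```python
import datetime
import json

# Six-month retention floor from Articles 19 and 26 (183 days used
# here as a conservative approximation of six months)
RETENTION = datetime.timedelta(days=183)

def export_audit_window(log_lines, start, end) -> str:
    """Produce a human-readable slice of the log for a regulator:
    filter entries by timestamp and pretty-print them."""
    selected = []
    for line in log_lines:
        record = json.loads(line)
        ts = datetime.datetime.fromisoformat(record["timestamp"])
        if start <= ts <= end:
            selected.append(json.dumps(record, indent=2, sort_keys=True))
    return "\n".join(selected)

# Usage: pull everything from the last retention window
utc = datetime.timezone.utc
log_lines = [
    json.dumps({"timestamp": "2026-03-01T10:00:00+00:00", "action": "approve"}),
    json.dumps({"timestamp": "2025-01-01T10:00:00+00:00", "action": "flag"}),
]
now = datetime.datetime(2026, 4, 1, tzinfo=utc)
report = export_audit_window(log_lines, now - RETENTION, now)
```

The storage location matters as much as the code: per the second question above, the file this reads from should live on infrastructure the agent has no credentials for.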

The Practical Starting Point

If all of this sounds like enterprise-level infrastructure engineering, it is worth being precise about where the burden actually falls. The AI Act's logging obligations apply to high-risk AI systems — which includes agents used in hiring, credit scoring, insurance pricing, healthcare triage, and similar consequential decisions. If your AI agent is making or materially influencing decisions in one of those domains, the obligations apply to you regardless of company size.

If your agent is operating in a lower-risk context — a chatbot, an internal search tool, a drafting assistant — Article 12 does not apply. But it is worth noting that the question of what constitutes a "high-risk" context is not always obvious. An agent that routes customer service tickets based on AI scoring is operating in a context that could be considered consequential. Get a clear read on your risk classification before assuming you are exempt.

The broader point from the Help Net Security analysis stands regardless of your current state of compliance: the companies that take logging seriously now — building the infrastructure, establishing the key management practices, getting the chain of custody right — will be in a fundamentally better position when the standards are finalised and enforcement becomes routine. The companies that wait will be retrofitting under time pressure, which is never a good position to be in when regulators are watching.

This article is for informational purposes only and does not constitute legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before August 2, 2026.

Start Free Audit →

⚠️ Not legal advice — for guidance purposes only