The Deadline Everyone Forgot About
If you have been following EU AI regulation, you will have seen the conversation dominated by the high-risk AI compliance deadline: August 2026 under the original text, potentially pushed to December 2027 by the Omnibus. That debate is real and it matters.
But there is a different regulatory clock ticking that has received far less attention, and that clock has no delay attached to it. It runs under the GDPR, it started in 2018, and in March 2026 it became an enforcement priority for 25 European data protection authorities simultaneously.
On March 19, 2026, the European Data Protection Board launched its Coordinated Enforcement Framework (CEF) action for 2026. The focus this year: compliance with GDPR Articles 12, 13, and 14 — transparency and information obligations.[1] Twenty-five national Data Protection Authorities (DPAs) across Europe are now in the field, examining whether companies are telling individuals the truth about how their personal data is being processed by AI systems.
Findings will be shared and aggregated in the second half of 2026, with a consolidated EDPB report to follow.[2] That report will shape enforcement priorities across Europe for years.
Why AI Transparency Is a GDPR Issue, Not Just an AI Act Issue
Here is something many small businesses do not realise: the GDPR already requires you to tell people when their personal data is processed by AI.
Article 13 of the GDPR requires controllers to provide information at the time personal data is collected, including the existence of automated decision-making, the logic involved, and the significance and consequences of such processing.[1] Article 14 covers the same obligation where data was not collected directly from the individual. Article 12 requires that this information be provided in a concise, transparent, intelligible, and easily accessible form.
For any business using AI that processes personal data — whether it is an AI hiring tool, a customer scoring system, a chatbot that logs conversations, or an automated CRM workflow — your privacy notice must disclose: that AI is being used, what it is doing with the data, and what consequences the individual faces as a result. This has been the law since 2018. It is not a new requirement.
What is new is that, for the first time, this obligation is the subject of a pan-European coordinated enforcement action — with 25 DPAs simultaneously examining the same question, using the same framework, and producing a shared report that will set the baseline for how GDPR AI transparency gets enforced across the EU.[1]
What Regulators Are Actually Looking At
The CEF 2026 action is not a vague survey. Participating DPAs will contact controllers directly — in some cases through formal enforcement actions, in others through fact-finding exercises — and examine how their privacy notices address AI processing.[1]
The specific checklist regulators are applying to AI-related transparency includes:
- Does the privacy notice identify when AI is used? Not just whether data is "processed automatically" — does it explicitly say "we use AI"?
- Does it describe the purpose and logic of the AI processing? The GDPR requires disclosure of the logic involved in automated processing. A generic reference to "automated decision-making" is likely insufficient under the standard the EDPB is applying.
- Does it explain the consequences for the individual? If AI makes a decision that affects a person's job application, credit access, or service eligibility, the individual must be told what happens as a result — and what rights they have to challenge it.
- Is the information provided at the right time? For directly collected data, disclosure must happen at the point of collection. For data obtained indirectly — for example, AI-sourced data about individuals from third-party sources — a separate notification obligation applies under Article 14.
The EDPB explicitly frames this as a quality-of-notice question: not just whether companies have a privacy policy, but whether that policy genuinely informs individuals about AI processing in the way the GDPR requires.[2]
Why This Matters More Than It Sounds
For small and medium businesses using AI, there is a natural assumption that GDPR compliance is about having a privacy policy — a checkbox exercise handled when the company was founded or when the website was built. That assumption is increasingly wrong.
A privacy notice written in 2019 for a business that did not then use AI is almost certainly not GDPR-compliant today for that same business if it now runs AI tools that process personal data. The obligation to disclose AI processing is not a one-time event — it is an ongoing duty that updates as your data processing practices evolve.
The EDPB's own report acknowledges that GDPR compliance must evolve as AI use evolves. The Helsinki Statement — adopted in July 2025 and referenced in the April 2026 annual report — commits the Board to practical tools and templates to help organisations navigate this complexity.[2] But those tools are still in development. In the meantime, enforcement is moving ahead of guidance.
There is also a structural tension that the CEF 2026 action will inevitably surface: the GDPR's transparency requirements and trade secret protections can pull in opposite directions. A company cannot hide behind IP rights to avoid disclosing that AI is being used or how it affects individuals. But there is genuine uncertainty about how much technical detail about AI systems companies are required to share publicly. The EDPB's consolidated report, expected in H2 2026, should begin to answer that question.
The AI Act Overlap You Cannot Ignore
The GDPR transparency obligations and the EU AI Act's Article 50 disclosure requirements overlap significantly in practice. Both require informing individuals when they are interacting with an AI system. Both require disclosing the purpose and consequences of AI processing. Both require some explanation of the system's logic or functioning.[3]
For SMEs, this overlap is a practical opportunity. If you are already working on your AI Act transparency disclosures — your chatbot disclosures, your AI content labels, your user-facing AI notices — those same disclosures can often satisfy GDPR Articles 13 and 14 requirements if drafted carefully. One disclosure document, properly structured, can cover both regulations.
The reverse is also true: if you have a GDPR-compliant privacy notice that properly discloses AI processing, you are likely most of the way to meeting Article 50 of the AI Act for the same systems. The key is to audit both sets of requirements together, not in isolation.
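To make that overlap concrete, here is a minimal sketch of how a single disclosure record could be structured to feed both notices at once. The field names and the rendering wording are our own illustration, not terms prescribed by the GDPR or the AI Act, so treat it as a drafting aid rather than a compliance template.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """One AI system's user-facing disclosure. A single record like this can
    feed both the GDPR privacy notice (Articles 13/14) and the AI Act
    Article 50 notice. Field names are illustrative, not prescribed."""
    system_name: str       # e.g. "CV screening assistant"
    purpose: str           # what the system does with personal data
    logic_summary: str     # plain-language description of the logic involved
    consequences: str      # effect on the individual (GDPR Article 13(2)(f))
    data_source: str       # "collected directly" or a named third-party source (Article 14)
    how_to_challenge: str  # contact point / route to human review

def render_notice(d: AIDisclosure) -> str:
    """Render one plain-language paragraph usable in either notice."""
    return (
        f"{d.system_name}: we use AI to {d.purpose}. "
        f"How it works: {d.logic_summary}. "
        f"What this means for you: {d.consequences}. "
        f"To contest an outcome or ask for human review: {d.how_to_challenge}."
    )
```

The benefit of keeping one structured record per AI system is consistency: when the logic summary or the consequences change, you update one place and both the privacy notice and the AI Act disclosure stay aligned.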
What You Should Do Right Now
Three specific steps that address both the CEF 2026 action and your ongoing GDPR obligations:
- Audit your privacy notice for AI disclosures. Go through every AI tool you run that processes personal data — hiring tools, CRM AI features, customer service chatbots, analytics systems, anything that scores or profiles users. Check whether your privacy notice mentions each of them explicitly. If it says anything like "we use automated decision-making" without naming the specific AI tool or explaining its consequences, that disclosure is almost certainly insufficient under the standard CEF 2026 is applying.
- Check your Article 50 AI Act disclosures. If you are subject to Article 50 — if you deploy a chatbot, an AI content generation tool, or an emotion recognition system — your AI Act disclosure and your GDPR privacy notice should be consistent and complementary. Use the AI Act disclosure work to inform your GDPR notice updates, and vice versa.
- Document your AI processing inventory now. You cannot disclose AI processing accurately if you do not know what AI is running in your business. The AI inventory you build for the AI Act is the same foundation you need for GDPR Articles 13 and 14 compliance. One exercise, two regulatory benefits; one way to structure that inventory is sketched just after this list.
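For teams that want something more structured than a spreadsheet, the sketch below shows one illustrative way to record each AI tool and flag gaps against the questions regulators are asking. The tool names and field names are hypothetical, and whether a given notice actually satisfies Articles 12 to 14 remains a legal judgement rather than something a script can decide.

```python
# Minimal sketch of an AI processing inventory with a gap check against the
# disclosure questions discussed above. Entries and field names are illustrative.
INVENTORY = [
    {
        "tool": "CV screening assistant",
        "processes_personal_data": True,
        "named_in_privacy_notice": True,
        "logic_described": False,        # only a generic "automated decision-making" mention
        "consequences_explained": False,
        "indirect_data_sources": False,  # if True, the separate Article 14 notice applies
    },
    {
        "tool": "Customer support chatbot",
        "processes_personal_data": True,
        "named_in_privacy_notice": False,
        "logic_described": False,
        "consequences_explained": False,
        "indirect_data_sources": False,
    },
]

CHECKS = ["named_in_privacy_notice", "logic_described", "consequences_explained"]

for entry in INVENTORY:
    if not entry["processes_personal_data"]:
        continue  # outside the scope of this GDPR exercise
    gaps = [check for check in CHECKS if not entry[check]]
    if entry["indirect_data_sources"]:
        gaps.append("Article 14 notice for indirectly obtained data")
    if gaps:
        print(f"{entry['tool']}: disclosure gaps -> {', '.join(gaps)}")
```

Running this against a real inventory gives you a short, concrete to-do list per tool, which is a useful starting point if a DPA fact-finding request arrives.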
The Closing Window Before H2 2026
The coordinated enforcement action is already running. DPAs are in the field, examining privacy notices, and in some cases already contacting controllers with questions. The consolidated EDPB report — which will set the enforcement standard for all 25 participating authorities — is expected in the second half of 2026.[1]
That report will not just describe what it found. In the EDPB's coordinated enforcement framework, findings from one year's action shape the next year's priorities.[4] Whatever CEF 2026 identifies as the most common gaps in AI transparency compliance will directly inform how DPAs allocate enforcement resources in 2027 and beyond.
For SMEs: the AI Act's high-risk deadline debate is a real and important conversation. But GDPR's transparency obligations on AI are not a future concern: 25 regulators are examining privacy notices right now, and the consolidated report lands in H2 2026. Update your disclosures now, on your own terms, rather than waiting for that report to define the gaps for you.
This article is for informational purposes only and does not constitute legal advice.
Sources
- [1] EDPB — CEF 2026: Coordinated Enforcement Action on Transparency and Information Obligations (March 19, 2026)
- [2] NicFab Newsletter #16 — EDPB Annual Report 2025: CEF 2026 and GDPR AI transparency (April 14, 2026)
- [3] EU AI Act — Article 50: Transparency Obligations for Certain AI Systems
- [4] EDPB — CEF 2024 report: Right of access
Know your AI risk level in 10 minutes
Our free audit walks you through the exact questions to classify your AI systems and identify what you need to do before August 2, 2026.
Start the free audit →
⚠️ Not legal advice; for guidance purposes only