What Changes on August 2, 2026
The General-Purpose AI obligations in Chapter V of the AI Act have been in effect since August 2, 2025 for providers. But the AI Office — the Commission body charged with supervising GPAI — has been operating with one arm tied behind its back. From August 2, 2026, that changes.[1]
From that date, the AI Office gains the full suite of enforcement powers the AI Act grants it: the power to request documentation from GPAI providers under Article 91, to conduct formal evaluations under Article 92, to issue compliance instructions under Article 93, and — critically — to impose fines under Article 101.[2] Before August 2, 2026, the AI Office can ask. From that date, it can demand.
The timeline matters because GPAI model providers that entered the market before August 2025 have until August 2, 2027 to come into full compliance with Chapter V. That two-year window is running. But the AI Office's enforcement powers — the ability to actually compel compliance and penalise non-compliance — become active on August 2, 2026, a full year before that deadline expires. Providers operating without a compliance plan are already exposed.
What the AI Office Can Actually Do From August 2
The powers being unlocked are not cosmetic.[2]
- Documentation requests (Article 91): The AI Office can require any GPAI provider to submit technical documentation — including training data summaries, model architecture descriptions, testing results, and safety evaluation reports — within a set timeframe. Failure to respond or providing incomplete documentation triggers penalties.
- Formal evaluations (Article 92): The AI Office can conduct its own testing of GPAI models to verify reported capabilities and safety characteristics. If a provider has claimed a model does not cross the systemic risk threshold, the AI Office can evaluate it independently and challenge that claim.
- Compliance measures (Article 93): If the AI Office identifies non-compliance, it can require the provider to take specific corrective actions — including restricting the model's availability, mandating risk mitigation measures, or ordering withdrawal from the EU market.
- Fines (Article 101): For GPAI-specific violations, the AI Office can impose fines of up to 3% of total worldwide annual turnover, or EUR 15 million, whichever is higher. For context, the AI Act's steepest penalties, up to EUR 35 million or 7% of global annual turnover for prohibited practices under Article 5, sit in Article 99 and are enforced at national level rather than by the AI Office.
These are not hypothetical powers sitting in a legal text. They activate on August 2, 2026 — which means the six-month countdown to the AI Office's full enforcement mode is already running.
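The "whichever is higher" mechanics of the Article 101 fine ceiling are easy to misread, so here is a minimal sketch of how the two prongs interact. The turnover figures are illustrative assumptions, not any real company's numbers.

```python
# Sketch of the Article 101 ceiling for GPAI-specific fines:
# 3% of total worldwide annual turnover or EUR 15 million,
# whichever is higher. Turnover inputs below are hypothetical.

GPAI_FINE_RATE = 0.03          # 3% of worldwide annual turnover
GPAI_FINE_FLOOR_EUR = 15_000_000  # EUR 15 million alternative prong

def gpai_fine_ceiling(worldwide_turnover_eur: float) -> float:
    """Maximum fine under Article 101: the higher of the two prongs."""
    return max(GPAI_FINE_RATE * worldwide_turnover_eur, GPAI_FINE_FLOOR_EUR)

# At EUR 200m turnover, 3% is EUR 6m, so the EUR 15m floor governs.
print(f"EUR 200m turnover -> ceiling EUR {gpai_fine_ceiling(200e6):,.0f}")
# At EUR 2bn turnover, 3% is EUR 60m, which exceeds the floor.
print(f"EUR 2bn turnover  -> ceiling EUR {gpai_fine_ceiling(2e9):,.0f}")
```

The point for smaller providers is that the EUR 15 million floor, not the 3% rate, sets the ceiling until turnover passes roughly EUR 500 million.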
Who Is Most Exposed
GPAI model providers — companies that build and release foundation models that others build on top of — face the most direct exposure from August 2. But the downstream effect for businesses using those models is real and immediate.[3]
The AI Act's GPAI obligations include adversarial testing documentation, incident reporting to the AI Office, cybersecurity measures for models above the 10²⁵ FLOPs systemic risk threshold, and registration in the EU database for high-capability models. These obligations attach to the provider, not to the individual business using the model.
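The systemic risk presumption is defined in cumulative training compute: 10²⁵ floating-point operations. A rough way to see which models plausibly cross it is the common ~6 × parameters × training-tokens estimate from the scaling-law literature; the model sizes and token counts below are illustrative assumptions, not any provider's disclosed figures.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP
# systemic-risk presumption, using the common ~6 * N * D heuristic
# (about 6 FLOPs per parameter per training token). All model
# figures below are hypothetical illustrations.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def training_flop_estimate(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D rule of thumb."""
    return 6 * params * tokens

# Hypothetical models: (parameter count, training tokens)
candidates = {
    "small-7B":   (7e9, 2e12),     # ~8.4e22 FLOP, well below threshold
    "large-400B": (4e11, 1.5e13),  # ~3.6e25 FLOP, above threshold
}

for name, (n, d) in candidates.items():
    flop = training_flop_estimate(n, d)
    above = flop >= SYSTEMIC_RISK_THRESHOLD_FLOP
    print(f"{name}: ~{flop:.1e} FLOP, above threshold: {above}")
```

The takeaway is that the threshold sits well above today's small open-source models but within reach of the largest frontier training runs, which is why it functions as a dividing line between baseline and systemic-risk obligations.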
For SMEs using GPT-4-class models, Claude, Gemini, or open-source models above the systemic risk threshold in their products: the compliance chain that protects you depends on your model provider meeting its Chapter V obligations. If the AI Office starts enforcing against model providers from August 2, providers that are not compliant may face orders to restrict or withdraw their EU availability. That directly affects which models you can continue to use in the EU market.
The companies most exposed heading into August 2 are providers who have not yet registered in the EU database, have not submitted their technical documentation to the AI Office, or have models above the systemic risk threshold without documented adversarial testing. If you are using a model from a provider that has not completed these steps, the AI Office's enforcement action against the provider will have direct consequences for your AI-enabled products.
The GPAI Code of Practice: Why Providers Are Paying Attention
The General-Purpose AI Code of Practice took effect on August 2, 2025 as a voluntary compliance route. Providers that follow the Code and can demonstrate adherence benefit from what the Commission has described as "focused AI Office enforcement attention", meaning the AI Office treats Code-adherent providers as a lower enforcement priority, all else being equal.[1]
This is worth understanding for SMEs evaluating AI tools. A vendor that has actively engaged with the GPAI Code of Practice — conducting adversarial testing, filing incident reports, maintaining technical documentation — demonstrates a level of regulatory engagement that a non-participating vendor does not. The Code is voluntary, but the AI Office's enforcement attention is real, and the distinction between compliant and non-compliant providers will sharpen from August 2.
What to Watch for Between Now and August
Three things to track as the enforcement date approaches:
- AI Office documentation requests: Once enforcement powers activate, expect the AI Office to begin requesting documentation from the largest GPAI providers relatively quickly — the providers that are already on the AI Office's radar. Watch for any public statements or press coverage about which providers the AI Office contacts first.
- EU database registrations: The EU AI Act database for GPAI models (managed by the AI Office) is the public record of which models have been registered and their compliance status. If a model you rely on is not registered, that is a signal worth investigating — the provider may not have engaged with the compliance process at all.
- Omnibus interaction: The Digital Omnibus, if it concludes before August 2, 2026, could affect how GPAI obligations are framed or timed. Watch whether the final agreement modifies Chapter V in ways that affect the AI Office's enforcement scope or the systemic risk threshold definition.[3]
The Practical Bottom Line for SMEs
If you are building on top of GPAI models — using them as the foundation for your products, services, or internal workflows — the AI Office's activation on August 2 affects you in two ways.
First: if your model provider is non-compliant with Chapter V and the AI Office takes enforcement action against them, your access to that model in the EU may be disrupted. The compliance of your suppliers is now a supply chain risk.
Second: the GPAI obligations that providers must meet include incident reporting. If a serious incident occurs with a model you are using — a safety failure, a demonstrated systemic risk event — the provider is required to report it to the AI Office. That reporting chain is the mechanism that gives the AI Office visibility over model behaviour. It is also the mechanism that should eventually flow back to you as downstream notification. Whether that notification chain functions smoothly in practice is one of the open questions as the enforcement date approaches.
Six months is not a long time. If you are using GPAI models above the systemic risk threshold in your business, now is the moment to audit which providers you rely on, whether they have engaged with the GPAI Code of Practice, and whether they are registered in the EU database. The AI Office's enforcement powers are about to become real — and the compliance posture of your model providers will matter more than it has to date.
This article is for informational purposes only and does not constitute legal advice.
Sources