Policy Updates
4 min read · 12 April 2026

The EU Just Asked: Is ChatGPT a "Very Large Online Platform"? The Answer Could Affect Millions of European Businesses.

On April 11, 2026, the European Commission confirmed it is assessing whether ChatGPT has crossed the 45 million DSA user threshold for "very large online platform" designation. If confirmed, OpenAI faces substantial new obligations, and the downstream effects on European SME users of ChatGPT are worth understanding.

The Number Nobody Expected

On Friday, April 11, 2026, the European Commission confirmed it is actively assessing whether OpenAI's ChatGPT has crossed a specific regulatory threshold under the EU's Digital Services Act (DSA).[3] The trigger: OpenAI itself published user numbers showing ChatGPT above the 45 million monthly active users required for a platform to be classified as a "Very Large Online Platform" (VLOP) under the DSA.[1]

Commission spokesman Thomas Regnier confirmed that "OpenAI has published user numbers for ChatGPT above the 45 million DSA threshold for designation" and said the Commission services were "currently assessing this information."[1] Regnier also noted that large language models (LLMs) could "potentially" be in the scope of the DSA, a significant legal observation given that the DSA was not designed with AI models in mind.[3]

This is a new and fast-moving story. And for the millions of European SMEs using ChatGPT — or building products on top of it — the implications extend well beyond OpenAI's regulatory obligations.

What the DSA Threshold Actually Means

The DSA's "Very Large Online Platform" designation applies to platforms with 45 million or more monthly active users in the EU. Once designated, a platform faces a substantially heavier regulatory burden — not because it did anything wrong, but simply because its reach is considered systemic.[2]

Those obligations include:

  • Systemic risk assessments: Designated platforms must conduct and document annual assessments of the societal risks their services pose, from disinformation to fundamental rights impacts to electoral manipulation.
  • Independent auditing: VLOPs must submit to annual independent audits of their compliance systems, risk management practices, and algorithmic accountability.
  • Crisis response protocols: Platforms must have structures in place to respond to serious crises such as natural disasters or public health emergencies, including cooperating with Commission requests.
  • Data access for researchers: Designated platforms must give vetted researchers access to data to study systemic risks — effectively opening a channel for academic scrutiny of how the platform operates.
  • Enhanced content moderation obligations: VLOPs must have clear terms of service and enforce them consistently, with reporting obligations around illegal content.

These are obligations designed for social media platforms and search engines — not AI model providers. That's part of why the Commission's observation that LLMs could "potentially" fall within scope is significant. The DSA framework wasn't built for this case.

Why This Is More Complicated Than It Looks

A VLOP designation for ChatGPT raises a legal and conceptual puzzle that the EU is essentially working out in real time. The DSA governs platforms, meaning services that host and moderate content from third parties. ChatGPT is a model: it generates content rather than hosting user content in the way Facebook or YouTube does.

Commission spokesman Regnier's acknowledgment that LLMs are "potentially" in scope suggests the legal basis for treating ChatGPT as a VLOP is not straightforward. There are at least two ways the Commission could be thinking about this:

First: ChatGPT could be treated as a platform because its interface includes user-generated prompts and model outputs that could be considered "hosted content." This is a broad reading, but one that would bring generative AI tools within the DSA's scope.

Second: The Commission could be developing a parallel interpretation — that AI models with sufficiently large user bases pose systemic risks analogous to those the DSA was designed to address, even if the legal mechanism is novel.

Neither interpretation is settled. The assessment process — which is ongoing — will test whether the DSA's VLOP framework can practically apply to a conversational AI product, or whether new legal instruments are needed to fill the gap.

What This Means for SMEs

For the small and medium businesses that use ChatGPT — as a tool, an API integration, a content generator — the VLOP question might seem like a problem for OpenAI, not for you. That view is partly correct, but incomplete.

A VLOP designation for your AI vendor creates downstream compliance pressure. If ChatGPT is formally designated, OpenAI faces significant new obligations and compliance costs, and those costs typically flow into pricing. Businesses on ChatGPT Enterprise or API plans have already seen price sensitivity around regulatory changes; a VLOP designation would reinforce that pressure.

It signals where AI regulation is heading for platforms. The DSA is already in enforcement mode. The AI Act is mid-transition. The Commission's willingness to test whether the DSA covers LLMs tells you something about how regulators view AI service providers: not just as products to be regulated, but as systemic platforms with societal effects. That framing will influence how the AI Act's own platform-level obligations get interpreted.

It could affect your own AI Act disclosures. If you're a deployer using ChatGPT in a customer-facing product, the DSA obligations on OpenAI don't directly affect your Article 50 transparency requirements. But if OpenAI's compliance architecture changes, for example through new content moderation or system management processes, the documentation it provides to downstream businesses will likely change too. Watch for updated technical documentation from OpenAI if this designation is confirmed.

The Intersection of Two Regulators

What makes this story particularly worth watching is that it sits at the intersection of two distinct EU regulatory frameworks: the Digital Services Act (platform governance) and the EU AI Act (AI-specific rules). The Commission is processing this question through the DSA lens, but the implications for AI Act compliance and the AI Office's own supervisory jurisdiction are real.

The European AI Office — established in November 2025 — has cross-member state enforcement authority for general-purpose AI models. If ChatGPT's VLOP assessment leads to scrutiny of the model's capabilities, behaviour, or systemic risk profile, there is a plausible argument that the AI Office has jurisdiction too. Whether the two frameworks would be applied in parallel or sequentially is an open question.

For SMEs: the regulatory environment for AI is thickening. The DSA, the AI Act, and the evolving question of which framework covers which type of AI service are all moving at the same time. Your compliance foundation — knowing which AI tools you use, how they function, and what documentation your vendors provide — matters more, not less, as this landscape gets more complex.

This article is for informational purposes only and does not constitute legal advice.

Know your EU AI Act risk level in 10 minutes

Our free audit classifies every AI system you use and tells you exactly what to do before August 2, 2026.

Start Free Audit →