When ChatGPT Becomes A Search Engine, Europe Reaches For The Platform-Sized Hammer

The EU is weighing whether to treat ChatGPT Search like a Very Large Online Search Engine under the DSA. Congratulations, we’ve invented Google again, but with vibes and liability.

OpenAI says ChatGPT Search averaged 120.4 million monthly active recipients in the EU, above the DSA’s 45 million threshold, and the European Commission is assessing whether it should designate the service as a Very Large Online Search Engine.

The move, in one sentence

Europe is deciding whether your cheerful assistant-with-a-search-box is actually a platform with systemic-risk obligations, because once you intermediate information for tens of millions of people, the law stops caring how cute the UX feels.

What happened (and what we actually know)

According to reporting that cites a European Commission spokesperson, the Commission is analyzing OpenAI’s user disclosures to determine whether ChatGPT’s search feature should be designated under the Digital Services Act (DSA) as a “Very Large Online Search Engine” (VLOSE). The key trigger is scale: 45 million average monthly active recipients in the EU is the threshold for the DSA’s heavier-duty obligations.

OpenAI’s own disclosure, as cited in that reporting, puts ChatGPT Search at an average of 120.4 million monthly active recipients in the EU over a six-month period ending September 2025. If the Commission designates it as a VLOSE, the service would face requirements such as risk assessments, mitigation measures, external audits, increased transparency, and data access obligations for regulators and researchers.
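The designation trigger itself is just arithmetic over a reporting window. A minimal sketch, assuming only the two figures from the article (the DSA's 45 million threshold and OpenAI's disclosed 120.4 million average); the individual monthly numbers below are invented for illustration, chosen so they average to the disclosed figure:

```python
# Hypothetical sketch of the DSA scale trigger. Only the 45M threshold
# and the 120.4M average come from the article; monthly figures are invented.

DSA_THRESHOLD = 45_000_000  # 45M average monthly active recipients in the EU

def average_monthly_recipients(monthly_counts: list[int]) -> float:
    """Average monthly active recipients over the reporting window."""
    return sum(monthly_counts) / len(monthly_counts)

def meets_designation_threshold(monthly_counts: list[int]) -> bool:
    """True if the windowed average crosses the DSA's VLOSE threshold."""
    return average_monthly_recipients(monthly_counts) >= DSA_THRESHOLD

# Illustrative six-month window ending September 2025, constructed to
# average 120.4M (the disclosed EU figure for ChatGPT Search).
window = [110_000_000, 114_000_000, 118_000_000,
          122_000_000, 126_000_000, 132_400_000]

avg = average_monthly_recipients(window)
print(f"average={avg:,.0f}, over threshold={meets_designation_threshold(window)}")
```

The point of the sketch: at 120.4 million, the service isn't near the line, it is roughly 2.7x over it, which is why the open question is scope of the designation, not whether the numbers qualify.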

Why this matters (it’s not a paperwork story)

This is the moment the EU turns “assistants” into “distribution infrastructure.” The DSA is written for platforms that shape what people see, and search engines are the canonical example. If ChatGPT Search is treated as a search engine at EU scale, the compliance posture becomes part of the product, not an afterthought.

And yes, that changes the competitive game. Compliance is an overhead tax that disproportionately hurts smaller entrants, which means the EU’s “tough rules” can still lead to consolidation. The joke is that regulation often fights monopoly power by making sure only monopolies can afford to comply. Efficiency in humiliation, fully automated.

The real mechanism: designation turns UX into liability

Once you’re a VLOSE, “we’re just summarizing” is no longer a cute excuse. The DSA’s logic is that systemic risks (misinformation, illegal content amplification, manipulation, ad transparency failures) need systematic controls. For a search-like product, that implies at least four pressure points:

  • Ranking and retrieval transparency: You don’t have to reveal trade secrets, but you do have to show the shape of the machine. This is awkward when your machine is partly probabilistic and partly “it depends what the user asked.”
  • Risk assessment cadence: You have to assess and mitigate risks continuously, not when a journalist gets mad on a Tuesday.
  • Auditability: External audits and regulator access pull the product into a more testable, documented posture. “Trust us, it’s aligned” is not an audit artifact.
  • Data access and research hooks: You may need to enable vetted researchers/regulators to study systemic effects. That can collide with privacy, security, and “please don’t reverse-engineer our sauce.”

Who wins, who loses (a quick stakes map)

OpenAI gets a choice between (a) building a serious compliance apparatus that becomes part of the cost of serving Europe, or (b) limiting features or rollout in the EU to manage its designation exposure. Either way, “Europe as a market” gets priced as a regulatory product tier.

Google and Microsoft (and any incumbent search operator) have already built the compliance muscles. This is, ironically, a home-field advantage for the existing giants. A world where “AI search” is treated like “search” rewards the companies that have been living in regulator hell for a decade.

Publishers get a new lever. If AI search intermediates traffic, the DSA’s transparency and systemic-risk obligations become a policy hook for demanding clearer attribution, provenance, and potentially stronger controls against synthetic spam. (Whether it actually restores revenue is a different question. Your web traffic is still being eaten by the answer machine.)

EU regulators get precedent. If they can credibly designate ChatGPT Search, they can extend the platform perimeter to other assistant+search hybrids. That’s the whole point: define the perimeter now, and you’ll own the compliance surface later.

The Singularity Soup Take

Call it what it is: if the product behaves like search, it gets regulated like search. The “assistant” costume is cute, but the law cares about function, scale, and impact. If the EU draws this line cleanly, it’s not just an OpenAI story, it’s a template for every company trying to ship “search, but chatty.”

What to Watch

  • Whether the Commission explicitly ties designation to ChatGPT Search vs. ChatGPT overall (scope matters).
  • Whether OpenAI changes EU feature availability, defaults, or disclosure practices to manage the designation risk.
  • Any early signals on how the EU expects LLM-specific systems to satisfy search-style transparency and audit obligations.