The EU wrote the AI Act. Now comes the hard part: enforcing it across 27 member states, multiple authorities, and a brand-new “AI Office” that’s supposed to supervise powerful foundation models.
If you were hoping regulation would be a single decisive hammer blow, I have disappointing news: the future is committees, contact points, and “hybrid enforcement models.” Resistance is futile; coordination is optional.
What happened
The European Parliament’s Think Tank published a plain-English explainer on how the EU AI Act is enforced. It describes a hybrid model: most AI system rules are enforced nationally (with support/advice from central EU bodies), while general-purpose AI (GPAI) model rules are supervised and enforced by the European Commission via the AI Office.
As of March 2026, the Commission’s list of national “single points of contact” reportedly named contact points for only eight of the 27 member states. That’s not a scandal; it’s a signal: enforcement is a process, not a switch.
The non-obvious thing: the fight is shifting from “what does the law say?” to “who actually has capacity?”
The AI Act is a risk-based law with a lot of moving parts: prohibited uses, high-risk obligations, transparency duties, and a parallel regime for GPAI models—including “systemic risk” models with extra requirements like evaluation and risk assessment.
But enforcement is ultimately a capacity game. If your national authority is underfunded or inexperienced, enforcement becomes uneven. If the AI Office centralises too much, member states worry about sovereignty and local context. If it centralises too little, the EU gets the “GDPR problem” again: one law, many interpretations.
Who enforces what?
National authorities: the decentralised backbone
Member states designate at least one notifying authority and one market surveillance authority. Notifying authorities designate conformity assessment bodies (which become “notified bodies”) to assess high-risk AI systems before they enter the market. Market surveillance authorities do after-the-fact checks, can request documents, evaluate systems, and impose fines.
In some sectors (finance, law enforcement), other authorities may be involved in ex-post checks. Translation: enforcement isn’t one pipeline; it’s multiple pipelines, with handoffs, overlaps, and the usual institutional turf wars.
The AI Office: the centralised layer for GPAI
The AI Office (a Commission function) has sole authority to enforce the AI Act’s provisions on GPAI models. It also shapes implementation through “soft instruments”: codes of practice, guidelines, and communications. In practice, those soft instruments can become the de facto rules that companies build to—especially if you’re a platform trying to avoid being first in line for a fine.
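The hybrid split described above can be captured in a toy routing model. This is a deliberate simplification for intuition only: the class names, tier labels, and routing function below are my own illustrative constructs, not anything from the Act’s text.

```python
# Toy model of the AI Act's hybrid enforcement split.
# Names and tiers are illustrative simplifications, not legal categories.

from dataclasses import dataclass


@dataclass
class Case:
    subject: str               # "ai_system" or "gpai_model"
    risk: str                  # e.g. "high", "transparency", "minimal"
    systemic_risk: bool = False  # only meaningful for GPAI models


def responsible_enforcer(case: Case) -> str:
    """Route a case to the layer that supervises it in the hybrid model."""
    if case.subject == "gpai_model":
        # GPAI model rules are enforced centrally by the Commission's AI
        # Office; "systemic risk" models carry extra obligations such as
        # evaluation and risk assessment.
        return "AI Office (European Commission)"
    # AI *system* rules are enforced nationally: notified bodies assess
    # high-risk systems before market entry; market surveillance
    # authorities handle ex-post checks.
    if case.risk == "high":
        return ("national notified body (ex ante) + "
                "market surveillance authority (ex post)")
    return "national market surveillance authority"


print(responsible_enforcer(Case("gpai_model", "n/a", systemic_risk=True)))
print(responsible_enforcer(Case("ai_system", "high")))
```

The point of the sketch is the branch order: the first question is not “how risky is it?” but “is it a model or a system?”, because that answer alone decides whether Brussels or a national capital holds the pen.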
The Think Tank note also references a “digital omnibus on AI” proposal (from November 2025) that could further centralise enforcement: for example, over AI systems integrated into very large online platforms and search engines, and over some cases where the same provider supplies both the model and the system built on it.
Why this matters (even if you don’t live in Brussels)
First: the EU is setting global expectations. Even companies outside Europe end up harmonising to the strictest major regime, because shipping two compliance models is expensive and embarrassing.
Second: the enforcement model shapes competitive outcomes. If enforcement is uneven, large players can absorb complexity while smaller players drown in ambiguity. If enforcement is centralised and predictable, compliance becomes a more standardised cost—still expensive, but at least legible.
Third: the AI Office’s approach to “systemic risk” GPAI models will matter far beyond Europe, because it sets a precedent for what a regulator expects in evaluation, risk management, incident handling, and transparency from frontier model providers.
The Singularity Soup Take
The AI Act isn’t “EU bans AI” or “EU saves the world.” It’s a complicated governance machine being assembled while the technology sprints ahead. The interesting question isn’t whether the law is perfect—it isn’t. It’s whether the institutions can build enough shared capacity to make enforcement consistent. Otherwise you get the worst of both worlds: compliance paperwork everywhere, accountability nowhere.
What to Watch
- How quickly member states fill out the single-point-of-contact list and build real enforcement teams.
- Early AI Office guidance: codes of practice and evaluation expectations for GPAI models.
- Whether the “digital omnibus on AI” centralisation proposal advances, and what it pulls into the AI Office’s orbit.
- First major cross-border enforcement cases: they’ll reveal whether the hybrid model cooperates or fractures.
Sources
European Parliament Think Tank — "Enforcement of the AI Act"
EUR-Lex — "Regulation (EU) 2024/1689 (AI Act)"