
For the past two years, we’ve been told that artificial intelligence is already transforming the economy, reshaping work, and ushering in a new industrial revolution. The rhetoric is breathless. The demos are dazzling. And yet, outside of a few well-publicised productivity gains and a mountain of pilots, the world looks… oddly familiar.
No productivity boom.
No mass reorganisation of institutions.
No obvious macro shock.
This has led to a growing suspicion that the AI narrative is mostly hype — a speculative future sold in the present tense. That suspicion is understandable. But it may also be incomplete.
The real mistake isn’t assuming AI will change everything.
It’s misunderstanding how that change arrives.
The Capability–Impact Gap
There is now a clear mismatch between what AI systems can do in isolation and what they actually change in the real world.
Modern models can write code, summarise documents, classify data, generate plans, and reason locally about problems. From a purely technical standpoint, this is impressive. But business and government don’t run on isolated intelligence. They run on reliability, accountability, integration, and trust.
The result is a widening gap:
- AI capability has advanced rapidly
- AI deployment has advanced cautiously
- AI impact has diffused quietly
This gap is often misread as failure. In reality, it reflects where AI is currently allowed to operate.
Agency: Where AI Delivers First
In the short term, AI’s real power comes from agency — the ability to take on tasks, pursue goals, and act over time.
Agentic AI doesn’t feel revolutionary because it doesn’t announce itself. It:
- follows up
- retries
- monitors
- escalates
- fills gaps humans forget
This is why its impact hides in places we don’t usually measure:
- fewer dropped handoffs
- quieter inboxes
- smoother operations
- marginally faster workflows
Agentic systems don’t replace departments. They shave friction. And because friction is everywhere, their effects spread thinly across organisations rather than exploding in one visible place.
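The behaviours listed above (follow up, retry, monitor, escalate) are less a product feature than a control-flow pattern. A minimal sketch in Python, where the function names and the retry policy are illustrative assumptions rather than any particular system:

```python
# Illustrative sketch of the agentic pattern described above: quietly retry
# a failing task, and escalate to a human only when retries are exhausted.
# All names and the retry policy are assumptions for illustration.

def run_with_agency(task, max_retries=3, escalate=print):
    """Attempt `task`, retrying on failure; escalate if it keeps failing."""
    last_error = None
    for _ in range(max_retries):
        try:
            return task()  # the underlying unit of work
        except Exception as exc:
            last_error = exc  # remember why this attempt failed, then retry
    # All retries exhausted: hand the task to a human rather than dropping it.
    escalate(f"Escalating after {max_retries} attempts: {last_error}")
    return None

# A flaky task that fails once, then succeeds on the second attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return "done"

print(run_with_agency(flaky))  # → done
```

Nothing in the sketch is dramatic, which is exactly the essay's point: the value is in the dropped handoff that quietly stops happening.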
That’s why AI looks underwhelming at the macro level — even as it becomes increasingly embedded at the micro level.
The Ceiling of Agency
But agency alone doesn’t transform systems.
Task-level delegation has limits:
- agents inherit human priorities
- they optimise locally
- they wait for permission
- they don’t resolve trade-offs between competing goals
An organisation full of agents still behaves like the organisation that deployed them — just slightly faster.
This is where many AI initiatives stall. Not because the models are weak, but because execution is not where real power sits.
Coordination: Where Power Actually Lives
The biggest impact of AI will not come from doing tasks.
It will come from coordinating systems.
Coordination means:
- setting priorities across teams
- allocating resources
- resolving trade-offs
- sequencing actions
- deciding what doesn’t get done
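The list above can be made concrete with a toy example: coordination is what happens when there are more requests than capacity, and something must rank, sequence, and drop. A sketch, in which the request data and the value-per-cost priority rule are purely illustrative assumptions:

```python
# Toy coordinator: given competing requests and limited capacity, it sets
# priorities, sequences work, and decides what does not get done.
# The data and the scoring rule are illustrative assumptions.

def coordinate(requests, capacity):
    """requests: list of (name, cost, value); capacity: total budget.
    Returns (scheduled, deferred), each in priority order."""
    # Priority = value per unit of cost: one explicit, consistent trade-off rule.
    ranked = sorted(requests, key=lambda r: r[2] / r[1], reverse=True)
    scheduled, deferred, used = [], [], 0
    for name, cost, value in ranked:
        if used + cost <= capacity:
            scheduled.append(name)   # sequenced by priority
            used += cost
        else:
            deferred.append(name)    # deciding what doesn't get done

    return scheduled, deferred

requests = [("migration", 5, 8), ("bugfix", 1, 4), ("report", 2, 2)]
print(coordinate(requests, capacity=6))  # → (['bugfix', 'migration'], ['report'])
```

The interesting property is that the trade-off rule is applied the same way every time, with no politics and no local incentives, which is precisely what the next paragraph says humans struggle to do.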
Humans are notoriously bad at this — not because we lack intelligence, but because we are:
- slow
- political
- inconsistent
- biased toward local incentives
Coordination is where inefficiency hides.
It’s also where authority lives.
Once AI begins to coordinate — even partially — small gains stop cancelling each other out. Bottlenecks dissolve instead of migrating. Feedback loops tighten. Latency collapses. The system itself starts behaving differently.
This is the moment when “lots of small effects” snap into something structural.
And it won’t look dramatic.
It will look like:
- “The system recommends…”
- “The optimiser adjusted…”
- “The workflow rebalanced…”
No takeover. Just fewer human decisions — and eventually, fewer places for humans to intervene meaningfully.
Why This Hasn’t Happened Yet
The main barrier to AI coordination isn’t compute, data, or models.
It’s authority.
We are comfortable delegating work.
We are not comfortable delegating priorities.
Coordination carries responsibility, blame, and moral weight. When things go wrong, someone must answer for it. Institutions are therefore cautious — even when AI coordination would be objectively more efficient.
This creates a paradox:
The largest gains are available last, because they require the most trust.
Three Futures for Coordinating AI
If AI does move into coordination, three broad paths emerge:
- Soft Centralisation: AI coordinates quietly; humans supervise and rubber-stamp. Efficiency rises; power concentrates invisibly.
- Fragmented Autonomy: multiple coordinating systems compete, meta-coordination becomes the bottleneck, and humans arbitrate between machines.
- Institutional Capture: coordination AI becomes infrastructure — owned by states, platforms, or mega-firms — shaping society the way money systems and logistics networks already do.
None of these require artificial general intelligence. None require consciousness. None require runaway capability growth.
They require only one thing: trust in machine-mediated coordination.
The Verdict
AI is not yet transforming everything — and the hype that says otherwise is premature.
But dismissing AI as overpromised and underdelivered misses the deeper shift already underway.
We are moving from:
- tools → agents
- agents → coordinators
- coordination → power
The real transformation won’t arrive as a sudden rupture. It will arrive as a quiet reallocation of who — or what — decides how the world fits together.
And by the time it’s visible at the macro level, it will already be embedded too deeply to easily undo.
That’s not hype.
That’s how infrastructure takes over.