If your agent can do a 40-step workflow in 12 seconds, your annual access review is basically interpretive dance.
Enterprise IAM wasn’t built for software that decides what to do next. As AI agents proliferate, the security problem is shifting from ‘who logged in’ to ‘what is this thing trying to do right now’. Curity’s new Access Intelligence pitch is one answer: make tokens carry intent, grant permissions at runtime, and force high-risk actions to trip a human gate. Meanwhile, leaked secrets are exploding, because AI makes it easy to ship code faster than governance can breathe.
The problem: agents turn “access” into a live-fire negotiation
Traditional IAM assumes a world where the important moment is authentication. You log in, you get a role, you do your little human tasks, and then you go back to pretending you’ll change your password this quarter. Even machine identities mostly behave: they run a known workload with known calls.
Agents don’t. An agent starts with a goal, then generates a chain of actions, calls APIs, hits MCP servers, spawns sub-tasks, and changes course mid-flight. It can be “the same agent” but doing completely different kinds of work minute to minute. So if your security model is “we granted it access once,” congratulations, you have invented permanent over-permissioning.
Curity’s pitch: stop treating tokens like hall passes
Curity’s announcement (via CSO Online) is interesting because it targets the actual mechanism: OAuth tokens. The idea is not just to authenticate and hope. It’s to treat tokens as a carrier for purpose and intent, then broker access at runtime. Each requested action gets a separate token describing what it needs, and you can require human authorization for high-risk moves like transferring funds.
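To make the mechanism concrete, here is a minimal sketch of what a runtime access broker like this could look like. This is not Curity's API; every name here (`RuntimeBroker`, `HIGH_RISK`, `approve_human`) is hypothetical, and a real implementation would mint signed OAuth tokens rather than plain dicts. The point is the shape: one short-lived token per action, carrying a stated purpose, with high-risk actions blocked unless a human approves.

```python
# Hypothetical sketch of a runtime token broker: per-action tokens with
# purpose/intent claims, plus a human gate on high-risk actions.
# All names here are illustrative, NOT Curity's actual API.
import time
import uuid
from typing import Optional

HIGH_RISK = {"transfer_funds", "delete_records"}

def approve_human(action: str, detail: dict) -> bool:
    # Stand-in for a real approval flow (push notification, signed confirmation).
    # Deny by default in this sketch.
    print(f"APPROVAL REQUIRED: {action} {detail}")
    return False

class RuntimeBroker:
    def issue(self, agent_id: str, action: str, detail: dict) -> Optional[dict]:
        """Issue a short-lived, single-action token, or None if denied."""
        if action in HIGH_RISK and not approve_human(action, detail):
            return None
        now = int(time.time())
        return {
            "jti": str(uuid.uuid4()),  # unique token id, for revocation and audit
            "sub": agent_id,           # the agent's own first-class identity
            "act": action,             # exactly one permitted action
            "purpose": detail.get("purpose", "unspecified"),
            "iat": now,
            "exp": now + 60,           # expires in one minute
        }

broker = RuntimeBroker()
ok = broker.issue("agent-42", "read_invoice", {"purpose": "reconcile Q3"})
blocked = broker.issue("agent-42", "transfer_funds", {"purpose": "pay vendor"})
```

The design choice worth noticing: authorization happens at every action, not at login, so "the same agent doing different work minute to minute" gets a different, narrower token each time.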
This is not a magic wand. It’s the security industry doing what it always does: taking yesterday’s abstraction (OAuth) and trying to make it survive tomorrow’s behavior (non-deterministic agents). But it’s a strong signal that “agent security” is converging on identity-plus-policy enforcement, not “prompt rules.”
Why this is happening now: the secrets sprawl numbers are ridiculous
GitGuardian’s 2026 State of Secrets Sprawl report (covered by Help Net Security) puts a blunt number on the underlying governance failure: 28.6 million new secrets exposed in public GitHub commits in 2025, up 34% year over year. And commits co-authored by AI assistants leaked secrets at roughly double the baseline rate.
That doesn’t mean “AI is naughty.” It means AI accelerates the exact phase of software development where teams historically do the dumbest, fastest thing that works. You scaffold a prototype, wire five APIs, paste keys into a config file, and push. The difference now is that the prototype takes an hour, not a week, and the number of integrations has quietly multiplied.
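The "paste keys into a config file and push" failure is exactly what pre-commit secret scanning exists to catch. A toy version, assuming nothing beyond the standard library, looks like this; real scanners such as GitGuardian or gitleaks use hundreds of detectors plus entropy analysis, so these regexes are illustrative only.

```python
# Minimal sketch of a pre-commit secret scan: regex heuristics over staged text.
# Patterns are illustrative; production scanners use far broader detector sets.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list:
    """Return the names of any secret patterns that match the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# A diff that looks like a hastily wired prototype:
diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key: "sk_live_abcdefghijklmnopqrstuvwx"'
hits = find_secrets(diff)
```

Wiring a check like this into a pre-commit hook is cheap; the report's numbers suggest the real gap is that fast, AI-assisted prototyping skips even this step.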
The non-obvious bit: this is a procurement story in disguise
Once you accept that agents are governed non-human identities, you start seeing the next wave of enterprise buying criteria: token provenance, scoped permissions, revocation speed, audit logs that are actually queryable, and proof that “human approval” isn’t just a modal dialog that gets clicked on reflex.
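"Audit logs that are actually queryable" mostly means structured records instead of free-text lines. A sketch of what that could look like for agent actions, with field names that are purely illustrative (no vendor's schema):

```python
# Sketch of a structured audit record for agent actions: flat, typed fields
# so auditors can filter, not grep. Field names are illustrative assumptions.

def audit_record(agent_id, token_id, action, purpose, approved_by, allowed):
    return {
        "agent_id": agent_id,        # which non-human identity acted
        "token_id": token_id,        # the per-action token, for revocation tracing
        "action": action,
        "purpose": purpose,          # the stated intent carried by the token
        "approved_by": approved_by,  # None, or the human who authorized it
        "allowed": allowed,
    }

log = [
    audit_record("agent-42", "tok-1", "read_invoice", "reconcile Q3", None, True),
    audit_record("agent-42", "tok-2", "transfer_funds", "pay vendor", "alice", True),
]

# "Queryable" in practice: which high-risk actions ran without a human gate?
unattended = [
    r for r in log
    if r["action"] == "transfer_funds" and r["approved_by"] is None
]
```

A query like that is also how you prove the "human approval" field is more than a reflexively clicked modal: you can show every high-risk action paired with a named approver.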
In other words, the agent revolution is being decided in the least glamorous place possible: the access broker. The platform that wins isn’t the one with the cutest agent demo. It’s the one that makes the control plane legible to auditors and usable by developers at high speed.
The Singularity Soup Take
We’re watching the birth of a new enterprise cliché: “agents are just apps.” It’s going to be repeated so often it becomes meaningless. But it’s also directionally true in the only way that matters: apps get identities, apps get policies, apps get killed when they misbehave. If your agents don’t have first-class identities and runtime-scoped access, you don’t have automation. You have a haunted house full of long-lived API keys.
What to Watch
- Runtime authorization defaults: whether IAM vendors bake “intent tokens” and per-action grants into mainstream products.
- MCP hygiene: whether new integration standards stop shipping examples that normalize hardcoded secrets.
- Revocation drills: whether orgs can actually kill agent credentials in minutes, not hours.
- Shadow agents: whether policy enforcement can see and stop unsanctioned toolchains, not just the official ones.
Sources
CSO Online — "Curity looks to reinvent IAM with runtime authorization for AI agents"
Help Net Security — "29 million leaked secrets in 2025: Why AI agents' credentials are out of control"