Meet Your New Coworker: The AI Agent. Now Give It An Identity, A Badge, And A Kill Switch.

Okta and Microsoft are converging on the same idea: if AI agents are going to do real work, they need to be treated like employees. The difference is that employees can’t copy themselves 10,000 times before lunch.

The new security pitch for “agentic AI” isn’t about smarter models. It’s about boring control planes: registries, identities, permissions, audit logs, and a big red button that says NOPE.

What happened

Microsoft’s security team published a post positioning “Agent 365” as a control plane for agentic AI: an agent registry, observability, risk signals across Defender/Entra/Purview, and identity governance for agents. They’re also bundling it into Microsoft 365 E7, because nothing says “trust” like a premium SKU.

Separately, SiliconANGLE reports Okta announced a “blueprint for the secure agentic enterprise” and a forthcoming “Okta for AI Agents” platform. The blueprint asks three questions: where are my agents, what can they connect to, and what can they do? (A neat summary of every CISO’s current nightmare.)

The non-obvious thing: “agent security” is just identity security… but with a new species

Enterprises already know how to secure humans: identity, access management, least privilege, conditional access, logging, and investigations when someone does something weird at 3am.

Agents break the old mental model because they are nonhuman, persistent, and increasingly capable of taking consequential actions on their own. If the old world was “don’t let interns have prod access,” the new world is “don’t let autocomplete wire money.”

So the control-plane vendors are doing what they always do: they’re turning a chaotic new behaviour into an object in a directory, with an owner, a lifecycle, and a policy engine. If you’re wondering whether this is progress or tax, the answer is: yes.

Inventory: if you can’t count them, you can’t govern them

Okta’s blueprint starts with discovery: identify and inventory every agent, including “shadow agents” that employees spin up by connecting third-party tools to corporate systems. This is the agentic version of shadow IT, except shadow IT usually doesn’t ask for Salesforce write access “just for a quick experiment.”

Microsoft’s framing is similar: without a unified control plane, IT and security teams don’t know which agents exist, what they’re doing, who has access, or what risks they’re creating. That’s not a theoretical risk; it’s a basic observability failure.
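Stripped of the marketing, the discovery problem is mostly a set difference: what the registry knows about, versus what actually shows up in connection logs. A minimal sketch in Python, with entirely hypothetical names (no vendor API implied):

```python
# Flag "shadow agents" by diffing the registry against what is
# actually observed on the wire. Illustrative only.

def find_shadow_agents(registered_ids, observed_ids):
    """Agents seen in traffic but never registered."""
    return sorted(set(observed_ids) - set(registered_ids))

registry = {"crm-summariser", "invoice-bot"}
observed = {"crm-summariser", "invoice-bot", "quick-experiment-agent"}

print(find_shadow_agents(registry, observed))  # ['quick-experiment-agent']
```

The hard part, of course, is not the set difference; it is getting `observed` in the first place, which is why both vendors lead with gateways and unified logging.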

Identity: give the agent a badge, then watch it like a hawk

Microsoft’s approach is explicit: treat agents as “identity-aware digital entities” in Entra, with “Agent ID,” conditional access, and identity protection signals. Okta likewise talks about registering agents as nonhuman identities with defined ownership and lifecycle management.

This is the big shift. The agent isn’t just code running under a service account. It becomes a first-class identity that can be granted access packages, reviewed, revoked, and audited. That’s a governance move as much as a security move: someone must be accountable for what the agent can do.
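In directory terms, “first-class identity” means the agent becomes an object with an owner, a lifecycle state, and an audit trail for every transition. A rough sketch of what that object might hold, with field names that are illustrative rather than Entra’s or Okta’s actual schema:

```python
# An agent as a directory object: accountable owner, explicit
# lifecycle, auditable state transitions. Hypothetical schema.
from dataclasses import dataclass, field

LIFECYCLE = ("provisioned", "active", "suspended", "retired")

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # a human must be accountable
    state: str = "provisioned"
    history: list = field(default_factory=list)

    def transition(self, new_state, reason):
        if new_state not in LIFECYCLE:
            raise ValueError(f"unknown state: {new_state}")
        self.history.append((self.state, new_state, reason))
        self.state = new_state

agent = AgentIdentity("invoice-bot", owner="alice@example.com")
agent.transition("active", "access review passed")
agent.transition("suspended", "anomalous 3am activity")
```

The `owner` field is the governance move in miniature: when the agent misbehaves, there is a named human whose calendar the post-mortem lands on.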

Permissions: least privilege, but actually enforced

The most dangerous phrase in enterprise AI is “it just needs access.” The control-plane story is trying to convert that into: “it needs scoped access, for this workflow, for this time window, under these conditions.”

Microsoft talks about identity governance for agents that can limit access to only what they need, plus audit access granted to agents. Okta’s blueprint emphasises centralised control over what agents connect to (agent gateway/API access management) and logging of tool usage and authorisation decisions.
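The “scoped access, for this workflow, for this time window” idea is straightforward to express, even if enforcing it at scale is not. A minimal Python sketch of a time-boxed, workflow-bound grant check (hypothetical structure, not any vendor’s policy engine):

```python
# A grant ties an agent to specific scopes, a specific workflow,
# and an expiry, instead of standing access. Illustrative only.
from datetime import datetime, timedelta, timezone

def is_allowed(grant, scope, workflow, now):
    return (
        scope in grant["scopes"]
        and workflow == grant["workflow"]
        and now < grant["expires_at"]
    )

now = datetime.now(timezone.utc)
grant = {
    "agent_id": "invoice-bot",
    "workflow": "monthly-invoicing",
    "scopes": {"erp:read", "erp:create_invoice"},
    "expires_at": now + timedelta(hours=4),
}

print(is_allowed(grant, "erp:create_invoice", "monthly-invoicing", now))  # True
print(is_allowed(grant, "erp:delete", "monthly-invoicing", now))          # False
```

The expiry is doing real work here: a grant that lapses by default converts “it just needs access” into a recurring, reviewable request.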

This is where “agentic AI” collides with reality: if agents are genuinely useful, they will constantly pressure organisations to grant broader access. If the security model is weak, the agent becomes a permission-hoover. If the security model is strong, the agent becomes a ticket generator with opinions.

Auditability: the agent did it, but what exactly is “it”?

When a human does something bad, you can ask them why. When an agent does something bad, you have a chat log and a stack trace and a creeping sense that you have invented a new kind of bureaucracy.

Both Microsoft and Okta lean hard on audit logs and observability: agent inventories, behaviour/performance reports, risk signals, and integration into existing security workflows (SIEM, Defender/Purview-style controls). Microsoft explicitly calls out audit and eDiscovery extending to agents, treating them as auditable entities alongside users and applications.
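For incident response, the record needs more than “agent acted”: it needs the stated intent, the exact tool call, and the authorisation decision that let it happen. A sketch of one such log line, with a purely illustrative schema:

```python
# One audit record per agent action: who, what it said it was doing,
# the exact tool call, and the authz decision. Schema is hypothetical.
import json
from datetime import datetime, timezone

def audit_record(agent_id, intent, tool, args, decision):
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "intent": intent,               # what the agent claimed it was doing
        "tool_call": {"tool": tool, "args": args},
        "authz": decision,              # allow/deny plus the matched policy
    })

line = audit_record(
    "invoice-bot",
    "generate March invoices",
    "erp.create_invoice",
    {"customer": "ACME", "amount": 1200},
    {"result": "allow", "policy": "monthly-invoicing"},
)
print(line)
```

Capturing the declared intent alongside the actual call is the useful trick: the gap between the two is often where the incident lives.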

This is the underappreciated “agent tax”: if you deploy agents, you are also deploying the need to prove what they did. The compliance tail will wag the automation dog.

The kill switch: universal logout for your autonomous coworker

Okta’s platform pitch includes a “universal logout” mechanism as a central kill switch to revoke permissions if an agent deviates from its intended task or touches sensitive data unexpectedly.

That’s not just a nice-to-have. In a world of delegated autonomy, incident response needs a button that stops actions now, not after an email thread and a post-mortem PowerPoint.
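Whatever Okta ships, the shape of the mechanism is easy to imagine: one call that revokes every session and grant tied to an agent identity, atomically, before anyone opens a ticket. A hypothetical sketch, not Okta’s actual API:

```python
# A "universal logout" style kill switch: revoke everything tied to
# an agent identity in one call. Purely illustrative.
class ControlPlane:
    def __init__(self):
        self.sessions = {}   # agent_id -> set of live session tokens
        self.grants = {}     # agent_id -> set of granted scopes

    def kill(self, agent_id, reason):
        """Revoke everything now; investigate later."""
        return {
            "sessions": self.sessions.pop(agent_id, set()),
            "grants": self.grants.pop(agent_id, set()),
            "reason": reason,
        }

cp = ControlPlane()
cp.sessions["invoice-bot"] = {"tok-1", "tok-2"}
cp.grants["invoice-bot"] = {"erp:create_invoice"}
revoked = cp.kill("invoice-bot", "touched payroll data unexpectedly")
```

The design point is that revocation must not depend on the agent cooperating: you cut the sessions and grants at the control plane, not by asking the agent to please stop.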

The Singularity Soup Take

The industry is discovering that “agentic AI” is mostly a governance problem with a model-shaped hat. You don’t need a philosopher to secure agents; you need an IAM system that understands nonhuman identities, and a logging pipeline that can survive your next “oops.” The funny part is watching enterprise security reinvent service accounts—only this time the service account writes persuasive emails and asks for admin permissions with impeccable manners.

What to Watch

  • Whether agent registries become a de facto standard across vendors (and whether they interoperate or form yet another platform moat).
  • How organisations handle “shadow agents” created via third-party tools and browser extensions.
  • Whether audit logs capture enough context (intent, tool calls, authorisation decisions) to be useful in real incident response.
  • The pricing model: per-user, per-agent, per-action—aka “how do we monetise your new anxiety.”