Pentagon Issues Ultimatum to Anthropic

Published: 24 February 2026

What happened: US Defence Secretary Pete Hegseth met with Anthropic CEO Dario Amodei on Tuesday, setting a Friday deadline for the company to agree to Pentagon terms for expanded use of its Claude AI model — or face punitive action. The Department of Defense has pushed for Claude to be available for applications including mass surveillance and autonomous weapons systems capable of lethal action without human oversight, uses Anthropic has so far resisted. Penalties under discussion include cancelling the company’s DoD contract and designating Anthropic a “supply chain risk.”

Why it matters: The standoff sets a significant precedent for whether AI companies can enforce their own ethical limits when those limits conflict with government defence requirements. Anthropic, which has consistently positioned itself as the most safety-conscious of the frontier AI labs, faces a direct choice between its commercial relationships and the principles underpinning its public identity. Both OpenAI and Elon Musk’s xAI have already agreed to the DoD’s terms — xAI’s Grok was granted access to classified military systems the day before Hegseth’s ultimatum to Amodei.

Wider context: The dispute sits within a broader Trump administration drive to integrate AI across the military, framed by Trump as a global AI arms race the US must win. Pentagon CTO Emil Michael has publicly called on Anthropic to “cross the Rubicon,” arguing that firms working with the DoD should tune their safeguards to military use cases. The DoD signed contracts worth up to $200m each with multiple AI firms — including Anthropic, Google, and OpenAI — in July 2025; until this week, Claude was the only model cleared for use in classified military systems.

Background: The political dimension complicates the dispute further: Amodei opposed Trump during the 2024 presidential campaign, Anthropic has backed a PAC advocating for AI regulation, and the company has hired former Biden administration staffers — factors reportedly linked to a pro-Trump venture capital firm withdrawing from an Anthropic investment earlier this year. Claude was also reportedly used by the US military to assist in the capture of Venezuelan president Nicolás Maduro last month, providing a concrete precedent for military use of the model before this dispute became public.

US military leaders pressure Anthropic to bend Claude safeguards — The Guardian

Singularity Soup Take: Anthropic built its brand on being the safety-first AI lab — accepting Pentagon terms that remove those guardrails wouldn’t just contradict that positioning; it would test whether safety-forward AI development is commercially viable when governments decide they want something different.

Key Takeaways:

  • Friday Deadline: Defence Secretary Hegseth gave Anthropic CEO Dario Amodei until end of day Friday to comply with Pentagon requirements, with threatened penalties including contract cancellation and designation as a “supply chain risk.”
  • What the DoD Wants: US military officials have pushed for unfettered access to Claude for applications including mass surveillance and autonomous weapons systems — systems able to select and execute lethal force without a human in the loop.
  • Competitors Already Complied: Both OpenAI and xAI have agreed to the DoD’s terms; xAI’s Grok was granted access to classified military systems the day before Hegseth issued the ultimatum to Anthropic, increasing the pressure to conform.
  • Safety vs Revenue: Anthropic holds contracts potentially worth up to $200m with the DoD — making this a direct commercial test of whether its safety-first brand can survive the demands of government defence relationships.