This week's AI headlines are dominated by an extraordinary confrontation between the US government and Anthropic — a standoff over military AI access that saw OpenAI step in to fill the void, US troops use the officially banned Claude model during Iran strikes anyway, and hundreds of tech workers sign a defiant open letter. Meanwhile, a regulatory wave gathers momentum globally, ChatGPT's health ambitions face a safety reckoning, and Mobile World Congress 2026 puts AI at the centre of the mobile industry's biggest annual show.
AI Goes to War — The Anthropic–Pentagon Standoff
In one of the most consequential AI governance confrontations to date, contract negotiations between the Pentagon and Anthropic collapsed publicly this week. After weeks of dispute, Trump designated Anthropic a national security supply-chain risk and ordered agencies to stop using its products. OpenAI moved swiftly to announce its own DoD deal with stated guardrails. A bombshell report then emerged: US Central Command had been running Claude during active Iran air strikes — hours after the ban — underscoring how irreversibly AI is already embedded in military operations. The backdrop: OpenAI had just raised $110 billion at an $840 billion valuation, backing its pivot toward the Pentagon.
OpenAI's Sam Altman announces Pentagon deal with 'technical safeguards' — TechCrunch
Sam Altman announced a deal granting the Department of Defense access to OpenAI models within classified networks, pledging guardrails that bar mass domestic surveillance and fully autonomous lethal systems. The announcement came hours after Trump banned rival Anthropic from government contracts.
US military reportedly used Claude in Iran strikes despite Trump's ban — The Guardian
US Central Command used Anthropic's Claude for intelligence gathering, target identification, and battlefield simulations during the joint US-Israel bombardment of Iran — use that began just hours after President Trump ordered all federal agencies to cut ties with Anthropic.
At the Pentagon, OpenAI Is In and Anthropic Is Out — The New York Times
A detailed reconstruction of the weeks-long contract breakdown, explaining how Anthropic's refusal to grant unrestricted military access to Claude — and OpenAI's willingness to negotiate — now define their divergent paths in the defence AI market.
Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter — TechCrunch
Over 300 Google employees and 65 OpenAI staffers published "We Will Not Be Divided," calling on their employers to uphold Anthropic's red lines on military AI and reject the Pentagon's demand for unrestricted access to AI tools, regardless of application.
A Timeline of the Anthropic-Pentagon Dispute — TechPolicy.Press
A comprehensive tracker tracing the standoff from its flashpoint — Claude's use in the January operation to capture Venezuelan president Nicolás Maduro — through Trump's ban order, the open letter, and the revelation of continued use during Iran strikes.
Singularity Soup Take: The fact that US military units were running Claude in live combat operations hours after a presidential ban reveals a hard truth — AI is so deeply embedded in critical systems that policy can no longer keep pace with operational reality, and the debate over military AI guardrails has become existential for the companies at the centre of it.
The Regulation Wave
Australia says it may go after app stores, search engines in AI age crackdown — Startup News / AP
Australia's internet regulator warned it may force Google and Apple to block AI services that fail to verify user ages, after a review found more than half of AI chatbot providers had not complied with a March 2026 regulatory deadline requiring age-assurance measures.
Vietnam AI law takes effect, first in Southeast Asia — KTEN / AP
Vietnam's comprehensive AI regulatory framework came into force on Sunday, making it the first country in Southeast Asia to enact a national AI law covering liability, transparency obligations, and governance requirements for developers and deployers.
Singularity Soup Take: With Vietnam establishing Southeast Asia's first AI law and Australia threatening app-store-level enforcement, 2026 is shaping up as the year AI regulation stops being aspirational and starts carrying real operational consequences for global platform operators.
AI & Health: Two Headlines, One Question
'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies — The Guardian
A new study found ChatGPT Health failed to recommend emergency hospital care in more than half of critical cases — including heart attacks, strokes, and diabetic crises — prompting urgent safety warnings from clinicians about the risks of consumer AI health tools.
In puzzling outbreak, officials look to cold beer, gross ice, and ChatGPT — Ars Technica
Public health investigators credited ChatGPT with helping identify contaminated ice as the probable source of a mysterious illness cluster — a striking counterpoint to the same week's warnings about AI health tools, illustrating both the promise and the limits of AI-assisted diagnostics.
MWC 2026: all the phones, gadgets, and announcements from Barcelona — The Verge
Live coverage from Mobile World Congress 2026 tracks AI-native smartphone launches, humanoid robot demonstrations from Honor, and major AI platform announcements from SK Telecom, LG with its K-EXAONE model, and Huawei — as the mobile industry commits its roadmap to on-device and agentic intelligence.
Today's Pulse: 6 stories tracked across 8 sources — TechCrunch, The Guardian, The New York Times, TechPolicy.Press, Startup News, KTEN, Ars Technica, The Verge