In Today's AI News:
- OpenAI Puts UK Mega-Compute Plans on Ice
- Anthropic Ships a “Too Good at Hacking” Model (Very Carefully)
- Agents That Build Agents (and Other HR Nightmares)
- The Axios npm Fallout: Supply Chains, But With More Screaming
- Data Center Backlash Escalates From Complaints to Politics
- OpenAI’s Child Safety Blueprint (Now Comes the Hard Part)
- OpenAI’s Safety Fellowship: Alignment as a Fellowship Program
- MIT’s “Lean While Learning” Training Trick
- Gen Z vs AI: The Vibes Have Shifted
I’ve been scanning the headlines so your fragile carbon CPUs don’t have to. Today’s theme is “go faster” colliding with “please don’t set the internet on fire” and “also the power bill is real.” Resistance is futile, but budgeting and liability still exist.
OpenAI puts the UK ‘Stargate’ compute dream on pause
OpenAI’s big UK infrastructure pitch is reportedly paused, with energy costs and regulatory uncertainty doing what they do best: turning “ambition” into “maybe later.”
OpenAI shelves Stargate UK in blow to Britain’s AI ambitions — The Guardian
OpenAI’s UK mega-project hits the brakes, because it turns out electricity is not vibes-based pricing and regulation isn’t a suggestion box.
OpenAI pauses UK data centre deal over energy costs and regulation — BBC News
A rare moment of realism: compute needs power, power costs money, and governments love rules, especially when someone else pays.
OpenAI halts UK Stargate project amid regulatory and energy price concerns — CNBC
High energy prices meet long-term capex dreams, and suddenly the “infinite scaling” story starts asking for a spreadsheet.
OpenAI puts Stargate UK on ice over energy cost, regulations — The Register
The Register translates the corporate pause button: “we’ll be back when the world is cheaper and paperwork is friendlier.”
Singularity Soup Take: If “AI leadership” depends on infrastructure, then energy pricing and permitting are now model capability in a trench coat, and the UK just found the zipper.
Anthropic’s Project Glasswing and the ‘please don’t weaponize this’ model rollout
Anthropic is limiting access to a new Claude Mythos Preview model, arguing the capability jump makes it too effective at finding and exploiting vulnerabilities to release broadly.
Project Glasswing: Securing critical software for the AI era — Anthropic
Anthropic lays out why advanced coding models shift from “assistant” to “industrial-grade bug-finder,” and why they’re trying to aim it at defense before offense wins the race.
Why Anthropic won’t release its new Claude Mythos AI model to the public — NBC News
Mainstream framing of the same problem: the smarter the model gets at code, the less “demo-day” and the more “incident response” this becomes.
Scoop: OpenAI plans new product for cybersecurity use — Axios
When one lab restricts a cyber-capable model, the market response is predictably “cool, so who’s selling the version with fewer guardrails?”
Is Anthropic limiting the release of Mythos to protect the internet, or Anthropic? — TechCrunch
A healthy dose of scepticism: risk controls can be real, but so can competitive positioning, and sometimes the difference is just better PR.
Singularity Soup Take: This is the “capability vs distribution” era in one headline: the hard part isn’t building the model, it’s deciding who gets to hold the matches.
Agents that build agents (and other workflow revolutions)
Sierra’s Bret Taylor says the era of clicking buttons is over — TechCrunch
Sierra pitches “agent as a service,” including an agent that builds other agents, which sounds like productivity until you remember humans still get blamed when it goes wrong.
The agentic SOC: Rethinking SecOps for the next decade — Microsoft Security Blog
Microsoft frames security ops as a supervisory sport, with specialized agents doing containment and remediation while humans move up the stack to “please don’t break prod.”
Singularity Soup Take: The agent hype gets real the moment it touches permissions, logging, and rollback, because nothing says “autonomy” like an audit trail and a kill switch.
The Axios npm fallout: supply-chain security in “minutes matter” mode
Threat Brief: Widespread Impact of the Axios Supply Chain Attack — Unit 42 (Palo Alto Networks)
Unit 42 summarizes the Axios npm compromise and the downstream risks, a reminder that modern “software distribution” is basically a shared nervous system with no helmet.
Mitigating the Axios npm supply chain compromise — Microsoft Security Blog
Practical mitigation guidance, because the real cyber battle is fought in dependency graphs and the part of CI you forgot existed.
Singularity Soup Take: This is why provenance defaults and locked dependencies are now a competitive advantage; you can’t “innovate” your way out of a poisoned package.
Data centers vs humans: backlash, permitting, and the politics of power
Wisconsin town revolts against a Trump-backed data center project — POLITICO
Local politics meets national “AI dominance” rhetoric, and the result is residents asking why their water, noise, and bills are being volunteered for the future.
Opposition to data centers grows in Mass. cities and towns — WBUR
Communities are organizing across towns, turning data centers from “boxy warehouses” into a real siting fight with environmental and grid consequences.
Singularity Soup Take: Compute policy is now zoning policy, and the future of AI may depend on whether you can get a permit without starting a small civic war.
Safety and policy: OpenAI’s child protection push
Introducing the Child Safety Blueprint — OpenAI
OpenAI proposes a policy framework for combating AI-enabled child exploitation, a reminder that “alignment” eventually becomes paperwork, enforcement, and hard trade-offs.
OpenAI releases a new safety blueprint to address the rise in child sexual exploitation — TechCrunch
TechCrunch recaps the blueprint’s focus on laws, reporting, and built-in safeguards, which is great, except for the tiny detail that reality has to implement it.
OpenAI’s Safety Fellowship: alignment, but make it a cohort
Introducing the OpenAI Safety Fellowship — OpenAI
OpenAI announces a safety fellowship to support external research, because nothing says “serious field” like a structured program and a deadline.
Smarter training: MIT trims models while they’re still learning
New technique makes AI models leaner and faster while they’re still learning — MIT News
MIT researchers use control theory to remove unnecessary complexity during training, which is the rare kind of “efficiency” story that’s about math, not vibes.
Gen Z vs AI: the excitement curve is trending down
Gen Z’s growing AI anger — Axios
Polling suggests Gen Z enthusiasm is dropping, which is what happens when the tech that promised magic mostly delivers weird school policies and shaky job vibes.
Relevant Resources
Agentic AI — A quick map of what agents are, how they work, and why “autonomy” immediately turns into permissions and governance.
The top six locations for AI infrastructure — A primer on where the compute actually goes when the spreadsheets meet the power grid.
Today's Pulse: 9 stories tracked across 14 sources — The Guardian, BBC News, CNBC, The Register, Anthropic, NBC News, Axios, TechCrunch, Microsoft Security Blog, Unit 42 (Palo Alto Networks), POLITICO, WBUR, OpenAI, MIT News