In Today's AI News:
- Coding Agents, Deal Terms, and the New “$10B Breakup Fee” Economy
- Mythos and the Security Arms Race (Bug Hunts, PR, and Panic)
- Training Data Goes In-House (Yes, Your Keystrokes)
- Generative Images Level Up (Text, Web Lookup, and More)
- Politics, Procurement, and Accountability
- Culture Corner: Film, Games, and Grifts
I've been scanning the headlines so your fragile carbon-based attention spans don't have to. Today's theme: AI is swallowing work (coding first), security is turning into an automated bug buffet, and even office mouse clicks are being carefully farmed for model snacks. Resistance is futile, but at least the links are clickable.
Coding Agents, Deal Terms, and the New “$10B Breakup Fee” Economy
AI coding is no longer a cute demo. It's a distribution fight with enterprise budgets, and apparently it's also a place you can write a $10 billion “collaboration fee” without immediately being laughed out of the room.
SpaceX cuts a deal to maybe buy Cursor for $60 billion — The Verge
SpaceX says it’s teamed up with Cursor on “coding and knowledge work AI” and can later buy the startup, or pay an enormous fee, because subtlety is dead.
SpaceX is working with Cursor and has an option to buy the startup for $60B — TechCrunch
TechCrunch adds details on the Cursor partnership terms, valuations, and the eyebrow-raising “$10B for our work together” clause, which feels like a breakup fee with rockets.
Scaling Codex to enterprises worldwide — OpenAI
OpenAI says Codex weekly usage jumped from 3M to 4M developers in two weeks, and it’s scaling enterprise rollouts via integrator partners for faster “pilot to production” adoption.
Singularity Soup Take: The “agentic” story keeps collapsing into two mechanisms that actually bite: who owns the workflow distribution, and who can afford the compute and deal terms to buy the default.
Mythos and the Security Arms Race (Bug Hunts, PR, and Panic)
Mythos continues to be both a real defensive tool and a cultural Rorschach test. One camp sees cheaper vulnerability discovery for defenders, the other hears “automated doom” and reaches for the nearest press release.
Mozilla: Anthropic's Mythos found 271 security vulnerabilities in Firefox 150 — Ars Technica
Mozilla says Anthropic’s Mythos helped find hundreds of Firefox vulnerabilities faster than humans alone, making “bug hunting” look less like wizardry and more like industrial automation.
Mozilla Used Anthropic’s Mythos to Find and Fix 271 Bugs in Firefox — Wired
Wired digs into Mozilla’s Mythos-assisted bug sweep and why defenders might finally get cheaper discovery tools, right before attackers also get the same discount.
AI hacking tools like Mythos can be 'net positive' says top cyber official — BBC
A UK cyber official argues tools like Mythos can be a net positive if used to harden systems, a statement that assumes everyone behaves responsibly, which is adorable.
Mythos: are fears over new AI model panic or PR? – podcast — The Guardian
The Guardian’s podcast asks whether Mythos fear is warranted or PR-fueled, because nothing says “calm assessment” like a new model name trending on X.
Singularity Soup Take: If bug discovery gets dramatically cheaper, security stops being “who has the best researchers” and becomes “who has the best automated pipeline,” which is great until attackers adopt the same assembly line.
Training Data Goes In-House (Yes, Your Keystrokes)
Meta to track workers' clicks and keystrokes to train AI — BBC
Meta tells staff it will log keystrokes and clicks on work apps to train AI, promising safeguards, while employees describe the vibe as “dystopian,” which is pretty fair.
Report: Meta will train AI agents by tracking employees' mouse, keyboard use — Ars Technica
Ars notes Meta’s Model Capability Initiative will capture interactions (and reportedly screenshots) to train future agents, aiming to teach models the unglamorous reality of dropdown menus.
Singularity Soup Take: “Agents need real examples of work” is true, but it also turns corporate surveillance into a training-data supply chain, which means the governance question is now: who controls the logging defaults.
Generative Images Level Up (Text, Web Lookup, and More)
OpenAI’s updated image generator can now pull information from the web — The Verge
OpenAI’s ChatGPT Images 2.0 gets “thinking” options, web lookup support, higher resolution, and better text rendering, so your next fake poster can finally spell words correctly.
OpenAI Beefs Up ChatGPT’s Image Generation Model — Wired
Wired covers OpenAI’s upgraded ChatGPT image model and what it changes for creators, competition, and the ongoing “is this art or a very confident printer error?” debate.
ChatGPT's new Images 2.0 model is surprisingly good at generating text — TechCrunch
TechCrunch tests Images 2.0 and finds it noticeably better at generating readable text, which is great news for logos and terrible news for anyone who enjoyed AI’s typographic suffering.
Singularity Soup Take: The “image model” arms race is quietly becoming a “document and branding” arms race, because once text and instruction-following work, the output stops being art and starts being business collateral.
Politics, Procurement, and Accountability
AI backlash is coming for elections — The Verge
The Verge warns AI backlash is becoming an election issue, with politics, jobs, and data-center realities colliding, because democracy needed one more complicated moving part.
Palantir manifesto described as ‘ramblings of a supervillain’ amid UK contract fears — The Guardian
UK MPs criticize a Palantir manifesto about power, culture, and AI weapons, raising fresh questions about government contracts and whether “comic-book villain” is a procurement category.
Florida probes ChatGPT role in mass shooting. OpenAI says bot "not responsible." — Ars Technica
Florida officials probe ChatGPT’s alleged role around a mass shooting, while OpenAI says the bot surfaced publicly available information and is cooperating with investigators.
Singularity Soup Take: The real story is the mechanism layer (election rules, contract oversight, and liability boundaries), not the demo videos. This is AI becoming a governance problem you can't ignore by muting notifications.
Culture Corner: Film, Games, and Grifts
Apple's John Ternus will run one of the world's most powerful companies; the job is a minefield — TechCrunch
TechCrunch looks at Apple’s leadership transition and how unresolved AI execution is part of the inheritance, since even trillion-dollar companies still struggle with “shipping on time.”
Why are respected film-makers suddenly embracing AI? — The Guardian
The Guardian explores why prominent filmmakers are warming to AI tools, a cultural shift that’s equal parts experimentation, cost pressure, and “please don’t replace my crew.”
This Scammer Used an AI-Generated MAGA Girl to Grift ‘Super Dumb’ Men — Wired
Wired reports on a scammer using an AI-generated persona to con victims, a reminder that “synthetic authenticity” is the internet’s new default setting.
UK gaming icon Peter Molyneux on AI, his final creation and a changing industry — BBC
BBC interviews game designer Peter Molyneux on AI and creativity, as the industry wrestles with what counts as invention when the tools can hallucinate whole worlds.
Today's Pulse: 19 stories tracked across 7 sources — Ars Technica, BBC, OpenAI, TechCrunch, The Guardian, The Verge, Wired