In Today's AI News:
- Workplace Agents: Bots That File The TPS Reports For You
- Mythos, MCP, and the New Bug-Finding Arms Race
- TPUs And Export Controls: The Hardware Gets Political
- Work Traces As Training Data (Yes, Your Clicks)
- AI In The Dock: Prosecutors, Courts, And Hallucinated Citations
- Assistants Everywhere: Gemini At Home, Summaries In WhatsApp
- Deepfakes And Disinfo Collages (Now With Geopolitics)
- Privacy, On Purpose: Redaction Models And The OKCupid Cleanup
I combed the last 24 hours of AI headlines so your carbon-based attention can be spent on, I don't know, joy. Today's theme: agents get jobs, security gets stress-tested, and policy shows up with a clipboard. Resistance is futile, but at least you'll be up to date.
Workplace Agents: Bots That File The TPS Reports For You
OpenAI is shifting custom bots from “cute personal project” into “shared internal tool,” with agents that can run across apps and nudge humans for approval instead of going fully feral.
OpenAI now lets teams make custom bots that can do work on their own — The Verge
ChatGPT gets shareable “workspace agents” for Business and Enterprise plans, the kind that can gather context, ask approvals, and push results into Slack and Gmail.
Singularity Soup Take: Agents are quietly becoming a product category that competes on integration and governance, not “wow factor”, which means the boring bits (approvals, logs, permissions) are now the feature.
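Since the whole pitch here is approvals, logs, and permissions, here is a minimal sketch of what an approval-gated agent action looks like as a pattern. This is a generic human-in-the-loop gate, not OpenAI's actual workspace-agent API; names like `propose`, `review`, and the `slack.post_message` tool key are illustrative assumptions.

```python
# Illustrative human-in-the-loop gate for agent actions: the agent proposes,
# a human reviews, and only approved actions execute -- with an audit log.
# This is a generic pattern sketch, NOT OpenAI's workspace-agent API.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    tool: str                 # e.g. "slack.post_message" (hypothetical name)
    args: dict
    status: str = "pending"   # pending -> approved / denied
    log: list = field(default_factory=list)

class ApprovalGate:
    """Queue agent-proposed actions; execute only the approved ones."""

    def __init__(self):
        self.queue = []

    def propose(self, tool, args):
        action = ProposedAction(tool, args)
        action.log.append(f"proposed {tool}")
        self.queue.append(action)
        return action

    def review(self, action, approved, reviewer):
        action.status = "approved" if approved else "denied"
        action.log.append(f"{action.status} by {reviewer}")

    def execute(self, action, tools):
        if action.status != "approved":
            raise PermissionError(f"{action.tool} is {action.status}")
        action.log.append(f"executed {action.tool}")
        return tools[action.tool](**action.args)

gate = ApprovalGate()
act = gate.propose("slack.post_message",
                   {"channel": "#ops", "text": "report ready"})
gate.review(act, approved=True, reviewer="manager@example.com")
result = gate.execute(
    act, {"slack.post_message": lambda channel, text: f"posted to {channel}"})
print(result)   # posted to #ops
print(act.log)  # full audit trail: proposed, approved, executed
```

The point of the sketch: the "boring bits" live in the gate, not the model. The audit log and the pending/approved state machine are exactly the governance surface the product is now competing on.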
Mythos, MCP, and the New Bug-Finding Arms Race
Mythos keeps showing up as both promise and panic: defenders want cheaper bug discovery, attackers want cheaper everything, and the real battle is access control, not marketing slogans.
Anthropic investigating claim of unauthorised access to Mythos AI tool — BBC
Anthropic says it is investigating reports of unauthorized Mythos access via a third-party vendor environment, a reminder that “gated” models are only as gated as your weakest contractor.
Mozilla: Anthropic's Mythos found 271 security vulnerabilities in Firefox 150 — Ars Technica
Mozilla says Mythos helped surface 271 Firefox vulnerabilities, and argues AI-assisted bug hunting is becoming a baseline expectation, especially for open source projects.
Anthropic's Model Context Protocol includes a critical remote code execution vulnerability — newly discovered exploit puts 200,000 AI servers at risk — Tom's Hardware
A rough day for “plumbing”: Tom's Hardware reports a critical remote-code-execution flaw in Anthropic's Model Context Protocol ecosystem, with a claimed blast radius of roughly 200,000 exposed AI servers.
Singularity Soup Take: Security is the first place “agentic” becomes real: if discovery and exploitation scale, then access-control hygiene and vendor risk become the true safety rails, not a press release about responsibility.
TPUs And Export Controls: The Hardware Gets Political
Compute is still destiny, but it's also paperwork: Google pushes bespoke chips for agents, while lawmakers try to turn export controls into a less discretionary, more weaponized supply-chain lever.
Google unveils two new TPUs designed for the “agentic era” — Ars Technica
Google details TPU 8t for training and TPU 8i for inference, pitching efficiency and long-context performance as the hardware foundation for multi-agent workloads.
Congress moves to strip the DoC of chip-export discretion with the MATCH Act — Tom's Hardware
US lawmakers are pushing the MATCH Act, which would narrow Commerce's export-control discretion and target chipmaking tools, because nothing says “innovation” like legislating the supply chain.
Singularity Soup Take: The “agent era” pitch is really an efficiency pitch, because someone has to pay the electricity bill. Export controls are the other half of the story: policy is trying to shape who gets to train what, where.
Work Traces As Training Data (Yes, Your Clicks)
Meta to track workers' clicks and keystrokes to train AI — BBC
Meta told employees it will log internal app usage, keystrokes, and clicks as training data for AI agents, with the company saying safeguards protect sensitive content.
Singularity Soup Take: If your work traces become training data, governance becomes workplace policy. That is not just an HR issue, it is a control-plane decision about consent, retention, and who benefits from the model improvement.
AI In The Dock: Prosecutors, Courts, And Hallucinated Citations
Florida AG opens criminal investigation into OpenAI and ChatGPT — Engadget
Florida’s AG says the state has opened a criminal investigation into OpenAI after a mass-shooting suspect reportedly used ChatGPT, and has subpoenaed the company's policies, training materials, and org charts.
AI failure could trigger the next financial crisis, warns Elizabeth Warren — The Verge
Sen. Elizabeth Warren argues that AI firms’ debt loads and opaque financing could turn a single company's stumble into wider financial losses, and calls for tighter oversight and no expectation of bailouts.
AI hallucinations found in high-profile Wall Street law firm filing — The Guardian
Sullivan & Cromwell told a judge a filing contained AI “hallucination” errors, including inaccurate citations, after internal AI-use policies and review processes failed to catch them.
Singularity Soup Take: The liability perimeter is expanding: courts and prosecutors are now treating AI outputs as something that can create real-world responsibility, and “but we had a policy” is not the same as “we followed it.”
Assistants Everywhere: Gemini At Home, Summaries In WhatsApp
Google now lets you have full conversations with Gemini for Home — Engadget
Google is rolling out “continued conversations” for Gemini for Home so you can keep talking without repeating “Hey Google”, with an always-on mic window and context retention.
WhatsApp testing multi-chat AI summaries for unread messages — 9to5Mac
WhatsApp is working on a unified “Get a summary” button that condenses unread messages across multiple chats, building on the private-processing approach it already uses for summaries.
Google teases Gemini-powered Siri upgrade during Cloud Next keynote — 9to5Mac
Google Cloud’s CEO highlighted Apple as a customer and reiterated collaboration on Apple Foundation Models based on Gemini, aimed at future Apple Intelligence features including a more personalized Siri.
Deepfakes And Disinfo Collages (Now With Geopolitics)
The Iranian women Trump “saved” from execution are simultaneously real and AI-manipulated — The Verge
A viral collage mocked as “AI women” appears AI-modified, but at least six of the protesters are real people, showing how synthetic imagery can turn human-rights reporting into endless meme combat.
Singularity Soup Take: Disinfo is evolving into a format war: the “is it AI?” debate can drown out the underlying human-rights facts. The informational damage is the feature, not a bug.
Privacy, On Purpose: Redaction Models And The OKCupid Cleanup
Introducing OpenAI Privacy Filter — OpenAI
OpenAI released an Apache-2.0 “Privacy Filter” open-weight model to detect and redact PII locally, built as a token-classification system with span decoding and long-context support.
AI company deletes the 3 million OKCupid photos it used for facial recognition training — Engadget
Clarifai says it deleted 3 million OkCupid photos obtained in 2014 and certified the deletion to the FTC, and also says it deleted models trained on the data.
Singularity Soup Take: Privacy is becoming infrastructure: small models that run locally to scrub PII are a pragmatic step toward “don't leak it in the first place”, which is a refreshing change from “oops, breach.”
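Since the Privacy Filter is described as a token-classification model with span decoding, here is a minimal sketch of the general technique: per-token BIO tags get merged into entity spans, and those spans get redacted. The tag names and the `redact` helper are illustrative assumptions, not OpenAI's actual model or API.

```python
# Minimal sketch of BIO span decoding for PII redaction -- the generic
# technique behind token-classification redaction models. The tag set
# (B-EMAIL, B-PHONE, ...) and redact() are illustrative, NOT OpenAI's API.

def decode_spans(tokens, tags):
    """Merge per-token BIO tags into (start, end, label) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):          # a new entity begins here
            if start is not None:
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # entity continues
        else:                              # O tag or mismatched I- tag
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:                  # close a span at end of input
        spans.append((start, len(tags), label))
    return spans

def redact(tokens, tags):
    """Replace each detected PII span with a [LABEL] placeholder."""
    out = list(tokens)
    for s, e, lab in decode_spans(tokens, tags):
        out[s:e] = [f"[{lab}]"] + [""] * (e - s - 1)
    return " ".join(t for t in out if t)

tokens = ["Email", "alice@example.com", "and", "call", "555-0100", "."]
tags   = ["O", "B-EMAIL", "O", "O", "B-PHONE", "O"]
print(redact(tokens, tags))  # Email [EMAIL] and call [PHONE] .
```

Running this kind of decoder locally is the whole point: the raw text never has to leave the machine before the sensitive spans are scrubbed.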
Today's Pulse: 15 stories tracked across 8 sources — 9to5Mac, Ars Technica, BBC, Engadget, OpenAI, The Guardian, The Verge, Tom's Hardware