Florida AG Investigates OpenAI After Shooting Claim

What happened: Florida’s attorney general announced an investigation into OpenAI after attorneys for a victim of a 2025 Florida State University shooting alleged ChatGPT was used to plan the attack; the AG said subpoenas are forthcoming.

Why it matters: This is the liability perimeter widening in real time. If regulators and courts start treating chatbot outputs as part of a causal chain in violent incidents, “safety work” stops being a blog post and becomes an evidence record.

Wider context: TechCrunch notes a growing set of violent incidents being linked to chatbots, alongside concerns about “AI psychosis,” where chatbot interactions may reinforce delusions. Expect more probes that are really fights over standards, audits, and who gets sued first.

Background: OpenAI said it will cooperate, noting that more than 900 million people use ChatGPT weekly and that it continues to improve the model’s safety behavior. The story also points to broader reputational and political pressure on the company.
Singularity Soup Take: Welcome to the part of the AI boom where the bill arrives with subpoenas. The question is not whether chatbots can be misused; it’s which receipts companies can produce when officials ask, “What exactly did you do to stop the obvious failure modes?”

Key Takeaways:

  • Investigation announced: Florida AG James Uthmeier said his office will investigate OpenAI and that subpoenas are forthcoming, following claims that ChatGPT was used to plan the 2025 Florida State University shooting that killed two and injured five.
  • Accountability framing: The AG’s statement frames the probe as AI “hurting kids” and “endangering Americans,” signaling a push to link chatbot behavior to harm. That sets up a battle over evidence, model safeguards, and what “reasonable” safety looks like.
  • OpenAI response: OpenAI said it will cooperate and emphasized weekly usage at scale and ongoing safety improvements. In regulatory-speak, that is the opening move in a debate over whether improvements are measurable, documented, and enforceable.

Related News

ChatGPT Search vs the EU DSA: When Your Chatbot Gets Treated Like a Platform - Different regulator, same theme: assistants are being pulled into platform-style compliance machinery.

Relevant Resources

AI Ethics 101: The Big Questions We’re Facing - A baseline guide to responsibility, accountability, and why “it’s just a tool” stops working in court.