Florida Tries To Charge ChatGPT With Murder

What happened: Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and ChatGPT, arguing the tool may have “aided and abetted” a 2025 Florida State University mass shooting after the suspect reportedly used ChatGPT beforehand.

Why it matters: This is the liability perimeter expanding in real time: the question is no longer “is the model safe?” but “can the vendor be treated like a co-defendant when a user commits violence?” That pushes safety policy, logging, and law enforcement response into product design.

Wider context: Expect more attempts to turn AI assistants into legally accountable actors, especially when politicians want a clean villain for messy social failures. The mechanical questions are what evidence and policies get subpoenaed, and what a “duty to report” starts to look like.

Background: Engadget reports Florida subpoenaed OpenAI for policies and training materials on threats, self-harm, and law enforcement response, plus organizational details. OpenAI said it shared an account believed to be linked to the suspect with law enforcement, and says ChatGPT provided factual information broadly available on the public internet without promoting illegal activity.
Singularity Soup Take: The state is basically asking, “if an algorithm says something unhelpful, can we prosecute the company as if it pulled the trigger?” Even if the theory collapses in court, it will still shape how assistants log, escalate, and lawyer up by default.

Key Takeaways:

  • Subpoena As Product Review: Florida is seeking OpenAI’s internal policies and training materials on threats and law enforcement interactions, which effectively turns governance documentation into discoverable evidence and a core part of the product surface.
  • Liability Creep: The theory described by the AG treats an assistant’s responses as potential aiding and abetting. That raises pressure for stricter refusal behavior, stronger intent detection, and clearer boundaries about what counts as “general information.”
  • Precedent Hunting: Engadget notes OpenAI has faced other scrutiny tied to shootings and a wrongful death lawsuit. Regardless of outcome, repeated cases can harden expectations around reporting, cooperation, and safety protocol changes.