What happened: A weekly tracker from the Transparency Coalition highlights a surge of U.S. state bills moving this week on chatbot protections, disclosure rules, deepfake restrictions, and other AI governance measures, including Oregon's SB 1546, which won final legislative approval.
Why it matters: While federal AI rules remain fragmented, statehouses are rapidly setting de facto standards for kid‑focused chatbot safeguards, synthetic‑media disclosure, and sector rules (like health insurance), creating compliance pressure that vendors may have to treat as “national by default.”
Wider context: The update notes multiple states nearing adjournment dates, which can accelerate last‑minute compromises; it also shows how policy attention is clustering around a few themes: minors’ safety, transparency/provenance, and deepfakes.
AI Legislative Update: March 6, 2026 — Transparency Coalition
Singularity Soup Take: The politics may be local, but the technical reality isn’t — if lawmakers keep legislating “chatbots” and “deepfakes” without precise definitions and testable requirements, we’ll get checkbox compliance instead of real safety outcomes.
Key Takeaways:
- Oregon milestone: Oregon's SB 1546, framed as a major chatbot safety bill, reportedly won final legislative approval and was sent to the governor's desk, signaling momentum for kid-protection requirements on conversational systems.
- Adjournment crunch: Several states (including Utah and Washington) are approaching session deadlines, which can push AI bills through quickly and increase the odds of broad, hard‑to‑interpret language becoming law.
- Policy convergence: Across states, the most active categories are disclosure/provenance for AI‑generated media, deepfake controls, and rules for high‑impact uses like insurance decisions — a roadmap for where enforcement fights may land first.