The First Real AI Regulation Won’t Be Model Rules — It’ll Be ‘Duty to Warn’

Connecticut’s sprawling AI bills and B.C.’s push to mandate reporting of certain chat activity point to the same destination: governments are turning AI from a product into a monitored service, with liability attached.

Everyone says they want “AI regulation,” but most frameworks still read like product safety checklists: disclose the bot, avoid discrimination, document your training data, publish a risk plan. The more immediate regulatory frontier is operational: when an AI system sees something that looks like imminent harm, what are companies legally required to do — and what evidence are they required to preserve?

What Happened

In Connecticut, lawmakers are considering a package of bills aimed at building an AI governance framework and online safety rules. WSHU reports that Senate Bill 5 (a 97-page omnibus bill) ranges across transparency for consumer data, subscriptions, and chatbots; automated decision-making; workforce training; and definitions of “catastrophic risks.” It also includes disclosure and appeal rights for job applicants when employers use AI in employment decisions, and detailed requirements for “AI companion chatbots,” including guardrails and protocols around suicidal ideation and self-harm language — with extra protections for minors.

A second Connecticut proposal, Senate Bill 86, focuses more explicitly on economic development tools: an AI regulatory sandbox, expanded “AI-ready” datasets through the state’s open-data portal, and a state-level data governance role (a Chief Data Officer) to coordinate how agencies manage and publish data.

Meanwhile in British Columbia, Premier David Eby has called for a federal minimum standard requiring AI chat services to report certain types of activity to police, after a mass shooting in Tumbler Ridge. Kelowna Capital News reports that Eby said OpenAI’s Sam Altman agreed to apologize to families and to work with the province on designing regulations that could be implemented immediately — specifically, mandatory reporting when companies have information suggesting harm will be caused.

Why It Matters

These bills aren’t just “AI policy.” They’re the opening move in a new liability regime for AI services. Once lawmakers start treating chat systems and automated decision tools as something closer to a regulated service — rather than a static product — the compliance problem changes. The hard question becomes: what does an AI company know, when does it know it, and what is it obligated to do with that knowledge?

A “duty to warn” approach is seductive because it feels concrete. It’s also a minefield. First, you need a definition of “reportable” content that is narrow enough to avoid turning companies into constant tip lines, but broad enough to catch real threats. Second, you need a standard for “information that harm is going to be caused” that is legally meaningful in a world where users perform, role-play, and test boundaries. Third, you need a logging and evidence standard — because without retention rules, compliance becomes “trust us, we saw something.”
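
None of that exists in statute yet, but the third piece, evidence, is the easiest to make concrete. Here is a minimal sketch of what a retention-grade flag record could look like; every field name, category label, and design choice below is our own assumption for illustration, not language from SB 5 or the B.C. proposal.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

# Hypothetical flag record. Every field and label here is an assumed
# design, not a requirement from any bill discussed in this piece.
@dataclass(frozen=True)
class FlagRecord:
    conversation_id: str          # opaque ID, not the transcript itself
    category: str                 # e.g. "self_harm", "threat_to_others"
    classifier_score: float      # model confidence at the moment of the flag
    excerpt_hash: str             # hash of the flagged span, not its text
    flagged_at: str               # UTC timestamp, for later audit
    human_reviewed: bool = False  # was a person in the loop?

def make_flag(conversation_id: str, category: str,
              score: float, excerpt: str) -> FlagRecord:
    """Create an auditable record without retaining the raw excerpt."""
    return FlagRecord(
        conversation_id=conversation_id,
        category=category,
        classifier_score=score,
        excerpt_hash=hashlib.sha256(excerpt.encode()).hexdigest(),
        flagged_at=datetime.now(timezone.utc).isoformat(),
    )
```

Notice the tension baked into even this toy version: hashing the excerpt gives you an integrity check without retaining the raw text, but an actual evidence standard would have to decide whether a hash is enough when a prosecutor or a defense lawyer comes asking.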

For vendors, this pushes systems toward surveillance-by-design, even when everyone insists that’s not the goal. To detect and report, you have to monitor. To monitor responsibly, you have to log. To log at scale, you need policies around data minimization, access controls, and audit trails. In other words: even if the aim is harm prevention, the implementation is an infrastructure for observation.
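
To make that chain concrete, consider what “log responsibly” implies at the smallest scale: even a toy audit trail ends up logging the loggers. The sketch below is ours, not any vendor’s pipeline; a real system would add signing, rotation, and enforced access controls.

```python
import json
import time
from pathlib import Path

class AuditLog:
    """Append-only audit trail where reads are themselves recorded.
    A toy sketch under assumed requirements, not a production design."""

    def __init__(self, path: str) -> None:
        self.path = Path(path)

    def _append(self, event: dict) -> None:
        event["ts"] = time.time()
        with self.path.open("a") as f:
            f.write(json.dumps(event) + "\n")  # one JSON record per line

    def record_flag(self, actor: str, flag_id: str) -> None:
        self._append({"type": "flag_written", "actor": actor,
                      "flag": flag_id})

    def record_access(self, actor: str, flag_id: str, reason: str) -> None:
        # Access leaves a trace too: the trail audits its own readers.
        self._append({"type": "flag_read", "actor": actor,
                      "flag": flag_id, "reason": reason})
```

The design choice worth noticing is the second method: once regulators can demand logs, “who looked at this flag and why” becomes as load-bearing as the flag itself.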

For users, the trust equation changes. People don’t just need to know “this is a bot.” They need to know what happens to their conversations: how long they’re stored, when they can be reviewed by humans, and what triggers a handoff to law enforcement. The compliance posture becomes a product feature — whether companies like it or not.
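
One way to see how compliance becomes a feature: imagine the monitoring policy as a machine-readable object that users and auditors could inspect. Everything below (the field names, the categories, the 30-day default) is invented for illustration; no current bill specifies any of it.

```python
from dataclasses import dataclass

# Hypothetical disclosure surface. The fields are our invention, meant
# to show what "compliance posture as product feature" could look like.
@dataclass(frozen=True)
class MonitoringDisclosure:
    retention_days: int                        # how long transcripts are stored
    human_review_triggers: tuple[str, ...]     # what escalates to a person
    law_enforcement_triggers: tuple[str, ...]  # what can be handed to police
    user_notice_on_review: bool                # is the user told a human looked?

DEFAULT_POLICY = MonitoringDisclosure(
    retention_days=30,
    human_review_triggers=("self_harm", "threat_to_others"),
    law_enforcement_triggers=("imminent_threat",),
    user_notice_on_review=False,  # the contested default in practice
)
```

Whether any regulator would require something like this is an open question; the point is that each field is a decision someone will have to make, publish, and defend.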

Wider Context

There’s a pattern forming across jurisdictions: governments are starting where they have leverage and where political pressure is loudest — employment, minors, consumer protection, and public safety. That’s not because these are the only important issues. It’s because they map onto existing legal machinery: anti-discrimination law, duty-of-care doctrines, and the long history of regulating “unsafe” communication environments.

Connecticut’s proposals point to a practical reality: most “AI harms” don’t come from spectacular AGI scenarios; they come from mundane systems embedded in hiring workflows, customer service, education, and mental-health-adjacent products. And while lawmakers talk about “catastrophic risk,” the enforceable parts of bills tend to be about disclosures, appeals, and procedures — the things regulators can audit.

The British Columbia push adds a new vector: if governments can establish reporting obligations for AI chat providers, they effectively turn private model operators into semi-regulated intermediaries, similar to how finance and telecoms have been treated. That creates incentives for companies to standardize their monitoring pipelines and to lobby for safe harbors — protections from liability if they report, and protections if they don’t report “because the system didn’t flag it.”

The Singularity Soup Take

“Duty to warn” will be the first truly consequential AI regulation because it forces companies to pick a side: either you operate like a privacy-preserving tool, or you operate like a monitored service with reporting obligations. Trying to be both will fail. The policy goal — preventing harm — is legitimate. But if lawmakers don’t pair it with strict limits, transparency, and due-process guardrails, they’ll build a reporting regime that quietly normalizes always-on monitoring while still missing most real threats. The right framing isn’t “should AI report?” It’s “what monitoring is acceptable, who audits it, and what rights do users retain when the system misfires?”

What to Watch

Watch whether Connecticut’s bills converge into enforceable, auditable requirements or sprawl into aspirational language that’s hard to implement. Watch whether “AI companion chatbot” rules become a template other states copy — especially the self-harm protocols and minor-specific restrictions. And watch the next step in the B.C. reporting conversation: if a reporting mandate advances, the fight will move immediately to the definition of reportable content, retention periods, and who has access to the logs. That’s where the real civil-liberties stakes live.