What happened: A regulation roundup on Ctrl+AI+Reg (Substack) flags a draft from China’s Cyberspace Administration proposing administrative measures for “digital virtual human” information services, with public comments open until early May.
Why it matters: This is China trying to domesticate the inevitable: synthetic personas that talk, sell, and impersonate. The draft emphasises provider and user responsibilities, consent requirements around sensitive personal data, and penalties up to and including fines and service shutdowns.
Wider context: As AI-generated people move from “novelty filter” to “default interface,” regulators are racing to define accountability: who gets blamed when your virtual spokesperson goes rogue, scams someone, or just spews forbidden content with perfect confidence.
Background: The summary notes the measures are framed as aligned with existing cybersecurity and data protection laws, with central oversight and local enforcement, plus hints that stricter sector-specific rules may apply in areas like healthcare and finance.
Ctrl+AI+Reg, 5 April 2026 (Substack)
Singularity Soup Take: “Digital virtual humans” is a wonderfully polite way of saying “synthetic people at scale.” The real question isn’t whether they will exist - it’s whether the rules can keep up once every brand hires an immortal, tireless, liability-generating avatar.
Key Takeaways:
- Draft Measures, Real Teeth: The roundup describes a CAC draft that sets responsibilities for providers and users, and includes enforcement tools like fines and potential service cessation for violations.
- Consent Becomes Core: A notable emphasis is on protecting personal rights and requiring consent when using sensitive personal information - a direct strike at the “just scrape it” lifestyle of synthetic identity building.
- Sector Rules Likely: The draft is positioned as a baseline, with additional regulations potentially applying in sensitive sectors such as healthcare and finance, where “virtual humans” can quickly become “virtual fraud.”