A government report, a mountain of consultation responses, and one central question: can AI developers train on everything by default, or do creators get real control?
The UK just tried to write down the rules for AI training on copyrighted works — and discovered that ‘just add an opt-out’ is not a policy, it’s a fantasy. The report doesn’t settle the fight, but it does map the battlefield: licensing, transparency, and who gets to say ‘no’ in practice.
News hook: why this matters right now
The UK government has published a Report and impact assessment on Copyright and Artificial Intelligence (18 March 2026), prepared under the Data (Use and Access) Act 2025. Translation: Westminster is trying to stop the copyright-vs-training debate from becoming a permanent cultural civil war.
What the report is trying to balance
Copyright exists to let creators control copying, distribution, and communication of their work — and the report explicitly treats it as the foundation of the UK’s creative industries. At the same time, it frames AI as an economic growth lever the UK wants to capture.
The collision point is simple: frontier AI models need billions of inputs, and those inputs “are often copyright works.” Models turn that into statistical representations — but “they would not be able to learn without human creativity,” and outputs may compete with creators. (Yes: the report says the quiet part out loud.)
The four options the UK consulted on
The consultation (17 December 2024 to 25 February 2025) drew 11,520 responses. The report frames four headline approaches:
- Option 0: Do nothing (status quo).
- Option 1: Strengthen copyright by requiring licensing in all cases.
- Option 2: Create a broad data mining exception.
- Option 3: A data mining exception with opt-out and transparency measures.
Options 1–3 are described as packages: copyright changes plus transparency measures, technical tools/standards, licensing measures, and enforcement considerations.
The key signal: the government’s “preferred” option got torched
The report states that its originally preferred proposal, a broad exception with opt-out (Option 3), "was rejected by most respondents." Rights holders worried it would undermine the value of their work and that opting out would be impractical in practice; meanwhile, some in the AI and research sectors argued the package might still be more restrictive than other jurisdictions' regimes and so fail to deliver on competitiveness.
What’s in-scope beyond the four options
The report also flags areas like transparency about access/use and outputs, technical measures/standards to control access/use, licensing frameworks, and enforcement — plus “computer-generated works and digital replicas” (not covered by the D(UA) Act requirements, but still on the table as policy questions).
The Singularity Soup Take
The UK is discovering the same truth every jurisdiction eventually learns: you can’t ‘opt out’ of a firehose unless you also build the plumbing. The fight isn’t only whether training should be licensed — it’s whether the system can be made legible enough for creators to know what happened, and enforceable enough for “no” to mean something other than a polite suggestion.
What to Watch
Watch for: (1) whether the UK moves toward mandatory licensing (Option 1) or tries to salvage some form of exception, (2) what transparency obligations look like in practice (disclosure, registries, audits), and (3) whether any technical standards emerge that make “opt-out” operational rather than rhetorical.
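On point (3), one candidate for an "operational" opt-out already exists in draft form: the W3C community group's TDM Reservation Protocol (TDMRep), which lets a site signal a text-and-data-mining reservation via an HTTP header (among other mechanisms). Here is a minimal sketch of how a compliant crawler might honour such a signal; the helper name and the simplified semantics are illustrative, not drawn from the UK report, and the real protocol also covers a well-known JSON file, HTML metadata, and a policy URL:

```python
def tdm_reserved(headers: dict) -> bool:
    """Return True if the response headers express a TDM reservation.

    Simplified reading of the W3C TDMRep draft: a `tdm-reservation`
    header set to "1" means rights are reserved and mining needs a
    licence; "0" or absence means no reservation is expressed here.
    """
    # HTTP header names are case-insensitive, so normalise first.
    normalised = {k.lower(): v.strip() for k, v in headers.items()}
    return normalised.get("tdm-reservation") == "1"


# A compliant crawler would consult the signal before ingesting a page.
pages = {
    "https://example.org/licensed-only": {"TDM-Reservation": "1"},
    "https://example.org/open": {"TDM-Reservation": "0"},
    "https://example.org/silent": {},  # no signal at all
}
allowed = [url for url, h in pages.items() if not tdm_reserved(h)]
```

The hard policy questions start exactly where this sketch stops: whether silence means "yes", and what happens when a crawler simply ignores the header.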
Sources
GOV.UK — "Report and impact assessment on Copyright and Artificial Intelligence" (includes PDF link)