A White House executive order frames state-level “bias mitigation” requirements as forced deception — and bets the FTC and federal funding can do what Congress won’t.
The fight over AI regulation in the U.S. is no longer just “how strict” — it’s who gets to write the rules. A December executive order is now colliding with March deadlines that pull the FTC, Commerce Department and DOJ into a coordinated attempt to kneecap state AI laws. If that strategy holds, the next year of AI governance may be shaped less by safety frameworks than by preemption doctrine and grant conditions.
What Happened
The White House’s December 11, 2025 executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” is explicit about its target: a growing patchwork of state AI laws. It argues that 50 different regimes raise compliance costs (especially for startups), can spill across state borders into interstate commerce, and — most controversially — can force developers to embed “ideological bias” by requiring systems to avoid “algorithmic discrimination.”
The order sets three concrete pieces of enforcement machinery in motion.
First, it directs the Attorney General to create an “AI Litigation Task Force” whose sole job is to challenge state AI laws that conflict with the order’s federal policy, including on theories like burdens on interstate commerce and federal preemption.
Second, it tasks the Commerce Department with publishing an evaluation of existing state AI laws within 90 days. The evaluation must, at minimum, identify laws that require models to “alter their truthful outputs,” and those that compel disclosures that may violate the First Amendment.
Third, it directs the FTC chair to issue a policy statement within 90 days explaining how the FTC Act’s ban on unfair and deceptive practices applies to AI models — and when state laws that require alterations to “truthful outputs” are preempted because they would effectively mandate deception.
Legal analysts have noted the final text also carves out areas it says it is not trying to preempt, including child-safety protections, certain compute/data-center infrastructure topics, and state procurement — narrowing the scope while still pressuring transparency and bias-focused state regimes.
Why It Matters
The non-obvious move here is the attempt to reframe bias mitigation as deception.
In most public debates, “bias mitigation” is treated as either a technical quality issue or a civil-rights compliance requirement. The executive order’s theory flips that: if a model reflects statistical patterns in its training data, forcing output changes to reduce disparate impact can be cast as forcing the model to be less “truthful.” If those altered outputs are then marketed as truthful, the argument goes, the state is forcing the company into deceptive conduct.
That framing matters because it shifts the battlefield from AI-specific statutes into the FTC’s consumer-protection universe. It can turn debates over audits and discrimination into fights over preemption doctrine — with years of litigation as the price of admission.
Even if the federal government doesn’t win cleanly, the deterrence effect is real. If state lawmakers expect ambitious AI transparency rules to trigger expensive constitutional fights — plus threats to discretionary grant money — fewer of those bills get introduced.
The result, in the near term, is a governance vacuum: fewer enforceable rules with clear tests and timelines, and more ambiguity that both industry and regulators can weaponize.
Wider Context
Since 2023, the U.S. has leaned on agency guidance, voluntary commitments and procurement rules rather than a comprehensive AI statute. States have moved into the gaps: algorithmic discrimination, documentation and audit requirements, training-data disclosure, and sector-specific use constraints.
The executive order effectively tries to reverse that trend using tools the federal government already controls: litigation posture, federal funding conditions, and expansive interpretations of existing statutes.
There’s also a deeper conflict over what “truth” means in machine outputs. A model can reflect correlations in data while still being harmful or misleading in context. Treating raw statistical reflection as “truth” is a philosophical move dressed up as consumer protection.
If the order’s approach sticks, expect companies to shift from “build to the strictest state rule” (the privacy playbook) to “fight the strictest state rule” — while branding the outcome as national consistency.
The Singularity Soup Take
The U.S. is trying to regulate AI by arguing about adjectives. “Truthful outputs” sounds commonsense until you remember that AI systems are not truth engines — they’re pattern engines. Bias mitigation isn’t forcing a model to lie; it’s an attempt to stop the model from laundering historical unfairness into future decisions. If Washington wants a lighter-touch framework, it should pass one. Preemption without a replacement is politics disguised as governance — and it risks giving the public fewer safety guarantees, not more.
What to Watch
Watch three things.
First, whether the Commerce Department’s evaluation names specific state laws and specifies why they’re “onerous” — vagueness will signal the goal is deterrence, not standards.
Second, whether the FTC policy statement treats bias mitigation as inherently deceptive, or narrows the issue to specific misrepresentation claims.
Third, the early litigation targets. The first statutes chosen will reveal whether this is a scalpel aimed at genuinely unworkable rules, or a precedent-setting attempt to chill transparency and audit obligations broadly.
Sources
The White House — “Ensuring a National Policy Framework for Artificial Intelligence”
Paul Hastings — “President Trump Signs Executive Order Challenging State AI Laws”