The Dangerous Federal Power Grab: Undermining State AI Protections in the Name of Innovation
The Executive Order and Its Provisions
On December 11, 2025, President Donald Trump signed an executive order that represents one of the most significant federal interventions into technology policy in recent history. The order aims to establish a “minimally burdensome” national framework for artificial intelligence by effectively superseding state-level AI regulations that the administration views as obstacles to innovation. This sweeping directive calls upon the U.S. attorney general to create an AI litigation task force specifically tasked with challenging state AI laws deemed inconsistent with federal policy. Perhaps most alarmingly, it orders the secretary of commerce to identify “onerous” state AI laws and withhold broadband funding from states that maintain these protections.
The executive order emerges against a backdrop of rapidly evolving state-level AI legislation. Thirty-eight states enacted AI regulations in 2025 alone, responding to the explosive growth of generative AI systems like ChatGPT and growing public concerns about algorithmic discrimination, privacy violations, and potential catastrophic risks. These state laws represent diverse approaches to balancing technological innovation with public safety concerns, ranging from prohibitions on AI-powered stalking to regulations preventing behavioral manipulation through AI systems.
State-Level Protections Under Threat
The executive order specifically targets several groundbreaking state laws that have emerged as models for responsible AI governance. Colorado’s Consumer Protections for Artificial Intelligence represents the nation’s first comprehensive state law regulating AI systems used in employment, housing, credit, education, and healthcare decisions. This legislation focuses on protecting citizens from algorithmic discrimination by requiring organizations using “high-risk systems” to conduct impact assessments, notify consumers when AI is being used in consequential decisions, and publicly disclose their risk management strategies.
California’s Transparency in Frontier Artificial Intelligence Act takes a different approach by establishing guardrails specifically for the most powerful AI models—those costing at least $100 million to develop and requiring extraordinary computing power. This law addresses the unique risks posed by frontier models, including potential malicious use, malfunctions, and systemic risks that could theoretically cause catastrophic harm to society. It requires developers to incorporate national and international standards and to provide catastrophic risk assessments, and it establishes reporting mechanisms for safety incidents.
Texas and Utah have pursued alternative regulatory approaches. The Texas Responsible AI Governance Act restricts AI systems used for behavioral manipulation while creating liability protections for businesses that document compliance with responsible AI frameworks. Notably, it establishes a “sandbox” environment for safe AI testing. Utah’s Artificial Intelligence Policy Act focuses on disclosure requirements, ensuring companies using generative AI tools bear ultimate responsibility for consumer harms and cannot shift blame to the AI systems themselves.
The Constitutional and Democratic Implications
This executive order represents a profound threat to both constitutional principles and democratic governance. The Tenth Amendment explicitly reserves powers not delegated to the federal government to the states, and throughout the nation’s history, consumer protection and public safety have been primarily state responsibilities. The administration’s attempt to centralize AI regulation through executive fiat—bypassing Congressional authority—undermines fundamental principles of federalism and checks and balances.
The order’s mechanism for enforcement is particularly concerning. By threatening to withhold Broadband Equity, Access, and Deployment (BEAD) Program funding from states that maintain their AI protections, the administration is effectively holding essential infrastructure hostage to force compliance with its policy preferences. This coercive approach subverts the democratic process and punishes states for exercising their constitutional authority to protect their citizens.
Furthermore, the creation of an AI litigation task force specifically designed to challenge state laws represents an alarming weaponization of the justice system against democratically enacted legislation. This approach prioritizes corporate interests over citizen protections and establishes a dangerous precedent where federal power can be used to systematically dismantle state-level safeguards.
The Innovation vs. Protection False Dichotomy
The administration and big tech companies have framed this debate as a choice between innovation and regulation, creating a false dichotomy that serves corporate interests rather than public good. The reality is that thoughtful, well-designed regulations often foster innovation by creating predictable environments, building public trust, and establishing clear guidelines for responsible development.
State AI laws like those in Colorado, California, and Texas were not created in opposition to innovation but rather to ensure that innovation proceeds responsibly. These regulations address genuine concerns about algorithmic discrimination, privacy violations, and potential catastrophic risks—concerns that tech companies have largely failed to address through self-regulation. By attempting to override these protections, the administration is effectively telling citizens that corporate profits matter more than their rights and safety.
The notion that multiple state regulations create an undue burden on companies ignores the reality of modern business operations. Large technology companies already navigate complex regulatory environments across multiple jurisdictions and countries. The argument that state AI laws are uniquely burdensome seems disingenuous when these same companies successfully comply with varying state requirements on privacy, consumer protection, and countless other business practices.
The Human Cost of Deregulation
Behind the abstract policy debate lies a very real human cost. Algorithmic discrimination in housing, employment, credit, and healthcare decisions can devastate lives and perpetuate historical inequalities. Without state-level protections like those in Colorado and Illinois, individuals subjected to discriminatory AI systems have limited recourse. The administration’s push for a “minimally burdensome” framework effectively means minimally protective—placing corporate convenience above fundamental civil rights.
The risks associated with frontier AI models are not theoretical exercises but potential existential threats. California’s requirement for catastrophic risk assessments and safety incident reporting represents a prudent approach to technology that could potentially be weaponized or malfunction with devastating consequences. Dismantling these safeguards in the name of innovation is reckless and demonstrates a shocking disregard for public safety.
The Path Forward: Respecting Democracy and Rights
True innovation cannot flourish in an environment that disregards democratic principles and fundamental rights. Rather than overriding state protections, the federal government should work collaboratively with states to develop complementary frameworks that respect both innovation and safety. Congress, not the executive branch, should take the lead in establishing federal AI standards through democratic deliberation and the legislative process.
States have historically served as laboratories of democracy, experimenting with different approaches to complex policy challenges. The diversity of state AI regulations represents this democratic process in action, with different jurisdictions testing various approaches to balancing technological advancement with public protection. This experimentation should be celebrated and studied, not suppressed through federal overreach.
As citizens committed to democracy, freedom, and liberty, we must vigorously oppose this executive overreach and support states’ rights to protect their citizens. We must demand that our federal representatives respect constitutional principles and democratic processes rather than allowing corporate interests to dictate policy through executive fiat. The future of both our democracy and responsible AI development depends on maintaining this crucial balance between innovation and protection, between federal authority and state sovereignty, between corporate interests and human rights.