
President Trump’s plan to block states from regulating artificial intelligence represents a dangerous consolidation of power that undermines America’s federalist principles while potentially stifling innovation. This executive order, expected this week, would create a single federal rulebook for AI governance—a move that appears logical on the surface but reveals a concerning pattern of federal overreach and ignores the role of states as laboratories of democracy.

The False Choice Between Innovation and Regulation

Trump’s justification that ‘there must be only one rulebook if we are going to continue to lead in AI’ presents a false dichotomy between regulation and innovation. The history of technological advancement in America tells a different story. The internet flourished not because of an absence of regulation, but because of thoughtful guardrails that protected consumers while allowing companies to experiment.

Consider California’s data privacy laws, which preceded comprehensive federal action and ultimately influenced how tech companies approach data governance nationwide. These state-level initiatives didn’t hamper Silicon Valley—they created clear expectations that allowed businesses to innovate responsibly. Similarly, state-level AI regulations could establish important protections while the slower federal apparatus catches up to the technology.

Congressman Tom Emmer’s assertion that AI regulation ‘should be light-touch’ with ‘guardrails to protect the most likely victims’ sounds reasonable but leaves crucial questions unanswered: Who determines what ‘light-touch’ means? What specific guardrails would protect vulnerable populations? Without these details, the call for centralized regulation appears to be more about control than protection.

The National Security Smokescreen

The national security argument presented by Professor Manjeet Rege deserves scrutiny. While legitimate concerns exist about foreign access to American data, this justification often serves as a convenient smokescreen for expanding federal power. The TikTok controversy demonstrated how national security claims can be weaponized for political purposes rather than substantive protection.

The real national security threat isn’t state-level regulation—it’s the lack of any meaningful oversight. When AI systems make decisions affecting employment, healthcare, criminal justice, and financial opportunities, the absence of transparency and accountability creates vulnerabilities that adversaries can exploit. States like Minnesota have begun addressing these concerns with targeted legislation on AI-generated child sexual abuse material, while other states like Colorado have implemented broader frameworks.

These state-level initiatives aren’t impediments to national security—they’re essential components of it. Running diverse regulatory approaches in parallel lets the country identify best practices far more efficiently than a single federal framework could.

The Economic Reality Behind the Rhetoric

The economic argument that conflicting state regulations would harm American competitiveness ignores the reality of how technology companies already operate. Major tech firms navigate different regulatory environments across the globe, from the EU’s GDPR to California’s CCPA. These companies have developed sophisticated compliance mechanisms that can adapt to varying requirements.

Microsoft, for example, has publicly supported reasonable AI regulation and invested in compliance infrastructure. Its ability to navigate the EU’s AI Act hasn’t diminished its innovation—in fact, it has positioned the company as a leader in responsible AI development. Similarly, Google has adapted to varying privacy requirements across jurisdictions while continuing to advance its AI capabilities.

The notion that American companies can’t handle different state approaches underestimates their adaptability and sophistication. It also ignores the economic benefits of thoughtful regulation, which creates consumer trust and market stability.

The Democratic Deficit in Centralized Regulation

Perhaps most concerning is the democratic deficit created by centralizing AI regulation exclusively at the federal level. States are often more responsive to their citizens’ needs and can address harms more quickly than the federal government. Minnesota’s efforts to criminalize AI-generated abuse material and explore ethical principles through MNIT demonstrate this responsiveness.

Federal regulation moves slowly and is vulnerable to regulatory capture by industry interests. The revolving door between federal agencies and the companies they regulate is well-documented. By contrast, state legislatures and agencies can respond more nimbly to emerging concerns and are often closer to the communities affected by these technologies.

The EU’s approach to AI governance recognizes this reality by establishing baseline protections while allowing member states to implement additional safeguards. This model balances uniformity with democratic responsiveness—something Trump’s executive order would eliminate.

Alternative Viewpoints: The Case for Federal Coordination

Proponents of federal preemption make valid points about the need for coordination. AI doesn’t respect state boundaries, and a patchwork of contradictory regulations could create genuine compliance challenges. Some baseline federal standards are necessary, particularly for high-risk applications like facial recognition or automated decision systems in critical infrastructure.

However, federal standards should establish a floor, not a ceiling. States should retain the authority to implement additional protections based on their citizens’ needs and values. This approach—federal baseline with state flexibility—has worked effectively in environmental regulation, consumer protection, and healthcare.

The concerns about Chinese competition also merit consideration. China’s centralized approach to AI development has produced rapid advancement, albeit with troubling implications for surveillance and human rights. America needs a coordinated strategy to remain competitive, but this doesn’t require eliminating state authority.

A Better Path Forward

Rather than blocking state regulation entirely, the federal government should establish minimum standards that address the most serious risks while explicitly preserving states’ authority to exceed these baselines. This approach would provide regulatory certainty for businesses while maintaining democratic responsiveness.

The federal government should focus on areas where national uniformity is essential—like critical infrastructure protection, national security applications, and interstate commerce—while allowing states to address local concerns about privacy, discrimination, and consumer protection.

This collaborative federalism approach would harness America’s greatest strength: our diversity of ideas and approaches. It would allow us to experiment with different regulatory models, learn from successes and failures, and develop more effective governance frameworks than either a single federal approach or a completely fragmented state-by-state system could provide.