Trump AI Order Sparks Debate Over Innovation, Federal Authority, and Parental Rights

President Trump’s move to centralize artificial intelligence regulation at the federal level took center stage this week on Chicago’s Morning Answer, as Dan Proft spoke with Jessica Melugin, director of the Center for Technology and Innovation at the Competitive Enterprise Institute, about the implications of the administration’s new executive order.

The order directs the Justice Department to establish an AI litigation task force aimed at challenging state-level laws that the administration views as unconstitutional, preempted, or harmful to innovation. The goal, according to the White House, is to prevent a fragmented patchwork of regulations across the country that could slow investment and development in a technology sector increasingly viewed as central to U.S. economic competitiveness, particularly against China.

Melugin said the administration’s underlying logic is sound, noting that AI development inherently crosses state and national borders. A system in which companies must comply with dozens of conflicting state rules, she argued, risks defaulting innovation to the most restrictive regulatory regimes, effectively allowing states like California or New York to set national standards. In contrast, a single federal framework would give entrepreneurs and investors clearer expectations while helping the United States remain competitive globally.

At the same time, Melugin cautioned that executive action alone may not be sufficient. She said durable preemption of state AI laws would likely require congressional action to provide legal clarity and long-term stability. She also emphasized that the executive order does not eliminate traditional state authority over issues such as consumer protection, fraud, privacy, or criminal misuse of AI, focusing instead on the development of foundation models and interstate commerce.

The conversation also turned to concerns that AI policy debates could become entangled with broader efforts to regulate online speech and children’s access to technology. Melugin warned that proposals framed as child safety measures could unintentionally undermine First Amendment protections and parental decision-making if handled hastily. She pointed to international examples, including Australia’s ban on social media access for minors, as policies that would likely face constitutional challenges in the United States.

Melugin argued that decisions about children’s technology use should remain primarily with families, supported by education and improved parental control tools rather than top-down government mandates. While acknowledging the challenges parents face navigating social media and AI-driven platforms, she said households are still best positioned to decide what is appropriate for their children at different ages.

Looking more broadly at the state of artificial intelligence, Melugin said the most transformative impact may come not from the companies building large AI models, but from developers creating targeted tools that build on those models to address real-world problems. She described AI as a general-purpose technology with the potential to drive advances in medicine, energy, and productivity, while also acknowledging the need to address bad actors and unintended consequences as adoption expands.

Despite widespread public anxiety about AI-driven job displacement and social disruption, Melugin expressed confidence that the technology’s long-term effects will be positive. She likened current AI tools to having a highly capable assistant that still requires human oversight, and said history suggests new technologies tend to reshape work rather than eliminate it altogether.

As the federal government moves to assert a stronger role in AI governance, the discussion highlighted a balancing act facing policymakers: fostering innovation and global competitiveness while safeguarding civil liberties, parental authority, and consumer protections.