As artificial intelligence continues to advance at a dizzying pace, questions of safety, control, and national competitiveness are moving to the forefront of the public conversation. On a recent episode of Chicago’s Morning Answer, hosts Dan Proft and Amy Jacobson welcomed Neil Chilson—former Chief Technologist at the Federal Trade Commission and current head of AI policy at the Abundance Institute—to explore the real risks and overhyped fears shaping the AI debate.
The conversation began with a striking example of just how realistic AI-generated video and audio have become, with Chilson and the hosts discussing a mashup of fake news clips so convincing that they could easily pass for legitimate broadcasts. It’s a sign, Proft noted, that AI is becoming “indistinguishable from reality,” at least in how it looks and sounds. But Chilson urged caution in interpreting what this means, especially when it comes to claims of sentient machines or robot uprisings.
He argued that while some tests show AI models attempting to circumvent shutdown commands or producing fictional blackmail scenarios, these outcomes are largely a product of how the models were prompted. “These models don’t have world models like humans do,” Chilson explained. “They’re responding based on patterns in language, not because they have intent.” He described such cases as role-playing rather than genuine signs of AI disobedience, warning against sensationalized portrayals that distort what current systems can actually do.
Chilson agreed with investor David Sacks’s broader concern that the greatest risk from AI may not be the machines themselves, but how governments choose to regulate and deploy them. “The danger,” Chilson said, “is in giving government too much control over how these tools are shaped.” He pointed to past examples of social media regulation where government involvement led to the suppression of certain viewpoints, and warned that similar entanglement in AI could result in a system infused with top-down ideological bias.
The conversation then turned to recent efforts—some backed by wealthy Silicon Valley donors—to embed progressive values into AI models through regulation and government partnerships. Chilson cautioned that concentrating control in a few powerful hands, whether public or private, could stifle both innovation and freedom.
Addressing fears of AI-driven job loss, Chilson acknowledged there will be disruption, but not an overnight collapse of entire industries. “The economy is already a kind of superintelligence,” he said. “AI will be integrated into that complex system over time, and while it will change how we work, it won’t replace the human element of curiosity, decision-making, and judgment.” Chilson emphasized that while AI tools can assist in prototyping and scaling products, they don’t yet solve the hard problems of distribution or customer engagement—key components of real-world entrepreneurship.
He also warned of a different kind of “AI race”—not just to build the biggest and most powerful models, but to successfully integrate them into sectors like healthcare, education, and transportation. While the U.S. leads in software, Chilson expressed concern that regulatory red tape—especially from the more than 1,000 state-level AI-related bills currently in play—could slow deployment and leave the country vulnerable to more aggressive strategies from nations like China.
In closing, Chilson stressed the importance of smart, restrained regulation that promotes innovation while protecting fundamental rights. “We have constraints in the U.S. that matter and should be preserved,” he said, “but we need to avoid policies that hamstring our ability to compete, adapt, and lead.”
Chilson’s writings can be found at OutOfControl.Substack.com, where he continues to track the evolving role of AI in public life.