As artificial intelligence continues its rapid expansion into everyday life, experts remain divided over whether the technology is accelerating beyond expectations or still struggling to meet its most ambitious promises.
Recent warnings from insiders at major AI companies have fueled public anxiety. Some researchers at leading firms such as OpenAI and Anthropic have publicly voiced concern about the pace and potential risks of development, with a few even departing their roles in protest. Tech investors and entrepreneurs have compared the current moment to the early days of the COVID-19 pandemic, suggesting society may be underestimating the speed and scale of disruption ahead.
At the same time, others argue the United States may actually be falling behind in key areas. A recent Wall Street Journal opinion piece warned that while American AI firms have excelled at building large language models capable of generating text and images, the country risks ceding ground to China in scientific and mathematical AI systems critical to national competitiveness. With Beijing set to unveil its next five-year plan, observers note that China continues pouring substantial state-backed investment into strategic technologies.
Neil Chilson, former chief technologist at the Federal Trade Commission and current head of AI policy at the Abundance Institute, says the confusion is understandable.
Artificial intelligence, he argues, is not a single product but a broad general-purpose technology with applications across nearly every sector of the economy. That breadth makes it difficult to summarize progress with a single narrative. In some domains, AI appears remarkably advanced. In others, it remains inconsistent or even rudimentary.
Experts describe the cutting edge of AI capability as “jagged,” meaning performance can be impressive in certain narrow tasks while surprisingly weak in adjacent ones. That uneven frontier contributes to mixed public perception.
Chilson rejects comparisons to a looming pandemic-style crisis, arguing that AI is fundamentally different. Whereas COVID-19 was an external shock that forced individuals to retreat and governments to impose restrictions, AI is a tool. Its impact depends largely on how individuals and organizations choose to deploy it.
He points to examples of individuals using AI systems to tackle specialized problems, including cases in which parents of children with rare diseases leveraged AI tools to connect with experts and analyze medical research more efficiently. Such stories, he says, illustrate how the technology can empower users rather than overwhelm them.
Concerns about job displacement remain central to the debate. While some worry that artificial general intelligence, often referred to as AGI, could eventually perform most human tasks, Chilson notes that current systems primarily automate specific tasks rather than entire occupations. Jobs typically involve a complex mix of technical duties and interpersonal skills, many of which remain difficult for machines to replicate.
Instead of wholesale job elimination, he predicts AI will more likely augment workers by making them more productive. The challenge, he says, lies in ensuring workers can adapt and build new skills as tasks evolve.
Access to the technology has also improved dramatically. While premium AI systems can cost hundreds or even thousands of dollars per month for enterprise users, many consumer-facing tools are available free or at low cost. Even today's entry-level versions outperform the paid versions of just a year ago, reflecting the speed of development.
As for what comes next, Chilson resists predicting a single breakthrough moment. Advances are occurring simultaneously in areas ranging from medical research to software coding and robotics. Rather than a single leap, he expects continued incremental expansion across multiple domains.
The policy response remains another point of contention. Some industry leaders have called for robust federal regulation, while others warn that premature or overly restrictive rules could stifle innovation.
Chilson favors a targeted approach, suggesting policymakers focus on concrete harms such as fraud, misuse, and intellectual property violations rather than regulating abstract model development. Existing laws governing deception and copyright, he argues, may already provide a framework for addressing many AI-related disputes, with courts currently working through key cases.
Ultimately, the debate reflects the broader uncertainty surrounding a transformative technology. For some, AI represents a disruptive force capable of reshaping labor markets and global power dynamics. For others, it is simply the next step in a long tradition of technological innovation.
What remains clear is that artificial intelligence is neither fading into irrelevance nor marching uniformly toward dystopia. Its trajectory, like many technological revolutions before it, appears likely to be uneven, contested, and shaped as much by human choices as by machine capability.