A dispute between the Trump administration and artificial intelligence company Anthropic has intensified debate about how governments should use emerging AI technologies, with critics arguing that the capabilities of current systems are far more limited than their proponents suggest.
The conflict escalated after the administration moved to prohibit Anthropic’s AI systems from being used across federal agencies, citing disagreements over the company’s restrictions on military applications. The company, led by CEO Dario Amodei, has advocated stronger oversight of artificial intelligence and warned about the potential risks of using advanced AI in areas such as autonomous weapons.
Officials within the administration responded sharply, accusing Anthropic of attempting to influence government policy while limiting how its technology could be used. The dispute reflects a broader divide within the technology sector between companies that support stronger regulation and those that argue excessive oversight could slow innovation.
Missy Cummings, director of the Mason Autonomy and Robotics Center at George Mason University, said much of the debate surrounding AI governance is being driven by inflated expectations about what the technology can currently do.
“These systems are basically linear algebra on steroids,” Cummings said in an interview on Chicago’s Morning Answer. “They don’t think, they don’t understand, they don’t reason, and they make mistakes frequently.”
Cummings said Amodei is correct to warn against using current generative AI systems in lethal autonomous weapons but argued that the broader narrative surrounding artificial intelligence has often exaggerated its capabilities.
She pointed to public statements from technology leaders predicting mass unemployment or near-term artificial general intelligence as examples of what she described as “fear-driven hype.” According to Cummings, such claims have contributed to unrealistic expectations among policymakers and the public.
The practical limitations of AI, she said, become particularly concerning when discussions turn to military or safety-critical uses.
“If you’re talking about writing a report internally, the mistakes may not matter as much,” she said. “But if you’re talking about controlling weapons systems, the reliability simply isn’t there.”
The disagreement has also played out in financial markets and the technology sector. Some investors have speculated that advances in so-called “agentic AI” could disrupt large software companies by automating many business functions. Cummings dismissed those predictions, saying the technology remains far from capable of automating complex workplace tasks.
She argued that businesses that rely too heavily on AI automation may ultimately face new costs from correcting the errors those systems introduce.
“Humans will still have to supervise everything these systems produce,” Cummings said, suggesting that AI could create demand for new types of oversight roles rather than eliminate jobs entirely.
Beyond generative text systems, some technology leaders have predicted rapid advances in robotics. Tesla CEO Elon Musk, for example, has said humanoid robots could soon perform a wide range of tasks and eventually be produced at large scale.
Cummings said such forecasts are unrealistic in the near term. She noted that current industrial robots operate primarily in controlled environments such as factory assembly lines and remain far from capable of navigating complex, unpredictable settings like homes or workplaces.
“We are nowhere near humanoid robots working in unstructured environments,” she said.
The debate over AI governance also reflects broader concerns about how governments should regulate emerging technologies. Economist Tyler Cowen has argued that policymakers should pursue a middle ground that gives government some oversight without allowing it to dominate development.
Cummings said the biggest challenge may be the lack of technical expertise within government institutions. In her view, policymakers often rely on information provided by technology companies themselves rather than independent technical analysis.
She recently published research identifying multiple “failure modes” in large language models, highlighting how frequently the systems generate incorrect or misleading outputs.
According to Cummings, the current generation of AI systems requires constant human oversight, much as many so-called self-driving cars still depend heavily on human monitoring.
Another growing concern is the enormous energy demand associated with AI development. Large data centers powering AI models require significant electricity and water resources, particularly in regions such as Northern Virginia where many facilities are concentrated.
Cummings said the rapid expansion of data centers has already driven up electricity costs in some areas and warned that building ever larger facilities may not be the most efficient long-term solution.
Instead, she argued that companies should invest more heavily in optimizing algorithms and improving efficiency rather than simply expanding infrastructure.
While AI continues to generate enormous investment and public attention, Cummings believes the technology remains in an early stage of development.
“There is utility in these systems,” she said, pointing to applications such as image generation and data analysis. “But they are still highly error-prone and require significant human supervision.”
As governments and technology companies continue to debate regulation and national security uses, the gap between the promise of artificial intelligence and its current capabilities may remain a central issue in shaping policy decisions in the years ahead.