The question of whether the United States will win the artificial intelligence race against China, and what winning actually requires, is at the center of a new book generating significant attention in national security and technology policy circles.
Wynton Hall, Breitbart News director of social media, distinguished fellow at Peter Schweizer’s Government Accountability Institute, and author of Code Red: The Left, the Right, China, and the Race to Control AI, joined Dan Proft on Chicago’s Morning Answer to lay out the threat landscape he argues most Americans are dangerously unprepared to understand.
Proft opened the conversation by referencing a widely circulated analysis from Sequoia Capital partner and physicist Shaun Maguire arguing that Elon Musk and xAI are positioned to win the AI arms race, and that what looks like chaos surrounding Musk’s various ventures is actually the whiplash of rapid reprioritization after clearing successive bottlenecks. Hall said the more important question embedded in that argument is not which American lab wins but whether the United States as a whole beats China, and he argued the stakes could not be higher on two distinct levels. Economically, roughly a third of the S&P 500’s market capitalization is concentrated in the seven largest technology firms, all of which are substantially AI plays, and the sector is on track to attract five trillion dollars in investment over the next two years. Militarily, Hall said, whoever achieves dominance in artificial intelligence, whether through recursive self-improvement in which code updates and improves itself or through more near-term applications, will hold full-spectrum battlefield dominance across encryption, cybersecurity, missile systems, and infrastructure. China has had an explicit plan since 2017 to achieve that dominance by 2030, he said, and the race is real.
On the chip export control debate, Hall said there is broader bipartisan agreement than the public argument suggests that advanced chips representing the bleeding edge of American capability, Nvidia’s Blackwell and Vera Rubin series, must not flow to China, and that maintaining that chokepoint is essential to preserving an American lead. The controversy over Nvidia selling its H200 accelerators to Chinese customers, which prompted Senator Elizabeth Warren to claim the chips would drive up laptop and smartphone prices while helping China surpass the United States in AI, drew a gentle correction from Hall, who noted that the H200 is a data-center accelerator not used in consumer devices. Warren’s specific factual error aside, he said, the principle of maintaining export controls on the most sensitive chips is sound and widely shared across the political spectrum.
Hall addressed at length what he described as a precedent-setting confrontation between Anthropic and the Trump administration over the terms of a two-hundred-million-dollar government contract. The core dispute centered on contract language: the administration wanted the AI available for all lawful purposes, while Anthropic sought to bake in restrictions on autonomous weapons and mass surveillance, restrictions that Hall and others argued would give a private vendor effective veto power over battlefield command decisions. He said the principle at stake is straightforward: the commander-in-chief must retain final authority over how AI is deployed in military contexts, and no corporate vendor’s terms of service can be allowed to override or interfere with that authority downstream. He distinguished this from dismissing concerns about autonomous weapons or surveillance entirely, calling alignment, safety, and security genuinely essential considerations, but said the definition of those terms matters enormously and cannot be left to AI companies to determine unilaterally.
On the question of political bias baked into AI systems, Hall described the first chapter of Code Red as addressing precisely this issue, and said the book’s launch was accompanied by a study he conducted that drew public responses from sitting United States senators. He asked Google Gemini’s most advanced deep research model to survey current senators and assess whether their public statements had violated the platform’s hate speech policy. The model returned a thirty-four-hundred-word research document concluding that seven Republican senators had violated the policy and zero Democrats had. The sourcing the model used to reach those conclusions included the Southern Poverty Law Center, Human Rights Watch, GLAAD, and Wikipedia. The model also identified JD Vance and Marco Rubio as current senators, apparently unaware that they had left the chamber to become vice president and secretary of state respectively. Senators Rick Scott, Marsha Blackburn, and Tom Cotton publicly responded to the findings. Hall said his objection is not to a private company’s right to build a biased product, a right he argued it has, but to Google receiving billions of dollars in federal procurement contracts while producing a system that returns ideologically skewed research to users who may not recognize it as such.
He connected that concern to a broader infrastructure of what he called scan-and-ban technology, in which self-appointed international organizations like the Global Disinformation Index designate right-of-center publishers and voices as hate speech or misinformation, then signal to ad networks largely controlled by left-leaning Silicon Valley forces to demonetize and effectively silence those outlets. Hall argued that the Murthy v. Missouri case, in which the Supreme Court largely sidestepped the question of whether the Biden administration had used social media platforms as cutouts for government censorship, left the underlying mechanism intact, and that AI amplifies the potential scale of that problem exponentially.
Asked to prioritize the threats, Hall declined to rank them, saying Code Red treats the AI challenge as a multifront war with threat vectors spanning education, employment, human relationships, national security, and even faith and reason. He argued that while conservatives have well-developed instincts on traditional policy battlegrounds like taxes, abortion, and national security, AI remains an ideological jump ball where the right is dangerously undercoached. His closing argument was essentially a warning about timing: with Republicans controlling the White House, Senate, and House simultaneously, the current moment represents the most favorable environment conservatives are likely to have to shape AI policy for the foreseeable future, and when the political pendulum swings back, he said, it will swing back hard.


