NextFin News - The Trump administration is preparing a sweeping purge of Anthropic’s artificial intelligence from the federal government, with an executive order expected as early as this week that would remove the company’s technology from all executive branch agencies. The move follows a high-stakes standoff between the Pentagon and the San Francisco-based startup over the military’s right to use AI for autonomous weaponry and mass surveillance, guardrails that Anthropic refused to lower, prompting President Trump to label the firm a "national security risk."
The conflict reached a breaking point in late February when the Department of Defense, recently rebranded by the administration as the Department of War, designated Anthropic as a supply-chain risk. This classification, typically reserved for foreign adversaries like Huawei or ZTE, effectively blacklists the company from federal procurement. Defense Secretary Pete Hegseth argued that the military must have the latitude for "any lawful use" of AI, rejecting Anthropic’s insistence on ethical restrictions for its Claude model. In response, Anthropic filed a lawsuit on Monday in California, alleging that the administration is engaging in "unprecedented and unlawful" retaliation that violates the company’s First Amendment rights.
The financial stakes are immediate and severe. Anthropic’s legal filing notes that federal contracts are already being canceled, and the "supply chain risk" label has cast a shadow over private-sector deals, threatening hundreds of millions of dollars in near-term revenue. The Treasury Department, led by Scott Bessent, has already signaled it will discontinue use of the company’s products. For a company that positioned itself as the "safety-first" alternative to OpenAI, the irony is sharp: the very guardrails designed to ensure ethical AI have now become the primary obstacle to its commercial survival within the world’s largest economy.
While Anthropic faces a federal exodus, its rivals are moving to fill the vacuum. OpenAI recently reached an agreement with the Pentagon to provide AI services for national security purposes, though the deal has not been without internal friction. Caitlin Kalinowski, OpenAI’s head of robotics, resigned in protest over the weekend, highlighting the deep cultural divide between Silicon Valley’s engineering talent and the administration’s "America First" military requirements. Despite the political headwinds, Anthropic’s consumer-facing Claude app has seen a surge in downloads, briefly overtaking ChatGPT in the past week as users react to the high-profile dispute.
The administration’s aggressive stance signals a new era of "algorithmic sovereignty," in which the White House demands total control over the underlying logic of the tools it employs. By treating a domestic AI leader as a security threat, President Trump is forcing the tech industry to choose between adherence to internal safety charters and the lucrative, if ethically fraught, demands of the state. The outcome of Anthropic’s lawsuit will likely define the legal boundaries of how much "speech" an AI company can embed in its software before it is deemed a hindrance to national defense.
