NextFin News - Anthropic filed two separate lawsuits on Monday against the Trump administration, seeking to overturn a Department of Defense designation that labeled the artificial intelligence firm a "supply chain risk" after it refused to grant the military unrestricted use of its Claude chatbot. The legal challenge, filed in California federal court and the federal appeals court in Washington, D.C., marks the first time the U.S. government has used such a national security designation against a domestic technology company to compel cooperation with military objectives. The move follows a March 4 letter from Defense Secretary Pete Hegseth, which argued that Anthropic’s refusal to allow its technology to be used for fully autonomous weapons and mass surveillance constituted a threat to national security.
The escalation represents a fundamental rupture in the relationship between Silicon Valley's "safety-first" AI labs and a Trump administration determined to integrate advanced machine learning into every facet of American hard power. While Anthropic has previously partnered with national security contractors like Palantir for data processing and intelligence analysis, CEO Dario Amodei has drawn a hard line at lethal autonomy. The lawsuit alleges that the administration is engaging in an "unlawful campaign of retaliation" and violating the company's First Amendment rights by punishing it for its stated ethical policies. By applying the supply chain risk designation to Anthropic, an authority typically reserved for blocking foreign adversaries like Huawei, the Pentagon has effectively frozen the company out of the most lucrative government contracts.
The financial stakes for Anthropic are immense. Valued at $380 billion and projecting $14 billion in revenue this year, the company is fighting to prevent the "supply chain risk" label from bleeding into its private-sector business. More than 500 customers currently pay Anthropic at least $1 million annually, and the company has spent the last week reassuring corporate clients that the Pentagon's designation is narrow in scope. However, the reputational damage of being labeled a risk by the federal government is difficult to quantify. President Trump has already ordered federal agencies to phase out Claude within six months, a directive that could dismantle Anthropic's foothold in the civilian government market.
The vacuum left by Anthropic was filled almost instantly by its chief rival. Just hours after the Pentagon moved against Anthropic, OpenAI announced a comprehensive deal to provide ChatGPT to the Department of Defense for "any lawful purpose." This divergence in corporate strategy has split the AI industry. While OpenAI CEO Sam Altman has opted for a pragmatic, if controversial, alignment with the administration's "America First" defense posture, Anthropic has become a rallying point for AI researchers concerned about the lack of human oversight in warfare. The resignation of OpenAI's head of robotics, Caitlin Kalinowski, and a supportive legal brief filed by 30 leading developers from Google and OpenAI suggest that the Trump administration's aggressive tactics may be triggering a new wave of talent migration within the sector.
The judiciary must now decide whether the executive branch can use national security authorities to override the terms of service of a private software provider. If the courts uphold the Pentagon's designation, it would set a precedent in which "supply chain risk" becomes a catch-all tool for the executive branch to enforce ideological or operational compliance among domestic tech firms. For now, the market is rewarding Anthropic's defiance in unexpected ways: consumer downloads of Claude have surged, briefly overtaking ChatGPT in popularity as the public reacts to the high-profile standoff. The outcome of this litigation will likely define the boundaries of corporate autonomy in the age of state-sponsored AI development.
Explore more exclusive insights at nextfin.ai.