NextFin News - A federal judge in San Francisco has halted the Trump administration’s attempt to blacklist Anthropic, delivering a significant legal blow to the White House’s effort to force Silicon Valley’s compliance with military AI mandates. On Thursday, Judge Rita F. Lin of the Northern District of California issued a preliminary injunction ordering the administration to rescind its designation of Anthropic as a "supply chain risk," a label typically reserved for hostile foreign entities such as Huawei or ZTE. The ruling effectively freezes a directive that would have forced all federal agencies and government contractors to sever ties with the AI startup, a move the court characterized as a likely violation of free speech and a retaliatory strike by the Department of War.
The conflict erupted in February 2026 after negotiations between Anthropic and the Pentagon collapsed over the "acceptable use" of the company’s Claude AI models. Anthropic, led by CEO Dario Amodei, insisted on strict guardrails prohibiting its technology from being deployed in autonomous weapons systems or for mass surveillance. The administration, spearheaded by Secretary of War Pete Hegseth, viewed these ethical constraints as an impediment to national security. When Anthropic refused to waive its safety protocols, U.S. President Trump issued a directive on Truth Social ordering an immediate cessation of all government engagement with the firm, followed by the formal "supply chain risk" designation by the Pentagon.
Judge Lin’s decision was pointed, noting that the government’s actions appeared to be a calculated "attempt to cripple" a private company for its refusal to align with political objectives. During the proceedings, the court highlighted that the Pentagon moved to blacklist Anthropic only after the company publicly voiced concerns about the militarization of its models, a sequence of events suggesting that the administration’s national security justification was a pretext for punitive measures. The injunction restores Anthropic’s ability to work with non-defense agencies, such as the National Endowment for the Arts, which had been caught in the broad sweep of the administration’s ban.
The legal victory for Anthropic has been bolstered by an unusual coalition of supporters. Nearly 150 retired federal and state judges filed an amicus brief earlier this month, expressing alarm over the weaponization of the "supply chain risk" label against a domestic firm. Legal analysts, including those from the Electronic Frontier Foundation, have argued that if the administration’s logic held, any software company with an ethics policy could be declared a national security threat. However, some defense hawks, such as Senator Tom Cotton, have defended the administration’s stance, arguing that any company receiving federal R&D support should not be allowed to dictate terms to the military during a period of heightened global competition.
While the injunction provides immediate relief, the broader battle over the "AI supply chain" is far from over. The Trump administration has characterized Anthropic as a "radical-left organization" that undermines American interests, a narrative that resonates with a segment of the electorate wary of "woke" technology. For Anthropic, the challenge remains maintaining its commercial viability while its primary competitor, OpenAI, has taken a more conciliatory approach toward military integration. The case now moves toward a full trial, which will likely serve as a definitive test of whether the executive branch can use procurement power to bypass the First Amendment rights of technology providers.