NextFin News - On March 9, 2026, Anthropic filed two sweeping lawsuits against the U.S. Department of Defense and other federal agencies, marking a definitive rupture between the artificial intelligence sector and the second Trump administration. The litigation, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., follows President Trump's executive directive to "immediately cease" all federal use of Anthropic's technology. The administration's move to designate the AI firm a "supply-chain risk" came after CEO Dario Amodei refused to waive safety guardrails that prevent the Claude model from being used for mass domestic surveillance or fully autonomous lethal weapons systems. By choosing litigation over capitulation, Anthropic has drawn a sharp contrast with the "Capitulating Nine," a group of elite Big Law firms that recently retreated from representing clients targeted by the administration's political pressure.
The conflict escalated rapidly after Defense Secretary Pete Hegseth characterized Anthropic’s insistence on human oversight for "kill decisions" as "defective altruism" and a lack of patriotism. The formal blacklisting effectively bars Anthropic from a $200 million Pentagon contract and threatens billions in projected revenue from the broader defense supply chain. While the administration frames the move as a national security necessity, Anthropic’s legal team, led by WilmerHale, alleges an "unlawful campaign of retaliation" that violates First Amendment freedoms. The timing is particularly sensitive; just hours after Anthropic’s negotiations with the Pentagon collapsed, OpenAI signed a new expansive agreement with the Department of War, with Sam Altman publicly praising the department’s "deep respect for safety."
This divergence in corporate strategy reveals a fundamental disagreement over how to survive the current political climate. For Big Law, the calculation has been largely defensive. Firms like Paul Weiss, Skadden, and Kirkland & Ellis, all part of the so-called Capitulating Nine, have historically prioritized preserving their profits-per-partner metrics and their immediate access to federal power. When faced with threats of tax audits or the loss of government-adjacent business, these institutions chose to sever ties with controversial clients. Their logic treats the rule of law as a luxury of stable times, with the current era demanding pragmatic surrender to avoid institutional "de-banking" or regulatory harassment. In contrast, Anthropic is betting that its brand as the "ethical" AI provider will yield higher long-term dividends than a compromised federal contract.
The market response suggests Anthropic’s gamble may already be paying off. The day after the Pentagon’s blacklist was announced, the Claude app surged to the top of the iPhone App Store, surpassing ChatGPT for the first time. Reports indicate that over a million new users are signing up daily, driven by a public perception of the company as a principled underdog. This "conscience premium" has even moved traditionally cautious tech giants; Microsoft recently filed an amicus brief in support of Anthropic, despite having billions of dollars in its own federal contracts at risk. This collective action by Silicon Valley suggests a growing realization that if one firm can be arbitrarily designated a supply-chain risk for maintaining safety standards, no technology provider is truly safe from executive whim.
The legal battle now centers on whether the "supply-chain risk" statute can be used as a tool of political discipline. While the administration argues that any refusal to integrate AI fully into the military chain of command constitutes a risk, Anthropic's lawyers contend that the designation is a pretext for punishing a company that refused to become a "state-directed" enterprise. The four law firms that previously stood their ground against the administration, Perkins Coie, Jenner & Block, WilmerHale, and Susman Godfrey, have already secured several procedural victories in similar cases, providing a roadmap for Anthropic's defense. These firms have demonstrated that while the executive branch controls the flow of federal contracts, the judiciary remains a viable, if slow, check on that power.
Ultimately, the standoff between Amodei and the White House is a test of institutional endurance. Big Law’s retreat was predicated on the belief that justice delayed is justice denied, making immediate surrender the only "rational" business choice. Anthropic, however, is treating the legal system as a strategic asset rather than a liability. By framing its resistance as a defense of both the First Amendment and the future of responsible AI, the company is attempting to build an enduring institution that outlasts the current political cycle. The outcome of these lawsuits will likely determine whether the American AI industry remains a collection of independent innovators or becomes a regulated utility of the state.