NextFin News - U.S. Senator Elizabeth Warren has formally accused the Pentagon of "retaliation" against AI startup Anthropic, escalating a high-stakes legal and political battle just hours before a critical federal court hearing. In a letter sent March 23 to Defense Secretary Pete Hegseth, Warren argued that the Department of Defense (DoD) weaponized its supply-chain risk tools to blacklist the company after Anthropic refused to relax its safety protocols regarding mass surveillance and autonomous lethal weapons. The timing of the intervention is surgical, arriving as U.S. District Judge Rita Lin prepares to hear Anthropic’s request for a preliminary injunction in San Francisco today, March 24, 2026.
The dispute centers on the Pentagon’s decision earlier this month to designate Anthropic as a supply-chain risk—the first time such a label has been applied to a major domestic AI firm. While the Trump administration maintains the move is a necessary national security judgment, Warren’s letter suggests a more transactional motive. She contends that if the DoD simply had a grievance with a specific contract, it could have terminated that agreement. Instead, by invoking the supply-chain risk designation, the government has effectively "blacklisted" Anthropic from the broader federal marketplace, a move Warren characterizes as a punitive response to the company’s refusal to cross its own ethical "red lines."
Court filings have already begun to puncture the Pentagon’s narrative of an urgent security threat. Evidence surfaced showing that on March 4, just one day after the designation was finalized, Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei stating the two parties were "very close" on the exact issues now cited as existential risks. This suggests the "unacceptable risk" was a negotiable point until talks soured. Furthermore, Anthropic has submitted sworn declarations debunking the DoD’s claim that the company could remotely interfere with Claude AI models once deployed in air-gapped, classified environments. Absent a "kill switch" or back door, the government’s fear of corporate interference appears technically unfounded.
The irony of the situation is sharpened by reports that the Pentagon continues to rely on Anthropic’s technology for active military operations, including intelligence work related to Iran. This creates a glaring logical gap: the government argues Anthropic is too dangerous to trust, yet it remains too useful to quit. This "dual-track" reality undermines the legal threshold for a supply-chain risk designation, which typically requires a clear and present danger that cannot be mitigated through standard contractual oversight.
For the broader AI industry, the outcome of today’s hearing carries existential weight. If the court allows the designation to stand, it sets a precedent under which the U.S. government can use administrative blacklisting to sidestep First Amendment protections and force private companies to build tools they find ethically abhorrent. It transforms "national security" from a shield into a sword used to compel corporate alignment with specific military objectives. Silicon Valley is watching closely: if safety limits on autonomous killing are treated as supply-chain risks, the "AI safety" movement may soon find itself on a direct collision course with the "America First" defense posture of the Trump administration.
