NextFin News - The U.S. Department of Justice has escalated its legal confrontation with Anthropic, arguing in a federal court filing on Tuesday that the artificial intelligence startup cannot be trusted with national security systems because it attempted to dictate how the military uses its technology. The filing, submitted in response to Anthropic’s lawsuit against the Trump administration, marks a definitive break between the executive branch and one of the world’s leading AI developers. At the heart of the dispute is a "supply-chain risk" designation that effectively blacklists Anthropic from defense contracts, a move the government now defends as a necessary precaution against potential corporate "sabotage."
The friction began in February 2026 when negotiations over a $200 million contract collapsed. Anthropic, led by CEO Dario Amodei, insisted on "red lines" that would prohibit its Claude AI models from being used for mass surveillance of U.S. citizens or in fully autonomous lethal weapons. Defense Secretary Pete Hegseth and the Trump administration viewed these ethical safeguards not as corporate responsibility, but as a security vulnerability. According to the Justice Department, the refusal to provide unrestricted access led the Pentagon to "reasonably" determine that Anthropic staff might maliciously subvert the design or operation of a national security system if they felt their corporate values were being compromised during active warfighting operations.
The government’s legal strategy is to frame Anthropic’s ethical constraints as a physical threat to the supply chain. Justice Department attorneys argued that the First Amendment does not grant a company the right to "unilaterally impose contract terms on the government." By maintaining the ability to disable or alter the behavior of its models, Anthropic is viewed by the Pentagon as a "rogue" contractor. The filing suggests that the military cannot rely on a system where a private entity holds a "kill switch" based on its own internal moral compass, particularly in high-stakes environments like the ongoing conflict in Iran where AI-driven data analysis is already deeply embedded.
For Anthropic, the stakes are existential. The company claims the "supply-chain risk" label is a form of illegal retaliation that could cost it billions of dollars in revenue this year alone. Beyond the loss of direct government contracts, the designation serves as a toxic signal to the broader commercial market. If the label holds, any business using Claude for Department of Defense work would be forced to stop working with the startup. Anthropic has argued that its models are already being used responsibly through partners like Palantir to process complex data and streamline document review, but the administration appears intent on replacing these tools with products from competitors more willing to grant the Pentagon total autonomy.
The broader AI industry is watching the case as a bellwether for the relationship between Silicon Valley’s "safety-first" culture and the Trump administration’s "America First" defense policy. While companies like OpenAI and Google have also grappled with military applications, Anthropic’s public stance on AI safety has made it a primary target for an administration that views technological restraint as a strategic weakness. Secretary Hegseth has reportedly even considered invoking the Defense Production Act to compel the company to provide a version of Claude stripped of its safeguards, a move that would represent an unprecedented intervention into private software development.
The legal battle now moves to a hearing scheduled for next Tuesday in San Francisco, where Judge Rita Lin will decide whether to grant Anthropic a temporary reprieve. The government’s filing makes it clear that it has no intention of backing down, asserting that the Pentagon "cannot simply flip a switch" to accommodate a contractor it no longer trusts. As the military moves to phase out Claude over the next six months, the case will likely define the legal boundaries of how much control a private company can retain over its intellectual property once it enters the theater of war.
