NextFin News - The U.S. Department of Justice filed a formal appeal on Thursday to reinstate a sweeping ban on government use of Anthropic’s artificial intelligence technology, escalating a high-stakes legal battle over the executive branch’s power to dictate the ethical boundaries of military AI. The filing, reported by Bloomberg, seeks to overturn a March 26 district court ruling that temporarily blocked the Trump administration from enforcing a "supply chain risk" designation against the San Francisco-based AI startup.
The conflict originated in February when the Pentagon, under Defense Secretary Pete Hegseth, demanded that Anthropic guarantee its Claude AI models could be used for "all lawful use cases," including lethal autonomous weapons systems. Anthropic, which was founded on a "safety-first" charter by former OpenAI executives, refused the mandate. The company instead proposed contract language that would explicitly prohibit its technology from being used in autonomous weaponry or for mass domestic surveillance. In response, the administration designated Anthropic a national security risk, effectively barring federal agencies and their contractors from utilizing the company’s services.
U.S. District Judge Jia Cobb’s initial ruling paused the ban, citing due process concerns and the potential for "irreparable harm" to the company’s commercial viability. However, the judge stayed her own order for one week to allow the Department of Justice to appeal, a window the government has now used. The administration argues that the President possesses broad authority to secure federal supply chains against entities that refuse to comply with national security requirements, particularly in the burgeoning field of frontier AI.
For Anthropic, the stakes are existential. During court hearings in March, attorneys for the company told the court that the "supply chain risk" label had already caused more than 100 enterprise customers to reconsider their contracts, potentially jeopardizing billions of dollars in future revenue. The company’s legal team has argued that the designation is a retaliatory measure intended to punish protected speech, namely Anthropic’s refusal to endorse certain military applications of its software.
The legal standoff has divided the technology sector. While some national security hawks argue that the U.S. cannot afford to have its most advanced AI labs "opt out" of the defense mission, industry trade groups have expressed alarm. In a letter to Secretary Hegseth, tech advocates warned that designating a domestic, venture-backed company as a supply chain risk—a label typically reserved for foreign adversaries like Huawei or Kaspersky—sets a "dangerous and unpredictable precedent" for the entire Silicon Valley ecosystem.
The outcome of the appeal will likely hinge on the court’s interpretation of Section 3252 of Title 10 of the U.S. Code, which governs the Department of Defense’s authority to manage supply chain risks in procurement. Anthropic contends that this authority is limited to specific defense procurements and cannot be used to trigger a government-wide boycott or to interfere with a company’s private commercial relationships. The Department of Justice, conversely, maintains that the company’s refusal to support military requirements constitutes a fundamental vulnerability in the national security infrastructure.
As the case moves to the appellate level, the broader AI industry is watching closely. A victory for the administration would signal a new era of "technological conscription," in which AI developers may be forced to choose between lucrative government-adjacent markets and their own internal safety protocols. A victory for Anthropic, by contrast, would bolster the right of private firms to set ethical guardrails on the dual-use applications of their intellectual property, even in the face of direct pressure from the White House.
