NextFin News - The escalating confrontation between the U.S. Department of Defense and Anthropic has reached a fever pitch as David Sacks, the Trump administration’s AI czar, publicly characterized the artificial intelligence startup as "ruthless" and "not always on the side of the angels." The remarks, delivered during a period of intense friction over the military’s use of the Claude AI model, signal a fundamental breakdown in the relationship between the White House and one of the world’s most prominent AI safety-focused firms.
The dispute centers on the Pentagon’s attempt to force Anthropic to remove safety guardrails that prevent its technology from being used in lethal autonomous weapons systems and domestic surveillance. According to reports from NPR and TechCrunch, Defense Secretary Pete Hegseth has threatened to invoke the Defense Production Act to compel compliance, or alternatively, to designate Anthropic a "supply chain risk"—a label typically reserved for foreign adversaries such as Huawei. This follows the Pentagon's move in March 2026 to formally ban Anthropic’s technology, a decision subsequently halted by a federal judge citing potential First Amendment retaliation.
Sacks, a venture capitalist and former PayPal chief operating officer who has long championed a "pro-innovation," anti-regulation stance in Silicon Valley, has emerged as a primary ideological antagonist to Anthropic’s "Constitutional AI" approach. Known for his libertarian leanings and his role as a key donor and advisor to President Donald Trump, Sacks has consistently criticized what he terms "woke AI"—models that incorporate social or ethical constraints he views as politically biased. His position as a central figure in the administration’s AI policy gives his critiques significant weight, though they often clash with the safety-first ethos of the effective altruism movement from which Anthropic emerged.
The financial stakes of this blacklisting are substantial. In legal filings, Anthropic executives said a permanent ban could cost the company billions of dollars in projected revenue. The company’s public sector business had been on a trajectory to reach several billion dollars in annual recurring revenue within five years, bolstered by a $200 million defense contract signed in 2025. The loss of that revenue stream, combined with the reputational damage of being labeled a national security risk, represents a significant headwind for a company that has raised billions from investors including Amazon and Alphabet.
Sacks’ perspective is not universally shared within the defense or technology sectors, however. Former Department of Defense officials, including Brad Carson, have warned that banning Anthropic leaves the military without its most capable AI tools for sensitive environments. Critics of the administration’s hardline approach argue that forcing a private company to weaponize its technology against its founding principles sets a dangerous precedent for the American tech industry, and that the "supply chain risk" designation is being wielded as a political cudgel rather than a legitimate security measure.
The legal battle remains fluid. While Judge Rita Lin’s injunction gives Anthropic a temporary reprieve, the administration’s stated intent to invoke the Defense Production Act suggests a long-term strategy of subordinating private AI development to national security priorities. The outcome of this clash will likely define the boundaries of corporate autonomy in the age of frontier AI, determining whether a company can legally refuse to participate in lawful but lethal applications of its own inventions.
Explore more exclusive insights at nextfin.ai.

