NextFin News - Anthropic, the artificial intelligence startup that once positioned itself as the industry’s "safety-first" alternative to Silicon Valley’s giants, is scrambling to mend a fractured relationship with the Department of Defense. The rift reached a breaking point this week when the Trump administration issued a blunt ultimatum: open the company’s proprietary Claude models to unrestricted military use or face immediate termination of all federal contracts. The tension has been laid bare by a jarring operational paradox: even as the Pentagon publicly threatens to blackball the firm, U.S. military operations in Iran have continued to rely on Anthropic’s tools for real-time intelligence and strike coordination.
The standoff centers on a February 24 directive from Defense Secretary Pete Hegseth to Anthropic CEO Dario Amodei. The Pentagon demanded that the company remove "usage restrictions" that prevent its AI from being used in lethal autonomous weapons systems or direct combat targeting. Amodei has long argued that such guardrails are essential to preventing catastrophic AI accidents, but the Trump administration views these ethical constraints as a bottleneck to national security. According to the Wall Street Journal, the military’s reliance on these tools is so deeply embedded that strikes against Iranian-backed targets occurred just hours after a supposed "ban" on the software was due to take effect, highlighting a dependency the Pentagon is finding difficult to sever.
For Anthropic, the stakes are existential. The company has raised billions from investors like Amazon and Google on the premise that it can serve both the enterprise market and the public sector without compromising its "Constitutional AI" framework. However, the current administration’s "America First" approach to AI development leaves little room for corporate conscientious objection. By insisting on unrestricted access, the Pentagon is effectively demanding that Anthropic hand over the keys to its most sensitive algorithms, a move that would likely alienate the company’s core engineering talent and its more cautious commercial clients.
The financial implications are equally stark. Anthropic’s valuation, which soared during the 2024-2025 AI boom, is predicated on its ability to secure massive government "compute" and service contracts. If the Pentagon follows through on its threat to pivot toward more compliant rivals—such as Palantir or a more hawkish iteration of OpenAI—Anthropic could see its primary revenue engine stall. Yet, the military’s continued use of Claude in the Iranian theater suggests that the Pentagon’s own technical infrastructure is not yet ready for a clean break. The AI’s ability to parse vast amounts of signals intelligence and provide predictive modeling for regional escalations has made it a "sticky" utility that cannot be replaced overnight by a less sophisticated alternative.
This friction reflects a broader shift in how the U.S. government interacts with the tech sector under President Trump. The era of "voluntary commitments" on AI safety has been replaced by a mandate for military readiness. While Anthropic attempts to negotiate a middle ground—perhaps through a "dual-use" licensing model that separates civilian and military codebases—the Pentagon appears uninterested in nuance. The outcome of this dispute will likely determine whether the next generation of AI development remains a collaborative effort between Washington and Silicon Valley or becomes a strictly regulated arm of the national defense industrial base.
Explore more exclusive insights at nextfin.ai.