NextFin News - The fragile detente between the world’s leading artificial intelligence labs shattered this week as Anthropic CEO Dario Amodei accused OpenAI of "straight up lies" regarding its new contract with the Department of Defense. The public broadside, delivered via an internal memo to staff and later confirmed by industry sources, marks the most aggressive escalation in the rivalry between the two San Francisco giants since Anthropic was founded by former OpenAI executives in 2021. At the heart of the dispute is a $200 million Pentagon deal that Anthropic walked away from on ethical grounds, only for OpenAI to "swoop in" and accept terms that Amodei characterizes as a betrayal of AI safety principles.
The fallout began when negotiations between Anthropic and the Pentagon collapsed over the military’s demand for "any lawful use" of the company’s Claude models. According to reports from the Financial Times and The Information, Anthropic sought ironclad guarantees that its technology would not be used for autonomous weaponry or domestic surveillance. When the Pentagon, represented by Under Secretary of Defense Emil Michael, refused to budge, the deal evaporated. Within hours, the Trump administration pivoted to OpenAI, which signed a contract that reportedly grants the military the broad latitude Anthropic had rejected. Amodei’s memo described OpenAI’s subsequent public messaging—which framed the deal as consistent with safety—as "safety theater" and "gaslighting."
The market reaction to OpenAI’s pivot has been swift and visceral. Data indicates that ChatGPT uninstalls surged by 295% in the days following the announcement, as users reacted to the company’s deepening ties with the military-industrial complex. Conversely, Anthropic has seen a surge in brand equity, with its mobile app climbing to the #2 spot in the App Store. This divergence highlights a growing rift in the AI sector: a "safety-first" camp led by Anthropic and a "national interest" camp led by OpenAI, the latter increasingly aligned with the Trump administration’s push for American AI dominance at any cost.
The Pentagon’s response to Anthropic’s refusal was remarkably punitive. Following the breakdown of talks, the Department of Defense officially designated Anthropic a "supply-chain risk," a move that effectively blacklists the company from future federal contracts. According to a letter sent to Defense Secretary Pete Hegseth by a tech industry group that includes Google and Nvidia, the designation is viewed as retaliation against a company for exercising ethical discretion. The move signals a new era in which the U.S. government may treat AI safety guardrails as obstacles to national security, forcing startups to choose between their founding principles and their ability to do business with the state.
For OpenAI, the deal is a financial and strategic windfall that cements its role as the de facto national champion of AI, but it comes at a steep reputational price. By accepting the "any lawful use" clause, Sam Altman’s firm has effectively lowered the barrier for AI integration into lethal and surveillance systems. The contrast is stark: while Anthropic’s lawyers were preparing a lawsuit against the Pentagon over its "supply-chain risk" label, Altman was reportedly on the phone with Michael at 10 p.m. on a Friday to finalize the deal. This aggressive pursuit of government revenue suggests OpenAI is prioritizing scale and political alignment over the cautious, multi-stakeholder approach it once championed.
The long-term implications of this rift extend far beyond a single contract. The "safety theater" accusation suggests that the industry’s self-regulatory bodies are failing. If the leading AI companies cannot agree on what constitutes "safe" military use, the task will fall to a polarized Congress or an executive branch that has already shown a willingness to punish dissenters. As the Pentagon integrates OpenAI’s systems into its classified networks, the boundary between commercial AI and state power has effectively vanished, leaving Anthropic as a principled, yet increasingly isolated, outlier in a rapidly militarizing industry.
Explore more exclusive insights at nextfin.ai.
