NextFin News - The U.S. Department of Defense, recently rebranded under the Trump administration as the Department of War, has officially designated Anthropic as a supply chain risk, a move that effectively weaponizes procurement law against domestic technology firms that resist military mandates. The designation, confirmed by Anthropic on March 5, 2026, follows a high-stakes standoff between CEO Dario Amodei and Secretary of War Pete Hegseth over the ethical boundaries of artificial intelligence in combat. By invoking 10 USC 3252—a statute typically reserved for purging hardware from foreign adversaries like Huawei—the administration has signaled that "national security risk" now includes a company’s refusal to waive its terms of service for the state.
The friction point is remarkably specific. Anthropic sought two primary exceptions to the government’s use of its Claude models: a prohibition on mass domestic surveillance of Americans and a ban on the integration of Claude into fully autonomous lethal weapons systems. Amodei argued that the current generation of large language models lacks the reliability required for life-and-death kinetic decisions and that domestic surveillance violates fundamental constitutional rights. The White House responded with a 5:00 p.m. ultimatum last Friday, demanding access for "any lawful use." When Anthropic held its ground, the Department of War moved to sever the company’s access to the federal marketplace, labeling the startup a threat to the very supply chain it currently supports in active theaters.
This designation is paradoxical given the Pentagon’s current reliance on the technology. Claude is already deployed in active military operations in Iran and was previously utilized during the 2025 intervention in Venezuela. The government’s argument that Anthropic represents a "risk" is based not on cybersecurity vulnerabilities or foreign influence, but on the "risk" of restricted utility. By refusing to allow Claude to function as the brain of autonomous drones or a filter for domestic data dragnets, Anthropic has, in the eyes of the Trump administration, created a strategic bottleneck. The administration is essentially arguing that a tool is a risk if the manufacturer retains the right to say "no" to the commander-in-chief.
The immediate fallout is concentrated but severe. According to internal communications shared by Anthropic, the order is currently narrow, applying only to contracts where Claude is a "direct part" of the deliverables. However, the secondary effects are already rippling through the defense industrial base. Lockheed Martin and other major prime contractors have reportedly begun distancing themselves from Anthropic to protect their broader portfolios. Meanwhile, OpenAI has moved aggressively to fill the vacuum, securing new deals to deploy ChatGPT in classified environments—a pivot that suggests the "AI safety" consensus of 2023 has been replaced by a "patriotic compliance" mandate in 2026.
For the broader tech sector, the precedent is chilling. The use of supply chain risk designations to punish domestic policy disagreements bypasses traditional legislative debate and judicial oversight. While Anthropic has vowed to challenge the order in court, the legal reality is that the executive branch enjoys immense deference in matters of national security, especially during active hostilities. The administration has demonstrated that it views Silicon Valley not as a partner in innovation, but as a strategic resource that must be fully nationalized in spirit, if not in ownership. Companies are now faced with a binary choice: strip away safety guardrails or face a total lockout from the world’s largest purchaser of technology.
The shift reflects a broader transformation of U.S. industrial policy under President Trump. By merging the concepts of commercial reliability and political alignment, the Department of War is creating a new standard for "trusted" technology. In this environment, the most significant "supply chain risk" is no longer a Chinese chip or a Russian line of code, but a founder’s conscience. As the war in Iran intensifies, the demand for unrestricted AI will only grow, leaving little room for the nuanced "constitutional AI" that Anthropic once championed. The era of the independent AI lab is ending, replaced by a regime where the software must be as obedient as the soldier.
Explore more exclusive insights at nextfin.ai.
