NextFin News - Palantir Technologies CEO Alex Karp confirmed on Thursday that his company continues to use Anthropic’s Claude AI models within its defense platforms, effectively defying the spirit of a Pentagon blacklist as the U.S. military remains deeply entangled in a conflict with Iran. Speaking at Palantir’s AIPCon 9 in Maryland, Karp revealed that while the Department of Defense (DOD) has designated Anthropic a "supply chain risk," the actual removal of the technology from active combat systems is far from complete. The admission exposes a widening rift between the Trump administration’s aggressive regulatory posture and the operational realities of modern, AI-driven warfare.
The friction began last week when the Pentagon, under Defense Secretary Pete Hegseth, issued a formal "supply chain risk" designation against Anthropic. The move followed a breakdown in negotiations over the startup’s "acceptable use" policies, which restricted the application of its Claude models in lethal kinetic operations. U.S. President Trump subsequently boasted about the move, framing it as a purge of "Silicon Valley elites" who refused to fully support the American war effort. However, the reality on the ground in the Middle East tells a different story. According to reports from the Washington Post and CNBC, the military is still leveraging Claude-integrated systems for targeting and intelligence analysis in the ongoing Iran campaign, simply because there is no immediate, high-performance alternative ready to take its place.
Karp’s stance is a masterclass in corporate pragmatism. While he has previously criticized AI companies for attempting to dictate moral terms to the military—once suggesting that such defiance could lead to nationalization—he is now protecting Palantir’s operational continuity. By keeping Claude integrated into Palantir’s Artificial Intelligence Platform (AIP), Karp ensures that the "Department of War," as he pointedly called it, remains dependent on Palantir’s software layer, regardless of which underlying model is being used. He noted that while the Pentagon plans to phase out Anthropic, the transition is not yet active, and Palantir is already preparing to integrate other large language models to fill the eventual vacuum.
The legal battle is also intensifying. Anthropic filed a lawsuit against the Trump administration on Monday, seeking to reverse the blacklist and requesting an emergency stay from an appeals court. The company argues that the "supply chain risk" label is a political weaponization of a technical designation, intended to punish the firm for its safety guidelines rather than to address any genuine foreign subversion. Legal experts cited by DefenseScoop have noted the irony of the Pentagon’s position: officials claim Claude is so vital to national security that they must have unfettered access, yet they simultaneously label it a risk to that very security to justify the blacklist.
For the defense industry, the fallout is uneven. While legacy giants like Lockheed Martin have reportedly instructed employees to cease using Claude to remain in compliance with the new DOD directive, Palantir’s continued use highlights the unique leverage held by software-first contractors. Palantir acts as the "operating system" for the military; if it pulls the plug on a specific model too quickly, it risks blinding commanders in a live theater of war. This dependency gives Karp the cover to wait out the legal and political storm while his engineers quietly test replacements from more compliant providers or open-source alternatives.
The broader implication for the AI sector is a forced alignment with the "all lawful purposes" doctrine demanded by the Trump administration. The era of Silicon Valley companies setting "red lines" for the use of their technology in combat appears to be ending, replaced by a regime where participation in federal contracts requires total submission to military requirements. As the war in Iran continues to demand rapid-cycle intelligence, the Pentagon finds itself in the awkward position of banning the very tools it is currently using to win. The resolution of Anthropic’s lawsuit will likely determine whether the government can legally compel AI firms to drop their ethical guardrails under the guise of national security.
Explore more exclusive insights at nextfin.ai.