NextFin News - Palantir Technologies CEO Alex Karp confirmed Thursday that his company continues to deploy Anthropic’s Claude AI models within its defense platforms, openly defying a Pentagon blacklist that has thrown the U.S. military’s technological supply chain into chaos. Speaking at Palantir’s AIPcon 9 in Maryland, Karp revealed that while the Department of Defense (DOD) is moving to "phase out" Anthropic’s technology, the integration remains active and essential for current operations. The admission highlights a deepening schism between the Trump administration’s aggressive regulatory stance and the operational realities of the ongoing conflict in Iran, where Claude is reportedly still being used to support American military maneuvers.
The friction reached a boiling point last week when the Pentagon officially designated Anthropic as a "supply-chain risk," an extraordinary label typically reserved for foreign adversaries like Huawei or ZTE. This designation, directed by Defense Secretary Pete Hegseth, effectively bans government contractors from using the technology. However, the mandate has hit a wall of practical necessity. Palantir, which partnered with Anthropic and Amazon Web Services in 2024 to bring advanced AI to intelligence workflows, finds itself in the crosshairs of a policy that seeks to purge a tool that has already become foundational to its "Department of War" applications. Karp’s defiance is not merely a corporate stance; it is a reflection of the technical debt the military has accrued by leaning heavily on a single, high-performing model for real-time battlefield analysis.
The legal and political theater surrounding the blacklist is equally volatile. Anthropic filed a lawsuit against the Trump administration on Monday, seeking an emergency stay of the DOD’s action. President Trump has characterized the move as a necessary purging of unreliable actors, yet the administration’s logic remains opaque to industry observers. The "supply-chain risk" designation under Section 3252(d)(4) is intended to prevent sabotage by adversaries, but critics argue the administration is wielding it as a cudgel to force concessions on usage restrictions. Anthropic’s insistence on safety guardrails—designed to prevent the misidentification of targets—appears to have clashed with a Pentagon leadership that demands unfettered, "black box" control over AI decision-making in the Iranian theater.
While legacy defense giants like Lockheed Martin have already instructed employees to scrub Claude from their systems, Palantir’s continued use suggests a higher tolerance for risk—or perhaps a more desperate reliance on the model’s superior reasoning capabilities. The stakes are quantified in Anthropic’s own projections, which recently estimated its public sector business could reach billions in annual recurring revenue within five years. By severing this tie, the Pentagon is not just sidelining a vendor; it is potentially degrading the analytical speed of its frontline units at a moment when the Iran conflict demands maximum precision. Karp noted that Palantir will eventually integrate other large language models, but the transition is neither immediate nor seamless.
The broader implication for the defense-tech sector is a chilling effect on innovation. If a domestic champion like Anthropic can be blacklisted overnight via a social media post from the Defense Secretary, the "Silicon Valley to Pentagon" pipeline faces a structural threat. Investors are now forced to price in "political risk" for American AI startups, a variable previously reserved for international ventures. As the legal battle moves to the appeals court, the military remains in a paradoxical state: officially banning the very software that its commanders on the ground are using to navigate the complexities of a modern war.
Explore more exclusive insights at nextfin.ai.