NextFin News - Palantir Technologies is maintaining its integration with Anthropic’s Claude artificial intelligence models despite a formal Pentagon blacklist, CEO Alex Karp confirmed Thursday, highlighting a deepening rift between the defense establishment and the Silicon Valley firms powering the U.S. war effort in Iran. Speaking at Palantir’s AIPCon 9 in Maryland, Karp acknowledged that while the Department of Defense (DOD) has designated Anthropic a supply-chain risk, the company’s software remains deeply embedded in tactical systems currently deployed in the Middle East. The admission underscores the military’s precarious dependence on commercial AI at a time when the Trump administration is tightening its grip on the defense industrial base.
The conflict erupted last week when the Pentagon officially labeled Anthropic a "supply-chain risk," a designation that typically triggers an immediate freeze on procurement and a phased removal of the offending technology. The reality on the ground in Iran, however, has complicated that bureaucratic mandate. According to reports, Claude models are currently processing vast streams of battlefield intelligence and assisting commanders in real-time decision-making. Karp’s stance is one of pragmatic defiance: he noted that while the "Department of War" plans to phase out the startup, the integration remains active because the mission demands it. Palantir, which partnered with Anthropic and Amazon Web Services in 2024 to bring these models to the intelligence community, now finds itself acting as a buffer between a blacklisted vendor and a dependent military.
This friction has created a fragmented landscape among defense contractors. While Palantir continues its use of Claude, other industry titans have moved swiftly to distance themselves. Lockheed Martin has already instructed its employees to cease using Anthropic’s tools, fearing that continued reliance could jeopardize their standing with the Trump administration’s DOD. The divergence in strategy reflects a broader debate over "sovereign AI"—the idea that the U.S. military must rely on models that are not only technically superior but also politically and legally aligned with national security directives. Anthropic, for its part, has not taken the designation quietly, filing a lawsuit against the administration on Monday to reverse the blacklist and seeking an emergency stay from an appeals court.
The financial stakes for Palantir are significant. The company’s stock rose 1.99% following Karp’s comments, as investors weighed the risk of regulatory blowback against the reality of Palantir’s "sticky" relationship with the Pentagon. By maintaining the integration, Palantir ensures that its platforms—which serve as the operating system for modern warfare—remain functional during active hostilities. However, Karp signaled that the company is already preparing for a post-Anthropic future, stating that Palantir plans to integrate other large language models to ensure redundancy. This "model-agnostic" approach may be the only way for software providers to survive an era where the Pentagon’s list of approved vendors can change with a single executive order.
The standoff serves as a stark reminder of the "dual-use" dilemma inherent in modern AI. Unlike traditional hardware, software models like Claude are updated and iterated upon in real-time, making them difficult to "de-install" without degrading the capabilities of the systems they inhabit. As the war in Iran continues to test the limits of American logistical and technological superiority, the Pentagon faces a choice between maintaining its strict supply-chain purity and keeping the lights on for the AI tools its soldiers have come to rely on. For now, Palantir is betting that the immediate needs of the battlefield will outweigh the long-term concerns of the auditors in Washington.