NextFin News - OpenAI’s pivot toward military applications has triggered its most significant internal crisis since the 2023 board upheaval, as the company’s head of robotics, Caitlin Kalinowski, resigned on March 7 following a controversial deal with the Pentagon. The agreement, finalized on February 28, 2026, marks a definitive end to the era of "AI for everyone" neutrality and has already sparked a 295% surge in ChatGPT uninstalls as consumer trust evaporates. For a company that once positioned itself as a safeguard against the risks of artificial general intelligence, the move into the defense sector represents a fundamental shift in identity that many of its own engineers find irreconcilable.
The departure of Kalinowski, a high-profile hardware veteran who previously led augmented reality efforts at Meta, is more than a symbolic loss. In a public statement, she argued that while AI has a role in national security, the current deal lacks the guardrails necessary to prevent "surveillance of Americans without judicial oversight" and "lethal autonomy without human authorization." Her exit highlights a growing rift between OpenAI’s executive leadership, led by Sam Altman, and the technical staff who fear their work is being weaponized without sufficient ethical oversight. The timing is particularly sensitive as U.S. President Trump’s administration has aggressively pushed for "America First" AI development, often at the expense of the safety-first frameworks favored by Silicon Valley’s research community.
OpenAI’s decision to sign with the Department of Defense was not made in a vacuum. It followed a period of intense political pressure where the Pentagon, under Defense Secretary Pete Hegseth, reportedly threatened to blacklist competitors like Anthropic for maintaining "woke" AI safety protocols that restricted military use. By stepping into the void left by Anthropic’s refusal to remove safeguards, OpenAI has secured a massive revenue stream and a seat at the table of national defense strategy. However, the cost is being measured in human capital. Internal reports suggest that Kalinowski’s resignation may be the first in a wave of departures from the robotics and hardware divisions, where the intersection of physical machines and AI intelligence creates the most direct path to autonomous weaponry.
The market reaction has been swift and bifurcated. While defense contractors and some institutional investors view the deal as a pragmatic alignment with national interests, the consumer-facing side of the business is reeling. The nearly fourfold increase in app uninstalls suggests that the "OpenAI" brand, once synonymous with helpful productivity, is now being associated with state surveillance. This creates a strategic dilemma: the company is trading its broad-based consumer goodwill for deep-pocketed government contracts. In the long run, this could hinder OpenAI’s ability to recruit top-tier talent who are increasingly wary of the "defense tech" label, potentially ceding the lead in consumer innovation to more specialized, ethically focused rivals.
The controversy also exposes the fragility of OpenAI’s governance structure. Despite the 2023 restructuring intended to provide more stability, the speed with which the Pentagon deal was pushed through suggests that commercial and political interests now outweigh the company’s original non-profit mission. As the robotics team loses its leadership, the development of embodied AI—the next frontier for the company—faces significant delays. Without a clear consensus on where the "red lines" for lethal autonomy are drawn, OpenAI risks becoming a mere subcontractor for the military-industrial complex, a far cry from the independent research lab that promised to benefit all of humanity.
Explore more exclusive insights at nextfin.ai.

