NextFin News - In a week that has sent shockwaves through the technology sector, the artificial intelligence industry is facing a profound internal crisis after leading safety researchers publicly resigned from OpenAI and Anthropic. The departures, in early February 2026, come as the race for Artificial General Intelligence (AGI) reaches a fever pitch, raising urgent questions about whether the world’s most powerful AI labs are abandoning their long-term safety commitments in favor of aggressive commercial expansion. These are not mere administrative shifts: each resignation was accompanied by dire warnings about the "peril" facing the global community as commercial pressures begin to outweigh ethical guardrails.
The shake-up began on February 9, 2026, when Mrinank Sharma, the head of Anthropic’s Safeguards Research Team, announced his resignation. Sharma, who had led the team since its inception in 2025 and pioneered research into AI sycophancy and defenses against AI-assisted bioterrorism, published a cryptic and alarming resignation letter on X, writing that while he was proud of his work, he felt compelled to leave because the "world is in peril" amid a series of "interconnected crises." According to reports from OpenTools, Sharma’s exit was particularly striking given Anthropic’s founding mission as a safety-first alternative to its competitors. His departure coincided with the launch of Anthropic’s "Claude Cowork" tool, which has sparked intense debate over white-collar job displacement and the ethics of rapid automation.
Simultaneously, OpenAI faced its own high-profile defection. Zoë Hitzig, a prominent researcher at the San Francisco-based lab, resigned and detailed her decision in a guest essay for The New York Times titled "OpenAI Is Making the Mistakes Facebook Made. I Quit." Hitzig’s resignation was primarily triggered by OpenAI’s decision to integrate advertising into ChatGPT, a move she argued would lead to the manipulation of users based on their most private interactions. According to Beritaja, Hitzig expressed a "slow realization" that OpenAI had stopped asking the critical safety questions she was hired to help answer, suggesting that the company’s original principles were being eroded to maximize user engagement and revenue.
This exodus of talent reflects a deepening structural divide within the AI industry. For years, companies like Anthropic and OpenAI operated under "safety-first" rhetoric, but the current market environment, characterized by massive capital requirements and intense competition, has forced a pivot toward monetization. The tension is palpable: while U.S. President Trump’s administration has signaled a move toward deregulation to maintain American technological dominance, the very experts tasked with preventing catastrophic outcomes are warning that the guardrails are failing. Sharma’s research into AI-assisted bioterrorism, for instance, highlights a risk profile that extends far beyond algorithmic bias, touching on national security and global biological safety.
The financial implications of these resignations are significant. Anthropic, which recently agreed to a US$1.5 billion settlement over copyright claims from authors, is now struggling to maintain its brand identity as the "ethical" AI firm. When the head of safeguards suggests that corporate values no longer govern actions, the premium valuation associated with "safe AI" begins to evaporate. Similarly, OpenAI’s shift toward an ad-supported model marks a definitive departure from its non-profit roots, signaling to investors that the path to AGI will be paved with traditional Big Tech monetization strategies. The shift has also drawn sharp criticism from industry peers: the two labs recently traded barbs after an Anthropic Super Bowl advertisement criticized OpenAI’s transparency, a spot that OpenAI CEO Sam Altman dismissed as "dishonest."
Looking forward, the departure of figures like Sharma and Hitzig may trigger a broader "brain drain" of safety-conscious talent toward academia or decentralized research collectives. As these researchers seek what Sharma calls "the beauty of courageous speech," the concentrated power of private AI labs may become increasingly insulated from internal dissent. The trend suggests a future where AI safety is no longer an integrated corporate function but a regulatory battleground fought from the outside. With the industry moving toward more autonomous agents and deeper integration into critical infrastructure, the warnings issued this February may well be remembered as the final alarm bells before the guardrails were fully dismantled in the pursuit of the next quarterly earnings report.
Explore more exclusive insights at nextfin.ai.
