NextFin

OpenAI and Anthropic CEOs Refuse to Hold Hands in Group Photo Signaling Deepening AI Industry Schisms

Summarized by NextFin AI
  • On February 19, 2026, OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei visibly declined to join hands during a group photo at the India AI Impact Summit, highlighting a lack of unity among AI leaders.
  • The incident symbolizes a deepening schism in the AI industry, reflecting their opposing views on AI safety and corporate structure amid rising tensions in the geopolitical landscape under U.S. President Trump.
  • OpenAI's valuation has surpassed $150 billion, while Anthropic has secured over $20 billion in total capital, underscoring the scale of the rivalry and the companies' differing business models.
  • This moment may signal the end of the "consensus era" in AI governance, as both companies prepare for fragmented lobbying efforts that align with their distinct architectural and safety philosophies.

NextFin News - A moment of symbolic friction captured the attention of the global technology community on February 19, 2026, as the leaders of the world’s most prominent artificial intelligence firms demonstrated a visible lack of unity. During a group photo session at the India AI Impact Summit in New Delhi, OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei pointedly declined to join hands with other participants, despite a request for a unified pose. The event, which featured high-profile attendees including Indian Prime Minister Narendra Modi and Google CEO Sundar Pichai, was intended to showcase international cooperation in AI development. However, while other executives and government officials linked arms to signal a shared vision for the future, Altman and Amodei stood side-by-side with their hands at their sides, creating a conspicuous gap in the ceremonial line-up.

According to Newswire, the incident occurred as the summit concluded its primary session focused on inclusive AI growth. The refusal to participate in the gesture was not a result of logistical confusion but appeared to be a deliberate choice by both CEOs, who have increasingly found themselves on opposite sides of the debate regarding AI safety and corporate structure. The summit, held against the backdrop of U.S. President Trump’s renewed focus on American AI dominance, was meant to bridge the gap between Western developers and the Global South. Instead, the image of the two most influential figures in the industry refusing a simple gesture of solidarity has become a viral metaphor for the deepening schism in the Silicon Valley ecosystem.

The tension between Altman and Amodei is rooted in the very origin story of Anthropic. Amodei, a former executive at OpenAI, left the company in 2021 alongside several other researchers over fundamental disagreements about the firm's direction—specifically its transition from a non-profit research lab to a multi-billion-dollar commercial entity backed by Microsoft. That historical friction has only intensified as Anthropic, now backed by Amazon and Google, positions itself as the "safety-first" alternative to OpenAI. The refusal to hold hands is a physical manifestation of a competitive reality in which the two firms are locked in a zero-sum race for talent, compute resources, and the first credible claim to Artificial General Intelligence (AGI).

From a financial perspective, the stakes of this rivalry have never been higher. As of early 2026, OpenAI's valuation has reportedly surged past $150 billion following its latest funding rounds, while Anthropic has secured over $20 billion in total capital. The divergence in their business models is stark: OpenAI has moved toward a more traditional product-led growth strategy with its "o-series" models, while Amodei has maintained a focus on "Constitutional AI," a framework designed to make models more controllable and less prone to harmful outputs. This ideological gap makes a unified front difficult to maintain, even for the sake of a diplomatic photo opportunity.

Furthermore, the geopolitical climate under U.S. President Trump has added a layer of complexity to these corporate relations. The administration’s "AI First" policy, which emphasizes deregulation to outpace international rivals, has found a more receptive audience in Altman’s OpenAI, which prioritizes rapid scaling. In contrast, Anthropic’s emphasis on rigorous safety testing and slower deployment cycles often clashes with the current administration’s push for speed. The refusal to hold hands at a summit hosted by a key U.S. ally like India suggests that these CEOs are no longer willing to perform the "AI for Good" theater when their strategic interests are so diametrically opposed.

Looking ahead, this incident likely signals the end of the "consensus era" of AI governance. Throughout 2024 and 2025, industry leaders frequently appeared together at safety summits in the UK and Seoul, presenting a facade of cooperation. However, as the technical hurdles to AGI diminish and the commercial rewards grow, the incentive to maintain this facade is evaporating. We should expect to see more fragmented lobbying efforts in Washington and Brussels, as Altman and Amodei seek to shape regulations that favor their respective architectural and safety philosophies. The "hand-holding" era is over; the era of unrestricted industrial warfare in AI has begun.


