NextFin News - Speaking at the AI Impact Summit 2026 in New Delhi on February 19, 2026, OpenAI CEO Sam Altman delivered a landmark address outlining a timeline for the arrival of artificial superintelligence (ASI) and the urgent necessity for a global regulatory architecture. Altman told an audience of policymakers and tech leaders that on the current trajectory, early versions of true superintelligence could manifest within the next two years. He projected that by the end of 2028, the collective intellectual capacity housed within global data centers could surpass that of the entire human population, a shift that would fundamentally redefine the global economic and social contract.
The urgency of Altman’s message centers on the risks of centralization. He argued that if superintelligence is controlled by a single company or a solitary nation, it could lead to "ruin" and the rise of "effective totalitarianism." To mitigate these risks, Altman proposed the creation of a global coordination body, drawing parallels to the International Atomic Energy Agency (IAEA), to ensure that the benefits of ASI are democratized and that safety protocols are enforced across borders. This call for regulation comes as OpenAI reports massive scaling in India, which now boasts over 100 million weekly ChatGPT users, signaling that the infrastructure for this transition is already deeply embedded in emerging economies.
The shift from Large Language Models (LLMs) to superintelligent systems represents a phase change in computational capability. According to Altman, AI has evolved from solving high school-level problems to deriving novel results in theoretical physics and research-level mathematics in just a few years. This rapid vertical scaling suggests that the bottleneck for ASI is no longer algorithmic complexity but rather the physical constraints of energy and silicon. By predicting a 2028 arrival, Altman is signaling to the markets that the "intelligence explosion" is no longer a theoretical long-tail risk but a medium-term certainty that requires immediate capital and policy realignment.
From a macroeconomic perspective, the arrival of ASI threatens to decouple productivity from human labor entirely. Altman acknowledged that it will be "very hard to outwork a GPU," suggesting a future where the marginal cost of intelligence—and by extension, many physical goods and services—approaches zero. While this promises a post-scarcity era in healthcare and education, it also presents a profound challenge to the U.S. and global labor markets. President Trump's administration, which has emphasized American technological dominance, now faces a delicate balancing act: fostering the innovation required to reach ASI first while adhering to the global democratic safeguards Altman is championing.
The push for an IAEA-style regulatory body reflects a growing consensus among tech elites that the "move fast and break things" era is incompatible with superintelligence. The risks Altman highlighted—including the potential for AI-driven warfare and the creation of synthetic pathogens via open-source bio-models—are existential. By advocating for "AI resilience" as a core safety strategy, Altman is shifting the focus from mere technical alignment (ensuring the AI does what we want) to societal defense (ensuring society can survive the AI’s existence). This suggests that future regulation will likely move beyond software audits to include strict monitoring of compute clusters and energy consumption.
Looking ahead, the next 24 months will likely see a surge in "Sovereign AI" initiatives as nations scramble to build domestic capacity to avoid the centralization Altman warned against. We can expect the U.S. President to face increasing pressure to formalize international AI treaties that balance national security with the democratization of compute. If Altman's 2028 prediction holds true, the window for establishing a global safety framework is closing rapidly. The transition to a world where data centers hold the majority of the planet's intellectual capacity will be the defining geopolitical event of the late 2020s, necessitating a fundamental reimagining of human agency and democratic governance.
Explore more exclusive insights at nextfin.ai.
