NextFin News - Speaking at the AI Impact Summit in New Delhi on February 21, 2026, OpenAI CEO Sam Altman issued a provocative defense of the energy consumption required by artificial intelligence, directly comparing the computational costs of large language models to the biological and societal resources needed to raise a human being. According to The Indian Express, Altman argued that the current discourse surrounding AI’s power usage often lacks a fair baseline, noting that it takes approximately "20 years of life—and all the food you consume during that time—before you become smart."
The comments come at a critical juncture for the AI industry, which has faced intensifying scrutiny over the carbon footprint of massive data centers. Altman’s intervention seeks to shift the analytical framework from the absolute energy cost of training a model to the relative efficiency of "inference"—the act of a model generating an answer. He contended that once a model is trained, the energy required to answer a query is likely already more efficient than a human brain performing the same task. This rhetorical shift is not merely a defense of OpenAI’s operations but a strategic attempt to normalize AI’s infrastructure requirements as a necessary evolution of global intelligence.
The data supporting this perspective highlights a stark contrast in energy profiles, though the comparison requires care: training cost is a quantity of energy, while the brain figure is a rate of power draw. Training a frontier model like GPT-4 is estimated to consume roughly 50,000 kilowatt-hours (some published estimates run far higher), whereas a human brain draws approximately 20 watts continuously, which works out to only a few thousand kilowatt-hours over two decades. That metabolic baseline is then supplemented by the massive energy overhead of modern education and social infrastructure. Altman’s argument suggests that AI represents a "one-time" capital expenditure of energy that yields a near-infinite return on intelligence, whereas human intelligence requires that recurring, high-energy biological investment anew for every individual.
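The brain-side arithmetic is easy to check with a back-of-envelope calculation. The sketch below takes the 20-watt brain figure and the article's 50,000 kWh training estimate as given (both are approximations, not measured values):

```python
# Back-of-envelope check: brain metabolic energy over 20 years vs. the
# training-energy estimate cited in the article.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

brain_power_w = 20   # typical resting power draw of a human brain, in watts
years = 20           # Altman's "20 years of life" baseline

# Energy = power x time; divide by 1,000 to convert watt-hours to kilowatt-hours.
brain_energy_kwh = brain_power_w * HOURS_PER_YEAR * years / 1000
print(f"Brain over {years} years: ~{brain_energy_kwh:,.0f} kWh")  # ~3,504 kWh

training_estimate_kwh = 50_000  # figure cited in the article for GPT-4 training
ratio = training_estimate_kwh / brain_energy_kwh
print(f"Training estimate vs. one brain: ~{ratio:.1f}x")  # ~14.3x
```

At these numbers, one training run costs on the order of fifteen brain-lifetimes of neural metabolism, which is the kind of "relative baseline" framing Altman's comparison invites.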
Beyond the environmental metrics, Altman used the New Delhi summit to address the geopolitical risks of concentrated AI power. He warned that a world where a single company or sovereign state holds a monopoly on advanced AI would be "disastrously bad." Instead, he advocated for a "democratized" version of the technology, even if it necessitates society wrestling with the challenges of rapid deployment. This stance aligns with OpenAI’s "iterative deployment" strategy, which prioritizes putting tools in the hands of the public early to allow for societal adaptation, rather than keeping them behind closed doors in the name of absolute safety.
The implications of this philosophy are particularly resonant in India, which Altman identified as a global leader in AI adoption. By framing AI as a tool for the masses rather than a guarded corporate asset, Altman is positioning OpenAI to capture emerging markets that view AI as a leapfrog technology for economic development. However, this democratization requires a massive expansion of energy infrastructure. Altman noted that the industry must move toward nuclear, wind, and solar power "very quickly" to sustain the trajectory toward superintelligence, which he predicted is "not that far off."
Looking forward, the industry is likely to see a divergence in how energy efficiency is regulated. If Altman’s inference-based comparison gains traction among policymakers, the focus may shift from capping data center power to incentivizing the use of renewable energy and improving the "intelligence-per-watt" ratio. The transition to a "Pax Silica"—a period of stability driven by widely distributed AI—will depend on whether the global energy grid can scale to meet the demands of a technology that Altman now views as a more efficient successor to the biological learning process.
Explore more exclusive insights at nextfin.ai.
