NextFin News - In a move that signals a significant cooling of the "compute-at-all-costs" era, OpenAI has informed investors that it is slashing its projected infrastructure spending target for 2030 by more than half. According to Reuters, the San Francisco-based AI leader now expects to spend approximately $600 billion on compute resources by the end of the decade, a sharp retreat from the $1.4 trillion figure previously circulated in private discussions. This recalibration comes as U.S. President Trump’s administration continues to emphasize American leadership in AI while demanding greater corporate accountability and domestic infrastructure resilience.
The revised forecast, first reported by CNBC on Friday, February 20, 2026, highlights a pivotal moment for the artificial intelligence sector. Despite the reduction, the $600 billion target remains an astronomical sum, averaging roughly $100 billion in annual capital expenditure—a figure that rivals the annual capital outlays of tech titans like Amazon and Microsoft. However, the $800 billion delta between the old and new projections suggests that OpenAI is responding to intensifying pressure from backers to demonstrate a sustainable path to profitability. According to sources familiar with the matter, OpenAI’s 2025 revenue reached $13 billion, outperforming its $10 billion projection, while actual spending was held to $8 billion, slightly under the $9 billion budget.
This financial discipline is likely a prerequisite for the company’s rumored initial public offering (IPO), which could value the company at up to $1 trillion. By lowering the spending ceiling, CEO Sam Altman is effectively signaling to the public markets that the race toward Artificial General Intelligence (AGI) may not require the near-limitless resources once assumed. The news had an immediate impact on the broader ecosystem: Nvidia, the primary supplier of the H-series and B-series GPUs that power OpenAI’s clusters, saw minor after-hours volatility as analysts began to adjust long-term demand models for high-end silicon.
The causes behind this $800 billion "haircut" are multifaceted, blending economic reality with technological breakthroughs. Primarily, investor pushback has become impossible to ignore. Microsoft, which has funneled over $13 billion into the partnership, and other venture participants have reportedly questioned the feasibility of a $1.4 trillion buildout. At a $1.4 trillion spend, OpenAI would have needed to achieve revenue growth unprecedented in the history of capitalism to avoid catastrophic dilution or insolvency. By moderating the target, Altman is aligning the company’s ambitions with the risk tolerance of institutional investors who are increasingly wary of the "AI bubble" narrative.
Technologically, the reduction is supported by rapid advancements in model efficiency. Internal testing at OpenAI suggests that newer architectures are achieving comparable performance gains with significantly less raw compute than their predecessors. Algorithmic optimization—specifically in the realm of inference—has progressed faster than industry analysts predicted in 2024. If models can climb the scaling curves with less compute, the need for massive, energy-hungry data centers may be partially offset by smarter software. This shift from "brute force" scaling to "algorithmic elegance" allows OpenAI to maintain its competitive edge while preserving its balance sheet.
The impact of this recalibration will be felt most acutely across the AI supply chain. For years, data center developers and energy providers have been planning "gigawatt-scale" projects on the assumption of insatiable demand. A 57% reduction in OpenAI’s projected spend may cool the speculative real estate market for data centers. Furthermore, it places a premium on "sovereign AI" initiatives. As President Trump pushes for energy independence and domestic manufacturing, OpenAI’s more measured spending may align better with national grid capacities and the administration’s focus on efficient infrastructure deployment.
Looking forward, the $600 billion target serves as a new benchmark for the industry. Competitors like Anthropic and Google are likely to follow suit, emphasizing capital efficiency over raw cluster size. The trend is moving away from the "compute wars" of 2024-2025 toward a period of operational consolidation. If OpenAI can scale its revenue toward the $50 billion mark by 2028 while keeping compute costs within this new $600 billion framework, the path to a $1 trillion IPO becomes not just possible, but probable. The era of speculative AI spending is maturing into an era of industrial AI execution, where the winner is no longer the one with the most chips, but the one who uses them most effectively.
Explore more exclusive insights at nextfin.ai.
