NextFin News - OpenAI is targeting approximately $600 billion in total compute expenditure through 2030, according to reports emerging in late February 2026. The figure represents a strategic recalibration of the company’s long-term infrastructure roadmap as it lays the groundwork for an initial public offering (IPO) that could value the artificial intelligence pioneer at up to $1 trillion. According to Reuters, the revised spending plan follows a period of internal assessment of whether earlier, more aggressive expansion goals were sustainable. The company’s 2025 financial performance underpins the recalibration: revenue reached $13 billion, surpassing the projected $10 billion, while actual spending came in at $8 billion, slightly under the $9 billion budget.
The adjustment in spending expectations comes at a critical juncture for the Sam Altman-led organization. While Altman previously discussed a commitment to spending as much as $1.4 trillion to develop 30 gigawatts of computing resources, the new $600 billion target suggests a more disciplined approach to capital allocation. This shift is reportedly driven by a desire to align infrastructure investment more closely with realistic revenue trajectories. According to CNBC, OpenAI projects its total revenue will exceed $280 billion by 2030, with the financial contribution split nearly equally between its consumer-facing products, such as ChatGPT, and its rapidly expanding enterprise division.
The financial ecosystem surrounding OpenAI continues to expand in tandem with these projections. Nvidia is currently finalizing a $30 billion investment in the company as part of a broader fundraising round aimed at securing over $100 billion in new capital. This round is expected to value OpenAI at approximately $830 billion pre-money, cementing its status as one of the most valuable private entities in history. However, the path to these valuations is not without operational friction. Reports from The Information indicate that the costs associated with running AI models—known as inference—quadrupled in 2025. This surge in operational overhead contributed to a decline in adjusted gross margins, which fell to 33 percent from 40 percent the previous year.
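A quick sanity check on those figures: under the simplifying assumption (ours, not the article's) that inference accounts for essentially all of OpenAI's cost of revenue, the reported margin move from 40 percent to 33 percent alongside a quadrupling of inference costs implies how fast revenue must have grown over the same period.

```python
# Back-of-the-envelope check on the reported margin compression, assuming
# (a simplification, not stated in the article) that inference is the
# dominant component of cost of revenue.
def implied_revenue_multiple(margin_before: float, margin_after: float,
                             cost_multiple: float) -> float:
    """If cost of revenue grows by `cost_multiple` and gross margin moves
    from `margin_before` to `margin_after`, revenue must have grown by
        g = cost_multiple * (1 - margin_before) / (1 - margin_after),
    which follows from margin = 1 - cost / revenue."""
    return cost_multiple * (1 - margin_before) / (1 - margin_after)

# Reported: margin fell from 40% to 33% while inference costs quadrupled.
g = implied_revenue_multiple(0.40, 0.33, 4.0)
print(f"Implied revenue growth: {g:.2f}x")  # roughly 3.6x
```

In other words, under this simplified model revenue would have roughly tripled while serving costs quadrupled, which is consistent with the direction of the reported margin decline.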
From an analytical perspective, the reduction of the 2030 compute target from $1.4 trillion to $600 billion signals a transition from the "blitzscaling" phase of AI development to a more mature phase of industrial optimization. In the early stages of the generative AI boom, the primary constraint was the sheer availability of compute power, leading to speculative and massive infrastructure pledges. By 2026, however, the focus has shifted toward efficiency and the unit economics of inference. The fourfold increase in inference costs highlights the "success tax" OpenAI faces: as ChatGPT’s weekly active users surpass 900 million, the cost of serving those users threatens to outpace the efficiency gains found in newer model architectures.
The involvement of Nvidia as a direct equity stakeholder, rather than just a vendor, further complicates the industry's competitive dynamics. By securing a $30 billion stake, Nvidia ensures that OpenAI remains tethered to its hardware ecosystem, while OpenAI gains a hedge against the volatile pricing of high-end GPUs. This vertical integration is essential for OpenAI to maintain its 33 percent margin in an environment where competitors like Anthropic and emerging open-source models are driving down the market price of intelligence. The $600 billion spend is likely a reflection of a more sophisticated "compute-per-dollar" forecast, accounting for the anticipated deflation in hardware costs and the rising efficiency of algorithmic training.
Looking forward, the success of an OpenAI IPO at a valuation approaching $1 trillion will depend on its ability to prove that AI is a high-margin software business rather than a low-margin utility service. The projected $280 billion in revenue by 2030 implies a compound annual growth rate that would require near-total dominance of the enterprise productivity market. If OpenAI can convert its 900 million weekly users into high-value subscribers while managing the $600 billion infrastructure bill, it will set the standard for the post-SaaS era. If, however, inference costs continue to scale linearly with usage, the company may find itself trapped in a capital-intensive cycle that challenges the traditional valuation multiples of the technology sector. Under the current administration, U.S. President Trump has emphasized domestic technological supremacy, and OpenAI’s massive infrastructure plans will likely remain a focal point of national economic policy through the end of the decade.
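The implied growth rate can be computed directly from the article's own figures: $13 billion in 2025 revenue growing to a projected $280 billion by 2030, a five-year span.

```python
# Compound annual growth rate implied by the reported figures.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """The constant yearly growth rate that takes start_value to
    end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# $13B (2025) to a projected $280B (2030), i.e. five years of growth.
rate = cagr(13, 280, 5)
print(f"Implied CAGR: {rate:.1%}")  # roughly 85% per year
```

Sustaining roughly 85 percent annual growth for five consecutive years at this scale is the benchmark against which the "near-total dominance" framing above should be read.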
Explore more exclusive insights at nextfin.ai.
