NextFin

OpenAI Taps Former Intel CTO to Lead Gigawatt-Scale Infrastructure Pivot

Summarized by NextFin AI
  • OpenAI has appointed Sachin Katti, a former Intel CTO, to lead its industrial compute division, marking a shift toward a multi-vendor infrastructure strategy.
  • OpenAI's computing capacity has more than tripled to approximately 1.9 gigawatts, with an internal target of adding one gigawatt of new capacity every week.
  • The decision to skip the Abilene expansion is a calculated bet on Nvidia's forthcoming AI accelerators, aligning data center construction with the arrival of more efficient chips.
  • OpenAI is diversifying its silicon portfolio through multibillion-dollar deals with several suppliers, moving from a cloud-only approach to industrial-scale compute management.

NextFin News - OpenAI has appointed Sachin Katti, a former Intel chief technology officer and Stanford professor, to lead its industrial compute division as the company pivots toward a more aggressive, multi-vendor infrastructure strategy. The hire comes at a critical juncture for the San Francisco-based AI giant, which is currently navigating a volatile landscape of memory chip shortages, power grid constraints, and a shifting technological roadmap that recently led to the cancellation of a major expansion in Abilene, Texas. Katti’s arrival signals a transition from OpenAI’s early reliance on cloud partnerships toward a more hands-on, industrial-scale management of its own physical and silicon destiny.

The scale of OpenAI’s ambition is now measured in gigawatts rather than just parameters. According to company data, OpenAI’s computing capacity more than tripled in 2025 to approximately 1.9 gigawatts. CEO Sam Altman has set a staggering internal target of adding one gigawatt of new AI infrastructure every week, a pace that requires a level of supply chain mastery typically reserved for global semiconductor giants or sovereign states. Katti, who previously oversaw Intel’s Network and Edge Group, is tasked with bridging the gap between OpenAI’s software-defined needs and the hard physical realities of global data center construction.

This leadership change coincides with a strategic retreat from a planned expansion at the flagship "Stargate" site in Texas, a project that was part of a broader $500 billion infrastructure initiative announced alongside U.S. President Trump in January 2025. The decision to skip the Abilene expansion was not a sign of cooling demand, but rather a calculated bet on the next generation of silicon. OpenAI reportedly chose to wait for Nvidia’s forthcoming Vera Rubin AI accelerators, which are expected to deliver ten times the performance per watt of the current Blackwell generation. By delaying certain buildouts, OpenAI is attempting to align its data center "shells" with the arrival of more efficient chips, avoiding the risk of filling expensive floor space with hardware that could be obsolete within eighteen months.

The move also highlights a growing diversification of OpenAI’s silicon portfolio. While the company remains the primary customer for Nvidia’s top-tier chips, it has quietly secured multibillion-dollar deals with Broadcom, Cerebras, and Amazon Web Services. This multi-vendor approach is a direct response to a persistent bottleneck in Chip-on-Wafer-on-Substrate (CoWoS) packaging. With Nvidia projected to consume nearly 60% of TSMC’s CoWoS output in 2026, OpenAI is using Katti’s expertise to navigate a supply chain where access to advanced packaging is as valuable as the chips themselves. Katti has already indicated that the company is taking a more cautious stance on geopolitics and supply chain resilience, suggesting that future data center sites will be spread across Europe, the Middle East, and South America to mitigate regional risks.

For the broader market, OpenAI’s shift toward industrial-scale compute management marks the end of the "cloud-only" era for top-tier AI labs. By hiring an executive with Katti’s pedigree in hardware and networking, OpenAI is effectively building an internal engineering firm capable of designing and operating the world’s most complex machines. The company is currently developing more than half a dozen sites across the United States, including a massive project with Oracle in Wisconsin. As DRAM prices are projected to surge by up to 70% in the second quarter of 2026, the ability to manage these capital-intensive projects with surgical precision will likely determine which AI firms survive the transition from research labs to global infrastructure utilities.

Explore more exclusive insights at nextfin.ai.

