NextFin News - OpenAI has appointed Sachin Katti, a former Intel chief technology officer and Stanford professor, to lead its industrial compute division as the company pivots toward a more aggressive, multi-vendor infrastructure strategy. The hire comes at a critical juncture for the San Francisco-based AI giant, which is navigating a volatile landscape of memory chip shortages, power grid constraints, and a shifting technological roadmap that recently led to the cancellation of a major expansion in Abilene, Texas. Katti’s arrival signals a transition from OpenAI’s early reliance on cloud partnerships toward hands-on, industrial-scale control of its own physical infrastructure and silicon supply.
The scale of OpenAI’s ambition is now measured in gigawatts rather than just parameters. According to company data, OpenAI’s computing capacity more than tripled in 2025 to approximately 1.9 gigawatts. CEO Sam Altman has set a staggering internal target of adding one gigawatt of new AI infrastructure every week, a pace that requires a level of supply chain mastery typically reserved for global semiconductor giants or sovereign states. Katti, who previously oversaw Intel’s Network and Edge Group, is tasked with bridging the gap between OpenAI’s software-defined needs and the hard physical realities of global data center construction.
This leadership change coincides with a strategic retreat from a planned expansion at the flagship "Stargate" site in Texas, part of the broader $500 billion infrastructure initiative announced alongside U.S. President Trump in January 2025. The decision to skip the Abilene expansion was not a sign of cooling demand but a calculated bet on the next generation of silicon. OpenAI reportedly chose to wait for Nvidia’s forthcoming Vera Rubin AI accelerators, which are expected to deliver ten times the performance per watt of the current Blackwell generation. By delaying certain buildouts, OpenAI is attempting to align its data center "shells" with the arrival of more efficient chips, avoiding the risk of filling expensive floor space with hardware that could be obsolete within eighteen months.
The move also highlights a growing diversification of OpenAI’s silicon portfolio. While the company remains the primary customer for Nvidia’s top-tier chips, it has quietly secured multibillion-dollar deals with Broadcom, Cerebras, and Amazon Web Services. This multi-vendor approach is a direct response to a persistent bottleneck in Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging. With Nvidia projected to consume nearly 60% of TSMC’s CoWoS output in 2026, OpenAI is counting on Katti’s expertise to navigate a supply chain in which access to advanced packaging is as valuable as the chips themselves. Katti has already signaled a more cautious stance on geopolitics and supply chain resilience, suggesting that future data center sites will be spread across Europe, the Middle East, and South America to mitigate regional risks.
For the broader market, OpenAI’s shift toward industrial-scale compute management marks the end of the "cloud-only" era for top-tier AI labs. By hiring an executive with Katti’s pedigree in hardware and networking, OpenAI is effectively building an internal engineering firm capable of designing and operating the world’s most complex machines. The company is currently developing more than half a dozen sites across the United States, including a massive project with Oracle in Wisconsin. As DRAM prices are projected to surge by up to 70% in the second quarter of 2026, the ability to manage these capital-intensive projects with surgical precision will likely determine which AI firms survive the transition from research labs to global infrastructure utilities.
Explore more exclusive insights at nextfin.ai.
