NextFin

Jensen Huang on Nvidia’s $2B CoreWeave Bet: “Demand Is Incredible” and the Race to Build AI Factories

Summarized by NextFin AI
  • Demand for Nvidia's products is exceptionally strong, driven by frontier AI labs and a rapid expansion of AI applications across industries, including healthcare and finance.
  • Nvidia's $2 billion investment in CoreWeave aims to support the deployment of Nvidia platforms at scale, accelerating the construction of over 5 gigawatts of AI factory capacity by 2030.
  • New products like the Vera CPU and BlueField storage platform are set to revolutionize AI capabilities, with companies already expressing interest in using these innovations.
  • Nvidia is investing strategically across the AI infrastructure stack, addressing constraints in GPUs, energy, and memory, while planning for a substantial scale-out in 2026 and beyond.

NextFin News - On January 26, 2026, Nvidia founder and CEO Jensen Huang spoke to CNBC from China about the company’s expanded strategic collaboration with CoreWeave and the context for Nvidia’s $2 billion investment in the neocloud. The segment featured Jensen Huang in China and CoreWeave CEO Michael Intrator joining from the U.S.; the discussion was broadcast on CNBC the same day the companies announced the deal.

The conversation focused on the scale of demand for Nvidia’s hardware and software, the details and purpose of the CoreWeave investment and collaboration, the company’s plans for new products such as the Vera CPU and BlueField storage platform, and how Nvidia is thinking about China and the global infrastructure buildout. Below are Jensen Huang’s core statements from the interview grouped by topic.

On current demand for Nvidia’s products and the drivers of that demand

Huang described demand as exceptionally strong and multi‑faceted. He told CNBC that AI labs and AI makers are "racing to the next frontier" and building out pre‑training, post‑training and inference capability. He said demand is being driven both by frontier AI labs and by a rapid expansion of AI applications across industries—healthcare, manufacturing, entertainment, media, financial services, software and engineering.

"The demand is incredible ... these AI systems are racing towards the frontier, and also the number of applications that are being built on top of these AI models is really going through the roof."

Why Nvidia invested in CoreWeave and what the partnership will do

Huang framed the investment as support for a partner that has long used Nvidia architecture and is now committing to deploy future Nvidia platforms at scale. He said the funding and collaboration are intended to accelerate CoreWeave’s procurement of land, power and shells and to speed construction of more than 5 gigawatts of AI factory capacity by 2030. He emphasized that CoreWeave will adopt Nvidia CPUs and storage platforms in addition to GPUs and networking.

"We're making a significant investment in them. Incredibly happy that they're going to sign up and dedicate themselves to our architecture for the next 5 gigawatts and many years to come."

Huang also highlighted that the collaboration is broader than equity: CoreWeave will deploy multiple Nvidia generations and Nvidia will test and validate CoreWeave’s software and reference architecture for broader distribution.

On new Nvidia products — Vera CPU and BlueField storage

Huang previewed Nvidia’s Vera CPU and a storage platform based on BlueField. He said Vera is "completely revolutionary" and that companies are already signing up to use Vera by itself. He explained that BlueField‑based storage will be important for agentic AI so systems can have memory and large context, underscoring why those platforms matter for production AI.

"Our CPU is Vera. It's completely revolutionary. We're going to tell everybody more about it. So many companies are now signing up to use Vera all by itself because it's such an incredible CPU."

On China, H200 licenses and how orders will appear

Speaking from China, Huang said he was visiting sites and celebrating Chinese New Year with employees. He addressed the status of H200 export licensing and said Nvidia expects approvals to show up in the market as purchase orders rather than public announcements. He urged analysts to keep potential Chinese business out of Nvidia's formal guidance, calling such revenue a "great bonus" if and when orders materialize.

"I'm here to celebrate Chinese New Year with my employees ... we're looking forward to the H200 licenses being finalized ... the demand for H200s and the demand for Nvidia stacks are really, really significant here."

Huang also said: "All of China's business and future potential business are not in our forecast, not in our guidance ... and it'll just be a great bonus when things get sorted out."

On the "circularity" critique and Nvidia’s investment strategy

Huang responded to the criticism that Nvidia is financing its own demand by describing the company’s investments as strategic support across the AI stack. He offered a five‑layer model—power, chips, infrastructure, models, and applications—to explain why Nvidia invests across layers. He emphasized that Nvidia’s investments (for example, the $2 billion in CoreWeave) represent a small fraction of the total capital these infrastructure projects will ultimately need.

"AI is a five layer cake ... you could see us investing across that entire five layer cake of AI ... we've invested $2 billion in CoreWeave, but recognize that the amount of funding that needs to be raised yet to support that 5 gigawatts is really quite significant. We're investing a small percentage of the amount that ultimately has to go and be provided."

On OpenAI and staged investments tied to buildout milestones

Huang reiterated that Nvidia’s investments in model labs such as OpenAI are structured over time against buildout thresholds. He described Nvidia’s capital as a proportionate contribution tied to the incremental infrastructure those labs raise.

"We said that we would invest $10 billion for each gigawatt that they would have to go raise ... they have to raise a significant amount in addition to what we decide to invest in them."

On constraints to the AI infrastructure buildout (GPUs, energy, memory)

Huang described a shifting set of constraints as the market scales: GPUs are the limiting factor at times, then energy, then memory as systems grow. He characterized the global demand as moving through those "critical paths of resistance" and said the debate about constraints will continue as the world scales AI computing infrastructure.

"The way that this market works is that you kind of move through the critical path of resistance ... right now access to the GPUs is the limiting factor, and then that clears and access to energy is the factor, and then ... access to memory is going to be an issue as we go forward."

On CoreWeave’s role in serving diverse customers beyond hyperscalers

Huang and CoreWeave CEO Michael Intrator discussed how the partnership aims to expand availability of Nvidia platforms to a broader set of customers, including enterprises and sovereign clouds that need on‑prem or managed solutions. Intrator emphasized CoreWeave’s software and execution capability and said the partnership formalizes alignment around physical infrastructure, software and operations to deliver solutions to market.

"You bring together the best infrastructure with the best software and the best operations, and you deliver the best product to the consumers of this ... that's what this partnership really represents."

On supply chain, production ramps and Nvidia’s positioning

Huang said Nvidia has been planning and scaling its supply chain across fabs, packaging, memory vendors and ecosystem partners and expects substantial scale‑out in 2026 and beyond. He framed Nvidia as working with every major memory and supply partner to support the coming infrastructure buildout.

"Nvidia's supply chain is the largest in the world with respect to AI and we've had the benefit of ramping and scaling and planning our supply chain for the last couple years ... we're looking forward to a giant scale out this year."

Closing remarks from the interview

Huang closed by restating his long‑term view of AI infrastructure: that it is an enormous, multi‑year buildout and that Nvidia intends to be deeply engaged across chips, systems and software to enable the next generation of AI applications.

"We're in the beginning of the AI infrastructure buildout and the demand is just extraordinary ... this is going to be a substantial infrastructure buildout. We're just in the beginning part of that."


