NextFin News - Jensen Huang, chief executive of Nvidia, spoke to Ed Ludlow on Bloomberg's The Asia Trade during a busy day in Washington, D.C. The conversation, broadcast on November 20, 2025, covered product demand, the ramp for Nvidia's next-generation platform, export controls and the company's view of energy and capacity constraints as AI adoption accelerates.
Blackwell demand and the company's supply outlook
Huang began by characterising current demand for Nvidia's newest GPUs as exceptional. In his words, sales are "off the charts" for Blackwell, and cloud instances powered by Nvidia GPUs are effectively sold out. At the same time he sought to reassure customers and markets that production is expanding: "We got plenty of Blackwells to sell you. We have lots of Blackwells coming. We're making a lot of Blackwells."
He credited careful planning and a broad partner ecosystem — naming TSMC, memory partners SK Hynix, Micron and Samsung, and systems and packaging partners Foxconn, Quanta and Wistron — for enabling Nvidia to meet a very strong year of demand.
Vera Rubin: timeline, engineering scale and rack architecture
On the company's next major platform, Vera Rubin, Huang described an intensive bring-up process: seven different chips are back in Nvidia's labs and, he estimated, "probably a couple of 20,000 people are working on bringing up Vera Rubin from silicon to systems, the software to algorithms." He gave a target delivery timeframe of roughly the third quarter of next year and said the company is maintaining its annual cadence: "Continuing our once a year cycle, Vera Rubin is already assured a huge success. Everybody's incredibly excited about it. Can't wait to show everybody."
Huang also emphasised continuity in the rack-scale architecture that supports Nvidia's systems. He highlighted the NVLink 72 scale-up switch and described the rack architecture as Nvidia's fifth generation, developed through transitions starting with Grace Blackwell and Grace Blackwell Ultra. He said the same rack-scale design will be used for Vera Rubin and that prior transitions have smoothed the supply-chain and deployment challenges.
China: guidance, engagement and the company position
Asked to clarify recent comments from Nvidia's CFO about China, Huang reiterated the company's guidance: "Our forecast for China is zero."
He said that while the Chinese market is large, an estimated $50 billion this year by his reckoning, Nvidia's public planning and guidance assume no meaningful revenue from China until regulatory conditions allow reengagement. At the same time, he expressed a desire to serve the market again: "We would love the opportunity to be able to reengage the Chinese market with excellent products that we deliver and to be able to compete globally."
Export permissions and diversion controls
Huang addressed recent U.S. Commerce Department permissions allowing certain exports to partners in the Middle East. He described longstanding concerns around "diversion" of technology and said Nvidia has repeatedly tested and sampled data centers worldwide and found no diversion. On compliance, he said there are multiple ways to meet U.S. requirements, including running equipment on U.S. cloud providers or implementing technical and process controls, and pledged continued engagement with both U.S. and foreign governments to ensure appropriate safeguards.
Energy, power and the limits to deployment
When asked whether energy is a larger constraint than chips for AI buildouts, Huang said growth at Nvidia and among AI customers is extraordinarily rapid, citing a rate of roughly 60% a year and describing quarter-to-quarter company growth measured in tens of billions of dollars. He acknowledged that everything becomes a challenge at that scale and stressed the importance of working with land, power and shell providers and energy companies so data-center deployments can be supported. He pointed to Nvidia's advantage in having a broad footprint across every major cloud and many geographies, which helps the company find "nooks and crannies" of available power at large, medium and small scales.
Partnerships with AI developers and disciplined build-out
Huang named major frontier-model developers, among them OpenAI, Anthropic and Google's Gemini team, and described Nvidia's role as foundational: "Nvidia's architecture literally runs every model." He said Nvidia and its large customers coordinate closely on visibility of demand and financing before large infrastructure build-outs. He emphasised discipline in both investment and execution: "The ambitions are large but the execution is disciplined. We're very disciplined with our investment. We're disciplined with our build out."
Huang characterised the current moment as one of exponential growth in compute demand, adoption and applications, and said Nvidia is working to support the scaling needs of "two of the most consequential companies in history" while continuing to optimise customers' stacks and add capacity where possible.
Closing remarks
Throughout the interview Huang returned to three central themes: extraordinary demand for Nvidia's products, an extensive and well-planned supply chain and system ecosystem, and a disciplined approach to supporting hyperscalers and AI developers while navigating geopolitical controls. He stressed readiness for continued growth, the technical complexity and scale of upcoming product ramps, and Nvidia's desire to reengage constrained markets if and when regulatory conditions permit.
References
Video clip and transcript: Bloomberg — "Nvidia CEO Huang on Blackwell Sales, Vera Rubin and China" (The Asia Trade, Nov 20, 2025).