NextFin News - At the Morgan Stanley Technology, Media & Telecom Conference in San Francisco on March 4, 2026, NVIDIA CEO Jensen Huang spoke onstage with Morgan Stanley moderators, including Mark Edelstone. The session combined a prepared presentation with a wide-ranging question-and-answer conversation about hyperscaler spending, supply chains, geopolitical export friction, strategic investments, and NVIDIA's product roadmap.
The following report presents Huang's core statements as delivered at the conference and in related network segments. It organizes his remarks by theme and quotes his words where appropriate.
Wealth transfer to AI infrastructure and systems providers
Huang described the current market as a large reallocation of hyperscaler free cash flow into a narrow set of suppliers. He said major cloud companies are "taking all their free cash flow or the vast majority of it and handing checks directly to Nvidia, Broadcom, AMD, Mellanox, Arista Networks, companies like that." He framed the phenomenon as "indeed the greatest wealth transfer in history," with cloud capex being directed to the vendors building AI infrastructure.
Hyperscaler capex, investor expectations and revenue timing
Addressing investor concerns about the clouds' heavy capex, Huang noted that while investors prefer free cash flow, cloud vendors are funding the infrastructure that AI requires. He suggested that the revenue payoff from expanded AI deployments will become visible later in the decade, saying that infrastructure normalization could lead to "hundreds of millions of upside to AI and cloud revenues as early as '28, more likely '29 and '30," and he urged patience as adoption and productivity gains materialize.
Broadcom, competition and capex winners
Huang acknowledged that the gains are not limited to NVIDIA. He described Broadcom as a backbone for AI networking and custom chips, and said both companies are "capex plays" with different strategies. He characterized NVIDIA as an industry leader and systems player "helping shepherd the whole industry" while noting Broadcom’s role in customized infrastructure.
Supply chain readiness and memory shortages
Reflecting on supply constraints, Huang recounted that NVIDIA has secured broad portions of the supply chain — wafers, memory and packaging — and reminded listeners that shortages of memory and other components were predicted but not universally heeded. He emphasized that capacity is constrained and that securing supply is a strategic priority.
H200 chips for China and regulatory uncertainty
Huang addressed reports about H200 production for China, explaining that regulatory complexity made it difficult to know whether chips already manufactured would be allowed into the Chinese market. He said thousands of chips were ready but Beijing had not given a full green light. For that reason, NVIDIA reallocated factory space and capacity at TSMC toward newer platforms rather than holding production lines idle.
OpenAI and Anthropic investments
On strategic investments, Huang provided a concrete update: "Just for everybody's update, we finalized our agreement. We're going to invest $30 billion in OpenAI."
He added that the larger $100 billion figure previously discussed was "probably not in the cards" because of OpenAI's expected public listing, and said NVIDIA's $10 billion investment in Anthropic "probably will be the last as well." Huang framed these moves as part of a broader approach: by providing compute capacity and hardware, NVIDIA enables revenue growth as these companies scale and, he suggested, prepare to enter public markets.
Product roadmap and GTC preview
Huang previewed NVIDIA's multi-year data center product roadmap, naming Blackwell Ultra, Vera Rubin and future generations. He said the company is reallocating capacity to accelerate next-generation platforms and indicated that a new chip would be revealed at NVIDIA's GTC, scheduled March 16–19, 2026, with a keynote from Huang on March 16. He described Groq as an accelerator specialized in ultra-low-latency inference and emphasized that NVIDIA is expanding its architecture and ecosystem offerings to support agentic and physical AI.
Platform view: data centers, omniverse and physical AI
Huang outlined a holistic view of NVIDIA's positioning: the company provides hardware for model training in data centers, tools such as Omniverse to teach and test models, and on-device AGX platforms for real-time inference in robots. He stressed developer adoption, citing millions of developers building on NVIDIA's robotics stack, and repeated the company's belief in a long runway for physical AI, which NVIDIA executives have described as a "multi-trillion dollar opportunity."
Market and stock outlook
Huang and commentators in the session argued that growth in NVIDIA’s earnings and the ongoing expansion of compute demand support a positive long‑term outlook. Remarks in the program noted that NVIDIA continues to capture the majority of the high‑end inference and training market and that a combination of roadmap execution and hyperscaler capex should drive continued revenue and earnings expansion.
References:
- Full Morgan Stanley conference transcript (Seeking Alpha)
- TechCrunch coverage of Jensen Huang at Morgan Stanley
- NVIDIA GTC 2026 program and keynote schedule

