NextFin

“The inference inflection has arrived”: Jensen Huang on Nvidia’s $2B Marvell Partnership

Summarized by NextFin AI
  • Nvidia announced a strategic partnership with Marvell Technology on March 31, 2026, involving a $2 billion investment to integrate Marvell into Nvidia’s NVLink Fusion platform.
  • CEO Jensen Huang emphasized that this partnership marks a fundamental shift in computing, with AI becoming central to data centers and requiring specialized processors.
  • Marvell's contribution focuses on interconnect technology and custom silicon, enhancing the flexibility and innovation of the AI ecosystem.
  • The investment aims to expand the total addressable market (TAM) for both companies, creating new opportunities rather than dividing existing ones.

NextFin News - On March 31, 2026, Nvidia announced a strategic partnership with Marvell Technology, along with a $2 billion investment, under which Marvell will connect to Nvidia's NVLink Fusion rack‑scale platform. The announcement and accompanying comments by Nvidia CEO Jensen Huang and Marvell CEO Matt Murphy were published in an Nvidia press release the same day. (nvidianews.nvidia.com)

The discussion that follows presents, in the interviewees’ own words and sequence, the core statements from that conversation: why the partnership was formed, how the companies expect customers to use NVLink Fusion and Marvell’s XPUs, and how both executives see the market opportunity and the investment’s role in expanding Nvidia’s AI ecosystem. The segment aired during the market coverage around the March 31 announcement and was referenced in contemporaneous business coverage. (investing.com)

AI inflection and the platform transition

Jensen Huang opened by describing the moment as a fundamental platform shift in computing: "The AI inflection point has arrived. ... All of the world's data centers are going to be replaced with this new form of doing computing. We call accelerated computing." He emphasized that Nvidia GPUs power most of today's AI data centers, while stressing the need for extensibility for customers who want specialized, semi‑custom processors.

"The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories."

NVLink Fusion, rack‑scale architecture and interoperability

Huang explained how Nvidia is extending its architecture to allow third‑party specialized processors to interoperate with Nvidia systems at rack scale. He described extending the entire chassis of Grace, Blackwell and Vera Rubin systems "through NVLink and connect it to Marvell," enabling customers to use either all‑Nvidia gear or augment Nvidia gear with specialized processors while remaining system‑compatible.

"We're extending our architecture, starting with the networking architecture, basically the entire chassis of our Grace Blackwell and Vera Rubin systems, and we're going to extend that through NVLink and connect it to Marvell."

Marvell’s role: connectivity, photonics and semi‑custom XPUs

Matt Murphy framed Marvell’s contribution around interconnect technology, silicon photonics and custom silicon. He said customers value flexibility and innovation and that the partnership enables a robust ecosystem that can provide complete rack‑scale solutions.

"We have key strengths in interconnect technology, silicon photonics, custom silicon... Marvell, together with Nvidia, can provide... interoperability. We can drive complete solutions, and we can enable customer choice."

Expanding the TAM and why this is not zero‑sum

Murphy recalled early collaboration with Nvidia and argued that companies like Nvidia and Marvell create, rather than merely divide, market opportunity. He said the partnership is intended to grow the overall addressable market so that both companies can capture more of it.

"People typically look at the semiconductor market as a zero‑sum game... Companies like Nvidia and Marvell, we create the market. We create the TAM."

Token economics and real‑time, agentic enterprise software

Huang laid out a view of future enterprise software that is generative and processed in real time. He explained that tokens—generated and processed by infrastructure—will become central to enterprise computing, expanding software addressable markets while changing gross‑margin profiles.

"Every single character you see on your display, every piece of information that is produced, has to be generated in real time... In the future, the multiple trillions of dollars of enterprise spend will all be token‑enhanced."

Why Nvidia invested $2 billion

Huang and Murphy described the investment as both strategic and financial: it tightens ties between Marvell’s connectivity and Nvidia’s AI platform, ensures NVLink compatibility for Marvell’s semi‑custom XPUs, and brings Marvell technology into telecom base stations as part of a broader AI‑RAN effort. Huang summarized the rationale as ecosystem expansion and participation in the enlarged TAM.

"This partnership is about extending Nvidia's AI ecosystem and Nvidia's AI architecture... this investment is really a fantastic investment for us. We're smart investors. We've expanded the TAM for both of us as a result of this partnership, and we want to be an investor in that future."

Telecommunications, base stations and AI at the edge

Huang highlighted plans to extend AI infrastructure beyond cloud data centers to telecommunications base stations, describing a future in which base stations run AI models and become part of the AI infrastructure.

"The future telecommunication network... In the future, just like we did with cloud computing, we're going to turn the AI base stations into a part of the AI infrastructure... we're going to be able to run AI models right inside the base stations in the future."

Demand, market reaction and near‑term outlook

Matt Murphy told the show that Marvell’s demand remains strong, pointing to a recent quarter of record earnings and continued data center growth guidance. He said the partnership and cash infusion help "turbocharge" Marvell’s growth and enable broader, rack‑scale solutions together with Nvidia.

"Demand continues to be very strong. We had our best quarterly earnings ever last quarter... This just helps turbocharge our growth and our opportunity."

The companies’ joint announcement and the interview comments were widely reported alongside market coverage on the day of the press release. Coverage highlighted the NVLink Fusion technical integration and the $2 billion investment as the headline elements of the partnership. (nvidianews.nvidia.com)

Selected direct quotes from the interview

Jensen Huang: The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories.

Matt Murphy: Our expanded partnership with NVIDIA reflects the growing importance of high‑speed connectivity, optical interconnect and accelerated infrastructure in scaling AI.

Jensen Huang: We're extending our architecture... and we're going to extend that through NVLink and connect it to Marvell.

References

For the companies’ joint statements and the formal announcement: NVIDIA Newsroom — "NVIDIA AI Ecosystem Expands as Marvell Joins Forces Through NVLink Fusion" (March 31, 2026). (nvidianews.nvidia.com)

Contemporaneous market coverage and reporting: Investing.com — "Nvidia invests $2B in Marvell, forms AI infrastructure partnership" (March 31, 2026). (investing.com)

Aggregator coverage and contemporaneous headlines referencing the interview and CNBC coverage: Techmeme roundup (March 31, 2026). (techmeme.com)

Additional reporting that referenced the executives’ comments: Motley Fool / AOL Finance — coverage referencing the CNBC interview (March 31, 2026). (aol.com)

Note: The article above presents the interviewees’ statements from the interview transcript and places them in the context of the companies’ March 31, 2026 announcement and media coverage that day.


