NextFin News - The architectural center of gravity in global computing has shifted from the processor to the wire. Nvidia’s networking division, a business once viewed as a secondary appendage to its dominant graphics chip operations, has officially ascended to become the company’s second-largest revenue segment. In the fiscal year ending early 2026, the division generated a staggering $31 billion in revenue, a figure that not only underscores the explosive demand for artificial intelligence but also signals a fundamental change in how data centers are built.
The scale of this achievement is difficult to overstate. In the most recent quarter alone, networking revenue hit $11 billion, representing a 3.5-fold increase over the previous year. To put that in perspective, Nvidia’s quarterly networking sales now exceed the entire networking revenue of long-standing industry titans like Cisco Systems. While the world has been fixated on the H100 and Blackwell GPUs, the "plumbing" that connects these chips has quietly become a financial juggernaut in its own right. U.S. President Trump’s administration has closely monitored these developments as part of a broader push to maintain American leadership in AI infrastructure, recognizing that the ability to move data is now as strategically vital as the ability to process it.
This transformation was set in motion six years ago with the $7 billion acquisition of Mellanox Technologies. At the time, critics questioned the price tag, but the bet on high-speed interconnects has paid off handsomely. The division’s growth is driven by two primary technologies: InfiniBand and Spectrum-X Ethernet. InfiniBand remains the gold standard for massive AI training clusters, offering the low latency and high throughput required for thousands of GPUs to act as a single, cohesive brain. However, the real surprise has been the rapid adoption of Spectrum-X, Nvidia’s high-performance Ethernet platform designed specifically for generative AI. By bringing "lossless" networking to the more common Ethernet standard, Nvidia has successfully captured a segment of the market that was previously the stronghold of traditional networking vendors.
The economic logic behind this surge is rooted in the "AI Factory" concept. In a traditional data center, networking was a utility used to connect disparate servers. In an AI factory, the network is the backplane of a single, massive computer. As AI models grow in complexity, the bottleneck is no longer just the speed of a single chip, but the speed at which data can travel between tens of thousands of chips. This has allowed Nvidia to sell not just components, but entire systems. When a cloud provider buys a cluster of Blackwell GPUs, they are increasingly incentivized to buy the accompanying Nvidia switches, cables, and software to ensure peak performance. This "full-stack" lock-in has effectively turned networking into a high-margin toll booth for the AI era.
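The bandwidth bottleneck behind the "AI factory" argument can be made concrete with a back-of-envelope calculation. The sketch below estimates how long a cluster spends synchronizing gradients with a standard ring all-reduce at different link speeds; the model size, cluster size, and link rates are illustrative assumptions, not published Nvidia specifications.

```python
# Back-of-envelope: time to synchronize gradients across a GPU cluster
# via ring all-reduce. All figures are illustrative assumptions.

def ring_allreduce_seconds(grad_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Time for one ring all-reduce, ignoring latency and compute overlap.

    In a ring all-reduce, each GPU moves 2*(N-1)/N of the gradient
    volume over its network link.
    """
    bytes_on_wire = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8  # Gb/s -> bytes/s
    return bytes_on_wire / link_bytes_per_s

# Hypothetical 10-billion-parameter model with fp16 gradients (2 bytes each)
grad_bytes = 10e9 * 2

for link in (100, 400, 800):  # assumed per-GPU link speeds in Gb/s
    t = ring_allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=link)
    print(f"{link} Gb/s link: {t:.2f} s per gradient sync")
```

Because this synchronization happens on every training step, doubling link bandwidth roughly halves the time the GPUs sit idle waiting on the network, which is why faster fabric translates so directly into cluster-level throughput.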
The competitive landscape is reacting with a mixture of awe and desperation. Traditional networking companies are finding that their general-purpose hardware is ill-equipped for the specialized, bursty traffic patterns of AI workloads. While competitors like Broadcom and Marvell are seeing their own AI-related revenues climb, Nvidia’s advantage lies in its ability to co-design the chip and the network. This vertical integration creates a performance moat that is proving difficult to bridge. For the hyperscalers—Amazon, Google, and Microsoft—the choice is often between building their own custom silicon or paying the "Nvidia tax" to get to market faster. Given the current arms race in generative AI, most are choosing the latter.
Despite the astronomical numbers, the networking division remains something of a "stealth" giant. It receives far less public scrutiny than the compute division, yet its $31 billion in annual revenue would make it a Fortune 500 company on its own. The division’s success has also diversified Nvidia’s revenue stream, providing a buffer should demand for high-end GPUs ever face a cyclical cooling. As long as the world continues to build larger and more complex AI models, demand for the specialized fabric that holds these systems together will only intensify. The wire, it seems, has become just as valuable as the silicon.
Explore more exclusive insights at nextfin.ai.
