NextFin News - The opening of NVIDIA’s GTC 2026 in San Jose this week has effectively fired the starting gun on the HBM4 era, as Samsung Electronics and SK Hynix unveiled the hardware that will underpin the next generation of artificial intelligence. Looming over the exhibition is U.S. President Trump’s push for domestic manufacturing and high-tech sovereignty, yet the key technical breakthroughs remain firmly in the hands of the South Korean giants. The two firms showcased their latest High Bandwidth Memory (HBM4) solutions, designed specifically to feed the insatiable data appetite of NVIDIA’s newly revealed "Vera Rubin" architecture, the successor to the Blackwell series.
The stakes for this generation of memory are higher than ever before. While previous iterations of HBM were largely about increasing capacity and speed, HBM4 represents a fundamental shift in how memory and logic interact. According to TrendForce, Samsung is expected to begin its HBM4 supply as early as the first quarter of 2026, with SK Hynix and Micron following in the second quarter. This timeline suggests a narrowing gap between the market leader, SK Hynix, and its perennial rival, Samsung, which has spent the last year aggressively restructuring its semiconductor division to regain its footing in the AI supply chain.
SK Hynix used the GTC stage to demonstrate its "Accelerator in Memory" (AiM) technology, a Processing-in-Memory (PIM) solution that embeds computing units directly within the memory chips. This architecture addresses the "memory wall"—the bottleneck where data transfer speeds between the GPU and memory cannot keep pace with the GPU's processing power. By performing matrix calculations within the memory itself, SK Hynix claims it can drastically reduce energy consumption while boosting throughput for large-scale language models. The company is currently delivering final HBM4 samples to NVIDIA, aiming to cement a leadership position it has held since the early days of the HBM2E cycle.
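The "memory wall" can be made concrete with a roofline-style calculation: a workload is memory-bound whenever its arithmetic intensity (operations performed per byte moved from memory) falls below the ratio of the accelerator's compute peak to its memory bandwidth. The sketch below uses purely illustrative numbers, not published Rubin or HBM4 specifications:

```python
# Roofline-style check: is a workload compute-bound or memory-bound?
# All hardware figures below are hypothetical placeholders, not real specs.

def attainable_tflops(peak_tflops, bandwidth_tbps, intensity_flops_per_byte):
    """Achievable throughput, capped by either compute peak or memory bandwidth."""
    memory_bound_tflops = bandwidth_tbps * intensity_flops_per_byte
    return min(peak_tflops, memory_bound_tflops)

PEAK_TFLOPS = 2000.0    # hypothetical GPU compute peak, TFLOPS
BANDWIDTH_TBPS = 10.0   # hypothetical HBM stack bandwidth, TB/s

# GEMV-style LLM inference step: roughly 2 ops per byte of weights read.
low_intensity = 2.0
# Large tiled GEMM: hundreds of ops per byte thanks to on-chip data reuse.
high_intensity = 400.0

print(attainable_tflops(PEAK_TFLOPS, BANDWIDTH_TBPS, low_intensity))   # 20.0 -> memory-bound
print(attainable_tflops(PEAK_TFLOPS, BANDWIDTH_TBPS, high_intensity))  # 2000.0 -> compute-bound
```

In the memory-bound case the GPU sits almost entirely idle waiting on data, which is the regime PIM approaches such as AiM target: performing the low-intensity operations inside the memory stack avoids the off-chip transfer altogether, rather than trying to widen the bus.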
Samsung’s counter-offensive is equally ambitious. The company is not just pitching memory; it is pitching an "AI Factory" ecosystem. At GTC, Samsung detailed how it is integrating NVIDIA’s digital twin technology into its own fabrication plants to optimize the production of HBM4. This co-design approach extends to the hardware itself, where Samsung is working with TSMC to vertically stack computing units and memory in a single package. This 3D integration effectively turns the memory and the processor into a unified silicon entity, a move that could eventually challenge the traditional dominance of standalone GPU architectures.
The competitive landscape is also being reshaped by geopolitical and logistical pressures. While Micron remains a formidable third player, recent market reports suggesting its exclusion from the initial Rubin supply chain were disputed by analysts, who noted that NVIDIA requires a diversified supplier base to meet global demand. However, the technical lead held by the Korean firms in HBM4—particularly in the transition to 12-layer and 16-layer stacks—gives them a significant margin of safety. As AI models grow in complexity, the "highway" that transports data has become more valuable than the engine that processes it, shifting the balance of power in the semiconductor industry toward those who control the memory.
Explore more exclusive insights at nextfin.ai.
