NextFin News - Nvidia’s dominance in the artificial intelligence hardware market is facing a new technical and economic bottleneck that could temper its record-breaking stock rally. Gil Luria, managing director and senior software analyst at D.A. Davidson, warned in a recent client note that the escalating cost and scarcity of High-Bandwidth Memory (HBM) are beginning to squeeze the margins of Nvidia’s latest Blackwell architecture. While the market has focused on GPU compute power, Luria suggests that the next generation of AI scaling will be constrained less by compute than by the "memory wall": the physical and financial limits on how quickly data can be fed to the processor.
Luria, who has maintained a notably cautious "Neutral" stance on Nvidia for much of the past year despite the broader market's euphoria, argues that the reliance on HBM3E memory in the Blackwell chips introduces a level of supply-chain fragility not seen in previous cycles. D.A. Davidson’s research indicates that memory now accounts for a significantly larger portion of the total bill of materials for Nvidia’s top-tier H200 and B200 chips compared to the older H100 models. Luria’s long-term perspective has often centered on the cyclicality of the semiconductor industry, and he remains one of the few prominent voices on Wall Street questioning whether Nvidia can maintain its current trajectory as input costs rise.
The analyst's concerns are supported by recent pricing data from the memory sector. South Korean giants Samsung and SK Hynix have reportedly increased prices for HBM3E by approximately 20% for 2026 orders, citing a critical shortage of manufacturing capacity. Because HBM requires complex 3D stacking processes that have lower yields than standard DRAM, the supply remains inelastic. For Nvidia, this means that even as demand for AI chips remains robust, the cost of the "memory sandwich" surrounding its processors is rising faster than the prices it can pass on to hyperscale customers like Microsoft and Amazon.
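The margin mechanics Luria describes can be seen in a simplified calculation. The figures below are hypothetical placeholders, not D.A. Davidson's estimates or Nvidia's actual costs; the point is only that when memory is a large share of the bill of materials, a 20% memory price increase takes a visible bite out of gross margin if selling prices stay flat.

```python
# Hypothetical illustration of how an HBM price increase compresses
# gross margin when the accelerator's selling price is held flat.
# All figures are invented for the example, not analyst estimates.

asp = 35_000.0          # assumed selling price per accelerator (USD)
memory_cost = 8_000.0   # assumed HBM cost per unit before the increase
other_cost = 6_000.0    # assumed cost of die, packaging, interposer, etc.
hbm_increase = 0.20     # the ~20% HBM3E price increase cited for 2026

def gross_margin(asp: float, cogs: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (asp - cogs) / asp

before = gross_margin(asp, memory_cost + other_cost)
after = gross_margin(asp, memory_cost * (1 + hbm_increase) + other_cost)

print(f"Gross margin before HBM increase: {before:.1%}")  # 60.0%
print(f"Gross margin after HBM increase:  {after:.1%}")   # 55.4%
```

In this toy example, where memory makes up more than half of unit cost, a 20% rise in that one line item shaves nearly five points off gross margin, which is exactly the "absorb it or pass it on" question Luria raises.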
This perspective remains a minority view on the sell-side, where the vast majority of analysts maintain "Buy" ratings based on the sheer volume of Blackwell pre-orders. Most institutional researchers argue that Nvidia’s pricing power is sufficient to absorb these memory costs. However, Luria’s warning highlights a shift from a "chip shortage" to a "memory shortage," a distinction that carries different implications for Nvidia’s valuation. If memory becomes the definitive bottleneck, Nvidia’s ability to ship completed systems could be throttled regardless of how many GPUs it can produce at TSMC.
The competitive landscape adds another layer of uncertainty to Luria’s thesis. AMD’s recently unveiled Instinct MI350 series has doubled down on memory capacity, boasting 288GB of HBM3E—a direct challenge to Nvidia’s Blackwell specifications. If AMD can secure a more stable or cost-effective memory supply through its partnerships, it may offer a better price-to-performance ratio for large language model inference, where memory bandwidth is often more critical than raw compute cycles. This potential for market share erosion is a key pillar of the D.A. Davidson bear case, though it assumes AMD can overcome Nvidia’s formidable software moat.
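The bandwidth argument can be made concrete with back-of-the-envelope arithmetic. In autoregressive decoding, each generated token requires streaming the model's weights from memory, so single-stream throughput is capped by HBM bandwidth long before compute saturates. The numbers below are illustrative assumptions, not published specifications for the MI350 series or Blackwell:

```python
# Back-of-the-envelope sketch of why LLM inference is often
# memory-bandwidth bound. During autoregressive decoding, every
# generated token requires streaming the full set of model weights
# from HBM, so peak tokens/sec per GPU is roughly bandwidth / bytes.
# All hardware and model numbers below are illustrative assumptions.

params_billion = 70          # assumed model size (e.g., a 70B model)
bytes_per_param = 2          # FP16/BF16 weights
hbm_bandwidth_tb_s = 8.0     # assumed HBM3E bandwidth, TB/s

weight_bytes = params_billion * 1e9 * bytes_per_param
bandwidth_bytes = hbm_bandwidth_tb_s * 1e12

# Upper bound on single-stream decode throughput (ignores KV cache,
# kernel overheads, and batching, which all change the picture).
tokens_per_second = bandwidth_bytes / weight_bytes
print(f"Bandwidth-bound ceiling: ~{tokens_per_second:.0f} tokens/sec")
```

Under these assumptions, a single decode stream tops out near 57 tokens per second no matter how much raw compute the chip has, which is why memory capacity and bandwidth, rather than FLOPS, often decide inference economics.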
Ultimately, the "memory problem" identified by Luria serves as a reminder that the AI infrastructure build-out is subject to the same physical and economic laws as any other industrial cycle. The sustainability of Nvidia’s margins will depend on whether it can innovate around the memory wall or if it will be forced to share an increasing portion of its AI windfall with the memory manufacturers. While the broader market continues to bet on uninterrupted growth, the rising cost of HBM suggests that the most profitable era of the AI trade may be entering a more complicated, cost-intensive phase.
Explore more exclusive insights at nextfin.ai.
