NextFin News - On January 6, 2026, at the Consumer Electronics Show (CES) in Las Vegas, Nvidia officially unveiled DLSS 4.5, the latest iteration of its Deep Learning Super Sampling technology. This update, which began rolling out to consumers this week, introduces a headline "6x" Multi Frame Generation mode and a new "Preset M" algorithm designed to eliminate long-standing issues with ghosting and temporal instability. According to Tech4Gamers, the release is strategically timed with the launch of the GeForce RTX 50-series GPUs, as the most advanced features of DLSS 4.5 are architecturally locked to the new Blackwell-based hardware. While Nvidia positions the update as a leap toward 4K 240Hz gaming, early independent testing reveals a stark performance divergence between the latest silicon and previous generations.
The technical core of DLSS 4.5 is its second-generation Transformer-based model. For users of the newly launched RTX 50-series, the technology lets the GPU render a single native frame and use AI to generate five subsequent frames, effectively sextupling the perceived frame rate. This computational heavy lifting, however, requires native FP8 hardware acceleration. According to eTeknix, the new Preset M is approximately five times more computationally demanding than the Preset K found in DLSS 4.0. On RTX 50 and RTX 40 cards, which possess the necessary hardware hooks, the performance overhead is a negligible 2% to 3%. Older RTX 30 (Ampere) and RTX 20 (Turing) cards, by contrast, lack native FP8 support and suffer significant performance degradation when running the new version.
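The frame-multiplication and overhead arithmetic above can be sketched in a few lines. This is illustrative only: the function names and sample base frame rates are assumptions, not part of any Nvidia API, and the percentages are the figures reported by eTeknix.

```python
def perceived_fps(base_fps: float, generated_per_rendered: int) -> float:
    """Displayed frame rate when AI inserts N generated frames per rendered frame."""
    return base_fps * (1 + generated_per_rendered)


def fps_after_overhead(base_fps: float, overhead_fraction: float) -> float:
    """Base frame rate after the upscaler's own compute cost is paid."""
    return base_fps * (1 - overhead_fraction)


# 6x Multi Frame Generation: 1 rendered frame + 5 generated frames
assert perceived_fps(40, 5) == 240  # 40 rendered FPS is displayed as 240 FPS

# Preset M overhead: ~2-3% on FP8-capable RTX 50/40 cards vs. the
# 12-20% loss reported on RTX 30 (hypothetical 60 FPS starting point)
print(fps_after_overhead(60, 0.03))  # ~58.2 FPS on newer hardware
print(fps_after_overhead(60, 0.16))  # ~50.4 FPS, mid-range of the Ampere penalty
```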
Data from early benchmarks in titles like Cyberpunk 2077 and Black Myth: Wukong illustrate a troubling trend for legacy users. While DLSS 4.0 provided a consistent performance boost across all RTX-enabled hardware, DLSS 4.5 results in a 12% to 20% frame rate loss on RTX 30-series cards compared to the previous version. More critically, the VRAM requirements for DLSS 4.5 have surged by 87% to 103% on older GPUs. For mid-range cards like the RTX 2060 or RTX 3060 with limited memory buffers, the activation of DLSS 4.5 can actually result in performance lower than native resolution with traditional anti-aliasing, rendering the upscaler counterproductive for a large segment of the existing user base.
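The VRAM pressure described above can be checked with simple budget arithmetic. The 12 GB (RTX 3060) and 6 GB (RTX 2060) capacities are the cards' standard configurations, but the game and upscaler working-set sizes below are hypothetical placeholders chosen only to show how an 87% to 103% surge can overflow a small buffer.

```python
def dlss_vram_gb(baseline_gb: float, surge_pct: float) -> float:
    """VRAM the upscaler needs after the reported surge (87-103% on older GPUs)."""
    return baseline_gb * (1 + surge_pct / 100)


def fits_in_buffer(game_gb: float, upscaler_gb: float, capacity_gb: float) -> bool:
    """Whether the game's working set plus the upscaler fits in the card's VRAM."""
    return game_gb + upscaler_gb <= capacity_gb


# Hypothetical 1.5 GB DLSS working set growing by the worst-case 103%
surged = dlss_vram_gb(1.5, 103)  # ~3.05 GB

# Assume the game itself needs 5 GB: the 12 GB card absorbs the surge,
# the 6 GB card spills to system RAM and can fall below native performance
print(fits_in_buffer(5.0, surged, 12.0))  # True
print(fits_in_buffer(5.0, surged, 6.0))   # False
```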
This shift represents a fundamental change in Nvidia’s product philosophy. Historically, DLSS was marketed as a "longevity" tool that allowed older cards to remain relevant in modern titles. With version 4.5, Nvidia is increasingly using AI features as a "tier wall" to differentiate its latest hardware. By locking the 6x Multi Frame Generation behind the RTX 50-series, the company is forcing a choice upon enthusiasts: accept the limitations of traditional rendering or upgrade to the latest silicon to access the "AI-native" gaming experience. This strategy is particularly effective as display manufacturers push toward 4K OLED panels with 240Hz and 360Hz refresh rates, which are virtually impossible to drive at native resolutions even with flagship hardware.
However, the reliance on heavy frame generation introduces a secondary challenge: input latency. While the frame counter may show 240 FPS, the actual responsiveness of the game is still tied to the base frame rate of the rendered frames. As the ratio of generated-to-real frames increases to 5:1, the disconnect between visual fluidity and tactile response becomes more pronounced. For competitive e-sports players, the "6x" mode may prove to be a visual luxury that compromises the millisecond-level precision required for high-stakes play. Thompson, a senior analyst at eTeknix, suggests that for users on Ampere or Turing hardware, the optimal path remains sticking with DLSS 4.0 (Preset K) to avoid the performance and latency penalties of the newer model.
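The latency disconnect can be made concrete with a back-of-the-envelope model: input responsiveness tracks the rendered base rate, not the displayed rate. This sketch approximates latency as one rendered-frame time and ignores the rest of the input pipeline, so the absolute numbers are illustrative.

```python
def input_latency_ms(displayed_fps: float, generated_per_rendered: int) -> float:
    """Approximate input latency as the time between *rendered* frames.

    With N generated frames per rendered frame, only 1/(N+1) of the
    displayed frames actually sample player input.
    """
    base_fps = displayed_fps / (1 + generated_per_rendered)
    return 1000.0 / base_fps


# 240 FPS displayed via 6x MFG: only 40 rendered frames/s -> 25 ms per input sample
print(input_latency_ms(240, 5))  # 25.0

# The same 240 FPS rendered natively would sample input every ~4.2 ms
print(input_latency_ms(240, 0))
```

This is why a frame counter reading 240 FPS in 6x mode can still feel like a 40 FPS game to a competitive player.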
Looking forward, the rollout of DLSS 4.5 suggests that the era of "universal" software optimization at Nvidia may be ending. As AI models become more complex and hardware-specific, the fragmentation of the PC gaming market will likely accelerate. We expect future iterations of DLSS to move further away from simple upscaling and toward full-scene neural reconstruction, a process that will almost certainly require the specialized tensor cores found only in the most recent GPU architectures. For the broader industry, this sets a precedent where the "software" version of a game is no longer a static experience, but one that scales—or fails—based on the specific AI capabilities of the underlying silicon, effectively turning the GPU market into a subscription-like cycle of hardware-locked feature updates.
Explore more exclusive insights at nextfin.ai.
