NextFin

Jensen Huang at CES 2026: "A Platform Shift" — From Models to Machines

Summarized by NextFin AI
  • NVIDIA CEO Jensen Huang emphasized that AI represents a significant platform shift in computing, requiring a complete reinvention of the computing stack, from chips to applications.
  • He introduced Cosmos, a world foundation model for physical AI, and Alpamayo, an autonomous-vehicle model that reasons and acts based on real-time data.
  • The new Vera Rubin AI platform was unveiled, featuring six co-designed chips capable of delivering 100 petaflops of AI performance.
  • Huang highlighted NVIDIA's commitment to open-source models and collaboration with partners to foster innovation across industries.

NextFin News - On January 5, 2026, NVIDIA founder and CEO Jensen Huang delivered a keynote at the Fontainebleau in Las Vegas to open the company’s presence at CES 2026. Huang addressed a live auditorium and an online audience, framing the talk as a launchpad for the company’s work across AI models, robotics, simulation and new infrastructure. (gadgets360.com)

Across roughly 90 minutes onstage, Huang laid out a single, consistent theme: AI is a platform shift that requires reinventing every layer of the computing stack, and NVIDIA intends to deliver that stack — from chips to simulation — in the open. The keynote included multiple product and platform announcements (including a new rack-scale platform and an autonomous-vehicle model) that Huang positioned as practical steps toward "physical AI" that can reason and act. (theverge.com)

Platform shifts and the five-layer stack

Huang opened by describing computing as periodically resetting every 10–15 years, and he placed the current moment alongside previous shifts: mainframe to PC, PC to internet, internet to cloud, cloud to mobile. He asserted that AI is now the next platform shift and that it brings two simultaneous changes: AI becomes the foundation for new applications, and the way software is developed and run is being reinvented.

"You no longer program the software, you train the software," he said. "You don't run it on CPUs, you run it on GPUs." He framed this as a five-layer reinvention — from chips and infrastructure to models and applications — and said a decade’s worth of computing value is being modernized to this new approach.
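The "program vs. train" contrast Huang drew can be made concrete with a toy example. Everything below is illustrative — a hand-written rule next to a rule whose parameters are extracted from labeled data — not anything NVIDIA ships:

```python
# Toy contrast between "programming" software (a human writes the rule)
# and "training" it (the rule's parameters come from labeled examples).
# Purely illustrative; the spam "classifier" is deliberately naive.

def programmed_is_spam(text):
    """Programmed: a human encodes the decision rule directly."""
    return "free money" in text

def train_is_spam(examples):
    """Trained: the keyword set is learned from (text, label) pairs."""
    spam_words = set()
    for text, label in examples:
        if label == "spam":
            spam_words.update(text.split())
    return lambda text: any(w in text.split() for w in spam_words)

data = [("free money now", "spam"), ("meeting at noon", "ham")]
trained_is_spam = train_is_spam(data)

print(trained_is_spam("claim free prize"))     # True: "free" was learned
print(programmed_is_spam("claim free prize"))  # False: the rule is too literal
```

The trained variant generalizes because its behavior comes from data rather than from the programmer's anticipation of every case — the point Huang was making at much larger scale.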

Open models, DGX clouds and building in the open

Huang emphasized NVIDIA’s work on open models and internal DGX supercomputers used to develop those models. He described the company’s approach as making model work and supporting libraries available so every company, industry and country can participate.

He listed multiple open projects and libraries — including the NeMo libraries, physics-oriented NeMo libraries and model toolkits for biology, weather and robotics — and reiterated that the models, the training data and the lifecycle tools are being open-sourced to encourage broad adoption and trustworthy development.

"We open source all the models. We help you make derivatives from them. We have a whole suite of libraries... so that you could process the data, you could generate data, you could train the model, you could create the model, evaluate the model, guardrail the model all the way to deploying the model."

Agentic systems, reasoning and multimodality

Huang described the rise of agentic AI — systems that can plan, use tools, research, and decompose problems into steps — and said these capabilities are unlocking new, practical applications. He explained that reasoning, test-time scaling ("thinking in real time") and multi-model approaches let agents call the right models for each part of a task.

"The ability to use reinforcement learning and chain of thought... and search and planning... has made it possible for us to have this basic capability," he said. He argued that multimodality and multi-model architectures are central, and that agentic systems will be the user interface for future enterprise platforms.
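The plan-and-route behavior Huang described — decompose a task, then call the right model for each step — can be sketched as a minimal agent loop. The planner, the skill names and the routing table below are all hypothetical stand-ins, not NVIDIA or any real agent framework's API:

```python
# Minimal sketch of an agentic loop: plan a task into steps, route each
# step to a specialist "model", collect the results. All skill names and
# model stubs are hypothetical placeholders.

def plan(task):
    """Stubbed planner: decompose a task into (skill, subtask) steps."""
    return [("research", f"gather facts for: {task}"),
            ("code",     f"write a script for: {task}"),
            ("review",   f"check the result of: {task}")]

# Hypothetical routing table: each skill maps to a different model/tool.
MODELS = {
    "research": lambda q: f"[search results for '{q}']",
    "code":     lambda q: f"[generated code for '{q}']",
    "review":   lambda q: f"[critique of '{q}']",
}

def run_agent(task):
    results = []
    for skill, subtask in plan(task):
        model = MODELS[skill]            # pick the right model for this step
        results.append(model(subtask))   # "tool use": call it, keep the output
    return results

print(run_agent("summarize quarterly earnings"))
```

A production agent would loop until the plan is satisfied and feed earlier results into later steps; the fixed three-step plan here just makes the decompose-and-route structure visible.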

Cosmos and physical AI: simulation and synthetic data

Turning to physical AI, Huang introduced Cosmos as a "world foundation model" trained on internet-scale video, driving and robotics data and 3D simulation. He explained that Cosmos learns unified representations that align language, images, 3D and actions and that it enables physically coherent generation and closed-loop simulation.

"Cosmos turns compute into data, training AVs for the long tail," Huang said. "Developers can run interactive closed-loop simulations in Cosmos. When actions are made, the world responds."
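"Closed-loop" is the key word: the world model reacts to each action, and the policy's next decision depends on that reaction. The distinction can be shown with a toy one-dimensional car-following world; the dynamics and controller below are invented for illustration and have nothing to do with Cosmos internals:

```python
# Toy closed-loop simulation: the world state responds to each action,
# and the policy's next decision depends on that response. The 1-D
# "follow a lead car" dynamics are invented for illustration only.

def world_step(gap, action):
    """World model: the gap to a lead car changes in response to our action."""
    lead_speed = 1.0
    ego_speed = 1.0 + action       # action > 0 accelerates, < 0 brakes
    return gap + lead_speed - ego_speed

def policy(gap, target=10.0):
    """Simple proportional controller: close the gap toward the target."""
    return 0.1 * (gap - target)

gap = 20.0
for _ in range(100):               # closed loop: act, world responds, observe
    gap = world_step(gap, policy(gap))

print(round(gap, 2))               # gap converges toward the 10.0 target
```

In an open-loop replay the recorded world would ignore the ego vehicle's actions; here the loop converges because every action changes what the policy sees next — the property that makes a world model useful for training on long-tail scenarios.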

Alpamayo: a "thinking, reasoning" autonomous vehicle model

Huang announced Alpamayo, an autonomous driving model he characterized in the keynote as "the world's first thinking reasoning autonomous vehicle AI," trained end-to-end from camera input to actuation output using both human driving demonstrations and Cosmos-generated synthetic miles.

"Alpamayo is trained end to end, literally from camera in to actuation out... it reasons about what action it is about to take. It tells you what action it's going to take, the reasons by which it came about that action, and then the trajectory."

Huang showed a live, hands-free demo and explained that Alpamayo is coupled with a traceable AV safety stack and policy evaluator: when the model is confident, it drives; when confidence is insufficient, the system falls back to classical guardrail systems.
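The fallback behavior described above — learned policy when confident, classical guardrails otherwise — amounts to a confidence-gated arbiter. The sketch below shows that structure only; the threshold value and both policies are placeholder assumptions, not NVIDIA's actual safety stack:

```python
# Sketch of a confidence-gated arbiter between a learned driving policy
# and a classical rule-based fallback. The threshold and both policies
# are placeholders, not NVIDIA's safety stack.

CONF_THRESHOLD = 0.8  # assumed cutoff for trusting the learned model

def learned_policy(obs):
    """Stand-in for the end-to-end model: returns (action, confidence)."""
    return ("steer_left", obs.get("visibility", 1.0))

def rule_based_fallback(obs):
    """Stand-in for the classical guardrail stack: always conservative."""
    return "slow_and_hold_lane"

def drive(obs):
    action, confidence = learned_policy(obs)
    if confidence >= CONF_THRESHOLD:
        return ("learned", action)                 # confident: the model drives
    return ("fallback", rule_based_fallback(obs))  # else: guardrails take over

print(drive({"visibility": 0.95}))  # ('learned', 'steer_left')
print(drive({"visibility": 0.30}))  # ('fallback', 'slow_and_hold_lane')
```

The design point is traceability: because the gate is explicit, every decision can be attributed either to the learned model (with its stated confidence) or to a deterministic rule, which is what makes the stack auditable.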

Journalistic coverage of the keynote framed this announcement as part of NVIDIA's public CES 2026 presentation and reported the same model and demo details. (axios.com)

Vera Rubin: a new rack-scale AI platform

Huang unveiled Vera Rubin as NVIDIA’s next-generation rack-scale AI platform, describing it as a system of six co-designed chips (the Vera CPU, Rubin GPU, NVLink switch, ConnectX-9 NIC, BlueField-4 DPU and Spectrum-X switch) that operate as one supercomputer. He stressed performance, memory expansion for long-context models, confidential computing, and 45°C liquid cooling as practical engineering advances.

"Vera Rubin arrives just in time for the next frontier of AI... capable of delivering 100 petaflops of AI, five times that of its predecessor."

The Verge and other outlets published early coverage of the Vera Rubin platform and the hardware breakdown that Huang presented onstage. (theverge.com)

Robotics, Omniverse and training robots in simulation

Huang reiterated NVIDIA’s work with Omniverse, Isaac Sim and Isaac Lab as the simulation backbone that lets robots learn object permanence, causality and other common-sense physical laws. He showed robot demos and said robots learn inside simulated environments so synthetic, physics-grounded data can be generated at scale for training.

"These ideas are common sense to even a little child, but for AI, it's completely unknown. And so we have to create a system that allows AIs to learn the common sense of the physical world."

Partnerships, vertical stacks and ecosystem openness

Throughout the keynote Huang named enterprise partners integrating NVIDIA technology into their stacks — from Palantir and ServiceNow to Snowflake, CodeRabbit and CrowdStrike — and described work with automotive partners to bring the full stack to production vehicles. He reiterated that NVIDIA builds entire stacks but opens them to the ecosystem so partners can adopt parts or the whole system.

"We build the entire stack... but the entire stack is open for the ecosystem."

Press reporting confirmed the keynote timing and venue and noted partner demonstrations and planned vehicle deployments tied to the announcements. (gadgets360.com)

Closing and tone

Huang closed by framing the moment as the start of a new industrial era in which AI-powered physical systems — from autonomous cars to factory robots — will be designed, simulated and tested in software before being built in the real world. He repeatedly emphasized openness, simulation-driven synthetic data, and the centrality of GPUs and new infrastructure to enable models that can think and act.

"We are reinventing AI across everything from chips to infrastructure to models to applications, and our job is to create the entire stack so that all of you could create incredible applications for the rest of the world."

References

Event and coverage:

Video: NVIDIA Live at CES 2026 with Jensen Huang (NVIDIA livestream & partner feeds were broadcast during the keynote).


