NextFin

Jensen Huang at CES: "I'm Perfectly Fine" With California's Billionaire Tax and the Push to Industrial AI

Summarized by NextFin AI
  • Nvidia's collaboration with Siemens aims to enhance software, simulation, and automation, integrating AI into Siemens’ systems to improve factory operations.
  • The Vera Rubin GPU is highlighted as being ten times more energy-efficient and ten times more cost-efficient than its predecessor, which could significantly boost customer revenues.
  • Huang emphasizes the energy constraints in the AI industrial revolution, calling for investments in diverse energy sources to overcome historical bottlenecks.
  • Nvidia's strategy includes hiring engineers and licensing technology to expand its software capabilities, focusing on low-latency applications and future platforms.

NextFin News - On January 6, 2026, on the CES floor in Las Vegas, Nvidia CEO Jensen Huang joined Bloomberg Television’s Ed Ludlow for a wide-ranging conversation about Nvidia’s collaboration with Siemens, the company’s newest Vera Rubin platform, supply and energy constraints for AI, China licensing, and taxation policy. Siemens CEO Roland Busch also participated in parts of the interview, which took place during Bloomberg’s coverage of CES.

Partnership with Siemens and an "industrial AI operating system"

Huang described the newly announced, expansive collaboration with Siemens as a cross-spectrum effort to accelerate software, simulation and automation. He said the partnership will speed up EDA software and simulation tools and integrate "physical AI and agentic AI" into Siemens’ Teamcenter and factory automation operating system. As Huang put it, "When we accelerate the software, then we'll get to use it to design our chips and systems. When we accelerate their simulation software, we'll use it in our factories to simulate the thermal properties of our factories."

"And when we integrate our automation and agentic system into their AI industrial operating system, we can then use it in our factory floors with our partners, for example, Foxconn."

From advice to autonomous action: scaling AI in factories

Huang stressed the difference between advisory machine learning and more autonomous, adaptive systems. He said customers already use ML on the shop floor but that the new generation of models enables systems to "really act on your behalf." He acknowledged the scaling challenge: "It requires a lot of skills from our customers, a lot of technology, and it's still not that easy to implement. We are working on it to make it easy to deploy and easy to use."

Vera Rubin, energy efficiency and economic impact

Describing Nvidia's newest systems, Huang highlighted their energy and cost improvements and tied those gains to real-world economic effects for customers. He said each Vera Rubin GPU is "ten times more energy efficient than the last generation" and "ten times more cost efficient than the last generation," and stressed the scale of the engineering effort behind the system. He said greater energy efficiency increases what customers can deliver within a fixed power envelope: "Every time we improve energy efficiency, we're effectively improving both the capabilities for our customers and their revenues because they're always constrained by power."

"115,000 engineering years came together to build this system."

Energy supply, bottlenecks and regional constraints

Huang repeatedly framed the AI industrial revolution as energy constrained and urged investment across energy sources. "There's always energy there, never enough energy," he said, adding that industrial revolutions historically have faced energy limits. He pointed to bottlenecks that run from power generation through high-voltage transformers and medium-voltage switchgear to datacenter-level requirements, and he emphasized the need for policy and infrastructure to catch up regionally.

"Whatever energy you have, you have to make it as energy efficient as possible."

Edge inferencing and factory optimization

Huang explained how inferencing at low latency on the edge changes factory operations. He described controllers and industrial PCs running algorithms trained in the cloud and then deployed to the shop floor to support real-time optimization and higher yields. "Once you start inferencing with low latency, you bring this technology to the edge," he said, noting the broad hardware stack from chips to controllers to industrial PCs.

Memory and supply constraints

Asked about memory shortages, Huang acknowledged the bottleneck but said Nvidia's long-standing relationships with memory suppliers and careful planning mitigate the risk: "While the memory bottleneck is severe, we're fortunate to have worked with all... our major customers and major suppliers." He expressed confidence that supply plans will allow Nvidia to manage the demand curve.

China licensing and demand

On sales to China, Huang said he had not had direct government conversations but that communication flowed through companies. "If the companies are allowed to buy and build your products in China, then there'll be strong demand and we're seeing strong demand," he said, indicating demand is real where licensing permits.

Software competence, licensing and future platforms

The discussion with Siemens also touched on software expansion and potential integrations in life sciences and other domains. Huang said Nvidia had hired engineers and licensed complementary architectures focused on low-latency token generation and inference, and he framed those moves as part of building new segments and future use cases. On the mix of hiring versus licensing, he said simply that Nvidia had hired engineers and "we also license their technology."

"They designed an architecture that is very, very different than what we've done. It's focused on low latency token generation and really is incredibly good at inference."

Speculative ideas: datacenters in space and the same chips on different platforms

Responding to a question about the idea of space-based datacenters, Huang said the chips could be the same while system design, cooling and power would differ radically. "There's lots of energy in space... cooling is abundant in space," he said, noting that tokens and intelligence can be transferred back to Earth even if hardware production in orbit poses practical challenges.

Autonomous driving and industry comparisons

When asked about other autonomous-driving approaches, Huang said Nvidia's stack is also vision-based and that Tesla and other companies are making strong progress. He praised Tesla's execution and encouraged continued development: "They're doing a great job."

On the proposed California billionaire tax

Asked about the proposed California initiative taxing billionaires, Huang offered a concise personal position. He framed Nvidia's choice of Silicon Valley as talent-driven and indicated he had not been preoccupied by the tax debate. In his words: "We chose to live in Silicon Valley, and whatever taxes, I guess, they would like to apply, so be it. I'm perfectly fine with it. It never crossed my mind once." He added, responding to concerns about talent and relocation, that Nvidia has global offices "wherever there's talent" while stressing the company’s continued presence in Silicon Valley.

Explore more exclusive insights at nextfin.ai.

Insights

What are the key components of Nvidia's Vera Rubin platform?

What historical factors have influenced energy constraints in industrial revolutions?

How does the collaboration between Nvidia and Siemens impact software and automation?

What are the current trends in the AI industrial market as discussed by Huang?

What recent updates have been made regarding Nvidia's supply chain management?

What are the potential long-term impacts of energy-efficient GPUs in factories?

What challenges does Nvidia face regarding memory shortages?

How does Nvidia's AI technology compare to Tesla's approach in autonomous driving?

What are the implications of the California billionaire tax for tech companies?

What is the significance of edge inferencing in factory optimization?

How does Nvidia plan to address energy supply bottlenecks in the future?

What licensing challenges does Nvidia face in the Chinese market?

What innovations does Huang foresee in AI applications for life sciences?

What are the key differences between advisory machine learning and autonomous systems?

How does Nvidia's energy strategy align with industry trends towards sustainability?

What factors contribute to Nvidia's confidence in managing supply demands?

What are the proposed next steps for Nvidia's development in industrial AI?

How does Huang view the competition in the autonomous driving industry?

What role does policy play in addressing energy constraints for AI industries?

What are the potential benefits of space-based datacenters as discussed by Huang?
