NextFin

Demis Hassabis on Whether Classical AI Can Decode the Universe

Summarized by NextFin AI
  • Demis Hassabis discussed his Nobel Prize lecture, proposing that many natural patterns are shaped by survival processes and can be efficiently modeled by classical learning algorithms.
  • He emphasized the importance of structure in natural systems, suggesting they occupy lower-dimensional landscapes that neural networks can exploit for efficient modeling.
  • Hassabis introduced a new complexity class for systems modelable by classical learning algorithms, framing the P vs NP question as central to understanding the universe as an informational system.
  • He highlighted DeepMind's Alpha projects as evidence that classical systems can solve problems previously thought to require quantum resources, linking practical scientific goals to broader questions about computation and reality.

NextFin News - On July 23, 2025, Demis Hassabis joined Lex Fridman for a long-form conversation on the Lex Fridman Podcast (episode #475). The interview followed Hassabis’s recent Nobel Prize lecture and ranged from a bold conjecture about the learnability of natural systems to the limits of classical computation and the surprising physics captured by modern video models. Hassabis spoke as the CEO of Google DeepMind and a recent Nobel laureate.

Against the background of DeepMind’s Alpha projects and recent advances in multimodal models, Hassabis framed the work as part of a broader scientific mission: building better world models and, ultimately, accelerating discovery.

Learnable patterns in nature: a conjecture from a Nobel lecture

Hassabis opened with a provocative claim he had presented in his Nobel Prize lecture: that many patterns found in nature are not arbitrary but shaped by survival and selection processes and therefore amenable to efficient modeling. He summarized the idea succinctly: any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm. Drawing on examples from DeepMind’s work—AlphaGo and AlphaFold—he described those systems as models of very high‑dimensional, combinatorial environments that make otherwise intractable search problems tractable by guiding search with learned structure.

Why natural systems might be modelable

Hassabis argued that natural systems often bear the imprint of evolutionary or selection pressures, from protein folds to planetary orbits, creating structure that a learner can exploit. He suggested that such systems occupy lower‑dimensional manifolds or landscapes with gradients that neural networks are well suited to follow. In his words, natural systems have structure because they were subject to evolutionary processes that shape them, and that structure is what makes efficient modeling possible.
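The conversation leaves "lower-dimensional landscape" informal; a tiny numerical illustration (entirely a constructed example, with a plain SVD standing in for what a neural network would exploit) is data that sits in many dimensions but has only a few true degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: observations live in 50 dimensions, but are
# generated from only 2 latent factors -- a crude stand-in for the
# structured, low-dimensional landscapes Hassabis describes.
latent = rng.normal(size=(500, 2))   # 2 true degrees of freedom
embed = rng.normal(size=(2, 50))     # fixed embedding into 50-D
X = latent @ embed                   # 500 observations, 50-D each

# Singular values expose the effective dimensionality of the data.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by top 2 directions: {explained:.4f}")
```

Because the data has only two underlying factors, essentially all of its variance concentrates in two directions, which is the kind of structure an efficient learner can exploit.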

Computation, P vs NP, and a proposed class of learnable systems

The discussion moved to computational theory. Hassabis said he has long been fascinated by the P versus NP question and proposed thinking about a new complexity class for systems that are modelable by classical learning algorithms—what he and Fridman described as the set of "learnable natural systems". He framed the question of what is modelable by classical (non‑quantum) Turing‑style machines as central, and he suggested that viewing the universe as an informational system makes the P vs NP question effectively a physics question.
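The episode gives no formal definition of this class; as a hedged sketch (a framing suggested here, not by Hassabis), any formalization would likely echo the PAC-learning template:

```latex
% Hypothetical sketch, not a definition given in the episode.
% A natural system S (viewed as a distribution over observable behavior)
% is "classically learnable" if:
\exists \text{ algorithm } A,\ \exists \text{ polynomial } p,\
\forall \epsilon, \delta > 0:\quad
\text{after } p(n, 1/\epsilon, 1/\delta) \text{ observations of } S,\
A \text{ outputs a model } M \text{ with }
\Pr\!\big[\mathrm{err}(M) \le \epsilon\big] \ge 1 - \delta .
```

The open question Hassabis gestures at is which natural systems fall inside such a class when $A$ is restricted to classical computation.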

Evidence from Alpha projects and polynomial tractability

Using AlphaGo and AlphaFold as touchstones, Hassabis explained how building a model of the environment—the dynamics and constraints of a system—can transform an intractable brute‑force search into a polynomial‑time, tractable process. He emphasized that these successes show classical systems can go much further than previously assumed: in his view, a neural‑network‑based system running on classical computers has already solved problems many thought required quantum resources.
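The episode stays at the level of intuition here. As a minimal sketch (a toy stand-in, not AlphaGo's actual Monte Carlo tree search with learned policy and value networks), classical A* search shows the same principle: a heuristic model of the environment turns an exponential enumeration of paths into a polynomial number of node expansions.

```python
import heapq
from itertools import product

def brute_force(n):
    """Enumerate every monotone right/down path on an n x n grid.

    The count grows as C(2n, n) -- exponential in n -- the kind of
    blow-up a model-guided search is meant to avoid.
    """
    return sum(1 for moves in product("RD", repeat=2 * n)
               if moves.count("R") == n)

def a_star(n):
    """Best-first search guided by a Manhattan-distance heuristic."""
    start, goal = (0, 0), (n, n)
    h = lambda p: (goal[0] - p[0]) + (goal[1] - p[1])
    frontier = [(h(start), start)]
    cost = {start: 0}
    expansions = 0
    while frontier:
        _, (x, y) = heapq.heappop(frontier)
        expansions += 1
        if (x, y) == goal:
            return cost[(x, y)], expansions
        for nxt in ((x + 1, y), (x, y + 1)):
            if nxt[0] <= n and nxt[1] <= n and nxt not in cost:
                cost[nxt] = cost[(x, y)] + 1
                heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))

length, expanded = a_star(8)
```

On an 8x8 grid the brute-force enumeration touches 2^16 move sequences, while the guided search expands at most the 81 grid nodes: the heuristic plays the role the learned model plays in Hassabis's account.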

Veo and video models: passive observation, intuitive physics

Hassabis discussed Veo, DeepMind’s video generation model, as striking evidence that passive observation can yield an intuitive model of physical dynamics. He noted that Veo models liquids surprisingly well, capturing materials, specular lighting and the behavior of fluids in ways he finds compelling. That capability led him to question whether embodiment is necessary for building intuitive physical understanding: systems can, he said, learn much about the mechanics of the world by watching video alone.
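Nothing in the episode describes Veo's internals; as a minimal, hypothetical analogue of "learning physics by watching", one can recover the dynamics of a toy 1-D diffusion process purely from observed frames and then predict unseen frames:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(frame, alpha=0.1):
    """One step of a toy 1-D diffusion process (the 'true physics')."""
    return frame + alpha * (np.roll(frame, 1) - 2 * frame + np.roll(frame, -1))

# Passively observe many short "videos" of the process.
X_rows, Y_rows = [], []
for _ in range(20):                  # 20 clips from random initial frames
    f = rng.normal(size=32)
    for _ in range(10):              # 10 frame transitions per clip
        nxt = step(f)
        X_rows.append(f)
        Y_rows.append(nxt)
        f = nxt
X, Y = np.array(X_rows), np.array(Y_rows)

# Fit a linear next-frame predictor by least squares: no interaction,
# no embodiment -- just watching frames go by.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned model forward on an unseen starting frame.
truth = rng.normal(size=32)
pred = truth.copy()
for _ in range(10):
    truth = step(truth)
    pred = pred @ W
err = float(np.max(np.abs(pred - truth)))
```

Because the toy dynamics are linear, least squares recovers them essentially exactly; real video models face vastly harder nonlinear dynamics, but the principle of prediction-from-observation is the same.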

On understanding versus anthropomorphism

Hassabis was careful to distinguish mechanistic model‑building from anthropomorphic claims about consciousness or deep philosophical understanding. He argued that generative models that can predict coherent next frames demonstrate a form of understanding—an operational, predictive grasp of dynamics—while stopping short of asserting human‑style conceptual insight. As he put it, the models have modeled enough of the dynamics to generate convincing short sequences, which is an important but not anthropomorphic form of understanding.

Emergence, chaos, and the limits of learnability

The interview also addressed boundary cases. Hassabis acknowledged that chaotic systems and systems with singularities, where tiny differences in initial conditions produce widely divergent outcomes, may be particularly hard to model. Cellular automata and emergent phenomena sit near that boundary: some may be efficiently simulatable, others may not. He flagged these as open questions while reiterating that many real‑world systems possess structure that makes learning feasible.

World models, interactivity, and the path to AGI

Hassabis described a trajectory from video models that predict frames to richer, interactive world models that support planning and agency. He suggested the next stages include making generated worlds interactive, building what he called true world models—the mechanics and objects of a world—on top of which planning systems can operate. That vision connects directly to DeepMind’s long‑term aim: using AGI as a tool to help scientists answer foundational questions about computation, physics and biology.
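Hassabis sketched this trajectory only verbally; as a toy, hypothetical version of the idea, one can learn a transition model of a simple environment from experience and then plan against the learned model rather than the real world:

```python
import random

random.seed(0)
ACTIONS = (-1, 0, 1)

def real_step(pos, action):
    """The 'real' environment: a 1-D position clamped to [0, 10].
    (A toy stand-in for the rich worlds Hassabis describes.)"""
    return max(0, min(10, pos + action))

# 1. Learn a tabular world model from random interaction.
model = {}
for _ in range(1000):
    pos = random.randint(0, 10)
    a = random.choice(ACTIONS)
    model[(pos, a)] = real_step(pos, a)

# 2. Plan entirely inside the learned model (random-shooting search);
#    the real environment is never touched during planning.
def plan(start, goal, horizon=10, shots=5000):
    for _ in range(shots):
        seq, pos = [], start
        for _ in range(horizon):
            a = random.choice(ACTIONS)
            pos = model.get((pos, a), pos)
            seq.append(a)
            if pos == goal:
                return seq
    return None

seq = plan(start=0, goal=5)

# 3. Execute the plan in the real environment.
pos = 0
for a in seq:
    pos = real_step(pos, a)
```

The separation matters: once the model captures the mechanics, planning becomes cheap simulation inside it, which is the role Hassabis envisions world models playing for agents.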

Practical and scientific ambitions

Throughout the conversation Hassabis returned to practical scientific goals—protein structure, genome‑to‑function mapping, and simulating biological systems—while linking them to grander questions about computation and the nature of reality. He positioned AGI as a research instrument: an accelerating tool that could help probe questions such as whether fundamental computational limits are rooted in physics.


