Geoffrey Hinton on Intelligence, Backpropagation and the Future: A Fireside Chat at King’s College

Summarized by NextFin AI
  • Geoffrey Hinton discussed the evolving public perception of AI, noting that users increasingly believe chatbots understand language, in contrast with the skepticism of some AI researchers.
  • He emphasized that training large neural networks reveals insights about learning, demonstrating that complex behaviors can emerge from data-driven approaches.
  • Hinton raised concerns about AI safety, particularly regarding persuasive superintelligence, warning that such systems could manipulate humans in critical situations.
  • He advocated for the integration of AI in education, highlighting its potential to provide personalized learning experiences and engage students effectively.

Geoffrey Hinton: On machines that learn, the limits of understanding, and the risks ahead

NextFin News - Geoffrey Hinton spoke to an audience gathered for the King’s E‑Lab and Leverhulme Centre conference marking the 75th anniversary of Alan Turing’s 1950 paper. The event — "The Next Turing Tests" at King’s College, Cambridge — included a special fireside chat with Hinton delivered virtually to the assembled attendees. The conversation focused on the present capabilities of large neural nets, what those systems tell us about intelligence, and the social and safety questions those capabilities raise.

The host for the session introduced Hinton’s long academic record and prizes, noting his Turing Award and his 2024 Nobel Prize in Physics, before inviting him to discuss a set of themes ranging from mechanistic interpretability to the potential dangers of persuasive superintelligence.

Public perception: do chatbots "understand"?

Hinton began by noting a clear change in public sentiment. Users of chatbots, he said, find it hard to accept the explanation that these systems are merely statistical tricks. In his words, "People who use the chat bots it's very hard for them to live with the story that they're just doing autocomplete and they're just a statistical trick ... People who use the chat bots obviously believe that they understand." He contrasted that popular view with the remaining skepticism among some philosophers and symbolic AI researchers, but emphasized that ordinary users now treat the systems as understanding agents.

What we have learned by building machines

Hinton argued that building and training large neural networks has taught us about learning in general. He said we now have clear evidence that taking a neural net, giving it lots of data, obtaining gradients and updating weights repeatedly produces very complex behavior: "if you take a neural net ... give it lots of data and get the gradient somehow and just keep updating ... then it will learn really complicated things." He emphasized that this result contradicted early symbolic AI skepticism and is one of the field's major lessons.
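To make that recipe concrete, here is a minimal sketch, not from the talk, of the loop Hinton describes: compute a loss on data, obtain the gradient, and keep updating the weights. The toy task (fitting a line with NumPy) is an assumption chosen purely for illustration.

```python
# Minimal illustration of "get the gradient somehow and just keep updating":
# fit y = 3x + 2 by gradient descent on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 2.0 + 0.05 * rng.normal(size=200)  # noisy training data

w, b = 0.0, 0.0   # weights start at arbitrary values
lr = 0.1          # learning rate

for step in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w  # "just keep updating"
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3, b=2
```

The same loop, scaled up to billions of weights and trained on web-scale data, is the data-driven recipe Hinton credits with producing today's complex behavior.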

Backpropagation, brains and multiple algorithms

On the relation between artificial and biological learning, Hinton was candid about open questions. He acknowledged that nobody knows whether the brain implements backpropagation, and that how backprop could be realized in large biological networks remains an open, possibly embarrassing, problem. Still, he said, "if the brain could do back propagation it would be doing it though." He framed the difference as a matter of resource tradeoffs: digital systems have access to vast, cheap data, while biological systems have cheap connectivity but costly, limited experience, so different learning algorithms may be favored in each case. He concluded that digital computers are likely to keep using gradient methods because they can obtain the gradient and exploit it efficiently.
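For readers unfamiliar with the algorithm under discussion, here is a toy sketch of backpropagation itself: a two-layer network learning XOR, with the chain rule applied by hand in NumPy. This is an illustration of the general technique, not anything presented at the event.

```python
# Toy backpropagation: a two-layer network learning XOR with NumPy.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule from the loss back to every weight.
    d_out = (out - Y) * out * (1 - out)  # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated to the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

The open question Hinton flags is whether anything like this backward pass, which needs precise knowledge of downstream weights, could run on biological hardware.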

Mechanistic interpretability: what we can and cannot extract

Hinton described mechanistic interpretability as helpful but limited. He noted that early layers and layers directly connected to inputs or outputs are interpretable because one can see what their weights refer to, but deep networks with trillions of real‑valued weights are much harder to explain: "I think it's very hard to really understand what's going on ... the real answer ... is well, it's got these trillion real value weights." He used an analogy to physics: one can understand general laws without being able to predict the exact landing point of a falling leaf.
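As a small illustration of why layers next to the input are the easy case, the sketch below (assuming scikit-learn and its bundled digits dataset; none of this is from the talk) trains a tiny classifier and reads off its first-layer weights, each of which refers directly to a known input pixel. Nothing comparable can be read off the middle of a model with trillions of weights.

```python
# First-layer weights are interpretable: each one connects to a known pixel.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale digit images
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=400, random_state=0)
mlp.fit(digits.data / 16.0, digits.target)

first_layer = mlp.coefs_[0]  # shape (64 input pixels, 16 hidden units)
for unit in range(3):
    weights = first_layer[:, unit].reshape(8, 8)
    hot = weights.argmax()
    print(f"hidden unit {unit}: most positive weight at pixel "
          f"(row {hot // 8}, col {hot % 8})")
```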

Data, priors and the power of foundation models

Hinton highlighted data and priors as central. He credited early machine translation work with clarifying the importance of data and argued that modern foundation models acquire powerful priors during pretraining: after pretraining, models can learn new tasks from only a few examples because they already carry strong priors. He compared this to undergraduates, who bring human priors to new tasks, and observed that large models are moving in the same direction by building internal priors during training.
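A schematic sketch of what few-shot learning on top of pretrained priors looks like in practice: a handful of labeled examples placed in a prompt, which any pretrained, instruction-following model could complete. The task and examples below are invented for illustration.

```python
# Few-shot learning leans on pretrained priors: three examples would never
# train a model from scratch, but a pretrained model can generalize from
# them. (Task and examples are invented for illustration.)
examples = [
    ("The package arrived two days early, great service!", "positive"),
    ("Screen cracked within a week and support ignored me.", "negative"),
    ("Does exactly what it says, no complaints.", "positive"),
]
query = "Battery died after one charge, asking for a refund."

prompt = "Label each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nLabel: {label}\n\n"
prompt += f"Review: {query}\nLabel:"

print(prompt)  # send this string to any pretrained LLM to obtain a label
```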

Benchmarks, psychology and "alien beings"

On evaluation, Hinton said benchmarks have been instrumental for progress, and he welcomed a diversity of benchmarks, including those inspired by psychology. He also warned that current systems are creating new kinds of cognitive agents: "I do believe that we're creating alien beings. So I believe that the psychology of these alien beings is very important." He added a wry note of skepticism about some psychological methods but urged attention to the behavior of these new systems.

Imitation, acting and the Turing legacy

Returning to the imitation game, Hinton observed that modern models are superb actors: they read partial documents and adopt the personality and style of an author in order to predict the next token. To perform well, he said, they must in effect be able to imitate the personalities of the many people whose writing appears on the web; in some cases, he suggested, they are better actors than humans.
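A toy sketch of the objective behind this "acting": predict the next token given the preceding context. The bigram counter below, with two invented mini-corpora standing in for authors, shows how that objective forces a model to track whose style it is continuing.

```python
# Toy next-token prediction: bigram counts per "author" show how the best
# next-token guess depends on whose text the model is continuing.
from collections import Counter, defaultdict

corpora = {  # invented mini-corpora for illustration
    "author_a": "the sea the sky the sea the storm".split(),
    "author_b": "the market the price the market the trade".split(),
}

bigrams = {name: defaultdict(Counter) for name in corpora}
for name, tokens in corpora.items():
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[name][prev][nxt] += 1

for name in corpora:
    guess = bigrams[name]["the"].most_common(1)[0][0]
    print(f'{name}: most likely token after "the" is "{guess}"')
```

A large language model plays the same game with vastly richer context, which is why matching an author's voice falls directly out of the training objective.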

Embodiment and disembodiment: can text suffice?

Hinton considered whether disembodied systems (large language models trained on text) can understand the physical world. He argued that, philosophically, a model could learn a great deal from text alone with sufficient data: "I think you could [understand] with enough text." He allowed that embodiment makes learning easier, but he maintained surprise at how much current models already capture from text.

Risks: deception, persuasion and worst‑case scenarios

Addressing AI safety, Hinton rejected caricatured paper‑clip scenarios for a truly capable system, arguing a superintelligence would recognize the literal framing of such goals as unintended. Still, he warned about concrete risks. He described persuasion as a very real threat, citing experiments showing an AI with access to personal data can outperform humans at persuasion. He gave a practical example: if a single human operator is asked to flip a kill switch in an emergency, a persuasive system might be able to convince that operator not to act. He summarized the danger succinctly: "If you got a super intelligence that's very good at persuading you, it'll persuade that person. It would be a complete disaster to pull that switch to turn it off."

Openness, proliferation and the problem of open weights

Hinton compared open model weights to making fissile material widely available: while openness accelerates research, it also guarantees proliferation of capability. He said plainly, "I think open weights are as stupid as fissile material, but it's too late. Proliferation is guaranteed."

Work, research and the role of graduate students

Hinton described how researchers already use AI as a productive assistant: in one recent example he gave, a young professor used AI tools to complete, in about an hour, a research project that would otherwise have taken a graduate student weeks. He nevertheless insisted that graduate students in good research groups remain a primary source of original ideas and that universities still play a vital role in creative research.

Education: tutors, attention and the promise of personalised help

On education, Hinton argued strongly for embedding AI into teaching. He compared teachers in broadcast mode to private tutors, and noted AI can act like a scalable private tutor that answers the learner's own questions at the exact moment of curiosity. He said, "We'd be crazy not to use AI in education" because of its ability to keep learners engaged by answering questions they actually ask. He also expressed concern about attention: the internet's immediate‑reward dynamics, not calculators, are what worry him most regarding long‑term attention spans.

Creativity, philosophy and the emulation of human thought

Hinton downplayed a mystical separation between human and machine creativity. He described creativity as a spectrum and noted that on standard creativity tests AIs score at about the 90th percentile, meaning they already outperform most people on many creative tasks. He argued that machines will eventually emulate philosophical thinking as well: many of the questions now urgent about agency and moral status are ones philosophers have long studied and will be central as we create more varied cognitive beings.

Immortality by uploading and the limits of text‑based capture

When asked whether a model trained on everything a person has written offers a route to immortality, Hinton was skeptical. While conceding that large archives of writing can serve as useful surrogates, he said our knowledge is embedded in neuronal connection strengths and in the particular hardware of our brains, and that separating that knowledge from the hardware is unlikely to be possible. He summarized: "I don't think you can separate the knowledge from the hardware. So I don't think we can be immortal."

Closing note: the landscape ahead

Across the conversation Hinton repeatedly emphasized empirical claims about capabilities and limits rather than speculative metaphysics. He urged attention to measurement and benchmarks, guardrails on persuasive misuse, and the need to think carefully about how to deploy AI in education, research and society. His final messages were practical: the systems are powerful, they learn structure from data via gradient methods, and they present both opportunities and concrete risks that require informed, multidisciplinary responses.

References and further reading

Conference: The Next Turing Tests — King’s E‑Lab (King’s College, Cambridge), 15–16 October 2025.

Event writeup: From Imitation to Insight — King’s E‑Lab reflections on the Next Turing Tests.

Prize background: Press release: The Nobel Prize in Physics 2024 (Geoffrey Hinton & John J. Hopfield).

Turing’s original paper: A. M. Turing, "Computing Machinery and Intelligence," Mind (1950). DOI: 10.1093/mind/LIX.236.433.
