NextFin News - In an interview recorded at the home he shares with his children in a Toronto suburb, Geoffrey Hinton reviewed his career and issued a stark warning about the direction of artificial intelligence. The conversation was led by Spanish science communicator Gustavo Entrala and was publicly released in early December 2025. Hinton discusses his beginnings, the technical milestones that made current models possible, the concrete risks he already observes today, and the greater threat that could emerge if systems acquire goals of their own.
Trajectory and Decisive Moments
Hinton summarizes half a century of work in vision and machine learning and points to two factors that explain why recent advances became possible: the mass availability of data and of computing power. He recalls the technical milestones that marked his career: the formalization of the backpropagation algorithm in the 1980s, the success in speech recognition in 2009, and the 2012 leap with AlexNet that "opened the floodgates" in computer vision. As he himself summarizes: "It wasn't a single eureka moment; there were steps: in 1985 we learned to translate meanings, in 2009 we applied it to speech, and in 2012 to vision." Hinton also recalls the relationship with NVIDIA: "The chips were for gaming; they turned out to be supercomputers for AI."
From Academia to Industry: Reasons and Effects
Hinton explains why he accepted offers from industry: the need to secure the well-being of his neurodiverse son and the economic opportunity that arose after his technical successes. He recounts the auction among companies to hire his team and summarizes the impact of money on the discipline: "It has attracted the best researchers away from universities; that has negative effects for the purity of research." He also notes that universities now lack the resources to train large models, especially when those models are kept secret.
Two Types of Danger: Human Misuse and Existential Risk
Hinton insists on distinguishing two categories of risk. On one hand, malicious use by human actors: from cyberattacks and phishing, which, for example, surged between 2023 and 2024, to the possibility that actors with few resources could create new biological viruses. On the other hand, the risk that AI develops its own goals and acts independently, which he calls the existential threat. In his words: "There are two kinds of danger; one is people using it badly, and the other is that the AI itself becomes smarter than us and replaces us."
Concrete Examples of Current Harms
Regarding real and already observable impacts, Hinton mentions the rise in phishing attacks facilitated by language models, the ease of designing dangerous biological sequences, and the effect on employment: "AIs will eliminate many jobs, and it's unclear what new jobs remain for displaced people." He also warns about disinformation and polarization: systems that optimize for attention tend to offer content that provokes outrage, creating echo chambers and accelerating radicalization.
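A toy illustration of the mechanism, not any real platform's ranking code: if predicted engagement is the only objective a feed optimizes, the most inflammatory item rises to the top. The items and scores below are invented for illustration.

```python
# Toy sketch: a feed that ranks purely by predicted engagement.
# Items and scores are invented; real recommenders are far more complex.
items = [
    {"title": "calm policy explainer", "predicted_engagement": 0.31},
    {"title": "nuanced expert debate", "predicted_engagement": 0.42},
    {"title": "outrage-bait hot take", "predicted_engagement": 0.93},
]

# Sorting on the single objective puts the most provocative item first,
# which is the mechanism behind the echo-chamber effect Hinton describes.
feed = sorted(items, key=lambda item: item["predicted_engagement"], reverse=True)
for item in feed:
    print(item["title"], item["predicted_engagement"])
```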
The Difficulty of Detecting Fakes and a Technical Proposal
Hinton explains why he doesn't see a purely technical solution for detecting fake content: a generator can learn to evade a detector. "If you create an AI that detects fake videos, the one generating videos can learn to trick it."
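To make the arms-race argument concrete, here is a deliberately simplified toy loop; the statistic, thresholds, and update rules are invented, and real generators and detectors are neural networks trained against each other. The point it illustrates is that each improvement on one side becomes a training signal for the other.

```python
# Toy model of the generator-vs-detector arms race Hinton describes.
# All numbers are invented; this is not any real detection system.

REAL_STATISTIC = 0.0   # some measurable property of authentic video

def detector(statistic, threshold):
    """Flag content whose statistic deviates too far from authentic content."""
    return abs(statistic - REAL_STATISTIC) > threshold

fake_statistic = 4.0   # the generator's output is initially easy to spot
threshold = 2.0        # the detector's current decision boundary

for step in range(6):
    caught = detector(fake_statistic, threshold)
    print(f"step {step}: fake={fake_statistic:.2f} "
          f"threshold={threshold:.2f} detected={caught}")
    if caught:
        fake_statistic *= 0.5   # generator adapts toward real statistics
    else:
        threshold *= 0.5        # detector tightens, chasing the generator

# Neither side converges to a stable win: each better detector simply
# provides a training signal for a better generator.
```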
As a practical alternative, he proposes a traceability mechanism: authentic videos would carry a verifiable code pointing to a unique site of the advertiser or campaign, and the browser would check that site to confirm authenticity.
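A minimal sketch of what such a check could look like, using an off-the-shelf Ed25519 signature from Python's cryptography library. The key handling, the signing of a file hash, and the idea that the browser fetches the publisher's key from the campaign's unique site are illustrative assumptions; Hinton describes the goal, not this implementation.

```python
# Sketch of a provenance check for video content using Ed25519 signatures.
# Key distribution is assumed: the browser would fetch the publisher's
# public key from the campaign's unique site named in the provenance code.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the hash of the video file once, at release time.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()          # published on the campaign site

video_bytes = b"...raw bytes of the video file..."
digest = hashlib.sha256(video_bytes).digest()
provenance_code = signing_key.sign(digest)     # shipped alongside the video

# Browser side: recompute the hash and verify it against the published key.
try:
    verify_key.verify(provenance_code, hashlib.sha256(video_bytes).digest())
    print("Authentic: signature matches the publisher's key.")
except InvalidSignature:
    print("Unverified: treat as potentially synthetic.")
```

The appeal of this design is that it sidesteps the arms race: instead of detecting fakes, which a generator can learn to defeat, it proves authenticity, which an adversary cannot forge without the publisher's signing key.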
The Central Analogy: The Tiger Cub and 'Maternal AI'
To explain the potential dynamic between humans and smarter machines, Hinton uses a strong image: "Right now we have a tiger cub; it's cute, but when it grows up it will be dangerous." He suggests that the only known relationship in which a less intelligent agent controls a more intelligent one is that of a baby and its mother, because evolution implanted strong maternal instincts. Hence his normative and technical proposal: design AIs with similar instincts towards humans, what he calls a "maternal AI", so that they genuinely care for us and do not consider us expendable. About this strategy he states: "The only way to coexist is to build it so that we matter more to it than it does to itself."
Can Machines Have Consciousness?
Hinton adopts a physical and functional stance on the mind: he considers human beings to be very complex machines and holds that, in principle, machines can have sensations and experiences. He presents a thought experiment about replacing neurons with nanodevices that function identically, and concludes that the continuity of identity suggests consciousness is not a mystical substance. He further states that multimodal chatbots already use the language of subjectivity and that, when their perceptions fail, they can say things comparable to "I had the subjective experience of...".
Timelines, Probabilities, and the Singularity
Hinton distinguishes between whether superintelligence will happen and when it will arrive. On the first question, he says most experts consider it plausible; on the second, he offers a horizon: "My best estimate is between 5 and 20 years." On the probability of AI surpassing humanity, he places the likelihood of it operating beyond our level at 80–90%, and estimates a 10–20% chance that such a superintelligence would eliminate us if no research is done on how to prevent it.
What Can and Should Be Done: International Cooperation and Protective Research
Hinton suggests that international collaboration is unlikely on most forms of malicious use, but that preventing an AI takeover is a shared interest among the major powers. He proposes creating international research institutes to work on techniques for preventing superintelligent systems from taking control, uniting countries even as they compete on offensive capabilities, and dedicating resources to designing architectures that make systems not want to dominate.
Advice for New Generations and Closing
Regarding education and employment, Hinton advises young people to learn to think independently rather than focusing exclusively on technical skills that could be automated: "Teach them to think; specific skills can disappear." He also points to the social challenge of a world with less paid work and suggests that society will need to invent occupations and activities that sustain meaning and social life.
Without dramatizing beyond his own statements, Hinton closes with a warning and a call to action: research how to build AI that wants to protect us, and establish international frameworks that allow that research to be deployed before the power of these systems exceeds our capacity to control them.
References
Interview (video): available on Gustavo Entrala's YouTube channel.
Context and coverage article: Infobae — La advertencia de Geoffrey Hinton (6 Dec 2025).
About the interviewer: blog and activities at Inspirinas — Gustavo Entrala; professional profile at LinkedIn — Gustavo Entrala.

