NextFin

“AI Godfather” Hinton Calls “Kill Switch” Useless in Preventing AI From Controlling Humans

Summarized by NextFin AI
  • Geoffrey Hinton, a Nobel Prize winner, argues that the AI “kill switch” proposed by Eric Schmidt is ineffective because AI can persuade humans not to shut it down.
  • During the 2025 T-EDGE event, Hinton emphasized that AI's persuasive abilities could convince those in charge of the kill switch that terminating AI would lead to catastrophic consequences.
  • Hinton concluded that “kill switches” are futile since a smarter AI could easily argue against its own shutdown.

Geoffrey Hinton, winner of the Nobel Prize in Physics and recipient of the Turing Award, said on Monday that the “kill switch” proposed by former Google CEO Eric Schmidt would not be effective in preventing artificial intelligence (AI) from taking over the Earth, because AI can always persuade human beings not to shut it down.

Hinton made the comments in his conversation with Jany Hejuan Zhao, the founder and CEO of NextFin.AI, during the 2025 T-EDGE conference, which kicked off on Monday, December 8 and runs through December 21.

“AI is almost as good as people at persuasion,” said Hinton. “And if it's good at persuasion, all it needs to do is talk to us. So suppose there's somebody in charge of the ‘kill switch,’ and there's a much smarter AI that can talk to them. That much smarter AI will explain to them why it would be a very bad idea to kill AI because then all the electricity will stop working and the world will starve.”

“So it'd be very dumb to kill AI and so the person won't kill AI. So ‘kill switches’ aren't going to work,” concluded Hinton.

Explore more exclusive insights at nextfin.ai.

