NextFin News - On 2026-02-24 at the Indian Institute of Science (IISc), Bengaluru, Sir Demis Hassabis, CEO of Google DeepMind, joined Varun Mayya and Prof. Govindan Rangarajan for a fireside conversation about artificial intelligence, scientific discovery and the role of engineers in India. The session — attended by students, faculty and a broad public audience — ranged from DeepMind’s work on protein folding to the definition and testing of AGI, and from the future of game development to practical advice for software engineers.
The following is a systematic presentation of Hassabis’s core statements from the conversation, organised by topic.
Defining AGI and the "Einstein" test
Hassabis reiterated a consistent, high-bar definition of artificial general intelligence: "a system that can exhibit all the cognitive capabilities humans can." He explained why the human brain is central to that definition — it is the only existence proof we know of for general intelligence — and described the capabilities missing from today's systems: true creativity, continual learning, long-term planning and consistent competence across domains. To make the distinction concrete he proposed an empirical test: "train an AI system with a knowledge cutoff of, say, 1911 and then see if it could come up with general relativity like Einstein did in 1915." He said that today's systems are not yet able to pass such a test, and that achieving it would mark true AGI.
On current systems’ limitations: jagged intelligence and consistency
Hassabis described present models as uneven in capability: they can achieve gold‑medal performance on some benchmarks yet fail on relatively simple problems framed differently. "It shouldn't be a jagged intelligence like that," he said, arguing that a general system should display broadly consistent performance rather than the current spikes of excellence and gaps.
AlphaFold, Isomorphic Labs and the future of medical AI
Discussing applications in biology and medicine, Hassabis positioned AlphaFold as an early, foundational advance that unlocked downstream possibilities. He described DeepMind's scientific work and the spin‑out Isomorphic Labs as developing complementary technologies in chemistry and biochemistry to translate structure predictions into candidate molecules with desirable toxicity and absorption properties. He expressed ambition for dramatically shortening drug discovery timelines, aiming to "bring down the drug discovery process… by a factor of 10 to a matter of months, maybe even weeks," while acknowledging that protein folding was only one component of a much larger, more complex pipeline.
Building scientific taste and mentoring AI
On how scientists choose problems, Hassabis placed strong emphasis on mentorship, active experimentation and practice. He described scientific taste as a blend of intuition and creativity that is usually learned in graduate school from mentors and developed by doing experiments. He outlined the idea of mentoring AI systems in ways analogous to human apprenticeship: "build a custom-built LLM which then is mentored by a master scientist… the LLM acts like an apprentice to the scientists and you give constant feedback," a pathway he suggested could reproduce some elements of human scientific taste.
Polymathy and multidisciplinary work
Hassabis linked curiosity and interdisciplinary training to innovation: many breakthroughs happen at the intersection of fields. He described his own path — from games to neuroscience to AI — and recommended becoming expert in at least one domain while learning to reach graduate‑level competence quickly in complementary areas. "Find those connection points… understand it from first principles so you can quickly apply it to a new area," he said, urging the humility to become a beginner again when learning other disciplines.
AI and game development: Genie, world models and new genres
Returning to his roots in game design, Hassabis welcomed AI's role in making game development faster and enabling new experiences. He praised Varun's game and said the technologies for asset creation, 3D models and concept art are rapidly improving. On Genie 3 and world models, he described a current demo as generating short playable worlds that still function like an interactive movie: "you can only play it for one minute… it can only stay coherent for a minute." He predicted those coherence windows would lengthen over the next four or five years and that AI could enable new genres — massive multiplayer experiences populated by smart NPCs — while also serving as a tool for bug testing and auto‑balancing.
Balancing research and product pressures
Asked about reconciling long‑term research with commercial needs, Hassabis explained DeepMind’s approach of resourcing both tracks. He said roughly half the team concentrates on near‑term priorities and product integrations while another portion pursues blue‑sky research, and stressed the leader’s role in protecting long‑horizon work. He also reiterated his belief that building foundation models such as Gemini is a significant, near‑term step on the path to AGI.
Advice for software engineers — lean into tools, cultivate taste
Addressing concerns from Indian engineers about models that write code, Hassabis recommended embracing AI as an augmenting tool: "lean into these AI tools, get incredibly good at using them." He framed the change as a familiar historical process of abstraction — from assembly to C/C++ to Python — and said the competitive edge will shift to those who combine tooling with judgment and scientific taste.
Memory, forgetting and AI’s representation of experience
On the topic of memory, Hassabis compared current context windows and token storage to a crude approximation of hippocampal episodic memory. He argued that machines need efficient selection and consolidation mechanisms rather than brute‑force storage and search: "we're kind of badly approximating the hippocampus at the moment with context window… what we may be missing is forgetting." He suggested that mechanisms equivalent to value judgments about what to retain would be useful for future AI systems.
Near‑term surprises and embodied AI
Hassabis identified several areas he expects to surprise people in the coming years: progress in mathematics driven by AI’s tractability for axiomatic problems, robotics and AI in the physical world, self‑driving vehicles and automated laboratories that accelerate scientific discovery. He said many of these advances could arrive in the next two to five years, with robotics and automated experimentation among the most tangible near‑term possibilities.
At IISc, Hassabis repeatedly framed AI as a tool whose value depends on human choices: "If you use AI in a lazy way, it will make you worse at critical thinking… but that's down to you as the individual." Across technical and practical topics he stressed the dual responsibilities of building powerful systems and teaching people to use them wisely.
References and related material
Video (event page / host channel): Varun Mayya — YouTube channel.
Host institution announcement: IISc Bangalore — official Twitter.
Press coverage summarising the interview: "DeepMind's CEO says using AI can make you a genius — or hurt your critical thinking skills" (Yahoo/Business Insider summary).
Event post and video share: Google for Business India — LinkedIn post.
This article is drawn from the conversation held at IISc on the date above; readers are encouraged to watch the full video on the host channel for the complete record and context.
Explore more exclusive insights at nextfin.ai.

