NextFin

“The Day After AGI”: Hassabis and Amodei on Timelines, Risks and the Race to Self‑Improving AI

Summarized by NextFin AI
  • Dario Amodei predicts rapid progress towards AGI, suggesting models may be only six to twelve months away from performing, end to end, tasks traditionally done by engineers.
  • Demis Hassabis offers a more cautious estimate, putting a roughly 50% chance on AI reaching the full range of human cognitive capabilities by the end of the decade, with verifiable domains such as coding and mathematics likely to be automated sooner.
  • Both CEOs emphasize the importance of AI systems that can create successor systems, potentially leading to a self-amplifying loop in AI development.
  • The discussion highlights the geopolitical implications of AI advancements, particularly concerning chip exports and the need for international cooperation on safety standards.

NextFin News - On 20 January 2026, at the World Economic Forum annual meeting in Davos, two of the AI industry’s most prominent builders spoke together in a session titled "The Day After AGI." The conversation, moderated by Zanny Minton Beddoes of The Economist, brought together Demis Hassabis, co‑founder and CEO of Google DeepMind, and Dario Amodei, co‑founder and CEO of Anthropic, to discuss timelines, mechanisms of acceleration, economic consequences and the safety and geopolitical challenges that lie ahead.

Timelines to AGI

Dario Amodei described a sharply accelerating path driven by models that can write code and conduct research. He explained the mechanism he expects to propel progress: models that help build the next generation of models and thereby create a feedback loop. As he put it, "I think we might be six to 12 months away from when the model is doing most maybe all of what [engineers] do end to end." He emphasised that while some parts of that loop are limited by non‑AI bottlenecks (chips, manufacturing and training time), the acceleration coming from code and research automation could make the overall process much faster than many expect.

Demis Hassabis offered a more measured view on when systems will exhibit the full range of human cognitive capabilities. He reiterated his prior position — roughly a fifty percent chance by the end of the decade — and stressed differences across domains. Hassabis noted that tasks with verifiable outputs, such as coding and mathematics, lend themselves to automation sooner than open‑ended natural sciences, where empirical testing and creative hypothesis generation remain major obstacles. He said that some "missing ingredients" may yet be required for the very highest levels of scientific creativity.

Self‑improvement and the "closing of the loop"

Both speakers returned repeatedly to the notion of AI systems that can build or substantially accelerate the creation of successor systems. Amodei emphasised the practical reality inside his organisation: engineers increasingly rely on models to produce and refine code, changing the nature of software work. That, he argued, creates the potential for a rapid, self‑amplifying loop. Hassabis agreed the loop is possible in certain domains but warned that fully closing it without human oversight is uncertain and may be constrained by outside factors such as hardware production and real‑world experimentation cycles. He framed the question as not only technical but also one with attendant risks that must be addressed.

Industry dynamics and product progress

Hassabis described DeepMind’s recent efforts to re‑mobilise research talent and accelerate deployment across Google product surfaces, citing model releases and product integrations as evidence of progress. He argued that DeepMind’s broad research bench positions it well to return to the top of capability leaderboards. Amodei discussed Anthropic’s commercial trajectory and the growing revenue potential of increasingly capable models, describing an exponential relationship between capability and revenue. He presented the recent financial growth at Anthropic as a signal that independent model makers can scale commercially, while acknowledging uncertainty about how long those curves continue.

Jobs, labour markets and the near‑term economic effect

The two CEOs considered the effects of accelerating AI on work, with different emphases on timing. Hassabis argued the near‑term transition will follow historical patterns: displacement in some roles will be met by the creation of new and often higher‑value jobs, and he urged students and early‑career workers to become highly proficient with AI tools. He observed early signs of hiring slowdowns at the junior level and suggested AI tools could act as a powerful leapfrog for career entry.

Amodei reiterated a sharper near‑term risk: he has previously warned that half of entry‑level white‑collar positions could be displaced within one to five years, and in Davos he stood by that concern. He noted the lag between model capability and measurable labour‑market statistics, warning that exponential compounding of capability could soon outpace society's ability to adapt. Both speakers acknowledged uncertainty about the pace and scale of displacement but agreed that entry‑level knowledge work is likely to be hit first.

Geopolitics, chips and international coordination

Geopolitical competition and hardware supply were central to the discussion. Amodei argued that restricting the export of advanced AI chips is one of the most effective levers for slowing the global pace of capability development, and he used a stark analogy to stress the stakes: selling such chips abroad risks rapidly eroding strategic leads. Hassabis acknowledged the geopolitical complexity and argued for international cooperation and minimum safety standards for deployment. Both speakers said cross‑border impacts demand coordinated approaches, though they recognised the real‑world difficulty of achieving enforceable global agreements in a contested geopolitical environment.

Malign AI behaviours and safety research

Amodei emphasised Anthropic’s longstanding attention to safety research, including mechanistic interpretability efforts to look inside models and understand emergent bad behaviours. He described a posture that is concerned but not fatalistic: "This is a risk. This is a risk that if we all work together, we can address." Hassabis echoed that combination of urgency and tractability, arguing that the technical problems of safety are solvable if the field is given the time and focused collaboration required. Both warned that racing without adequate guardrails increases the chance of harm.

What to watch next

When pressed on what will determine whether AGI comes quickly or more slowly, both participants singled out the development of AI systems that can build AI systems as the decisive factor. Amodei called that loop the crux of a winner‑takes‑all dynamic; Hassabis named additional technical advances — world models, continual learning and improved robotics — that could either substitute for or complement self‑improvement if it does not fully deliver.

In closing, both leaders urged focused attention from industry, governments and other institutions on governance, safety standards and cooperative measures to manage the transition. The session combined concrete technical descriptions, near‑term economic concerns and a persistent emphasis on the need for international coordination as capabilities continue to advance.


