NextFin

Andrew Ng: AGI Isn’t Close — "LLMs by themselves are not a path to AGI"

Summarized by NextFin AI
  • Andrew Ng discussed the gap between current large language models and the goal of artificial general intelligence (AGI) during an interview at the World Economic Forum.
  • He endorsed Yann LeCun's critique that LLMs alone are not a path to AGI, emphasizing that no existing technology guarantees AGI.
  • Ng defined AGI as an AI capable of performing any intellectual task a human can, highlighting the need for breakthroughs beyond current technologies.
  • He cautioned against lowering AGI standards, warning that it could mislead the public about the capabilities of current models.

NextFin News - Andrew Ng spoke with Sruthijith KK at ET House during the World Economic Forum in Davos on January 19, 2026. The conversation focused on the gap between today's large language models and the long-discussed goal of artificial general intelligence (AGI), and on what—if anything—current technologies can deliver toward that goal.

The interview was conducted at ET House at the World Economic Forum and was published by The Economic Times. Sruthijith KK guided the discussion toward recent comments from other AI researchers about the limits of current model families and the larger question of whether any single approach will produce AGI.

On Yann LeCun’s critique and whether LLMs are the right path

Asked about Yann LeCun’s recent statement that LLM-based models are not the right way to approach AGI, Ng responded by endorsing the critique while adding important nuance. He told the interviewer that he finds LeCun’s view to be correct in spirit but that the broader issue is more general:

LLMs by themselves are not a path to AGI,

and he emphasized that this observation is not unique to LLMs: no single existing technology, as far as he knows, constitutes a guaranteed path to AGI.

How Ng defines AGI

Ng offered a concrete formulation of what he called the most reasonable definition of AGI: an AI capable of performing any intellectual task that a human can. He illustrated that definition with examples drawn from ordinary human milestones and applied it to test expectations about machine capabilities:

The most reasonable definition of AGI that I know of is AI that could do any intellectual task that a human can. So if we have AGI, it means AI should be able to spend five years to write a PhD thesis, or learn to drive a truck through a jungle in, you know, maybe minutes, tens of minutes, because a human can.

Ng’s formulation frames AGI in terms of breadth: flexible competence across the full range of tasks that humans routinely master given time and experience.

On technological limits and the need for breakthroughs

Ng stressed that achieving the kind of general intelligence he described will require advances beyond current practice. He was blunt that present-day model families—when taken alone—don’t meet that bar. At the same time, he cautioned against assuming any known technology is the definitive path forward:

There’s no technology that I know of today that by itself is a path to AGI. To achieve that type of AGI will need new technical breakthroughs that none of us really know what exactly it is.

He added that one can accurately say of any present technology that it is not, by itself, a route to AGI, underscoring the fundamental uncertainty about which future innovations will matter most.

What AGI would mean for everyday work

Ng used a practical comparison to underline how far current systems are from human-like, general capabilities. He suggested AGI would need to be able to perform the full range of tasks a competent remote worker can do—an operational benchmark for broad, flexible performance. By contrast, he said, we remain a long way from that level of competence:

With AGI, it means AI should be able to do everything that a remote worker would be able to do. We're very far away from that.

This remark tied the abstract definition of AGI back to concrete, real-world expectations about machines performing cross-domain work.

On redefining AGI and public perception

Although the core transcript focuses on technical definitions and limits, Ng’s comments implicitly warned against lowering standards for what counts as AGI. He noted the risk that narrowing or lowering the definition can mislead businesses and the public about how close machines are to human-level general intelligence. That line of argument underlines the central thrust of his remarks: progress is real and valuable, but the claim that current models alone will produce AGI is unsupported by present evidence.

References:

India must speed up AI upskilling: Coursera cofounder Andrew Ng — The Economic Times (ET@DAVOS)

ET@DAVOS: Andrew Ng, Founder of DeepLearning.AI — Video (Economic Times)
