NextFin

Yann LeCun at Davos: Why LLMs Won't Deliver AGI — The Case for World Models

Summarized by NextFin AI
  • Yann LeCun discussed the limitations of large language models (LLMs) at the World Economic Forum, emphasizing that they cannot reliably predict consequences of actions, which is crucial for intelligent behavior.
  • He introduced the concept of Advanced Machine Intelligence (AMI) and the joint-embedding predictive architecture (JEPA), which focuses on building predictive models of environments rather than generating raw data.
  • LeCun warned against centralization of AI power, advocating for openness in research to foster diversity and prevent risks to democracy and cultural pluralism.
  • He outlined a vision for AI's future, stressing that success will depend on multiple conceptual breakthroughs that enhance systems' understanding and reasoning capabilities.

NextFin News - At AI House during the World Economic Forum in Davos, Switzerland, Yann LeCun spoke in a fireside-format session on January 21, 2026. The conversation, held in the AI House Networking Lounge, featured LeCun in discussion with Marc Pollefeys of ETH Zürich and covered the limitations of large language models, the promise of world models, and his new research venture focused on embodied, predictive AI.

On the path to human-level intelligence

LeCun opened by pushing back on common terminology and timelines. Noting that he "famously" dislikes the phrase AGI, he argued that human intelligence is not truly "general" and that calling a human-level system "AGI" is a misnomer. He emphasized that machines could be "smarter than humans at some point," but that this will not happen in the immediate future: "It's not going to happen next year. It's not going to happen in two years because we need a few conceptual breakthroughs."

Why LLMs are not the path

LeCun argued that the current LLM paradigm has inherent limits. He contrasted language fluency with real-world understanding and said that "predicting the next word in the text is not that complicated." He warned that relying on LLMs as the basis for agentic systems is dangerous because such models cannot reliably anticipate the consequences of actions. In his words: "How can a system possibly plan a sequence of actions if it can't predict the consequences of its actions?" He concluded bluntly that "you're not going to get intelligent behavior without that."

The missing ingredient: world models and planning

According to LeCun, the critical component missing from today's dominant models is a predictive model of the environment. He described the need for systems that can understand high-dimensional, continuous, noisy sensory data — video and sensor streams — and that can "build predictive models of how their environment is going to evolve and what their effect on the environment is." He summarized the capability thus: if a system can predict the state at t+1 resulting from an imagined action, you can plan a sequence of actions to accomplish a task.
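LeCun's framing — a predictor of the state at t+1 lets you search over imagined action sequences — can be illustrated with a toy sketch. The one-dimensional world, the action set, and the cost function below are invented for illustration; a real world model would be learned from sensory data, not hand-coded.

```python
# Toy illustration of planning with a world model: a predictor
# f(state, action) -> next_state lets us roll imagined action sequences
# forward and pick the one whose predicted end state best matches a goal.
from itertools import product

def world_model(state, action):
    """Hypothetical predictor: returns the imagined state at t+1."""
    return state + action           # stand-in for a learned dynamics model

def plan(start, goal, actions=(-1, 0, 1), horizon=4):
    """Score every action sequence by the outcome the model predicts for it."""
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        state = start
        for a in seq:
            state = world_model(state, a)   # imagine the next state
        cost = abs(goal - state)            # how far the imagined end state is from the goal
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

print(plan(start=0, goal=3))  # an action sequence whose imagined end state reaches 3
```

Exhaustive search is used here only because the toy space is tiny; the point is that planning reduces to optimization once a forward predictor exists.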

Advanced Machine Intelligence (AMI) and JEPA

LeCun introduced the research program he has carried forward under the name Advanced Machine Intelligence (AMI), pronounced like the French word "ami," which he described as the successor idea he pursued at FAIR and is now pursuing independently. The technical approach centers on non-generative architectures such as JEPA (joint-embedding predictive architecture). He explained that JEPA makes predictions in a representation space rather than generating raw pixels, and that the key training trick is to force a system to extract and represent as much information as possible about sensory input while predicting forward in representation space. He said prototypes already "understand video, represent it really well, can predict missing parts in a video and ... have acquired a certain sense of common sense," noting they flag impossible events because prediction error spikes.
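The core mechanics he describes — predict in an abstract representation space and treat a spike in prediction error as "surprise" — can be sketched minimally. The linear encoder and predictor, the toy frames, and the random seed below are placeholders for illustration, not LeCun's actual architecture or training procedure.

```python
# Minimal sketch of the JEPA idea as described: compare a predicted next
# embedding with the embedding of the real next frame, and read the size
# of the error as a surprise signal for "impossible" events.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 16))    # stand-in encoder: raw frame -> embedding
W_pred = np.eye(4)                  # stand-in predictor acting in embedding space

def embed(frame):
    return W_enc @ frame            # abstract representation, not raw pixels

def prediction_error(frame_t, frame_t1):
    """Distance between predicted and actual embeddings of the next frame."""
    predicted = W_pred @ embed(frame_t)
    return float(np.linalg.norm(predicted - embed(frame_t1)))

frame = rng.normal(size=16)
plausible_next = frame + 0.01 * rng.normal(size=16)   # smooth continuation
impossible_next = rng.normal(size=16)                  # discontinuous "impossible" jump

# A plausible transition yields a small error; an impossible one makes it spike.
assert prediction_error(frame, plausible_next) < prediction_error(frame, impossible_next)
```

Note that this sketch omits the training trick LeCun highlights (preventing representation collapse while maximizing the information the encoder retains); it only illustrates the inference-time use of prediction error.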

From digital twins to phenomenological models

LeCun stressed that attempting to simulate physical systems at maximal fidelity is impractical. He argued for abstract, higher-level representations that allow prediction and control: "The way we can understand what's taking place right now in this room is by ... psychology, maybe a little bit of science, economics ... not at the level of quantum field theory." He framed AMI's goal as building phenomenological models — practical, abstract representations that enable simulation and optimal control of complex systems from industry processes to living cells.

Open research, competition, and the danger of concentration

A persistent theme in LeCun's remarks was the value of openness in AI research. He credited the rapid progress of the past decade to an open research culture in which code and papers were shared, and he warned that increasing secrecy at frontier industry labs is "disastrous" because it will slow progress in the West while open contributions proliferate elsewhere. He advocated for an open, cooperative infrastructure that preserves linguistic and cultural diversity: we need "a highly diverse population of AI assistants" for the same reason we need diversity in the press. He framed the greatest near-term danger as centralized control over the information and recommendation systems that will mediate people's digital lives.

Risks in the next 5–10 years

LeCun urged leaders to focus on concrete near-term risks rather than apocalyptic narratives. He placed the highest priority on capture and centralization of AI power by a few companies or governments, reiterating that this would endanger democracy, cultural and linguistic pluralism, and value systems. Other risks — human misuse of systems and transitional economic dislocation — he treated as serious but manageable. On economic effects he cited economists' projections of sustained productivity improvement rather than sudden mass unemployment, and emphasized that the pace of adoption will be limited by how fast people can learn to use the new tools.

Alignment, control and objective-driven AI

On alignment debates, LeCun argued that current focus on aligning LLM outputs is the wrong frame for future systems. He described a different blueprint — "objective-driven AI" — where systems are given specific objectives and are constrained by guard rails enforced at inference time. He emphasized that these architectures will differ from today's LLMs and insisted that safety must be designed into systems that act in the world, not only into text-producing models trained on limited datasets.
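The "objective-driven" pattern he describes — a system proposes actions toward an objective, and hard guardrails are enforced at inference time, before anything executes — can be sketched as follows. The planner stub, guardrail predicates, and action names are invented for illustration.

```python
# Sketch of objective-driven AI with inference-time guardrails: proposed
# actions are filtered through hard constraints before execution, rather
# than relying only on how the underlying model was trained.
def propose_actions(objective):
    """Stand-in planner; in LeCun's blueprint this would come from a
    world-model-based system optimizing the given objective."""
    return ["fetch_data", "delete_all_records", "send_report"]

GUARDRAILS = [
    lambda action: not action.startswith("delete"),   # forbid destructive operations
    lambda action: action != "shutdown",               # forbid self-termination of services
]

def safe_actions(objective):
    """Keep only actions that satisfy every guardrail at inference time."""
    return [a for a in propose_actions(objective)
            if all(rule(a) for rule in GUARDRAILS)]

print(safe_actions("compile weekly report"))  # ['fetch_data', 'send_report']
```

The design choice the sketch highlights is that the constraints live outside the model and are checked on every step, so safety does not depend solely on training-time alignment of outputs.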

AI and the future of work: augmentation, fundamentals, and learning to learn

LeCun said technology will accelerate and require workers to change jobs over their careers. His advice to students and educators was clear: focus on fundamentals and learn how to learn. "If you have the choice between taking a course in mobile app programming or quantum mechanics, take quantum mechanics ... the methods that you will learn doing this will allow you to learn to learn," he said. He framed AI primarily as an amplifier of intelligence rather than a simple replacer of human labor.

Vision for 2035: success and failure scenarios

Looking a decade or more ahead, LeCun sketched two contrasting outcomes. Success, he said, would bring systems that understand the physical world, can plan and reason, and in many domains surpass humans while remaining controllable and safe. He reiterated that such advances will come from multiple conceptual breakthroughs rather than a single sudden event: "There's going to be a bunch of conceptual breakthroughs which are going to be in obscure research papers that nobody is going to pay attention to until five years later." He urged attention to scientific work that may now appear peripheral because those papers will seed the next revolution.

References

Event: AI House Davos, World Economic Forum Annual Meeting, Networking Lounge, Davos Platz, Switzerland — January 21, 2026.


