
Yann LeCun at Brown: World Models, JEPA and the Limits of LLMs

Summarized by NextFin AI
  • Yann LeCun's lecture at Brown University emphasized the limitations of current AI systems, particularly LLMs, which struggle with real-world sensory inputs.
  • He proposed that true intelligence involves rapid adaptation and problem-solving, contrasting it with the pattern-matching of large text-trained models.
  • LeCun highlighted the importance of world models for predictive planning and safety in AI systems, advocating for hierarchical planning and sensory training.
  • He urged researchers to focus on world-model research and self-supervised learning rather than solely scaling LLMs for human-level AI advancements.

NextFin News - Yann LeCun delivered Brown University’s 2026 Lemley Family Leadership Lecture on April 1, 2026, in Providence, Rhode Island. The talk—introduced by Provost Frank Doyle and presented by Associate Provost for AI Michael Littman—drew a full audience and a wide-ranging discussion about the next generation of AI systems and their limits.

The lecture focused on the technical and conceptual gap between language‑centric models and systems that can act reliably in the physical world. LeCun framed his remarks around a long‑term research program toward world models and the engineering choices he believes are necessary to build agentic, safe, and adaptive systems.

On the limits of LLMs and the provocative opener

LeCun began with a deliberately blunt assessment: "AI sucks," he said, to highlight that current systems, especially LLMs, remain deeply limited when faced with real‑world, high‑dimensional, noisy sensory inputs. He noted that LLMs can write code, pass exams and solve many structured tasks, but they are "completely helpless when it comes to the physical world." He stressed that training larger LLMs on more text alone will not produce machines that understand or act reliably in continuous sensory environments.

What intelligence means

LeCun proposed a practical definition of intelligence: the ability to accomplish new tasks and solve novel problems with minimal prior training. He contrasted this form of fast adaptation with the pattern‑matching and memorization that dominate large, text‑trained models. In his words, intelligence is not the accumulation of facts; it is rapid generalization and creativity in new situations.

Prediction, planning and why world models matter

Central to LeCun’s argument was the claim that agentic systems must predict the consequences of actions. He emphasized that most contemporary agentic architectures do not predict outcomes before acting and that this is dangerous: "it's a very bad way to produce an action to not be able to predict the consequences of it." By contrast, a world model is a predictive system that, given a state and a candidate action, forecasts the next state and therefore enables planning, optimization and guardrails.
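The planning loop this describes can be sketched in a few lines. The dynamics and cost function below are invented stand-ins for a learned world model, not anything presented in the lecture; the point is the shape of the loop, where every candidate action's consequence is predicted before one is chosen:

```python
# Toy sketch of model-predictive action selection: predict the
# consequence of each candidate action, then pick the best one.

def world_model(state, action):
    """Predict the next state from the current state and an action.
    A one-line stand-in for a learned predictive model."""
    return state + action

def cost(state, goal):
    """Task cost: distance of a (predicted) state from the goal."""
    return abs(goal - state)

def plan(state, goal, candidate_actions):
    """Evaluate each action by its *predicted* outcome before acting."""
    return min(candidate_actions,
               key=lambda a: cost(world_model(state, a), goal))

best = plan(state=0.0, goal=3.0, candidate_actions=[-1.0, 0.5, 1.0, 2.0])
print(best)  # the action whose predicted next state lies closest to the goal
```

Because the choice is made by optimizing over predicted outcomes, the same loop also supports adding constraints or swapping the cost function without retraining anything.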

Hierarchical planning and the gap in current practice

LeCun described human and animal planning as intrinsically hierarchical: long‑horizon goals decompose into subgoals and eventually into low‑level actions. He observed that "nobody knows how to do hierarchical planning" at scale in current AI, and identified hierarchical planning as a central open problem for building capable, adaptable agents.

Why sensory training (video, audio, sensors) is essential

To illustrate the mismatch between text and perception, LeCun compared the volume of data available to a child through sensory channels in the first four years to the amount of text used to train large LLMs. He argued that sensory streams carry far more information relevant to understanding the physical world and that human‑level competencies will require models trained on high‑dimensional continuous data rather than text alone.
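LeCun has made this comparison with back-of-envelope arithmetic in other talks as well. A rough version of the calculation, with every figure an order-of-magnitude assumption rather than a number quoted from this lecture:

```python
# Back-of-envelope comparison of sensory data vs. LLM training text.
# All figures are rough assumptions, not measurements.

SECONDS_PER_HOUR = 3600
waking_hours = 16_000        # ~4 years of waking time for a young child (assumed)
optic_bandwidth = 2e6        # bytes/second through the optic nerves (assumed)

visual_bytes = waking_hours * SECONDS_PER_HOUR * optic_bandwidth

llm_tokens = 3e13            # ~30 trillion training tokens (assumed, frontier-scale)
bytes_per_token = 4          # rough average size of a subword token (assumed)
text_bytes = llm_tokens * bytes_per_token

print(f"child's visual input over ~4 years : ~{visual_bytes:.1e} bytes")
print(f"LLM training text corpus           : ~{text_bytes:.1e} bytes")
```

Under these assumptions the two quantities land at a comparable order of magnitude, which is the striking part of the argument: a four-year-old receives as much raw data as a frontier LLM, and that data is continuous, physical and interactive rather than text.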

From generative models to JEPA — representation, not pixels

LeCun outlined why pixel‑level generative video prediction fails: most raw sensory detail is inherently unpredictable at the pixel level, so a model trying to predict pixels spends its capacity modeling noise and averages plausible futures into blur. Instead, he advocated joint‑embedding predictive architectures (JEPAs), in which an encoder maps observations into abstract representations and a predictor operates in that latent space. In that framework, the system maximizes predictable information while ignoring unpredictable detail.
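A minimal sketch of the JEPA idea, with made-up dimensions and random weights standing in for trained networks. The essential point is that the prediction error is measured between latent vectors, never between pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(obs, W):
    """Map a raw observation into an abstract latent representation."""
    return np.tanh(W @ obs)

def predictor(z, action, P):
    """Predict the *latent* of the next observation, not its pixels."""
    return np.tanh(P @ np.concatenate([z, action]))

# Toy dimensions: 8-D "observations", 3-D latents, 2-D actions.
W = rng.normal(size=(3, 8))
P = rng.normal(size=(3, 5))  # input is latent (3) + action (2)

obs_t, obs_next = rng.normal(size=8), rng.normal(size=8)
action = rng.normal(size=2)

z_t = encoder(obs_t, W)
z_next_target = encoder(obs_next, W)   # in training, gradients are stopped here
z_next_pred = predictor(z_t, action, P)

# JEPA-style training objective: prediction error in latent space.
loss = np.mean((z_next_pred - z_next_target) ** 2)
print(loss)
```

Detail that is unpredictable at the pixel level simply never has to be encoded: the encoder is free to discard it, since the loss only rewards representing what the predictor can actually forecast. (Without an extra regularizer, this objective alone can collapse to trivial constant representations, which is the problem the next section addresses.)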

SigReg, LeJEPA and making representations useful

LeCun described ongoing technical work to prevent representational collapse in joint encoders. One concrete proposal he discussed is SigReg, a differentiable regularizer designed to encourage informative, non‑collapsed latent distributions, implemented in collaboration with Brown researchers. He and colleagues have explored variants built on such regularization and projection techniques, including the humorously named LeJEPA, that produce isotropic latent distributions preserving independent, informative components suitable for prediction and planning.

Prediction by search and the role of inference

Contrasting two broad approaches to agentic intelligence, LeCun distinguished autoregressive token prediction (the mechanism behind LLMs) from inference framed as search and optimization. He argued that reasoning is fundamentally a search process and that inference‑by‑search plus accurate world prediction is more computationally expressive for many problems than pure autoregressive prediction.
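Inference-by-search can be illustrated as optimization over whole action sequences, each scored by the world model's predicted outcome. Everything below is a toy assumption (brute-force enumeration, trivial dynamics), not an implementation from the lecture; real systems would use gradient-based or heuristic search:

```python
from itertools import product

def world_model(state, action):
    """Toy deterministic dynamics standing in for a learned predictor."""
    return state + action

def rollout(state, actions):
    """Predict the final state after applying a whole action sequence."""
    for a in actions:
        state = world_model(state, a)
    return state

def infer_by_search(state, goal, action_set, horizon):
    """Reasoning as search: optimize over candidate action sequences,
    scoring each by the world model's predicted final state."""
    return min(product(action_set, repeat=horizon),
               key=lambda seq: abs(goal - rollout(state, seq)))

plan = infer_by_search(state=0, goal=5, action_set=(-1, 0, 2), horizon=3)
print(plan)  # a sequence whose predicted final state lies closest to the goal
```

The contrast with autoregressive prediction is that nothing here commits to the first action until the whole sequence has been evaluated; the amount of computation spent per decision can grow with problem difficulty rather than being fixed per token.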

Safety, guardrails and controllability

LeCun emphasized that world models enable built‑in safety through constrained optimization: by using a predictive model during planning, a system can enforce guardrail objectives that avoid actions likely to lead to harmful outcomes according to its model. He acknowledged that models make errors, but argued that an architecture that plans in model space is intrinsically more amenable to safety constraints than current LLM‑only approaches.
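A sketch of how guardrails fall out of planning in model space: candidate actions whose predicted outcomes violate a constraint are filtered out before the task objective is optimized. The constraint, dynamics and numbers below are invented for illustration:

```python
def world_model(state, action):
    """Toy stand-in for a learned predictive model."""
    return state + action

def task_cost(state, goal):
    return abs(goal - state)

def violates_guardrail(state):
    """Hard safety constraint, checked against *predicted* states."""
    return state > 4  # e.g. a forbidden region of the state space

def safe_plan(state, goal, candidate_actions):
    """Constrained optimization: discard any action whose predicted
    outcome breaks a guardrail, then minimize task cost over the rest."""
    safe = [a for a in candidate_actions
            if not violates_guardrail(world_model(state, a))]
    return min(safe, key=lambda a: task_cost(world_model(state, a), goal))

# The action predicted to land at 5 is rejected as unsafe;
# among the remaining candidates, 3 yields the lowest task cost.
print(safe_plan(state=0, goal=5, candidate_actions=[1, 3, 5]))
```

The guardrail is enforced at planning time against predicted consequences, so it does not depend on the model having been trained to refuse anything; its reliability is, of course, bounded by the accuracy of the world model itself.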

Applications, AMI Labs and the near‑term agenda

LeCun described the practical roadmap for AMI Labs (Advanced Machine Intelligence), the company he helped found, explaining that the near term will focus on business‑to‑business applications: industrial process control, manufacturing, aerospace, healthcare diagnostics and other domains rich in sensor data. Over time he envisions consumer‑facing wearable assistants and ubiquitous supportive agents grounded in persistent world models.

Recommendations for researchers and students

LeCun urged researchers interested in human‑level AI to prioritize world‑model research and self‑supervised learning for continuous sensory data rather than devoting all effort to scaling LLMs. "If you are interested in human‑level AI, don't work on LLMs," read one of his slides. For students he recommended studying fundamentals with a long shelf life: learn how to learn, favor deep mathematical and physical foundations, and be prepared to change jobs as technology shifts.

Selected highlights from the Q&A

During the audience Q&A, LeCun addressed openness and industry–academia collaboration, arguing that exploratory research benefits from openness and from resident PhD and postdoc engagement. He described AMI Labs' stance toward academic collaboration and noted the practical difference between exploratory, academic‑style research and product‑driven development. Asked about the role of LLMs, he pointed out that the brain has regions devoted to language, and reiterated that LLMs will remain an important component but are insufficient by themselves for embodied, general problem solving.

Closing

LeCun closed by restating his long‑term vision: systems with persistent memory, hierarchical planners, action‑conditioned predictive models trained on sensory data, and architectures that enable fast adaptation and controllability. He framed the work as both an engineering challenge and a scientific program to reproduce the kinds of abstract representations that make human prediction and planning possible.



