NextFin News - On November 16, 2025, Yann LeCun joined a public conversation at Pioneer Works in Brooklyn to discuss the capabilities and limits of current AI systems. The event, titled "Scientific Controversies: Deep Thoughts of Artificial Minds," brought LeCun together with Adam Brown in a conversation hosted by Janna Levin. (pioneerworks.org)
On the apparent intelligence of language models
LeCun began by distinguishing the practical usefulness of today's models from genuine intelligence. He emphasized that fluency in language can give the false impression of human-like understanding: "we're fooled into thinking those machines are intelligent because they can manipulate language, and we're used to the fact that people who can manipulate language very well are implicitly smart." He was careful to stress that utility does not equal human-level cognition: "They're useful. There's no question. ... Great. They're great tools, like, you know, computers have been for the last few decades."
AI's repeating cycle of optimism and disappointment
LeCun placed the current excitement about LLMs in historical context, describing repeated waves of optimism since the 1950s. He recalled early efforts that promised imminent human-level machines (symbolic search systems like the General Problem Solver, the perceptron era, expert systems in the 1980s, and successive waves of neural networks) and explained why each ultimately failed to deliver full intelligence. As he put it, "there's been generation after generation of AI scientists since the 1950s claiming that the technique that they just discovered was going to be the ticket for human-level intelligence." He summarized the common difficulty: many important problems have complexity that grows exponentially and cannot be solved by naive search or by reducing knowledge to rules; similarly, early perceptron approaches stalled until multi-layer training became possible.
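To make the combinatorial explosion concrete, consider a brute-force planner that enumerates every action sequence. The sketch below (our illustration, not code from the talk) shows that with branching factor b and depth d the search space is b^d, which is why naive search collapses on realistically deep problems.

```python
# Illustrative sketch (not from the talk): the exponential blow-up that
# defeats naive search. With branching factor b and depth d, a brute-force
# planner must consider b**d action sequences.

def count_sequences(branching: int, depth: int) -> int:
    """Number of distinct action sequences an exhaustive search examines."""
    return branching ** depth

for depth in (5, 10, 20, 40):
    n = count_sequences(10, depth)
    print(f"branching=10, depth={depth}: {float(n):.2e} sequences")
```

Even at a modest branching factor of 10, forty decision steps already yield 10^40 sequences, far beyond any feasible compute budget.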
Concrete limits: tasks LLM-based systems will struggle with
Asked to name tasks that LLM-augmented systems will likely never perform well, LeCun gave vivid, concrete examples drawn from everyday life. He listed simple household and maintenance chores as illustrative: "clear out the dinner table, fill up the dishwasher," and even more complex manual repairs like plumbing. He argued that architectures that predict discrete tokens cannot truly understand and act robustly in the physical world: "You're never going to have a robot driven by LLMs. It just cannot understand the real world. They just can't."
On whether robots or machines will eventually do those tasks
LeCun clarified that his skepticism concerned the particular algorithmic approach of today's LLMs, not robots or automation in general. He affirmed that machines will accomplish physical tasks, but insisted the solution will not primarily be generative token-prediction language models: "They will. They absolutely will. Just not by this algorithmic approach, or this particular approach of deep learning ... if the program we're working on succeeds, which may take a while." In other words, future embodied agents will require different algorithmic ingredients than the present generation of LLMs provides.
What might succeed: abstract representations and planning
LeCun sketched the direction he considers promising for more general intelligence. Rather than predicting discrete tokens, future systems must learn abstract internal representations and reason over them. He described the ability he expects such systems to acquire: "I can reason about what is going to be the effect of me taking this action. Can I plan a sequence of actions to arrive at a particular goal?"
He connected this capability to research directions that aim to move beyond surface-level token prediction.
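To illustrate what "predicting the effect of an action and planning toward a goal" can look like computationally, here is a minimal sketch (our construction, not LeCun's code or any specific research system). The `transition` function is a hypothetical stand-in for a learned predictive world model operating on abstract states.

```python
# Minimal planning sketch (illustrative only). A planner rolls a predictive
# model of action effects forward to find an action sequence that reaches
# a goal in an abstract state space.

from itertools import product

def transition(state: tuple, action: str) -> tuple:
    """Hypothetical world model: predicts the next abstract state.
    In a learned system this would be a neural network."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    return (x + dx, y + dy)

def plan(start: tuple, goal: tuple, actions, horizon: int = 4):
    """Exhaustively roll out the model over short action sequences and
    return the first one whose predicted final state reaches the goal."""
    for seq in product(actions, repeat=horizon):
        state = start
        for a in seq:
            state = transition(state, a)
        if state == goal:
            return seq
    return None

print(plan((0, 0), (2, 2), ["up", "down", "left", "right"]))
# -> ('up', 'up', 'right', 'right')
```

The exhaustive rollout is deliberately naive; the point is the division of labor: a model that predicts consequences, plus a search procedure over those predictions, rather than token-by-token generation.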
Self-supervised learning and architectural challenges
LeCun discussed self-supervised learning as the broad training paradigm behind today's models, explaining both its strengths and its limits. He described self-supervised learning as training systems to capture the underlying structure of data by predicting missing parts of the input, noting that language models commonly predict the next token. He also pointed out the difficulty of extending that approach directly to other modalities: "If you try to predict at the pixel level it doesn't work, or it doesn't work very well," and he observed the engineering costs involved in scaling such approaches to video and other richly structured data. These limitations, he implied, motivate new architectures that can capture higher-level abstractions.
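To ground what "predicting the next token" means as a training objective, here is a minimal sketch of the standard next-token cross-entropy setup (illustrative PyTorch code of our own, with a small recurrent network standing in for a transformer; not code discussed at the event).

```python
# Illustrative sketch of the next-token prediction objective used to train
# language models: shift the sequence by one and minimize cross-entropy.

import torch
import torch.nn as nn

vocab_size, dim = 100, 32
embed = nn.Embedding(vocab_size, dim)
encoder = nn.GRU(dim, dim, batch_first=True)  # toy stand-in for a transformer
head = nn.Linear(dim, vocab_size)

tokens = torch.randint(0, vocab_size, (8, 16))   # (batch, seq_len) toy data
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each position predicts the next token

hidden, _ = encoder(embed(inputs))
logits = head(hidden)                            # (batch, seq_len-1, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # no human labels needed: the data supervises itself
print(f"cross-entropy loss: {loss.item():.3f}")
```

The same recipe applied to raw pixels would require predicting every detail of high-dimensional, noisy future frames, which is one intuition for why pixel-level prediction "doesn't work very well" and why objectives defined over abstract representations are attractive.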
On timelines and expectations
While skeptical of claims that current LLM architectures will suddenly yield human-level general intelligence, LeCun remained confident that machines will, at some point, surpass humans across the domains where humans have abilities. He cautioned that the timeline will likely be longer than some public predictions suggest: "It will happen. ... It [will] probably take longer than, you know, some of the people in Silicon Valley at the moment are saying." He reiterated that the route to that future runs through models capable of learning abstract representations and predicting outcomes in those representations, rather than merely predicting discrete tokens.
References:
Pioneer Works — Scientific Controversies: Deep Thoughts of Artificial Minds
Eventbrite — Scientific Controversies: Deep Thoughts of Artificial Minds (Nov 16, 2025)
AllEvents — Scientific Controversies: Deep Thoughts of Artificial Minds

