Yann LeCun spoke on 2025-11-06 at the FT Future of AI Summit in London, where he joined other 2025 Queen Elizabeth Prize laureates for a panel discussion moderated by the Financial Times' AI editor. In his remarks LeCun set out a two-part diagnosis of the moment in AI: LLMs are powering a wave of useful applications and justify heavy investment, but the belief that scaling today's LLM paradigm alone will produce human-level intelligence is a separate and unwarranted bubble.
The following sections present LeCun’s core statements from the discussion, organized by topic and quoted directly where appropriate.
LLMs create real applications and justify infrastructure investment
LeCun opened by distinguishing two different senses of the word "bubble." On the pragmatic side he emphasized the immediate value of current models: "we're not in a bubble in a sense that there are a lot of applications to develop based on LLMs. LLM is the current dominant paradigm and there's a lot to milk there."
He framed the present investment in software and compute as necessary to extend and deploy these capabilities at scale: "that justifies all the investment that is done on the software side and also on the infrastructure side."
LeCun pointed to the arriving generation of always-on, wearable or personal assistants and argued that serving many users will require enormous computation — a fact that makes current infrastructure spending meaningful rather than purely speculative.
The separate bubble: believing LLM scale alone will reach human-level intelligence
Alongside his recognition of LLM value, LeCun warned against conflating commercial progress with a scientific leap to human-level cognition. He said there is "a sense in which there is a bubble" tied to the idea that the present LLM paradigm can simply be pushed to the point of human-equivalent intelligence. In his words: "the idea somehow that the current paradigm of LLM would be pushed to the point of having human level intelligence which I personally don't believe in."
We’re missing a fundamental ingredient — breakthroughs are needed
LeCun stressed that additional breakthroughs are required before machines can exhibit the kind of intelligence seen in animals and humans. He contrasted current progress with remaining gaps in embodied and spatial reasoning: "we need kind of a few breakthroughs before we get to machines that really have the kind of intelligence we observe not just in humans but also animals."
To underscore the gap he added a vivid comparison: "We don't have robots that are nearly as smart as a cat, right?"
From this LeCun concluded that progress is not merely an engineering question of more compute, data, and scaling: "It's actually a scientific question of how do we make progress towards the next generation of AI."
Return to foundational research while continuing engineering deployment
LeCun urged the field to balance the current engineering momentum — the building and milking of practical LLM-based products — with renewed focus on the foundational research that originally drove the field. He framed the audience’s role as part of this recalibration: bringing attention back to the scientific problems that will produce the next generation of AI. He emphasized that the present moment should not mean the end of fundamental inquiry but rather a return to it that complements applied work.
Practical takeaway: apply LLMs, but don’t mistake scale for solution
Throughout his remarks LeCun maintained a pragmatic tone: the current paradigm will continue to produce useful applications and to justify infrastructure investment, but treating scale as a substitute for new ideas risks overpromising what current models can deliver in the way of general or animal-like intelligence. In his formulation, the line between productive engineering and speculative rhetoric must remain clear: invest in and deploy the tools you have, but keep searching for the scientific breakthroughs needed to move beyond them.