{
 "@context": "https://schema.org",
 "@type": "FAQPage",
 "mainEntity": [
   {
     "@type": "Question",
     "name": "Why does Yann LeCun criticize the concept of AGI?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "According to Yann LeCun, the concept of Artificial General Intelligence relies on a misleading abstraction. Human intelligence is not general but specialized, shaped by perception, action, and interaction with the physical world."
     }
   },
   {
     "@type": "Question",
     "name": "What do large language models actually do?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "Large language models predict the next token based on previous tokens. They exploit statistical regularities from massive text corpora, without possessing an internal representation of the world or causal understanding."
     }
   },
   {
     "@type": "Question",
     "name": "Can LLMs truly reason?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "LLMs apply a fixed amount of computation to each token. They lack mechanisms to allocate additional resources to more complex problems, which limits their capacity for adaptive reasoning."
     }
   },
   {
     "@type": "Question",
     "name": "What is the Moravec paradox?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "The Moravec paradox highlights that tasks humans perform effortlessly, such as perception or object manipulation, are often the hardest to automate, while certain abstract tasks are easier for machines."
     }
   },
   {
     "@type": "Question",
     "name": "Why do humans learn from so little data?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "Humans learn through observation, interaction, and persistent memory. They build internal representations of the world from direct experience rather than from textual descriptions alone."
     }
   },
   {
     "@type": "Question",
     "name": "What is a world model in artificial intelligence?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "A world model is an internal representation of the environment that allows a system to anticipate possible futures, simulate the consequences of actions, and guide decision-making before execution."
     }
   },
   {
     "@type": "Question",
     "name": "Why will the future of AI not be purely generative?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "Generative models excel at producing plausible content but struggle to extract causal abstractions. Understanding the world requires architectures capable of representing what is relevant and predictable."
     }
   },
   {
     "@type": "Question",
     "name": "What are JEPA architectures?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "JEPA architectures focus on predicting abstract representations rather than generating raw outputs. They enable planning and optimization of actions based on explicit goals."
     }
   },
   {
     "@type": "Question",
     "name": "Why does Yann LeCun advocate for open source AI?",
     "acceptedAnswer": {
       "@type": "Answer",
       "text": "Open source is essential to preserve sovereignty, linguistic diversity, and democratic values. AI controlled by a small number of centralized actors would pose significant systemic risks."
     }
   }
 ]
}

Yann LeCun’s Plan to Go Beyond OpenAI

Last updated: January 30, 2026

5 min read

Language models dominate today’s AI landscape. Yet this trajectory is being challenged by a radically different vision.

Yann LeCun’s vision for the future of AI, beyond LLMs and AGI.

Figure: pencil-style illustrated portrait of Yann LeCun, with a subtle colored background representing the city of Paris.

Context

Over the past two years, language models have become the dominant reference in artificial intelligence. Their ability to generate text and code has fueled the belief that scaling model size, data, and compute would eventually lead to human-level intelligence.

This trajectory, now widely embraced by the industry, rests on an assumption that is rarely questioned. For Yann LeCun, LLMs are not a natural step toward advanced intelligence. Instead, they expose a structural limitation of current AI systems: their inability to understand the real world beyond textual representations.

Figure: conceptual illustration contrasting text-based generative AI using tokens with an approach grounded in real-world perception and spatial representations.

Why AGI Is a False Problem

The issue with the concept of “Artificial General Intelligence”

The term AGI, or Artificial General Intelligence, suggests the idea of a universal intelligence capable of understanding and adapting to any situation. According to Yann LeCun, this concept is built on a fundamental misunderstanding.

Human intelligence is not general. It is deeply specialized, shaped by perception, action, memory, and continuous interaction with a physical environment. Talking about AGI projects a misleading abstraction onto systems that share neither our cognition nor our relationship with the world.

From AGI to AMI: a shift in perspective

LeCun prefers the term AMI, for Advanced Machine Intelligence, or HLI, Human-Level Intelligence. This choice is not merely semantic. It reframes research around concrete capabilities: perceiving, planning, and reasoning based on a model of the world, rather than chasing an ill-defined notion of general intelligence.

Words shape scientific goals. In that sense, the obsession with AGI often obscures the real technical bottlenecks facing contemporary AI.

Figure: comparative diagram between an abstract Artificial General Intelligence (AGI) concept and an AMI approach built on perception, memory, world models, and planning.

What Language Models Actually Do

Token prediction as the core mechanism

All large language models rely on a simple principle: predicting the next token based on previous tokens. Trained on massive text corpora, they capture impressive statistical regularities and generate responses that often appear coherent.

This effectiveness should not be confused with an understanding of the world. The model manipulates linguistic representations only. It has no notion of objects, no physical causality, and no internal representation of reality.

An illusion of reasoning

One often overlooked aspect is computation. To generate each token, an LLM applies a fixed amount of computation. Simple and complex problems are treated in exactly the same way. There is no mechanism that allows the model to allocate more resources to harder situations.

When LLMs appear to reason, they are typically replaying learned patterns. This apparent intelligence quickly breaks down as soon as a problem deviates slightly from the training distribution.
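
To make the point concrete, here is a deliberately tiny sketch in Python, assuming nothing beyond the standard library: a bigram counter standing in for a real language model. The names (bigrams, next_token) and the corpus are illustrative, not any actual implementation; the structural point is that every token, easy or hard, is produced by the same fixed procedure.

# Toy illustration (not a real LLM): greedy next-token prediction from
# bigram counts. Each token is produced by one identical, fixed step.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which token tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # One decoding step: the same fixed amount of work, whatever the difficulty.
    followers = bigrams.get(prev)
    return followers.most_common(1)[0][0] if followers else "."

tokens = ["the"]
for _ in range(5):          # generate five tokens, one identical step each
    tokens.append(next_token(tokens[-1]))
print(" ".join(tokens))     # -> "the cat sat on the cat"

A real transformer replaces the bigram table with billions of parameters, but the decoding loop keeps this shape: no step gets more computation because the question happens to be harder.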

Figure: simplified diagram of how a language model works: prompt converted into tokens, identical computation at each step, then response generation.

The Moravec Paradox: A Key Lens

When simple tasks become impossible

The Moravec paradox highlights a counterintuitive reality: tasks humans perform effortlessly are often the hardest to automate, while certain complex intellectual tasks are relatively easy for machines.

An AI system can solve advanced equations, draft legal text, or analyze financial data. Yet it still struggles with basic actions such as manipulating objects, anticipating physical interactions, or understanding what is possible or impossible in the real world.

Common sense versus formal reasoning

Formal reasoning is a relatively recent cultural invention. It relies on explicit symbols and rules. Common sense, by contrast, is the product of millions of years of evolution and learning through interaction.

LLMs excel in domains rich in text and explicit rules. They fail where intuition, perception, and implicit understanding are required.

Figure: visual comparison between a human struggling with abstract learning and an artificial intelligence unable to perform a simple physical everyday task.

This gap is not due to an abstract superiority of human intelligence, but to how it is constructed.

Why Humans Learn from So Little Data

Observation, interaction, and memory

A child learns about the world without reading manuals. Through observation, experimentation, failure, and correction, they develop an intuitive understanding of complex concepts such as gravity, object permanence, and causality.

This ability relies on direct interaction with the world, combined with persistent memory and an internal representation of reality. This is precisely what current AI systems lack.

The limits of purely textual learning

Even when trained on the entirety of publicly available internet content, LLMs only access an indirect description of the world. Text does not capture physical continuity, real-world constraints, or embodied experience.

This gap explains why simply increasing textual data is insufficient to overcome the fundamental limitations of today’s AI.

World Models: The Missing Piece

Understanding the world before acting

For Yann LeCun, a truly advanced AI must be able to build an internal model of the world. This world model represents the state of reality, imagines possible futures, and evaluates the consequences of actions before they are executed.

This approach fundamentally differs from generative modeling. The goal is no longer to produce plausible outputs, but to simulate reality in order to guide decision-making.

From perception to planning

A world model integrates perception, representation, and prediction into a continuous loop. The system observes the world, builds an abstract representation, simulates multiple future scenarios, and selects the most appropriate action based on objectives and constraints.

This internal simulation mechanism, central to human and animal reasoning, is what language models currently lack.
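
As a rough sketch of this observe, simulate, and select loop, the following Python snippet uses a hand-written one-dimensional "world model" and random-shooting planning. The names world_model, rollout_cost, and plan are illustrative stand-ins, not LeCun's proposed architecture; what it shows is that candidate action sequences are evaluated in imagination, and only the best one is executed.

# Toy sketch of planning with an internal world model (illustrative only).
import random

def world_model(state, action):
    # Hand-written internal model: predict next (position, velocity) for an action.
    position, velocity = state
    velocity = velocity + action            # action is an acceleration in [-1, 1]
    position = position + velocity
    return (position, velocity)

def rollout_cost(state, actions, goal):
    # Simulate the whole sequence in imagination and score the imagined outcome.
    for action in actions:
        state = world_model(state, action)
    return abs(state[0] - goal)

def plan(state, goal, horizon=5, candidates=200):
    # Sample candidate plans, simulate each with the model, keep the best one.
    sampled = ([random.uniform(-1, 1) for _ in range(horizon)] for _ in range(candidates))
    return min(sampled, key=lambda actions: rollout_cost(state, actions, goal))

random.seed(0)
best_plan = plan(state=(0.0, 0.0), goal=10.0)
print([round(a, 2) for a in best_plan])     # chosen before any real-world execution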

Figure: conceptual illustration showing the simulation of multiple future trajectories from a world model, with selection of an optimal decision.

Understanding the world is not just about internal models. It also requires rethinking architectures themselves.

Why the Future of AI Will Not Be Generative

The limits of generation

Generative models aim to reconstruct data, whether text, images, or video. While effective for content creation, this approach fails to extract the abstractions required to understand the world.

Predicting pixels or words is not enough to capture underlying causal relationships.

Abstraction as a priority

LeCun argues for architectures capable of representing what is relevant and predictable while ignoring noise. This ability to abstract is essential to move from perception to understanding.

JEPA and Goal-Driven AI

A different architecture for reasoning

JEPA architectures (Joint-Embedding Predictive Architectures) focus on predicting abstract representations rather than generating raw outputs. They allow systems to measure the compatibility between a world state and a goal, then optimize a sequence of actions.

This form of inference through optimization brings AI closer to human reasoning by enabling planning and anticipation.
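
A minimal sketch of that idea, under strong simplifying assumptions: states and goals are compared in an abstract embedding space, and an action is chosen by minimizing an energy (a measure of incompatibility) between the predicted embedding and the goal embedding. The functions encode, predict_embedding, and energy below are toy stand-ins, not Meta's JEPA implementation.

# Toy sketch of the joint-embedding idea (illustrative only).
from math import dist

def encode(state):
    # Map a raw state into an abstract representation (here, a trivial 2D embedding).
    x, y = state
    return (x / 10.0, y / 10.0)

def predict_embedding(embedding, action):
    # Predict the embedding of the next state given an action (toy stand-in for a predictor).
    dx, dy = action
    return (embedding[0] + dx / 10.0, embedding[1] + dy / 10.0)

def energy(predicted, goal_embedding):
    # Lower energy means higher compatibility between prediction and goal.
    return dist(predicted, goal_embedding)

current = encode((0, 0))
goal = encode((10, 5))
actions = [(1, 0), (0, 1), (1, 1), (-1, 0)]

# Inference as optimization: pick the action whose predicted outcome is most
# compatible with the goal, instead of generating raw pixels or words.
best_action = min(actions, key=lambda a: energy(predict_embedding(current, a), goal))
print(best_action)                          # (1, 1): closest predicted state to the goal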

Controllable and secure systems

By embedding explicit objectives and guardrails, this approach enables the design of more controllable systems that can respect constraints while pursuing goals.

A Scientific and Strategic Challenge

Open source and sovereignty

Beyond research, Yann LeCun emphasizes the importance of open source. AI controlled by a handful of centralized actors would pose major risks to sovereignty, linguistic diversity, and democracy.

Developing open architectures is a necessary condition for a distributed and pluralistic AI future.

What This Vision Changes for Applied AI

Before reasoning, an AI system must perceive accurately. In the real world, data is imperfect, noisy, and heterogeneous. Documents, in particular, are among the first domains where this understanding is put to the test.

The line between spectacular AI and truly operational AI often lies here: understanding reality faithfully before attempting to automate it.

Conclusion

The future of artificial intelligence will not be determined solely by model size or generative quality. It will depend on systems’ ability to understand the world, build coherent internal representations, and act in a reasoned manner.

The path defended by Yann LeCun is more demanding and slower, but likely more robust. It reminds us of a simple truth often overlooked: before imitating human intelligence, we must first understand how it is built.

Move to document automation

With Koncile, automate your data extractions, reduce errors, and boost your productivity in a few clicks with AI-powered OCR.

Jules Ratier
Author and Co-Founder at Koncile - Transform any document into structured data with LLM - jules@koncile.ai

Jules leads product development at Koncile, focusing on how to turn unstructured documents into business value.