Intelligence Briefing
Turing Award winner (2018). Pioneered convolutional neural networks. Left Meta in Nov 2025 after 12 years to found AMI Labs, a Paris-based startup pursuing world models and JEPA architectures as an alternative to LLMs. Raised €500M at a €3B valuation before even launching.
Yann LeCun is a French computer scientist widely recognized as a pioneer of convolutional neural networks and modern deep learning. His LeNet architecture, developed at Bell Labs, laid the foundation for modern computer vision. After 15 years at AT&T Bell Labs, he joined NYU as a professor and later became the founding director of Facebook AI Research (FAIR) and Meta's Chief AI Scientist. Known for his contrarian views, he has been one of the most vocal critics of large language models, arguing they are fundamentally incapable of achieving human-level intelligence. In November 2025, he left Meta to found AMI Labs in Paris, pursuing his vision of world models and Joint Embedding Predictive Architectures (JEPA) as the path to Advanced Machine Intelligence.
PhD, Computer Science — Pierre and Marie Curie University (UPMC)
Diplôme d'Ingénieur, Electrical Engineering — ESIEE Paris
Operational History
Left Meta After 12 Years
Departed Meta in November 2025 after 12 years, coinciding with Meta's strategic pivot toward LLM-based products under new Chief AI Officer Alexandr Wang.
Founded AMI Labs
Founded Advanced Machine Intelligence (AMI) Labs in Paris with CEO Alexandre LeBrun. Raised approximately €500M at a ~€3B valuation before even launching a product. Pursuing world models and JEPA architectures as an alternative to LLMs.
V-JEPA Published
Released V-JEPA (Video Joint Embedding Predictive Architecture), applying the JEPA framework to video understanding by learning abstract representations through masked video prediction.
JEPA Architecture Proposal
Published his vision paper "A Path Towards Autonomous Machine Intelligence" proposing Joint Embedding Predictive Architecture (JEPA) as the path to human-level AI, arguing against autoregressive LLMs.
Legion of Honor
Appointed Chevalier de la Légion d'honneur by the French government in recognition of contributions to artificial intelligence.
ACM A.M. Turing Award
Received the Turing Award jointly with Geoffrey Hinton and Yoshua Bengio for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Founded Facebook AI Research (FAIR)
Recruited by Mark Zuckerberg to create and lead Facebook AI Research (FAIR), one of the world's leading AI research labs. Later became VP and Chief AI Scientist at Meta.
NYU Professorship
Joined New York University as Silver Professor of Computer Science, Courant Institute of Mathematical Sciences, and later co-founded the NYU Center for Data Science.
LeNet and Document Recognition
Published "Gradient-based learning applied to document recognition," introducing the LeNet-5 architecture that became the blueprint for modern convolutional neural networks.
Invention of Convolutional Neural Networks
Developed the first practical convolutional neural network (CNN) at Bell Labs, applying backpropagation to recognizing handwritten zip codes for the U.S. Postal Service.
Joined AT&T Bell Labs
Joined AT&T Bell Labs in Holmdel, New Jersey, where he would develop foundational work on convolutional neural networks over the next 15 years.
PhD from Pierre and Marie Curie University
Completed PhD in Computer Science at UPMC (now Sorbonne University) in Paris, working on a framework for learning in neural networks under the supervision of Maurice Milgram.
AGI Position Assessment
Not via current LLM approaches
Vocal skeptic of AI existential risk and the LLM path to AGI. Believes autoregressive language models are fundamentally incapable of achieving human-level intelligence. Champions world models and JEPA architectures as the true path to Advanced Machine Intelligence.
- Autoregressive LLMs are fundamentally limited and will not lead to AGI
- World models that understand physics and causality are the path to human-level AI
- Current AI systems do not understand the world as well as a housecat
- Open-source AI is the safest and most beneficial approach
- AI safety doomerism is counterproductive and factually wrong
- Joint Embedding Predictive Architectures (JEPA) are the key to machine intelligence
Believes open-source development is inherently safer than closed development. Opposes heavy regulation. Argues that making AI widely available allows more people to find and fix problems.
LeCun has been remarkably consistent in his views. He has argued against LLM supremacy since before ChatGPT launched. His departure from Meta to found AMI Labs represents the ultimate bet on his contrarian thesis.
Intercepted Communications
“LLMs can do none of those or they can only do them in a very primitive way and they don't really understand the physical world. They don't really have persistent memory. They can't really reason and they certainly can't plan.”
“Existing systems don't understand the world as well as a housecat.”
“Because of the autoregressive prediction, every time it produces a token or a word, there is some level of probability for that word to take you out of the set of reasonable answers... the probability that you stay within the set of correct answers decreases exponentially.”
“The role of a world model is to predict what the outcome of a series of actions is going to be.”
“You certainly don't tell a researcher like me what to do.”
“AI doomers are wrong. The idea that AI will take over the world and destroy humanity is just not realistic given the current state of technology.”
“Open source is not just good for AI — it's essential for safety. Making AI available to everyone is the best way to make it safe.”
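The compounding-error claim quoted above ("the probability that you stay within the set of correct answers decreases exponentially") lends itself to a back-of-the-envelope calculation: if each generated token independently stays within the set of reasonable answers with probability 1 - e, an n-token answer stays reasonable with probability (1 - e)^n. A minimal sketch; the independence assumption and the 1% per-token error rate below are illustrative simplifications, not figures from LeCun:

```python
# Toy illustration of the compounding-error argument against
# autoregressive generation: assume each token independently stays
# "reasonable" with probability (1 - per_token_error).

def p_correct(n_tokens: int, per_token_error: float) -> float:
    """Probability an n-token autoregressive output stays within the
    set of reasonable answers, under independent per-token errors."""
    return (1.0 - per_token_error) ** n_tokens

# Even a 1% per-token error rate erodes long outputs quickly.
for n in (10, 100, 1000):
    print(n, p_correct(n, per_token_error=0.01))
```

Real models do not have independent per-token errors, so this is a caricature of the dynamics rather than a measurement; the point of the sketch is only the exponential decay in sequence length.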
Research Output
V-JEPA: Video Joint Embedding Predictive Architecture
2024, arXiv preprint
Extended JEPA to video understanding, learning physical world representations through masked video prediction without pixel-level reconstruction.
A Path Towards Autonomous Machine Intelligence
2022, OpenReview (preprint)
LeCun's vision paper proposing Joint Embedding Predictive Architecture (JEPA) as the path to human-level AI, directly challenging the LLM paradigm.
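The core JEPA idea, predicting in representation space rather than in input (pixel) space, can be sketched with untrained linear maps: two encoders embed context and target, a predictor maps the context embedding toward the target embedding, and the loss is measured between embeddings. Everything below (shapes, linear encoders, variable names) is an illustrative simplification, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for the sketch.
D_IN, D_EMB = 64, 16
W_ctx = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)     # context encoder
W_tgt = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)     # target encoder
W_pred = rng.normal(size=(D_EMB, D_EMB)) / np.sqrt(D_EMB)  # predictor

def jepa_loss(x: np.ndarray, y: np.ndarray) -> float:
    """Latent-space prediction error: predict the target's embedding
    from the context's embedding; no pixel-level reconstruction."""
    s_x = x @ W_ctx          # embed the context
    s_y = y @ W_tgt          # embed the target
    s_y_hat = s_x @ W_pred   # predict the target embedding
    return float(np.mean((s_y_hat - s_y) ** 2))

x = rng.normal(size=(8, D_IN))   # stand-in for visible patches
y = rng.normal(size=(8, D_IN))   # stand-in for masked patches
print(jepa_loss(x, y))
```

In the actual proposal the encoders are deep networks trained jointly (with machinery to avoid representational collapse); the sketch only shows where the loss lives, which is what distinguishes JEPA from generative pixel prediction.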
Deep Learning
2015, Nature
Landmark review paper establishing the foundations and state-of-the-art of deep learning for a broad scientific audience.
Dimensionality Reduction by Learning an Invariant Mapping
2006, CVPR
Introduced the contrastive loss, applied within a Siamese network, a precursor to modern self-supervised learning methods.
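The pairwise loss from this paper pulls similar pairs together by their squared distance and pushes dissimilar pairs apart only while they sit inside a margin. A minimal sketch of that formulation (the function name and example distances are illustrative):

```python
def contrastive_loss(d: float, similar: int, margin: float = 1.0) -> float:
    """Pairwise contrastive loss in the style of Hadsell, Chopra &
    LeCun (CVPR 2006).

    d       : Euclidean distance between the two embeddings of a pair
    similar : 1 if the pair should map close together, 0 otherwise
    margin  : dissimilar pairs are penalized only inside this margin
    """
    if similar:
        return 0.5 * d ** 2                    # pull similar pairs together
    return 0.5 * max(0.0, margin - d) ** 2     # push dissimilar pairs apart

print(contrastive_loss(0.2, similar=1))   # small pull term
print(contrastive_loss(0.2, similar=0))   # push: pair is inside the margin
print(contrastive_loss(1.5, similar=0))   # zero: already beyond the margin
```

The margin is what prevents the trivial solution of mapping everything to one point, the same collapse problem later self-supervised methods (including JEPA variants) address in other ways.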
Gradient-based learning applied to document recognition
1998, Proceedings of the IEEE
Introduced the LeNet-5 architecture, the foundational blueprint for all modern convolutional neural networks. Applied to reading handwritten checks.
Backpropagation Applied to Handwritten Zip Code Recognition
1989, Neural Computation
The first practical application of convolutional neural networks, used by the U.S. Postal Service for reading handwritten zip codes.
Field Intelligence
Yann LeCun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI
How AIs Will Match and Exceed Human-level Intelligence
Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning
Known Associates
Geoffrey Hinton
Collaborator. Co-recipient of the 2018 Turing Award. LeCun was Hinton's postdoctoral researcher at the University of Toronto in the late 1980s. They now sharply disagree on AI risk — Hinton warns of existential danger while LeCun dismisses "doomerism."
Yoshua Bengio
Collaborator. Co-recipient of the 2018 Turing Award and co-author of the landmark "Deep Learning" Nature paper. Both trained at Bell Labs. Bengio has shifted toward AI safety while LeCun remains focused on advancing capabilities through open source.
Demis Hassabis
Rival. Represents a fundamentally different approach to AI. LeCun champions open-source world models and JEPA; Hassabis leads Google DeepMind with a more closed, reinforcement learning-heavy approach. Both are competing to define the path to AGI.
Andrew Ng
Colleague. Fellow AI pioneer and educator. Both advocate for open-source AI and oppose heavy regulation. Ng popularized deep learning education while LeCun advanced the research frontier at FAIR.
Organizational Affiliations
Current
AMI Labs (Advanced Machine Intelligence)
Founder & Executive Chairman
2025-present
Former
Meta (Facebook)
VP & Chief AI Scientist; Founding Director of FAIR
2013-2025
AT&T Bell Labs
Head, Image Processing Research Department
1988-2003
Government Advisory
French National AI Strategy Committee
Advisor
2018
Commendations
2018
ACM A.M. Turing Award
Association for Computing Machinery
Jointly with Geoffrey Hinton and Yoshua Bengio for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
2020
Legion of Honor (Chevalier)
French Republic
France's highest order of merit, awarded for contributions to artificial intelligence.
2014
IEEE Neural Network Pioneer Award
IEEE Computational Intelligence Society
For pioneering contributions to the development of convolutional neural networks.
2022
Princess of Asturias Award for Technical and Scientific Research
Princess of Asturias Foundation
Jointly with Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis.
2015
IEEE PAMI Distinguished Researcher Award
IEEE
For contributions to pattern analysis and machine intelligence.
2017
Member of the National Academy of Engineering
National Academy of Engineering
Elected for contributions to machine learning and neural network models for pattern recognition.
Source Material
Dossier last updated: 2025-03-01