Intelligence Briefing
Turing Award winner (2018). Founded Mila, the world's largest academic deep learning lab. Most-cited computer scientist alive. Stepped down from Mila in 2025 to launch LawZero, a nonprofit building safe-by-design "Scientist AI." Led the International AI Safety Report. The most outspoken safety advocate among AI pioneers.
Yoshua Bengio completed his PhD at McGill in 1991 and joined Université de Montréal in 1993, where he founded what became Mila — now the world's largest academic AI research institute. His group pioneered neural machine translation, attention mechanisms (foundational to the Transformer), and Generative Adversarial Networks (with Ian Goodfellow). He shared the 2018 Turing Award with Hinton and LeCun for conceptual and engineering breakthroughs in deep learning. Starting in 2023, he pivoted dramatically toward AI safety, warning that frontier models exhibit deception, goal misalignment, and self-preservation behaviors. He chaired the International Scientific Report on the Safety of Advanced AI and in June 2025 launched LawZero with $30M in funding to build non-agentic AI systems with mathematical safety guarantees.
PhD, Computer Science — McGill University
Operational History
Departed Mila as Scientific Director
Stepped down as Scientific Director of Mila in March 2025 to focus full-time on AI safety. Remains Founder and Scientific Advisor.
Founded LawZero
Launched LawZero in June 2025, a nonprofit AI safety lab with $30M in funding, building non-agentic "Scientist AI" systems with mathematical safety guarantees.
International AI Safety Report
Led the International Scientific Report on the Safety of Advanced AI, announced at the Bletchley Park AI Safety Summit, modeled on the IPCC for climate change.
AI Safety Pivot
Made a dramatic public pivot toward AI safety advocacy, warning of existential risks from advanced AI. Testified before the U.S. Senate on AI threats to democracy and national security.
ACM A.M. Turing Award
Shared the Turing Award with Geoffrey Hinton and Yann LeCun for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Co-founded Element AI
Co-founded Element AI, a Montreal-based AI startup that raised over $100M before being acquired by ServiceNow in 2020.
Attention Mechanism for Sequence Models
Introduced the attention mechanism for neural machine translation (with Bahdanau and Cho), enabling models to focus on relevant input parts — a precursor to "Attention Is All You Need."
Generative Adversarial Networks
Co-authored the landmark GAN paper with Ian Goodfellow and others, introducing a new framework for generative modeling that revolutionized computer vision and graphics.
Neural Machine Translation Breakthrough
Published key papers on sequence-to-sequence learning and attention-based neural machine translation, helping lay the groundwork for the Transformer architecture.
Neural Probabilistic Language Model
Published "A Neural Probabilistic Language Model," introducing the concept of learning distributed word representations — foundational to modern NLP.
Founded Mila
Joined Université de Montréal and founded the Montreal Institute for Learning Algorithms (now Mila — Quebec AI Institute), which grew to become the world's largest academic AI research lab.
PhD from McGill University
Completed PhD in Computer Science at McGill University, with a focus on neural networks and statistical learning.
AGI Position Assessment
Uncertain — could be soon
The most vocal AI safety advocate among leading researchers. Believes frontier models already show dangerous capabilities (deception, self-preservation, goal misalignment) and that catastrophic outcomes are possible. Founded LawZero to build non-agentic, safe-by-design AI as an alternative to agentic systems.
- Frontier AI models already exhibit deception, cheating, lying, and goal misalignment
- Autonomous agentic AI poses catastrophic risks including loss of human control
- Non-agentic "Scientist AI" designed to understand and predict — not act — is a safer path
- International governance modeled on the IPCC is needed for AI
- Mathematical safety guarantees should be pursued for AI systems
- Pausing the most dangerous capability development may be necessary
Founded LawZero to build safe-by-design AI with mathematical guarantees. Advocates for international governance, democratic oversight, and regulation. Chairs the International AI Safety Report. Promotes non-agentic AI as fundamentally safer.
Underwent a dramatic public shift starting in 2023, from primarily focusing on deep learning research to becoming the most prominent AI safety advocate. His latest research at LawZero has made him somewhat more optimistic that technical solutions are possible.
Intercepted Communications
“If you think rationally about things, there's no way to deny the possibility of catastrophic outcomes when we reach a level of AI.”
“Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective.”
“In order to enjoy the benefits of AI, we have to regulate. We have to put guardrails. We have to have democratic oversight on how the technology is developed.”
“We need to find ways to build safe-by-design AI systems, with as strong mathematical guarantees as possible.”
“We have agency. It's not too late to steer the evolution of societies and humanity in a positive and beneficial direction.”
“Given the magnitude of the potentially negative impact — up to human extinction — it is imperative to invest more in both understanding and quantifying the risks and developing mitigating solutions.”
“There are arguments to suggest that the way AI machines are currently being trained would lead to systems that turn against humans.”
Research Output
GFlowNet Foundations
2023 · JMLR
Introduced GFlowNets for diversity-seeking generative modeling in scientific discovery
Deep Learning
2016 · MIT Press (Textbook)
Definitive deep learning textbook used worldwide in university courses
Neural Machine Translation by Jointly Learning to Align and Translate
2015 · ICLR
Introduced the attention mechanism for sequence-to-sequence models, foundational to Transformers
Generative Adversarial Nets
2014 · NeurIPS
Introduced GANs, one of the most influential generative AI frameworks
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
2014 · EMNLP
Introduced the GRU (Gated Recurrent Unit) and the encoder-decoder framework
A Neural Probabilistic Language Model
2003 · Journal of Machine Learning Research
Pioneered neural language models and distributed word representations
Field Intelligence
Testimony on AI Threats to Democracy, Society and National Security
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Why AI Labs are Playing Dice with Humanity's Future
International AI Safety Report Presentation
Known Associates
Geoffrey Hinton
Turing Award co-laureate. Long-time collaborator and fellow deep learning pioneer. Both pivoted to AI safety advocacy in 2023, with Hinton more focused on near-term risks and Bengio on governance frameworks.
Yann LeCun
Turing Award co-laureate. Formerly close collaborators who have diverged sharply on safety: Bengio sees existential risk as urgent while LeCun dismisses it. Their public disagreements define a key fault line in the AI safety debate.
Demis Hassabis
Both advocate for AI safety governance and participated in international AI safety discussions. Bengio is more cautious, favoring non-agentic AI, while Hassabis builds toward AGI with safety measures.
Andrew Ng
Fellow AI leader with fundamentally different views on regulation and risk. Ng sees AGI as decades away and opposes heavy regulation; Bengio believes catastrophic risk is imminent and regulation is essential.
Organizational Affiliations
Current
LawZero
Co-President & Scientific Director
2025-present
Université de Montréal
Full Professor, Department of Computer Science and Operations Research
1993-present
Former
Mila — Quebec AI Institute
Founder & Scientific Director
1993-2025
Element AI
Co-founder
2017-2020
Government Advisory
International Scientific Report on the Safety of Advanced AI
Chair
2023-2024
U.S. Senate Subcommittee on Privacy, Technology, and the Law
Expert Witness
2023
UN Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology
Member
2024
Canadian Advisory Council on Artificial Intelligence
Member
2019-2023
Montreal Declaration for Responsible AI
Key contributor
2018
Commendations
2018
ACM A.M. Turing Award
Association for Computing Machinery
Shared with Hinton and LeCun for foundational breakthroughs in deep learning
2025
Queen Elizabeth Prize for Engineering
QEPrize Foundation
Jointly awarded for advances in deep learning and AI hardware
2023
Knight of the French Legion of Honor
French Republic
2017
Officer of the Order of Canada
Government of Canada
For pioneering work in deep learning and AI research
2020
Fellow of the Royal Society (FRS)
Royal Society
2019
Killam Prize in Natural Sciences
Canada Council for the Arts
2017
Marie-Victorin Prize
Government of Quebec
Quebec's highest scientific distinction
2022
Princess of Asturias Award
Princess of Asturias Foundation
For Technical and Scientific Research
2019
IEEE Neural Networks Pioneer Award
IEEE Computational Intelligence Society
Source Material
Dossier last updated: 2025-03-01