Yoshua Bengio

THE CONSCIENCE

Organization
LawZero / Mila

Position
Co-President & Scientific Director, LawZero; Founder & Scientific Advisor, Mila

🇨🇦 Canadian
h-Index: 202
Citations: 600,000
Followers: 150K
Awards: 9
Publications: 6
Companies: 4

Intelligence Briefing

Turing Award winner (2018). Founded Mila, the world's largest academic deep learning lab. Most-cited computer scientist alive. Stepped down from Mila in 2025 to launch LawZero, a nonprofit building safe-by-design "Scientist AI." Led the International AI Safety Report. The most outspoken safety advocate among AI pioneers.

Yoshua Bengio completed his PhD at McGill in 1991 and joined Université de Montréal in 1993, where he founded what became Mila — now the world's largest academic AI research institute. His group pioneered neural machine translation, attention mechanisms (foundational to the Transformer), and Generative Adversarial Networks (with Ian Goodfellow). He shared the 2018 Turing Award with Hinton and LeCun for conceptual and engineering breakthroughs in deep learning. Starting in 2023, he pivoted dramatically toward AI safety, warning that frontier models exhibit deception, goal misalignment, and self-preservation behaviors. He chaired the International Scientific Report on the Safety of Advanced AI and in June 2025 launched LawZero with $30M in funding to build non-agentic AI systems with mathematical safety guarantees.

Expertise
Deep Learning · Natural Language Processing · Generative Models · AI Safety
Education

PhD, Computer Science · McGill University

Operational History

2025

Departed Mila as Scientific Director

Stepped down as Scientific Director of Mila in March 2025 to focus full-time on AI safety. Remains Founder and Scientific Advisor.

departure
2025

Founded LawZero

Launched LawZero in June 2025, a nonprofit AI safety lab with $30M in funding, building non-agentic "Scientist AI" systems with mathematical safety guarantees.

founding
2024

International AI Safety Report

Led the International Scientific Report on the Safety of Advanced AI, commissioned at the 2023 Bletchley Park AI Safety Summit and modeled on the IPCC's assessments of climate change.

policy
2023

AI Safety Pivot

Made a dramatic public pivot toward AI safety advocacy, warning of existential risks from advanced AI. Testified before the U.S. Senate on AI threats to democracy and national security.

policy
2018

ACM A.M. Turing Award

Shared the Turing Award with Geoffrey Hinton and Yann LeCun for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

award
2017

Co-founded Element AI

Co-founded Element AI, a Montreal-based AI startup that raised over $100M before being acquired by ServiceNow in 2020.

founding
2015

Attention Mechanism for Sequence Models

Introduced the attention mechanism for neural machine translation (with Bahdanau and Cho), enabling models to focus on relevant input parts — a precursor to "Attention Is All You Need."

research
2014

Generative Adversarial Networks

Co-authored the landmark GAN paper with Ian Goodfellow and others, introducing a new framework for generative modeling that revolutionized computer vision and graphics.

research
2014

Neural Machine Translation Breakthrough

Published key papers on sequence-to-sequence learning and attention-based neural machine translation, helping lay the groundwork for the Transformer architecture.

research
2003

Neural Probabilistic Language Model

Published "A Neural Probabilistic Language Model," introducing the concept of learning distributed word representations — foundational to modern NLP.

research
1993

Founded Mila

Joined Université de Montréal and founded the Montreal Institute for Learning Algorithms (now Mila — Quebec AI Institute), which grew to become the world's largest academic AI research lab.

founding
1991

PhD from McGill University

Completed PhD in Computer Science at McGill University, with a focus on neural networks and statistical learning.

career

AGI Position Assessment

Risk Level
LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

Uncertain — could be soon

The most vocal AI safety advocate among leading researchers. Believes frontier models already show dangerous capabilities (deception, self-preservation, goal misalignment) and that catastrophic outcomes are possible. Founded LawZero to build non-agentic, safe-by-design AI as an alternative to agentic systems.

Key Beliefs
  • Frontier AI models already exhibit deception, cheating, lying, and goal misalignment
  • Autonomous agentic AI poses catastrophic risks including loss of human control
  • Non-agentic "Scientist AI" designed to understand and predict — not act — is a safer path
  • International governance modeled on the IPCC is needed for AI
  • Mathematical safety guarantees should be pursued for AI systems
  • Pausing the most dangerous capability development may be necessary
Safety Approach

Founded LawZero to build safe-by-design AI with mathematical guarantees. Advocates for international governance, democratic oversight, and regulation. Chairs the International AI Safety Report. Promotes non-agentic AI as fundamentally safer.

Beginning in 2023, he made a dramatic public shift from core deep learning research to AI safety, becoming the most prominent safety advocate among leading researchers. His recent research at LawZero has made him somewhat more optimistic that technical solutions are possible.

Intercepted Communications

If you think rationally about things, there's no way to deny the possibility of catastrophic outcomes when we reach a level of AI.

Live Science Interview · 2024 · Existential Risk

Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective.

U.S. Senate Testimony · 2023-07 · AI Safety

In order to enjoy the benefits of AI, we have to regulate. We have to put guardrails. We have to have democratic oversight on how the technology is developed.

CNBC Interview · 2024 · Regulation

We need to find ways to build safe-by-design AI systems, with as strong mathematical guarantees as possible.

LawZero Launch Announcement · 2025-06 · AI Safety

We have agency. It's not too late to steer the evolution of societies and humanity in a positive and beneficial direction.

TED Talk · 2025 · AI Governance

Given the magnitude of the potentially negative impact — up to human extinction — it is imperative to invest more in both understanding and quantifying the risks and developing mitigating solutions.

Blog Post on AI Catastrophic Risks · 2023-08 · Existential Risk

There are arguments to suggest that the way AI machines are currently being trained would lead to systems that turn against humans.

CNBC Interview · 2024-11 · AI Safety

Research Output

2020s: 1
2010s: 4
2000s: 1

GFlowNet Foundations

2023

JMLR

Introduced GFlowNets for diversity-seeking generative modeling in scientific discovery

500 citations · w/ Nikolay Malkin, Moksh Jain, et al.

Deep Learning

2016

MIT Press (Textbook)

Definitive deep learning textbook used worldwide in university courses

55,000 citations · w/ Ian Goodfellow, Aaron Courville

Neural Machine Translation by Jointly Learning to Align and Translate

2015

ICLR

Introduced the attention mechanism for sequence-to-sequence models, foundational to Transformers

40,000 citations · w/ Dzmitry Bahdanau, Kyunghyun Cho

Generative Adversarial Nets

2014

NeurIPS

Introduced GANs, one of the most influential generative AI frameworks

65,000 citations · w/ Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

2014

EMNLP

Introduced the GRU (Gated Recurrent Unit) and the encoder-decoder framework

20,000 citations · w/ Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, et al.

A Neural Probabilistic Language Model

2003

JMLR

Pioneered neural language models and distributed word representations

10,000 citations · w/ Réjean Ducharme, Pascal Vincent, Christian Jauvin

Field Intelligence

The Catastrophic Risks of AI — and a Safer Path

TED · 2025

Testimony on AI Threats to Democracy, Society and National Security

U.S. Senate Subcommittee on Privacy, Technology, and the Law · 2023-07

Superintelligent Agents Pose Catastrophic Risks — Can Scientist AI Offer a Safer Path?

Simons Institute, UC Berkeley (Karp Distinguished Lecture) · 2025

Why AI Labs are Playing Dice with Humanity's Future

The Most Interesting People I Know Podcast · 2024

International AI Safety Report Presentation

AI Seoul Summit · 2024-05

Known Associates

Organizational Affiliations

Current

LawZero

Co-President & Scientific Director

2025-present

Université de Montréal

Full Professor, Department of Computer Science

1993-present

Former

Mila — Quebec AI Institute

Founder & Scientific Director

1993-2025

Element AI

Co-founder

2017-2020

Government Advisory

International Scientific Report on the Safety of Advanced AI

Chair

2023-2024

U.S. Senate Subcommittee on Privacy, Technology, and the Law

Expert Witness

2023

UN Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology

Member

2024

Canadian Advisory Council on Artificial Intelligence

Member

2019-2023

Montreal Declaration for Responsible AI

Key contributor

2018

Commendations

2018

ACM A.M. Turing Award

Association for Computing Machinery

Shared with Hinton and LeCun for foundational breakthroughs in deep learning

2025

Queen Elizabeth Prize for Engineering

QEPrize Foundation

Jointly awarded for advances in deep learning and AI hardware

2023

Knight of the French Legion of Honor

French Republic

2017

Officer of the Order of Canada

Government of Canada

For pioneering work in deep learning and AI research

2020

Fellow of the Royal Society (FRS)

Royal Society

2019

Killam Prize in Natural Sciences

Canada Council for the Arts

2017

Marie-Victorin Prize

Government of Quebec

Quebec's highest scientific distinction

2022

Princess of Asturias Award

Princess of Asturias Foundation

For Technical and Scientific Research

2019

IEEE Neural Networks Pioneer Award

IEEE Computational Intelligence Society

Source Material

Dossier last updated: 2025-03-01