Eliezer Yudkowsky

AI Alignment Researcher

Organization
Machine Intelligence Research Institute (MIRI)

Position
Co-founder & Senior Research Fellow, MIRI

🇺🇸 American
h-Index: --
Citations: --
Followers: 100K+
Awards: 0
Publications: 2
Companies: 3

Intelligence Briefing

Self-taught AI alignment researcher who co-founded MIRI (originally the Singularity Institute) in 2000. One of the earliest and most vocal thinkers on AI existential risk. Author of "If Anyone Builds It, Everyone Dies" (2025, New York Times bestseller, co-authored with Nate Soares), which argues that building superintelligent AI under current conditions will lead to human extinction. Also known for the Sequences (rationalist essays) and the Harry Potter fanfiction "Harry Potter and the Methods of Rationality." MIRI is expanding its communications team in 2026 to alert the public to superintelligence dangers.

Expertise
AI Alignment · Decision Theory · AI Safety · Rationality

Operational History

2026

MIRI Communications Expansion

MIRI announces expansion of its communications team to raise public awareness about superintelligence dangers.

career
2015

Published The Sequences

A collection of essays on rationality and AI, originally posted to Overcoming Bias and LessWrong between 2006 and 2009 and compiled in 2015 as Rationality: From AI to Zombies.

research
2000

Co-founded MIRI

Co-founded the Machine Intelligence Research Institute, originally known as the Singularity Institute for Artificial Intelligence.

founding

AGI Position Assessment

Risk Level
CRITICAL (scale: LOW / MODERATE / HIGH / CRITICAL)
Predicted AGI Timeline

Expects significant advances toward AGI within the next 10 years.

Strongly opposes unregulated AI development, advocating for stringent safety measures.

Key Beliefs
  • Superintelligent AI poses an existential risk.
  • Alignment must be solved before any AGI development.
Safety Approach

Proposes international regulation and oversight of advanced AI development.

Intercepted Communications

"If anyone builds it, everyone dies."

If Anyone Builds It, Everyone Dies · 2025-01-01 · AI Safety

"Building a superintelligent AI without solving alignment first is civilizational suicide."

Public Statement · 2023-06-15 · AI Alignment

"We need international regulation on AI development."

Public Interview · 2023-09-10 · Policy

"No one currently knows how to align a superintelligent AI."

Public Lecture · 2023-11-20 · AI Safety

"The dangers of superintelligence are not just theoretical; they are imminent."

Conference Speech · 2024-03-05 · AI Risk

Research Output

2020s: 1
2010s: 1

If Anyone Builds It, Everyone Dies

2025

Little, Brown and Company

New York Times bestseller discussing existential risks of AI.

w/ Nate Soares

The Sequences

2015

A foundational text in AI alignment and rationality.

Known Associates

Organizational Affiliations

Current

Machine Intelligence Research Institute

Senior Research Fellow

2000 - Present

Former

Singularity Institute for Artificial Intelligence

Director

2000 - 2015

Self-employed

Independent Researcher

1999 - 2000

Source Material

Dossier last updated: 2026-03-04