
Intelligence Briefing
Self-taught AI alignment researcher who co-founded MIRI (originally the Singularity Institute) in 2000, and one of the earliest and most vocal thinkers on AI existential risk. Co-author, with Nate Soares, of "If Anyone Builds It, Everyone Dies" (2025), a New York Times bestseller arguing that building superintelligent AI under current conditions will lead to human extinction. Also known for the Sequences (a collection of rationalist essays) and the fanfiction "Harry Potter and the Methods of Rationality." MIRI is expanding its communications team in 2026 to alert the public to the dangers of superintelligence.
Operational History
MIRI Communications Expansion (career)
MIRI announces an expansion of its communications team to raise public awareness about the dangers of superintelligence.
Published The Sequences (research)
A collection of rationalist essays exploring topics in AI and rationality.
Co-founded MIRI (founding)
Co-founded the Machine Intelligence Research Institute, originally known as the Singularity Institute for Artificial Intelligence.
AGI Position Assessment
Assesses that significant advances toward AGI are likely within the next 10 years.
Strongly opposes unregulated AI development and advocates stringent safety measures.
- Superintelligent AI poses an existential risk to humanity.
- Alignment must be solved before any AGI is developed.
Proposes international regulation and oversight of AI development.
Intercepted Communications
"If anyone builds it, everyone dies."
"Building a superintelligent AI without solving alignment first is civilizational suicide."
"We need international regulation on AI development."
"No one currently knows how to align a superintelligent AI."
"The dangers of superintelligence are not just theoretical; they are imminent."
Research Output
If Anyone Builds It, Everyone Dies
2025, Little, Brown and Company
New York Times bestseller discussing the existential risks of superintelligent AI.
The Sequences
2015, compiled as Rationality: From AI to Zombies
A foundational text in AI alignment and rationality.
Known Associates
Nate Soares (collaborator)
Co-author of 'If Anyone Builds It, Everyone Dies' and fellow researcher at MIRI.
James Lyons (collaborator)
Collaborates on AI safety research at MIRI.
Joshua Fox (colleague)
Works alongside Yudkowsky at MIRI on alignment projects.
James Baker (mentor)
Provided guidance in early AI alignment research.
Organizational Affiliations
Current
Machine Intelligence Research Institute
Senior Research Fellow
2000 - Present
Former
Singularity Institute for Artificial Intelligence
Director
2000 - 2013
Self-employed
Independent Researcher
1999 - 2000
Source Material
Dossier last updated: 2026-03-04