
Intelligence Briefing
Author of "The Precipice: Existential Risk and the Future of Humanity" (2020), in which he estimated a 1-in-6 chance of existential catastrophe this century, with unaligned AI as the single greatest risk (1 in 10). Founded Giving What We Can in 2009, kickstarting the effective altruism movement. Was a Senior Research Fellow at the Future of Humanity Institute until its closure in 2024. Now a Senior Researcher at Oxford's AI Governance Initiative (AIGI).
BSc, Computer Science — University of Melbourne
DPhil, Philosophy — University of Oxford
Operational History
Future of Humanity Institute Closure (departure)
The Future of Humanity Institute closed, ending his role as Senior Research Fellow.
Joined Oxford AI Governance Initiative (career)
Became a Senior Researcher at the AI Governance Initiative.
Founded Giving What We Can (founding)
Initiated the effective altruism movement.
AGI Position Assessment
Ranks unaligned AI as the highest existential risk facing humanity. Advocates for treating AI safety as a civilizational priority on par with nuclear non-proliferation. Supports strong international governance frameworks.
Intercepted Communications
“We need to treat AI safety as a civilizational priority.”
“The risks from unaligned AI are unprecedented.”
“Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible.”
“We have a moral obligation to mitigate existential risks.”
“The future is uncertain, but we can influence it positively.”
Research Output
The Precipice: Existential Risk and the Future of Humanity (2020)
Discusses existential risks and their implications.
Known Associates
Eliezer Yudkowsky (collaborator)
Collaborated on AI safety research.
Nick Bostrom (mentor)
Mentored by Bostrom at the Future of Humanity Institute.
William MacAskill (colleague)
Worked together on effective altruism initiatives.
Elizabeth Sandifer (collaborator)
Collaborated on existential risk research.
Organizational Affiliations
Current
University of Oxford
Senior Researcher, Oxford AI Governance Initiative
2024-Present
Former
Future of Humanity Institute
Senior Research Fellow
2015-2024
University of Melbourne
BSc in Computer Science
2000-2004
Source Material
Dossier last updated: 2026-03-04