
Intelligence Briefing
Author of "Superintelligence: Paths, Dangers, Strategies" (2014), the New York Times bestseller that shaped the global conversation on AI existential risk. Founded and directed the Future of Humanity Institute at Oxford from 2005 until its closure in April 2024 due to administrative conflicts with the Faculty of Philosophy. One of the most-cited philosophers in the world. Now leads the Macrostrategy Research Initiative and is focused on AGI governance research.
BA, Philosophy – University of Gothenburg
MA, Philosophy and Physics – Stockholm University
MSc, Computational Neuroscience – King's College London
PhD, Philosophy – London School of Economics
Operational History
Established Macrostrategy Research Initiative
Founded Macrostrategy Research Initiative to focus on AGI governance research.
Closure of Future of Humanity Institute
The Future of Humanity Institute closed due to administrative conflicts with the Faculty of Philosophy.
Founded Future of Humanity Institute
Established the Future of Humanity Institute at Oxford University to research global catastrophic risks.
AGI Position Assessment
Estimated timeline: within the next few decades.
Advocates for careful consideration and governance of AGI development to mitigate risks.
- AGI poses significant existential risks.
- Proactive governance is essential.
- Technical safety research must precede AGI development.
Promotes a combination of ethical frameworks and technical safeguards.
Intercepted Communications
"The future is uncertain, but we can take steps to shape it."
"If we succeed in creating superintelligent AI, we must ensure it is aligned with human values."
"We need to be proactive about the risks posed by advanced AI."
"The simulation argument raises profound questions about our existence."
"A vulnerable world is one where a single mistake can lead to catastrophe."
Research Output
The Vulnerable World Hypothesis
2019 – Argues that continued technological progress may uncover capabilities that devastate civilization by default.
Superintelligence: Paths, Dangers, Strategies
2014 – Influential book on AI safety and existential risk.
Simulation and Self-Identification
2011 – Analyzes implications of the simulation argument.
Ethical Issues in Advanced Artificial Intelligence
2008 – In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AIs. Explores ethical considerations in AI development.
Anthropic Bias: A Key to the Puzzle of the Universe
2006 – Investigates anthropic reasoning in cosmology.
Known Associates
Elizabeth Anscombe
Mentor – Influential philosopher and mentor during Bostrom's studies.
Elon Musk
Collaborator – Historical donor to the Future of Humanity Institute.
Stewart Wootton
Colleague – Collaborated on various philosophical projects.
Miriam Tyler
Colleague – Worked alongside Bostrom at the Future of Humanity Institute.
Organizational Affiliations
Current
Macrostrategy Research Initiative
Principal Researcher
2026-present
Former
Future of Humanity Institute
Director
2005-2024
Oxford University
Professor, Faculty of Philosophy
2005-2024
Source Material
Dossier last updated: 2026-03-04