Nick Bostrom

Philosopher and AI Safety Advocate

Organization
Macrostrategy Research Initiative

Position
Founder & Principal Researcher, Macrostrategy Research Initiative

🇸🇪 Swedish
h-Index: --
Citations: --
Followers: --
Awards: 0
Publications: 5
Companies: 3

Intelligence Briefing

Author of "Superintelligence: Paths, Dangers, Strategies" (2014), the New York Times bestseller that shaped the global conversation on AI existential risk. Founded and directed the Future of Humanity Institute at Oxford from 2005 until its closure in April 2024 amid administrative conflicts with the Faculty of Philosophy. One of the most-cited philosophers in the world. Now leads the Macrostrategy Research Initiative, focusing on AGI governance research.

Expertise
Existential Risk · AI Safety · Philosophy of AI · Decision Theory · Transhumanism · Philosopher
Education

BA, Philosophy β€” University of Gothenburg

MA, Philosophy and Physics β€” Stockholm University

MSc, Computational Neuroscience β€” King's College London

PhD, Philosophy β€” London School of Economics

Operational History

2026

Established Macrostrategy Research Initiative

Founded Macrostrategy Research Initiative to focus on AGI governance research.

founding
2024

Closure of Future of Humanity Institute

The Future of Humanity Institute closed due to administrative conflicts with the Faculty of Philosophy.

departure
2005

Founded Future of Humanity Institute

Established the Future of Humanity Institute at Oxford University to research global catastrophic risks.

founding

AGI Position Assessment

Risk Level: --
Predicted AGI Timeline

Within the next few decades.

Advocates for careful consideration and governance of AGI development to mitigate risks.

Key Beliefs
  • AGI poses significant existential risks.
  • Proactive governance is essential.
  • Technical safety research must precede AGI development.
Safety Approach

Promotes a combination of ethical frameworks and technical safeguards.

Intercepted Communications

“The future is uncertain, but we can take steps to shape it.”

— Nick Bostrom, Interview, 2023-05-15 (Future of AI)

“If we succeed in creating superintelligent AI, we must ensure it is aligned with human values.”

— Superintelligence: Paths, Dangers, Strategies, 2014 (AI Alignment)

“We need to be proactive about the risks posed by advanced AI.”

— Nick Bostrom, Lecture, 2022-11-10 (AI Safety)

“The simulation argument raises profound questions about our existence.”

— Nick Bostrom, Interview, 2021-08-20 (Simulation Theory)

“A vulnerable world is one where a single mistake can lead to catastrophe.”

— The Vulnerable World Hypothesis, 2019 (Existential Risk)

Research Output

2010s: 3
2000s: 2

The Vulnerable World Hypothesis

2019

Discusses risks of global catastrophic events.

Superintelligence: Paths, Dangers, Strategies

2014

Influential book on AI safety and existential risk.

Simulation and Self-Identification

2011

Analyzes implications of the simulation argument.

Ethical Issues in Advanced Artificial Intelligence

2008

In: Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AIs

Explores ethical considerations in AI development.

Anthropic Bias: A Key to the Puzzle of the Universe

2006

Investigates anthropic reasoning in cosmology.

Known Associates

Organizational Affiliations

Current

Macrostrategy Research Initiative

Principal Researcher

2026-present

Former

Future of Humanity Institute

Director

2005-2024

Oxford University

Philosopher

2005-2024

Source Material

Dossier last updated: 2026-03-04