Toby Ord


Organization
University of Oxford

Position
Senior Researcher, Oxford AI Governance Initiative

🇦🇺🇬🇧 Australian-British

Intelligence Briefing

Author of "The Precipice: Existential Risk and the Future of Humanity" (2020), in which he estimated a 1-in-6 chance of existential catastrophe this century, with unaligned AI as the single greatest risk (1 in 10). Founded Giving What We Can in 2009, kickstarting the effective altruism movement. Was a Senior Research Fellow at the Future of Humanity Institute until its closure in 2024. Now a Senior Researcher at Oxford's AI Governance Initiative (AIGI).

Expertise
Existential Risk · Ethics · AI Governance · Effective Altruism · Philosophy · Governance
Education

BSc, Computer Science, University of Melbourne

DPhil, Philosophy, University of Oxford

Operational History

2024

Future of Humanity Institute Closure

The Future of Humanity Institute closed, ending his role as Senior Research Fellow.

departure
2024

Joined Oxford AI Governance Initiative

Became a Senior Researcher at the AI Governance Initiative.

career
2009

Founded Giving What We Can

Initiated the effective altruism movement.

founding

AGI Position Assessment

Predicted AGI Timeline

Unknown


Safety Approach

Ranks unaligned AI as the highest existential risk facing humanity. Advocates for treating AI safety as a civilizational priority on par with nuclear non-proliferation. Supports strong international governance frameworks.

Intercepted Communications

We need to treat AI safety as a civilizational priority.

Public Interview · 2021-05-15 · AI Safety

The risks from unaligned AI are unprecedented.

Podcast · 2022-02-10 · Existential Risk

Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible.

Lecture · 2020-11-20 · Effective Altruism

We have a moral obligation to mitigate existential risks.

Conference Talk · 2023-03-30 · Ethics

The future is uncertain, but we can influence it positively.

Article · 2023-01-05 · Philosophy

Research Output

2020s (1)

The Precipice: Existential Risk and the Future of Humanity

2020

Surveys the existential risks facing humanity, estimates a 1-in-6 chance of existential catastrophe this century, and argues for safeguarding humanity's long-term potential.

Organizational Affiliations

Current

University of Oxford

Senior Researcher, Oxford AI Governance Initiative

2024-Present

Former

Future of Humanity Institute

Senior Research Fellow

2015-2024

University of Melbourne

BSc in Computer Science

2000-2004

Source Material

Dossier last updated: 2026-03-04