Ajeya Cotra

AI Safety Researcher

Organization
METR

Position
Member of Technical Staff, METR (Model Evaluation & Threat Research)

🇺🇸 American
Awards: 0
Publications: 3
Companies: 3

Intelligence Briefing

AI safety researcher now at METR (Model Evaluation & Threat Research), working on threat modeling and risk assessment for loss-of-control risks from advanced AI. Previously a senior researcher at Open Philanthropy (now Coefficient Giving), where she authored the influential "Biological Anchors" framework for forecasting transformative AI timelines. Her compute-based model for predicting when AI might match human cognition is widely cited in the AI safety community. She has also published detailed AI predictions for 2026, including forecasts on AI revenue, benchmarks, and capabilities.
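The core of a compute-based forecast like this can be sketched in a few lines: pick an "anchor" estimate of the training compute thought sufficient to match human cognition, then extrapolate when growing training budgets would cross it. The function and every parameter value below are hypothetical placeholders for illustration, not figures from the Biological Anchors report itself.

```python
import math

def crossover_year(anchor_flop: float,
                   current_flop: float,
                   current_year: int,
                   annual_growth: float) -> float:
    """Return the (fractional) year in which the largest training runs
    reach anchor_flop, assuming the compute available for training
    grows by a fixed factor `annual_growth` every year."""
    years_needed = math.log(anchor_flop / current_flop, annual_growth)
    return current_year + years_needed

# Hypothetical inputs: 1e25 FLOP available today, an anchor of 1e30 FLOP,
# and effective training compute growing ~2.5x per year. Under these
# made-up numbers the crossover lands in roughly the mid-2030s.
year = crossover_year(anchor_flop=1e30, current_flop=1e25,
                      current_year=2024, annual_growth=2.5)
```

The real framework is far richer (multiple biological anchors, uncertainty distributions over each parameter, algorithmic-progress adjustments), but this is the shape of the extrapolation at its center.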

Expertise
AI Safety · AI Forecasting · Compute Scaling · Risk Assessment
Education

BS, Electrical Engineering and Computer Science β€” University of California, Berkeley

Operational History

2023

Joined METR

Became a Member of Technical Staff at METR, focusing on AI safety and threat research.

Career
2022

Published 'AI Predictions for 2026'

Released a comprehensive report detailing predictions for AI capabilities and benchmarks expected by 2026.

Research
2021

Authored 'Biological Anchors'

Introduced a new framework for forecasting transformative AI timelines, which gained significant attention in the AI safety community.

Research
2020

Senior Researcher at Open Philanthropy

Served as a senior researcher, focusing on AI safety and forecasting.

Career
2019

Joined Open Philanthropy

Became a Senior Program Officer, contributing to AI safety initiatives.

Career

AGI Position Assessment

Predicted AGI Timeline

Unknown

Believes there is a meaningful chance of transformative AI within the next decade, though she has not committed to a specific year.

Safety Approach

Thinks current safety plans that rely on "using AI to make AI safe" may be insufficient. Advocates for rigorous external evaluation, threat modeling, and preparing for scenarios where AI systems could resist human oversight.

Intercepted Communications

"The path to transformative AI is fraught with risks that we must prepare for."

Interview with AI Safety Journal, 2023-01-15 (AI Safety)

"Current safety measures may not be enough; we need rigorous external evaluations."

Podcast on AI Ethics, 2022-06-10 (AI Safety)

"AI could match human cognition sooner than we think, and we must be ready."

Conference on AI Forecasting, 2022-09-20 (AI Forecasting)

"Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover."

Research Paper, 2021-03-05 (AI Risk)

"The future of AI is uncertain, and we need to prepare for all scenarios."

Keynote Speech at AI Safety Summit, 2023-05-12 (AI Safety)

Research Output

2020s (3)

AI Predictions for 2026

2022

Detailed predictions on AI capabilities and benchmarks expected by 2026.

Biological Anchors: A New Framework for Forecasting Transformative AI

2021

Introduced a novel framework for understanding AI development timelines.

Without Specific Countermeasures, the Easiest Path to Transformative AI Likely Leads to AI Takeover

2021

Argued that, absent specific countermeasures, default approaches to training transformative AI risk producing systems that resist human oversight.

Known Associates

Organizational Affiliations

Current

METR

AI Safety Researcher

2023-Present

Former

Open Philanthropy

AI Safety Researcher

2020-2023

Open Philanthropy

AI Safety Advocate

2019-2020

Source Material

Dossier last updated: 2026-03-04