
Intelligence Briefing
AI safety researcher now at METR (Model Evaluation & Threat Research), working on threat modeling and assessment of loss-of-control risks from advanced AI. Previously a senior researcher at Open Philanthropy (now Coefficient Giving), where she authored the influential "Biological Anchors" framework for forecasting transformative AI timelines. Her compute-based model for predicting when AI might match human cognition is widely cited in the AI safety community. She has also published detailed AI predictions for 2026, including forecasts on AI revenue, benchmarks, and capabilities.
BS, Electrical Engineering and Computer Science, University of California, Berkeley
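The "Biological Anchors" approach noted above can be sketched in miniature: pick a biological compute anchor, assume the largest affordable training run grows exponentially, and find the crossover year. The sketch below is an illustrative toy under stated assumptions (round-number anchor magnitudes, a 2020 baseline of 1e23 FLOP, a 2.5-year doubling time); it is not the report's actual model.

```python
import math

def crossover_year(anchor_flop: float, flop_now: float,
                   doubling_time_years: float = 2.5,
                   start_year: int = 2020) -> float:
    """Year when the largest affordable training run reaches anchor_flop,
    assuming available compute per run doubles every doubling_time_years."""
    doublings = math.log2(anchor_flop / flop_now)
    return start_year + doublings * doubling_time_years

# Anchor magnitudes below are round-number assumptions for illustration,
# not figures taken from the report.
anchors = {
    "human lifetime": 1e24,  # assumed FLOP: brain computation over one lifetime
    "evolution":      1e41,  # assumed FLOP: neural computation over evolutionary history
}
for name, flop in anchors.items():
    year = crossover_year(flop, flop_now=1e23)
    print(f"{name}: ~{year:.0f}")
```

Shifting the doubling time or an anchor by an order of magnitude moves the larger crossovers by decades, which is one reason the framework reports probability distributions over arrival years rather than point estimates.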
Operational History
[career] Joined METR
Became a Member of Technical Staff at METR, focusing on AI safety and threat research.
[research] Published 'AI Predictions for 2026'
Released a comprehensive report detailing predictions for AI capabilities and benchmarks expected by 2026.
[research] Authored 'Biological Anchors'
Introduced a new framework for forecasting transformative AI timelines, which gained significant attention in the AI safety community.
[career] Senior Researcher at Open Philanthropy
Served as a senior researcher, focusing on AI safety and forecasting.
[career] Joined Open Philanthropy
Became a Senior Program Officer, contributing to AI safety initiatives.
AGI Position Assessment
Unknown
Believes there is a meaningful chance of transformative AI within the next decade. Thinks current safety plans that rely on "using AI to make AI safe" may be insufficient. Advocates for rigorous external evaluation, threat modeling, and preparing for scenarios where AI systems could resist human oversight.
Intercepted Communications
“The path to transformative AI is fraught with risks that we must prepare for.”
“Current safety measures may not be enough; we need rigorous external evaluations.”
“AI could match human cognition sooner than we think, and we must be ready.”
“Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover.”
“The future of AI is uncertain, and we need to prepare for all scenarios.”
Research Output
AI Predictions for 2026
2022: Detailed predictions on AI capabilities and benchmarks expected by 2026.
Biological Anchors: A New Framework for Forecasting Transformative AI
2021: Introduced a novel framework for understanding AI development timelines.
Without Specific Countermeasures, the Easiest Path to Transformative AI Likely Leads to AI Takeover
2021: Discussed the risks associated with unregulated AI development.
Known Associates
Eliezer Yudkowsky [collaborator]
Collaborated on AI safety research and discussions.
Nick Bostrom [mentor]
Mentored in AI ethics and safety frameworks.
Miriam Zelinsky [colleague]
Worked together on AI forecasting projects.
Max Tegmark [collaborator]
Collaborated on research related to AI safety and future scenarios.
Organizational Affiliations
Current
METR
AI Safety Researcher
2023-Present
Former
Open Philanthropy
Senior Researcher
2020-2023
Open Philanthropy
AI Safety Advocate
2019-2020
Source Material
Dossier last updated: 2026-03-04