Miles Brundage


Organization
AVERI

Position
Co-Founder & Executive Director, AVERI (AI Verification and Evaluation Research Institute)

Nationality: 🇺🇸 American
h-Index: —
Citations: —
Followers: —
Awards: 0
Publications: 2
Companies: 3

Intelligence Briefing

Co-founded and leads AVERI (AI Verification and Evaluation Research Institute), a nonprofit launched in January 2026 that promotes independent third-party auditing of frontier AI models. Previously spent six years at OpenAI as Head of Policy Research and then Senior Advisor for AGI Readiness. Left OpenAI in October 2024 for more independence; in March 2025 publicly criticized OpenAI for "rewriting the history" of its safety commitments. AVERI's launch paper, co-authored with 30+ AI safety researchers, lays out a framework for rigorous independent audits of leading AI companies.

Expertise
AI Policy · AI Governance · AI Safety Evaluation · AI Auditing
Education

PhD, Human and Social Dimensions of Science and Technology β€” Arizona State University

Operational History

2026

Founding of AVERI

Co-founded AVERI, a nonprofit focused on independent auditing of AI systems.

founding
2026

Launch of AVERI's Framework

Published a framework for rigorous independent audits of leading AI companies, co-authored with 30+ AI safety researchers.

research
2025

Criticism of OpenAI

Publicly criticized OpenAI for its approach to safety commitments, accusing the company of "rewriting the history" of its safety practices.

policy
2024

Departure from OpenAI

Left OpenAI after six years to pursue independent research and advocacy in AI safety.

departure
2018

The Malicious Use of Artificial Intelligence

Co-authored a report discussing the potential malicious uses of AI and recommendations for mitigating risks.

research

AGI Position Assessment

Risk Level (LOW / MODERATE / HIGH / CRITICAL): —
Predicted AGI Timeline: Unknown


Safety Approach

Strong advocate for independent, external auditing of AI systems. Believes voluntary self-regulation by AI companies is insufficient. Argues we are in "triage mode" for AI policy and need to prioritize building robust evaluation infrastructure now. Left OpenAI partly due to concerns about the gap between safety commitments and practice.

Intercepted Communications

β€œWe are in triage mode for AI policy; we need to prioritize building robust evaluation infrastructure now.”

— Miles Brundage, 2025-03-15, AI Policy

β€œVoluntary self-regulation by AI companies is insufficient to ensure safety.”

— Miles Brundage, 2025-03-15, AI Safety

β€œThe gap between safety commitments and practice is concerning.”

— Miles Brundage, 2025-03-15, AI Safety

β€œIndependent, external auditing is crucial for the future of AI safety.”

— Miles Brundage, 2026-01-10, AI Auditing

β€œWe must address the malicious uses of AI before they become widespread.”

— Miles Brundage, 2018-02-20, AI Ethics

Research Output

2020s: 1
2010s: 1

Frontier AI Auditing: Toward Rigorous Third-Party Assessment (2026)

The Malicious Use of Artificial Intelligence (2018)

Known Associates

Organizational Affiliations

Current

AVERI: Co-Founder & Executive Director (2026-Present)

Former

OpenAI: Head of Policy Research, later Senior Advisor for AGI Readiness (2018-2024)

Future of Humanity Institute: Researcher (2016-2018)

Source Material

Dossier last updated: 2026-03-04