
Intelligence Briefing
Co-founded and leads AVERI (AI Verification and Evaluation Research Institute), a nonprofit launched in January 2026 that promotes independent third-party auditing of frontier AI models. Previously spent six years at OpenAI, first as Head of Policy Research and then as Senior Advisor for AGI Readiness. Left OpenAI in October 2024, citing a desire for greater independence; in March 2025 publicly criticized the company for "rewriting the history" of its safety commitments. AVERI's launch paper, co-authored with more than 30 AI safety researchers, lays out a framework for rigorous independent audits of leading AI companies.
PhD, Human and Social Dimensions of Science and Technology, Arizona State University
Operational History
Founding of AVERI
Co-founded AVERI, a nonprofit focused on independent auditing of AI systems.
Launch of AVERI's Framework
Published a framework for rigorous independent audits of leading AI companies, co-authored with 30+ AI safety researchers.
Criticism of OpenAI
Publicly criticized OpenAI for its approach to safety commitments, stating the company was "rewriting the history" of its safety practices.
Departure from OpenAI
Left OpenAI after six years to pursue independent research and advocacy in AI safety.
The Malicious Use of Artificial Intelligence
Co-authored a report discussing the potential malicious uses of AI and recommendations for mitigating risks.
AGI Position Assessment
Strong advocate for independent, external auditing of AI systems. Believes voluntary self-regulation by AI companies is insufficient. Argues we are in "triage mode" for AI policy and need to prioritize building robust evaluation infrastructure now. Left OpenAI partly due to concerns about the gap between safety commitments and practice.
Intercepted Communications
"We are in triage mode for AI policy; we need to prioritize building robust evaluation infrastructure now."
"Voluntary self-regulation by AI companies is insufficient to ensure safety."
"The gap between safety commitments and practice is concerning."
"Independent, external auditing is crucial for the future of AI safety."
"We must address the malicious uses of AI before they become widespread."
Research Output
Frontier AI Auditing: Toward Rigorous Third-Party Assessment (2026)
The Malicious Use of Artificial Intelligence (2018)
Known Associates
Sam Altman
Colleague: worked together at OpenAI.
Nick Bostrom
Mentor: mentored Brundage during his time at the Future of Humanity Institute.
Elon Musk
Collaborator: collaborated on AI safety initiatives.
Kate Crawford
Collaborator: collaborated on AI policy research.
Organizational Affiliations
Current
AVERI
Co-Founder & Executive Director
2026-Present
Former
OpenAI
Head of Policy Research, later Senior Advisor for AGI Readiness
2018-2024
Future of Humanity Institute
Researcher
2016-2018
Source Material
Dossier last updated: 2026-03-04