Ethan Perez

AI Safety Researcher

Organization
Anthropic

Position
Research Scientist

h-Index: --
Citations: 22,503
Followers: --
Awards: 0
Publications: 2
Companies: 1

Intelligence Briefing

Ethan Perez is a prominent researcher in the field of AI safety, focusing on alignment and red-teaming methodologies for evaluating AI systems.

With a background in both safety and alignment, Ethan Perez has contributed significantly to the discourse on AI safety at Anthropic. He holds a BS from Rice University and a PhD from New York University, where he honed his expertise in AI alignment strategies and safety protocols.

Expertise
Safety · Alignment · Red-Teaming
Education

BS, Rice University

PhD, New York University

Operational History

2023 · Red-Teaming Initiatives (research)

Led several red-teaming initiatives to assess AI systems for safety vulnerabilities.

2022 · Published on AI Alignment (research)

Published a paper discussing the challenges and methodologies in AI alignment.

2021 · Joined Anthropic (career)

Ethan Perez joined Anthropic as a research scientist focusing on AI safety and alignment.

AGI Position Assessment

Risk Level
Predicted AGI Timeline

Unknown

Safety Approach

Ethan advocates for rigorous safety measures in AI development, emphasizing the importance of alignment with human values.

Intercepted Communications

"AI safety is not just a technical challenge; it's a moral imperative."

Interview with AI Weekly · 2023-05-15 · AI Safety

"Alignment is key to ensuring that AI systems act in accordance with human values."

Panel Discussion at AI Safety Conference · 2023-09-10 · Alignment

"Red-teaming helps us uncover potential risks before they become real-world issues."

Blog Post on Anthropic Website · 2023-11-01 · Red-Teaming

"We must prioritize safety in AI development to prevent unintended consequences."

Keynote Speech at Tech Summit · 2024-02-20 · Safety

"Collaboration across disciplines is essential for effective AI safety research."

Research Collaboration Announcement · 2024-03-05 · Collaboration

Research Output

2020s: 2 publications

Red-Teaming AI Systems · 2023 · Proceedings of the AI Safety Conference

Discusses methodologies for effectively red-teaming AI systems.

Challenges in AI Alignment · 2022 · Journal of AI Research

This paper outlines the key challenges faced in aligning AI systems with human values.

Field Intelligence

The Future of AI Safety

AI Safety Conference · 2023-09-10 · 45 minutes

Aligning AI with Human Values

Tech Summit · 2024-02-20 · 30 minutes

Known Associates

Organizational Affiliations

Current: Anthropic · AI Safety Researcher · 2021-Present

Dossier last updated: 2026-03-04