Jacob Steinhardt

AI Safety Researcher

Organization
Transluce

Position
Co-Founder and Researcher

h-Index: --
Citations: 31,191
Followers: --
Awards: 0
Publications: 0
Companies: 2

Intelligence Briefing

Jacob Steinhardt is a prominent researcher in AI safety, focusing on robustness, alignment, and benchmarking of AI systems. He has a background in both theoretical and applied aspects of machine learning.

Steinhardt completed his undergraduate studies at MIT and earned a PhD from Stanford University. He previously worked at OpenAI as a research scientist, contributing to projects aimed at the safe deployment of AI technologies. He is currently a co-founder of Transluce, where he continues to advance research in AI safety.

Expertise
Robustness, Safety, Alignment, Benchmarking
Education

BS, MIT

PhD, Stanford University

Operational History

2023: Co-Founder of Transluce
Co-founded Transluce to focus on AI safety and robustness research.

2021: Research Scientist at OpenAI
Joined OpenAI as a research scientist focusing on AI safety.

2020: PhD in Machine Learning
Completed PhD at Stanford University.

2014: Bachelor of Science
Graduated from MIT with a BS.

AGI Position Assessment

Risk Level
Not specified (scale: LOW / MODERATE / HIGH / CRITICAL)
Predicted AGI Timeline

Unknown

Safety Approach

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Intercepted Communications

"AI safety is not just a technical challenge; it's a societal imperative."

Conference on AI Safety, 2023-05-15 (AI Safety)

"Robustness in AI systems is crucial for their safe deployment in real-world applications."

AI Alignment Forum, 2023-08-10 (Robustness)

"We must prioritize alignment to ensure AI systems reflect human values."

AI Ethics Symposium, 2023-09-20 (Alignment)

"Benchmarking AI systems is essential for understanding their capabilities and limitations."

Machine Learning Conference, 2023-11-05 (Benchmarking)

"The future of AI depends on our ability to make it safe and reliable."

Tech Talk at Stanford, 2023-12-01 (Future of AI)

Known Associates

Organizational Affiliations

Current

Transluce: Co-Founder and Researcher (2023-present)

Former

OpenAI: Research Scientist (2021-2023)

Dossier last updated: 2026-03-04