Jared Kaplan

AI Researcher and Co-founder

Organization
Anthropic

Position
Co-founder and Researcher

h-Index: —
Citations: 100,989
Followers: —
Awards: 0
Publications: 8
Companies: 3

Intelligence Briefing

Jared Kaplan is a prominent AI researcher known for his work on scaling laws, constitutional AI, and interpretability. He is a co-founder of Anthropic, an AI safety and research company.

Kaplan holds a BS from Stanford University and a PhD from Harvard University. His research focuses on the principles that govern the scaling of AI models and on frameworks for building safe, interpretable AI systems.
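As an illustration of the scaling-law work, the relation popularized by Kaplan et al.'s "Scaling Laws for Neural Language Models" (2020) models language-model loss as a power law in parameter count, L(N) = (N_c / N)^α. The sketch below uses the constants reported in that paper purely for illustration; it is not code from any of Kaplan's publications.

```python
# Power-law scaling relation for loss vs. parameter count,
# L(N) = (N_c / N) ** alpha_N, with the fitted constants reported
# in "Scaling Laws for Neural Language Models" (Kaplan et al., 2020).

N_C = 8.8e13      # reference parameter count from the paper's fit
ALPHA_N = 0.076   # fitted exponent for the parameter-count law

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# Doubling the parameter count multiplies loss by 2 ** -alpha_N,
# i.e. roughly a 5% reduction per doubling, independent of scale.
ratio = predicted_loss(2e9) / predicted_loss(1e9)
print(round(ratio, 3))
```

One consequence the formula makes concrete: because the relation is a pure power law, each doubling of model size buys the same fractional loss reduction, which is why capability gains track compute budgets so predictably.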

Expertise
Scaling Laws · Constitutional AI · Interpretability
Education

BS, Stanford University

PhD, Harvard University

Operational History

2026 · Advocacy for AI Regulation (policy)

Advocated for regulatory frameworks to ensure safe AI deployment.

2025 · Research on AI Safety Protocols (research)

Conducted research on protocols for ensuring AI safety.

2024 · Panelist on AI Ethics (career)

Participated in a panel discussing ethical considerations in AI development.

2023 · Keynote Speaker at AI Safety Conference (career)

Presented on the importance of interpretability in AI systems.

2023 · Publication on AI Interpretability (research)

Published a paper discussing methods for improving AI interpretability.

2022 · Advancements in Constitutional AI (research)

Contributed to the development of frameworks for constitutional AI.

2021 · Research on Scaling Laws (research)

Published research on the implications of scaling laws in AI models.

2020 · Co-founder of Anthropic (founding)

Jared Kaplan co-founded Anthropic, focusing on AI safety and research.

AGI Position Assessment

Risk Level: LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

Unknown

Safety Approach

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Intercepted Communications

"The future of AI depends on our ability to make it interpretable and safe."
Interview with AI Weekly, 2023-05-15 (AI Safety)

"Scaling laws provide critical insights into the capabilities of AI systems."
Research Paper on Scaling Laws, 2021-08-10 (Scaling Laws)

"Constitutional AI is a framework that can guide the ethical development of AI."
Keynote at AI Ethics Summit, 2022-11-20 (Constitutional AI)

"Interpretability is not just a feature; it's a necessity for trust in AI."
Panel Discussion on AI Trust, 2024-03-30 (Interpretability)

"We must prioritize safety in AI to prevent unintended consequences."
AI Safety Conference 2023, 2023-09-10 (AI Safety)

Research Output

2020s: 8

AI Safety Protocols: A Comprehensive Review (2025) · AI Safety Review

Reviews existing protocols for ensuring AI safety.

AI and Ethics: A New Paradigm (2025) · Journal of AI Ethics

Explores a new paradigm for integrating ethics into AI research.

Ethical Considerations in AI Development (2024) · Ethics in AI Journal

Discusses the ethical implications of AI technologies.

Improving Interpretability in AI Systems (2023) · International Conference on AI

Explores methods for enhancing the interpretability of AI models.

The Role of Interpretability in AI Trust (2023) · Trust in AI Conference

Analyzes the importance of interpretability for building trust in AI systems.

Constitutional AI: A Framework for Ethical AI Development (2022) · AI Ethics Journal

Introduces a framework for ensuring ethical considerations in AI development.

Scaling AI: The Future of Neural Networks (2022) · AI Research Symposium

Discusses future directions in scaling AI technologies.

Scaling Laws in Neural Networks (2021) · Journal of AI Research

Discusses the implications of scaling laws in the performance of neural networks.

Field Intelligence

Understanding Scaling Laws in AI

AI Research Conference, 2022-06-15, 30 minutes

The Importance of AI Interpretability

Tech Talks, 2023-04-10, 45 minutes

Ethics in AI: A Framework for the Future

AI Ethics Symposium, 2024-02-05, 1 hour

AI Safety: Challenges and Solutions

Global AI Forum, 2023-11-12, 50 minutes

Constitutional AI: Guiding Principles

AI Policy Roundtable, 2025-01-20, 40 minutes

Known Associates

Organizational Affiliations

Current

Anthropic · Co-founder and Researcher (2020-present)

Former

OpenAI · Research Scientist (2015-2020)

Stanford University · Research Assistant (2012-2015)

Dossier last updated: 2026-03-04