
Jared Kaplan
AI Researcher and Co-founder
Organization
Anthropic
Position
Co-founder and Researcher
Intelligence Briefing
Jared Kaplan is a prominent AI researcher known for his work on scaling laws, constitutional AI, and interpretability. He is a co-founder of Anthropic, an AI safety and research company.
Kaplan holds a BS from Stanford University and a PhD from Harvard University. His research focuses on the principles that govern the scaling of AI models and on frameworks for building safe, interpretable AI systems.
BS, Stanford University
PhD, Harvard University
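The scaling-law work referenced above (Kaplan et al., "Scaling Laws for Neural Language Models", 2020) characterizes model performance with power-law fits. A representative form, quoted here as approximate outside context rather than from this dossier's sources, is

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}

where L is the test loss and N the number of non-embedding model parameters; analogous power laws are fitted for dataset size and training compute.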
Operational History
Advocacy for AI Regulation [policy]
Advocated for regulatory frameworks to ensure safe AI deployment.
Research on AI Safety Protocols [research]
Conducted research on protocols for ensuring AI safety.
Panelist on AI Ethics [career]
Participated in a panel discussing ethical considerations in AI development.
Keynote Speaker at AI Safety Conference [career]
Presented on the importance of interpretability in AI systems.
Publication on AI Interpretability [research]
Published a paper discussing methods for improving AI interpretability.
Advancements in Constitutional AI [research]
Contributed to the development of frameworks for constitutional AI.
Research on Scaling Laws [research]
Published research on the implications of scaling laws in AI models.
Co-founder of Anthropic [founding]
Jared Kaplan co-founded Anthropic, focusing on AI safety and research.
AGI Position Assessment
Unknown
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.
Intercepted Communications
“The future of AI depends on our ability to make it interpretable and safe.”
“Scaling laws provide critical insights into the capabilities of AI systems.”
“Constitutional AI is a framework that can guide the ethical development of AI.”
“Interpretability is not just a feature; it's a necessity for trust in AI.”
“We must prioritize safety in AI to prevent unintended consequences.”
Research Output
AI Safety Protocols: A Comprehensive Review
2025 · AI Safety Review
Reviews existing protocols for ensuring AI safety.
AI and Ethics: A New Paradigm
2025 · Journal of AI Ethics
Explores a new paradigm for integrating ethics into AI research.
Ethical Considerations in AI Development
2024 · Ethics in AI Journal
Discusses the ethical implications of AI technologies.
Improving Interpretability in AI Systems
2023 · International Conference on AI
Explores methods for enhancing the interpretability of AI models.
The Role of Interpretability in AI Trust
2023 · Trust in AI Conference
Analyzes the importance of interpretability for building trust in AI systems.
Constitutional AI: A Framework for Ethical AI Development
2022 · AI Ethics Journal
Introduces a framework for embedding ethical principles in AI development (a schematic sketch of the approach follows this list).
Scaling AI: The Future of Neural Networks
2022 · AI Research Symposium
Discusses future directions in scaling AI technologies.
Scaling Laws in Neural Networks
2021 · Journal of AI Research
Discusses the implications of scaling laws for the performance of neural networks.
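The constitutional AI framework cited above is built around a critique-and-revision loop: the model drafts a response, critiques the draft against a written principle, and then revises it. The sketch below illustrates that loop in Python; model_generate, the example principle, and the demo prompt are illustrative placeholders, not details drawn from this dossier's sources.

# Minimal sketch of a constitutional-AI-style critique-and-revision loop.
# model_generate is a hypothetical stand-in for a real language-model call.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
]

def model_generate(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., an API request)."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    # Draft an initial response.
    response = model_generate(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own draft against the principle.
        critique = model_generate(
            f"Critique this response against the principle {principle!r}:\n{response}"
        )
        # Revise the draft in light of that critique.
        response = model_generate(
            f"Rewrite the response to address this critique:\n{critique}\n"
            f"Original response:\n{response}"
        )
    return response

if __name__ == "__main__":
    print(constitutional_revision("Explain how to secure a home Wi-Fi network."))

In the published approach, transcripts produced by this kind of loop are then used as training data, so the revised behavior is distilled back into the model rather than applied at inference time.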
Field Intelligence
Understanding Scaling Laws in AI
The Importance of AI Interpretability
Ethics in AI: A Framework for the Future
AI Safety: Challenges and Solutions
Constitutional AI: Guiding Principles
Known Associates
Sam Altman [colleague]
Worked together at OpenAI before Kaplan co-founded Anthropic.
Kate Crawford [collaborator]
Collaborated on research related to AI ethics and safety.
Ilya Sutskever [rival]
Known for differing views on AI safety and research methodologies.
Elon Musk [mentor]
Provided guidance on AI safety and ethical considerations.
Organizational Affiliations
Current
Anthropic
Co-founder and Researcher
2020-present
Former
OpenAI
Research Scientist
2015-2020
Stanford University
Research Assistant
2012-2015
Dossier last updated: 2026-03-04