
Intelligence Briefing
Co-founder and CEO of Anthropic, the company behind the Claude AI model family. Former VP of Research at OpenAI where he led GPT-2 and GPT-3 development. Anthropic valued at $380B as of February 2026. Spends up to 40% of his time on company culture. Published "The Adolescence of Technology" essay on AI risks in January 2026.
PhD, Biophysics – Princeton University
BA, Physics – Stanford University
Operational History
Published Essay on AI Risks
Published "The Adolescence of Technology" essay discussing AI risks.
Founded Anthropic
Co-founded Anthropic to focus on AI safety and research.
Joined OpenAI as VP of Research
Led research efforts including the development of GPT-2 and GPT-3.
Joined Google Brain
Worked as a Senior Research Scientist focusing on machine learning.
Postdoctoral Scholar at Stanford Medicine
Conducted research in biophysics and machine learning applications.
AGI Position Assessment
Strong safety advocate who founded Anthropic specifically to build safer AI. Warns about "unusually painful" job disruption and concentration of power in AI companies. Maintains "red lines" on military AI applications including mass surveillance and autonomous weapons.
Intercepted Communications
"We need to ensure that AI development is aligned with human values and safety."
"The concentration of power in AI companies poses significant risks to society."
"Job disruption due to AI could be unusually painful if not managed properly."
"We must draw red lines on military applications of AI."
"AI should be developed responsibly to benefit humanity as a whole."
Research Output
The Adolescence of Technology
2026 – Discusses the risks associated with AI technology.
Responsible Scaling Policy
2023 – Discusses policies for responsible scaling of AI technologies.
Constitutional AI
2022 – arXiv
Proposes a framework for aligning AI systems with human values.
Scaling Laws for Neural Language Models
2020 – arXiv
Introduces scaling laws that guide the development of language models.
Language Models are Few-Shot Learners
2020 – NeurIPS
Presents the GPT-3 model and its few-shot learning capabilities.
GPT-2: Language Models are Unsupervised Multitask Learners
2019 – OpenAI
Introduces the GPT-2 model and its capabilities.
Known Associates
Sam Altman
Former colleague at OpenAI and current CEO of OpenAI.
Jan Leike
Co-author on several AI safety papers.
Alec Radford
Co-author on the GPT-2 paper.
Katherine Martin
Co-author on the GPT-2 paper.
Organizational Affiliations
Current
Anthropic
CEO
2021 - Present
Former
OpenAI
VP of Research
2018 - 2021
Google Brain
Senior Research Scientist
2015 - 2018
Stanford Medicine
Postdoctoral Scholar
2014 - 2015
Source Material
Dossier last updated: 2026-03-04