Percy Liang


Organization
Stanford University

Position
Associate Professor of Computer Science, Stanford University; Director, Center for Research on Foundation Models (CRFM)

πŸ‡ΊπŸ‡ΈAmerican
h-Index--
Citations105,690
Followers--
Awards1
Publications4
Companies3

Intelligence Briefing

Founder and director of the Center for Research on Foundation Models (CRFM) at Stanford. Created HELM (Holistic Evaluation of Language Models), a widely used open-source benchmark suite for evaluating large language models across multiple dimensions, including accuracy, fairness, robustness, and efficiency. A leading voice for transparency and accountability in foundation model development. AI2050 Fellow.

Expertise
Natural Language Processing · Foundation Models · AI Benchmarking · Machine Learning · NLP
Education

BS, Computer Science β€” Massachusetts Institute of Technology

MEng, Computer Science β€” Massachusetts Institute of Technology

PhD, Computer Science β€” University of California, Berkeley

Operational History

2023

Foundation Models Report Published

Published a comprehensive report on foundation models, outlining their implications and challenges.

research
2022

AI2050 Fellowship

Recognized as an AI2050 Fellow for contributions to AI safety and transparency.

award
2021

Director of CRFM

Appointed as the director of the Center for Research on Foundation Models at Stanford.

founding
2020

HELM Benchmark Released

Launched HELM, a benchmark suite for evaluating language models.

research

AGI Position Assessment

Risk Level
LOW / MODERATE / HIGH / CRITICAL
Predicted AGI Timeline

Unknown


Safety Approach

Strong advocate for transparency, rigorous evaluation, and accountability in foundation model development. Believes standardized benchmarks are essential to understand capabilities and risks.

Intercepted Communications

β€œStandardized benchmarks are essential to understand the capabilities and risks of foundation models.”

Percy Liang Interview · 2023-01-15 · AI Safety

β€œTransparency in AI development is not just a best practice; it's a necessity.”

Stanford AI Conference · 2022-09-10 · Transparency

β€œThe HELM benchmark provides a holistic view of model performance across multiple dimensions.”

HELM Launch Event · 2020-06-05 · Benchmarking

β€œWe must hold AI systems accountable to ensure they serve humanity's best interests.”

AI Ethics Symposium · 2021-11-20 · Accountability

β€œFoundation models are powerful, but they come with significant risks that we must address.”

AI Safety Workshop · 2023-03-01 · Risk Management

Research Output

2020s: 3
2010s: 1

Foundation Models: Current Trends and Future Directions

2023

Stanford University

A report detailing the implications of foundation models.

Language Model Transparency Index

2021

AI & Society

Proposed a framework for assessing transparency in language models.

HELM: Holistic Evaluation of Language Models

2020

arXiv

Introduced a comprehensive benchmark for evaluating language models.

Semantic Parsing with Neural Networks

2019

Journal of Machine Learning Research

Explored advancements in semantic parsing using neural networks.

Field Intelligence

The Future of Foundation Models

● Stanford AI Conference · 2022-09-10

AI Safety and Accountability

● AI Ethics Symposium · 2021-11-20

Known Associates

Organizational Affiliations

Current

Stanford University

Associate Professor

2018-Present

Stanford University

Director, Center for Research on Foundation Models

2021-Present

Former

Google

Postdoctoral Researcher

2017-2018

Commendations

2022

AI2050 Fellowship

Schmidt Futures

Awarded for contributions to AI safety and transparency.

Source Material

Dossier last updated: 2026-03-04