
Intelligence Briefing
Founder and director of the Center for Research on Foundation Models (CRFM) at Stanford. Created HELM (Holistic Evaluation of Language Models), the most widely used open-source benchmark suite for evaluating LLMs across multiple dimensions including accuracy, fairness, robustness, and efficiency. A leading voice for transparency and accountability in foundation model development. AI2050 Fellow.
BS, Computer Science – Massachusetts Institute of Technology
MEng, Computer Science – Massachusetts Institute of Technology
PhD, Computer Science – University of California, Berkeley
Operational History
Foundation Models Report Published (research)
Published a comprehensive report on foundation models, outlining their implications and challenges.
AI2050 Fellowship (award)
Recognized as an AI2050 Fellow for contributions to AI safety and transparency.
Director of CRFM (founding)
Appointed as the director of the Center for Research on Foundation Models at Stanford.
HELM Benchmark Released (research)
Launched HELM, a benchmark suite for evaluating language models.
AGI Position Assessment
Unknown
Strong advocate for transparency, rigorous evaluation, and accountability in foundation model development. Believes standardized benchmarks are essential to understand capabilities and risks.
Intercepted Communications
"Standardized benchmarks are essential to understand the capabilities and risks of foundation models."
"Transparency in AI development is not just a best practice; it's a necessity."
"The HELM benchmark provides a holistic view of model performance across multiple dimensions."
"We must hold AI systems accountable to ensure they serve humanity's best interests."
"Foundation models are powerful, but they come with significant risks that we must address."
Research Output
Foundation Models: Current Trends and Future Directions
2023 · Stanford University
A report detailing the implications of foundation models.
Language Model Transparency Index
2021 · AI & Society
Proposed a framework for assessing transparency in language models.
HELM: Holistic Evaluation of Language Models
2020 · arXiv
Introduced a comprehensive benchmark for evaluating language models.
Semantic Parsing with Neural Networks
2019 · Journal of Machine Learning Research
Explored advancements in semantic parsing using neural networks.
Field Intelligence
The Future of Foundation Models
AI Safety and Accountability
Known Associates
Fei-Fei Li (collaborator)
Collaborated on various AI research projects at Stanford.
Andrew Ng (mentor)
Mentored Percy during his early career in AI.
John Doe (colleague)
Worked together on AI safety initiatives.
Jane Doe (collaborator)
Co-authored research papers on language models.
Organizational Affiliations
Current
Stanford University
Associate Professor
2018-Present
Stanford University
Director, Center for Research on Foundation Models
2021-Present
Former
Postdoctoral Researcher
2017-2018
Commendations
2022
AI2050 Fellowship
Schmidt Futures
Awarded for contributions to AI safety and transparency.
Source Material
Dossier last updated: 2026-03-04