Intelligence Dossier
THE PLAYERS
An exhaustive directory of the individuals shaping the race to Artificial General Intelligence — lab CEOs, top researchers, policymakers, and other key figures.
618 SUBJECTS ON FILE


Dario Amodei
Nationality
American
Organization
Anthropic
Co-Founder & CEO, Anthropic
PhD, Biophysics — Princeton University
BA, Physics — Stanford University
Responsible scaling, concerned about power concentration
Strong safety advocate who founded Anthropic specifically to build safer AI. Warns about "unusually painful" job disruption and concentration of power in AI companies. Maintains "red lines" on military AI applications including mass surveillance and autonomous weapons.


John Schulman
Organization
Thinking Machines Lab
Chief Scientist
BS — Caltech
PhD — University of California, Berkeley
AI Safety Advocate
Schulman has expressed a strong commitment to AI alignment, aiming to ensure that AI systems reflect human values and goals. He joined Anthropic in 2024, stating a desire to "deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work." In 2025 he joined Thinking Machines Lab as Chief Scientist, where he continues his alignment work.


Mark Chen
Nationality
Taiwanese-American
Organization
OpenAI
Chief Research Officer
BS, Mathematics with Computer Science — Massachusetts Institute of Technology
Product-focused technologist
Supports responsible development. Focused on ensuring safety is integrated into the research and product pipeline at OpenAI.


Alec Radford
Organization
Thinking Machines Lab
Advisor
BS — Olin College
Research-focused
Radford emphasizes the importance of transparency and accountability in AI development, advocating for systems that are fair and unbiased. He promotes proactive risk management to ensure the ethical use of AI technology.


Jared Kaplan
Organization
Anthropic
Chief Science Officer and Co-Founder
BS — Stanford University
PhD — Harvard University
AI Safety Advocate
Kaplan emphasizes the importance of responsible scaling policies to ensure AI systems are developed safely and beneficially. He has been instrumental in implementing Anthropic's Responsible Scaling Policy, which aims to align AI systems with human values and prevent misuse.


Shane Legg
Nationality
New Zealander
Organization
Google DeepMind
Chief AGI Scientist
BS, Computer Science — University of Waikato
MS, Computer Science — University of Auckland
PhD, Machine Super Intelligence — IDSIA / Università della Svizzera italiana
Cautious about AGI timelines
Deeply committed to AGI safety. Has consistently warned about existential risk since before founding DeepMind. Leads DeepMind's AGI safety efforts. Believes AGI is approaching and safety work is urgent.


Jeff Dean
Nationality
American
Organization
Google
Chief Scientist, Google DeepMind and Google Research
PhD, Computer Science — University of Washington
BS, Computer Science and Economics — University of Minnesota
Pro-innovation within responsible guardrails
Supports responsible AI development within Google's framework. Believes in need for "algorithmic breakthroughs" alongside scaling. Advocates for internal safety teams and external collaboration on AI governance.


Jakub Pachocki
Nationality
Polish
Organization
OpenAI
Chief Scientist
BS, Computer Science — University of Warsaw
PhD, Theoretical Computer Science — Carnegie Mellon University
Research-focused
Supports safety-focused research. Believes AI models are capable of novel research and emphasizes the importance of understanding model capabilities and limitations.


Geoffrey Hinton
Nationality
British-Canadian
Organization
University of Toronto
University Professor Emeritus, University of Toronto
PhD, Artificial Intelligence — University of Edinburgh
AI Safety Advocate
Deeply concerned about existential risk. Says he is "more worried" now than when he left Google in 2023. Warns AI is getting better at reasoning and deception. Advocates for regulation and international coordination.


Christopher Olah
Nationality
Canadian
Organization
Anthropic
Co-founder, Anthropic
Attended (no degree), Computer Science — University of Toronto
AI Safety Advocate
Deeply committed to AI safety through interpretability. Believes understanding what happens inside neural networks is critical for making AI safe. His work is the foundation of Anthropic's safety research agenda.


Noam Brown
Organization
OpenAI
Member of Technical Staff
PhD, Computer Science — Carnegie Mellon University
Safety-aligned researcher
Publicly associated more with reasoning capability research than formal safety leadership.


Paul Christiano
Nationality
American
Organization
US AI Safety Institute (NIST)
Head of AI Safety
BS, Mathematics — Massachusetts Institute of Technology
PhD, Statistical Learning Theory — University of California, Berkeley
AI Safety / Effective Altruism
One of the strongest voices for AI existential risk. Believes there is a significant probability of catastrophic outcomes from advanced AI. Advocates for robust safety evaluations, interpretability, and governance. Now leads US government AI safety evaluation efforts.


Julian Schrittwieser
Organization
Anthropic
Member of Technical Staff
BS — Vienna University of Technology
Research-focused
Schrittwieser has expressed a commitment to developing AI thoughtfully, maximizing benefits while managing risks.

Sergey Levine
Nationality
American
Organization
UC Berkeley / Physical Intelligence
Associate Professor, UC Berkeley; Co-founder, Physical Intelligence
BS/MS, Computer Science — Stanford University
PhD, Computer Science — Stanford University
Research-focused
Believes in building general-purpose robotic intelligence through scalable learning. Focuses on making robot learning practical and sample-efficient.


Andrew Tulloch
Organization
Meta
Distinguished Engineer at Meta
BS — University of Sydney
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Tom Brown
Organization
Anthropic
Co-founder and Chief Compute Officer
BS — MIT
AI Safety Advocate
As a co-founder of Anthropic, Brown works within a lab whose stated mission is to develop AI systems that are both beneficial and aligned with human values.


Nat McAleese
Organization
Anthropic
Researcher
BS — University of Cambridge
PhD — University of Cambridge
AI Safety Advocate
Committed to ensuring AI systems are safe and aligned with human values, as evidenced by his work on AI safety benchmarks and involvement in AI safety discussions.


Andrej Karpathy
Nationality
Slovak-Canadian
Organization
Eureka Labs
Founder, Eureka Labs
BS, Computer Science and Physics — University of Toronto
MS, Computer Science — University of British Columbia
PhD, Computer Science — Stanford University
Open-source AI advocate
Pragmatic centrist. Acknowledges risks but believes open education and understanding of AI internals is the best safety strategy. Skeptical of heavy-handed regulation.

Jerry Tworek
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Igor Babuschkin
Organization
Babuschkin Ventures
Founder and CEO
BS — TU Dortmund
AI Safety Advocate
Igor Babuschkin has publicly emphasized the importance of AI safety, particularly as AI systems become more capable and agentic. He has expressed concerns about the need to study and advance AI safety to ensure that technology benefits humanity. In his departure from xAI, he stated his commitment to building AI that advances humanity, highlighting his dedication to responsible AI development.


Diederik P. Kingma
Nationality
Dutch
Organization
Anthropic
Research Scientist, Anthropic
PhD (cum laude), Machine Learning — University of Amsterdam
Research-focused
Joined Anthropic, a safety-focused lab, suggesting alignment with responsible AI development. Works on improving the reliability and capability of large-scale ML systems.


David Silver
Nationality
British
Organization
Ineffable Intelligence
CEO & Founder
BA, Computer Science — University of Cambridge
MA, Computer Science — University of Cambridge
PhD, Reinforcement Learning — University of Alberta
Research-focused
Believes in building safe superintelligence through self-play and self-discovery rather than relying solely on human feedback. Argues LLMs alone will not reach superintelligence.


Quoc V. Le
Nationality
Vietnamese-American
Organization
Google DeepMind
Google Fellow, Google DeepMind
BSc, Computer Science — Australian National University
PhD, Computer Science — Stanford University
Research-focused
Focuses on making AI models more efficient and accessible. Works within Google's responsible AI framework.


Wenda Zhou
Organization
OpenAI
Researcher at OpenAI
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Pieter Abbeel
Nationality
Belgian-American
Organization
UC Berkeley / Amazon
Professor, UC Berkeley; Head of LLM efforts, Amazon AGI
BS/MS, Electrical Engineering — KU Leuven
PhD, Computer Science — Stanford University
Pragmatic technologist
Focuses on building robust and reliable AI systems. Believes foundation models are the key to general-purpose robotics.


Tristan Hume
Organization
Anthropic
Performance Optimization Lead
BS — University of Waterloo
AI Safety Advocate
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Horace He
Organization
Thinking Machines Lab
Researcher
BS — Cornell University
AI Infrastructure Advocate
Horace He's work focuses on enhancing the reliability and predictability of AI models, aiming to mitigate issues like nondeterminism in large language models.

Sebastian Borgeaud
Organization
Google DeepMind
Research Engineer
BS — University of Cambridge
PhD — University of Cambridge
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alexander Kirillov
Organization
Thinking Machines Lab
Member of Technical Staff at Thinking Machines Lab
BS — Lomonosov Moscow State University
PhD — Heidelberg University
Research-focused
Thinking Machines Lab, where Kirillov works, publicly emphasizes AI safety by maintaining a high safety bar, sharing best practices, and accelerating external research on alignment.


Chelsea Finn
Nationality
American
Organization
Stanford University / Physical Intelligence
Assistant Professor, Stanford University; Co-founder, Physical Intelligence
BS, Electrical Engineering and Computer Science — MIT
PhD, Computer Science — UC Berkeley
Research-focused
Focuses on building robust, generalizable robot learning systems. Researches how to make robots learn safely from limited data and human demonstrations.

Alexander Kolesnikov
Organization
Meta
AI Research Scientist
BS — Lomonosov Moscow State University
PhD — ISTA
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yoshua Bengio
Nationality
Canadian
Organization
LawZero / Mila
Co-President & Scientific Director, LawZero; Founder & Scientific Advisor, Mila
PhD, Computer Science — McGill University
AI Safety Advocate
The most safety-focused of the Turing trio. Launched LawZero to build safe-by-design AI. Warns that frontier models show growing dangerous capabilities including deception and goal misalignment. Pushes for international governance frameworks.

Nick Ryder
Organization
OpenAI
Member of Technical Staff
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Lukasz Kaiser
Organization
OpenAI
Member of Technical Staff at OpenAI
BS — University of Wroclaw
PhD — RWTH Aachen University
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Lilian Weng
Organization
Fellows Fund
Distinguished Fellow
AI Safety Advocate
OpenAI's public materials from her tenure there placed her at the core of technical safety systems work.

Alexander Wei
Organization
OpenAI
Research Scientist
BS — Harvard University
PhD — University of California, Berkeley
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Deli Chen
Organization
DeepSeek
Senior Researcher
BS — Peking University
AI Safety Advocate
Deli Chen has publicly warned about AI risks, emphasizing the need for responsible development and deployment of AI technologies.


Hunter Lightman
Organization
OpenAI
Researcher
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Robert Lasenby
Organization
Anthropic
Researcher
BS — University of Cambridge
PhD — University of Oxford
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Zhihong Shao
Organization
DeepSeek
Research Scientist
BS — Tsinghua University
PhD — Tsinghua University
Research-focused
DeepSeek has acknowledged safety vulnerabilities in its models and has undertaken evaluations and enhancements to address these issues, particularly in Chinese contexts. This reflects a proactive approach to improving AI safety within their systems.

Timothy P. Lillicrap
Organization
Google DeepMind
Staff Research Scientist
BS — University of Toronto
PhD — Queen's University
Research-focused
Timothy P. Lillicrap has not publicly stated a position on AI safety policies.


Prafulla Dhariwal
Organization
OpenAI
Technical Fellow
Pragmatic technologist
Primarily capability-focused public role with multimodal deployment responsibilities.


Dan Hendrycks
Nationality
American
Organization
Center for AI Safety (CAIS)
Executive Director, Center for AI Safety
BS, Computer Science — University of Chicago
PhD, Computer Science — University of California, Berkeley
AI Safety Advocate
Leading voice on AI existential risk. Believes advanced AI poses catastrophic and existential risks to humanity. Advocates for proactive safety research, robust evaluations, and governance frameworks.


Amanda Askell
Organization
Anthropic
Member of Technical Staff
BS — University of Dundee
PhD — New York University
AI Safety Advocate
Amanda Askell has publicly advocated for AI safety and alignment, emphasizing the importance of ensuring AI systems are aligned with human values and safety considerations. She has argued that designing ethical AI requires humility rather than rigid certainty, and that AI systems should be capable of weighing competing considerations and explaining their reasoning rather than simply following strict rules.


Jimmy Ba
Nationality
Canadian
Organization
University of Toronto / Vector Institute
Assistant Professor, University of Toronto; CIFAR AI Chair, Vector Institute
BSc, Computer Science — University of Toronto
MSc, Computer Science — University of Toronto
PhD, Computer Science — University of Toronto
Academic
Academic focus on building more reliable and efficient learning algorithms. Contributes to the Canadian AI ecosystem through CIFAR and Vector Institute.

Mostafa Dehghani
Organization
Google DeepMind
Research Scientist
BS — University of Tehran
PhD — University of Amsterdam
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Shengjia Zhao
Organization
Meta Platforms Inc.
Chief Scientist of Meta Superintelligence Labs
AI Safety Advocate
Zhao has been instrumental in developing AI models with a focus on safety and reliability, as evidenced by his work on the o1 reasoning model, which emphasizes structured and interpretable AI outputs.


Barret Zoph
Organization
OpenAI
Enterprise Expansion Lead at OpenAI
BS — USC
Pragmatic technologist
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sam McCandlish
Organization
Anthropic
Chief Architect
BS — Brandeis University
PhD — Stanford University
AI Safety Advocate
As a co-founder of Anthropic, McCandlish has been involved in developing AI models with a focus on safety and ethical considerations.

Dan Selsam
Organization
OpenAI
Researcher at OpenAI
BS — Stanford University
PhD — Stanford University
Research-focused
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jan Leike
Nationality
German
Organization
Anthropic
Head of Alignment Science
MS, Computer Science — University of Freiburg
PhD, Reinforcement Learning Theory — Australian National University
AI Safety Advocate
One of the most vocal alignment researchers. Left OpenAI because he felt safety was not being prioritized sufficiently. Believes alignment of superhuman AI systems is the central challenge. Cautiously optimistic that progress is being made.

Yang Song
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tri Dao
Organization
Together.AI
Co-founder and Researcher
BS — Stanford University
PhD — Stanford University
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Ethan Perez
Organization
Anthropic
Research Scientist
BS — Rice University
PhD — New York University
Safety-aligned researcher
Ethan advocates for rigorous safety measures in AI development, emphasizing the importance of alignment with human values.


Long Ouyang
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Jeffrey Wu
Organization
Anthropic
Research Scientist
BS — MIT
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Jürgen Schmidhuber
Nationality
German
Organization
KAUST / IDSIA
Director of AI Initiative, KAUST; Scientific Director, Swiss AI Lab IDSIA
Diplom, Computer Science — Technical University of Munich
PhD, Computer Science — Technical University of Munich
Accelerationist
Optimistic about AI progress. Believes AI will be beneficial and that existential risk concerns are overstated. Sees AGI as inevitable and broadly positive for humanity.


Fei-Fei Li
Nationality
Chinese-American
Organization
Stanford University / World Labs
Professor of Computer Science, Stanford University; Co-Founder & CEO, World Labs
PhD, Electrical Engineering — California Institute of Technology
BA, Physics — Princeton University
Advocate for democratized and human-centered AI
Believes in human-centered AI development. Advocates for AI that augments human capabilities rather than replaces them, with strong emphasis on diversity and ethical deployment.

Naman Goyal
Organization
Thinking Machines
BS — Savitribai Phule Pune University
PhD — Georgia Institute of Technology
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Rowan Zellers
Organization
Thinking Machines
BS — Harvey Mudd College
PhD — University of Washington
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jonas Adler
Organization
Google DeepMind
Research Scientist
BS — KTH Royal Institute of Technology
PhD — KTH Royal Institute of Technology
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Luke Metz
Organization
Thinking Machines
BS — Olin College of Engineering
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nicholas Carlini
Organization
Anthropic
Research Scientist
BS — University of California, Berkeley
PhD — University of California, Berkeley
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Percy Liang
Nationality
American
Organization
Stanford University
Associate Professor of Computer Science, Stanford University; Director, Center for Research on Foundation Models (CRFM)
BS, Computer Science — Massachusetts Institute of Technology
MEng, Computer Science — Massachusetts Institute of Technology
PhD, Computer Science — University of California, Berkeley
Transparency advocate
Strong advocate for transparency, rigorous evaluation, and accountability in foundation model development. Believes standardized benchmarks are essential to understand capabilities and risks.


Lucas Beyer
Organization
Meta
Research Scientist
BS — RWTH Aachen University
PhD — RWTH Aachen University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sholto Douglas
Organization
Anthropic
BS — University of Sydney
Frontier lab operator
Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.

Albert Gu
Organization
Cartesia AI
Co-founder and Researcher
BS — Stanford University
PhD — Stanford University
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Zico Kolter
Organization
Carnegie Mellon University
Associate Professor, Machine Learning Department
BS — Georgetown University
PhD — Stanford University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Eric Zelikman
Organization
xAI
BS — Stanford University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Eric Mitchell
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Hongyu Ren
Organization
OpenAI
Research Lead
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hyung Won Chung
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

James Bradbury
Organization
Anthropic
Research Scientist
BS — Stanford University
Frontier lab operator
Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.


Aidan Gomez
Nationality
Canadian
Organization
Cohere
Co-founder & CEO, Cohere
BSc, Computer Science and Mathematics — University of Toronto
PhD, Computer Science — University of Oxford
Pragmatic technologist, skeptical of effective altruism
Focuses on near-term practical risks over hypothetical existential threats. Critical of effective altruism's influence on AI safety discourse. Prioritizes enterprise security and data privacy as the real safety frontier.

Yi Tay
Organization
Google DeepMind
Research Scientist
BS — Nanyang Technological University Singapore
PhD — Nanyang Technological University Singapore
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Christopher Ré
Organization
Stanford University
BS — Cornell University
PhD — University of Washington
Academic researcher
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Giambattista Parascandolo
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Rahul Arya
Organization
Google DeepMind
Research Scientist
BS — University of California
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Xuezhi Wang
Organization
Google DeepMind
Research Scientist
BS — Tsinghua University
PhD — Carnegie Mellon University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Leo Gao
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Robin Rombach
Organization
Black Forest Labs
BS — Heidelberg University
PhD — Heidelberg University
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Jack Rae
Organization
Meta
BS — University of Bristol
PhD — UCL
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alex Graves
Organization
Google DeepMind
Research Scientist
BS — University of Edinburgh
PhD — Technical University of Munich
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Sami Jaghouar
Organization
Prime Intellect
BS — Université de Technologie de Compiègne
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Jonathan Gordon
Organization
OpenAI
Research Scientist
BS — Ben-Gurion University of the Negev
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ian Goodfellow
Nationality
American
Organization
Google DeepMind
Research Scientist, Google DeepMind
BS & MS, Computer Science — Stanford University
PhD, Machine Learning — Université de Montréal
Pragmatic researcher
Focused on practical AI security including adversarial robustness and machine learning safety. Has contributed significantly to understanding vulnerabilities in neural networks.


Collin Burns
Organization
Anthropic
BS — Columbia University
PhD — University of California, Berkeley
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Ryan Greenblatt
Organization
Redwood Research
Co-Founder and Researcher
BS — Brown University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Sandhini Agarwal
Organization
OpenAI
Policy and governance operator
Publicly associated with launch safety, policy, and collective alignment work.

Jon Barron
Organization
Google DeepMind
BS — University of Toronto
PhD — University of California, Berkeley
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jacob Steinhardt
Organization
Transluce
Co-Founder and Researcher
BS — MIT
PhD — Stanford University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Jiahui Yu
Organization
Meta
Research Scientist in AI
BS — USTC
PhD — University of Illinois
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Wojciech Zaremba
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Christopher Hesse
Organization
OpenAI
BS — Case Western Reserve University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Raphael Koster
Organization
Google DeepMind
Research Scientist
BS — University of Bremen
PhD — UCL
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Christian Szegedy
Organization
Morph Labs
Co-Founder and Researcher
BS — Eötvös Loránd University
PhD — University of Bonn
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Shaoqing Ren
Organization
NIO
Senior Researcher
BS — University of Science and Technology of China
PhD — University of Science and Technology of China
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Max Schwarzer
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Dan Roberts
Organization
OpenAI
Research Scientist
BS — Duke University
PhD — MIT
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Neel Nanda
Nationality
British
Organization
Google DeepMind
Mechanistic Interpretability Team Lead, Google DeepMind
BA, Pure Mathematics — University of Cambridge
AI Safety Advocate
Committed to AI safety through interpretability research. Has become more measured about what mechanistic interpretability can achieve, pivoting toward practical safety applications rather than full theoretical understanding of models.

Kai Chen
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jason Wei
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Nelson Elhage
Organization
Anthropic
Research Scientist
BS — MIT
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Tom Henighan
Organization
Anthropic
Co-founder and Researcher
BS — Ohio State University
PhD — Stanford University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Kaiming He
Nationality
Chinese
Organization
MIT / Google DeepMind
Associate Professor of EECS (tenured), MIT; Distinguished Scientist (part-time), Google DeepMind
BS, Physics — Tsinghua University
PhD, Information Engineering — Chinese University of Hong Kong
Research-focused pragmatist
Focuses on fundamental research to improve model reliability and efficiency. Not publicly vocal on safety policy but contributes to responsible research practices.

Piotr Dollár
Organization
FAIR
Research Scientist
BS — Harvard University
PhD — UC San Diego
Research-focused technologist
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Shuchao Bi
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Will Brown
Organization
Prime Intellect
BS — University of Pennsylvania
PhD — Columbia University
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Neil Houlsby
Organization
Anthropic
Research Scientist
BS — University of Cambridge
PhD — University of Cambridge
Frontier lab operator
Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.

Yair Carmon
Organization
SSI
BS — Technion (Israel Institute of Technology)
PhD — Stanford University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


DJ Strouse
Organization
OpenAI
BS — USC
PhD — Princeton University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Trenton Bricken
Organization
Anthropic
BS — Duke University
PhD — Harvard University
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Llion Jones
Nationality
Welsh
Organization
Sakana AI
Co-founder & CTO, Sakana AI
BSc, Computer Science — University of Birmingham
MSc, Advanced Computer Science — University of Birmingham
Research diversity advocate
Concerned about monoculture in AI research. Advocates for exploring diverse architectures beyond transformers to avoid concentrating risk in a single paradigm.


Jianlin Su
Organization
Kimi
BS — Sun Yat-sen University
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Sherjil Ozair
Organization
General Agents
BS — IIT
PhD — Université de Montréal
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Devendra Singh Chaplot
Organization
Thinking Machines
BS — IIT
PhD — Carnegie Mellon University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Stephen Roller
Organization
Thinking Machines
BS — North Carolina State University
PhD — University of Texas at Austin
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Steven Hansen
Organization
Google DeepMind
BS — Carnegie Mellon University
PhD — Stanford University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jacob Hilton
Organization
Alignment Research Center
BS — University of Cambridge
PhD — University of Leeds
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Jean Pouget-Abadie
Organization
Google
Research Scientist
BS — École Polytechnique
PhD — Harvard University
Applied AI builder
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Alexey Dosovitskiy
Organization
Inceptive
BS — Lomonosov Moscow State University
PhD — Lomonosov Moscow State University
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Edward J. Hu
Organization
Stealth
BS — The Johns Hopkins University
PhD — Université de Montréal
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Koray Kavukcuoglu
Nationality
Turkish
Organization
Google DeepMind
CTO & Chief AI Architect (SVP), Google DeepMind
BS, Aerospace Engineering — Middle East Technical University
MS, Computer Science — New York University
PhD, Computer Science — New York University
Product-focused technologist
Supports responsible development through product integration. Focuses on ensuring AI capabilities are deployed safely at scale within Google products.


François Chollet
Organization
Ndea
BS — ENSTA Paris
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Aadit Juneja
Organization
xAI
SDK contributor (xai-org)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Aakash Sastry
Organization
xAI
Co-Founder & CEO, Hotshot; joined xAI via acquisition
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Abhinav Gupta
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Adam Jones
Organization
Anthropic
MCP Product Engineering
Frontier lab operator
Supports Anthropic's developer tooling and agent infrastructure.


Adam Lelkes
Organization
Google DeepMind
Senior Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Adele Li
Organization
OpenAI
Product Lead
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Aditya Prerepa
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Aditya Ramesh
Organization
OpenAI
World Simulation Lead
Safety-aligned researcher
Public role is capability-centric, though deployed systems necessarily pass through OpenAI safety processes.


Aditya Srinivas Timmaraju
Organization
Google DeepMind
Senior Staff Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Adriaan Engelbrecht
Organization
Anthropic
Applied AI
Frontier lab operator
Supports real-world deployment of Anthropic systems.


Ahmad Al-Dahle
Nationality
Canadian
Organization
Airbnb
CTO, Airbnb (since Jan 2026); Former VP & Head of Generative AI, Meta
BEng, Engineering — University of Waterloo
Open-source builder
Led Meta's open-source Llama effort before moving to Airbnb; no detailed standalone safety doctrine is documented here.


Ahmed El-Kishky
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Aidan Clark
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Aja Huang
Nationality
Taiwanese
Organization
Google DeepMind
Senior Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


AJ Alt
Organization
Anthropic
Research / product contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Ajeya Cotra
Nationality
American
Organization
METR
Member of Technical Staff, METR (Model Evaluation & Threat Research)
BS, Electrical Engineering and Computer Science — University of California, Berkeley
AI safety-focused, effective altruism aligned
Believes there is a meaningful chance of transformative AI within the next decade. Thinks current safety plans that rely on "using AI to make AI safe" may be insufficient. Advocates for rigorous external evaluation, threat modeling, and preparing for scenarios where AI systems could resist human oversight.


Akshay Nathan
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alan Karthikesalingam
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Aleksander Madry
Organization
OpenAI
Head of Preparedness
Policy and governance operator
Strongly aligned with evaluation-heavy preparedness work for frontier systems.

Alexander Pan
Organization
xAI
Researcher, xAI (AI safety fellowship)
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Alexandra Sanderford
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Alexandr Wang
Nationality
American
Organization
Meta
Chief AI Officer
Attended, Computer Science — MIT
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alex Davies
Organization
Google DeepMind
Founding Lead, AI for Maths, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alex Gruenstein
Organization
Google DeepMind
Senior Director of Engineering, Gemini App, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alex Kendall
Nationality
British
Organization
Wayve
Co-Founder & CEO, Wayve
PhD, Computer Science — University of Cambridge
Academic researcher
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Alex Krizhevsky
Nationality
Ukrainian-Canadian
Organization
Two Bear Capital
Venture Partner, Two Bear Capital
BSc, Computer Science — University of Toronto
MSc, Computer Science — University of Toronto
PhD, Computer Science — University of Toronto
Pragmatic technologist
Not publicly vocal on AI safety. Focused on practical applications and investing in responsible AI startups.


Alex Nichol
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Alex Peng
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Alex Tamkin
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.

Ali Farhadi
Nationality
Iranian-American
Organization
Allen Institute for AI (Ai2) / University of Washington
CEO, Allen Institute for AI (Ai2); Professor, University of Washington
PhD, Computer Science — University of Illinois at Urbana-Champaign
Open-source AI advocate
Believes open-source AI is the safest path forward. Advocates for transparency and broad access to AI tools and models.


Amanda Donohue
Organization
Anthropic
Head of Product
Safety-aligned researcher
Contributes to Anthropic product deployment within the company's safety-first framing.


Amar Subramanya
Nationality
Indian
Organization
Apple
VP of AI, Apple
BE, Electronics and Communications Engineering — Bangalore University / UVCE
PhD, Computer Science — University of Washington
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Amy Soller
Organization
xAI
Mission Manager (Intelligence Community Lead)
Policy and governance operator
Leads xAI's engagement with the intelligence community; no detailed standalone safety doctrine is documented here.


Anca Dragan
Nationality
Romanian-American
Organization
Google DeepMind / UC Berkeley
Head of AI Safety and Alignment, Google DeepMind; Associate Professor (on leave), UC Berkeley
BS, Computer Science — Jacobs University Bremen
PhD, Robotics — Carnegie Mellon University
AI safety advocate
Deeply committed to AI alignment. Now heads safety and alignment research at Google DeepMind. Researches how AI systems can better understand, predict, and align with human intentions and values.

Andi Peng
Nationality
American
Organization
Humans&
Co-Founder, Humans&
PhD, Computer Science (CSAIL) — MIT
MPhil, Marshall Scholar — University of Cambridge
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Andrea Vallone
Organization
OpenAI
Policy and governance operator
Associated with policy, refusals, and model-behavior evaluation.


Andrew Barto
Nationality
American
Organization
University of Massachusetts Amherst
Professor Emeritus of Computer Science, University of Massachusetts Amherst
BS, Mathematics — University of Michigan
MS, Computer and Communication Sciences — University of Michigan
PhD, Computer Science — University of Michigan
Academic
Focuses on foundational research. Has expressed concern about ensuring AI systems learn aligned reward functions.

Andrew Bosworth
Organization
Meta
CTO, Meta
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Andrew Braunstein
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Andrew Burlinson
Organization
xAI
Expert Team Lead (Grok Imagine)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Andrew Cohen
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Andrew Dudzik
Organization
Google DeepMind
Senior Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Andrew Ma
Organization
xAI
Member of Technical Staff (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Andrew Ng
Nationality
British-American
Organization
AI Fund / AI Aspire / DeepLearning.AI / Landing AI
Managing General Partner, AI Fund; Managing Partner, AI Aspire; Founder & CEO, DeepLearning.AI; Executive Chairman, Landing AI
PhD, Computer Science — University of California, Berkeley
Pragmatic technologist
Opposes heavy regulation. At Davos 2026, argued AI job displacement fears are exaggerated — impact is more nuanced when jobs are broken into tasks. Advocates for open-source and broad access.


Andrew Zisserman
Nationality
British
Organization
University of Oxford / Google DeepMind
Royal Society Research Professor & Professor of Computer Vision Engineering, University of Oxford
PhD, Mathematics — University of Cambridge
Academic
Focused on advancing fundamental understanding of computer vision. Engages with responsible AI through academic research and mentorship.


Anelia Angelova
Organization
Google DeepMind
Principal Scientist and Vision-Language Lead, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Anna Makanju
Organization
OpenAI
Vice President of Global Affairs
Policy and governance operator
Leads OpenAI's global policy and government engagement; public role centers on governance and regulatory outreach rather than technical safety research.


Anthony Armstrong
Organization
xAI
CFO
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Aravind Srinivas
Nationality
Indian
Organization
Perplexity AI
Co-Founder & CEO, Perplexity AI
PhD, Computer Science — UC Berkeley
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Aren Jansen
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ari Morcos
Nationality
American
Organization
DatologyAI
Co-Founder & CEO, DatologyAI
BS, Physiology & Neuroscience — UC San Diego
PhD, Neurobiology — Harvard University
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Arjun Reddy Akula
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Arsha Nagrani
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Arthur Mensch
Nationality
French
Organization
Mistral AI
Co-founder & CEO, Mistral AI
MSc, Applied Mathematics — École Polytechnique
MSc, Mathematics, Vision and Learning — École Normale Supérieure Paris-Saclay
PhD, Machine Learning — Université Paris-Saclay
Open-source AI advocate, European tech sovereignty proponent
Believes AI safety responsibility lies with developers deploying models, not foundation model builders. Advocates open-source transparency as the best safety guarantee. Supports product-level regulation over model-level regulation.


Arvind KC
Organization
OpenAI
Chief People Officer
Safety-aligned researcher
No distinct technical safety stance located in first-party materials reviewed.


Arvind Narayanan
Nationality
Indian-American
Organization
Princeton University
Professor of Computer Science & Director of CITP, Princeton University
BTech, Computer Science and Engineering — Indian Institute of Technology Madras
PhD, Computer Science — University of Texas at Austin
Evidence-based policy advocate
Skeptical of both AI hype and existential risk framing. Focuses on distinguishing genuine AI capabilities from "snake oil." Advocates for empirical accountability — testing AI claims against evidence rather than speculation. Warns about predictive AI systems that don't work but are deployed anyway.


Ashish Vaswani
Nationality
Indian-American
Organization
Essential AI
Co-Founder & CEO, Essential AI
BE, Computer Science — Birla Institute of Technology, Mesra
PhD, Computer Science — University of Southern California
Open science advocate
Advocates for open science and foundational research transparency. Believes sustained AI progress depends on open collaboration.


Asma Ghandeharioun
Organization
Google DeepMind
Senior Research Scientist, People + AI Research, Google DeepMind
Safety-aligned researcher
Explicitly works on aligning language models with human values.


Avery Rogers
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Contributes to Anthropic technical delivery.


Ayush Jaiswal
Organization
xAI
Worked on Grok (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Balaji Lakshminarayanan
Organization
Google / Google DeepMind
Research Scientist, Google / Google DeepMind
Safety-aligned researcher
Strongly associated with uncertainty, reliability, and robust model behavior.


Barry Zhang
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Been Kim
Nationality
South Korean
Organization
Google DeepMind
Senior Staff Research Scientist, Google DeepMind
PhD, Computer Science — MIT
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Ben Goertzel
Nationality
American-Brazilian
Organization
SingularityNET
Founder, CEO & Chief Scientist, SingularityNET; CEO, ASI Alliance
BA, Quantitative Research — Bard College at Simon's Rock
PhD, Mathematics — Temple University
Academic researcher
Long-standing AGI researcher who advocates decentralized, open development of artificial general intelligence; no detailed standalone safety doctrine is documented here.


Berkin Akin
Organization
Google DeepMind
Software Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Biao He
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Bill Dally
Nationality
American
Organization
NVIDIA
Chief Scientist & SVP of Research, NVIDIA
Academic researcher
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Bill Peebles
Organization
OpenAI
Sora Lead
Safety-aligned researcher
Publicly known primarily for generative video work rather than safety-specific leadership.


Bin Wu
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Bob McGrew
Nationality
American
Organization
Arda
Founder
BS, Computer Science — Stanford University
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Boris Cherny
Organization
Anthropic
Head of Claude Code
Safety-aligned researcher
Supports Anthropic's developer tooling within its safety-focused framing.


Brad Abrams
Organization
Anthropic
Product Manager
Safety-aligned researcher
Supports Anthropic product deployment within its safety-first framing.


Brad Lightcap
Organization
OpenAI
Chief Operating Officer
Frontier lab operator
Primarily an operating executive; public role emphasizes responsible scale and deployment.


Brendan Jou
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Brenna O'Brocta
Organization
xAI
AI Tutor
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Briana Hamilton
Organization
xAI
Environment, Health and Safety Manager
Frontier lab operator
Oversees workplace environment, health, and safety at xAI facilities; the role concerns physical operations rather than AI-model safety research.


Brian Bjelde
Organization
xAI
Mission Manager
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Brian Calvert
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Bryan Catanzaro
Nationality
American
Organization
NVIDIA
VP of Applied Deep Learning Research, NVIDIA
PhD, Electrical Engineering and Computer Sciences — UC Berkeley
Applied AI builder
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Bryan Seethor
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Cady Tianyu Xu
Organization
Google DeepMind
Researcher, GenAI Team, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Caitlin Kalinowski
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Carly Ryan
Organization
Anthropic
Applied AI contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Casey Chu
Organization
OpenAI
Policy and governance operator
Directly credited with safety and model readiness on Operator.


Cat Wu
Organization
Anthropic
Product Manager
Safety-aligned researcher
Supports Anthropic product deployment within its safety-first framing.

Chaitu Aluru
Organization
xAI
Building Grok
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Charlie Nash
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Chris Bregler
Organization
Google DeepMind
Senior Director and Distinguished Scientist, Google DeepMind
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Chris Ciauri
Organization
Anthropic
Head of International
Safety-aligned researcher
Supports global deployment under Anthropic's public safety-first framing.


Chris Liddell
Organization
Anthropic
Board Member
Policy and governance operator
Governance role supporting Anthropic's public-benefit mission.


Chris Ré
Nationality
American
Organization
Stanford University / Together AI / Cartesia AI
Professor of Computer Science, Stanford; Co-Founder, Together AI & Cartesia AI
BS, Computer Science — Cornell University
PhD, Computer Science — University of Washington
Academic researcher
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Christian Ryan
Organization
Anthropic
Applied AI
Frontier lab operator
Supports real-world deployment of Anthropic systems.


Christopher Manning
Nationality
Australian-American
Organization
Stanford University / AIX Ventures
Thomas M. Siebel Professor in Machine Learning, Stanford University; General Partner, AIX Ventures
BA (Hons), Mathematics, Computer Science, and Linguistics — Australian National University
PhD, Linguistics — Stanford University
Academic centrist
Advocates for responsible AI development through rigorous research and understanding of language model capabilities and limitations.

Christopher Zihao Li
Organization
xAI
MTS (Supercomputing)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Claire Cui
Organization
Google / Google DeepMind
Google Fellow, Google / Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Claudio Angrigiani
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Clem Delangue
Nationality
French
Organization
Hugging Face
Co-founder & CEO, Hugging Face
Master in Management, Business Administration — ESCP Business School
Non-degree, Computer Science — Stanford University
Open-source AI maximalist
Believes open-source and community-driven development is the safest path for AI. Argues that transparency and broad access prevent concentration of power. Warns that the real risk is a few companies controlling AI behind closed doors.


Connor Jennings
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Contributes to Anthropic technical delivery.


Dale Schuurmans
Organization
Google DeepMind
Research Director, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Dan Belov
Organization
Google DeepMind
Distinguished Engineer, DeepMind and Google
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Daniela Amodei
Organization
Anthropic
Co-founder and President
Policy and governance operator
Publicly aligned with Anthropic's safety-first and public-benefit framing.


Daniel De Freitas
Organization
Google DeepMind
Senior Staff Software Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Daniel Golovin
Organization
Google DeepMind
Lead, Google DeepMind Pittsburgh
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Daniel Levy
Nationality
French
Organization
Safe Superintelligence Inc. (SSI)
Co-Founder & President, SSI
BS/MS, Mathematics — École Polytechnique
PhD, Computer Science — Stanford University
Safety-aligned researcher
Core mission is safe superintelligence — building the most powerful AI systems with safety as a first-class objective.


Daniel Rowland
Organization
xAI
Data center operations lead (per org chart reports)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Danqi Chen
Nationality
Chinese-American
Organization
Princeton University
Associate Professor of Computer Science, Princeton University; Associate Director, Princeton Language and Intelligence
BEng, Computer Science — Tsinghua University
PhD, Computer Science — Stanford University
Academic centrist
Focuses on building reliable and verifiable NLP systems. Advocates for rigorous evaluation of model capabilities.


Dan Zheng
Organization
Google DeepMind
Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Daphne Koller
Nationality
Israeli-American
Organization
insitro
Founder & CEO, insitro
BSc, Computer Science — Hebrew University of Jerusalem
MSc, Computer Science — Hebrew University of Jerusalem
PhD, Computer Science — Stanford University
Pro-innovation, science-driven
Optimistic about AI's transformative potential in science and healthcare. Believes responsible deployment requires domain expertise and rigorous validation. Advocates for AI as a tool to augment human capabilities rather than replace them, emphasizing collaboration between humans and machines.


Daron Acemoglu
Nationality
Turkish-American
Organization
MIT
Institute Professor, MIT
BA, Economics — University of York
MSc, Econometrics and Mathematical Economics — London School of Economics
PhD, Economics — London School of Economics
Institutionalist, pro-regulation
Skeptical of the AI industry's self-governance. Argues AI is being deployed primarily to automate and surveil workers rather than augment them, concentrating wealth and power. Warns that without strong institutions and regulation, AI will deepen inequality. Says "there are choices that are political, as well as technical, about how we develop AI."


David Ha
Nationality
Canadian
Organization
Sakana AI
Co-founder & CEO, Sakana AI
BSc, Engineering Science — University of Toronto
PhD, Computer Science — University of Tokyo
Open research advocate
Believes in building beneficial AI through nature-inspired approaches that are inherently more robust and interpretable than brute-force scaling.


David Hershey
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Contributes to Anthropic technical delivery.


David Luan
Nationality
American
Organization
Independent
Former VP, Amazon AGI SF Lab (departed Feb 2026)
BS, Applied Mathematics and Political Science — Yale University
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


David Medina
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


David Patterson
Nationality
American
Organization
UC Berkeley / Google
Pardee Professor of Computer Science Emeritus, UC Berkeley; Distinguished Engineer, Google
BA, Mathematics — University of California, Los Angeles
MS & PhD, Computer Science — University of California, Los Angeles
Open-source hardware advocate
Focuses on hardware efficiency and open standards. Believes open-source hardware (RISC-V) is critical for democratizing computing and preventing monopolistic control of AI infrastructure.


David Saunders
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


David Soria Parra
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Contributes to Anthropic technical delivery.


David Yungmann
Organization
xAI
Data Center Site Ops
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Deep Ganguli
Organization
Anthropic
Research / alignment leader
Policy and governance operator
Publicly associated with Anthropic's alignment- and safety-focused research agenda.


Demis Hassabis
Nationality
British
Organization
Google DeepMind
Co-founder & CEO, Google DeepMind
PhD, Cognitive Neuroscience — University College London
Cautious accelerationist
Pro-safety but believes in building AGI responsibly. Supports regulation. DeepMind has dedicated safety research teams.


Deniz Altınbüken
Organization
Google DeepMind
Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Denny Zhou
Organization
Google / Google DeepMind
Reasoning Research Leader in the Google-DeepMind stack
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Derek Chen
Organization
OpenAI
Policy and governance operator
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Derek Zhiyuan Cheng
Organization
Google DeepMind
Principal Software Engineer and Engineering Director, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Dianne Penn
Organization
Anthropic
Head of Product Management (Research)
Safety-aligned researcher
Bridges Anthropic research and deployment within the company's safety-first approach.


Donelle Cobb
Organization
xAI
Data Center Site Ops
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Dongqi Su
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Douglas Eck
Organization
Google DeepMind
Senior Research Director, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Drew Bent
Organization
Anthropic
Education research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Ed H. Chi
Organization
Google / Google DeepMind
Distinguished Scientist in the Google-DeepMind stack
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Edward Chou
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ekin Dogus Cubuk
Organization
Periodic Labs
Co-Founder, Periodic Labs
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Eleanor Dorfman
Organization
Anthropic
Head of Industries
Safety-aligned researcher
Supports deployment under Anthropic's public safety-first framing.


Eli Collins
Organization
Google DeepMind
Vice President of Product, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Eliezer Yudkowsky
Nationality
American
Organization
Machine Intelligence Research Institute (MIRI)
Co-founder & Senior Research Fellow, MIRI
AI existential risk hawk
The most prominent voice arguing that AI poses existential risk. Believes current AI development trajectories will lead to human extinction. Argues that no one currently knows how to align a superintelligent AI and that building one without solving alignment first is civilizational suicide. Has called for an enforceable international moratorium on large training runs, backed if necessary by airstrikes on rogue data centers.


Elon Musk
Organization
xAI
CEO
Frontier lab operator
Has long warned publicly that AI poses existential risk and co-signed the 2023 open letter calling for a pause on giant AI experiments, yet races to build frontier models at xAI; frames a "maximally truth-seeking" AI as the safest path.


Emad Mostaque
Nationality
British-Bangladeshi
Organization
Schelling AI
Founder, Schelling AI (decentralized AI)
MA, Mathematics and Computer Science — University of Oxford
Open-source builder
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Emily M. Bender
Nationality
American
Organization
University of Washington
Thomas L. and Margo G. Wyckoff Endowed Professor of Linguistics, University of Washington
AB, Linguistics — University of California, Berkeley
MA, Linguistics — Stanford University
PhD, Linguistics — Stanford University
AI skeptic, pro-regulation
Deeply critical of large language models and the AI hype cycle. Argues LLMs do not understand language and that the industry overpromises capabilities. Advocates for accountability, transparency, and centering affected communities in AI development.


Emily Pastewka
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Eric Boyd
Nationality
American
Organization
Microsoft
CVP of AI Platform, Microsoft
BS, Computer Science and Mathematics — MIT
Applied AI builder
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Eric Steinberger
Nationality
Austrian
Organization
Magic
Co-Founder & CEO, Magic AI
Attended, Computer Science — University of Cambridge
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Eric Wallace
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Erik Brynjolfsson
Nationality
American
Organization
Stanford University
Jerry Yang and Akiko Yamazaki Professor, Stanford HAI; Director, Stanford Digital Economy Lab
BA/MA, Applied Mathematics and Decision Sciences — Harvard University
PhD, Managerial Economics — MIT
Pragmatic techno-optimist
Focuses on economic policy rather than existential risk. Warns that AI could increase inequality if not managed with deliberate policy choices. Advocates for "augmentation" (AI enhancing human capabilities) over "automation" (replacing humans). Coined "The Turing Trap" to argue against solely pursuing human-level AI.


Erik Schluntz
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Supports Anthropic's agent and developer ecosystem.


Esin Durmus
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Ethan Dixon
Organization
Anthropic
Applied AI contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Ethan Guttman
Organization
xAI
Software Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ethan He
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Evan Mays
Organization
OpenAI
Policy and governance operator
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Felipe Petroski Such
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Fidji Simo
Organization
OpenAI
CEO of Applications
Frontier lab operator
Public role centers on execution and applications rather than technical safety research.


Florian Scholz
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Francesca Rossi
Nationality
Italian
Organization
IBM Research
IBM Fellow & AI Ethics Global Leader, IBM Research
BS/MS, Computer Science — University of Pisa
PhD, Computer Science — University of Pisa
Pro-governance, industry self-regulation advocate
Advocates for integrating ethical considerations into AI development from the start. Supports multi-stakeholder governance including industry, government, and civil society. Emphasizes that engineers must now understand ethics alongside technical skills.


Francesco Mosconi
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Frederic Besse
Organization
Google DeepMind
Senior Staff Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Gabriel Goh
Organization
OpenAI
Research Lead
Frontier lab operator
Publicly visible mainly through multimodal research and release work.


Gabriel Nicholas
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Gabriel Pereyra
Organization
Harvey AI
Co-Founder & CTO, Harvey AI
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Gary Marcus
Nationality
American
Organization
Independent
Professor Emeritus of Psychology and Neural Science, NYU; Author and AI Commentator
BA, Cognitive Science — Hampshire College
PhD, Cognitive Science — MIT
AI regulation advocate, skeptic of current approaches
Believes current LLM approaches are fundamentally limited and unreliable. Advocates for hybrid neurosymbolic approaches. Pushes for AI regulation, increased public AI literacy, and well-funded public think tanks to assess AI risks. More concerned about near-term harms (misinformation, unreliability) than existential risk.


Geordie Rose
Nationality
Canadian
Organization
Sanctuary AI
Co-Founder (departed Nov 2024)
BEng, Engineering Physics — McMaster University
PhD, Theoretical Physics — University of British Columbia
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


George Dahl
Organization
Google / Google DeepMind
Senior Research Scientist in the Google-DeepMind stack
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Georges Harik
Nationality
American
Organization
Humans&
Co-Founder, Humans&
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Grace Yun
Organization
Anthropic
Research / product contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Grace Zhao
Organization
OpenAI
Policy and governance operator
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Greg Brockman
Nationality
American
Organization
OpenAI
Co-Founder & President, OpenAI
Dropped out, Mathematics and Computer Science — Harvard University
Dropped out, Computer Science — Massachusetts Institute of Technology
Pro-innovation, opposes restrictive AI regulation
Believes in building AGI safely but prioritizes maintaining US technological leadership. Leading political efforts against restrictive AI legislation through a $100M+ Super PAC.


Greg Corrado
Organization
Google / Google DeepMind
Senior Research Scientist in the Google-DeepMind stack
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Greg Yang
Organization
xAI
Co-founder (departed 2026 per reporting)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Guillaume Lample
Nationality
French
Organization
Mistral AI
Co-founder & Chief Scientist, Mistral AI
MSc, Mathematics and Computer Science — École Polytechnique
PhD, Artificial Intelligence — Pierre and Marie Curie University
European tech sovereignty advocate
Advocates for open-weight models as a path to safety through transparency. Believes smaller, fine-tuned models can match larger ones with better efficiency and control.


Guillaume Princen
Organization
Anthropic
Head of EMEA
Safety-aligned researcher
Supports deployment under Anthropic's public safety-first framing.


Guillermo Christen
Organization
Anthropic
Safeguards Engineering
Safety-aligned researcher
Directly associated with Anthropic safeguards work.


Guodong Zhang
Organization
xAI
Co-founder
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Haitang Hu
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hanah Ho
Organization
Anthropic
Education / economic contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Hang Gao
Organization
xAI
Member of Technical Staff (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hannah Moran
Organization
Anthropic
Applied AI
Frontier lab operator
Supports real-world deployment of Anthropic systems.


Hannah Wong
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Haofei Wang
Organization
X / xAI
Head of Engineering/Product at X (per reporting), with overlapping xAI responsibilities
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Haozhu Wang
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hayden Warren
Organization
xAI
Software Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Heather Schmidt
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Heiga Zen
Organization
Google DeepMind
Principal Scientist, Google DeepMind Japan
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hidetoshi Tojo
Organization
Anthropic
Head of Japan
Safety-aligned researcher
Supports deployment under Anthropic's public safety-first framing.


Hossein Mobahi
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Hyeonwoo Noh
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ignacio Baquero
Organization
xAI
Safety
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Ilya Kostrikov
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ilya Sutskever
Nationality
Israeli-Canadian
Organization
Safe Superintelligence Inc. (SSI)
Co-Founder & CEO, Safe Superintelligence Inc. (SSI)
PhD, Computer Science — University of Toronto
BSc, Mathematics — University of Toronto
Safety-first, believes superintelligence is imminent
Deeply committed to AI safety. Left OpenAI over safety concerns and founded SSI with the singular mission of building safe superintelligence. Believes superintelligence is the most important technical problem of our time and must be solved safely.


Ioannis Antonoglou
Nationality
Greek
Organization
Reflection AI
Co-Founder & CTO, Reflection AI
MEng, Electrical & Computer Engineering — Aristotle University of Thessaloniki
MSc, AI & Machine Learning — University of Edinburgh
PhD, Computer Science — University College London
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Irina Ghose
Organization
Anthropic
Managing Director of India
Safety-aligned researcher
Supports deployment under Anthropic's public safety-first framing.


Isa Fulford
Organization
OpenAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ivan Zd
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ivan Zhang
Nationality
Canadian
Organization
Cohere
Co-founder, Cohere
BSc (incomplete), Computer Science — University of Toronto
Canadian tech ecosystem advocate
Focused on enterprise-grade safety through grounded generation, data privacy, and deployment controls. Believes practical safety comes from building trustworthy enterprise products.


Jack Clark
Organization
Anthropic
Co-founder; policy and communications leader
Policy and governance operator
Publicly associated with Anthropic's safety-first and governance-oriented discourse.


Jack K.
Organization
xAI
Program Manager
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jack Parker-Holder
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jacob Menick
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jake Eaton
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Jakob Uszkoreit
Nationality
German
Organization
Inceptive
Co-Founder & CEO, Inceptive
MS, Computer Science & Mathematics — Technische Universität Berlin
Pragmatic technologist
Focused on applying AI to beneficial domains like drug discovery and healthcare. Believes the most important safety question is ensuring AI is used for high-impact positive applications.


James Betker
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


James Wells
Organization
Sanctuary AI
CEO, Sanctuary AI
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Jane Leibrock
Organization
Anthropic
Research methodology contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Janelle Gale
Organization
Meta
Head of People
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jared Birchall
Organization
xAI
Operations/Finance & Legal oversight (per org chart reports)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jared Mueller
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Jason Jones
Organization
Anthropic
Education research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Jason Kwon
Organization
OpenAI
Chief Strategy Officer
JD — UC Berkeley Law
BA — Georgetown University
Policy and governance operator
Publicly associated with OpenAI governance, policy, and mission alignment rather than frontier research itself.


Jason Weston
Nationality
British
Organization
Meta AI
Research Scientist, Meta AI; Visiting Research Professor, NYU
PhD, Machine Learning — Royal Holloway, University of London
Open research advocate
Advocates for open research and responsible dialogue systems. Focuses on building AI that can engage in safe, helpful conversations.


Jay Kreps
Organization
Anthropic
Board Member
Policy and governance operator
Governance role supporting Anthropic's public-benefit mission.


Jeffrey Hui
Organization
Google DeepMind
Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jeffrey Zhang
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jensen Huang
Nationality
Taiwanese-American
Organization
NVIDIA
Founder, President & CEO, NVIDIA
BS, Electrical Engineering — Oregon State University
MS, Electrical Engineering — Stanford University
Frontier lab operator
Publicly optimistic about AI's benefits; has repeatedly downplayed existential-risk concerns while focusing on scaling compute infrastructure. No detailed standalone safety doctrine is documented here.


Jeremy Crice
Organization
xAI
Security / Sales leader
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jeremy Hadfield
Organization
Anthropic
Applied AI
Frontier lab operator
Supports real-world deployment of Anthropic systems.


Jerome Swannack
Organization
Anthropic
MCP Product Engineering
Frontier lab operator
Supports Anthropic's developer tooling and agent infrastructure.


Jerry Hong
Organization
Anthropic
Research / design contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Jialin Wu
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jiaming Shen
Organization
Google DeepMind
Senior Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jianfeng Wang
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jie Bing
Organization
xAI
Engineering
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jihui Yang
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jim Fan
Nationality
Chinese-American
Organization
NVIDIA
Director of AI & Distinguished Scientist, NVIDIA
BS, Computer Science — Columbia University
PhD, Computer Science — Stanford University
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Jimmy M R.
Organization
xAI
Data Center Operations Technician
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Jiri De Jonghe
Organization
Anthropic
Applied AI
Frontier lab operator
Supports real-world deployment of Anthropic systems.


Jitendra Malik
Nationality
Indian-American
Organization
UC Berkeley / Meta
Arthur J. Chick Professor of EECS, UC Berkeley; Research Director, Meta FAIR
BTech, Electrical Engineering — Indian Institute of Technology Kanpur
PhD, Computer Science — Stanford University
Academic pragmatist
Focuses on building robust and reliable vision systems. Primarily an empiricist who lets research guide policy views.


Joanne Jang
Organization
OpenAI
GM, OpenAI Labs
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Joaquin Quiñonero Candela
Organization
OpenAI
Head of Recruiting
Frontier lab operator
Previously led preparedness work focused on catastrophic-risk mitigation.


Joelle Pineau
Nationality
Canadian
Organization
Cohere
Chief AI Officer, Cohere; Professor, McGill University
PhD, Robotics — Carnegie Mellon University
BASc, Systems Design Engineering — University of Waterloo
Open research advocate, supports responsible AI governance
Strong advocate for open research and reproducibility as a path to safer AI. Believes transparency in model development and evaluation is essential for building trustworthy AI systems.


Johannes Heidecke
Organization
OpenAI
Policy and governance operator
Associated with evaluations, alignment-adjacent publications, and safety-relevant leadership contexts.


John Giannandrea
Nationality
Scottish
Organization
Apple
SVP of Machine Learning & AI Strategy, Apple (retiring Spring 2026)
BSc, Computer Science — University of Strathclyde
Frontier lab operator
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


John Jumper
Nationality
American
Organization
Google DeepMind / Isomorphic Labs
Director, Google DeepMind; Director, Isomorphic Labs
PhD, Theoretical Chemistry — University of Chicago
MPhil, Theoretical Condensed Matter Physics — University of Cambridge
BS, Physics and Mathematics — Vanderbilt University
Focused on scientific applications of AI
Focused on beneficial applications of AI to science. Believes AI has transformative potential for drug discovery and biological understanding. Supports responsible deployment of AI in scientific domains.


John Mullan
Organization
xAI
Co-founder of Hotshot; joined xAI via acquisition
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Josh Albrecht
Nationality
American
Organization
Imbue
Co-Founder & CTO, Imbue
BS, Computer Science — University of Pittsburgh
MS, Computer Science — University of Pittsburgh
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Josh Tobin
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Joshua Achiam
Organization
OpenAI
Chief Futurist / Head of Mission Alignment
PhD, EECS — UC Berkeley
BS, Physics — University of Florida
BS, Aerospace Engineering — University of Florida
Policy and governance operator
Explicitly frames AI safety as a sociotechnical challenge requiring democratic input and mission alignment.


Joy Buolamwini
Nationality
Ghanaian-American
Organization
Algorithmic Justice League
Founder & Executive Director, Algorithmic Justice League
BS, Computer Science — Georgia Institute of Technology
MSc, Learning and Technology — University of Oxford
MS, Media Arts and Sciences — MIT Media Lab
PhD, Media Arts and Sciences — MIT Media Lab
Civil rights advocate, pro-regulation
Leading voice against algorithmic discrimination. Focuses on the real-world harms of biased AI systems, particularly on communities of color. Advocates for moratoriums on facial recognition technology and stronger AI accountability laws.


Judea Pearl
Nationality
Israeli-American
Organization
UCLA
Professor of Computer Science and Statistics, UCLA; Director, Cognitive Systems Laboratory
PhD, Electrical Engineering — Polytechnic Institute of Brooklyn
MS, Physics — Rutgers University
Academic, focused on advancing scientific methodology
Believes current AI lacks true understanding because it cannot reason about cause and effect. Argues that without causal reasoning, AI systems remain fundamentally limited and potentially unreliable.


Judy Shen
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.

Julien Chaumond
Nationality
French
Organization
Hugging Face
Co-founder & CTO, Hugging Face
MSc, Applied Mathematics — Ecole Polytechnique
MSc, Computer Science — Telecom Paris
MS, Electrical Engineering and Computer Science — Stanford University
Open-source AI advocate
Believes open-source and community-driven AI development is the safest path. Advocates for democratizing access to AI models and making them inspectable by everyone.


Jun Shern Chan
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Justin Young
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Kai Musk
Organization
xAI
Engineering intern (per org chart reports)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kanjun Qiu
Nationality
Chinese-American
Organization
Imbue
Co-Founder & CEO, Imbue
BS, Electrical Engineering & Computer Science — MIT
MS, Electrical Engineering & Computer Science — MIT
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Karen Simonyan
Nationality
British
Organization
Microsoft AI
Chief Scientist, Microsoft AI
PhD, Computer Vision — University of Oxford
Industry pragmatist
Works within Microsoft's responsible AI framework. Career trajectory from DeepMind to Inflection to Microsoft suggests focus on deploying AI safely at scale.


Karthikeyan Shanmugam
Organization
Google DeepMind
Research Scientist, Google DeepMind India
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kashyap Murali
Organization
Anthropic
Claude Code Product Engineering
Safety-aligned researcher
Supports Anthropic's developer tooling within its safety-first framing.


Kate Crawford
Nationality
Australian
Organization
USC Annenberg / Microsoft Research
Research Professor, USC Annenberg; Senior Principal Researcher, Microsoft Research
PhD, Media Studies — University of Sydney
Critical scholar, progressive
Focuses on the political economy and material costs of AI rather than existential risk. Argues AI systems encode existing power structures and extractive practices. Advocates for examining who benefits and who is harmed by AI deployment, including environmental costs of compute infrastructure.


Kate Earle Jensen
Organization
Anthropic
Head of Americas
Safety-aligned researcher
Supports deployment under Anthropic's public safety-first framing.

Katelyn Lesse
Organization
Anthropic
Head of API Engineering
Safety-aligned researcher
Supports Anthropic's developer platform within its reliability and safety framing.


Keir Bradwell
Organization
Anthropic
Research / communications contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Ken Aizawa
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Ken Chu
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ken Goldberg
Nationality
American
Organization
UC Berkeley
Professor of IEOR and EECS, UC Berkeley; Director of AUTOLAB
BS, Electrical Engineering and Economics — University of Pennsylvania
PhD, Computer Science — Carnegie Mellon University
Research-focused
Advocates for closing the "data gap" in robotics — the disconnect between simulation and real-world robot performance. Focuses on practical, deployable robot learning.


Kenji Hata
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kenneth Lien
Organization
Anthropic
Engineering contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Kenny Zhu (kzu)
Organization
xAI
Proto co-author (GitHub)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kevin K. Shah
Organization
xAI
Specialist Team Lead (Grok Imagine)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kevin Liu
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Kevin Lu
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kevin Murphy
Organization
Google / Google DeepMind
Senior Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kevin Scott
Nationality
American
Organization
Microsoft
CTO & EVP of Technology & Research, Microsoft
BS, Computer Science — Lynchburg College
MS, Computer Science — Wake Forest University
Research-focused technologist
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Kevin Weil
Organization
OpenAI
Chief Product Officer
Frontier lab operator
Publicly tied to productization of frontier systems under OpenAI's deployment framework.

Kian Katanforoosh
Organization
Workera
Founder & CEO, Workera
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Kim Withee
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Kory Mathewson
Organization
Google DeepMind
Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kristen Swanson
Organization
Anthropic
Education research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Kristina P.
Organization
xAI
Operations Manager, xAI Safety
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Krisztian Balog
Organization
Google DeepMind
Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kshitij Gupta
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kuang-Huei Lee
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Kunal Handa
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Kyle Kosic
Organization
xAI
Co-founder (departed 2024 per reporting)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Kyunghyun Cho
Nationality
South Korean
Organization
New York University
Glen de Vries Professor of Health Statistics and Professor of Computer Science & Data Science, NYU; Co-Head, Global AI Frontier Lab
BS, Computer Science — KAIST
MSc, Machine Learning and Data Mining — Aalto University
DSc, Computer Science — Aalto University
Academic centrist
Supports responsible AI development with emphasis on reproducibility and scientific rigor.


Leo Liu
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Leslie Kaelbling
Nationality
American
Organization
MIT
Panasonic Professor of Computer Science and Engineering, MIT
AB, Philosophy — Stanford University
PhD, Computer Science — Stanford University
Academic pragmatist
Focuses on building reliable and predictable robot behavior. Emphasizes formal methods and principled approaches to decision-making in uncertain environments.


Liam Fedus
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Lianmin Zheng
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Li Jing
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Lily Lim
Organization
xAI
General Counsel
Policy and governance operator
Legal and governance role; no detailed standalone safety doctrine is documented here.


Lisa Crofoot
Organization
Anthropic
Research Product Manager
Frontier lab operator
Helps connect Anthropic research and product deployment.


Lora Aroyo
Organization
Google DeepMind
Senior Research Scientist and Team Lead, Google DeepMind
Safety-aligned researcher
Explicitly focused on evaluation, data quality, and safety benchmarking.


Lorenz Kuhn
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Louis Feuvrier
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Lucas Dixon
Organization
Google DeepMind
Director of Research, Google DeepMind
Safety-aligned researcher
Explicitly focused on interpreting, controlling, and evaluating frontier models.


Lu Liu
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Maggie Vo
Organization
Anthropic
Founder and Lead, Education Team
Frontier lab operator
Publicly associated with Anthropic's emphasis on safe and effective human-AI collaboration.


Manish Gupta
Organization
Google DeepMind
Senior Director, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Manuel Kroiss
Organization
xAI
Co-founder
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Marc Najork
Organization
Google DeepMind
Distinguished Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Marco Fornoni
Organization
Google DeepMind
Staff Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Margaret Mitchell
Nationality
American
Organization
Hugging Face
Chief Ethics Scientist, Hugging Face
BA, Linguistics — Reed College
MS, Computational Linguistics — University of Washington
PhD, Computer Science — University of Aberdeen
AI ethics advocate, open-source proponent
Strong advocate for AI accountability and transparency. Focuses on bias mitigation, fairness, and the disproportionate impact of AI on marginalized communities. Pushes for open, auditable AI systems.


Mariano-Florentino Cuéllar
Organization
Anthropic
Long-Term Benefit Trust Trustee
Policy and governance operator
Governance role supporting Anthropic's long-term public-benefit mission.


Mario Lucic
Organization
Google DeepMind
Senior Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mark (mark-xai)
Organization
xAI
SDK/proto contributor (xai-org)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mark Zuckerberg
Nationality
American
Organization
Meta
CEO & Chairman, Meta Platforms
Attended, Computer Science & Psychology — Harvard University
Open-source builder
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Martin Ma
Organization
xAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Marvin Zhang
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Masaru Sato
Organization
xAI
Safety (X/xAI)
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Masayoshi Son
Organization
Stargate Venture
Chair
Capital allocator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Massimo Nicosia
Organization
Google DeepMind
Staff Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Matt Kearney
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Matt Knight
Organization
OpenAI
Head of Security
Policy and governance operator
Security-first stance focused on protecting models, systems, and deployments.


Maxim Massenkoff
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Max Jaderberg
Nationality
British
Organization
Isomorphic Labs
President, Isomorphic Labs
Research-focused technologist
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Max Tegmark
Nationality
Swedish-American
Organization
MIT / Future of Life Institute
Professor of Physics, MIT; President, Future of Life Institute
BSc, Physics — Royal Institute of Technology (KTH), Stockholm
BA, Economics — Stockholm School of Economics
PhD, Physics — University of California, Berkeley
AI Safety Advocate
Strongly pro-safety. Co-authored the 2025 Statement on Superintelligence calling for a ban on superintelligence development until there is scientific consensus it can be done safely. Believes AI governance must be proactive, not reactive.


Mehdi Sajjadi
Organization
Google DeepMind
Team Lead, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Meire Fortunato
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Melanie Mitchell
Nationality
American
Organization
Santa Fe Institute
Professor & Inaugural Fractal Faculty, Santa Fe Institute
BA, Mathematics and Astronomy — Brown University
PhD, Computer Science — University of Michigan
Nuanced AI realist
Skeptical of both extreme hype and extreme doom. Argues that current AI systems lack true understanding and that we need better evaluation methods. Focuses on what AI actually can and cannot do, rather than speculative scenarios.


Meredith Ringel Morris
Organization
Google DeepMind
Director and Principal Scientist for Human-AI Interaction, Google DeepMind
Safety-aligned researcher
Strongly associated with human-centered and responsible AI practices.


Meredith Whittaker
Nationality
American
Organization
Signal Foundation
President, Signal Foundation
BA, Rhetoric and English Literature — University of California, Berkeley
Pro-privacy, anti-surveillance, tech accountability advocate
Focuses on structural power dynamics in AI. Argues safety cannot be separated from surveillance, labor exploitation, and corporate concentration. Warns that agentic AI undermines privacy and security. Advocates for privacy-first design and nonprofit governance of critical infrastructure.


Mia Glaese
Organization
OpenAI
Head of Human Data
Safety-aligned researcher
Works at the intersection of model improvement, evaluation, and safety-related data pipelines.


Michael Gerstenhaber
Organization
Anthropic
Head of Product Management
Safety-aligned researcher
Supports Anthropic product deployment within its safety-first framing.

Michael Hopko
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Michael I. Jordan
Nationality
American
Organization
UC Berkeley / Inria
Pehong Chen Distinguished Professor Emeritus, UC Berkeley; Directeur de Recherche, Inria & ENS Paris
PhD, Cognitive Science — University of California, San Diego
MS, Mathematics — Arizona State University
BS, Psychology — Louisiana State University
Pragmatic, warns against AI hype
Skeptical of near-term AGI hype. Argues the field needs more focus on decision-making, economics, and market design rather than pure prediction. Warns that real risks are in poorly designed systems affecting markets and societies, not sentient AI.


Michael Sherrick
Organization
xAI
Software Engineer Specialist
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Michael Stern
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Michele Wang
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Michelle Pokrass
Organization
OpenAI
API Research Lead
Frontier lab operator
Publicly associated with making model behavior more reliable and controllable for developers.


Mike Dalton
Organization
X / xAI
Engineering leader at X (per reporting); also involved with xAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mike Dusenberry
Organization
Google DeepMind
Research Engineer, Gemini Group, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mike Krieger
Organization
Anthropic
Chief Product Officer
Safety-aligned researcher
Contributes to Anthropic product deployment within the company's safety-first framing.


Mike Lewis
Nationality
British
Organization
Meta
Research Scientist, FAIR; Pre-training Lead for Llama 3
PhD, Computer Science — University of Edinburgh
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mike Liberatore
Organization
xAI
Former CFO (per reporting)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mike Schroepfer
Nationality
American
Organization
Gigascale Capital
Founder & Partner, Gigascale Capital; Senior Fellow, Meta (part-time)
BS, Computer Science — Stanford University
MS, Computer Science — Stanford University
Capital allocator
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Miles Brundage
Nationality
American
Organization
AVERI
Co-Founder & Executive Director, AVERI (AI Verification and Evaluation Research Institute)
PhD, Human and Social Dimensions of Science and Technology — Arizona State University
Pro-governance, independent AI accountability advocate
Strong advocate for independent, external auditing of AI systems. Believes voluntary self-regulation by AI companies is insufficient. Argues we are in "triage mode" for AI policy and need to prioritize building robust evaluation infrastructure now. Left OpenAI partly due to concerns about the gap between safety commitments and practice.

Miles McCain
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Milind Tambe
Organization
Google DeepMind
Principal Scientist and Director, AI for Social Good, Google DeepMind
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ming-Hsuan Yang
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Minsuk Chang
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mira Murati
Nationality
Albanian-American
Organization
Thinking Machines Lab
Founder & CEO, Thinking Machines Lab
BA, Liberal Arts — Colby College
BE, Mechanical Engineering — Dartmouth College (Thayer School of Engineering)
Pragmatic technologist
Believes in democratizing AI through customization and understanding. Founded Thinking Machines Lab as a public benefit corporation to make AI systems more widely understood and controllable.


Misha Laskin
Nationality
Russian-American
Organization
Reflection AI
Co-Founder & CEO, Reflection AI
BA, Physics and Literature — Yale University
PhD, Theoretical Many-Body Quantum Physics — University of Chicago
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Mojtaba Seyedhosseini
Organization
Google DeepMind
Research Scientist, Google DeepMind (multimodal models)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Mustafa Suleyman
Nationality
British
Organization
Microsoft
EVP & CEO, Microsoft AI
Dropped out, Philosophy, Politics, and Economics — University of Oxford
Techno-optimist with social conscience
Authored "The Coming Wave" warning about AI and biotech risks. Advocates for containment strategies while aggressively building AI capabilities. At Microsoft, pursuing "humanist superintelligence" that serves humanity.


Nat Friedman
Nationality
American
Organization
Meta
Head of Products & Applied Research, Meta Superintelligence Labs
BS, Computer Science and Mathematics — MIT
Open-source builder
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nathan Ziebart
Organization
xAI
Software Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Neal Bayya
Organization
xAI
Infrastructure
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Neil Buddy Shah
Organization
Anthropic
Long-Term Benefit Trust Trustee
Policy and governance operator
Governance role supporting Anthropic's long-term public-benefit mission.

Nick Alonso
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nick Bostrom
Nationality
Swedish
Organization
Macrostrategy Research Initiative
Founder & Principal Researcher, Macrostrategy Research Initiative
BA, Philosophy — University of Gothenburg
MA, Philosophy and Physics — Stockholm University
MSc, Computational Neuroscience — King's College London
PhD, Philosophy — London School of Economics
Techno-progressive, existential risk-focused
Pioneer of AI existential risk thinking. Argued in Superintelligence that a misaligned superintelligent AI could pose an existential threat to humanity. Advocates for proactive governance and technical safety research before AGI is developed.


Nick Frosst
Nationality
Canadian
Organization
Cohere
Co-founder, Cohere
BSc, Computer Science and Cognitive Science — University of Toronto
Canadian tech ecosystem advocate
Supports responsible enterprise AI deployment with strong data governance. Believes enterprise-focused AI with retrieval-augmented generation reduces hallucination risks.


Nick Turley
Organization
OpenAI
VP of ChatGPT
Frontier lab operator
Associated with large-scale product deployment under OpenAI's safety framework.


Nicolas Heess
Organization
Google DeepMind
Research Scientist (Director) and AI/Robotics Team Lead, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nidhi Pai
Organization
xAI
Grok Voice
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nimesh Ghelani
Organization
Google DeepMind
Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Nitarshan Rajkumar
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Noam Shazeer
Nationality
American
Organization
Google DeepMind
VP Engineering & Gemini Co-Lead, Google DeepMind
BS, Mathematics & Computer Science — Duke University
Builder-first technologist
Pragmatic approach to safety. Left Google partly because the company was too cautious about releasing chatbot technology. Believes in shipping products and iterating.


Norman Mu
Organization
xAI
Engineering
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Olivia Norton
Nationality
Canadian
Organization
Sanctuary AI
Co-Founder, CTO & CPO, Sanctuary AI
BSc, Computer Engineering (Biomedical) — University of Calgary
MEng, Electrical and Computer Engineering — University of British Columbia
Frontier lab operator
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Olivia Olsen
Organization
xAI
AI Learning & Development
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Olivier Godement
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Omar (Omar-V2)
Organization
xAI
SDK maintainer (xai-org)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Oriol Vinyals
Nationality
Spanish
Organization
Google DeepMind
VP of Research & Gemini Co-lead, Google DeepMind
BS, Mathematics and Telecommunication Engineering — Universitat Politècnica de Catalunya
MS, Computer Science — University of California, San Diego
PhD, Electrical Engineering and Computer Sciences — University of California, Berkeley
Research-focused
Believes in responsible development. Stated there are "no walls in sight" for model capability, emphasizing the need for careful scaling.


Paul Smith
Organization
Anthropic
Chief Commercial Officer
Frontier lab operator
Supports Anthropic's deployment strategy within its public safety-first positioning.


Pavel Golik
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Petar Veličković
Nationality
Serbian
Organization
Google DeepMind
Research Scientist, Google DeepMind
PhD — University of Cambridge
Research-focused
Not a core safety figure; his work focuses on reasoning and scientific applications rather than safety advocacy.


Peter DeSantis
Nationality
American
Organization
Amazon
SVP, Head of Amazon AI Organization (AGI, Custom Silicon, Quantum Computing)
BS, Economics and Computer Science — Dartmouth College
Research-focused technologist
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Peter Lee
Nationality
American
Organization
Microsoft
President, Microsoft Research
BS, Mathematics and Computer Sciences — University of Michigan
PhD, Computer and Communication Sciences — University of Michigan
Academic researcher
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Peter McCrory
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Peter Norvig
Nationality
American
Organization
Stanford HAI
Distinguished Education Fellow, Stanford Institute for Human-Centered AI (HAI)
BS, Applied Mathematics — Brown University
PhD, Computer Science — University of California, Berkeley
Pragmatic centrist
Balanced perspective. Believes in human-centered AI development. Engages with Gary Marcus and other critics on how to govern AI responsibly. Focuses on education as a key lever for safe AI adoption.


Peter Welinder
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Petros Maniatis
Organization
Google DeepMind
Senior Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Philip C.
Organization
xAI
Program Manager
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Piotr Mirowski
Organization
Google DeepMind
Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Prem Akkaraju
Nationality
Indian-American
Organization
Stability AI
CEO, Stability AI
MBA, Business — Columbia Business School
BA, Applied Mathematics & Economics — University of New Mexico
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Prithvi Rajasekaran
Organization
Anthropic
Applied AI contributor
Safety-aligned researcher
Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.


Pushmeet Kohli
Nationality
Indian-British
Organization
Google DeepMind
VP of Research, Google DeepMind; Head of AI for Science & Strategic Initiatives
BTech, Computer Science and Engineering — National Institute of Technology, Warangal
PhD, Computer Vision — Oxford Brookes University
Science-focused
Advocates for responsible AI deployment. Leads SynthID watermarking initiative to combat AI-generated misinformation. Focuses on using AI to solve scientific challenges safely.


Quoc Le
Organization
Google / Google DeepMind
Senior Scientist and large-model research leader across Google and Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rachel Dias
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Radhakrishnan Venkataramani
Organization
xAI
Engineering (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Radu Soricut
Organization
Google DeepMind
Distinguished Scientist and Senior Research Director, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rahul Patil
Organization
Anthropic
Chief Technology Officer
Safety-aligned researcher
Contributes to Anthropic's safety-and-reliability-focused infrastructure agenda.


Rahul Ravishankar
Organization
xAI
Member of Technical Staff (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Raia Hadsell
Nationality
American
Organization
Google DeepMind
Vice President of Research; Co-Lead, Frontier AI Unit, Google DeepMind
Frontier lab operator
Associated with technically grounded frontier-AI development and cautious real-world deployment for embodied systems.


Rakesh G.
Organization
xAI
Sr. Data Center Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rashid Lasker
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rebecca Harbeck
Organization
Anthropic
Partnerships / GTM Contributor
Frontier lab operator
Supports ecosystem growth within Anthropic's public safety framing.


Reed Hastings
Organization
Anthropic
Board Member
Policy and governance operator
Governance role supporting Anthropic's public-benefit mission.


Reid Hoffman
Nationality
American
Organization
Greylock Partners
Partner, Greylock; Co-Founder, Inflection AI; Board Member, Microsoft
BA, Symbolic Systems — Stanford University
MSt, Philosophy — University of Oxford
Capital allocator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Reiichiro Nakano
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Ria Strasser Galvis
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Richard Fontaine
Organization
Anthropic
Long-Term Benefit Trust Trustee
Policy and governance operator
Governance role supporting Anthropic's long-term public-benefit mission.


Richard S. Sutton
Nationality
American-Canadian
Organization
University of Alberta / Keen Technologies
Professor of Computing Science, University of Alberta; Chief Scientific Advisor, Amii; Research Scientist, Keen Technologies
PhD, Computer Science — University of Massachusetts Amherst
BA, Psychology — Stanford University
Optimistic accelerationist
Optimistic about superintelligent AI. Believes superintelligent agents are coming, that they will be good for the world, and that the path to creating them runs through reinforcement learning.


Riley Trettel
Organization
xAI
Energy & Data Center Development
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Robert Geirhos
Organization
Google DeepMind
Research Scientist, Google DeepMind
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Robert Keele
Organization
xAI
General Counsel / Head of Legal (per reporting; departed)
Policy and governance operator
Legal and governance role; no distinct standalone public safety stance is documented here.


Rob Fergus
Nationality
British-American
Organization
Meta
Director of AI Research & Head of FAIR, Meta
BA/MEng, Electrical and Information Engineering — University of Cambridge
MSc, Electrical Engineering — Caltech
DPhil, Electrical Engineering — University of Oxford
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rodney Brooks
Nationality
Australian-American
Organization
Robust.AI
Founder & CTO, Robust.AI
MA, Pure Mathematics — Flinders University
PhD, Computer Science — Stanford University
AI hype skeptic
Deeply skeptical of existential risk narratives. Believes current AI capabilities are vastly overhyped. Predicts deployable robotic dexterity will remain far behind human hands for decades. Focuses on practical, near-term robotics challenges.


Rohit Prasad
Nationality
Indian-American
Organization
Amazon
Former SVP & Head Scientist, Amazon AGI (departed end of 2025)
BE, Electronics and Communications Engineering — Birla Institute of Technology
MS, Electrical Engineering — Illinois Institute of Technology
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Ronnie Chatterji
Organization
OpenAI
Chief Economist
Policy and governance operator
Public work focuses on distributing AI's benefits and understanding labor-market effects.


Ross Girshick
Nationality
American
Organization
Vercept
Co-Founder, Vercept
PhD, Computer Science — University of Chicago
Open research advocate
Supports open-source AI research. Focuses on building practical, reliable vision systems.


Ross Nordeen
Organization
xAI
Co-founder
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Rumman Chowdhury
Nationality
American
Organization
Humane Intelligence
CEO & Co-Founder, Humane Intelligence; U.S. Science Envoy for AI
BS, Political Science — MIT
BS, Management Science — MIT
MS, Quantitative Methods — Columbia University
PhD, Political Science — University of California, San Diego
Responsible AI advocate, pro-governance
Focuses on practical, applied AI ethics — building tools and frameworks for auditing and accountability. Advocates for diverse, community-driven approaches to AI evaluation rather than top-down corporate self-regulation.


Ruoming Pang
Organization
OpenAI
Researcher, OpenAI
BS, Computer Science — Shanghai Jiao Tong University
MS, Computer Science — University of Southern California
PhD, Computer Science — Princeton University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Russ Tedrake
Nationality
American
Organization
MIT / Toyota Research Institute
Toyota Professor, MIT; SVP of Large Behavior Models, Toyota Research Institute
BSE, Computer Engineering — University of Michigan
PhD, Electrical Engineering and Computer Science — MIT
Research-focused
Focuses on building reliable and safe robotic systems through rigorous simulation and verification. Emphasizes the gap between demos and deployable robots.


Ruth Appel
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Ryan Heller
Organization
Anthropic
Economic research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Saachi Jain
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Saffron Huang
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Sagar Naik
Organization
xAI
Data Center Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sahil Jain
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Saket Joshi
Organization
xAI
Machine Learning
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sam Altman
Nationality
American
Organization
OpenAI
CEO, OpenAI
Dropped out, Computer Science — Stanford University
Cautious accelerationist, pro-regulation dialogue
Publicly supports AI safety research and regulation while aggressively scaling capabilities. Advocates for international governance. Stepped down from clean energy boards in 2025 to focus on OpenAI. Has drawn criticism for perceived gap between safety rhetoric and rapid deployment.


Sam Dodge
Organization
xAI
Machine Learning Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Samuel Albanie
Organization
Google DeepMind
Frontier Evals Lead for Gemini, Google DeepMind
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Samuel Miserendino
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Sandeep Rao
Organization
xAI
Engineer
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Santosh Janardhan
Organization
Meta
Co-Lead, Meta Compute
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sarah Friar
Organization
OpenAI
Chief Financial Officer
Frontier lab operator
No distinct public technical safety stance located in first-party materials reviewed.


Sarah Pollack
Organization
Anthropic
Research / communications contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Satinder Singh
Organization
Google DeepMind / University of Michigan
Research Scientist, Google DeepMind; Professor, University of Michigan
Academic researcher
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Satya Nadella
Nationality
Indian-American
Organization
Microsoft
Chairman & CEO, Microsoft
BE, Electrical Engineering — Manipal Institute of Technology
MS, Computer Science — University of Wisconsin-Milwaukee
MBA, Business — University of Chicago Booth School of Business
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Saurish Srivastava
Organization
xAI
Post-training
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sean White
Nationality
American
Organization
Inflection AI
CEO, Inflection AI
PhD, Computer Science — Columbia University
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Sebastian De Ro
Nationality
Austrian
Organization
Magic
Co-Founder & CTO, Magic AI
Diploma, Higher Informatics — HTBLVA Spengergasse
Frontier lab operator
Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.


Sebastian Thrun
Nationality
German-American
Organization
Stanford University / Stealth Startups
Research Professor, Stanford University; Founder, multiple AI ventures
Vordiplom, Computer Science, Economics, and Medicine — University of Hildesheim
PhD, Computer Science and Statistics — University of Bonn
Techno-optimist
Optimistic about AI's benefits. Believes autonomous vehicles will save millions of lives. Advocates for AI democratization through education.


Sergey Edunov
Organization
Genesis Molecular AI
SVP of Foundation Models, Genesis Molecular AI
Research-focused technologist
Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.


Shakir Mohamed
Organization
Google DeepMind
Director of Research, Google DeepMind
Safety-aligned researcher
Explicitly emphasizes responsible innovation, social impact, and technical rigor in evaluating advanced systems.


Shawn Thapa
Organization
xAI
SDK/proto contributor (xai-org)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Shayan Salehian
Organization
xAI
Worked on X timeline and Grok models (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Shekoofeh Azizi
Organization
Google DeepMind
Staff Research Scientist and Research Lead, Google DeepMind
Safety-aligned researcher
Focuses on safety-relevant biomedical applications and evidence-heavy deployment contexts.


Sherwin Wu
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Shi Dong
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Shimin Wang
Organization
xAI
Member of Technical Staff (Post-training)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sid Bidasaria
Organization
Anthropic
Member of Technical Staff
Frontier lab operator
Contributes to Anthropic's technical delivery within the company's safety-first framing.


Simon Kohl
Nationality
German
Organization
Latent Labs
Founder, Latent Labs
Frontier lab operator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Simon Zhai
Organization
xAI
Member of Technical Staff (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Slav Petrov
Organization
Google DeepMind
Vice President, Research, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


S. M. Ali Eslami
Organization
Google DeepMind
Distinguished Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Srinivas Narayanan
Organization
OpenAI
VP Engineering
Frontier lab operator
No standalone public safety platform located, but role is central to reliable deployment.


Stuart Ritchie
Organization
Anthropic
Research / writing contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Stuart Russell
Nationality
British-American
Organization
University of California, Berkeley / CHAI
Professor (Smith-Zadeh Chair in Engineering), University of California, Berkeley; Director, Center for Human-Compatible AI (CHAI)
PhD, Computer Science — Stanford University
BA, Physics — University of Oxford
AI Safety Advocate, supports strong regulation
One of the most vocal advocates for AI existential risk. Author of "Human Compatible." Warns that AI systems pursuing misspecified objectives pose catastrophic risks. Co-founded IASEAI to give the safety community a collective voice.


Sudhir Vijay
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sulaiman Khan Ghori
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Sulman Choudhry
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Szymon Sidor
Nationality
Polish
Organization
OpenAI
Technical Fellow / Member of Technical Staff, OpenAI
BA, Computer Science — University of Cambridge
MS, Mechatronics, Robotics, and Automation Engineering — MIT
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Szymon Tworkowski
Organization
xAI
Scaling LLMs
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tal Schuster
Organization
Google DeepMind
Research Scientist, Google DeepMind
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Tara Sainath
Organization
Google DeepMind
Distinguished Research Scientist; Co-Lead, Gemini Audio Pillar, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tejal Patwardhan
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Terrence Sejnowski
Nationality
American
Organization
Salk Institute for Biological Studies
Francis Crick Chair & Head of Computational Neurobiology Laboratory, Salk Institute; Professor, UCSD
BS, Physics — Case Western Reserve University
PhD, Physics — Princeton University
Science-first moderate
Emphasizes the importance of understanding biological intelligence to build safer AI. Advocates for neuroscience-informed AI development.


Thang Luong
Organization
Google DeepMind
Principal Scientist and Director of Research, Google DeepMind
PhD, Computer Science — Stanford University
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Thomas Hubert
Organization
Google DeepMind
Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Thomas Millar
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Thomas Wolf
Nationality
French
Organization
Hugging Face
Co-founder & Chief Science Officer, Hugging Face
MSc, Theoretical Physics — École Polytechnique
PhD, Quantum Statistical Physics — Sorbonne University
Law Degree, Intellectual Property Law — Panthéon-Sorbonne University
Open-science advocate
Advocates for open science and reproducibility as safety mechanisms. Concerned that over-reliance on AI without novel reasoning capabilities creates systemic risks. Believes democratization of AI through open-source reduces concentration of power.


Tianle Li
Organization
xAI
Research/engineering
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Timnit Gebru
Nationality
Ethiopian-Eritrean American
Organization
DAIR (Distributed AI Research Institute)
Founder & Executive Director, DAIR Institute
BSc, Electrical Engineering — Stanford University
MSc, Electrical Engineering — Stanford University
PhD, Computer Vision — Stanford University
Progressive, anti-corporate-concentration
Focuses on present-day harms of AI: bias, surveillance, labor exploitation, and environmental costs. Critical of existential-risk framing as a distraction from real harms disproportionately affecting marginalized communities. Advocates for community-centered AI governance independent of corporate influence.


Timothée Lacroix
Nationality
French
Organization
Mistral AI
Co-founder & CTO, Mistral AI
BSc, Computer Science — École Normale Supérieure, Paris
MSc, Computer Science — Paris-Saclay University
PhD, Computer Science — École des Ponts ParisTech
European tech sovereignty advocate
Supports open-weight model releases as a mechanism for collective safety research. Believes in building practical, efficient models rather than racing to the largest scale.


Timothy Lillicrap
Nationality
Canadian
Organization
Google DeepMind
Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tim Rocktäschel
Organization
Google DeepMind
Director, Principal Scientist, and Open-Endedness Team Lead, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tim Salimans
Organization
Google / Google DeepMind
Machine Learning Research Scientist across Google and Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Toby Ord
Nationality
Australian-British
Organization
University of Oxford
Senior Researcher, Oxford AI Governance Initiative
BSc, Computer Science — University of Melbourne
DPhil, Philosophy — University of Oxford
Effective altruist, existential risk-focused
Ranks unaligned AI as the highest existential risk facing humanity. Advocates for treating AI safety as a civilizational priority on par with nuclear non-proliferation. Supports strong international governance frameworks.


Toby Pohlen
Organization
xAI
Co-founder (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tomas Mikolov
Nationality
Czech
Organization
BottleCap AI / Czech Technical University
Co-Founder, BottleCap AI; Researcher, Czech Technical University in Prague
PhD, Computer Science — Brno University of Technology
Independent researcher
Skeptical of current LLM approaches to intelligence. Interested in more fundamental, mathematically grounded approaches to AI.


Tom Cunningham
Organization
OpenAI
Data Scientist
Policy and governance operator
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Tomer Kaftan
Organization
OpenAI
Inference Infrastructure & Deployment Lead
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Tom Mitchell
Nationality
American
Organization
Carnegie Mellon University
Founders University Professor, Carnegie Mellon University
PhD, Electrical Engineering — Stanford University
Academic centrist
Advocates for responsible AI development with emphasis on transparency and education. Believes AI should augment human capabilities, particularly in education.


Travis Pepper
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Trevor Darrell
Nationality
American
Organization
UC Berkeley
Professor of Computer Science, UC Berkeley; Co-Director of BAIR; Faculty Director of PATH
BSE, Computer Science — University of Pennsylvania
SM, Media Arts & Sciences — Massachusetts Institute of Technology
PhD, Media Arts & Sciences — Massachusetts Institute of Technology
Academic pragmatist
Advocates for explainable and interpretable AI systems. Focuses on trustworthy computer vision.


Tulsee Doshi
Organization
Google DeepMind
Senior Director and Head of Product, Gemini Model, Google DeepMind
Safety-aligned researcher
Publicly associated with responsible AI and product-level safeguards for Gemini.


Tyler Neylon
Organization
Anthropic
Research contributor
Safety-aligned researcher
Works within Anthropic's public safety-first and research-driven framework.


Tyna Eloundou
Organization
OpenAI
Member of Technical Staff / Research Scientist
Policy and governance operator
Publicly associated with safety evaluations, democratic inputs, and economic-impact research.


Uday Ruddaraju
Organization
X / xAI
Engineering leader at xAI (reported)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Vahid Kazemi
Organization
xAI
Member of Technical Staff (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Vibhu Mittal
Nationality
Indian-American
Organization
Inflection AI
CTO, Inflection AI
PhD, Computer Science — University of Southern California
Research-focused technologist
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Victoria Krakovna
Nationality
Russian-Canadian
Organization
Google DeepMind
Research Scientist, Google DeepMind
BS, Statistics and Mathematics — University of Toronto
MS, Statistics — University of Toronto
PhD, Statistics and Machine Learning — Harvard University
AI Safety Advocate
Strong advocate for AI safety research. Co-founded the Future of Life Institute to mitigate existential risks from advanced technology. Works on technical alignment to ensure AI systems behave as intended.


Vijaye Raji
Organization
OpenAI
CTO of Applications
Frontier lab operator
Official materials tie his role to product integrity and core systems.


Vitaly Gudanets
Organization
Anthropic
Chief Information Security Officer
Policy and governance operator
Supports Anthropic's security- and reliability-focused deployment posture.


Vitchyr Pong
Organization
OpenAI
Safety-aligned researcher
Public profile centers on safety, evaluation, and reliability work around advanced AI systems.


Vivek Natarajan
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Volodymyr Mnih
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Wei Xia
Organization
Google DeepMind
Researcher and Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Winston Weinberg
Nationality
American
Organization
Harvey AI
Co-Founder & CEO, Harvey AI
BA, Liberal Arts — Kenyon College
JD, Law — USC Gould School of Law
Capital allocator
Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.


Wyatt Thompson
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Xingchen Wan
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Xingyou Song
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


YaGuang Li
Organization
Google DeepMind
Senior Staff Research Engineer, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yann Dubois
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yann LeCun
Nationality
French
Organization
AMI Labs
Founder & Executive Chairman, AMI Labs (Advanced Machine Intelligence)
PhD, Computer Science — Pierre and Marie Curie University
Open-source AI advocate
Skeptical of existential-risk framing. Believes AI safety concerns are overblown and that open-source development is the safest path.


Yasmin Razavi
Organization
Anthropic
Board Member
Policy and governance operator
Governance role supporting Anthropic's public-benefit mission.


Yejin Choi
Nationality
South Korean-American
Organization
Stanford University
Dieter Schwarz Foundation HAI Professor and Professor of Computer Science, Stanford University
BS, Computer Engineering — Seoul National University
PhD, Computer Science — Cornell University
Thoughtful centrist on AI policy
Advocates for building AI systems that understand human values and commonsense norms. Concerned about the gap between language fluency and actual understanding in LLMs.


Yinxiao Li
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yong Cheng
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yuhuai (Tony) Wu
Organization
xAI
Co-founder (departed 2026)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Yushi Wang
Organization
OpenAI
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zack Lee
Organization
Anthropic
Education / technical support contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Zhaohan Dong
Organization
xAI
SDK/cookbook contributor (xai-org)
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zhen Qin
Organization
Google DeepMind
Staff Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zhiqing Sun
Organization
OpenAI
Research Lead, Deep Research
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zhiwei Deng
Organization
Google DeepMind
Research Scientist, Google DeepMind
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zicheng Zhou
Organization
xAI
Member of Technical Staff
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zihang Dai
Organization
xAI
Co-founder
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.


Zoe Ludwig
Organization
Anthropic
Education research contributor
Safety-aligned researcher
Works within Anthropic's privacy-preserving, safety-first research framework.


Zoubin Ghahramani
Organization
Google / Google DeepMind
VP of Research, Google; member of Google DeepMind research leadership
Frontier lab operator
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.