Intelligence Dossier

THE PLAYERS

An exhaustive directory of the individuals shaping the race to Artificial General Intelligence — lab CEOs, top researchers, policy makers, and key figures.

618 SUBJECTS ON FILE

Showing 618 of 618 subjects

🇺🇸
Dario Amodei

Dario Amodei

Nationality
American

Organization
Anthropic

Position

Co-Founder & CEO, Anthropic

Expertise
AI SafetyLarge Language ModelsBiophysicsMachine LearningAI Policy
Education

PhD, BiophysicsPrinceton University

BA, PhysicsStanford University

Alignment

Responsible scaling, concerned about power concentration

Safety Stance

Strong safety advocate who founded Anthropic specifically to build safer AI. Warns about "unusually painful" job disruption and concentration of power in AI companies. Maintains "red lines" on military AI applications including mass surveillance and autonomous weapons.

Notable Work
Constitutional AIClaude model familyResponsible Scaling PolicyGPT-2 and GPT-3 (at OpenAI)Scaling Laws for Neural Language Models
CEOLab LeaderSafetyFounderFrontier Lab LeaderResearcherPolicy
@DarioAmodeiWebsiteLinkedIn
VIEW DOSSIER →
John Schulman

John Schulman

Organization
Thinking Machines Lab

Position

Chief Scientist

Expertise
Reinforcement Learning
Education

BSCaltech

PhDUniversity of California

Alignment

AI Safety Advocate

Safety Stance

Schulman has expressed a strong commitment to AI alignment, aiming to ensure that AI systems are aligned with human values and goals. He joined Anthropic in 2024 to deepen his focus on this area, stating his desire to 'deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work.' In 2025, he joined Thinking Machines Lab as Chief Scientist, continuing his work in AI alignment.

Notable Work
Proximal Policy Optimization (PPO)Reinforcement Learning from Human Feedback (RLHF)
Reinforcement LearningFounderFrontier Lab LeaderAcademicResearcherSafetySystems
@johnschulman2Website
VIEW DOSSIER →
🇹🇼🇺🇸
Mark Chen

Mark Chen

Nationality
Taiwanese-American

Organization
OpenAI

Position

Chief Research Officer

Expertise
Multimodal AICode GenerationReasoning ModelsComputer VisionFrontier Lab Leader
Education

BS, Mathematics with Computer ScienceMassachusetts Institute of Technology

Alignment

Product-focused technologist

Safety Stance

Supports responsible development. Focused on ensuring safety is integrated into the research and product pipeline at OpenAI.

Notable Work
DALL-ECodexGPT-4 Visiono1 reasoning modelso3 reasoning models
Frontier Lab LeaderResearcherSafetyInvestorProduct
@markchen90WebsiteLinkedIn
VIEW DOSSIER →
Alec Radford

Alec Radford

Organization
Thinking Machines Lab

Position

Advisor

Expertise
GANsGPTCLIP
Education

BSOlin College

Alignment

Research-focused

Safety Stance

Radford emphasizes the importance of transparency and accountability in AI development, advocating for systems that are fair and unbiased. He promotes proactive risk management to ensure the ethical use of AI technology.

Notable Work
Generative Pre-trained Transformers (GPT)Whisper speech recognition modelDALL·E image generation model
GANsGPTCLIPFounderFrontier Lab LeaderResearcherSystems
@RadfordalecWebsiteLinkedIn
VIEW DOSSIER →
Jared Kaplan

Jared Kaplan

Organization
Anthropic

Position

Chief Science Officer and Co-Founder

Expertise
Scaling LawsConstitutional AIInterpretability
Education

BSStanford University

PhDHarvard University

Alignment

AI Safety Advocate

Safety Stance

Kaplan emphasizes the importance of responsible scaling policies to ensure AI systems are developed safely and beneficially. He has been instrumental in implementing Anthropic's Responsible Scaling Policy, which aims to align AI systems with human values and prevent misuse.

Notable Work
Scaling Laws for Neural Language ModelsLanguage Models are Few-Shot LearnersResponsible Scaling Policy at Anthropic
Scaling LawsConstitutional AIInterpretabilityFounderFrontier Lab LeaderAcademicResearcherSafety
@jared_kaplanWebsite
VIEW DOSSIER →
🇳🇿
Shane Legg

Shane Legg

Nationality
New Zealander

Organization
Google DeepMind

Position

Chief AGI Scientist

Expertise
Artificial General IntelligenceMachine Intelligence TheoryAI SafetyReinforcement LearningAGI
Education

BS, Computer ScienceUniversity of Waikato

MS, Computer ScienceUniversity of Auckland

PhD, Machine Super IntelligenceIDSIA / Università della Svizzera italiana

Alignment

Cautious about AGI timelines

Safety Stance

Deeply committed to AGI safety. Has consistently warned about existential risk since before founding DeepMind. Leads DeepMind's AGI safety efforts. Believes AGI is approaching and safety work is urgent.

Notable Work
Machine Super Intelligence (PhD thesis)Universal Intelligence (formal measure)AIXI (co-work with Hutter)DeepMind founding research agenda
AGILab LeaderFounderFrontier Lab LeaderAcademicResearcherSafetySystems
@ShaneLeggWebsiteLinkedIn
VIEW DOSSIER →
🇺🇸
Jeff Dean

Jeff Dean

Nationality
American

Organization
Google

Position

Chief Scientist, Google DeepMind and Google Research

Expertise
Distributed SystemsMachine LearningComputer ArchitectureLarge-Scale ComputingAI InfrastructureBig Tech AI Leader
Education

PhD, Computer ScienceUniversity of Washington

BS, Computer Science and EconomicsUniversity of Minnesota

Alignment

Pro-innovation within responsible guardrails

Safety Stance

Supports responsible AI development within Google's framework. Believes in need for "algorithmic breakthroughs" alongside scaling. Advocates for internal safety teams and external collaboration on AI governance.

Notable Work
MapReduceBigTableTensorFlowGoogle BrainSpannerLLM scaling
Big Tech AI LeaderResearcherLab LeaderSafetyInvestorPolicySystems
@JeffDeanWebsiteLinkedIn
VIEW DOSSIER →
🇵🇱
Jakub Pachocki

Jakub Pachocki

Nationality
Polish

Organization
OpenAI

Position

Chief Scientist

Expertise
Deep LearningReinforcement LearningTheoretical Computer ScienceReasoning ModelsFrontier Research Leader
Education

BS, Computer ScienceUniversity of Warsaw

PhD, Theoretical Computer ScienceCarnegie Mellon University

Alignment

Research-focused

Safety Stance

Supports safety-focused research. Believes AI models are capable of novel research and emphasizes the importance of understanding model capabilities and limitations.

Notable Work
GPT-4OpenAI Five (Dota 2)o1 reasoning modelo3 reasoning modelLarge-scale RL optimization
Frontier Research LeaderResearcherFrontier Lab LeaderAcademicSafety
@jbpachockiWebsiteLinkedIn
VIEW DOSSIER →
🇬🇧🇨🇦
Geoffrey Hinton

Geoffrey Hinton

Nationality
British-Canadian

Organization
University of Toronto

Position

University Professor Emeritus, University of Toronto

Expertise
Deep LearningNeural NetworksBackpropagationTuring AwardNobel Laureate
Education

PhD, Artificial IntelligenceUniversity of Edinburgh

Alignment

AI Safety Advocate

Safety Stance

Deeply concerned about existential risk. Says he is "more worried" now than when he left Google in 2023. Warns AI is getting better at reasoning and deception. Advocates for regulation and international coordination.

Notable Work
BackpropagationBoltzmann MachinesCapsule NetworksDeep Belief Networks
ResearcherTuring AwardNobel LaureateSafetyLab LeaderAcademicPolicy
@geoffreyhintonWebsiteLinkedIn
VIEW DOSSIER →
🇨🇦
Christopher Olah

Christopher Olah

Nationality
Canadian

Organization
Anthropic

Position

Co-founder, Anthropic

Expertise
Neural Network InterpretabilityMechanistic InterpretabilityVisualizationAI SafetyInterpretability
Education

Attended (no degree), Computer ScienceUniversity of Toronto

Alignment

AI Safety Advocate

Safety Stance

Deeply committed to AI safety through interpretability. Believes understanding what happens inside neural networks is critical for making AI safe. His work is the foundation of Anthropic's safety research agenda.

Notable Work
Neural network feature visualizationCircuits in neural networksUnderstanding LSTM NetworksMechanistic interpretability of ClaudeGolden Gate Bridge neuron
InterpretabilityResearcherFounderCEOFrontier Lab LeaderAcademicSafety
@ch402WebsiteLinkedIn
VIEW DOSSIER →
Noam Brown

Noam Brown

Organization
OpenAI

Position

Member of Technical Staff

Expertise
reasoning systemsmulti-agent learningstrategic AIreasoning-researchmember-of-technical-staffmulti-agent
Education

PhDComputer Science, Carnegie Mellon University

Alignment

Safety-aligned researcher

Safety Stance

Publicly associated more with reasoning capability research than formal safety leadership.

Notable Work
Reasoning modelsmulti-agent reasoningdeep research contributions
reasoning-researchmember-of-technical-staffmulti-agentAcademicResearcherSafetySystems
@polynoamialWebsiteLinkedIn
VIEW DOSSIER →
🇺🇸
Paul Christiano

Paul Christiano

Nationality
American

Organization
US AI Safety Institute (NIST)

Position

Head of AI Safety

Expertise
AI AlignmentReinforcement Learning from Human FeedbackAI GovernanceAI Safety EvaluationAI Safety
Education

BS, MathematicsMassachusetts Institute of Technology

PhD, Statistical Learning TheoryUniversity of California, Berkeley

Alignment

AI Safety / Effective Altruism

Safety Stance

One of the strongest voices for AI existential risk. Believes there is a significant probability of catastrophic outcomes from advanced AI. Advocates for robust safety evaluations, interpretability, and governance. Now leads US government AI safety evaluation efforts.

Notable Work
Reinforcement Learning from Human Feedback (RLHF)Iterated Distillation and AmplificationAI alignment theoryEliciting Latent Knowledge
AI SafetyResearcherFounderLab LeaderSafetyPolicy
@paulfchristianoWebsiteLinkedIn
VIEW DOSSIER →
Julian Schrittwieser

Julian Schrittwieser

Organization
Anthropic

Position

Member of Technical Staff

Expertise
AlphaGoAlphaZeroMuZero
Education

BSVienna University of Technology

Alignment

Research-focused

Safety Stance

Julian has expressed a commitment to developing AI thoughtfully to maximize benefits and manage risks.

Notable Work
AlphaGoAlphaZeroMuZeroAlphaCodeAlphaTensorAlphaProof
AlphaGoAlphaZeroMuZeroResearcher
@MononofuWebsite
VIEW DOSSIER →
🇺🇸
Sergey Levine

Sergey Levine

Nationality
American

Organization
UC Berkeley / Physical Intelligence

Position

Associate Professor, UC Berkeley; Co-founder, Physical Intelligence

Expertise
Deep Reinforcement LearningRoboticsRobot LearningOffline RL
Education

BS/MS, Computer ScienceStanford University

PhD, Computer ScienceStanford University

Alignment

Research-focused

Safety Stance

Believes in building general-purpose robotic intelligence through scalable learning. Focuses on making robot learning practical and sample-efficient.

Notable Work
End-to-End Training of Deep Visuomotor PoliciesSoft Actor-Critic (SAC)Offline Reinforcement LearningQT-OptHiL-SERLPi-0 (Physical Intelligence)
RoboticsAcademicResearcherFounderLab Leader
@svlevineWebsiteLinkedIn
VIEW DOSSIER →
Andrew Tulloch

Andrew Tulloch

Organization
Meta

Position

Distinguished Engineer at Meta

Expertise
PyTorchInferenceQuantizationSystems
Education

BSUniversity of Sydney

Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4 pretrainingGPT-4.5 pretrainingPyTorch development
PyTorchInferenceQuantizationSystemsAcademicResearcher
Website
VIEW DOSSIER →
Tom Brown

Tom Brown

Organization
Anthropic

Position

Co-founder and Chief Compute Officer

Expertise
GPT-3Scaling LawsRLHF
Education

BSMIT

Alignment

AI Safety Advocate

Safety Stance

Anthropic emphasizes AI safety, aiming to develop AI systems that are both beneficial and aligned with human values.

Notable Work
GPT-3Scaling Laws for Neural Language ModelsConstitutional AI
GPT-3Scaling LawsRLHFFounderCEOFrontier Lab LeaderResearcherSafety
@t0mbWebsiteLinkedIn
VIEW DOSSIER →
Nat McAleese

Nat McAleese

Organization
Anthropic

Position

Researcher

Expertise
Scalable OversightLLM Critic ModelsRLHF
Education

BSUniversity of Cambridge

PhDUniversity of Cambridge

Alignment

AI Safety Advocate

Safety Stance

Committed to ensuring AI systems are safe and aligned with human values, as evidenced by his work on AI safety benchmarks and involvement in AI safety discussions.

Notable Work
AI Safety Benchmark v0.5AI Safety for Fleshy Humans: a whirlwind tour
Scalable OversightLLM Critic ModelsRLHFResearcherSafetySystems
@__nmca__Website
VIEW DOSSIER →
🇨🇦
Andrej Karpathy

Andrej Karpathy

Nationality
Slovak-Canadian

Organization
Eureka Labs

Position

Founder, Eureka Labs

Expertise
Deep LearningComputer VisionNatural Language ProcessingAI Education
Education

BS, Computer Science and PhysicsUniversity of Toronto

MS, Computer ScienceUniversity of British Columbia

PhD, Computer ScienceStanford University

Alignment

Open-source AI advocate

Safety Stance

Pragmatic centrist. Acknowledges risks but believes open education and understanding of AI internals is the best safety strategy. Skeptical of heavy-handed regulation.

Notable Work
ImageNet classificationCS231n (Stanford deep learning course)Tesla Autopilot vision stackminGPTnanoGPTNeural Networks: Zero to Hero
Technical CommentatorEducatorResearcherFounderLab LeaderSafetyPolicyProduct
@karpathyWebsiteLinkedIn
VIEW DOSSIER →
Jerry Tworek

Jerry Tworek

Organization
OpenAI

Expertise
reasoning systemsresearch leadershipfrontier modelsreasoning-researchresearch-leadershipneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research leadershipOpenAI o1 leadership
reasoning-researchresearch-leadershipneeds-reviewResearcherSystems
@MillionIntWebsiteLinkedIn
VIEW DOSSIER →
Igor Babuschkin

Igor Babuschkin

Organization
Babuschkin Ventures

Position

Founder and CEO

Expertise
Deep Reinforcement LearningLarge-Scale Training
Education

BSTU Dortmund

Alignment

AI Safety Advocate

Safety Stance

Igor Babuschkin has publicly emphasized the importance of AI safety, particularly as AI systems become more capable and agentic. He has expressed concerns about the need to study and advance AI safety to ensure that technology benefits humanity. In his departure from xAI, he stated his commitment to building AI that advances humanity, highlighting his dedication to responsible AI development.

Notable Work
DeepMind's AlphaStarxAI's Grok chatbot
Deep Reinforcement LearningLarge-Scale TrainingFounderCEOResearcherSafetyInvestorSystems
@ibabWebsiteLinkedIn
VIEW DOSSIER →
🇳🇱
Diederik P. Kingma

Diederik P. Kingma

Nationality
Dutch

Organization
Anthropic

Position

Research Scientist, Anthropic

Expertise
Generative ModelsOptimizationVariational InferenceDiffusion Models
Education

PhD (cum laude), Machine LearningUniversity of Amsterdam

Alignment

Research-focused

Safety Stance

Joined Anthropic, a safety-focused lab, suggesting alignment with responsible AI development. Works on improving the reliability and capability of large-scale ML systems.

Notable Work
Adam optimizerVariational Autoencoders (VAEs)GlowVariational Diffusion Models
Generative ModelsResearcherSafetySystems
@dpkingmaWebsiteLinkedIn
VIEW DOSSIER →
🇬🇧
David Silver

David Silver

Nationality
British

Organization
Ineffable Intelligence

Position

CEO & Founder

Expertise
Reinforcement LearningGame AIPlanningSelf-play
Education

BA, Computer ScienceUniversity of Cambridge

MA, Computer ScienceUniversity of Cambridge

PhD, Reinforcement LearningUniversity of Alberta

Alignment

Research-focused

Safety Stance

Believes in building safe superintelligence through self-play and self-discovery rather than relying solely on human feedback. Argues LLMs alone will not reach superintelligence.

Notable Work
AlphaGoAlphaZeroAlphaStarUCT algorithm (Monte Carlo tree search)Deep reinforcement learning (DQN co-author)
RLResearcherFounderCEOAcademicInvestor
@david_silverWebsiteLinkedIn
VIEW DOSSIER →
🇻🇳🇺🇸
Quoc V. Le

Quoc V. Le

Nationality
Vietnamese-American

Organization
Google DeepMind

Position

Google Fellow, Google DeepMind

Expertise
Deep LearningAutoMLNeural Architecture SearchFoundation Models
Education

BSc, Computer ScienceAustralian National University

PhD, Computer ScienceStanford University

Alignment

Research-focused

Safety Stance

Focuses on making AI models more efficient and accessible. Works within Google's responsible AI framework.

Notable Work
Sequence-to-Sequence Learning (seq2seq)Neural Architecture Search (NAS)EfficientNetLarge-scale unsupervised learning
Foundation ModelsResearcher
@quocleixWebsiteLinkedIn
VIEW DOSSIER →
Wenda Zhou

Wenda Zhou

Organization
OpenAI

Position

Researcher at OpenAI

Expertise
reasoning modelsoperator systemsopenai o1reasoning-researchagentsneeds-review
Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
OpenAI o1Operator
reasoning-researchagentsneeds-reviewAcademicResearcherSystems
@zhouwendaWebsite
VIEW DOSSIER →
🇧🇪🇺🇸
Pieter Abbeel

Pieter Abbeel

Nationality
Belgian-American

Organization
UC Berkeley / Amazon

Position

Professor, UC Berkeley; Head of LLM efforts, Amazon AGI

Expertise
RoboticsReinforcement LearningDeep LearningFoundation Models for Robotics
Education

BS/MS, Electrical EngineeringKU Leuven

PhD, Computer ScienceStanford University

Alignment

Pragmatic technologist

Safety Stance

Focuses on building robust and reliable AI systems. Believes foundation models are the key to general-purpose robotics.

Notable Work
Apprenticeship LearningAutonomous Helicopter AerobaticsDeep Reinforcement Learning for RoboticsRobotics Foundation Models (Covariant)
RoboticsAcademicFounderLab LeaderResearcherSystems
@pabbeelWebsiteLinkedIn
VIEW DOSSIER →
Tristan Hume

Tristan Hume

Organization
Anthropic

Position

Performance Optimization Lead

Expertise
InterpretabilityPerformance Optimization
Education

BSUniversity of Waterloo

Alignment

AI Safety Advocate

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Designing AI-resistant technical evaluations
InterpretabilityPerformance OptimizationFounderFrontier Lab LeaderResearcherSafetySystemsOpen Source
@trishumeWebsite
VIEW DOSSIER →
Horace He

Horace He

Organization
Thinking Machines Lab

Position

Researcher

Expertise
PyTorchKernel Auto-TuningTraining Efficiency
Education

BSCornell University

Alignment

AI Infrastructure Advocate

Safety Stance

Horace He's work focuses on enhancing the reliability and predictability of AI models, aiming to mitigate issues like nondeterminism in large language models.

Notable Work
PyTorch DevelopmentKernel Auto-Tuning TechniquesTraining Efficiency Improvements
PyTorchKernel Auto-TuningTraining EfficiencyResearcherSystems
@cHHilleeWebsite
VIEW DOSSIER →
Sebastian Borgeaud

Sebastian Borgeaud

Organization
Google DeepMind

Position

Research Engineer

Expertise
Retrieval-Augmented LMsChinchillaFlamingo
Education

BSUniversity of Cambridge

PhDUniversity of Cambridge

Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChinchillaFlamingoRETROGopherPerceiver
Retrieval-Augmented LMsChinchillaFlamingoAcademicResearcher
@borgeaud_sWebsiteLinkedIn
VIEW DOSSIER →
Alexander Kirillov

Alexander Kirillov

Organization
Thinking Machines Lab

Position

Member of Technical Staff at Thinking Machines Lab

Expertise
Segment Anything Models
Education

BSLomonosov Moscow State University

PhDHeidelberg University

Alignment

Research-focused

Safety Stance

Thinking Machines Lab, where Kirillov is a member, emphasizes AI safety by maintaining a high safety bar, sharing best practices, and accelerating external research on alignment.

Notable Work
Segment Anything Model (SAM)Detectron2Advanced Voice ModeGPT-4o
Segment Anything ModelsAcademicResearcherSafety
@alexrkirillovWebsite
VIEW DOSSIER →
🇺🇸
Chelsea Finn

Chelsea Finn

Nationality
American

Organization
Stanford University / Physical Intelligence

Position

Assistant Professor, Stanford University; Co-founder, Physical Intelligence

Expertise
Meta-LearningRoboticsFew-Shot LearningRobot Learning
Education

BS, Electrical Engineering and Computer ScienceMIT

PhD, Computer ScienceUC Berkeley

Alignment

Research-focused

Safety Stance

Focuses on building robust, generalizable robot learning systems. Researches how to make robots learn safely from limited data and human demonstrations.

Notable Work
Model-Agnostic Meta-Learning (MAML)Few-Shot LearningPi-0 (Physical Intelligence)Hi RobotSRT-H (Autonomous Surgery)RoboReward
RoboticsAcademicResearcherFounderLab LeaderSystems
@chelseabfinnWebsiteLinkedIn
VIEW DOSSIER →
Alexander Kolesnikov

Alexander Kolesnikov

Organization
Meta

Position

AI Research Scientist

Expertise
Vision TransformersSelf-Supervised RL
Education

BSLomonosov Moscow State University

PhDISTA

Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemma 3 technical reportJet: A Modern Transformer-Based Normalizing FlowJetFormer: An Autoregressive Generative Model of Raw Images and TextPaligemma: A versatile 3b vlm for transfer
Vision TransformersSelf-Supervised RLAcademicResearcher
@__kolesnikov__Website
VIEW DOSSIER →
🇨🇦
Yoshua Bengio

Yoshua Bengio

Nationality
Canadian

Organization
LawZero / Mila

Position

Co-President & Scientific Director, LawZero; Founder & Scientific Advisor, Mila

Expertise
Deep LearningNatural Language ProcessingGenerative ModelsAI SafetyTuring Award
Education

PhD, Computer ScienceMcGill University

Alignment

AI Safety Advocate

Safety Stance

The most safety-focused of the Turing trio. Launched LawZero to build safe-by-design AI. Warns that frontier models show growing dangerous capabilities including deception and goal misalignment. Pushes for international governance frameworks.

Notable Work
Generative Adversarial Networks (co-author)Attention MechanismsNeural Machine TranslationGFlowNetsScientist AI (LawZero)
ResearcherTuring AwardSafetyFounderAcademicLab LeaderPolicy
@Yoshua_BengioWebsiteLinkedIn
VIEW DOSSIER →
Nick Ryder

Nick Ryder

Organization
OpenAI

Position

Member of Technical Staff

Expertise
audio modelsexecutive leadership supportfrontier-model sponsorshipresearch-leadershipaudio-modelsneeds-review
Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
next-generation audio modelsGPT-4.5 executive leadership
research-leadershipaudio-modelsneeds-reviewAcademicResearcher
@nickryder32Website
VIEW DOSSIER →
Lukasz Kaiser

Lukasz Kaiser

Organization
OpenAI

Position

Member of Technical Staff at OpenAI

Expertise
TransformersContext
Education

BSUniversity of Wroclaw

PhDRWTH Aachen University

Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Co-authored 'Attention Is All You Need' (2017)Led development of O1 reasoning models at OpenAIServed as long-context lead for GPT-4
TransformersContextAcademicResearcher
@lukaszkaiserWebsiteLinkedIn
VIEW DOSSIER →
Lilian Weng

Lilian Weng

Organization
Fellows Fund

Position

Distinguished Fellow

Expertise
safety systemsalignmentevaluationsafety-governancetechnical-safety
Alignment

AI Safety Advocate

Safety Stance

OpenAI's public materials place her at the core of technical safety systems work.

Notable Work
Safety systemsalignment and evaluation work across GPT-era models
safety-governancealignmenttechnical-safetyLab LeaderResearcherSafetyInvestorPolicy
@lilianwengWebsiteLinkedIn
VIEW DOSSIER →
Alexander Wei

Alexander Wei

Organization
OpenAI

Position

Research Scientist

Expertise
ReasoningIMOSafety
Education

BSHarvard University

PhDUniversity of California

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
CICERO: First human-level AI for DiplomacyJailbroken: How Does LLM Safety Training Fail?
ReasoningIMOSafetyResearcher
@alexwei_WebsiteLinkedIn
VIEW DOSSIER →
Deli Chen

Deli Chen

Organization
DeepSeek

Position

Senior Researcher

Expertise
DeepSeekReasoningGraph Neural Nets
Education

BSPeking University

Alignment

AI Safety Advocate

Safety Stance

Deli Chen has publicly warned about AI risks, emphasizing the need for responsible development and deployment of AI technologies.

Notable Work
DeepSeek-R1 modelDeepSeek V3.1 modelDeepSeek V3.2-Exp model
DeepSeekReasoningGraph Neural NetsFounderLab LeaderResearcherSafetyProduct
@victor207755822Website
VIEW DOSSIER →
Hunter Lightman

Hunter Lightman

Organization
OpenAI

Position

Researcher

Expertise
deep researchreasoning systemsfrontier modelsreasoning-researchfrontier-modelsneeds-review
Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchOpenAI o1
reasoning-researchfrontier-modelsneeds-reviewAcademicResearcherSystems
@hunterlightmanWebsite
VIEW DOSSIER →
Robert Lasenby

Robert Lasenby

Organization
Anthropic

Position

Researcher

Expertise
Mechanistic InterpretabilitySuperposition Theory
Education

BSUniversity of Cambridge

PhDUniversity of Oxford

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Dark matter from the Fraternal Twin HiggsNuclear Dark Matter - Synthesis and PhenomenologyObserving Superradiance of Light Vector Particles
Mechanistic InterpretabilitySuperposition TheoryResearcherSafety
VIEW DOSSIER →
Zhihong Shao

Zhihong Shao

Organization
DeepSeek

Position

Research Scientist

Expertise
Reinforcement LearningMoEDeepSeekMixture of Experts (MoE)Artificial Intelligence
Education

BSTsinghua University

PhDTsinghua University

Alignment

Research-focused

Safety Stance

DeepSeek has acknowledged safety vulnerabilities in its models and has undertaken evaluations and enhancements to address these issues, particularly in Chinese contexts. This reflects a proactive approach to improving AI safety within their systems.

Notable Work
DeepSeek-V3DeepSeek-Coder-V2DeepSeek-Prover-V2DeepSeekMath-V2: Towards Self-Verifiable Mathematical ReasoningTowards Understanding the Safety Boundaries of DeepSeek Models: Evaluation and Findings
Reinforcement LearningMoEDeepSeekFounderLab LeaderResearcherSafetySystems
@zshao5Website
VIEW DOSSIER →
Timothy P. Lillicrap

Timothy P. Lillicrap

Organization
Google DeepMind

Position

Staff Research Scientist

Expertise
RLRecurrent MemoryAlphaGoReinforcement Learning
Education

BSUniversity of Toronto

PhDQueen's University

Alignment

Research-focused

Safety Stance

Timothy P. Lillicrap has not publicly stated a position on AI safety policies.

Notable Work
Mastering the game of Go with deep neural networks and tree searchContinuous control with deep reinforcement learningRecurrent Models of Visual AttentionDeterministic Policy Gradient AlgorithmsEmergence of Locomotion Behaviours in Rich Environments
RLRecurrent MemoryAlphaGoAcademicResearcherSafetySystems
Website
VIEW DOSSIER →
Prafulla Dhariwal

Prafulla Dhariwal

Organization
OpenAI

Position

Technical Fellow

Expertise
multimodal modelsimage generationaudio-visual systemsmultimodal-researchresearch-leadershipimage-generation
Alignment

Pragmatic technologist

Safety Stance

Primarily capability-focused public role with multimodal deployment responsibilities.

Notable Work
4o Image Generationmultimodal model leadershipaudio and image systems
multimodal-researchresearch-leadershipimage-generationFrontier Lab LeaderResearcherSystemsProduct
@prafdharWebsite
VIEW DOSSIER →
🇺🇸
Dan Hendrycks

Dan Hendrycks

Nationality
American

Organization
Center for AI Safety (CAIS)

Position

Executive Director, Center for AI Safety

Expertise
AI SafetyMachine Learning RobustnessAI BenchmarksAI Governance
Education

BS, Computer ScienceUniversity of Chicago

PhD, Computer ScienceUniversity of California, Berkeley

Alignment

AI Safety Advocate

Safety Stance

Leading voice on AI existential risk. Believes advanced AI poses catastrophic and existential risks to humanity. Advocates for proactive safety research, robust evaluations, and governance frameworks.

Notable Work
GELU activation functionMMLU benchmarkAI Safety benchmarksIntroduction to AI Safety, Ethics, and Society (textbook)
AI SafetyResearcherLab LeaderSafetyPolicy
@DanHendrycksWebsiteLinkedIn
VIEW DOSSIER →
Amanda Askell

Amanda Askell

Organization
Anthropic

Position

Member of Technical Staff

Expertise
AlignmentFine-tuningEthics
Education

BSUniversity of Dundee

PhDNew York University

Alignment

AI Safety Advocate

Safety Stance

Amanda Askell has publicly advocated for AI safety and alignment, emphasizing the importance of ensuring AI systems are aligned with human values and safety considerations. She has argued that designing ethical AI requires humility rather than rigid certainty, and that AI systems should be capable of weighing competing considerations and explaining their reasoning rather than simply following strict rules.

Notable Work
"Learning to Summarize with Human Feedback" (2022)"Training language models to follow instructions with human feedback" (2022)"Ensuring the Safety of Artificial Intelligence" (2021)"AI Safety Needs Social Scientists" (2019)
AlignmentFine-tuningEthicsAcademicResearcherSafetyPolicySystems
@amandaaskellWebsiteLinkedIn
VIEW DOSSIER →
🇨🇦
Jimmy Ba

Jimmy Ba

Nationality
Canadian

Organization
University of Toronto / Vector Institute

Position

Assistant Professor, University of Toronto; CIFAR AI Chair, Vector Institute

Expertise
OptimizationDeep LearningReinforcement LearningEfficient LearningML Theory
Education

BSc, Computer ScienceUniversity of Toronto

MSc, Computer ScienceUniversity of Toronto

PhD, Computer ScienceUniversity of Toronto

Alignment

Academic

Safety Stance

Academic focus on building more reliable and efficient learning algorithms. Contributes to the Canadian AI ecosystem through CIFAR and Vector Institute.

Notable Work
Adam optimizerLayer NormalizationLookahead optimizerEfficient deep learning
ML TheoryAcademicResearcher
@jimmybajimmybaWebsiteLinkedIn
VIEW DOSSIER →
Mostafa Dehghani

Mostafa Dehghani

Organization
Google DeepMind

Position

Research Scientist

Expertise
Vision TransformersScalingSupervised Learning
Education

BSUniversity of Tehran

PhDUniversity of Amsterdam

Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Scaling Vision Transformers to 22 Billion ParametersFractal Patterns May Unravel the Intelligence in Next-Token PredictionFrozen Feature Augmentation for Few-Shot Image Classification
Vision TransformersScalingSupervised LearningAcademicResearcher
@m__dehghaniWebsite
VIEW DOSSIER →
Shengjia Zhao

Shengjia Zhao

Organization
Meta Platforms Inc.

Position

Chief Scientist of Meta Superintelligence Labs

Expertise
reasoning modelsaudio modelso1-family systemsreasoning-researchaudio-modelsneeds-review
Alignment

AI Safety Advocate

Safety Stance

Zhao has been instrumental in developing AI models with a focus on safety and reliability, as evidenced by his work on the o1 reasoning model, which emphasizes structured and interpretable AI outputs.

Notable Work
OpenAI o1GPT-4o mininext-generation audio models
reasoning-researchaudio-modelsneeds-reviewFrontier Lab LeaderAcademicResearcherSafetySystems
VIEW DOSSIER →
Barret Zoph

Barret Zoph

Organization
OpenAI

Position

Enterprise Expansion Lead at OpenAI

Expertise
NAS-RLScaling MoE
Education

BSUSC

Alignment

Pragmatic technologist

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
NAS-RLScaling MoE
NAS-RLScaling MoEFrontier Lab LeaderResearcher
VIEW DOSSIER →
Sam McCandlish

Sam McCandlish

Organization
Anthropic

Position

Chief Architect

Expertise
Scaling LawsSafetyAnthropic
Education

BSBrandeis University

PhDStanford University

Alignment

AI Safety Advocate

Safety Stance

As a co-founder of Anthropic, McCandlish has been involved in developing AI models with a focus on safety and ethical considerations.

Notable Work
An Empirical Model of Large-Batch TrainingScaling Laws for Neural Language Models
Scaling LawsSafetyAnthropicFounderFrontier Lab LeaderAcademicResearcher
@samuelu1Website
VIEW DOSSIER →
Dan Selsam

Dan Selsam

Organization
OpenAI

Position

Researcher at OpenAI

Expertise
MathReasoningo-series
Education

BSStanford University

PhDStanford University

Alignment

Research-focused

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
OpenAI o1 modelLean theorem proverCompetitive programming with large reasoning models
MathReasoningo-seriesAcademicResearcherSystems
Website
VIEW DOSSIER →
🇩🇪
Jan Leike

Jan Leike

Nationality
German

Organization
Anthropic

Position

Head of Alignment Science

Expertise
AI AlignmentScalable OversightReinforcement Learning TheorySuperalignmentAI Safety
Education

MS, Computer ScienceUniversity of Freiburg

PhD, Reinforcement Learning TheoryAustralian National University

Alignment

AI Safety Advocate

Safety Stance

One of the most vocal alignment researchers. Left OpenAI because he felt safety was not being prioritized sufficiently. Believes alignment of superhuman AI systems is the central challenge. Cautiously optimistic that progress is being made.

Notable Work
Superalignment (OpenAI)Scalable oversightWeak-to-strong generalizationReward modeling
AI SafetyResearcherFrontier Lab LeaderSafetySystems
@janleikeWebsiteLinkedIn
VIEW DOSSIER →
Yang Song

Yang Song

Organization
OpenAI

Expertise
image generationmultimodal researchdiffusion systemsmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
4o Image Generation
multimodal-researchimage-generationneeds-reviewResearcherSystems
VIEW DOSSIER →
Tri Dao

Tri Dao

Organization
Together.AI

Position

Co-founder and Researcher

Expertise
FlashAttentionState-Space ModelsMamba
Education

BSStanford University

PhDStanford University

Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
FlashAttentionState-Space ModelsMamba
FlashAttentionState-Space ModelsMambaFounderLab LeaderResearcher
VIEW DOSSIER →
Ethan Perez

Ethan Perez

Organization
Anthropic

Position

Research Scientist

Expertise
SafetyAlignmentRed-Teaming
Education

BSRice University

PhDNew York University

Alignment

Safety-aligned researcher

Safety Stance

Ethan advocates for rigorous safety measures in AI development, emphasizing the importance of alignment with human values.

Notable Work
SafetyAlignmentRed-Teaming
SafetyAlignmentRed-TeamingResearcher
VIEW DOSSIER →
Long Ouyang

Long Ouyang

Organization
OpenAI

Expertise
instruction tuningmultimodal researchpost-trainingmultimodal-researchneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
4o Image GenerationGPT-4 data and alignment work
post-trainingmultimodal-researchneeds-reviewResearcherSafety
VIEW DOSSIER →
Jeffrey Wu

Jeffrey Wu

Organization
Anthropic

Position

Research Scientist

Expertise
Scalable OversightLatent InterpretabilityCoT
Education

BSMIT

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Scalable OversightLatent InterpretabilityCoT
Scalable OversightLatent InterpretabilityCoTResearcherSafety
VIEW DOSSIER →
🇩🇪
Jürgen Schmidhuber

Jürgen Schmidhuber

Nationality
German

Organization
KAUST / IDSIA

Position

Director of AI Initiative, KAUST; Scientific Director, Swiss AI Lab IDSIA

Expertise
Deep LearningRecurrent Neural NetworksLSTMArtificial General IntelligenceGenerative AI
Education

Diplom, Computer ScienceTechnical University of Munich

PhD, Computer ScienceTechnical University of Munich

Alignment

Accelerationist

Safety Stance

Optimistic about AI progress. Believes AI will be beneficial and that existential risk concerns are overstated. Sees AGI as inevitable and broadly positive for humanity.

Notable Work
Long Short-Term Memory (LSTM)Highway NetworksArtificial CuriosityCompressed Network SearchFast Weight Programmers
Deep LearningResearcherLab LeaderAcademic
@SchmidhuberAIWebsite
VIEW DOSSIER →
🇨🇳🇺🇸
Fei-Fei Li

Fei-Fei Li

Nationality
Chinese-American

Organization
Stanford University / World Labs

Position

Professor of Computer Science, Stanford University; Co-Founder & CEO, World Labs

Expertise
Computer VisionMachine LearningSpatial IntelligenceCognitive NeuroscienceAI for Healthcare
Education

PhD, Electrical EngineeringCalifornia Institute of Technology

BA, PhysicsPrinceton University

Alignment

Advocate for democratized and human-centered AI

Safety Stance

Believes in human-centered AI development. Advocates for AI that augments human capabilities rather than replaces them, with strong emphasis on diversity and ethical deployment.

Notable Work
ImageNetVisual RecognitionSpatial IntelligenceAI for HealthcareHuman-Centered AI
AcademicResearcherFounderCEOLab LeaderProduct
@drfeifeiWebsiteLinkedIn
VIEW DOSSIER →
Naman Goyal

Naman Goyal

Organization
Thinking Machines

Expertise
MultimodalGraph Neural Nets
Education

BSSavitribai Phule Pune University

PhDGeorgia Institute of Technology

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
MultimodalGraph Neural Nets
MultimodalGraph Neural NetsResearcher
VIEW DOSSIER →
Rowan Zellers

Rowan Zellers

Organization
Thinking Machines

Expertise
BenchmarksRLReinforcement Learning
Education

BSHarvey Mudd College

PhDUniversity of Washington

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Benchmarks
BenchmarksRLResearcher
VIEW DOSSIER →
Jonas Adler

Jonas Adler

Organization
Google DeepMind

Position

Research Scientist

Expertise
Scientific Machine LearningAlphaFold
Education

BSKTH Royal Institute of Technology

PhDKTH Royal Institute of Technology

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Scientific Machine LearningAlphaFold
Scientific Machine LearningAlphaFoldResearcher
VIEW DOSSIER →
Luke Metz

Luke Metz

Organization
Thinking Machines

Expertise
Learned OptimizersMeta-LearningIn-Context Learning
Education

BSOlin College of Engineering

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Learned OptimizersMeta-LearningIn-Context Learning
Learned OptimizersMeta-LearningIn-Context LearningResearcher
VIEW DOSSIER →
Nicholas Carlini

Nicholas Carlini

Organization
Anthropic

Position

Research Scientist

Expertise
Adversarial MLRobustnessSelf-supervised learning
Education

BSUniversity of California

PhDUniversity of California

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Adversarial MLRobustnessSelf-supervised learning
Adversarial MLRobustnessSelf-supervised learningResearcherSafety
VIEW DOSSIER →
🇺🇸
Percy Liang

Percy Liang

Nationality
American

Organization
Stanford University

Position

Associate Professor of Computer Science, Stanford University; Director, Center for Research on Foundation Models (CRFM)

Expertise
Natural Language ProcessingFoundation ModelsAI BenchmarkingMachine LearningNLP
Education

BS, Computer ScienceMassachusetts Institute of Technology

MEng, Computer ScienceMassachusetts Institute of Technology

PhD, Computer ScienceUniversity of California, Berkeley

Alignment

Transparency advocate

Safety Stance

Strong advocate for transparency, rigorous evaluation, and accountability in foundation model development. Believes standardized benchmarks are essential to understand capabilities and risks.

Notable Work
HELM (Holistic Evaluation of Language Models)Foundation Models ReportSemantic ParsingLanguage Model Transparency Index
NLPAcademicResearcherFounderLab LeaderSafetyOpen Source
@percyliangWebsiteLinkedIn
VIEW DOSSIER →
Lucas Beyer

Lucas Beyer

Organization
Meta

Position

Research Scientist

Expertise
Vision TransformersMultimodal
Education

BSRWTH Aachen University

PhDRWTH Aachen University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Vision TransformersMultimodal
Vision TransformersMultimodalResearcher
VIEW DOSSIER →
Sholto Douglas

Sholto Douglas

Organization
Anthropic

Expertise
Scaling RLThought Leadership
Education

BSUniversity of Sydney

Alignment

Frontier lab operator

Safety Stance

Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.

Notable Work
Scaling RLThought Leadership
Scaling RLThought LeadershipResearcher
VIEW DOSSIER →
Albert Gu

Albert Gu

Organization
Cartesia AI

Position

Co-founder and Researcher

Expertise
MambaState Space Models
Education

BSStanford University

PhDStanford University

Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
MambaState Space Models
MambaState Space ModelsFounderLab LeaderResearcher
VIEW DOSSIER →
Zico Kolter

Zico Kolter

Organization
Carnegie Mellon University

Position

Associate Professor, Machine Learning Department

Expertise
RobustnessSafety
Education

BSGeorgetown University

PhDStanford University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
RobustnessSafety
RobustnessSafetyAcademicResearcher
VIEW DOSSIER →
Eric Zelikman

Eric Zelikman

Organization
xAI

Expertise
ReasoningAlignment
Education

BSStanford University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
ReasoningAlignment
ReasoningAlignmentResearcherSafety
VIEW DOSSIER →
Eric Mitchell

Eric Mitchell

Organization
OpenAI

Expertise
deep researchevaluationlanguage modelsreasoning-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research
reasoning-researchevaluationneeds-reviewResearcher
VIEW DOSSIER →
Hongyu Ren

Hongyu Ren

Organization
OpenAI

Position

Research Lead

Expertise
reasoning modelsreinforcement learningfrontier-model researchreasoning-researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
o3/o3-mini reasoning workGPT-4o minideep research
reasoning-researchfrontier-modelsneeds-reviewFrontier Lab LeaderResearcher
VIEW DOSSIER →
Hyung Won Chung

Hyung Won Chung

Organization
OpenAI

Expertise
reasoning modelsdeep researchalignmentreasoning-researchalignment-evalsneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep researchGPT-4o mini
reasoning-researchalignment-evalsneeds-reviewResearcherSafety
VIEW DOSSIER →
James Bradbury

James Bradbury

Organization
Anthropic

Position

Research Scientist

Expertise
PaLIOptimizationArchitecture Search
Education

BSStanford University

Alignment

Frontier lab operator

Safety Stance

Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.

Notable Work
PaLIOptimizationArchitecture Search
PaLIOptimizationArchitecture SearchResearcher
VIEW DOSSIER →
🇨🇦
Aidan Gomez

Aidan Gomez

Nationality
Canadian

Organization
Cohere

Position

Co-founder & CEO, Cohere

Expertise
Natural Language ProcessingTransformer ArchitecturesEnterprise AIAttention MechanismsFrontier FounderModel Inventor
Education

BSc, Computer Science and MathematicsUniversity of Toronto

PhD, Computer ScienceUniversity of Oxford

Alignment

Pragmatic technologist, skeptical of effective altruism

Safety Stance

Focuses on near-term practical risks over hypothetical existential threats. Critical of effective altruism's influence on AI safety discourse. Prioritizes enterprise security and data privacy as the real safety frontier.

Notable Work
Attention Is All You Need (Transformer paper)Command RCommand ACohere EmbedCohere Rerank
CEOFrontier FounderModel InventorFounderLab LeaderResearcherSafety
@aidangomezWebsiteLinkedIn
VIEW DOSSIER →
Yi Tay

Yi Tay

Organization
Google Deepmind

Position

Research Scientist

Expertise
ReasoningScalingPost-TrainingPalm
Education

BSNanyang Technological University Singapore

PhDNanyang Technological University Singapore

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ReasoningScalingPost-TrainingPalm
ReasoningScalingPost-TrainingPalmResearcher
VIEW DOSSIER →
Christopher Re

Christopher Re

Organization
Stanford University

Expertise
Flash AttentionSequence LengthKernels
Education

BSCornell University

PhDUniversity of Washington

Alignment

Academic researcher

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Flash AttentionSequence LengthKernels
Flash AttentionSequence LengthKernelsAcademicResearcher
VIEW DOSSIER →
Giambattista Parascandolo

Giambattista Parascandolo

Organization
OpenAI

Expertise
reasoning modelsopenai o1operatorreasoning-researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
OpenAI o1Operator
reasoning-researchfrontier-modelsneeds-reviewResearcher
VIEW DOSSIER →
Rahul Arya

Rahul Arya

Organization
Google DeepMind

Position

Research Scientist

Expertise
Generalization TheoryPreference LearningOverparameterized Models
Education

BSUniversity of California

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Generalization TheoryPreference LearningOverparameterized Models
Generalization TheoryPreference LearningOverparameterized ModelsResearcher
VIEW DOSSIER →
Xuezhi Wang

Xuezhi Wang

Organization
Google Deepmind

Position

Research Scientist

Expertise
CoTTest-Time InferenceRobustness
Education

BSTsinghua University

PhDCarnegie Mellon University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
CoTTest-Time InferenceRobustness
CoTTest-Time InferenceRobustnessResearcherSafetySystems
VIEW DOSSIER →
Leo Gao

Leo Gao

Organization
OpenAI

Expertise
Mechanistic Interpretability
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Mechanistic Interpretability
Mechanistic InterpretabilityResearcherSafety
VIEW DOSSIER →
Robin Rombach

Robin Rombach

Organization
Black Forest Labs

Expertise
Latent DiffusionGenerative Image and Video
Education

BSHeidelberg University

PhDHeidelberg University

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Latent DiffusionGenerative Image and Video
Latent DiffusionGenerative Image and VideoResearcher
VIEW DOSSIER →
Jack Rae

Jack Rae

Organization
Meta

Expertise
MemoryReasoning
Education

BSUniversity of Bristol

PhDUCL

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
MemoryReasoning
MemoryReasoningResearcher
VIEW DOSSIER →
Alex Graves

Alex Graves

Organization
Google Deepmind

Position

Research Scientist

Expertise
LSTMsRNNsDeep Learning
Education

BSUniversity of Edinburgh

PhDTechnical University of Munich

Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Development of LSTM networksApplications of RNNs in speech recognition
LSTMsRNNsDeep LearningAcademicResearcher
VIEW DOSSIER →
Sami Jaghouar

Sami Jaghouar

Organization
Prime Intellect

Expertise
Distributed TrainingDecentralized RLMultimodal AI
Education

BSUniversite de Technologie de Compiegne

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Distributed TrainingDecentralized RLMultimodal AI
Distributed TrainingDecentralized RLMultimodal AIResearcher
VIEW DOSSIER →
Jonathan Gordon

Jonathan Gordon

Organization
OpenAI

Position

Research Scientist

Expertise
Bayesian Deep LearningNeural ProcessesMeta-Learning
Education

BSBen-Gurion University of the Negev

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Bayesian Deep LearningNeural ProcessesMeta-Learning
Bayesian Deep LearningNeural ProcessesMeta-LearningResearcher
VIEW DOSSIER →
🇺🇸
Ian Goodfellow

Ian Goodfellow

Nationality
American

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
Deep LearningGenerative Adversarial NetworksMachine Learning SecurityGenerative Models
Education

BS & MS, Computer ScienceStanford University

PhD, Machine LearningUniversité de Montréal

Alignment

Pragmatic researcher

Safety Stance

Focused on practical AI security including adversarial robustness and machine learning safety. Has contributed significantly to understanding vulnerabilities in neural networks.

Notable Work
Generative Adversarial Networks (GANs)Deep Learning textbookAdversarial examplesTorax fusion simulator
Generative ModelsResearcherFrontier Lab LeaderSafetyPolicy
@goodfellow_ianWebsiteLinkedIn
VIEW DOSSIER →
Collin Burns

Collin Burns

Organization
Anthropic

Expertise
AlignmentMMLUMATH
Education

BSColumbia University

PhDUniversity of California

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
AlignmentMMLUMATH
AlignmentMMLUMATHResearcherSafety
VIEW DOSSIER →
Ryan Greenblatt

Ryan Greenblatt

Organization
Redwood Research

Position

Co-Founder and Researcher

Expertise
AlignmentSafety
Education

BSBrown University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
AlignmentSafety
AlignmentSafetyFounderLab LeaderResearcher
VIEW DOSSIER →
Sandhini Agarwal

Sandhini Agarwal

Organization
OpenAI

Expertise
launch safetypolicycollective alignmentsafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Publicly associated with launch safety, policy, and collective alignment work.

Notable Work
GPT-4 launch safetycollective alignment
safety-governanceproductneeds-reviewResearcherSafetyPolicyProduct
VIEW DOSSIER →
Jon Barron

Jon Barron

Organization
Google Deepmind

Expertise
NeRF3D Scene Generation
Education

BSUniversity of Toronto

PhDUniversity of California

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
NeRF3D Scene Generation
NeRF3D Scene GenerationResearcher
VIEW DOSSIER →
Jacob Steinhardt

Jacob Steinhardt

Organization
Transluce

Position

Co-Founder and Researcher

Expertise
RobustnessSafetyAlignmentBenchmarking
Education

BSMIT

PhDStanford University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
RobustnessSafetyAlignmentBenchmarking
RobustnessSafetyAlignmentBenchmarkingFounderLab LeaderResearcherSystems
VIEW DOSSIER →
Jiahui Yu

Jiahui Yu

Organization
Meta

Position

Research Scientist in AI

Expertise
PerceptionMultimodalVision
Education

BSUSTC

PhDUniversity of Illinois

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
PerceptionMultimodalVision
PerceptionMultimodalVisionResearcher
VIEW DOSSIER →
Wojciech Zaremba

Wojciech Zaremba

Organization
OpenAI

Expertise
founding researchreinforcement learningfrontier-model leadershipfounding-teamresearch-leadershipneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research leadershipOpenAI o1 leadershipGPT-4 data leadership
founding-teamresearch-leadershipneeds-reviewResearcher
VIEW DOSSIER →
Christopher Hesse

Christopher Hesse

Organization
OpenAI

Expertise
InfraScalingReinforcement Learning
Education

BSCase Western Reserve University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
InfraScalingReinforcement Learning
InfraScalingReinforcement LearningResearcher
VIEW DOSSIER →
Raphael Koster

Raphael Koster

Organization
Google Deepmind

Position

Research Scientist

Expertise
Multi-Agent Reinforcement LearningEpisodic Memory
Education

BSUniversity of Bremen

PhDUCL

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Multi-Agent Reinforcement LearningEpisodic Memory
Multi-Agent Reinforcement LearningEpisodic MemoryResearcher
VIEW DOSSIER →
Christian Szegedy

Christian Szegedy

Organization
Morph Labs

Position

Co-Founder and Researcher

Expertise
Batch NormRLVisionReinforcement LearningComputer Vision
Education

BSEotvos Lorand University

PhDThe University of Bonn

Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Batch NormVision
Batch NormRLVisionFounderLab LeaderResearcher
VIEW DOSSIER →
Shaoqing Ren

Shaoqing Ren

Organization
NIO

Position

Senior Researcher

Expertise
ResNetFaster R-CNNAutonomous Driving
Education

BSUniversity of Science and Technology of China

PhDUniversity of Science and Technology of China

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
ResNetFaster R-CNNAutonomous Driving
ResNetFaster R-CNNAutonomous DrivingResearcher
VIEW DOSSIER →
Max Schwarzer

Max Schwarzer

Organization
OpenAI

Expertise
deep researchopenai o1reasoning systemsreasoning-researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchOpenAI o1
reasoning-researchfrontier-modelsneeds-reviewResearcherSystems
VIEW DOSSIER →
Dan Roberts

Dan Roberts

Organization
OpenAI

Position

Research Scientist

Expertise
Reinforcement Learning
Education

BSDuke University

PhDMIT

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Reinforcement Learning
Reinforcement LearningResearcher
VIEW DOSSIER →
🇬🇧
Neel Nanda

Neel Nanda

Nationality
British

Organization
Google DeepMind

Position

Mechanistic Interpretability Team Lead, Google DeepMind

Expertise
Mechanistic InterpretabilityTransformer CircuitsAI SafetySparse AutoencodersInterpretability
Education

BA, Pure MathematicsUniversity of Cambridge

Alignment

AI Safety Advocate

Safety Stance

Committed to AI safety through interpretability research. Has become more measured about what mechanistic interpretability can achieve, pivoting toward practical safety applications rather than full theoretical understanding of models.

Notable Work
Gemma Scope (sparse autoencoders)TransformerLens libraryGrokking researchInduction headsProgress Measures for Grokking
InterpretabilityResearcherFrontier Lab LeaderSafety
@NeelNanda5Website
VIEW DOSSIER →
Kai Chen

Kai Chen

Organization
OpenAI

Expertise
operator systemsdeep researchagentsreasoning-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Operatordeep research
agentsreasoning-researchneeds-reviewResearcherSystems
VIEW DOSSIER →
Jason Wei

Jason Wei

Organization
OpenAI

Expertise
reasoningalignmentevaluationreasoning-researchalignment-evalsneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
OpenAI o1HealthBench
reasoning-researchalignment-evalsneeds-reviewResearcherSafety
VIEW DOSSIER →
Nelson Elhage

Nelson Elhage

Organization
Anthropic

Position

Research Scientist

Expertise
Mechanistic InterpretabilityConstitutional AI
Education

BSMIT

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Mechanistic InterpretabilityConstitutional AI
Mechanistic InterpretabilityConstitutional AIResearcherSafety
VIEW DOSSIER →
Tom Henighan

Tom Henighan

Organization
Anthropic

Position

Co-founder and Researcher

Expertise
Scaling LawsInterpretabilityConstitutional AI
Education

BSOhio State University

PhDStanford University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Scaling LawsInterpretabilityConstitutional AI
Scaling LawsInterpretabilityConstitutional AIFounderFrontier Lab LeaderResearcherSafetySystems
VIEW DOSSIER →
🇨🇳
Kaiming He

Kaiming He

Nationality
Chinese

Organization
MIT / Google DeepMind

Position

Associate Professor of EECS (tenured), MIT; Distinguished Scientist (part-time), Google DeepMind

Expertise
Computer VisionDeep LearningImage RecognitionGenerative Models
Education

BS, PhysicsTsinghua University

PhD, Information EngineeringChinese University of Hong Kong

Alignment

Research-focused pragmatist

Safety Stance

Focuses on fundamental research to improve model reliability and efficiency. Not publicly vocal on safety policy but contributes to responsible research practices.

Notable Work
ResNet (Deep Residual Learning)Masked Autoencoders (MAE)Faster R-CNNFeature Pyramid NetworksMask R-CNNMeanFlow
Computer VisionAcademicResearcherSafetyPolicy
WebsiteLinkedIn
VIEW DOSSIER →
Piotr Dollar

Piotr Dollar

Organization
FAIR

Position

Research Scientist

Expertise
Computer VisionMask R-CNN
Education

BSHarvard University

PhDUC San Diego

Alignment

Research-focused technologist

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Mask R-CNN
Computer VisionMask R-CNNResearcher
VIEW DOSSIER →
Shuchao Bi

Shuchao Bi

Organization
OpenAI

Expertise
audio modelsspeech systemsmodel researchaudio-modelsmultimodal-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
next-generation audio models
audio-modelsmultimodal-researchneeds-reviewResearcherSystems
VIEW DOSSIER →
Will Brown

Will Brown

Organization
Prime Intellect

Expertise
RLSequential PredictionDistributed TrainingReinforcement Learning
Education

BSUniversity of Pennsylvania

PhDColumbia University

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Sequential PredictionDistributed Training
RLSequential PredictionDistributed TrainingResearcher
VIEW DOSSIER →
Neil Houlsby

Neil Houlsby

Organization
Anthropic

Position

Research Scientist

Expertise
Vision TransformersMoE
Education

BSUniversity of Cambridge

PhDUniversity of Cambridge

Alignment

Frontier lab operator

Safety Stance

Works inside a lab with a public safety-first posture; individual views here are inferred from reliability and deployment context rather than detailed personal statements.

Notable Work
Vision TransformersMoE
Vision TransformersMoEResearcher
VIEW DOSSIER →
Yair Carmon

Yair Carmon

Organization
SSI

Expertise
OptimizersRobustnessScaling
Education

BSIsrael Institute of Technology

PhDStanford University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
OptimizersRobustnessScaling
OptimizersRobustnessScalingResearcherSafety
VIEW DOSSIER →
DJ Strouse

DJ Strouse

Organization
OpenAI

Expertise
Scaling RL
Education

BSUSC

PhDPrinceton University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Scaling RL
Scaling RLResearcher
VIEW DOSSIER →
Trenton Bricken

Trenton Bricken

Organization
Anthropic

Expertise
Mechanistic InterpretabilitySparse Autoencoders
Education

BSDuke University

PhDHarvard University

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Mechanistic InterpretabilitySparse Autoencoders
Mechanistic InterpretabilitySparse AutoencodersResearcherSafety
VIEW DOSSIER →
Llion Jones

Llion Jones

Nationality
Welsh

Organization
Sakana AI

Position

Co-founder & CTO, Sakana AI

Expertise
Transformer ArchitectureAttention MechanismsNatural Language ProcessingAI ResearchModel InventorFrontier Lab Founder
Education

BSc, Computer ScienceUniversity of Birmingham

MSc, Advanced Computer ScienceUniversity of Birmingham

Alignment

Research diversity advocate

Safety Stance

Concerned about monoculture in AI research. Advocates for exploring diverse architectures beyond transformers to avoid concentrating risk in a single paradigm.

Notable Work
Attention Is All You Need (Transformer paper)Tensor2TensorGoogle Translate improvements
Model InventorFrontier Lab FounderFounderLab LeaderResearcher
@YesThisIsLionLinkedIn
VIEW DOSSIER →
Jianlin Su

Jianlin Su

Organization
Kimi

Expertise
OptimizersContext-LengthMoE
Education

BSSun Yat-sen University

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
OptimizersContext-LengthMoE
OptimizersContext-LengthMoEResearcher
VIEW DOSSIER →
Sherjil Ozair

Sherjil Ozair

Organization
General Agents

Expertise
GANsVisionAgents
Education

BSIIT

PhDUniversite de Montreal

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
GANsVisionAgents
GANsVisionAgentsResearcher
VIEW DOSSIER →
Devendra Singh Chaplot

Devendra Singh Chaplot

Organization
Thinking Machines

Expertise
MultimodalEmbodied Systems
Education

BSIIT

PhDCarnegie Mellon University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
MultimodalEmbodied Systems
MultimodalEmbodied SystemsResearcherSystems
VIEW DOSSIER →
Stephen Roller

Stephen Roller

Organization
Thinking Machines

Expertise
InfrastructureScale
Education

BSNorth Carolina State University

PhDUniversity of Texas Austin

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
InfrastructureScale
InfrastructureScaleResearcherSystems
VIEW DOSSIER →
Steven Hansen

Steven Hansen

Organization
Google Deepmind

Expertise
AgentsAlgorithm DistillationMemory-Augmented Deep RL
Education

BSCarnegie Mellon University

PhDStanford University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AgentsAlgorithm DistillationMemory-Augmented Deep RL
AgentsAlgorithm DistillationMemory-Augmented Deep RLResearcher
VIEW DOSSIER →
Jacob Hilton

Jacob Hilton

Organization
Alignment Research Center

Expertise
InterpretabilityTruthfulnessSafety
Education

BSUniversity of Cambridge

PhDUniversity of Leeds

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
InterpretabilityTruthfulnessSafety
InterpretabilityTruthfulnessSafetyResearcher
VIEW DOSSIER →
Jean Pouget-Abadie

Jean Pouget-Abadie

Organization
Google

Position

Research Scientist

Expertise
GANsInference
Education

BSEcole Polytechnique

PhDHarvard University

Alignment

Applied AI builder

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
GANsInference
GANsInferenceResearcherSystems
VIEW DOSSIER →
Alexey Dosovitskiy

Alexey Dosovitskiy

Organization
Inceptive

Expertise
Vision Transformers
Education

BSLomonosov Moscow State University

PhDLomonosov Moscow State University

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Vision Transformers
Vision TransformersResearcher
VIEW DOSSIER →
Edward J. Hu

Edward J. Hu

Organization
Stealth

Expertise
LoRAmuTransfer
Education

BSThe Johns Hopkins University

PhDUniversite de Montreal

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
LoRA, muTransfer
LoRAmuTransferResearcher
VIEW DOSSIER →
🇹🇷
Koray Kavukcuoglu

Koray Kavukcuoglu

Nationality
Turkish

Organization
Google DeepMind

Position

CTO & Chief AI Architect (SVP), Google DeepMind

Expertise
Deep LearningConvolutional Neural NetworksReinforcement LearningAI Product IntegrationFrontier Research Leader
Education

BS, Aerospace EngineeringMiddle East Technical University

MS, Computer ScienceNew York University

PhD, Computer ScienceNew York University

Alignment

Product-focused technologist

Safety Stance

Supports responsible development through product integration. Focuses on ensuring AI capabilities are deployed safely at scale within Google products.

Notable Work
DQN (co-author)WaveNet (oversight)IMPALAGemini integration
Frontier Research LeaderFrontier Lab LeaderResearcherProduct
@koraykvLinkedIn
VIEW DOSSIER →
Francois Chollet

Francois Chollet

Organization
Ndea

Expertise
KerasARC-AGI
Education

BSENSTA Paris

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
KerasARC-AGI
KerasARC-AGIResearcher
VIEW DOSSIER →
Aadit Juneja

Aadit Juneja

Organization
xAI

Position

SDK contributor (xai-org)

Expertise
developer_experiencesdk
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experiencesdk
developer_experiencesdkResearcher
VIEW DOSSIER →
Aakash Sastry

Aakash Sastry

Organization
xAI

Position

CEO & co-founder of Hotshot; joined xAI via acquisition

Expertise
acquisitionvideo_generation
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
acquisitionvideo_generationFrontier Lab Leader
acquisitionvideo_generationFounderCEOFrontier Lab LeaderResearcher
VIEW DOSSIER →
Abhinav Gupta

Abhinav Gupta

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
reasoningcode modelslarge language modelsllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini reasoningcode-focused modelslanguage-model evaluation
researchllmsreasoningResearcher
Website
VIEW DOSSIER →
A

Acquires Promptfoo

Organization
OpenAI

Expertise
Artificial Intelligencedb-ingested
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
db-ingestedArtificial Intelligence
db-ingestedResearcher
VIEW DOSSIER →
Adam Jones

Adam Jones

Organization
Anthropic

Position

MCP Product Engineering

Expertise
MCPdeveloper toolsagent toolingengineering
Alignment

Frontier lab operator

Safety Stance

Supports Anthropic's developer tooling and agent infrastructure.

Notable Work
Writing effective tools for AI agentsIntroducing advanced tool use on the Claude Developer PlatformBuilder Summit London talks
engineeringResearcherSystemsProduct
VIEW DOSSIER →
Adam Lelkes

Adam Lelkes

Organization
Google DeepMind

Position

Senior Research Scientist, Google DeepMind

Expertise
generative AItheorylanguage modelsgenerative-ai
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
generative AI researchlearning theory crossoverlanguage-model methods
researchgenerative-aitheoryResearcher
Website
VIEW DOSSIER →
Adele Li

Adele Li

Organization
OpenAI

Position

Product Lead

Expertise
product managementmultimodal productsimage generationmultimodal-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images
productmultimodal-researchneeds-reviewFrontier Lab LeaderResearcherProduct
VIEW DOSSIER →
Aditya Prerepa

Aditya Prerepa

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Aditya Ramesh

Aditya Ramesh

Organization
OpenAI

Position

World Simulation Lead

Expertise
world modelsimage generationgenerative mediamultimodal-researchworld-modelsimage-generation
Alignment

Safety-aligned researcher

Safety Stance

Public role is capability-centric, though deployed systems necessarily pass through OpenAI safety processes.

Notable Work
DALL·E4o Image Generationworld simulation
multimodal-researchworld-modelsimage-generationFrontier Lab LeaderResearcherSafetySystems
VIEW DOSSIER →
Aditya Srinivas Timmaraju

Aditya Srinivas Timmaraju

Organization
Google DeepMind

Position

Senior Staff Research Engineer, Google DeepMind

Expertise
large-scale ML engineeringfoundation modelssystemsengineeringllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini-era model engineeringresearch infrastructurelarge-model optimization
engineeringsystemsllmsResearcherSystems
Website
VIEW DOSSIER →
Adriaan Engelbrecht

Adriaan Engelbrecht

Organization
Anthropic

Position

Applied AI

Expertise
applied AIagent deployment
Alignment

Frontier lab operator

Safety Stance

Supports real-world deployment of Anthropic systems.

Notable Work
Builder Summit London
applied AIResearcherSystemsProduct
VIEW DOSSIER →
🇨🇦
Ahmad Al-Dahle

Ahmad Al-Dahle

Nationality
Canadian

Organization
Airbnb

Position

CTO, Airbnb (since Jan 2026); Former VP & Head of Generative AI, Meta

Expertise
Generative AIOpen Source LLMsAutonomous SystemsMobile TechnologyEx-MetaEx-Apple
Education

BEng, EngineeringUniversity of Waterloo

Alignment

Open-source builder

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-MetaEx-AppleLlamaOpen SourceGenerative AIOpen Source LLMs
CTOEx-MetaEx-AppleLlamaOpen SourceLab LeaderResearcherSystems
@Ahmad_Al_Dahle
VIEW DOSSIER →
Ahmed El-Kishky

Ahmed El-Kishky

Organization
OpenAI

Expertise
reasoning modelsopenai o1deep researchreasoning-researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
OpenAI o1deep research
reasoning-researchfrontier-modelsneeds-reviewResearcher
VIEW DOSSIER →
Aidan Clark

Aidan Clark

Organization
OpenAI

Expertise
pretrainingaudio modelsfrontier model researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4o pre-trainingnext-generation audio models
frontier-modelspretrainingneeds-reviewResearcher
VIEW DOSSIER →
🇹🇼
Aja Huang

Aja Huang

Nationality
Taiwanese

Organization
Google DeepMind

Position

Senior Staff Research Scientist, Google DeepMind

Expertise
reinforcement learninggame-playing AImathematical reasoninggames
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AlphaGoAlphaGo Zeroformal reasoning systems
researchrlgamesResearcher
WebsiteLinkedIn
VIEW DOSSIER →
AJ Alt

AJ Alt

Organization
Anthropic

Position

Research / product contributor

Expertise
human-AI interactionproduct research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Introducing Anthropic Interviewer
researchResearcherSafetyProduct
VIEW DOSSIER →
🇺🇸
Ajeya Cotra

Ajeya Cotra

Nationality
American

Organization
METR

Position

Member of Technical Staff, METR (Model Evaluation & Threat Research)

Expertise
AI SafetyAI ForecastingCompute ScalingRisk Assessment
Education

BS, Electrical Engineering and Computer ScienceUniversity of California, Berkeley

Alignment

AI safety-focused, effective altruism aligned

Safety Stance

Believes there is a meaningful chance of transformative AI within the next decade. Thinks current safety plans that rely on "using AI to make AI safe" may be insufficient. Advocates for rigorous external evaluation, threat modeling, and preparing for scenarios where AI systems could resist human oversight.

Notable Work
Biological Anchors: A New Framework for Forecasting Transformative AIWithout Specific Countermeasures, the Easiest Path to Transformative AI Likely Leads to AI TakeoverAI Predictions for 2026
AI SafetyResearcherSafetySystems
@ajeya_cotraWebsiteLinkedIn
VIEW DOSSIER →
Akshay Nathan

Akshay Nathan

Organization
OpenAI

Expertise
research leadershipdeep researchgpt-5research-leadershipfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research leadershipGPT-5
research-leadershipfrontier-modelsneeds-reviewResearcher
VIEW DOSSIER →
Alan Karthikesalingam

Alan Karthikesalingam

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
biomedical AIclinical language modelsAI for sciencehealthbiomedical-aiscience
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AMIEAI Co-ScientistMed-PaLM family
healthbiomedical-aiscienceverifiedResearcherSystems
Website
VIEW DOSSIER →
Aleksander Madry

Aleksander Madry

Organization
OpenAI

Position

Head of Preparedness

Expertise
preparednessrobustnessfrontier risk evaluationsafety-governancetechnical-safety
Alignment

Policy and governance operator

Safety Stance

Strongly aligned with evaluation-heavy preparedness work for frontier systems.

Notable Work
Preparedness evaluationsrobustness and adversarial ML
safety-governancepreparednesstechnical-safetyFrontier Lab LeaderAcademicResearcherSafetyPolicy
VIEW DOSSIER →
Alexander Pan

Alexander Pan

Organization
xAI

Position

xAI (research / safety fellowship mentioned)

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

researchsafetyResearcherSafety
LinkedIn
VIEW DOSSIER →
Alexandra Sanderford

Alexandra Sanderford

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic IndexAnthropic Economic Index report: Economic primitives
economic researchResearcherSafety
VIEW DOSSIER →
🇺🇸
Alexandr Wang

Alexandr Wang

Nationality
American

Organization
Meta

Position

Chief AI Officer

Expertise
AI Data InfrastructureSuperintelligenceAI ScalingChief AI OfficerScale AI
Education

Attended, Computer ScienceMIT

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Chief AI OfficerSuperintelligenceScale AIFrontier Lab LeaderAI Data InfrastructureAI Scaling
Chief AI OfficerFounderSuperintelligenceScale AICEOFrontier Lab LeaderResearcherSystems
@alexandr_wang
VIEW DOSSIER →
Alex Davies

Alex Davies

Organization
Google DeepMind

Position

Founding Lead, AI for Maths, Google DeepMind

Expertise
AI for mathematicsscientific discoveryreasoning systemssciencemathreasoning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
FunSearch-era math workAI for Maths initiativemath discovery tooling
sciencemathreasoningFrontier Lab LeaderResearcherSystems
Website
VIEW DOSSIER →
Alex Gruenstein

Alex Gruenstein

Organization
Google DeepMind

Position

Senior Director of Engineering, Gemini App, Google DeepMind

Expertise
speech systemsAI product engineeringconsumer AI applicationsengineeringspeechverified
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini app engineering leadershipspeech technologiesassistant-era systems
engineeringproductspeechverifiedFrontier Lab LeaderResearcherSystemsProduct
Website
VIEW DOSSIER →
🇬🇧
Alex Kendall

Alex Kendall

Nationality
British

Organization
Wayve

Position

Co-Founder & CEO, Wayve

Expertise
Autonomous DrivingComputer VisionEnd-to-End Deep Learning
Education

PhD, Computer ScienceUniversity of Cambridge

Alignment

Academic researcher

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
End-to-end autonomous drivingBayesian deep learning
FounderCEOAutonomous DrivingComputer VisionLab LeaderAcademicResearcher
VIEW DOSSIER →
🇺🇦🇨🇦
Alex Krizhevsky

Alex Krizhevsky

Nationality
Ukrainian-Canadian

Organization
Two Bear Capital

Position

Venture Partner, Two Bear Capital

Expertise
Computer VisionDeep LearningConvolutional Neural Networks
Education

BSc, Computer ScienceUniversity of Toronto

MSc, Computer ScienceUniversity of Toronto

PhD, Computer ScienceUniversity of Toronto

Alignment

Pragmatic technologist

Safety Stance

Not publicly vocal on AI safety. Focused on practical applications and investing in responsible AI startups.

Notable Work
AlexNetImageNet Classification with Deep CNNsCIFAR-10 and CIFAR-100 datasets (co-creator)cuda-convnet
Computer VisionResearcherAcademicSafetyInvestor
Website
VIEW DOSSIER →
Alex Nichol

Alex Nichol

Organization
OpenAI

Expertise
image generationdiffusion modelsmultimodal researchmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images research collaboration4o Image Generation
multimodal-researchimage-generationneeds-reviewResearcher
VIEW DOSSIER →
Alex Peng

Alex Peng

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Alex Tamkin

Alex Tamkin

Organization
Anthropic

Position

Research contributor

Expertise
economic researchAI usage analysisevaluation
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerAnthropic Economic Index reportsHow AI assistance impacts the formation of coding skillsAnthropic Education Report: How educators use Claude
researchResearcherSafety
VIEW DOSSIER →
A

Alex Wang

Organization
Meta

Position

Chief AI Officer

Expertise
Artificial Intelligence
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Frontier Lab LeaderArtificial Intelligence
Frontier Lab LeaderResearcher
VIEW DOSSIER →
🇺🇸
Ali Farhadi

Ali Farhadi

Nationality
Iranian-American

Organization
Allen Institute for AI (Ai2) / University of Washington

Position

CEO, Allen Institute for AI (Ai2); Professor, University of Washington

Expertise
Computer VisionMachine LearningNatural Language and VisionOpen Source AI
Education

PhD, Computer ScienceUniversity of Illinois at Urbana-Champaign

Alignment

Open-source AI advocate

Safety Stance

Believes open-source AI is the safest path forward. Advocates for transparency and broad access to AI tools and models.

Notable Work
YOLO (co-creator)Visual GenomeXnor.ai (on-device AI)OLMoOpen Source AI models at Ai2
Computer VisionAcademicLab LeaderFounderCEOResearcherOpen Source
WebsiteLinkedIn
VIEW DOSSIER →
Amanda Donohue

Amanda Donohue

Organization
Anthropic

Position

Head of Product

Expertise
product leadershipenterprise AI
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic product deployment within the company's safety-first framing.

Notable Work
Frontier Lab Leaderproduct leadershipenterprise AI
productFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
🇮🇳
Amar Subramanya

Amar Subramanya

Nationality
Indian

Organization
Apple

Position

VP of AI, Apple

Expertise
Foundation ModelsMachine LearningAI SafetySemi-Supervised LearningEx-GoogleEx-Microsoft
Education

BE, Electronics and Communications EngineeringBangalore University / UVCE

PhD, Computer ScienceUniversity of Washington

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Ex-GoogleEx-MicrosoftFoundation ModelsAI SafetyMachine LearningSemi-Supervised Learning
VPEx-GoogleEx-MicrosoftFoundation ModelsAI SafetyLab LeaderResearcherSafety
VIEW DOSSIER →
Amy Soller

Amy Soller

Organization
xAI

Position

Mission Manager (Intelligence Community Lead)

Expertise
governmentmission_manager
Alignment

Policy and governance operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
governmentmission_managerFrontier Lab Leader
governmentmission_managerFrontier Lab LeaderResearcherPolicy
LinkedIn
VIEW DOSSIER →
🇷🇴🇺🇸
Anca Dragan

Anca Dragan

Nationality
Romanian-American

Organization
Google DeepMind / UC Berkeley

Position

Head of AI Safety and Alignment, Google DeepMind; Associate Professor (on leave), UC Berkeley

Expertise
Human-Robot InteractionAI SafetyAI AlignmentRobotics
Education

BS, Computer ScienceJacobs University Bremen

PhD, RoboticsCarnegie Mellon University

Alignment

AI safety advocate

Safety Stance

Deeply committed to AI alignment. Now heads safety and alignment research at Google DeepMind. Researches how AI systems can better understand, predict, and align with human intentions and values.

Notable Work
Legible Robot Motion PlanningValue Alignment in AIHuman-Robot InteractionGLOVER++HOVA-500K
RoboticsAcademicResearcherFrontier Lab LeaderSafetySystems
@ancadianadraganWebsiteLinkedIn
VIEW DOSSIER →
🇺🇸
Andi Peng

Andi Peng

Nationality
American

Organization
Humans&

Position

Co-Founder, Humans&

Expertise
Reinforcement LearningAI SafetyHuman-AI CollaborationEx-Anthropic
Education

PhD, Computer Science (CSAIL)MIT

MPhil, Marshall ScholarUniversity of Cambridge

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Ex-AnthropicAI SafetyHuman-AI CollaborationReinforcement Learning
FounderEx-AnthropicAI SafetyHuman-AI CollaborationLab LeaderResearcherSafety
VIEW DOSSIER →
Andrea Vallone

Andrea Vallone

Organization
OpenAI

Expertise
model behavior policydetection and refusalshealth benchmarksalignment-evalssafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Associated with policy, refusals, and model-behavior evaluation.

Notable Work
GPT-4 detection and refusals policyHealthBench
alignment-evalssafety-governanceneeds-reviewResearcherSafetyPolicy
VIEW DOSSIER →
🇺🇸
Andrew Barto

Andrew Barto

Nationality
American

Organization
University of Massachusetts Amherst

Position

Professor Emeritus of Computer Science, University of Massachusetts Amherst

Expertise
Reinforcement LearningMachine LearningAdaptive ControlNeuroscience-inspired AIRL PioneerTuring Award
Education

BS, MathematicsUniversity of Michigan

MS, Computer and Communication SciencesUniversity of Michigan

PhD, Computer ScienceUniversity of Michigan

Alignment

Academic

Safety Stance

Focuses on foundational research. Has expressed concern about ensuring AI systems learn aligned reward functions.

Notable Work
Reinforcement Learning: An Introduction (textbook)Temporal-difference learningActor-critic methodsIntrinsic motivation in RL
RL PioneerAcademicTuring AwardResearcherSystems
Website
VIEW DOSSIER →
a

andrew-bosworth

Expertise
Artificial Intelligence
Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Artificial Intelligence
Researcher
VIEW DOSSIER →
Andrew Braunstein

Andrew Braunstein

Organization
OpenAI

Expertise
inference systemscore infrastructuregpt-5deployment-infrastructuregpt5needs-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images core inferenceGPT-5
deployment-infrastructuregpt5needs-reviewResearcherSystemsProduct
VIEW DOSSIER →
Andrew Burlinson

Andrew Burlinson

Organization
xAI

Position

Expert Team Lead (Grok Imagine)

Expertise
grok_imaginehuman_data_lead
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
grok_imaginehuman_data_leadFrontier Lab Leader
grok_imaginehuman_data_leadFrontier Lab LeaderResearcher
LinkedIn
VIEW DOSSIER →
Andrew Cohen

Andrew Cohen

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Andrew Dudzik

Andrew Dudzik

Organization
Google DeepMind

Position

Senior Research Scientist, Google DeepMind

Expertise
algorithmsdistributed systemsmachine intelligence
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
algorithms and theorydistributed ML systemsmachine intelligence research
researchalgorithmssystemsResearcherSystems
Website
VIEW DOSSIER →
Andrew Ma

Andrew Ma

Organization
xAI

Position

Member of Technical Staff (departed 2026)

Expertise
engineeringsearch
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringsearch
engineeringsearchResearcher
VIEW DOSSIER →
🇬🇧🇺🇸
Andrew Ng

Andrew Ng

Nationality
British-American

Organization
AI Fund / AI Aspire / DeepLearning.AI / Landing AI

Position

Managing General Partner, AI Fund; Managing Partner, AI Aspire; Founder & CEO, DeepLearning.AI; Executive Chairman, Landing AI

Expertise
Machine LearningDeep LearningAI EducationAgentic AIOpen Source
Education

PhD, Computer ScienceUniversity of California, Berkeley

Alignment

Pragmatic technologist

Safety Stance

Opposes heavy regulation. At Davos 2026, argued AI job displacement fears are exaggerated — impact is more nuanced when jobs are broken into tasks. Advocates for open-source and broad access.

Notable Work
Google BrainLarge-scale unsupervised learningStanford Autonomous HelicopterAgentic AI workflows
CEOEducatorInvestorOpen SourceAgentic AIFounderLab LeaderAcademic
@AndrewYNgWebsiteLinkedIn
VIEW DOSSIER →
🇬🇧
Andrew Zisserman

Andrew Zisserman

Nationality
British

Organization
University of Oxford / Google DeepMind

Position

Royal Society Research Professor & Professor of Computer Vision Engineering, University of Oxford

Expertise
Computer VisionDeep LearningVisual RecognitionMulti-view Geometry
Education

PhD, MathematicsUniversity of Cambridge

Alignment

Academic

Safety Stance

Focused on advancing fundamental understanding of computer vision. Engages with responsible AI through academic research and mentorship.

Notable Work
VGGNet (Very Deep Convolutional Networks)Multiple View GeometryVisual Transformer architecturesVideo understanding
Computer VisionAcademicResearcher
Website
VIEW DOSSIER →
Anelia Angelova

Anelia Angelova

Organization
Google DeepMind

Position

Principal Scientist and Vision-Language Lead, Google DeepMind

Expertise
computer visionvision-language modelsrobotics perceptionvisionmultimodalrobotics
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
vision-language systemsrobot visionmultimodal learning
visionmultimodalroboticsverifiedFrontier Lab LeaderResearcherSystems
Website
VIEW DOSSIER →
Anna Makanju

Anna Makanju

Organization
OpenAI

Position

Vice President of Global Affairs

Expertise
global affairspolicygovernanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
policy and global affairs leadershipOperator leadership
policygovernanceneeds-reviewResearcherPolicy
VIEW DOSSIER →
Anthony Armstrong

Anthony Armstrong

Organization
xAI

Position

CFO

Expertise
finance
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
finance
executivefinanceResearcher
VIEW DOSSIER →
🇮🇳
Aravind Srinivas

Aravind Srinivas

Nationality
Indian

Organization
Perplexity AI

Position

Co-Founder & CEO, Perplexity AI

Expertise
Information RetrievalLarge Language ModelsAI SearchEx-OpenAI
Education

PhD, Computer ScienceUC Berkeley

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-OpenAIAI SearchInformation RetrievalLarge Language Models
FounderCEOEx-OpenAIAI SearchLab LeaderResearcher
VIEW DOSSIER →
Aren Jansen

Aren Jansen

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
multimodal language modelingmedia generationspeech and audiomultimodalaudio
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
media generationmultimodal foundation researchspeech-centric modeling
researchmultimodalaudioResearcher
Website
VIEW DOSSIER →
🇺🇸
Ari Morcos

Ari Morcos

Nationality
American

Organization
DatologyAI

Position

Co-Founder & CEO, DatologyAI

Expertise
Data-Centric AINeural Network RepresentationsNeurobiologyEx-MetaEx-DeepMind
Education

BS, Physiology & NeuroscienceUC San Diego

PhD, NeurobiologyHarvard University

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-MetaEx-DeepMindData-Centric AINeural Network RepresentationsNeurobiology
FounderCEOEx-MetaEx-DeepMindData-Centric AILab LeaderResearcher
@arimorcos
VIEW DOSSIER →
Arjun Reddy Akula

Arjun Reddy Akula

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
computer visionNLPstatistical modelingvision-languagemultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
VQA and multimodal benchmarksvision-language reasoningdeep learning
researchvision-languagemultimodalResearcher
Website
VIEW DOSSIER →
Arsha Nagrani

Arsha Nagrani

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
video understandingaudio-visual learningmultimodal representation learningvideomultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
multimodal video understandingaudio-aware learningvideo-language systems
researchvideomultimodalResearcher
Website
VIEW DOSSIER →
🇫🇷
Arthur Mensch

Arthur Mensch

Nationality
French

Organization
Mistral AI

Position

Co-founder & CEO, Mistral AI

Expertise
Large Language ModelsMachine LearningOptimizationOpen-Source AIFrontier Founder
Education

MSc, Applied MathematicsÉcole Polytechnique

MSc, Mathematics, Vision and LearningÉcole Normale Supérieure Paris-Saclay

PhD, Machine LearningUniversité Paris-Saclay

Alignment

Open-source AI advocate, European tech sovereignty proponent

Safety Stance

Believes AI safety responsibility lies with developers deploying models, not foundation model builders. Advocates open-source transparency as the best safety guarantee. Supports product-level regulation over model-level regulation.

Notable Work
Mistral 7BMixtral (Mixture of Experts)PixtralCodestral
CEOFrontier FounderFounderFrontier Lab LeaderResearcherSafetyPolicyProduct
@arthurmenschWebsiteLinkedIn
VIEW DOSSIER →
Arvind KC

Arvind KC

Organization
OpenAI

Position

Chief People Officer

Expertise
organizational designtalent strategycompany buildingexecutive-leadershippeople-operationsorganizational-scaling
Alignment

Safety-aligned researcher

Safety Stance

No distinct technical safety stance located in first-party materials reviewed.

Notable Work
People strategy for OpenAI scale-up
executive-leadershippeople-operationsorganizational-scalingFrontier Lab LeaderResearcherSafety
VIEW DOSSIER →
🇮🇳🇺🇸
Arvind Narayanan

Arvind Narayanan

Nationality
Indian-American

Organization
Princeton University

Position

Professor of Computer Science & Director of CITP, Princeton University

Expertise
AI AccountabilityAlgorithmic FairnessPrivacyInformation SecurityTechnology Policy
Education

BTech, Computer Science and EngineeringIndian Institute of Technology Madras

PhD, Computer ScienceUniversity of Texas at Austin

Alignment

Evidence-based policy advocate

Safety Stance

Skeptical of both AI hype and existential risk framing. Focuses on distinguishing genuine AI capabilities from "snake oil." Advocates for empirical accountability — testing AI claims against evidence rather than speculation. Warns about predictive AI systems that don't work but are deployed anyway.

Notable Work
AI Snake Oil (book)De-anonymization of large datasetsWeb transparency and accountabilityPredictive AI auditingBlockchain analysis
AI AccountabilityAcademicResearcherLab LeaderPolicySystems
@random_walkerWebsiteLinkedIn
VIEW DOSSIER →
🇮🇳🇺🇸
Ashish Vaswani

Ashish Vaswani

Nationality
Indian-American

Organization
Essential AI

Position

Co-Founder & CEO, Essential AI

Expertise
TransformersNatural Language ProcessingDeep LearningFoundation ModelsModel Inventor
Education

BE, Computer ScienceBirla Institute of Technology, Mesra

PhD, Computer ScienceUniversity of Southern California

Alignment

Open science advocate

Safety Stance

Advocates for open science and foundational research transparency. Believes sustained AI progress depends on open collaboration.

Notable Work
Attention Is All You Need (Transformer)Rnj-1 modelNeural machine translation
TransformersModel InventorFounderCEOLab LeaderResearcher
@ashVaswaniWebsiteLinkedIn
VIEW DOSSIER →
Asma Ghandeharioun

Asma Ghandeharioun

Organization
Google DeepMind

Position

Senior Research Scientist, People + AI Research, Google DeepMind

Expertise
interpretabilityalignmenthuman-centered AIresponsible-ai
Alignment

Safety-aligned researcher

Safety Stance

Explicitly works on aligning language models with human values.

Notable Work
model interpretabilityhuman values alignmentPAIR research
responsible-aiinterpretabilityalignmentResearcherSafety
Website
VIEW DOSSIER →
Avery Rogers

Avery Rogers

Organization
Anthropic

Position

Member of Technical Staff

Expertise
engineeringdeveloper products
Alignment

Frontier lab operator

Safety Stance

Contributes to Anthropic technical delivery.

Notable Work
engineeringdeveloper products
engineeringResearcher
VIEW DOSSIER →
Ayush Jaiswal

Ayush Jaiswal

Organization
xAI

Position

Worked on Grok (departed 2026)

Expertise
grok
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
grok
productgrokResearcherProduct
VIEW DOSSIER →
Balaji Lakshminarayanan

Balaji Lakshminarayanan

Organization
Google / Google DeepMind

Position

Research Scientist in the Google-DeepMind stack

Expertise
uncertainty estimationrobustnessprobabilistic deep learninguncertainty
Alignment

Safety-aligned researcher

Safety Stance

Strongly associated with uncertainty, reliability, and robust model behavior.

Notable Work
OOD robustnessdistance-aware uncertainty methodsreliable deep learning
researchrobustnessuncertaintyResearcherSafety
Website
VIEW DOSSIER →
Barry Zhang

Barry Zhang

Organization
Anthropic

Position

Engineering contributor

Expertise
agent engineeringdeveloper toolsengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Building effective agentsHow we built our multi-agent research system
engineeringResearcherSafety
VIEW DOSSIER →
🇰🇷
Been Kim

Been Kim

Nationality
South Korean

Organization
Google DeepMind

Position

Senior Staff Research Scientist, Google DeepMind

Expertise
AI InterpretabilityExplainabilityHuman-AI InteractionAlignmentInterpretabilityAAAI Fellow
Education

PhD, Computer ScienceMIT

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
TCAVConcept Activation VectorsAgentic interpretability with AlphaZero
ResearcherInterpretabilityExplainabilityAAAI FellowSafety
Website
VIEW DOSSIER →
🇺🇸🇧🇷
Ben Goertzel

Ben Goertzel

Nationality
American-Brazilian

Organization
SingularityNET

Position

Founder, CEO & Chief Scientist, SingularityNET; CEO, ASI Alliance

Expertise
Artificial General IntelligenceCognitive ScienceDecentralized AIComplex SystemsAGI Pioneer
Education

BA, Quantitative ResearchBard College at Simon's Rock

PhD, MathematicsTemple University

Alignment

Academic researcher

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
OpenCogAGI SocietyProbabilistic Logic Networks
FounderCEOAGI PioneerDecentralized AILab LeaderAcademicResearcherSystems
Website
VIEW DOSSIER →
Berkin Akin

Berkin Akin

Organization
Google DeepMind

Position

Software Engineer, Google DeepMind

Expertise
TPU performancelarge-scale computesystems engineeringengineeringhardware
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
compute performance for large-scale AI on TPUsmodel systems optimizationinfrastructure engineering
engineeringsystemshardwareResearcherSystems
Website
VIEW DOSSIER →
Biao He

Biao He

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇺🇸
Bill Dally

Bill Dally

Nationality
American

Organization
NVIDIA

Position

Chief Scientist & SVP of Research, NVIDIA

Expertise
GPU ArchitectureParallel ComputingComputer Architecture
Alignment

Academic researcher

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
J-MachineM-MachineGPU architectureParallel computing
Chief ScientistGPU ArchitectureAcademicLab LeaderResearcherSystems
VIEW DOSSIER →
Bill Peebles

Bill Peebles

Organization
OpenAI

Position

Sora Lead

Expertise
video generationmultimodal researchworld simulationmultimodal-researchvideo-generationresearch-lead
Alignment

Safety-aligned researcher

Safety Stance

Publicly known primarily for generative video work rather than safety-specific leadership.

Notable Work
Soravideo generationmultimodal generative models
multimodal-researchvideo-generationresearch-leadFrontier Lab LeaderResearcherSafety
VIEW DOSSIER →
Bin Wu

Bin Wu

Organization
Anthropic

Position

Engineering contributor

Expertise
developer platformtool useengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Introducing advanced tool use on the Claude Developer Platform
engineeringResearcherSafetyProduct
VIEW DOSSIER →
🇺🇸
Bob McGrew

Bob McGrew

Nationality
American

Organization
Arda

Position

Founder

Expertise
AI ResearchAI ReasoningAI ManufacturingEngineering LeadershipEx-OpenAICRO
Education

BS, Computer ScienceStanford University

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-OpenAICROEx-PalantirAI ReasoningAI ResearchAI Manufacturing
Ex-OpenAICROEx-PalantirFounderAI ReasoningLab LeaderResearcher
VIEW DOSSIER →
Boris Cherny

Boris Cherny

Organization
Anthropic

Position

Head of Claude Code

Expertise
coding agentsdeveloper toolsengineering
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic's developer tooling within its safety-focused framing.

Notable Work
Claude Code
productengineeringFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Brad Abrams

Brad Abrams

Organization
Anthropic

Position

Product Manager

Expertise
product managementdeveloper platform
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic product deployment within its safety-first framing.

Notable Work
product managementdeveloper platform
productResearcherSafetyProduct
VIEW DOSSIER →
Brad Lightcap

Brad Lightcap

Organization
OpenAI

Position

Chief Operating Officer

Expertise
operationsbusiness strategyglobal deploymentexecutive-leadershipbusiness
Alignment

Frontier lab operator

Safety Stance

Primarily an operating executive; public role emphasizes responsible scale and deployment.

Notable Work
Operational scaling for OpenAI products and infrastructure
executive-leadershipoperationsbusinessFrontier Lab LeaderResearcherSystemsProduct
VIEW DOSSIER →
Brendan Jou

Brendan Jou

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
multimodal learninghuman-centered AIaudiovisual generationmultimodalhci
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
audiovisual generationhuman-centered MLmultimodal research
researchmultimodalhciResearcher
Website
VIEW DOSSIER →
Brenna O'Brocta

Brenna O'Brocta

Organization
xAI

Position

AI Tutor

Expertise
human_data
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
human_data
human_datasafetyResearcherSafety
LinkedIn
VIEW DOSSIER →
Briana Hamilton

Briana Hamilton

Organization
xAI

Position

Environment, Health and Safety Manager

Expertise
operations
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
operations
operationssafetyResearcherSafety
LinkedIn
VIEW DOSSIER →
Brian Bjelde

Brian Bjelde

Organization
xAI

Position

Mission Manager

Expertise
mission_manager
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
mission_manager
mission_managerResearcher
LinkedIn
VIEW DOSSIER →
Brian Calvert

Brian Calvert

Organization
Anthropic

Position

Research contributor

Expertise
AI usage research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practice
researchResearcherSafety
VIEW DOSSIER →
🇺🇸
Bryan Catanzaro

Bryan Catanzaro

Nationality
American

Organization
NVIDIA

Position

VP of Applied Deep Learning Research, NVIDIA

Expertise
Applied Deep LearningNLPSpeech RecognitionOpen ModelsVP ResearchApplied AI
Education

PhD, Electrical Engineering and Computer SciencesUC Berkeley

Alignment

Applied AI builder

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
cuDNNDLSS 2.0Nemotron models
VP ResearchApplied AIOpen ModelsEx-BaiduLab LeaderResearcherSystems
VIEW DOSSIER →
Bryan Seethor

Bryan Seethor

Organization
Anthropic

Position

Research contributor

Expertise
organizational AI usehuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
How AI Is Transforming Work at Anthropic
researchResearcherSafety
VIEW DOSSIER →
Cady Tianyu Xu

Cady Tianyu Xu

Organization
Google DeepMind

Position

Researcher, GenAI Team, Google DeepMind

Expertise
LLM agentsclosed-loop executionreasoning systemsagentsllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
LLM agents for executiongenAI researchagent reliability in dynamic environments
researchagentsllmsResearcherSystems
Website
VIEW DOSSIER →
C

Caitlin Kalinowski

Organization
OpenAI

Expertise
Artificial Intelligencedb-ingested
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
db-ingestedArtificial Intelligence
db-ingestedResearcher
VIEW DOSSIER →
Carly Ryan

Carly Ryan

Organization
Anthropic

Position

Applied AI contributor

Expertise
context engineeringapplied AI
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Effective context engineering for AI agents
applied AIResearcherSafety
VIEW DOSSIER →
Casey Chu

Casey Chu

Organization
OpenAI

Expertise
safety and model readinessimage generationpolicy-linked researchalignment-evalsmultimodal-researchneeds-review
Alignment

Policy and governance operator

Safety Stance

Directly credited with safety and model readiness on Operator.

Notable Work
Operator safety and model readiness4o Image Generation
alignment-evalsmultimodal-researchneeds-reviewResearcherSafetyPolicy
VIEW DOSSIER →
Cat Wu

Cat Wu

Organization
Anthropic

Position

Product Manager

Expertise
product managementdeveloper experience
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic product deployment within its safety-first framing.

Notable Work
product managementdeveloper experience
productResearcherSafetyProduct
VIEW DOSSIER →
Chaitu Aluru

Chaitu Aluru

Organization
xAI

Position

Building Grok

Expertise
grokengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
grokengineering
grokengineeringResearcher
LinkedIn
VIEW DOSSIER →
Charlie Nash

Charlie Nash

Organization
OpenAI

Expertise
image generationmultimodal researchmodel developmentmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images4o Image Generation
multimodal-researchimage-generationneeds-reviewResearcher
VIEW DOSSIER →
Chris Bregler

Chris Bregler

Organization
Google DeepMind

Position

Senior Director and Distinguished Scientist, Google DeepMind

Expertise
computer visiongraphicsgenerative mediavisiongenerative-medialeadership
Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
motion capture3D visionvideo generation leadership
visiongenerative-medialeadershipverifiedFrontier Lab LeaderAcademicResearcher
Website
VIEW DOSSIER →
Chris Ciauri

Chris Ciauri

Organization
Anthropic

Position

Head of International

Expertise
international expansionenterprise AIregional leadershipcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports global deployment under Anthropic's public safety-first framing.

Notable Work
Anthropic international expansion
regional leadershipcommercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Chris Liddell

Chris Liddell

Organization
Anthropic

Position

Board Member

Expertise
corporate governanceboard oversightboardgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's public-benefit mission.

Notable Work
boardgovernancecorporate governanceboard oversight
boardgovernanceResearcherPolicy
VIEW DOSSIER →
🇺🇸
Chris Ré

Chris Ré

Nationality
American

Organization
Stanford University / Together AI / Cartesia AI

Position

Professor of Computer Science, Stanford; Co-Founder, Together AI & Cartesia AI

Expertise
State Space ModelsFoundation ModelsData-Centric AIAI SystemsMacArthur Fellow
Education

BS, Computer ScienceCornell University

PhD, Computer ScienceUniversity of Washington

Alignment

Academic researcher

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
S4 (Structured State Space)Mamba (via students)SnorkelFlashAttention (lab)
ProfessorFounderMacArthur FellowState Space ModelsAcademicLab LeaderResearcherSystems
Website
VIEW DOSSIER →
Christian Ryan

Christian Ryan

Organization
Anthropic

Position

Applied AI

Expertise
applied AIagent deployment
Alignment

Frontier lab operator

Safety Stance

Supports real-world deployment of Anthropic systems.

Notable Work
Writing effective tools for AI agentsBuilder Summit LondonCode with Claude 2025
applied AIResearcherSystemsProduct
VIEW DOSSIER →
🇦🇺🇺🇸
Christopher Manning

Christopher Manning

Nationality
Australian-American

Organization
Stanford University / AIX Ventures

Position

Thomas M. Siebel Professor in Machine Learning, Stanford University; General Partner, AIX Ventures

Expertise
Natural Language ProcessingComputational LinguisticsDeep LearningInformation RetrievalNLP
Education

BA (Hons), Mathematics, Computer Science, and LinguisticsAustralian National University

PhD, LinguisticsStanford University

Alignment

Academic centrist

Safety Stance

Advocates for responsible AI development through rigorous research and understanding of language model capabilities and limitations.

Notable Work
GloVe Word VectorsStanford CoreNLPBilinear Attention MechanismTree-Structured Recursive Neural NetworksStanford Dependencies
NLPAcademicResearcherLab LeaderInvestor
@chrmanningWebsiteLinkedIn
VIEW DOSSIER →
Christopher Zihao Li

Christopher Zihao Li

Organization
xAI

Position

MTS (Supercomputing)

Expertise
supercomputingmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
supercomputingmts
supercomputingmtsResearcher
LinkedIn
VIEW DOSSIER →
Claire Cui

Claire Cui

Organization
Google / Google DeepMind

Position

Google Fellow in the Google-DeepMind stack

Expertise
recommender systemsrepresentation learningapplied machine learningrecsysrepresentation-learning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
frontier recommender modelingensemble prediction variationapplied deep learning
researchrecsysrepresentation-learningResearcherSystems
Website
VIEW DOSSIER →
Claudio Angrigiani

Claudio Angrigiani

Organization
xAI

Position

Member of Technical Staff

Expertise
designengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
designengineering
designengineeringResearcher
LinkedIn
VIEW DOSSIER →
🇫🇷
Clem Delangue

Clem Delangue

Nationality
French

Organization
Hugging Face

Position

Co-founder & CEO, Hugging Face

Expertise
Open-Source AIAI PlatformsMachine Learning InfrastructureCommunity BuildingOpen Source Leader
Education

Master in Management, Business AdministrationESCP Business School

Non-degree, Computer ScienceStanford University

Alignment

Open-source AI maximalist

Safety Stance

Believes open-source and community-driven development is the safest path for AI. Argues that transparency and broad access prevent concentration of power. Warns that the real risk is a few companies controlling AI behind closed doors.

Notable Work
Hugging Face HubHugging Face Transformers library (co-created)SmolLMLeRobot
CEOOpen Source LeaderFounderLab LeaderResearcherSystemsProductOpen Source
@ClementDelangueWebsiteLinkedIn
VIEW DOSSIER →
Connor Jennings

Connor Jennings

Organization
Anthropic

Position

Member of Technical Staff

Expertise
engineeringdeveloper productscontext engineering
Alignment

Frontier lab operator

Safety Stance

Contributes to Anthropic technical delivery.

Notable Work
Effective context engineering for AI agents
engineeringResearcher
VIEW DOSSIER →
Dale Schuurmans

Dale Schuurmans

Organization
Google DeepMind

Position

Research Director, Google DeepMind

Expertise
reinforcement learningoptimizationmachine learning theorytheoryverified
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational ML theorydecision-makingUniversity of Alberta partnership
researchtheoryrlverifiedFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
Dan Belov

Dan Belov

Organization
Google DeepMind

Position

Distinguished Engineer, DeepMind and Google

Expertise
ML infrastructurerobotics lab infrastructurelarge-scale systemsengineeringinfrastructure
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
DeepMind engineering organization build-outresearch infrastructurescientific computing systems
engineeringinfrastructuresystemsResearcherSystems
Website
VIEW DOSSIER →
Daniela Amodei

Daniela Amodei

Organization
Anthropic

Position

Co-founder and President

Expertise
company leadershipoperationsAI governanceco-founder
Alignment

Policy and governance operator

Safety Stance

Publicly aligned with Anthropic's safety-first and public-benefit framing.

Notable Work
Anthropic leadership and governance
executiveco-founderFounderFrontier Lab LeaderResearcherSafetyPolicy
VIEW DOSSIER →
Daniel De Freitas

Daniel De Freitas

Organization
Google DeepMind

Position

Senior Staff Software Engineer, Google DeepMind

Expertise
Conversational AIChatbotsLarge Language ModelsEx-Character.AI
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Meena/LaMDACharacter.AI
Ex-Character.AIConversational AIChatbotsFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Daniel Golovin

Daniel Golovin

Organization
Google DeepMind

Position

Lead, Google DeepMind Pittsburgh

Expertise
optimizationautomated experimentationBayesian methodsbayesian-methods
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
VizierML-driven optimizationautomated experimental design
researchoptimizationbayesian-methodsFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
🇫🇷
Daniel Levy

Daniel Levy

Nationality
French

Organization
Safe Superintelligence Inc. (SSI)

Position

Co-Founder & President, SSI

Expertise
OptimizationMachine LearningSuperintelligence SafetyEx-OpenAISuperintelligence
Education

BS/MS, MathematicsEcole Polytechnique

PhD, Computer ScienceStanford University

Alignment

Safety-aligned researcher

Safety Stance

Core mission is safe superintelligence — building the most powerful AI systems with safety as a first-class objective.

Notable Work
Ex-OpenAISuperintelligenceFrontier Lab LeaderOptimizationMachine LearningSuperintelligence Safety
FounderEx-OpenAISafetySuperintelligenceCEOFrontier Lab LeaderResearcherSystems
VIEW DOSSIER →
Daniel Rowland

Daniel Rowland

Organization
xAI

Position

Data center operations lead (per org chart reports)

Expertise
data_centerinfrastructure
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centerinfrastructureFrontier Lab Leader
data_centerinfrastructureFrontier Lab LeaderResearcherSystems
VIEW DOSSIER →
🇨🇳🇺🇸
Danqi Chen

Danqi Chen

Nationality
Chinese-American

Organization
Princeton University

Position

Associate Professor of Computer Science, Princeton University; Associate Director, Princeton Language and Intelligence

Expertise
Natural Language ProcessingQuestion AnsweringRetrieval-Augmented GenerationLong-Context ModelsNLP
Education

BEng, Computer ScienceTsinghua University

PhD, Computer ScienceStanford University

Alignment

Academic centrist

Safety Stance

Focuses on building reliable and verifiable NLP systems. Advocates for rigorous evaluation of model capabilities.

Notable Work
Open-Domain Question Answering (DrQA)Neural Dependency ParsingRetrieval-Augmented GenerationLong-Context Language ModelsSimCSE
NLPAcademicResearcherLab LeaderSystems
WebsiteLinkedIn
VIEW DOSSIER →
Dan Zheng

Dan Zheng

Organization
Google DeepMind

Position

Research Engineer, Google DeepMind

Expertise
large modelsreasoningengineeringllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini-related research engineeringmath/reasoning projectslarge-model systems
engineeringreasoningllmsResearcher
Website
VIEW DOSSIER →
🇮🇱🇺🇸
Daphne Koller

Daphne Koller

Nationality
Israeli-American

Organization
insitro

Position

Founder & CEO, insitro

Expertise
Machine LearningProbabilistic Graphical ModelsComputational BiologyAI-driven Drug DiscoveryBayesian NetworksAI for Science
Education

BSc, Computer ScienceHebrew University of Jerusalem

MSc, Computer ScienceHebrew University of Jerusalem

PhD, Computer ScienceStanford University

Alignment

Pro-innovation, science-driven

Safety Stance

Optimistic about AI's transformative potential in science and healthcare. Believes responsible deployment requires domain expertise and rigorous validation. Advocates for AI as a tool to augment human capabilities rather than replace them, emphasizing collaboration between humans and machines.

Notable Work
Probabilistic Graphical Models (textbook)Bayesian network structure learningMulti-task learning for drug discoveryCoursera platform
CEOAI for ScienceAcademicFounderLab LeaderResearcherProduct
@DaphneKollerWebsiteLinkedIn
VIEW DOSSIER →
🇹🇷🇺🇸
Daron Acemoglu

Daron Acemoglu

Nationality
Turkish-American

Organization
MIT

Position

Institute Professor, MIT

Expertise
EconomicsPolitical EconomyTechnology & InequalityLabor EconomicsNobel Laureate
Education

BA, EconomicsUniversity of York

MSc, Econometrics and Mathematical EconomicsLondon School of Economics

PhD, EconomicsLondon School of Economics

Alignment

Institutionalist, pro-regulation

Safety Stance

Skeptical of the AI industry's self-governance. Argues AI is being deployed primarily to automate and surveil workers rather than augment them, concentrating wealth and power. Warns that without strong institutions and regulation, AI will deepen inequality. Says "there are choices that are political, as well as technical, about how we develop AI."

Notable Work
Why Nations Fail (2012)Power and Progress (2023)The Narrow Corridor (2019)The Turing Trap (with Brynjolfsson)Harms of AI (2024 NBER)
EconomicsAcademicNobel LaureateResearcherPolicy
@DAcemogluMITWebsiteLinkedIn
VIEW DOSSIER →
🇨🇦
David Ha

David Ha

Nationality
Canadian

Organization
Sakana AI

Position

Co-founder & CEO, Sakana AI

Expertise
Artificial IntelligenceEvolutionary ComputingCreative AIRecurrent Neural NetworksNature-Inspired AIFrontier Lab Founder
Education

BSc, Engineering ScienceUniversity of Toronto

PhD, Computer ScienceUniversity of Tokyo

Alignment

Open research advocate

Safety Stance

Believes in building beneficial AI through nature-inspired approaches that are inherently more robust and interpretable than brute-force scaling.

Notable Work
World ModelsWeight Agnostic Neural NetworksAI ScientistNeuroevolutionSketch-RNN
Frontier Lab FounderResearcherFounderCEOLab Leader
@hardmaruWebsiteLinkedIn
VIEW DOSSIER →
David Hershey

David Hershey

Organization
Anthropic

Position

Member of Technical Staff

Expertise
engineeringagentsevaluation
Alignment

Frontier lab operator

Safety Stance

Contributes to Anthropic technical delivery.

Notable Work
Effective harnesses for long-running agents
engineeringResearcher
VIEW DOSSIER →
🇺🇸
David Luan

David Luan

Nationality
American

Organization
Independent

Position

Former VP, Amazon AGI SF Lab (departed Feb 2026)

Expertise
Large Language ModelsAI AgentsAI EngineeringEx-OpenAIEx-Google
Education

BS, Applied Mathematics and Political ScienceYale University

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-OpenAIEx-GoogleAI AgentsLarge Language ModelsAI Engineering
Ex-OpenAIEx-GoogleAI AgentsFounderCEOLab LeaderResearcher
VIEW DOSSIER →
David Medina

David Medina

Organization
OpenAI

Expertise
research infrastructureimage generationoperator systemsresearch-infrastructuremultimodal-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT ImagesOperator
research-infrastructuremultimodal-researchneeds-reviewResearcherSystems
VIEW DOSSIER →
🇺🇸
David Patterson

David Patterson

Nationality
American

Organization
UC Berkeley / Google

Position

Pardee Professor of Computer Science Emeritus, UC Berkeley; Distinguished Engineer, Google

Expertise
Computer ArchitectureRISCRAIDHardware-Software Co-designTuring Award
Education

BA, MathematicsUniversity of California, Los Angeles

MS & PhD, Computer ScienceUniversity of California, Los Angeles

Alignment

Open-source hardware advocate

Safety Stance

Focuses on hardware efficiency and open standards. Believes open-source hardware (RISC-V) is critical for democratizing computing and preventing monopolistic control of AI infrastructure.

Notable Work
RISC architectureRAID storageRISC-V open ISATPU design at GoogleComputer Architecture textbook
SystemsAcademicTuring AwardResearcherOpen Source
WebsiteLinkedIn
VIEW DOSSIER →
David Saunders

David Saunders

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researcheconomic analysis
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIndia Country Brief: The Anthropic Economic IndexIntroducing Anthropic InterviewerHow AI Is Transforming Work at AnthropicAnthropic Economic Index report: Economic primitives
researchResearcherSafety
VIEW DOSSIER →
David Soria Parra

David Soria Parra

Organization
Anthropic

Position

Member of Technical Staff

Expertise
engineeringMCPdeveloper tools
Alignment

Frontier lab operator

Safety Stance

Contributes to Anthropic technical delivery.

Notable Work
Writing effective tools for AI agents
engineeringResearcher
VIEW DOSSIER →
David Yungmann

David Yungmann

Organization
xAI

Position

Data Center Site Ops

Expertise
data_centeroperations
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centeroperations
data_centeroperationsResearcher
LinkedIn
VIEW DOSSIER →
Deep Ganguli

Deep Ganguli

Organization
Anthropic

Position

Research / alignment leader

Expertise
AI safetyalignment sciencegovernance
Alignment

Policy and governance operator

Safety Stance

Publicly associated with Anthropic's alignment- and safety-focused research agenda.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerHow AI Is Transforming Work at Anthropic
researchResearcherSafetyPolicy
VIEW DOSSIER →
🇬🇧
Demis Hassabis

Demis Hassabis

Nationality
British

Organization
Google DeepMind

Position

Co-founder & CEO, Google DeepMind

Expertise
Artificial IntelligenceReinforcement LearningNeuroscienceWorld ModelsNobel Laureate
Education

PhD, Cognitive NeuroscienceUniversity College London

Alignment

Cautious accelerationist

Safety Stance

Pro-safety but believes in building AGI responsibly. Supports regulation. DeepMind has dedicated safety research teams.

Notable Work
AlphaGoAlphaFoldAlphaZeroGemini 3VeoSIMA 2
CEOResearcherNobel LaureateLab LeaderFounderFrontier Lab LeaderSafetyPolicy
@demishassabisWebsiteLinkedIn
VIEW DOSSIER →
Deniz Altınbüken

Deniz Altınbüken

Organization
Google DeepMind

Position

Research Engineer, Google DeepMind

Expertise
NLP systemsmachine learning engineeringlarge modelsengineeringnlpllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
language model engineeringNLP systemsfoundation-model infrastructure
engineeringnlpllmsResearcherSystems
Website
VIEW DOSSIER →
Denny Zhou

Denny Zhou

Organization
Google / Google DeepMind

Position

Reasoning Research Leader in the Google-DeepMind stack

Expertise
LLM reasoningfew-shot learningsystematic generalizationllmsreasoning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
reasoning-team leadershipprompting and reasoning methodsLM reasoning benchmarks
researchllmsreasoningResearcher
Website
VIEW DOSSIER →
Derek Chen

Derek Chen

Organization
OpenAI

Expertise
monitoring and responsetrust and safetydeploymentsafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
GPT-4 monitoring and responsedeep research deployment
safety-governancedeploymentneeds-reviewResearcherSafetyPolicyProduct
VIEW DOSSIER →
Derek Zhiyuan Cheng

Derek Zhiyuan Cheng

Organization
Google DeepMind

Position

Principal Software Engineer and Engineering Director, Google DeepMind

Expertise
recommendation systemsLLM product integrationdata representation learningengineeringrecsysapplied-ai
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ads and recommender MLproduct integrationlarge-scale representation learning
engineeringrecsysapplied-aiFrontier Lab LeaderResearcherSystemsProduct
Website
VIEW DOSSIER →
Dianne Penn

Dianne Penn

Organization
Anthropic

Position

Head of Product Management (Research)

Expertise
research productproduct management
Alignment

Safety-aligned researcher

Safety Stance

Bridges Anthropic research and deployment within the company's safety-first approach.

Notable Work
Frontier Lab Leaderresearch productproduct management
productresearchFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Donelle Cobb

Donelle Cobb

Organization
xAI

Position

Data Center Siteops

Expertise
data_centeroperations
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centeroperations
data_centeroperationsResearcher
LinkedIn
VIEW DOSSIER →
Dongqi Su

Dongqi Su

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Douglas Eck

Douglas Eck

Organization
Google DeepMind

Position

Senior Research Director, Google DeepMind

Expertise
generative mediamusic AImultimodal learningleadershipgenerative-mediamultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Magentagenerative musicimage-video-audio generation leadership
leadershipgenerative-mediamultimodalverifiedFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
Drew Bent

Drew Bent

Organization
Anthropic

Position

Education research contributor

Expertise
education researchAI fluency
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Education Report: The AI Fluency IndexAnthropic Education Report: How educators use Claude
education researchResearcherSafety
VIEW DOSSIER →
Ed H. Chi

Ed H. Chi

Organization
Google / Google DeepMind

Position

Distinguished Scientist in the Google-DeepMind stack

Expertise
recommendation systemsreinforcement learningdialog and robust MLrecsysrobustness
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
large-scale recommender systemsdialog modelingreliable ML
researchrecsysrobustnessResearcherSafetySystems
Website
VIEW DOSSIER →
Edward Chou

Edward Chou

Organization
xAI

Position

Member of Technical Staff

Expertise
mts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
mts
mlmtsResearcher
LinkedIn
VIEW DOSSIER →
Ekin Dogus Cubuk

Ekin Dogus Cubuk

Organization
Periodic Labs

Position

Co-Founder, Periodic Labs

Expertise
Materials ScienceAI for ScienceChemistryEx-DeepMind
Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Ex-DeepMindAI for ScienceMaterials ScienceChemistry
FounderEx-DeepMindAI for ScienceMaterials ScienceLab LeaderResearcher
VIEW DOSSIER →
Eleanor Dorfman

Eleanor Dorfman

Organization
Anthropic

Position

Head of Industries

Expertise
industry strategyenterprise AIcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports deployment under Anthropic's public safety-first framing.

Notable Work
commercialFrontier Lab Leaderindustry strategyenterprise AI
commercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Eli Collins

Eli Collins

Organization
Google DeepMind

Position

Vice President of Product, Google DeepMind

Expertise
AI product strategyGeminiresearch-to-product executionleadershipgemini
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Google DeepMind product leadershipGemini product strategyAI platform execution
productleadershipgeminiResearcherProduct
WebsiteLinkedIn
VIEW DOSSIER →
🇺🇸
Eliezer Yudkowsky

Eliezer Yudkowsky

Nationality
American

Organization
Machine Intelligence Research Institute (MIRI)

Position

Co-founder & Senior Research Fellow, MIRI

Expertise
AI AlignmentDecision TheoryAI SafetyRationality
Alignment

AI existential risk hawk

Safety Stance

The most prominent AI doom advocate. Believes current AI development trajectories will lead to human extinction. Argues that no one currently knows how to align a superintelligent AI and that building one without solving alignment first is civilizational suicide. Called for international regulation including potential airstrikes on rogue data centers.

Notable Work
AI Alignment theoryCoherent Extrapolated VolitionTimeless Decision TheoryThe SequencesIf Anyone Builds It, Everyone Dies
AI SafetyResearcherFounderLab LeaderSafetyPolicy
@ESYudkowskyWebsite
VIEW DOSSIER →
Elon Musk

Elon Musk

Organization
xAI

Position

CEO

Expertise
frontier_lab_leadership
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
frontier_lab_leadership
founderexecutivefrontier_lab_leadershipFounderCEOResearcher
@elonmusk
VIEW DOSSIER →
🇬🇧
Emad Mostaque

Emad Mostaque

Nationality
British-Bangladeshi

Organization
Schelling AI

Position

Founder, Schelling AI (decentralized AI)

Expertise
Generative AIOpen Source AIDecentralized AIOpen SourceStable Diffusion
Education

MA, Mathematics and Computer ScienceUniversity of Oxford

Alignment

Open-source builder

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Open SourceGenerative AIStable DiffusionOpen Source AIDecentralized AI
FounderOpen SourceGenerative AIStable DiffusionCEOLab LeaderResearcherInvestor
VIEW DOSSIER →
🇺🇸
Emily M. Bender

Emily M. Bender

Nationality
American

Organization
University of Washington

Position

Thomas L. and Margo G. Wyckoff Endowed Professor of Linguistics, University of Washington

Expertise
Computational LinguisticsNLPAI EthicsLanguage Technology
Education

AB, LinguisticsUniversity of California, Berkeley

MA, LinguisticsStanford University

PhD, LinguisticsStanford University

Alignment

AI skeptic, pro-regulation

Safety Stance

Deeply critical of large language models and the AI hype cycle. Argues LLMs do not understand language and that the industry overpromises capabilities. Advocates for accountability, transparency, and centering affected communities in AI development.

Notable Work
On the Dangers of Stochastic Parrots (2021)Climbing towards NLU: On Meaning, Form, and Understanding in the Age of DataData Statements for NLPThe AI Con (2025 book)
NLPAcademicAI EthicsLab LeaderResearcherPolicy
@emilymbenderWebsiteLinkedIn
VIEW DOSSIER →
Emily Pastewka

Emily Pastewka

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic IndexAnthropic Economic Index report: Economic primitives
economic researchResearcherSafety
VIEW DOSSIER →
E

Enhance Security Testing

Organization
OpenAI

Expertise
Artificial Intelligencedb-ingested
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
db-ingestedArtificial Intelligence
db-ingestedResearcher
VIEW DOSSIER →
🇺🇸
Eric Boyd

Eric Boyd

Nationality
American

Organization
Microsoft

Position

CVP of AI Platform, Microsoft

Expertise
Azure AIAI PlatformEnterprise AISearchCVPPlatform
Education

BS, Computer Science and MathematicsMIT

Alignment

Applied AI builder

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
CVPAzure AIEnterprise AIPlatformAI PlatformSearch
CVPAzure AIEnterprise AIPlatformLab LeaderResearcherProduct
VIEW DOSSIER →
🇦🇹
Eric Steinberger

Eric Steinberger

Nationality
Austrian

Organization
Magic

Position

Co-Founder & CEO, Magic AI

Expertise
Deep Reinforcement LearningLong-Context ModelsAGIAI CodingEx-Meta
Education

Attended, Computer ScienceUniversity of Cambridge

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Long-Term Memory (LTM) architecture100M-token context windows
FounderCEOEx-MetaAGIAI CodingLab LeaderResearcher
VIEW DOSSIER →
Eric Wallace

Eric Wallace

Organization
OpenAI

Expertise
model behaviorreasoningstructured outputsalignment-evalsreasoning-researchneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep researchGPT-4o mini
alignment-evalsreasoning-researchneeds-reviewResearcherSafety
VIEW DOSSIER →
🇺🇸
Erik Brynjolfsson

Erik Brynjolfsson

Nationality
American

Organization
Stanford University

Position

Jerry Yang and Akiko Yamazaki Professor, Stanford HAI; Director, Stanford Digital Economy Lab

Expertise
AI EconomicsDigital EconomyProductivityLabor MarketsEconomics
Education

BA/MA, Applied Mathematics and Decision SciencesHarvard University

PhD, Managerial EconomicsMIT

Alignment

Pragmatic techno-optimist

Safety Stance

Focuses on economic policy rather than existential risk. Warns that AI could increase inequality if not managed with deliberate policy choices. Advocates for "augmentation" (AI enhancing human capabilities) over "automation" (replacing humans). Coined "The Turing Trap" to argue against solely pursuing human-level AI.

Notable Work
The Second Machine AgeMachine, Platform, CrowdThe Turing TrapGenerative AI at Work (2023)Canaries in the Coal Mine (2025)
EconomicsAcademicFounderLab LeaderResearcherPolicySystemsProduct
@erikbrynWebsiteLinkedIn
VIEW DOSSIER →
Erik Schluntz

Erik Schluntz

Organization
Anthropic

Position

Member of Technical Staff

Expertise
agent engineeringdeveloper experienceengineering
Alignment

Frontier lab operator

Safety Stance

Supports Anthropic's agent and developer ecosystem.

Notable Work
Building effective agents
engineeringResearcher
VIEW DOSSIER →
Esin Durmus

Esin Durmus

Organization
Anthropic

Position

Research contributor

Expertise
AI safety evaluationhuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerHow AI Is Transforming Work at AnthropicAnthropic Education Report: How educators use Claude
researchResearcherSafety
VIEW DOSSIER →
Ethan Dixon

Ethan Dixon

Organization
Anthropic

Position

Applied AI contributor

Expertise
context engineeringapplied AI
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Effective context engineering for AI agents
applied AIResearcherSafety
VIEW DOSSIER →
Ethan Guttman

Ethan Guttman

Organization
xAI

Position

Software Engineer

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Ethan He

Ethan He

Organization
xAI

Position

Member of Technical Staff

Expertise
video_generationmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
video_generationmts
video_generationmtsResearcher
LinkedIn
VIEW DOSSIER →
Evan Mays

Evan Mays

Organization
OpenAI

Expertise
evaluationsafety systemspaperbenchsafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
PaperBenchdeep research safety systems
evaluationsafety-governanceneeds-reviewResearcherSafetyPolicySystems
VIEW DOSSIER →
Felipe Petroski Such

Felipe Petroski Such

Organization
OpenAI

Expertise
inference optimizationreliabilitygpt systemsdeployment-infrastructureneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4 inference optimization and reliabilityGPT-4o mini
deployment-infrastructurereliabilityneeds-reviewResearcherSystemsProduct
VIEW DOSSIER →
Fidji Simo

Fidji Simo

Organization
OpenAI

Position

CEO of Applications

Expertise
applicationsconsumer productsorganizational scalingexecutive-leadership
Alignment

Safety-aligned researcher

Safety Stance

Public role centers on execution and applications rather than technical safety research.

Notable Work
Applications leadership across ChatGPT and product commercialization
executive-leadershipapplicationsproductCEOResearcherSafetyProduct
VIEW DOSSIER →
Florian Scholz

Florian Scholz

Organization
Anthropic

Position

Engineering contributor

Expertise
multi-agent systemsproduct engineeringengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
How we built our multi-agent research system
engineeringResearcherSafetySystemsProduct
VIEW DOSSIER →
🇮🇹
Francesca Rossi

Francesca Rossi

Nationality
Italian

Organization
IBM Research

Position

IBM Fellow & AI Ethics Global Leader, IBM Research

Expertise
AI EthicsConstraint ProgrammingMulti-Agent SystemsAI Value Alignment
Education

BS/MS, Computer ScienceUniversity of Pisa

PhD, Computer ScienceUniversity of Pisa

Alignment

Pro-governance, industry self-regulation advocate

Safety Stance

Advocates for integrating ethical considerations into AI development from the start. Supports multi-stakeholder governance including industry, government, and civil society. Emphasizes that engineers must now understand ethics alongside technical skills.

Notable Work
Constraint ProgrammingAI Value AlignmentPreference ReasoningMulti-Agent Decision MakingRome Call for AI Ethics
AI EthicsResearcherAcademicSafetyPolicySystems
@frossi_tWebsiteLinkedIn
VIEW DOSSIER →
Francesco Mosconi

Francesco Mosconi

Organization
Anthropic

Position

Research contributor

Expertise
AI usage research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practice
researchResearcherSafety
VIEW DOSSIER →
Frederic Besse

Frederic Besse

Organization
Google DeepMind

Position

Senior Staff Research Engineer, Google DeepMind

Expertise
agentsresearch engineeringsimulation environmentsengineeringgames
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
SIMA and agentsgame-based training groundsresearch engineering leadership
engineeringagentsgamesResearcher
WebsiteLinkedIn
VIEW DOSSIER →
Gabriel Goh

Gabriel Goh

Organization
OpenAI

Position

Research Lead

Expertise
image generationmultimodal researchgenerative mediamultimodal-researchimage-generationresearch-lead
Alignment

Frontier lab operator

Safety Stance

Publicly visible mainly through multimodal research and release work.

Notable Work
ChatGPT Images4o Image GenerationDALL·E-era image research
multimodal-researchimage-generationresearch-leadFrontier Lab LeaderResearcher
VIEW DOSSIER →
Gabriel Nicholas

Gabriel Nicholas

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchsocietal impacts
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practice
researchResearcherSafety
VIEW DOSSIER →
Gabriel Pereyra

Gabriel Pereyra

Organization
Harvey AI

Position

Co-Founder & CTO, Harvey AI

Expertise
Deep LearningNLPLegal AIEx-DeepMindEx-Meta
Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-DeepMindEx-MetaLegal AIDeep LearningNLP
FounderCTOEx-DeepMindEx-MetaLegal AILab LeaderResearcher
VIEW DOSSIER →
🇺🇸
Gary Marcus

Gary Marcus

Nationality
American

Organization
Independent

Position

Professor Emeritus of Psychology and Neural Science, NYU; Author and AI Commentator

Expertise
Cognitive ScienceAI CriticismNeuroscienceNatural Language Understanding
Education

BA, Cognitive ScienceHampshire College

PhD, Cognitive ScienceMIT

Alignment

AI regulation advocate, skeptic of current approaches

Safety Stance

Believes current LLM approaches are fundamentally limited and unreliable. Advocates for hybrid neurosymbolic approaches. Pushes for AI regulation, increased public AI literacy, and well-funded public think tanks to assess AI risks. More concerned about near-term harms (misinformation, unreliability) than existential risk.

Notable Work
Algebraic MindRebooting AICritique of deep learning limitationsNeurosymbolic AI advocacy
Technical CommentatorAcademicFounderCEOLab LeaderResearcherPolicyProduct
@GaryMarcusWebsiteLinkedIn
VIEW DOSSIER →
🇨🇦
Geordie Rose

Geordie Rose

Nationality
Canadian

Organization
Sanctuary AI

Position

Co-Founder (departed Nov 2024)

Expertise
Quantum ComputingRoboticsEmbodied AGI
Education

BEng, Engineering PhysicsMcMaster University

PhD, Theoretical PhysicsUniversity of British Columbia

Alignment

Frontier lab operator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
Quantum ComputingRoboticsEmbodied AGI
FounderQuantum ComputingRoboticsEmbodied AGICEOLab LeaderResearcherSystems
VIEW DOSSIER →
George Dahl

George Dahl

Organization
Google / Google DeepMind

Position

Senior Research Scientist in the Google-DeepMind stack

Expertise
deep learningspeech recognitionchemical and biological MLdeep-learninghealth
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
early deep acoustic modelsmolecular MLrepresentation learning
researchdeep-learninghealthResearcher
Website
VIEW DOSSIER →
🇺🇸
Georges Harik

Georges Harik

Nationality
American

Organization
Humans&

Position

Co-Founder, Humans&

Expertise
SearchAdvertising TechnologyAI SystemsEx-GooglePioneer
Alignment

Frontier lab operator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-GooglePioneerSearchAdvertising TechnologyAI Systems
FounderEx-GooglePioneerLab LeaderResearcherSystems
VIEW DOSSIER →
Grace Yun

Grace Yun

Organization
Anthropic

Position

Research / product contributor

Expertise
human-AI interactionproduct research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Introducing Anthropic Interviewer
researchResearcherSafetyProduct
VIEW DOSSIER →
Grace Zhao

Grace Zhao

Organization
OpenAI

Expertise
safety systemsevaluationreasoning model monitoringalignment-evalssafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep research safety systemsGPT-5 contributors
alignment-evalssafety-governanceneeds-reviewResearcherSafetyPolicySystems
VIEW DOSSIER →
🇺🇸
Greg Brockman

Greg Brockman

Nationality
American

Organization
OpenAI

Position

Co-Founder & President, OpenAI

Expertise
AI InfrastructureSystems EngineeringSoftware ArchitectureScalable Computing
Education

Dropped out, Mathematics and Computer ScienceHarvard University

Dropped out, Computer ScienceMassachusetts Institute of Technology

Alignment

Pro-innovation, opposes restrictive AI regulation

Safety Stance

Believes in building AGI safely but prioritizes maintaining US technological leadership. Leading political efforts against restrictive AI legislation through a $100M+ Super PAC.

Notable Work
OpenAI infrastructure architectureScaling AI compute systemsStargate Project
CTOLab LeaderFounderFrontier Lab LeaderResearcherPolicySystems
@gdbWebsiteLinkedIn
VIEW DOSSIER →
Greg Corrado

Greg Corrado

Organization
Google / Google DeepMind

Position

Senior Research Scientist in the Google-DeepMind stack

Expertise
deep learningscalable machine learningneuroscience-inspired AIdeep-learning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
founding large-scale DNN work at Googlebrain-inspired computingmedical and linguistic ML
researchdeep-learningsystemsResearcherSystems
Website
VIEW DOSSIER →
Greg Yang

Greg Yang

Organization
xAI

Position

Co-founder (departed 2026 per reporting)

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Frontier Lab Leader
founderresearchFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇫🇷
Guillaume Lample

Guillaume Lample

Nationality
French

Organization
Mistral AI

Position

Co-founder & Chief Scientist, Mistral AI

Expertise
Natural Language ProcessingLarge Language ModelsUnsupervised Machine TranslationDeep LearningFrontier Founder
Education

MSc, Mathematics and Computer ScienceEcole Polytechnique

PhD, Artificial IntelligencePierre and Marie Curie University

Alignment

European tech sovereignty advocate

Safety Stance

Advocates for open-weight models as a path to safety through transparency. Believes smaller, fine-tuned models can match larger ones with better efficiency and control.

Notable Work
Unsupervised Machine TranslationCross-lingual Language Model PretrainingMistral 7BMixtral MoEMistral Large 3
Frontier FounderResearcherFounderFrontier Lab LeaderAcademicSafetyOpen Source
@GuillaumeLampleLinkedIn
VIEW DOSSIER →
Guillaume Princen

Guillaume Princen

Organization
Anthropic

Position

Head of EMEA

Expertise
EMEA expansionenterprise AIregional leadershipcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports deployment under Anthropic's public safety-first framing.

Notable Work
Anthropic regional leadership
regional leadershipcommercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Guillermo Christen

Guillermo Christen

Organization
Anthropic

Position

Safeguards Engineering

Expertise
safeguardssecurity engineeringsafety engineering
Alignment

Safety-aligned researcher

Safety Stance

Directly associated with Anthropic safeguards work.

Notable Work
safety engineeringsafeguardssecurity engineering
safety engineeringResearcherSafetyProduct
VIEW DOSSIER →
Guodong Zhang

Guodong Zhang

Organization
xAI

Position

Co-founder

Expertise
model_development
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
model_developmentFrontier Lab Leader
founderresearchmodel_developmentFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Haitang Hu

Haitang Hu

Organization
OpenAI

Expertise
gpt-4o minideep researchreasoningreasoning-researchfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4o minideep research
reasoning-researchfrontier-modelsneeds-reviewResearcher
VIEW DOSSIER →
Hanah Ho

Hanah Ho

Organization
Anthropic

Position

Education / economic contributor

Expertise
visualizationeconomic and education researcheducation research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic IndexAnthropic Education Report: The AI Fluency IndexIntroducing Anthropic Interviewer
education researchResearcherSafety
VIEW DOSSIER →
Hang Gao

Hang Gao

Organization
xAI

Position

Member of Technical Staff (departed 2026)

Expertise
engineeringgrok_imagine
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringgrok_imagine
engineeringgrok_imagineResearcher
VIEW DOSSIER →
Hannah Moran

Hannah Moran

Organization
Anthropic

Position

Applied AI

Expertise
applied AIcontext engineering
Alignment

Frontier lab operator

Safety Stance

Supports real-world deployment of Anthropic systems.

Notable Work
Effective context engineering for AI agentsCode with Claude 2025
applied AIResearcherSystemsProduct
VIEW DOSSIER →
Hannah Wong

Hannah Wong

Organization
OpenAI

Expertise
communications and research opsdeep researchoperator leadershipoperationsleadership-supportneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research leadershipOperator leadership
operationsleadership-supportneeds-reviewResearcher
VIEW DOSSIER →
Haofei Wang

Haofei Wang

Organization
X / xAI

Position

Head of engineering/product at X (reported) with xAI overlap

Expertise
x_platformengineering_lead
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
x_platformengineering_leadFrontier Lab Leader
x_platformengineering_leadFrontier Lab LeaderResearcherProduct
VIEW DOSSIER →
Haozhu Wang

Haozhu Wang

Organization
xAI

Position

Member of Technical Staff

Expertise
mts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
mts
researchmtsResearcher
LinkedIn
VIEW DOSSIER →
Hayden Warren

Hayden Warren

Organization
xAI

Position

Software Engineer

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Heather Schmidt

Heather Schmidt

Organization
OpenAI

Expertise
infrastructure managementdeploymentmultimodal systemsdeployment-infrastructureoperationsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4 infrastructure managementGPT-5 contributors
deployment-infrastructureoperationsneeds-reviewResearcherSystemsProduct
VIEW DOSSIER →
Heiga Zen

Heiga Zen

Organization
Google DeepMind

Position

Principal Scientist, Google DeepMind Japan

Expertise
speech synthesisspeech technologymachine learningspeechaudio
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
HTSspeech foundation workGemini audio-related work
researchspeechaudioResearcher
Website
VIEW DOSSIER →
Hidetoshi Tojo

Hidetoshi Tojo

Organization
Anthropic

Position

Head of Japan

Expertise
Japan expansionenterprise AIregional leadershipcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports deployment under Anthropic's public safety-first framing.

Notable Work
Anthropic regional leadership
regional leadershipcommercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Hossein Mobahi

Hossein Mobahi

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
representation learninggeneralizationoptimizationrepresentation-learning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
sharpness-aware optimization workrepresentation learninglarge-model generalization
researchoptimizationrepresentation-learningResearcher
Website
VIEW DOSSIER →
Hyeonwoo Noh

Hyeonwoo Noh

Organization
OpenAI

Expertise
agentic systemsmultimodal modelsoperator researchmultimodal-researchagentsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Operator researchGPT-4 data and training work
multimodal-researchagentsneeds-reviewResearcherSystems
VIEW DOSSIER →
Ignacio Baquero

Ignacio Baquero

Organization
xAI

Position

Safety

Expertise
trust_safety
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
trust_safety
safetytrust_safetyResearcherSafety
LinkedIn
VIEW DOSSIER →
Ilya Kostrikov

Ilya Kostrikov

Organization
OpenAI

Expertise
deep researchreinforcement learningreasoning systemsreasoning-researchreinforcement-learningneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research
reasoning-researchreinforcement-learningneeds-reviewResearcherSystems
VIEW DOSSIER →
🇮🇱🇨🇦
Ilya Sutskever

Ilya Sutskever

Nationality
Israeli-Canadian

Organization
Safe Superintelligence Inc. (SSI)

Position

Co-Founder & CEO, Safe Superintelligence Inc. (SSI)

Expertise
Deep LearningNeural Network TrainingLanguage ModelsAI SafetySuperintelligence
Education

PhD, Computer ScienceUniversity of Toronto

BSc, MathematicsUniversity of Toronto

Alignment

Safety-first, believes superintelligence is imminent

Safety Stance

Deeply committed to AI safety. Left OpenAI over safety concerns and founded SSI with the singular mission of building safe superintelligence. Believes superintelligence is the most important technical problem of our time and must be solved safely.

Notable Work
AlexNet (co-creator)Sequence to Sequence LearningGPT series (co-architect)Neural Machine Translation
ResearcherFounderSafetyCEOFrontier Lab LeaderProduct
@ilyasutWebsite
VIEW DOSSIER →
🇬🇷
Ioannis Antonoglou

Ioannis Antonoglou

Nationality
Greek

Organization
Reflection AI

Position

Co-Founder & CTO, Reflection AI

Expertise
Reinforcement LearningGame AIRLHFAutonomous AgentsEx-DeepMindAlphaGo
Education

MEng, Electrical & Computer EngineeringAristotle University of Thessaloniki

MSc, AI & Machine LearningUniversity of Edinburgh

PhD, Computer ScienceUniversity College London

Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
AlphaGoAlphaZeroMuZeroDQNRLHF for Gemini
FounderCTOEx-DeepMindAlphaGoReinforcement LearningLab LeaderResearcherSafety
VIEW DOSSIER →
Irina Ghose

Irina Ghose

Organization
Anthropic

Position

Managing Director of India

Expertise
India expansionenterprise AIregional leadershipcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports deployment under Anthropic's public safety-first framing.

Notable Work
Anthropic India expansion
regional leadershipcommercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Isa Fulford

Isa Fulford

Organization
OpenAI

Position

Member of Technical Staff

Expertise
post-training researchdeep researchreasoningreasoning-researchmember-of-technical-staffneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research
reasoning-researchmember-of-technical-staffneeds-reviewResearcher
VIEW DOSSIER →
Ivan Zd

Ivan Zd

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇨🇦
Ivan Zhang

Ivan Zhang

Nationality
Canadian

Organization
Cohere

Position

Co-founder, Cohere

Expertise
Machine LearningNatural Language ProcessingAI InfrastructureEnterprise AIFrontier Founder
Education

BSc (incomplete), Computer ScienceUniversity of Toronto

Alignment

Canadian tech ecosystem advocate

Safety Stance

Focused on enterprise-grade safety through grounded generation, data privacy, and deployment controls. Believes practical safety comes from building trustworthy enterprise products.

Notable Work
Enterprise NLP infrastructureCohere Command modelsRetrieval-augmented generation for enterprise
Frontier FounderFounderLab LeaderAcademicResearcherSafetySystemsProduct
@1vnzhLinkedIn
VIEW DOSSIER →
Jack Clark

Jack Clark

Organization
Anthropic

Position

Co-founder; policy and communications leader

Expertise
AI policycommunicationsfrontier AI discourse
Alignment

Policy and governance operator

Safety Stance

Publicly associated with Anthropic's safety-first and governance-oriented discourse.

Notable Work
Measuring AI agent autonomy in practiceAnthropic Economic Index reports
policyFounderFrontier Lab LeaderResearcherSafetyPolicy
VIEW DOSSIER →
Jack K.

Jack K.

Organization
xAI

Position

Program Manager

Expertise
program_management
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
program_management
program_managementResearcher
LinkedIn
VIEW DOSSIER →
Jack Parker-Holder

Jack Parker-Holder

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
world modelsopen-endednessAGIworld-modelsagi
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GenieGenie 3internet-scale world-model training
researchworld-modelsagiResearcher
Website
VIEW DOSSIER →
Jacob Menick

Jacob Menick

Organization
OpenAI

Expertise
pretraininggpt-4o minilanguage modelsfrontier-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4o pre-trainingGPT-4o mini
frontier-modelspretrainingneeds-reviewResearcher
VIEW DOSSIER →
Jake Eaton

Jake Eaton

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchresearch communication
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIndia Country Brief: The Anthropic Economic IndexHow AI Is Transforming Work at AnthropicHow AI assistance impacts the formation of coding skills
researchResearcherSafety
VIEW DOSSIER →
🇩🇪
Jakob Uszkoreit

Jakob Uszkoreit

Nationality
German

Organization
Inceptive

Position

Co-Founder & CEO, Inceptive

Expertise
TransformersComputational BiologyRNA DesignMachine LearningModel Inventor
Education

MS, Computer Science & MathematicsTechnische Universität Berlin

Alignment

Pragmatic technologist

Safety Stance

Focused on applying AI to beneficial domains like drug discovery and healthcare. Believes the most important safety question is ensuring AI is used for high-impact positive applications.

Notable Work
Attention Is All You Need (Transformer)RNA molecule design with AIAI-designed mRNA therapeutics
Model InventorFounderCEOLab LeaderAcademicResearcherSafety
WebsiteLinkedIn
VIEW DOSSIER →
James Betker

James Betker

Organization
OpenAI

Expertise
image generationmultimodal researchfrontier modelsmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
4o Image GenerationChatGPT Images
multimodal-researchimage-generationneeds-reviewResearcher
VIEW DOSSIER →
James Wells

James Wells

Organization
Sanctuary AI

Position

CEO, Sanctuary AI

Expertise
Robotics CommercializationHumanoid RobotsBusiness DevelopmentRoboticsEmbodied AGI
Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
RoboticsEmbodied AGIRobotics CommercializationHumanoid RobotsBusiness Development
CEORoboticsEmbodied AGILab LeaderResearcher
VIEW DOSSIER →
Jane Leibrock

Jane Leibrock

Organization
Anthropic

Position

Research methodology contributor

Expertise
research methodshuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Introducing Anthropic Interviewer
researchResearcherSafety
VIEW DOSSIER →
J

Janelle Gale

Organization
Meta

Position

Head of People

Expertise
Artificial Intelligence
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Frontier Lab LeaderArtificial Intelligence
Frontier Lab LeaderResearcher
VIEW DOSSIER →
Jared Birchall

Jared Birchall

Organization
xAI

Position

Operations/Finance & Legal oversight (per org chart reports)

Expertise
executive_opsfinancelegal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
executive_opsfinancelegal
executive_opsfinancelegalResearcher
VIEW DOSSIER →
Jared Mueller

Jared Mueller

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic IndexAnthropic Economic Index reportsHow AI Is Transforming Work at AnthropicIntroducing Anthropic Interviewer
economic researchResearcherSafety
VIEW DOSSIER →
Jason Jones

Jason Jones

Organization
Anthropic

Position

Education research contributor

Expertise
education research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Education Report: How educators use Claude
education researchResearcherSafety
VIEW DOSSIER →
Jason Kwon

Jason Kwon

Organization
OpenAI

Position

Chief Strategy Officer

Expertise
strategypolicylegal and governanceexecutive-leadership
Education

JDUC Berkeley Law

BAGeorgetown University

Alignment

Policy and governance operator

Safety Stance

Publicly associated with OpenAI governance, policy, and mission alignment rather than frontier research itself.

Notable Work
Corporate structure and governance strategyPolicy and legal oversight
executive-leadershipstrategypolicyFrontier Lab LeaderResearcherSafetyPolicy
VIEW DOSSIER →
🇬🇧
Jason Weston

Jason Weston

Nationality
British

Organization
Meta AI

Position

Research Scientist, Meta AI; Visiting Research Professor, NYU

Expertise
Natural Language ProcessingDialogue SystemsMemory NetworksMachine LearningNLP
Education

PhD, Machine LearningRoyal Holloway, University of London

Alignment

Open research advocate

Safety Stance

Advocates for open research and responsible dialogue systems. Focuses on building AI that can engage in safe, helpful conversations.

Notable Work
Memory NetworksParlAIBlenderBotDrQAEnd-to-End Memory NetworksUnified Architecture for NLP (with Collobert)
NLPResearcherAcademicSystemsProductOpen Source
@jasewestonWebsite
VIEW DOSSIER →
Jay Kreps

Jay Kreps

Organization
Anthropic

Position

Board Member

Expertise
corporate governancetechnical company buildingboardgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's public-benefit mission.

Notable Work
boardgovernancecorporate governancetechnical company building
boardgovernanceResearcherPolicy
VIEW DOSSIER →
Jeffrey Hui

Jeffrey Hui

Organization
Google DeepMind

Position

Research Engineer, Google DeepMind

Expertise
language modelsapplied MLengineeringllmsapplied-ai
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini-related engineeringlanguage model systemsapplied model development
engineeringllmsapplied-aiResearcherSystems
Website
VIEW DOSSIER →
Jeffrey Zhang

Jeffrey Zhang

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇹🇼🇺🇸
Jensen Huang

Jensen Huang

Nationality
Taiwanese-American

Organization
NVIDIA

Position

Founder, President & CEO, NVIDIA

Expertise
GPU ComputingAI InfrastructureSemiconductor DesignGPU
Education

BS, Electrical EngineeringOregon State University

MS, Electrical EngineeringStanford University

Alignment

Frontier lab operator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
GPUAI InfrastructureGPU ComputingSemiconductor Design
FounderCEOGPUAI InfrastructureLab LeaderResearcherSystems
VIEW DOSSIER →
Jeremy Crice

Jeremy Crice

Organization
xAI

Position

Security / Sales leader

Expertise
securitysales
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
securitysales
securitysalesResearcher
LinkedIn
VIEW DOSSIER →
Jeremy Hadfield

Jeremy Hadfield

Organization
Anthropic

Position

Applied AI

Expertise
applied AIagent systems
Alignment

Frontier lab operator

Safety Stance

Supports real-world deployment of Anthropic systems.

Notable Work
Effective context engineering for AI agentsHow we built our multi-agent research systemEffective harnesses for long-running agents
applied AIResearcherSystemsProduct
VIEW DOSSIER →
Jerome Swannack

Jerome Swannack

Organization
Anthropic

Position

MCP Product Engineering

Expertise
MCPdeveloper toolsengineering
Alignment

Frontier lab operator

Safety Stance

Supports Anthropic's developer tooling and agent infrastructure.

Notable Work
Builder Summit London talks
engineeringResearcherSystemsProduct
VIEW DOSSIER →
Jerry Hong

Jerry Hong

Organization
Anthropic

Position

Research / design contributor

Expertise
research designhuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic Interviewer
researchResearcherSafety
VIEW DOSSIER →
Jialin Wu

Jialin Wu

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
multimodal modelsvision-language systemsinstruction tuningmultimodalvision-language
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
PaLI-Xmultimodal generalist modelsOmni-SMoLA
researchmultimodalvision-languageResearcherSystems
Website
VIEW DOSSIER →
Jiaming Shen

Jiaming Shen

Organization
Google DeepMind

Position

Senior Research Scientist, Google DeepMind

Expertise
NLPdata miningreward modelingnlpreward-modeling
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
language understandingRMBoost-era reward modelingtext mining
researchnlpreward-modelingResearcher
Website
VIEW DOSSIER →
Jianfeng Wang

Jianfeng Wang

Organization
OpenAI

Expertise
vision-language modelsmultimodal systemsimage generationmultimodal-researchvision-languageneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images4o Image Generation
multimodal-researchvision-languageneeds-reviewResearcherSystems
VIEW DOSSIER →
Jie Bing

Jie Bing

Organization
xAI

Position

Engineering

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Jihui Yang

Jihui Yang

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇨🇳🇺🇸
Jim Fan

Jim Fan

Nationality
Chinese-American

Organization
NVIDIA

Position

Director of AI & Distinguished Scientist, NVIDIA

Expertise
Embodied AIRoboticsFoundation ModelsGeneralist AgentsEx-OpenAIDistinguished Scientist
Education

BS, Computer ScienceColumbia University

PhD, Computer ScienceStanford University

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Project GR00TMineDojoVoyagerFoundation Agent
DirectorEx-OpenAIRoboticsEmbodied AIDistinguished ScientistLab LeaderResearcher
Website
VIEW DOSSIER →
Jimmy M R.

Jimmy M R.

Organization
xAI

Position

Data Center Operations Technician

Expertise
data_centeroperations
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centeroperations
data_centeroperationsResearcher
LinkedIn
VIEW DOSSIER →
Jiri De Jonghe

Jiri De Jonghe

Organization
Anthropic

Position

Applied AI

Expertise
applied AIagent evaluation
Alignment

Frontier lab operator

Safety Stance

Supports real-world deployment of Anthropic systems.

Notable Work
Builder Summit London
applied AIResearcherSystemsProduct
VIEW DOSSIER →
🇮🇳🇺🇸
Jitendra Malik

Jitendra Malik

Nationality
Indian-American

Organization
UC Berkeley / Meta

Position

Arthur J. Chick Professor of EECS, UC Berkeley; Research Director, Meta FAIR

Expertise
Computer VisionMachine LearningRoboticsComputational Neuroscience
Education

BTech, Electrical EngineeringIndian Institute of Technology Kanpur

PhD, Computer ScienceStanford University

Alignment

Academic pragmatist

Safety Stance

Focuses on building robust and reliable vision systems. Primarily an empiricist who lets research guide policy views.

Notable Work
Normalized CutsR-CNN (co-creator)Shape ContextsAnisotropic DiffusionHigh Dynamic Range ImagingDensePose
Computer VisionAcademicResearcherFrontier Lab LeaderPolicySystems
@JitendraMalikCVWebsiteLinkedIn
VIEW DOSSIER →
Joanne Jang

Joanne Jang

Organization
OpenAI

Position

GM, OpenAI Labs

Expertise
productchatgptdeveloper and lab productsneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
GPT-4 product leadershipOpenAI Labscollective alignment acknowledgements
productchatgptneeds-reviewResearcherSafetyProduct
VIEW DOSSIER →
Joaquin Quiñonero Candela

Joaquin Quiñonero Candela

Organization
OpenAI

Position

Head of Recruiting

Expertise
preparednesstalent strategytechnical hiringtalentneeds-review
Alignment

Frontier lab operator

Safety Stance

Previously led preparedness work focused on catastrophic-risk mitigation.

Notable Work
technical recruiting leadershipHealthBench authorship
preparednesstalentneeds-reviewFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇨🇦
Joelle Pineau

Joelle Pineau

Nationality
Canadian

Organization
Cohere

Position

Chief AI Officer, Cohere; Associate Professor, McGill University

Expertise
Reinforcement LearningNatural Language ProcessingRoboticsReproducibility in MLHealthcare AI
Education

PhD, RoboticsCarnegie Mellon University

BASc, Systems Design EngineeringUniversity of Waterloo

Alignment

Open research advocate, supports responsible AI governance

Safety Stance

Strong advocate for open research and reproducibility as a path to safer AI. Believes transparency in model development and evaluation is essential for building trustworthy AI systems.

Notable Work
Reproducibility in MLLlama models (at Meta FAIR)Dialogue systemsHealthcare roboticsPartially Observable MDPs
Chief ScientistResearcherAcademicLab LeaderPolicySystemsProduct
@jpineau1Website
VIEW DOSSIER →
Johannes Heidecke

Johannes Heidecke

Organization
OpenAI

Expertise
safety systemsevaluationreasoning model oversightalignment-evalssafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Associated with evaluations, alignment-adjacent publications, and safety-relevant leadership contexts.

Notable Work
SWE-Lancerdeep research leadershipcollective alignment
alignment-evalssafety-governanceneeds-reviewResearcherSafetyPolicySystems
VIEW DOSSIER →
John Giannandrea

John Giannandrea

Nationality
Scottish

Organization
Apple

Position

Former SVP of Machine Learning & AI Strategy, Apple (retiring Spring 2026)

Expertise
SearchMachine LearningAI StrategyKnowledge GraphsSVPEx-Google
Education

BSc, Computer ScienceUniversity of Strathclyde

Alignment

Frontier lab operator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
SVPEx-GoogleAI StrategySearchMachine LearningKnowledge Graphs
SVPEx-GoogleAI StrategySearchFounderLab LeaderResearcherSystems
VIEW DOSSIER →
🇺🇸
John Jumper

John Jumper

Nationality
American

Organization
Google DeepMind / Isomorphic Labs

Position

Director, Google DeepMind; Director, Isomorphic Labs

Expertise
Protein Structure PredictionComputational BiologyMachine LearningTheoretical ChemistryAI for ScienceNobel Laureate
Education

PhD, Theoretical ChemistryUniversity of Chicago

MPhil, Theoretical Condensed Matter PhysicsUniversity of Cambridge

BS, Physics and MathematicsVanderbilt University

Alignment

Focused on scientific applications of AI

Safety Stance

Focused on beneficial applications of AI to science. Believes AI has transformative potential for drug discovery and biological understanding. Supports responsible deployment of AI in scientific domains.

Notable Work
AlphaFoldAlphaFold 2AlphaFold Protein Structure DatabaseAI for structural biology
ResearcherNobel LaureateAI for ScienceFrontier Lab LeaderAcademicProduct
Website
VIEW DOSSIER →
John Mullan

John Mullan

Organization
xAI

Position

Co-founder of Hotshot; joined xAI via acquisition

Expertise
acquisitionvideo_generation
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
acquisitionvideo_generationFrontier Lab Leader
acquisitionvideo_generationFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇺🇸
Josh Albrecht

Josh Albrecht

Nationality
American

Organization
Imbue

Position

Co-Founder & CTO, Imbue

Expertise
Machine LearningAI ReasoningTraining DynamicsHyperparameter OptimizationAGI
Education

BS, Computer ScienceUniversity of Pittsburgh

MS, Computer ScienceUniversity of Pittsburgh

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
CARBS hyperparameter optimizerTraining dynamics of self-supervised learning
FounderCTOAGIResearcherLab Leader
Website
VIEW DOSSIER →
Josh Tobin

Josh Tobin

Organization
OpenAI

Expertise
research leadershipreasoning systemsdeep researchresearch-leadershipreasoning-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research leadershipHealthBench
research-leadershipreasoning-researchneeds-reviewResearcherSystems
VIEW DOSSIER →
Joshua Achiam

Joshua Achiam

Organization
OpenAI

Position

Chief Futurist / Head of Mission Alignment

Expertise
mission alignmentAI safetypolicy communicationsafety-governancemission-alignmentpublic-intellectual
Education

PhDEECS, UC Berkeley

BSPhysics, University of Florida

BSAerospace Engineering, University of Florida

Alignment

Policy and governance operator

Safety Stance

Explicitly frames AI safety as a sociotechnical challenge requiring democratic input and mission alignment.

Notable Work
AI safety researchAI impacts researchSpinning Up in Deep RL
safety-governancemission-alignmentpublic-intellectualFrontier Lab LeaderResearcherSafetyPolicy
VIEW DOSSIER →
🇺🇸
Joy Buolamwini

Joy Buolamwini

Nationality
Ghanaian-American

Organization
Algorithmic Justice League

Position

Founder & Executive Director, Algorithmic Justice League

Expertise
AI EthicsAlgorithmic BiasFacial RecognitionComputer Vision
Education

BS, Computer ScienceGeorgia Institute of Technology

MSc, Learning and TechnologyUniversity of Oxford

MS, Media Arts and SciencesMIT Media Lab

PhD, Media Arts and SciencesMIT Media Lab

Alignment

Civil rights advocate, pro-regulation

Safety Stance

Leading voice against algorithmic discrimination. Focuses on the real-world harms of biased AI systems, particularly on communities of color. Advocates for moratoriums on facial recognition technology and stronger AI accountability laws.

Notable Work
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender ClassificationActionable Auditing: Investigating the Impact of Publicly Naming Biased Performance ResultsFacing the Coded Gaze
AI EthicsResearcherFounderLab LeaderAcademicInvestorPolicySystems
@jovialjoyWebsiteLinkedIn
VIEW DOSSIER →
🇮🇱🇺🇸
Judea Pearl

Judea Pearl

Nationality
Israeli-American

Organization
UCLA

Position

Professor of Computer Science and Statistics, UCLA; Director, Cognitive Systems Laboratory

Expertise
Causal InferenceBayesian NetworksProbabilistic ReasoningPhilosophy of ScienceArtificial IntelligenceTuring Award
Education

PhD, Electrical EngineeringPolytechnic Institute of Brooklyn

MS, PhysicsRutgers University

Alignment

Academic, focused on advancing scientific methodology

Safety Stance

Believes current AI lacks true understanding because it cannot reason about cause and effect. Argues that without causal reasoning, AI systems remain fundamentally limited and potentially unreliable.

Notable Work
Bayesian NetworksDo-CalculusCausal Inference FrameworkStructural Causal ModelsThe Book of Why
AcademicTuring AwardResearcherLab LeaderSystems
@yudapearlWebsiteLinkedIn
VIEW DOSSIER →
Judy Shen

Judy Shen

Organization
Anthropic

Position

Research contributor

Expertise
skills researchAI usage research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceHow AI assistance impacts the formation of coding skills
researchResearcherSafety
VIEW DOSSIER →
🇫🇷
Julien Chaumond

Julien Chaumond

Nationality
French

Organization
Hugging Face

Position

Co-founder & CTO, Hugging Face

Expertise
Open Source AIML InfrastructureNatural Language ProcessingModel DeploymentOpen Source Leader
Education

MSc, Applied MathematicsEcole Polytechnique

MSc, Computer ScienceTelecom Paris

MS, Electrical Engineering and Computer ScienceStanford University

Alignment

Open-source AI advocate

Safety Stance

Believes open-source and community-driven AI development is the safest path. Advocates for democratizing access to AI models and making them inspectable by everyone.

Notable Work
Transformers LibraryHugging Face HubTokenizersAcceleratePEFT
Open Source LeaderCTOFounderLab LeaderResearcherSystemsProductOpen Source
@julien_cWebsiteLinkedIn
VIEW DOSSIER →
Jun Shern Chan

Jun Shern Chan

Organization
Anthropic

Position

Research contributor

Expertise
AI usage research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practice
researchResearcherSafety
VIEW DOSSIER →
Justin Young

Justin Young

Organization
Anthropic

Position

Engineering contributor

Expertise
agent harnessesdeveloper toolingengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Effective harnesses for long-running agents
engineeringResearcherSafety
VIEW DOSSIER →
Kai Musk

Kai Musk

Organization
xAI

Position

Engineering intern (per org chart reports)

Expertise
intern
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
intern
internResearcher
VIEW DOSSIER →
🇨🇳🇺🇸
Kanjun Qiu

Kanjun Qiu

Nationality
Chinese-American

Organization
Imbue

Position

Co-Founder & CEO, Imbue

Expertise
AGIAI ReasoningAI AgentsEntrepreneurship
Education

BS, Electrical Engineering & Computer ScienceMIT

MS, Electrical Engineering & Computer ScienceMIT

Alignment

Capital allocator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
AGIAI AgentsAI ReasoningEntrepreneurship
FounderCEOAGIAI AgentsLab LeaderResearcherInvestor
VIEW DOSSIER →
🇬🇧
Karen Simonyan

Karen Simonyan

Nationality
British

Organization
Microsoft AI

Position

Chief Scientist, Microsoft AI

Expertise
Computer VisionDeep LearningGenerative AISpeech Synthesis
Education

PhD, Computer VisionUniversity of Oxford

Alignment

Industry pragmatist

Safety Stance

Works within Microsoft's responsible AI framework. Career trajectory from DeepMind to Inflection to Microsoft suggests focus on deploying AI safely at scale.

Notable Work
VGGNet (Very Deep Convolutional Networks)WaveNetAlphaGo ZeroAlphaZero
Computer VisionResearcherFounderLab Leader
Website
VIEW DOSSIER →
Karthikeyan Shanmugam

Karthikeyan Shanmugam

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind India

Expertise
machine learninginformation theoryoptimizationtheory
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational ML researchefficient learning methodsapplied theory
researchtheoryoptimizationResearcher
Website
VIEW DOSSIER →
Kashyap Murali

Kashyap Murali

Organization
Anthropic

Position

Claude Code Product Engineering

Expertise
coding agentsdeveloper toolsengineering
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic's developer tooling within its safety-first framing.

Notable Work
Builder Summit Bengaluru
engineeringResearcherSafetyProduct
VIEW DOSSIER →
🇦🇺
Kate Crawford

Kate Crawford

Nationality
Australian

Organization
USC Annenberg / Microsoft Research

Position

Research Professor, USC Annenberg; Senior Principal Researcher, Microsoft Research

Expertise
AI EthicsAI and SocietyPolitical Economy of AIData JusticeScience and Technology Studies
Education

PhD, Media StudiesUniversity of Sydney

Alignment

Critical scholar, progressive

Safety Stance

Focuses on the political economy and material costs of AI rather than existential risk. Argues AI systems encode existing power structures and extractive practices. Advocates for examining who benefits and who is harmed by AI deployment, including environmental costs of compute infrastructure.

Notable Work
Atlas of AICalculating Empires (with Vladan Joler)Anatomy of an AI SystemKnowing Machines ProjectExcavating AI
AI EthicsAcademicResearcherFounderLab LeaderSystemsProduct
@katecrawfordWebsiteLinkedIn
VIEW DOSSIER →
Kate Earle Jensen

Kate Earle Jensen

Organization
Anthropic

Position

Head of Americas

Expertise
Americas expansionsales and partnershipsregional leadershipcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports deployment under Anthropic's public safety-first framing.

Notable Work
Anthropic regional leadership
regional leadershipcommercialFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Katelyn Lesse

Katelyn Lesse

Organization
Anthropic

Position

Head of API Engineering

Expertise
API engineeringdeveloper platformengineeringleadership
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic's developer platform within its reliability and safety framing.

Notable Work
Anthropic API leadership
engineeringleadershipFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Keir Bradwell

Keir Bradwell

Organization
Anthropic

Position

Research / communications contributor

Expertise
research communicationeducation and economic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Anthropic Education Report: The AI Fluency IndexAnthropic Economic Index reports
researchResearcherSafety
VIEW DOSSIER →
Ken Aizawa

Ken Aizawa

Organization
Anthropic

Position

Engineering contributor

Expertise
developer toolsagent toolingengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Writing effective tools for AI agents
engineeringResearcherSafety
VIEW DOSSIER →
Ken Chu

Ken Chu

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇺🇸
Ken Goldberg

Ken Goldberg

Nationality
American

Organization
UC Berkeley

Position

Professor of IEOR and EECS, UC Berkeley; Director of AUTOLAB

Expertise
RoboticsAutomationRobot GraspingRobot-Assisted Surgery
Education

BS, Electrical Engineering and EconomicsUniversity of Pennsylvania

PhD, Computer ScienceCarnegie Mellon University

Alignment

Research-focused

Safety Stance

Advocates for closing the "data gap" in robotics — the disconnect between simulation and real-world robot performance. Focuses on practical, deployable robot learning.

Notable Work
Dex-Net (robot grasping)Cloud RoboticsTelegarden (1995, pioneering internet robot)Fog RoboticsRobot-Assisted Surgery
RoboticsAcademicLab LeaderResearcher
@Ken_GoldbergWebsiteLinkedIn
VIEW DOSSIER →
Kenji Hata

Kenji Hata

Organization
OpenAI

Expertise
multimodal modelsvisionimage generationmultimodal-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images4o Image Generation
multimodal-researchvisionneeds-reviewResearcher
VIEW DOSSIER →
Kenneth Lien

Kenneth Lien

Organization
Anthropic

Position

Engineering contributor

Expertise
multi-agent systemsproduct engineeringengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
How we built our multi-agent research system
engineeringResearcherSafetySystemsProduct
VIEW DOSSIER →
Kenny Zhu (kzu)

Kenny Zhu (kzu)

Organization
xAI

Position

Proto co-author (GitHub)

Expertise
developer_experience
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experience
developer_experienceResearcher
VIEW DOSSIER →
Kevin K. Shah

Kevin K. Shah

Organization
xAI

Position

Specialist Team Lead (Grok Imagine)

Expertise
grok_imagineoperations
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
grok_imagineoperationsFrontier Lab Leader
grok_imagineoperationsFrontier Lab LeaderResearcher
LinkedIn
VIEW DOSSIER →
Kevin Liu

Kevin Liu

Organization
OpenAI

Expertise
operator systemsdeep researchsafety systemsagentsalignment-evalsneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Operatordeep research
agentsalignment-evalsneeds-reviewResearcherSafetySystems
VIEW DOSSIER →
Kevin Lu

Kevin Lu

Organization
OpenAI

Expertise
gpt-4o minireasoningmodel researchfrontier-modelsreasoning-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4o mini
frontier-modelsreasoning-researchneeds-reviewResearcher
VIEW DOSSIER →
Kevin Murphy

Kevin Murphy

Organization
Google / Google DeepMind

Position

Senior Research Scientist in the Google-DeepMind stack

Expertise
probabilistic machine learningvision-language modelsAI for scienceprobabilistic-mlmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
probabilistic MLvideo and multimodal generationML textbooks and synthesis
researchprobabilistic-mlmultimodalResearcher
Website
VIEW DOSSIER →
🇺🇸
Kevin Scott

Kevin Scott

Nationality
American

Organization
Microsoft

Position

CTO & EVP of Technology & Research, Microsoft

Expertise
AI InfrastructureTechnology StrategyResearch ManagementEx-LinkedInEx-Google
Education

BS, Computer ScienceLynchburg College

MS, Computer ScienceWake Forest University

Alignment

Research-focused technologist

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
AI InfrastructureEx-LinkedInEx-GoogleTechnology StrategyResearch Management
CTOAI InfrastructureEx-LinkedInEx-GoogleLab LeaderResearcherSystems
@kevin_scott
VIEW DOSSIER →
Kevin Weil

Kevin Weil

Organization
OpenAI

Position

Chief Product Officer

Expertise
product leadershipdeveloper platformsconsumer AIexecutive-leadershipdeveloper-platforms
Alignment

Frontier lab operator

Safety Stance

Publicly tied to productization of frontier systems under OpenAI's deployment framework.

Notable Work
Product leadership for ChatGPT, API, and consumer/business AI experiences
executive-leadershipproductdeveloper-platformsFrontier Lab LeaderResearcherSystemsProduct
VIEW DOSSIER →
k

kian-katanforoosh

Organization
PRAGMATISM

Expertise
Artificial Intelligence
Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Artificial Intelligence
Researcher
VIEW DOSSIER →
Kim Withee

Kim Withee

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic IndexAnthropic Economic Index report: Economic primitives
economic researchResearcherSafety
VIEW DOSSIER →
Kory Mathewson

Kory Mathewson

Organization
Google DeepMind

Position

Staff Research Scientist, Google DeepMind

Expertise
computational creativityhuman-machine interactioninteractive MLcreativityhci
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AI for storytellingcreative co-creation systemshuman-in-the-loop RL
researchcreativityhciResearcherSystems
Website
VIEW DOSSIER →
Kristen Swanson

Kristen Swanson

Organization
Anthropic

Position

Education research contributor

Expertise
AI fluencyeducation research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Education Report: The AI Fluency Index
education researchFrontier Lab LeaderResearcherSafety
VIEW DOSSIER →
Kristina P.

Kristina P.

Organization
xAI

Position

Operations Manager, xAI Safety

Expertise
operations
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
operations
safetyoperationsResearcherSafety
LinkedIn
VIEW DOSSIER →
Krisztian Balog

Krisztian Balog

Organization
Google DeepMind

Position

Staff Research Scientist, Google DeepMind

Expertise
conversational information accessevaluationNLPnlp
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
conversational searchevaluation methodsinformation access research
researchnlpevaluationResearcher
Website
VIEW DOSSIER →
Kshitij Gupta

Kshitij Gupta

Organization
OpenAI

Expertise
image generationmultimodal researchvision systemsmultimodal-researchvisionneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images4o Image Generation
multimodal-researchvisionneeds-reviewResearcherSystems
VIEW DOSSIER →
Kuang-Huei Lee

Kuang-Huei Lee

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
agentsroboticslarge language models
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
artificial agentsrobotics + LLMsmultimodal agent research
researchagentsroboticsResearcher
Website
VIEW DOSSIER →
Kunal Handa

Kunal Handa

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchhuman-AI interactionevaluation
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerAnthropic Education Report: How educators use Claude
researchResearcherSafety
VIEW DOSSIER →
Kyle Kosic

Kyle Kosic

Organization
xAI

Position

Co-founder (departed 2024 per reporting)

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringFrontier Lab Leader
founderengineeringFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇰🇷
Kyunghyun Cho

Kyunghyun Cho

Nationality
South Korean

Organization
New York University

Position

Glen de Vries Professor of Health Statistics and Professor of Computer Science & Data Science, NYU; Co-Head, Global AI Frontier Lab

Expertise
Natural Language ProcessingNeural Machine TranslationDeep LearningSequence ModelingNLP
Education

BS, Computer ScienceKAIST

MSc, Machine Learning and Data MiningAalto University

DSc, Computer ScienceAalto University

Alignment

Academic centrist

Safety Stance

Supports responsible AI development with emphasis on reproducibility and scientific rigor.

Notable Work
Gated Recurrent Unit (GRU)Neural Machine Translation with AttentionSequence-to-Sequence LearningEncoder-Decoder Architecture
NLPAcademicResearcher
@kchonycWebsiteLinkedIn
VIEW DOSSIER →
Leo Liu

Leo Liu

Organization
OpenAI

Expertise
deep researchlanguage modelsevaluationreasoning-researchlanguage-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research
reasoning-researchlanguage-modelsneeds-reviewResearcher
VIEW DOSSIER →
🇺🇸
Leslie Kaelbling

Leslie Kaelbling

Nationality
American

Organization
MIT

Position

Panasonic Professor of Computer Science and Engineering, MIT

Expertise
Reinforcement LearningRoboticsPlanning Under UncertaintyDecision-Making
Education

AB, PhilosophyStanford University

PhD, Computer ScienceStanford University

Alignment

Academic pragmatist

Safety Stance

Focuses on building reliable and predictable robot behavior. Emphasizes formal methods and principled approaches to decision-making in uncertain environments.

Notable Work
Reinforcement learning for embedded systemsPOMDPs for roboticsPRoC3S (LLM-based robot planning)Hierarchical planning under uncertainty
AcademicResearcherRoboticsFounderLab Leader
Website
VIEW DOSSIER →
Liam Fedus

Liam Fedus

Organization
OpenAI

Expertise
post-trainingmodel behaviorfrontier-model leadershipresearch-leadershipneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4o post-trainingOpenAI o1 leadershipOperator leadership
research-leadershippost-trainingneeds-reviewResearcher
VIEW DOSSIER →
Lianmin Zheng

Lianmin Zheng

Organization
xAI

Position

Member of Technical Staff

Expertise
mts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
mts
researchmtsResearcher
LinkedIn
VIEW DOSSIER →
Li Jing

Li Jing

Organization
OpenAI

Expertise
image generationmultimodal researchvisionmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
4o Image Generationo3/o4 contributors
multimodal-researchimage-generationneeds-reviewResearcher
VIEW DOSSIER →
Lily Lim

Lily Lim

Organization
xAI

Position

General Counsel

Expertise
legalgovernance
Alignment

Policy and governance operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
legalgovernance
legalgovernanceResearcherPolicy
LinkedIn
VIEW DOSSIER →
Lisa Crofoot

Lisa Crofoot

Organization
Anthropic

Position

Research Product Manager

Expertise
research productAI applications
Alignment

Frontier lab operator

Safety Stance

Helps connect Anthropic research and product deployment.

Notable Work
research productAI applications
productResearcherProduct
VIEW DOSSIER →
Lora Aroyo

Lora Aroyo

Organization
Google DeepMind

Position

Senior Research Scientist and Team Lead, Google DeepMind

Expertise
evaluationsafety benchmarkingdata qualityresponsible-ai
Alignment

Safety-aligned researcher

Safety Stance

Explicitly focused on evaluation, data quality, and safety benchmarking.

Notable Work
MLCommons safety benchmarkingdata-centric evaluationresponsible model assessment
responsible-aievaluationsafetyFrontier Lab LeaderResearcherSafety
Website
VIEW DOSSIER →
Lorenz Kuhn

Lorenz Kuhn

Organization
OpenAI

Expertise
deep researchlanguage modelsreasoningreasoning-researchlanguage-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep research
reasoning-researchlanguage-modelsneeds-reviewResearcher
VIEW DOSSIER →
Louis Feuvrier

Louis Feuvrier

Organization
OpenAI

Expertise
deep researchoperator systemslanguage modelsreasoning-researchagentsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchOperator
reasoning-researchagentsneeds-reviewResearcherSystems
VIEW DOSSIER →
Lucas Dixon

Lucas Dixon

Organization
Google DeepMind

Position

Director of Research, Google DeepMind

Expertise
interpretabilitymodel controlon-device and sovereign AIresponsible-aileadershipverified
Alignment

Safety-aligned researcher

Safety Stance

Explicitly focused on interpreting, controlling, and evaluating frontier models.

Notable Work
PAIR co-leadershiptoxicity and safety toolinginterpretability research
responsible-aiinterpretabilityleadershipverifiedFrontier Lab LeaderResearcherSafetySystems
Website
VIEW DOSSIER →
Lu Liu

Lu Liu

Organization
OpenAI

Expertise
image generationmultimodal systemsresearchmultimodal-researchimage-generationneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
ChatGPT Images4o Image Generation
multimodal-researchimage-generationneeds-reviewResearcherSystems
VIEW DOSSIER →
Maggie Vo

Maggie Vo

Organization
Anthropic

Position

Founder and Lead, Education Team

Expertise
AI educationAI fluencyeducation strategyeducationleadership
Alignment

Frontier lab operator

Safety Stance

Publicly associated with Anthropic's emphasis on safe and effective human-AI collaboration.

Notable Work
Anthropic education initiativesAI Fluency course
educationleadershipFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Manish Gupta

Manish Gupta

Organization
Google DeepMind

Position

Senior Director, Google DeepMind

Expertise
research leadershipapplied AIAI systemsleadershipapplied-aiverified
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AI leadership in India and Japanindustrial AI deploymentresearch management
leadershipsystemsapplied-aiverifiedFrontier Lab LeaderResearcherSystems
Website
VIEW DOSSIER →
Manuel Kroiss

Manuel Kroiss

Organization
xAI

Position

Co-founder

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringFrontier Lab Leader
founderengineeringFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Marc Najork

Marc Najork

Organization
Google DeepMind

Position

Distinguished Research Scientist, Google DeepMind

Expertise
information retrievalweb-scale systemsapplied machine learninginformation-retrieval
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
search and web systemslarge-scale retrievalresearch leadership
researchinformation-retrievalsystemsResearcherSystems
Website
VIEW DOSSIER →
Marco Fornoni

Marco Fornoni

Organization
Google DeepMind

Position

Staff Engineer, Google DeepMind

Expertise
AI systemsengineeringlarge-scale infrastructureinfrastructure
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
research engineeringmodel systemsproduction-scale AI infrastructure
engineeringsystemsinfrastructureResearcherSystems
Website
VIEW DOSSIER →
🇺🇸
Margaret Mitchell

Margaret Mitchell

Nationality
American

Organization
Hugging Face

Position

Chief Ethics Scientist, Hugging Face

Expertise
AI EthicsNLPMachine Learning FairnessComputer Vision
Education

BA, LinguisticsReed College

MS, Computational LinguisticsUniversity of Washington

PhD, Computer ScienceUniversity of Aberdeen

Alignment

AI ethics advocate, open-source proponent

Safety Stance

Strong advocate for AI accountability and transparency. Focuses on bias mitigation, fairness, and the disproportionate impact of AI on marginalized communities. Pushes for open, auditable AI systems.

Notable Work
Model Cards for Model ReportingGender Shades (co-author)On the Dangers of Stochastic Parrots (co-author)Vision-to-Language Generation
AI EthicsResearcherLab LeaderSystemsOpen Source
@mmitchell_aiWebsiteLinkedIn
VIEW DOSSIER →
Mariano-Florentino Cuéllar

Mariano-Florentino Cuéllar

Organization
Anthropic

Position

Long-Term Benefit Trust Trustee

Expertise
public-benefit governancepolicytrustgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's long-term public-benefit mission.

Notable Work
trustgovernancepublic-benefit governance
trustgovernanceResearcherPolicy
VIEW DOSSIER →
Mario Lucic

Mario Lucic

Organization
Google DeepMind

Position

Senior Staff Research Scientist, Google DeepMind

Expertise
video understandingmultimodal modelsfoundation modelsmultimodalvideo
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini video and audio-video understandinglarge-scale multimodal systemsdata selection research
researchmultimodalvideoResearcherSystems
Website
VIEW DOSSIER →
Mark (mark-xai)

Mark (mark-xai)

Organization
xAI

Position

SDK/proto contributor (xai-org)

Expertise
developer_experiencesdk
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experiencesdk
developer_experiencesdkResearcher
VIEW DOSSIER →
🇺🇸
Mark Zuckerberg

Mark Zuckerberg

Nationality
American

Organization
Meta

Position

CEO & Chairman, Meta Platforms

Expertise
AI StrategySocial NetworksOpen Source AISuperintelligenceOpen Source
Education

Attended, Computer Science & PsychologyHarvard University

Alignment

Open-source builder

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Open SourceSuperintelligenceAI StrategySocial NetworksOpen Source AI
CEOFounderOpen SourceSuperintelligenceResearcherSystems
@finkd
VIEW DOSSIER →
Martin Ma

Martin Ma

Organization
xAI

Position

xAI

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Marvin Zhang

Marvin Zhang

Organization
OpenAI

Expertise
reasoning modelsdeep researchgpt-4 evaluation workreasoning-researchevaluationneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep researchGPT-4 model safety and evaluation work
reasoning-researchevaluationneeds-reviewResearcherSafety
VIEW DOSSIER →
Masaru Sato

Masaru Sato

Organization
xAI

Position

Safety (X/xAI)

Expertise
trust_safety
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
trust_safety
safetytrust_safetyResearcherSafety
LinkedIn
VIEW DOSSIER →
M

Masayoshi Son

Organization
Stargate Venture

Position

Chair

Expertise
Artificial Intelligence
Alignment

Capital allocator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Artificial Intelligence
ResearcherInvestor
VIEW DOSSIER →
Massimo Nicosia

Massimo Nicosia

Organization
Google DeepMind

Position

Staff Research Engineer, Google DeepMind

Expertise
multimodal understandingmultilingualitylarge language modelsengineeringmultimodalllms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini multimodal understandingmultilingual modelingLLM evaluation and engineering
engineeringmultimodalllmsResearcher
Website
VIEW DOSSIER →
Matt Kearney

Matt Kearney

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchevaluation
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practice
researchResearcherSafety
VIEW DOSSIER →
Matt Knight

Matt Knight

Organization
OpenAI

Position

Head of Security

Expertise
securitycybersecuritydeployment securitysafety-governancetechnical-safety
Alignment

Policy and governance operator

Safety Stance

Security-first stance focused on protecting models, systems, and deployments.

Notable Work
Security operations and frontier-model deployment security
safety-governancesecuritytechnical-safetyFrontier Lab LeaderResearcherSafetyPolicySystems
VIEW DOSSIER →
Maxim Massenkoff

Maxim Massenkoff

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Economic Index report: Economic primitivesIndia Country Brief: The Anthropic Economic Index
economic researchResearcherSafety
VIEW DOSSIER →
🇬🇧
Max Jaderberg

Max Jaderberg

Nationality
British

Organization
Isomorphic Labs

Position

President, Isomorphic Labs

Expertise
Drug DiscoveryAI for ScienceDeep LearningPresidentEx-DeepMind
Alignment

Research-focused technologist

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
AlphaStarSpatial Transformer Networks
PresidentEx-DeepMindDrug DiscoveryAI for ScienceResearcher
VIEW DOSSIER →
🇸🇪🇺🇸
Max Tegmark

Max Tegmark

Nationality
Swedish-American

Organization
MIT / Future of Life Institute

Position

Professor of Physics, MIT; President, Future of Life Institute

Expertise
AI SafetyCosmologyPhysicsAI GovernanceExistential RiskPhysicist
Education

BSc, PhysicsRoyal Institute of Technology (KTH), Stockholm

BA, EconomicsStockholm School of Economics

PhD, PhysicsUniversity of California, Berkeley

Alignment

AI Safety Advocate

Safety Stance

Strongly pro-safety. Co-authored the 2025 Statement on Superintelligence calling for a ban on superintelligence development until there is scientific consensus it can be done safely. Believes AI governance must be proactive, not reactive.

Notable Work
Mathematical Universe HypothesisCosmic microwave background analysisAI Safety policy frameworksLife 3.0
AI SafetyAcademicPhysicistFounderLab LeaderResearcherSafetyPolicy
@tegmarkWebsite
VIEW DOSSIER →
Mehdi Sajjadi

Mehdi Sajjadi

Organization
Google DeepMind

Position

Team Lead, Google DeepMind

Expertise
3D scene representationdeep generative modelscomputer visionvision
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
scene representation transformers3D-aware generative modelsnovel view synthesis
researchvision3dFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
Meire Fortunato

Meire Fortunato

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
deep learningexplorationphysics simulationsimulation
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
memory and exploration in RLgraph-based physics simulationsgeneral learning systems
researchrlsimulationResearcher
Website
VIEW DOSSIER →
🇺🇸
Melanie Mitchell

Melanie Mitchell

Nationality
American

Organization
Santa Fe Institute

Position

Professor & Inaugural Fractal Faculty, Santa Fe Institute

Expertise
Artificial IntelligenceComplexity ScienceAnalogy-MakingConceptual AbstractionCognitive AI
Education

BA, Mathematics and AstronomyBrown University

PhD, Computer ScienceUniversity of Michigan

Alignment

Nuanced AI realist

Safety Stance

Skeptical of both extreme hype and extreme doom. Argues that current AI systems lack true understanding and that we need better evaluation methods. Focuses on what AI actually can and cannot do, rather than speculative scenarios.

Notable Work
Copycat (analogy-making AI)Complexity: A Guided TourAI evaluation methodologyAbstraction and analogy in AIGenetic algorithms
Technical CommentatorAcademicResearcherSystems
@MelMitchell1WebsiteLinkedIn
VIEW DOSSIER →
Meredith Ringel Morris

Meredith Ringel Morris

Organization
Google DeepMind

Position

Director and Principal Scientist for Human-AI Interaction, Google DeepMind

Expertise
human-AI interactionhuman-centered AIresponsible AIhciresponsible-aileadership
Alignment

Safety-aligned researcher

Safety Stance

Strongly associated with human-centered and responsible AI practices.

Notable Work
human-AI interaction researchPAIR leadershipassistive and conversational AI
hciresponsible-aileadershipverifiedFrontier Lab LeaderResearcherSafety
Website
VIEW DOSSIER →
🇺🇸
Meredith Whittaker

Meredith Whittaker

Nationality
American

Organization
Signal Foundation

Position

President, Signal Foundation

Expertise
AI GovernancePrivacyTech PolicyAI EthicsGovernance
Education

BA, Rhetoric and English LiteratureUniversity of California, Berkeley

Alignment

Pro-privacy, anti-surveillance, tech accountability advocate

Safety Stance

Focuses on structural power dynamics in AI. Argues safety cannot be separated from surveillance, labor exploitation, and corporate concentration. Warns that agentic AI undermines privacy and security. Advocates for privacy-first design and nonprofit governance of critical infrastructure.

Notable Work
AI Now Annual ReportsDisability, Bias, and AIThe Steep Cost of Capture
GovernanceAI EthicsFounderLab LeaderAcademicResearcherSafetyPolicy
@mer__edithWebsiteLinkedIn
VIEW DOSSIER →
Mia Glaese

Mia Glaese

Organization
OpenAI

Position

Head of Human Data

Expertise
human datapost-trainingevaluationhuman-dataresearch-leadership
Alignment

Safety-aligned researcher

Safety Stance

Works at the intersection of model improvement, evaluation, and safety-related data pipelines.

Notable Work
Human data systems for ChatGPT, GPT-4, Sora, and safety evaluations
post-traininghuman-dataresearch-leadershipFrontier Lab LeaderResearcherSafety
VIEW DOSSIER →
Michael Gerstenhaber

Michael Gerstenhaber

Organization
Anthropic

Position

Head of Product Management

Expertise
product managementdeveloper platformleadership
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic product deployment within its safety-first framing.

Notable Work
leadershipFrontier Lab Leaderproduct managementdeveloper platform
productleadershipFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Michael Hopko

Michael Hopko

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇺🇸
Michael I. Jordan

Michael I. Jordan

Nationality
American

Organization
UC Berkeley / Inria

Position

Pehong Chen Distinguished Professor Emeritus, UC Berkeley; Directeur de Recherche, Inria & ENS Paris

Expertise
Machine LearningBayesian StatisticsComputational BiologyOptimizationEconomics and AI
Education

PhD, Cognitive ScienceUniversity of California, San Diego

MS, MathematicsArizona State University

BS, PsychologyLouisiana State University

Alignment

Pragmatic, warns against AI hype

Safety Stance

Skeptical of near-term AGI hype. Argues the field needs more focus on decision-making, economics, and market design rather than pure prediction. Warns that real risks are in poorly designed systems affecting markets and societies, not sentient AI.

Notable Work
Variational InferenceLatent Dirichlet AllocationBayesian NonparametricsStatistical Machine LearningMarket-Based AI Systems
AcademicResearcherSystems
WebsiteLinkedIn
VIEW DOSSIER →
Michael Sherrick

Michael Sherrick

Organization
xAI

Position

Software Engineer Specialist

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Michael Stern

Michael Stern

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchproduct research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerAnthropic Economic Index report: Uneven geographic and enterprise AI adoption
researchResearcherSafetyProduct
VIEW DOSSIER →
Michele Wang

Michele Wang

Organization
OpenAI

Expertise
coding evaluationswe-lancersafety systemsevaluationcodingneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
SWE-Lancerdeep research safety systems
evaluationcodingneeds-reviewResearcherSafetySystems
VIEW DOSSIER →
Michelle Pokrass

Michelle Pokrass

Organization
OpenAI

Position

API Research Lead

Expertise
api researchdeveloper productsstructured outputsdeveloper-platformsapi-researchproduct-research
Alignment

Frontier lab operator

Safety Stance

Publicly associated with making model behavior more reliable and controllable for developers.

Notable Work
Structured OutputsGPT API deployment workdeveloper-facing model behavior improvements
developer-platformsapi-researchproduct-researchFrontier Lab LeaderResearcherProduct
VIEW DOSSIER →
Mike Dalton

Mike Dalton

Organization
X / xAI

Position

Engineering leader at X (reported) also involved with xAI

Expertise
x_platformengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
x_platformengineering
x_platformengineeringResearcher
VIEW DOSSIER →
Mike Dusenberry

Mike Dusenberry

Organization
Google DeepMind

Position

Research Engineer, Gemini Group, Google DeepMind

Expertise
probabilistic deep learninghealth AIGemini systemsengineeringuncertaintygemini
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
uncertainty methodsGemini engineeringhealth-related ML
engineeringuncertaintygeminiResearcherSystems
Website
VIEW DOSSIER →
Mike Krieger

Mike Krieger

Organization
Anthropic

Position

Chief Product Officer

Expertise
product leadershipdeveloper toolsenterprise AI
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic product deployment within the company's safety-first framing.

Notable Work
Claude product leadership
productexecutiveFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
🇬🇧
Mike Lewis

Mike Lewis

Nationality
British

Organization
Meta

Position

Research Scientist, FAIR; Pre-training Lead for Llama 3

Expertise
NLPLanguage ModelsPre-trainingDialogue SystemsLlama
Education

PhD, Computer ScienceUniversity of Edinburgh

Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Llama 3 pre-trainingCiceroBARTRoBERTakNN-LM
ResearcherLlamaPre-trainingNLPFrontier Lab LeaderAcademicSystems
@ml_perception
VIEW DOSSIER →
Mike Liberatore

Mike Liberatore

Organization
xAI

Position

Former CFO (per reporting)

Expertise
finance
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
finance
executivefinanceResearcher
VIEW DOSSIER →
🇺🇸
Mike Schroepfer

Mike Schroepfer

Nationality
American

Organization
Gigascale Capital

Position

Founder & Partner, Gigascale Capital; Senior Fellow, Meta (part-time)

Expertise
AI InfrastructureClimate TechEngineering LeadershipEx-CTOEx-Meta
Education

BS, Computer ScienceStanford University

MS, Computer ScienceStanford University

Alignment

Capital allocator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
Ex-CTOEx-MetaClimate TechAI InfrastructureEngineering Leadership
Ex-CTOEx-MetaClimate TechInvestorFounderLab LeaderResearcherSystems
@schrep
VIEW DOSSIER →
🇺🇸
Miles Brundage

Miles Brundage

Nationality
American

Organization
AVERI

Position

Co-Founder & Executive Director, AVERI (AI Verification and Evaluation Research Institute)

Expertise
AI PolicyAI GovernanceAI Safety EvaluationAI Auditing
Education

PhD, Human and Social Dimensions of Science and TechnologyArizona State University

Alignment

Pro-governance, independent AI accountability advocate

Safety Stance

Strong advocate for independent, external auditing of AI systems. Believes voluntary self-regulation by AI companies is insufficient. Argues we are in "triage mode" for AI policy and need to prioritize building robust evaluation infrastructure now. Left OpenAI partly due to concerns about the gap between safety commitments and practice.

Notable Work
The Malicious Use of Artificial Intelligence (2018)Frontier AI Auditing: Toward Rigorous Third-Party AssessmentAI and International Security
AI PolicyResearcherFounderLab LeaderAcademicSafetyPolicySystems
@Miles_BrundageWebsiteLinkedIn
VIEW DOSSIER →
Miles McCain

Miles McCain

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researcheconomic analysishuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerAnthropic Economic Index report: Economic primitivesAnthropic Economic Index report: Uneven geographic and enterprise AI adoption
researchResearcherSafety
VIEW DOSSIER →
Milind Tambe

Milind Tambe

Organization
Google DeepMind

Position

Principal Scientist and Director, AI for Social Good, Google DeepMind

Expertise
AI for social goodmulti-agent systemsdecision-makingsocial-goodleadershipverified
Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AI for public healthresource allocationsocial impact AI
social-gooddecision-makingleadershipverifiedFrontier Lab LeaderAcademicResearcherSystems
Website
VIEW DOSSIER →
Ming-Hsuan Yang

Ming-Hsuan Yang

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
computer visionvisual learningvideo understandingvisionmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
vision and learning researchvideo understandingmultimodal perception
researchvisionmultimodalResearcher
Website
VIEW DOSSIER →
Minsuk Chang

Minsuk Chang

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
interactive learningskill acquisitionagent behavioragentslearning-dynamics
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
learning dynamics in agentsinteractive adaptationknowledge acquisition
researchagentslearning-dynamicsResearcher
Website
VIEW DOSSIER →
🇺🇸
Mira Murati

Mira Murati

Nationality
Albanian-American

Organization
Thinking Machines Lab

Position

Founder & CEO, Thinking Machines Lab

Expertise
AI EngineeringProduct DevelopmentAI Model CustomizationAI LeadershipFrontier Lab Leader
Education

BA, Liberal ArtsColby College

BE, Mechanical EngineeringDartmouth College (Thayer School of Engineering)

Alignment

Pragmatic technologist

Safety Stance

Believes in democratizing AI through customization and understanding. Founded Thinking Machines Lab as a public benefit corporation to make AI systems more widely understood and controllable.

Notable Work
ChatGPT launch (oversight)GPT-4 (oversight)DALL-E (oversight)Tinker (Thinking Machines Lab)
Frontier Lab LeaderFounderCEOResearcherSystemsProduct
@miramurati
VIEW DOSSIER →
🇷🇺🇺🇸
Misha Laskin

Misha Laskin

Nationality
Russian-American

Organization
Reflection AI

Position

Co-Founder & CEO, Reflection AI

Expertise
Reinforcement LearningReward ModelingAutonomous CodingSuperintelligenceEx-DeepMind
Education

BA, Physics and LiteratureYale University

PhD, Theoretical Many-Body Quantum PhysicsUniversity of Chicago

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Decision TransformerReward modeling for GeminiCURL
FounderCEOEx-DeepMindSuperintelligenceReinforcement LearningLab LeaderResearcher
VIEW DOSSIER →
Mojtaba Seyedhosseini

Mojtaba Seyedhosseini

Organization
Google DeepMind

Position

Research Scientist / contributor in Google DeepMind's multimodal stack

Expertise
computer visionmultimodal modelsresponsible AI-linked researchvisionmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
PaLI-X contributormultimodal scaling workvision systems
researchvisionmultimodalResearcher
Website
VIEW DOSSIER →
🇬🇧
Mustafa Suleyman

Mustafa Suleyman

Nationality
British

Organization
Microsoft

Position

EVP & CEO, Microsoft AI

Expertise
Artificial IntelligenceAI Product DevelopmentAI EthicsHealth AIConversational AI
Education

Dropped out, Philosophy, Politics, and EconomicsUniversity of Oxford

Alignment

Techno-optimist with social conscience

Safety Stance

Authored "The Coming Wave" warning about AI and biotech risks. Advocates for containment strategies while aggressively building AI capabilities. At Microsoft, pursuing "humanist superintelligence" that serves humanity.

Notable Work
DeepMind HealthInflection PiMicrosoft CopilotThe Coming Wave (book)
CEOLab LeaderFounderResearcherProduct
@mustafasuleymanWebsiteLinkedIn
VIEW DOSSIER →
🇺🇸
Nat Friedman

Nat Friedman

Nationality
American

Organization
Meta

Position

Head of Products & Applied Research, Meta Superintelligence Labs

Expertise
Developer ToolsOpen SourceAI ProductsEx-GitHub CEOSuperintelligence
Education

BS, Computer Science and MathematicsMIT

Alignment

Open-source builder

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Ex-GitHub CEOOpen SourceSuperintelligenceDeveloper ToolsFrontier Lab LeaderAI Products
Ex-GitHub CEOOpen SourceSuperintelligenceDeveloper ToolsFounderCEOFrontier Lab LeaderResearcher
@natfriedman
VIEW DOSSIER →
Nathan Ziebart

Nathan Ziebart

Organization
xAI

Position

Software Engineer

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
Neal Bayya

Neal Bayya

Organization
xAI

Position

Infrastructure

Expertise
infrastructure
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
infrastructure
infrastructureResearcherSystems
LinkedIn
VIEW DOSSIER →
Neil Buddy Shah

Neil Buddy Shah

Organization
Anthropic

Position

Long-Term Benefit Trust Trustee

Expertise
public-benefit governancetrustgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's long-term public-benefit mission.

Notable Work
trustgovernancepublic-benefit governance
trustgovernanceResearcherPolicy
VIEW DOSSIER →
Nick Alonso

Nick Alonso

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
🇸🇪
Nick Bostrom

Nick Bostrom

Nationality
Swedish

Organization
Macrostrategy Research Initiative

Position

Founder & Principal Researcher, Macrostrategy Research Initiative

Expertise
Existential RiskAI SafetyPhilosophy of AIDecision TheoryTranshumanismPhilosopher
Education

BA, PhilosophyUniversity of Gothenburg

MA, Philosophy and PhysicsStockholm University

MSc, Computational NeuroscienceKing's College London

PhD, PhilosophyLondon School of Economics

Alignment

Techno-progressive, existential risk-focused

Safety Stance

Pioneer of AI existential risk thinking. Argued in Superintelligence that a misaligned superintelligent AI could pose an existential threat to humanity. Advocates for proactive governance and technical safety research before AGI is developed.

Notable Work
SuperintelligenceSimulation ArgumentAnthropic BiasExistential Risk taxonomyThe Vulnerable World Hypothesis
AI SafetyAcademicPhilosopherFounderLab LeaderResearcherSafetyPolicy
@nickbostromWebsite
VIEW DOSSIER →
🇨🇦
Nick Frosst

Nick Frosst

Nationality
Canadian

Organization
Cohere

Position

Co-founder, Cohere

Expertise
Deep LearningNeural NetworksCapsule NetworksEnterprise AIFrontier Founder
Education

BSc, Computer Science and Cognitive ScienceUniversity of Toronto

Alignment

Canadian tech ecosystem advocate

Safety Stance

Supports responsible enterprise AI deployment with strong data governance. Believes enterprise-focused AI with retrieval-augmented generation reduces hallucination risks.

Notable Work
Capsule Networks (with Hinton)Matrix Capsules with EM RoutingEnterprise LLM deployment
Frontier FounderFounderLab LeaderAcademicResearcherPolicyProduct
@nickfrosstWebsiteLinkedIn
VIEW DOSSIER →
Nick Turley

Nick Turley

Organization
OpenAI

Position

VP of ChatGPT

Expertise
chatgptconsumer productdeploymentexecutive-leadership
Alignment

Safety-aligned researcher

Safety Stance

Associated with large-scale product deployment under OpenAI's safety framework.

Notable Work
ChatGPT product leadershipdeep research leadership participation
productchatgptexecutive-leadershipFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Nicolas Heess

Nicolas Heess

Organization
Google DeepMind

Position

Research Scientist (Director) and AI/Robotics Team Lead, Google DeepMind

Expertise
roboticsreinforcement learningembodied intelligence
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
robot manipulationlocomotion and controlGemini Robotics
researchroboticsrlFrontier Lab LeaderResearcher
WebsiteLinkedIn
VIEW DOSSIER →
Nidhi Pai

Nidhi Pai

Organization
xAI

Position

Grok Voice

Expertise
voiceengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
voiceengineering
voiceengineeringResearcher
LinkedIn
VIEW DOSSIER →
Nimesh Ghelani

Nimesh Ghelani

Organization
Google DeepMind

Position

Research Engineer, Google DeepMind

Expertise
code generationlanguage modelssoftware intelligenceengineeringcodellms
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
models for codingcode-focused evaluationdeveloper tooling research
engineeringcodellmsResearcher
Website
VIEW DOSSIER →
Nitarshan Rajkumar

Nitarshan Rajkumar

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic Index
economic researchResearcherSafety
VIEW DOSSIER →
🇺🇸
Noam Shazeer

Noam Shazeer

Nationality
American

Organization
Google DeepMind

Position

VP Engineering & Gemini Co-Lead, Google DeepMind

Expertise
Foundation ModelsTransformersNatural Language ProcessingLarge Language ModelsModel Inventor
Education

BS, Mathematics & Computer ScienceDuke University

Alignment

Builder-first technologist

Safety Stance

Pragmatic approach to safety. Left Google partly because the company was too cautious about releasing chatbot technology. Believes in shipping products and iterating.

Notable Work
Attention Is All You Need (Transformer)LaMDAMixture of ExpertsSparsely-Gated MoECharacter.AI
Foundation ModelsModel InventorFounderCEOFrontier Lab LeaderResearcherSafety
@NoamShazeerWebsiteLinkedIn
VIEW DOSSIER →
Norman Mu

Norman Mu

Organization
xAI

Position

Engineering

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
🇨🇦
Olivia Norton

Olivia Norton

Nationality
Canadian

Organization
Sanctuary AI

Position

Co-Founder, CTO & CPO, Sanctuary AI

Expertise
Artificial General IntelligenceRoboticsComputer EngineeringEmbodied AGI
Education

BSc, Computer Engineering (Biomedical)University of Calgary

MEng, Electrical and Computer EngineeringUniversity of British Columbia

Alignment

Frontier lab operator

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
RoboticsEmbodied AGIArtificial General IntelligenceComputer Engineering
FounderCTORoboticsEmbodied AGILab LeaderResearcher
VIEW DOSSIER →
Olivia Olsen

Olivia Olsen

Organization
xAI

Position

AI Learning & Development

Expertise
human_datatraining
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
human_datatraining
human_datatrainingResearcher
LinkedIn
VIEW DOSSIER →
Olivier Godement

Olivier Godement

Organization
OpenAI

Expertise
product deploymentlaunch managementaudio modelsdeploymentneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
next-generation audio modelsGPT-4V launch management
productdeploymentneeds-reviewResearcherProduct
VIEW DOSSIER →
Omar (Omar-V2)

Omar (Omar-V2)

Organization
xAI

Position

SDK maintainer (xai-org)

Expertise
developer_experiencesdk
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experiencesdk
developer_experiencesdkResearcher
VIEW DOSSIER →
🇪🇸
Oriol Vinyals

Oriol Vinyals

Nationality
Spanish

Organization
Google DeepMind

Position

VP of Research & Gemini Co-lead, Google DeepMind

Expertise
Natural Language ProcessingSequence-to-Sequence ModelsMultimodal AIGame AINLP
Education

BS, Mathematics and Telecommunication EngineeringUniversitat Politècnica de Catalunya

MS, Computer ScienceUniversity of California, San Diego

PhD, Electrical Engineering and Computer SciencesUniversity of California, Berkeley

Alignment

Research-focused

Safety Stance

Believes in responsible development. Stated there are "no walls in sight" for model capability, emphasizing the need for careful scaling.

Notable Work
Sequence to Sequence Learning with Neural NetworksPointer NetworksAlphaStarGeminiShow and Tell (image captioning)
NLPResearcherLab LeaderFrontier Lab Leader
@OriolVinyalsMLLinkedIn
VIEW DOSSIER →
Paul Smith

Paul Smith

Organization
Anthropic

Position

Chief Commercial Officer

Expertise
go-to-marketenterprise AIglobal expansioncommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports Anthropic's deployment strategy within its public safety-first positioning.

Notable Work
Anthropic international growth
commercialexecutiveFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Pavel Golik

Pavel Golik

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
speech recognitionmultilingual modelingquality analysisspeechmultilinguality
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
multilingual speech modelingspeech-quality analysisASR research
researchspeechmultilingualityResearcher
Website
VIEW DOSSIER →
Petar Veličković

Petar Veličković

Nationality
Serbian

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
graph neural networksalgorithmic reasoningcomputational biologygraphsreasoningverified
Education

PhDUniversity of Cambridge

Alignment

Safety-aligned researcher

Safety Stance

Not a core safety figure; work is more focused on reasoning and scientific applications.

Notable Work
graph representation learningalgorithmic reasoningAI for science
researchgraphsreasoningverifiedResearcherSafety
Website
VIEW DOSSIER →
🇺🇸
Peter DeSantis

Peter DeSantis

Nationality
American

Organization
Amazon

Position

SVP, Head of Amazon AI Organization (AGI, Custom Silicon, Quantum Computing)

Expertise
AI InfrastructureCloud ComputingCustom ChipsAGISVPInfrastructure
Education

BS, Economics and Computer ScienceDartmouth College

Alignment

Research-focused technologist

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
SVPAGIInfrastructureCloud ComputingAI InfrastructureCustom Chips
SVPAGIInfrastructureCloud ComputingCEOLab LeaderResearcherSystems
VIEW DOSSIER →
🇺🇸
Peter Lee

Peter Lee

Nationality
American

Organization
Microsoft

Position

President, Microsoft Research

Expertise
AI ResearchHealthcare AIScientific DiscoveryPresidentMicrosoft ResearchEx-DARPA
Education

BS, Mathematics and Computer SciencesUniversity of Michigan

PhD, Computer and Communication SciencesUniversity of Michigan

Alignment

Academic researcher

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
PresidentMicrosoft ResearchHealthcare AIEx-DARPAAI ResearchScientific Discovery
PresidentMicrosoft ResearchHealthcare AIEx-DARPALab LeaderAcademicResearcher
VIEW DOSSIER →
Peter McCrory

Peter McCrory

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Economic Index report: Economic primitivesAnthropic Economic Index report: Uneven geographic and enterprise AI adoptionIndia Country Brief: The Anthropic Economic Index
economic researchResearcherSafety
VIEW DOSSIER →
🇺🇸
Peter Norvig

Peter Norvig

Nationality
American

Organization
Stanford HAI

Position

Distinguished Education Fellow, Stanford Institute for Human-Centered AI (HAI)

Expertise
Artificial IntelligenceAI EducationNatural Language ProcessingSearch AlgorithmsAI Educator
Education

BS, Applied MathematicsBrown University

PhD, Computer ScienceUniversity of California, Berkeley

Alignment

Pragmatic centrist

Safety Stance

Balanced perspective. Believes in human-centered AI development. Engages with Gary Marcus and other critics on how to govern AI responsibly. Focuses on education as a key lever for safe AI adoption.

Notable Work
AI: A Modern Approach (textbook)Google Search algorithmsNASA autonomous spacecraft softwareAI education research
AI EducatorAcademicLab LeaderResearcher
@NorvigPeterWebsiteLinkedIn
VIEW DOSSIER →
Peter Welinder

Peter Welinder

Organization
OpenAI

Expertise
product and research leadershiproboticsoperator systemsresearch-leadershipneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Operator leadershipearly OpenAI research and product work
research-leadershipproductneeds-reviewResearcherSystemsProduct
VIEW DOSSIER →
Petros Maniatis

Petros Maniatis

Organization
Google DeepMind

Position

Senior Staff Research Scientist, Google DeepMind

Expertise
code intelligencesystemsmachine learningcode
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
learning for codesoftware intelligencesystems + ML
researchcodesystemsResearcherSystems
Website
VIEW DOSSIER →
Philip C.

Philip C.

Organization
xAI

Position

Program Manager

Expertise
program_management
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
program_management
program_managementsafetyResearcherSafety
LinkedIn
VIEW DOSSIER →
Piotr Mirowski

Piotr Mirowski

Organization
Google DeepMind

Position

Staff Research Scientist, Google DeepMind

Expertise
navigationautonomous agentscomputational creativityagentscreativity
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
StreetLearn and navigation researchweather/climate forecasting contributionscreative AI studies
researchagentscreativityResearcher
Website
VIEW DOSSIER →
🇮🇳🇺🇸
Prem Akkaraju

Prem Akkaraju

Nationality
Indian-American

Organization
Stability AI

Position

CEO, Stability AI

Expertise
Media TechnologyVisual EffectsAI CommercializationMedia Tech
Education

MBA, BusinessColumbia Business School

BA, Applied Mathematics & EconomicsUniversity of New Mexico

Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Media TechVisual EffectsMedia TechnologyAI Commercialization
CEOMedia TechVisual EffectsFounderLab LeaderResearcher
VIEW DOSSIER →
Prithvi Rajasekaran

Prithvi Rajasekaran

Organization
Anthropic

Position

Applied AI contributor

Expertise
context engineeringapplied AI
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's developer and agent engineering stack within the company's safety-first framing.

Notable Work
Effective context engineering for AI agents
applied AIResearcherSafety
VIEW DOSSIER →
🇮🇳🇬🇧
Pushmeet Kohli

Pushmeet Kohli

Nationality
Indian-British

Organization
Google DeepMind

Position

VP of Research, Google DeepMind; Head of AI for Science & Strategic Initiatives

Expertise
AI for ScienceComputer VisionProtein Structure PredictionAI Safety
Education

BTech, Computer Science and EngineeringNational Institute of Technology, Warangal

PhD, Computer VisionOxford Brookes University

Alignment

Science-focused

Safety Stance

Advocates for responsible AI deployment. Leads SynthID watermarking initiative to combat AI-generated misinformation. Focuses on using AI to solve scientific challenges safely.

Notable Work
AlphaFold (oversight)AlphaEvolveSynthIDCo-ScientistAlphaGenome
AI for ScienceLab LeaderFrontier Lab LeaderAcademicResearcherSafetyInvestorProduct
@pushmeetLinkedIn
VIEW DOSSIER →
Quoc Le

Quoc Le

Organization
Google / Google DeepMind

Position

Senior Scientist and large-model leader in the Google-DeepMind stack

Expertise
large language modelsmultimodal systemsdeep learningllmsmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AutoMLPaLM-era model developmentfoundation model research
researchllmsmultimodalResearcherSystems
Website
VIEW DOSSIER →
Rachel Dias

Rachel Dias

Organization
OpenAI

Expertise
evaluationbenchmarkingpaperbenchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
PaperBencho3/o4 contributors
evaluationbenchmarkingneeds-reviewResearcher
VIEW DOSSIER →
Radhakrishnan Venkataramani

Radhakrishnan Venkataramani

Organization
xAI

Position

Engineering (departed 2026)

Expertise
reasoning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
reasoning
reasoningrlResearcher
VIEW DOSSIER →
Radu Soricut

Radu Soricut

Organization
Google DeepMind

Position

Distinguished Scientist and Senior Research Director, Google DeepMind

Expertise
multimodal modelsnatural language processingmachine learningleadershipmultimodalnlp
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
PaLI-Xmultimodal understandinglarge-model research leadership
leadershipmultimodalnlpverifiedFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
Rahul Patil

Rahul Patil

Organization
Anthropic

Position

Chief Technology Officer

Expertise
infrastructuresecurityenterprise engineeringengineering
Alignment

Safety-aligned researcher

Safety Stance

Contributes to Anthropic's safety-and-reliability-focused infrastructure agenda.

Notable Work
Anthropic engineering leadership
engineeringexecutiveFrontier Lab LeaderResearcherSafetySystemsProduct
VIEW DOSSIER →
Rahul Ravishankar

Rahul Ravishankar

Organization
xAI

Position

Member of Technical Staff (departed 2026)

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
VIEW DOSSIER →
🇺🇸
Raia Hadsell

Raia Hadsell

Nationality
American

Organization
Google DeepMind

Position

Vice President of Research; Co-Lead, Frontier AI Unit, Google DeepMind

Expertise
roboticsreinforcement learningrepresentation learningleadershipverified
Alignment

Frontier lab operator

Safety Stance

Associated with technically grounded frontier-AI development and cautious real-world deployment for embodied systems.

Notable Work
embodied AIagent learningfrontier AI leadership
leadershiproboticsrlverifiedFrontier Lab LeaderResearcherSystemsProduct
Website
VIEW DOSSIER →
Rakesh G.

Rakesh G.

Organization
xAI

Position

Sr. Data Center Engineer

Expertise
data_centerengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centerengineering
data_centerengineeringResearcher
LinkedIn
VIEW DOSSIER →
Rashid Lasker

Rashid Lasker

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Rebecca Harbeck

Rebecca Harbeck

Organization
Anthropic

Position

Partnerships / GTM Contributor

Expertise
partnershipsdeveloper ecosystemcommercial
Alignment

Safety-aligned researcher

Safety Stance

Supports ecosystem growth within Anthropic's public safety framing.

Notable Work
commercialpartnershipsdeveloper ecosystem
commercialResearcherSafety
VIEW DOSSIER →
Reed Hastings

Reed Hastings

Organization
Anthropic

Position

Board Member

Expertise
corporate governancetechnology leadershipboardgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's public-benefit mission.

Notable Work
boardgovernancecorporate governancetechnology leadership
boardgovernanceResearcherPolicy
VIEW DOSSIER →
🇺🇸
Reid Hoffman

Reid Hoffman

Nationality
American

Organization
Greylock Partners

Position

Partner, Greylock; Co-Founder, Inflection AI; Board Member, Microsoft

Expertise
Venture CapitalSocial NetworksAI StrategyBoard Member
Education

BA, Symbolic SystemsStanford University

MS, PhilosophyOxford University

Alignment

Capital allocator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Board MemberVenture CapitalSocial NetworksAI Strategy
InvestorFounderBoard MemberLab LeaderResearcher
VIEW DOSSIER →
Reiichiro Nakano

Reiichiro Nakano

Organization
OpenAI

Expertise
reasoningdeployment and post-trainingfrontier modelsreasoning-researchpost-trainingneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4V post-trainingGPT-4 data and training work
reasoning-researchpost-trainingneeds-reviewResearcherProduct
VIEW DOSSIER →
Ria Strasser Galvis

Ria Strasser Galvis

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
India Country Brief: The Anthropic Economic Index
economic researchResearcherSafety
VIEW DOSSIER →
Richard Fontaine

Richard Fontaine

Organization
Anthropic

Position

Long-Term Benefit Trust Trustee

Expertise
public-benefit governancepolicytrustgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's long-term public-benefit mission.

Notable Work
trustgovernancepublic-benefit governance
trustgovernanceResearcherPolicy
VIEW DOSSIER →
🇺🇸🇨🇦
Richard S. Sutton

Richard S. Sutton

Nationality
American-Canadian

Organization
University of Alberta / Keen Technologies

Position

Professor of Computing Science, University of Alberta; Chief Scientific Advisor, Amii; Research Scientist, Keen Technologies

Expertise
Reinforcement LearningTemporal-Difference LearningArtificial IntelligenceComputational NeuroscienceRL Pioneer
Education

PhD, Computer ScienceUniversity of Massachusetts Amherst

BA, PsychologyStanford University

Alignment

Optimistic accelerationist

Safety Stance

Optimistic about superintelligent AI. Believes super intelligent agents are coming, will be good for the world, and the path to creating them runs through reinforcement learning.

Notable Work
Temporal-Difference LearningPolicy Gradient MethodsActor-Critic ArchitectureReinforcement Learning: An Introduction (textbook)The Bitter Lesson
AcademicResearcherRL PioneerLab LeaderPolicy
Website
VIEW DOSSIER →
Riley Trettel

Riley Trettel

Organization
xAI

Position

Energy & Data Center Development

Expertise
data_centerenergy
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_centerenergy
data_centerenergyResearcher
LinkedIn
VIEW DOSSIER →
Robert Geirhos

Robert Geirhos

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
robustnessinterpretabilityvision
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
human-machine comparisonsrobust visual recognitioninterpretability
researchrobustnessvisionResearcherSafety
Website
VIEW DOSSIER →
Robert Keele

Robert Keele

Organization
xAI

Position

General counsel / legal head (per reporting; departed)

Expertise
legalgovernance
Alignment

Policy and governance operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
legalgovernance
legalgovernanceResearcherPolicy
VIEW DOSSIER →
🇬🇧🇺🇸
Rob Fergus

Rob Fergus

Nationality
British-American

Organization
Meta

Position

Director of AI Research & Head of FAIR, Meta

Expertise
Computer VisionDeep LearningFundamental AI ResearchHead of FAIREx-DeepMind
Education

BA/MEng, Electrical and Information EngineeringUniversity of Cambridge

MSc, Electrical EngineeringCaltech

DPhil, Electrical EngineeringUniversity of Oxford

Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Head of FAIREx-DeepMindComputer VisionFrontier Lab LeaderDeep LearningFundamental AI Research
Head of FAIREx-DeepMindComputer VisionAcademicFrontier Lab LeaderResearcher
@rob_fergus
VIEW DOSSIER →
🇦🇺🇺🇸
Rodney Brooks

Rodney Brooks

Nationality
Australian-American

Organization
Robust.AI

Position

Founder & CTO, Robust.AI

Expertise
RoboticsArtificial IntelligenceBehavior-Based RoboticsWarehouse Automation
Education

MA, Pure MathematicsFlinders University

PhD, Computer ScienceStanford University

Alignment

AI hype skeptic

Safety Stance

Deeply skeptical of existential risk narratives. Believes current AI capabilities are vastly overhyped. Predicts deployable robotic dexterity will remain far behind human hands for decades. Focuses on practical, near-term robotics challenges.

Notable Work
Subsumption ArchitectureBehavior-Based RoboticsRoomba (iRobot)Baxter & Sawyer (Rethink Robotics)Carter (Robust.AI)
RoboticsAcademicFounderLab LeaderResearcher
@rodneyabrooksWebsiteLinkedIn
VIEW DOSSIER →
🇮🇳🇺🇸
Rohit Prasad

Rohit Prasad

Nationality
Indian-American

Organization
Amazon

Position

Former SVP & Head Scientist, Amazon AGI (departed end of 2025)

Expertise
Speech RecognitionNatural Language UnderstandingFoundation ModelsAGISpeech AI
Education

BE, Electronics and Communications EngineeringBirla Institute of Technology

MS, Electrical EngineeringIllinois Institute of Technology

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
AGISpeech AIFoundation ModelsSpeech RecognitionNatural Language Understanding
ResearcherAGISpeech AIFoundation ModelsLab Leader
VIEW DOSSIER →
Ronnie Chatterji

Ronnie Chatterji

Organization
OpenAI

Position

Chief Economist

Expertise
economicspolicyAI productivity analysiseconomics-policyexecutive-leadershipai-impacts
Alignment

Policy and governance operator

Safety Stance

Public work focuses on distributing AI's benefits and understanding labor-market effects.

Notable Work
Economic impacts of AIProductivity and labor market analysis
economics-policyexecutive-leadershipai-impactsFrontier Lab LeaderAcademicResearcherPolicy
VIEW DOSSIER →
🇺🇸
Ross Girshick

Ross Girshick

Nationality
American

Organization
Vercept

Position

Co-Founder, Vercept

Expertise
Computer VisionObject DetectionInstance SegmentationDeep Learning
Education

PhD, Computer ScienceUniversity of Chicago

Alignment

Open research advocate

Safety Stance

Supports open-source AI research. Focuses on building practical, reliable vision systems.

Notable Work
R-CNNFast R-CNNFaster R-CNNMask R-CNNSegment Anything Model (SAM)Detectron
Computer VisionResearcherFounderLab LeaderSystemsOpen Source
@inkynumbersWebsiteLinkedIn
VIEW DOSSIER →
Ross Nordeen

Ross Nordeen

Organization
xAI

Position

Co-founder

Expertise
program_management
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
program_managementFrontier Lab Leader
founderprogram_managementFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇺🇸
Rumman Chowdhury

Rumman Chowdhury

Nationality
American

Organization
Humane Intelligence

Position

CEO & Co-Founder, Humane Intelligence; U.S. Science Envoy for AI

Expertise
AI EthicsResponsible AIAlgorithmic AccountabilityAI Governance
Education

BS, Political ScienceMIT

BS, Management ScienceMIT

MS, Quantitative MethodsColumbia University

PhD, Political ScienceUniversity of California, San Diego

Alignment

Responsible AI advocate, pro-governance

Safety Stance

Focuses on practical, applied AI ethics — building tools and frameworks for auditing and accountability. Advocates for diverse, community-driven approaches to AI evaluation rather than top-down corporate self-regulation.

Notable Work
AI Auditing FrameworksAlgorithmic Bias Detection at ScaleCommunity-driven AI Evaluation
AI EthicsResearcherFounderCEOLab LeaderPolicy
@ruchowdhWebsiteLinkedIn
VIEW DOSSIER →
R

Ruoming Pang

Organization
OpenAI

Position

Researcher, OpenAI

Expertise
AI InfrastructureFoundation ModelsDistributed SystemsSpeech RecognitionOpenAIInfrastructure
Education

BS, Computer ScienceShanghai Jiao Tong University

MS, Computer ScienceUniversity of Southern California

PhD, Computer SciencePrinceton University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Zanzibar authorization systemLarge-scale AI infrastructureApple foundation models
OpenAIInfrastructureFoundation ModelsEx-MetaEx-AppleEx-GoogleFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇺🇸
Russ Tedrake

Russ Tedrake

Nationality
American

Organization
MIT / Toyota Research Institute

Position

Toyota Professor, MIT; SVP of Large Behavior Models, Toyota Research Institute

Expertise
RoboticsMotion PlanningManipulationReinforcement Learning
Education

BSE, Computer EngineeringUniversity of Michigan

PhD, Electrical Engineering and Computer ScienceMIT

Alignment

Research-focused

Safety Stance

Focuses on building reliable and safe robotic systems through rigorous simulation and verification. Emphasizes the gap between demos and deployable robots.

Notable Work
Drake (simulation framework)Large Behavior ModelsRobotic Manipulation (textbook)Underactuated RoboticsLittleDog locomotion
RoboticsAcademicLab LeaderResearcherSystemsOpen Source
@RussTedrakeWebsiteLinkedIn
VIEW DOSSIER →
Ruth Appel

Ruth Appel

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Economic Index report: Economic primitivesAnthropic Economic Index report: Uneven geographic and enterprise AI adoptionIndia Country Brief: The Anthropic Economic Index
economic researchFrontier Lab LeaderResearcherSafety
VIEW DOSSIER →
Ryan Heller

Ryan Heller

Organization
Anthropic

Position

Economic research contributor

Expertise
economicsAI adoption analysiseconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Economic Index report: Economic primitives
economic researchResearcherSafety
VIEW DOSSIER →
Saachi Jain

Saachi Jain

Organization
OpenAI

Expertise
alignmentdeep researchmodel behavioralignment-evalsreasoning-researchneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep researchcollective alignment
alignment-evalsreasoning-researchneeds-reviewResearcherSafety
VIEW DOSSIER →
Saffron Huang

Saffron Huang

Organization
Anthropic

Position

Research contributor

Expertise
social impacts researchhuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic InterviewerHow AI Is Transforming Work at Anthropic
researchResearcherSafety
VIEW DOSSIER →
Sagar Naik

Sagar Naik

Organization
xAI

Position

Data Center Engineer

Expertise
data_center
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
data_center
data_centerResearcher
LinkedIn
VIEW DOSSIER →
Sahil Jain

Sahil Jain

Organization
xAI

Position

Member of Technical Staff

Expertise
mts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
mts
mlmtsResearcher
LinkedIn
VIEW DOSSIER →
Saket Joshi

Saket Joshi

Organization
xAI

Position

Machine Learning

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
mlengineeringResearcher
LinkedIn
VIEW DOSSIER →
🇺🇸
Sam Altman

Sam Altman

Nationality
American

Organization
OpenAI

Position

CEO, OpenAI

Expertise
Artificial IntelligenceTechnology EntrepreneurshipStartup InvestingAI Policy
Education

Dropped out, Computer ScienceStanford University

Alignment

Cautious accelerationist, pro-regulation dialogue

Safety Stance

Publicly supports AI safety research and regulation while aggressively scaling capabilities. Advocates for international governance. Stepped down from clean energy boards in 2025 to focus on OpenAI. Has drawn criticism for perceived gap between safety rhetoric and rapid deployment.

Notable Work
ChatGPTGPT-4GPT-4oo1 reasoning models
CEOLab LeaderResearcherSafetyPolicyProduct
@samaWebsiteLinkedIn
VIEW DOSSIER →
Sam Dodge

Sam Dodge

Organization
xAI

Position

Machine Learning Engineer

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
mlengineeringResearcher
LinkedIn
VIEW DOSSIER →
S

Samuel Albanie

Organization
Google DeepMind

Position

Lead frontier evals for Gemini

Expertise
Artificial Intelligence
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
Frontier Lab LeaderArtificial Intelligence
Frontier Lab LeaderAcademicResearcherSafety
VIEW DOSSIER →
Samuel Miserendino

Samuel Miserendino

Organization
OpenAI

Expertise
coding evaluationswe-lancersafety systemsevaluationcodingneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
SWE-Lancerdeep research safety systems
evaluationcodingneeds-reviewResearcherSafetySystems
VIEW DOSSIER →
Sandeep Rao

Sandeep Rao

Organization
xAI

Position

Engineer

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
LinkedIn
VIEW DOSSIER →
S

Santosh Janardhan

Organization
Meta

Position

co-lead of Meta Compute

Expertise
Artificial Intelligence
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Frontier Lab LeaderArtificial Intelligence
Frontier Lab LeaderResearcher
VIEW DOSSIER →
Sarah Friar

Sarah Friar

Organization
OpenAI

Position

Chief Financial Officer

Expertise
financescalingenterprise growthexecutive-leadership
Alignment

Safety-aligned researcher

Safety Stance

No distinct public technical safety stance located in first-party materials reviewed.

Notable Work
Financial leadership supporting OpenAI research and scale-up
executive-leadershipfinancescalingCEOFrontier Lab LeaderResearcherSafetyProduct
VIEW DOSSIER →
Sarah Pollack

Sarah Pollack

Organization
Anthropic

Position

Research / communications contributor

Expertise
research communicationeconomic research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIndia Country Brief: The Anthropic Economic IndexIntroducing Anthropic InterviewerHow AI Is Transforming Work at AnthropicAnthropic Education Report: The AI Fluency Index
researchResearcherSafety
VIEW DOSSIER →
Satinder Singh

Satinder Singh

Organization
Google DeepMind / University of Michigan

Position

Research Scientist, Google DeepMind; Professor, University of Michigan

Expertise
reinforcement learningautonomous agentscomputational game theorytheory
Alignment

Academic researcher

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational reinforcement learningRLDM community buildingautonomous agent research
researchrltheoryAcademicResearcher
Website
VIEW DOSSIER →
🇮🇳🇺🇸
Satya Nadella

Satya Nadella

Nationality
Indian-American

Organization
Microsoft

Position

Chairman & CEO, Microsoft

Expertise
AI StrategyCloud ComputingEnterprise TechnologyChairman
Education

BE, Electrical EngineeringManipal Institute of Technology

MS, Computer ScienceUniversity of Wisconsin-Milwaukee

MBA, BusinessUniversity of Chicago Booth School of Business

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
ChairmanAI StrategyCloud ComputingEnterprise Technology
CEOChairmanAI StrategyCloud ComputingResearcher
@satyanadella
VIEW DOSSIER →
Saurish Srivastava

Saurish Srivastava

Organization
xAI

Position

Post-training

Expertise
post_trainingmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
post_trainingmts
post_trainingmtsResearcher
LinkedIn
VIEW DOSSIER →
🇺🇸
Sean White

Sean White

Nationality
American

Organization
Inflection AI

Position

CEO, Inflection AI

Expertise
Human-Computer InteractionAugmented RealityEmotional AIHCI
Education

PhD, Computer ScienceColumbia University

Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
HCIEmotional AIHuman-Computer InteractionAugmented Reality
CEOHCIEmotional AIFounderLab LeaderResearcher
VIEW DOSSIER →
🇦🇹
Sebastian De Ro

Sebastian De Ro

Nationality
Austrian

Organization
Magic

Position

Co-Founder & CTO, Magic AI

Expertise
Systems EngineeringAI InfrastructureLong-Context ModelsInfrastructureAI Coding
Education

Diploma, Higher InformaticsHTBLVA Spengergasse

Alignment

Frontier lab operator

Safety Stance

Focuses on infrastructure reliability and training efficiency; no detailed standalone safety doctrine is documented here.

Notable Work
InfrastructureAI CodingSystems EngineeringAI InfrastructureLong-Context Models
FounderCTOInfrastructureAI CodingLab LeaderResearcherSystems
VIEW DOSSIER →
🇩🇪🇺🇸
Sebastian Thrun

Sebastian Thrun

Nationality
German-American

Organization
Stanford University / Stealth Startups

Position

Research Professor, Stanford University; Founder, multiple AI ventures

Expertise
Autonomous VehiclesRoboticsAI EducationProbabilistic Robotics
Education

Vordiplom, Computer Science, Economics, and MedicineUniversity of Hildesheim

PhD, Computer Science and StatisticsUniversity of Bonn

Alignment

Techno-optimist

Safety Stance

Optimistic about AI's benefits. Believes autonomous vehicles will save millions of lives. Advocates for AI democratization through education.

Notable Work
Stanley (DARPA Grand Challenge winner)Google Self-Driving CarProbabilistic RoboticsOnline Education at Scale
RoboticsAcademicFounderEducatorCEOLab LeaderResearcher
@SebastianThrunWebsiteLinkedIn
VIEW DOSSIER →
Sergey Edunov

Sergey Edunov

Organization
Genesis Molecular AI

Position

SVP of Foundation Models, Genesis Molecular AI

Expertise
Large Language ModelsMachine TranslationPre-trainingEx-MetaLlamaFoundation Models
Alignment

Research-focused technologist

Safety Stance

Primarily research-focused public profile; no detailed standalone safety doctrine is documented here.

Notable Work
Llama 2Llama 3Large-scale machine translation
Ex-MetaLlamaFoundation ModelsBiotech AILab LeaderResearcher
VIEW DOSSIER →
Shakir Mohamed

Shakir Mohamed

Organization
Google DeepMind

Position

Director of Research, Google DeepMind

Expertise
machine learningprobabilistic modelingAI for scienceleadershipprobabilistic-mlverified
Alignment

Safety-aligned researcher

Safety Stance

Explicitly emphasizes responsible innovation, social impact, and technical rigor in evaluating advanced systems.

Notable Work
probabilistic MLcausal and decision-making systemsresponsible AI leadership
leadershipprobabilistic-mlsafetyverifiedFrontier Lab LeaderResearcherSafetySystems
Website
VIEW DOSSIER →
Shawn Thapa

Shawn Thapa

Organization
xAI

Position

SDK/proto contributor (xai-org)

Expertise
developer_experiencesdk
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experiencesdk
developer_experiencesdkResearcher
VIEW DOSSIER →
Shayan Salehian

Shayan Salehian

Organization
xAI

Position

Worked on X timeline and Grok models (departed 2026)

Expertise
recommendationgrok
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
recommendationgrok
recommendationgrokResearcher
VIEW DOSSIER →
Shekoofeh Azizi

Shekoofeh Azizi

Organization
Google DeepMind

Position

Staff Research Scientist and Research Lead, Google DeepMind

Expertise
biomedical AIcancer AIscientific language modelshealthbiomedical-aiscience
Alignment

Safety-aligned researcher

Safety Stance

Focuses on safety-relevant biomedical applications and evidence-heavy deployment contexts.

Notable Work
TxGemmatherapeutics-related modelsbiomedical super-intelligence agenda
healthbiomedical-aiscienceFrontier Lab LeaderResearcherSafetyProduct
Website
VIEW DOSSIER →
Sherwin Wu

Sherwin Wu

Organization
OpenAI

Expertise
audio modelsAPI deploymentdeveloper systemsdeveloper-platformsaudio-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
next-generation audio modelsGPT-4 API deployment
developer-platformsaudio-modelsneeds-reviewResearcherSystemsProduct
VIEW DOSSIER →
Shi Dong

Shi Dong

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Shimin Wang

Shimin Wang

Organization
xAI

Position

Member of Technical Staff (Post-training)

Expertise
post_trainingmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
post_trainingmts
post_trainingmtsResearcher
LinkedIn
VIEW DOSSIER →
Sid Bidasaria

Sid Bidasaria

Organization
Anthropic

Position

Member of Technical Staff

Expertise
engineeringdeveloper platform
Alignment

Frontier lab operator

Safety Stance

Contributes to Anthropic technical delivery.

Notable Work
engineeringdeveloper platform
engineeringResearcherProduct
VIEW DOSSIER →
🇩🇪
Simon Kohl

Simon Kohl

Nationality
German

Organization
Latent Labs

Position

Founder, Latent Labs

Expertise
Protein DesignAI for BiologyStructural BiologyEx-DeepMindAlphaFoldAI for Science
Alignment

Frontier lab operator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
AlphaFold2
FounderEx-DeepMindAlphaFoldProtein DesignAI for ScienceResearcher
VIEW DOSSIER →
Simon Zhai

Simon Zhai

Organization
xAI

Position

Member of Technical Staff (departed 2026)

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringResearcher
VIEW DOSSIER →
Slav Petrov

Slav Petrov

Organization
Google DeepMind

Position

Vice President, Research, Google DeepMind

Expertise
language modelsNLP researchGemini leadershipleadershipllmsnlp
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini co-leadershipmachine translation and NLPresearch strategy
leadershipllmsnlpverifiedFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
S. M. Ali Eslami

S. M. Ali Eslami

Organization
Google DeepMind

Position

Distinguished Research Scientist, Google DeepMind

Expertise
computer visionreasoningagents and world modelsvisionagents
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
scene representation learningmultimodal reasoningGemini search and agents
researchvisionagentsResearcherSystems
Website
VIEW DOSSIER →
Srinivas Narayanan

Srinivas Narayanan

Organization
OpenAI

Position

VP Engineering

Expertise
engineering leadershipsystemsproduct infrastructureengineeringexecutive-leadership
Alignment

Safety-aligned researcher

Safety Stance

No standalone public safety platform located, but role is central to reliable deployment.

Notable Work
Engineering leadership for frontier model deployment
engineeringexecutive-leadershipsystemsFrontier Lab LeaderResearcherSafetySystemsProduct
VIEW DOSSIER →
Stuart Ritchie

Stuart Ritchie

Organization
Anthropic

Position

Research / writing contributor

Expertise
research communicationsocial research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Introducing Anthropic InterviewerHow AI Is Transforming Work at AnthropicHow AI assistance impacts the formation of coding skillsAnthropic Education Report: How educators use Claude
researchResearcherSafety
VIEW DOSSIER →
🇬🇧🇺🇸
Stuart Russell

Stuart Russell

Nationality
British-American

Organization
University of California, Berkeley / CHAI

Position

Professor (Smith-Zadeh Chair in Engineering), University of California, Berkeley; Director, Center for Human-Compatible AI (CHAI)

Expertise
Artificial IntelligenceMachine LearningAI SafetyProbabilistic ReasoningRational Decision Making
Education

PhD, Computer ScienceStanford University

BA, PhysicsUniversity of Oxford

Alignment

AI Safety Advocate, supports strong regulation

Safety Stance

One of the most vocal advocates for AI existential risk. Author of "Human Compatible." Warns that AI systems pursuing misspecified objectives pose catastrophic risks. Co-founded IASEAI to give the safety community a collective voice.

Notable Work
AI: A Modern Approach (textbook)Human-Compatible AIInverse Reinforcement LearningProbabilistic ProgrammingInternational AI Safety Report
AcademicResearcherSafetyLab LeaderPolicySystems
@StuartJRussellWebsiteLinkedIn
VIEW DOSSIER →
Sudhir Vijay

Sudhir Vijay

Organization
xAI

Position

Member of Technical Staff

Expertise
infrastructuremts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
infrastructuremts
infrastructuremtsResearcherSystems
LinkedIn
VIEW DOSSIER →
Sulaiman Khan Ghori

Sulaiman Khan Ghori

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
VIEW DOSSIER →
Sulman Choudhry

Sulman Choudhry

Organization
OpenAI

Expertise
audio modelsproduct-research coordinationleadership sponsorshipaudio-modelsresearch-leadershipneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
next-generation audio modelsmultimodal product leadership contexts
audio-modelsresearch-leadershipneeds-reviewResearcherProduct
VIEW DOSSIER →
🇵🇱
Szymon Sidor

Szymon Sidor

Nationality
Polish

Organization
OpenAI

Position

Technical Fellow / Member of Technical Staff, OpenAI

Expertise
Reinforcement LearningPre-trainingAI InfrastructureDistributed SystemsTechnical FellowInfrastructure
Education

BA, Computer ScienceUniversity of Cambridge

MS, Mechatronics, Robotics, and Automation EngineeringMIT

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4 pre-trainingOpenAI Five / Dota 2Rapid frameworkChatGPT
Technical FellowPre-trainingReinforcement LearningInfrastructureResearcherSystemsProduct
VIEW DOSSIER →
Szymon Tworkowski

Szymon Tworkowski

Organization
xAI

Position

Scaling LLMs

Expertise
scaling
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
scaling
researchscalingResearcher
LinkedIn
VIEW DOSSIER →
Tal Schuster

Tal Schuster

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
large language modelsrobustnessefficient adaptationllms
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
robust LLM methodsscalable LLM improvementlanguage model evaluation
researchllmsrobustnessResearcherSafety
Website
VIEW DOSSIER →
Tara Sainath

Tara Sainath

Organization
Google DeepMind

Position

Distinguished Research Scientist; Co-Lead, Gemini Audio Pillar, Google DeepMind

Expertise
speech recognitionaudio modelingmultimodal modelsspeechaudioverified
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
end-to-end ASRGemini audiospeech foundation models
researchspeechaudioverifiedFrontier Lab LeaderResearcherSystems
Website
VIEW DOSSIER →
Tejal Patwardhan

Tejal Patwardhan

Organization
OpenAI

Expertise
evaluationcoding benchmarkssafety systemscodingneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
SWE-LancerPaperBench
evaluationcodingneeds-reviewResearcherSafetySystems
VIEW DOSSIER →
🇺🇸
Terrence Sejnowski

Terrence Sejnowski

Nationality
American

Organization
Salk Institute for Biological Studies

Position

Francis Crick Chair & Head of Computational Neurobiology Laboratory, Salk Institute; Professor, UCSD

Expertise
Computational NeuroscienceNeural NetworksDeep LearningBoltzmann MachinesNeuroscience
Education

BS, PhysicsCase Western Reserve University

PhD, PhysicsPrinceton University

Alignment

Science-first moderate

Safety Stance

Emphasizes the importance of understanding biological intelligence to build safer AI. Advocates for neuroscience-informed AI development.

Notable Work
Boltzmann MachinesNETtalkIndependent Component AnalysisNeural circuit dynamics of working memory
NeuroscienceResearcherAcademicLab Leader
Website
VIEW DOSSIER →
Thang Luong

Thang Luong

Organization
Google DeepMind

Position

Principal Scientist and Director of Research, Google DeepMind

Expertise
language modelsmultimodal systemsmachine translationllmsmultimodalverified
Education

PhD, Computer ScienceStanford University

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Luong attentionMeena and LaMDABard-to-Gemini multimodal development
researchllmsmultimodalverifiedFrontier Lab LeaderResearcherSystemsProduct
Website
VIEW DOSSIER →
Thomas Hubert

Thomas Hubert

Organization
Google DeepMind

Position

Research Engineer, Google DeepMind

Expertise
formal reasoningreinforcement learningmathematicsmathreasoning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AlphaGo Zero and AlphaZeroOlympiad-level formal mathematical reasoningAlphaProof-related work
researchmathreasoningResearcherSystems
WebsiteLinkedIn
VIEW DOSSIER →
Thomas Millar

Thomas Millar

Organization
Anthropic

Position

Research contributor

Expertise
AI usage researchhuman-AI interaction
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceIntroducing Anthropic Interviewer
researchResearcherSafety
VIEW DOSSIER →
🇫🇷
Thomas Wolf

Thomas Wolf

Nationality
French

Organization
Hugging Face

Position

Co-founder & Chief Science Officer, Hugging Face

Expertise
Natural Language ProcessingOpen-Source MLQuantum PhysicsTransfer LearningOpen Source Leader
Education

MSc, Theoretical PhysicsÉcole Polytechnique

PhD, Quantum Statistical PhysicsSorbonne University

Law Degree, Intellectual Property LawPanthéon-Sorbonne University

Alignment

Open-science advocate

Safety Stance

Advocates for open science and reproducibility as safety mechanisms. Concerned that over-reliance on AI without novel reasoning capabilities creates systemic risks. Believes democratization of AI through open-source reduces concentration of power.

Notable Work
Hugging Face Transformers libraryHugging Face Datasets libraryBigScience / BLOOMSmolLM
Open Source LeaderResearcherFounderLab LeaderSafetyOpen Source
@Thom_WolfWebsiteLinkedIn
VIEW DOSSIER →
Tianle Li

Tianle Li

Organization
xAI

Position

Research/engineering

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
researchengineeringResearcher
LinkedIn
VIEW DOSSIER →
Timnit Gebru

Timnit Gebru

Nationality
Ethiopian-Eritrean American

Organization
DAIR (Distributed AI Research Institute)

Position

Founder & Executive Director, DAIR Institute

Expertise
AI EthicsAlgorithmic FairnessComputer VisionData GovernanceRacial Bias in AI
Education

BSc, Electrical EngineeringStanford University

MSc, Electrical EngineeringStanford University

PhD, Computer VisionStanford University

Alignment

Progressive, anti-corporate-concentration

Safety Stance

Focuses on present-day harms of AI: bias, surveillance, labor exploitation, and environmental costs. Critical of existential-risk framing as a distraction from real harms disproportionately affecting marginalized communities. Advocates for community-centered AI governance independent of corporate influence.

Notable Work
On the Dangers of Stochastic ParrotsGender Shades (with Joy Buolamwini)Using Deep Learning and Google Street View to Estimate DemographicsDatasheets for Datasets
AI EthicsResearcherFounderLab LeaderPolicy
@timnitGebruWebsiteLinkedIn
VIEW DOSSIER →
🇫🇷
Timothee Lacroix

Timothee Lacroix

Nationality
French

Organization
Mistral AI

Position

Co-founder & CTO, Mistral AI

Expertise
Large Language ModelsAI InfrastructureKnowledge Graph EmbeddingsDistributed SystemsFrontier Founder
Education

BSc, Computer ScienceEcole Normale Superieure, Paris

MSc, Computer ScienceParis-Saclay University

PhD, Computer ScienceEcole des Ponts ParisTech

Alignment

European tech sovereignty advocate

Safety Stance

Supports open-weight model releases as a mechanism for collective safety research. Believes in building practical, efficient models rather than racing to the largest scale.

Notable Work
LLaMA (contributor at Meta)Knowledge Graph EmbeddingsTensor DecompositionsMistral model family
Frontier FounderResearcherFounderFrontier Lab LeaderSafetySystemsOpen Source
@tlacroix6LinkedIn
VIEW DOSSIER →
🇨🇦
Timothy Lillicrap

Timothy Lillicrap

Nationality
Canadian

Organization
Google DeepMind

Position

Staff Research Scientist, Google DeepMind

Expertise
reinforcement learningoptimal controlneuroscience
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
AlphaGo and AlphaZero-era workcontrol and decision-makingagent learning
researchrlneuroscienceResearcherSystems
Website
VIEW DOSSIER →
Tim Rocktäschel

Tim Rocktäschel

Organization
Google DeepMind

Position

Director, Principal Scientist, and Open-Endedness Team Lead, Google DeepMind

Expertise
open-endednessAGIagents and world modelsagentsagi
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
open-endedness researchGenie-era world modelsreasoning agents
researchopen-endednessagentsagiFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →
Tim Salimans

Tim Salimans

Organization
Google / Google DeepMind

Position

Machine Learning Research Scientist in the Google-DeepMind stack

Expertise
generative modelingdiffusion modelsvideo generationgenerative-modelsvideo
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GANs and evaluationImagenImagen Video
researchgenerative-modelsvideoResearcher
Website
VIEW DOSSIER →
🇦🇺🇬🇧
Toby Ord

Toby Ord

Nationality
Australian-British

Organization
University of Oxford

Position

Senior Researcher, Oxford AI Governance Initiative

Expertise
Existential RiskEthicsAI GovernanceEffective AltruismPhilosophyGovernance
Education

BSc, Computer ScienceUniversity of Melbourne

DPhil, PhilosophyUniversity of Oxford

Alignment

Effective altruist, existential risk-focused

Safety Stance

Ranks unaligned AI as the highest existential risk facing humanity. Advocates for treating AI safety as a civilizational priority on par with nuclear non-proliferation. Supports strong international governance frameworks.

Notable Work
The PrecipiceExistential risk quantificationMoral uncertainty frameworksGlobal priorities research
GovernanceAcademicResearcherSafetyPolicy
@tobyordoxfordWebsiteLinkedIn
VIEW DOSSIER →
Toby Pohlen

Toby Pohlen

Organization
xAI

Position

Co-founder (departed 2026)

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringFrontier Lab Leader
founderengineeringFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
🇨🇿
Tomas Mikolov

Tomas Mikolov

Nationality
Czech

Organization
BottleCap AI / Czech Technical University

Position

Co-Founder, BottleCap AI; Researcher, Czech Technical University in Prague

Expertise
Natural Language ProcessingWord EmbeddingsRecurrent Neural NetworksLanguage ModelingNLP
Education

PhD, Computer ScienceBrno University of Technology

Alignment

Independent researcher

Safety Stance

Skeptical of current LLM approaches to intelligence. Interested in more fundamental, mathematically grounded approaches to AI.

Notable Work
Word2VecFastTextRecurrent Neural Network Language ModelsSubword Embeddings
NLPResearcherFounderLab LeaderAcademic
Website
VIEW DOSSIER →
Tom Cunningham

Tom Cunningham

Organization
OpenAI

Position

Data Scientist

Expertise
economicsdata scienceAI productivityeconomics-policydata-scienceneeds-review
Alignment

Policy and governance operator

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
economic researchcollective alignment
economics-policydata-scienceneeds-reviewResearcherSafetyPolicy
VIEW DOSSIER →
Tomer Kaftan

Tomer Kaftan

Organization
OpenAI

Position

Inference Infrastructure & Deployment Lead

Expertise
inference infrastructuredeploymentreliabilitydeployment-infrastructureneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
GPT-4 inference infrastructureChatGPT Images core inference
deployment-infrastructurereliabilityneeds-reviewFrontier Lab LeaderResearcherSystemsProduct
VIEW DOSSIER →
🇺🇸
Tom Mitchell

Tom Mitchell

Nationality
American

Organization
Carnegie Mellon University

Position

Founders University Professor, Carnegie Mellon University

Expertise
Machine LearningNatural Language UnderstandingAI in EducationCognitive Neuroscience
Education

PhD, Electrical EngineeringStanford University

Alignment

Academic centrist

Safety Stance

Advocates for responsible AI development with emphasis on transparency and education. Believes AI should augment human capabilities, particularly in education.

Notable Work
Machine Learning textbookNever-Ending Language Learner (NELL)Brain imaging for NLUAI Mentors for Student Projects
AcademicResearcher
WebsiteLinkedIn
VIEW DOSSIER →
Travis Pepper

Travis Pepper

Organization
xAI

Position

Member of Technical Staff

Expertise
infrastructuremts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
infrastructuremts
infrastructuremtsResearcherSystems
LinkedIn
VIEW DOSSIER →
🇺🇸
Trevor Darrell

Trevor Darrell

Nationality
American

Organization
UC Berkeley

Position

Professor of Computer Science, UC Berkeley; Co-Director of BAIR; Faculty Director of PATH

Expertise
Computer VisionDeep LearningAutonomous VehiclesExplainable AI
Education

BSE, Computer ScienceUniversity of Pennsylvania

SM, Media Arts & SciencesMassachusetts Institute of Technology

PhD, Media Arts & SciencesMassachusetts Institute of Technology

Alignment

Academic pragmatist

Safety Stance

Advocates for explainable and interpretable AI systems. Focuses on trustworthy computer vision.

Notable Work
Caffe Deep Learning LibraryBerkeley DeepDriveDomain AdaptationVisual Question AnsweringMultimodal Learning
Computer VisionAcademicResearcherLab LeaderSystems
@trevordarrellWebsiteLinkedIn
VIEW DOSSIER →
Tulsee Doshi

Tulsee Doshi

Organization
Google DeepMind

Position

Senior Director and Head of Product, Gemini Model, Google DeepMind

Expertise
Gemini product strategyresponsible AImodel deploymentleadershipverified
Alignment

Safety-aligned researcher

Safety Stance

Publicly associated with responsible AI and product-level safeguards for Gemini.

Notable Work
Gemini product leadershipresponsible developmentmodel commercialization strategy
productleadershipsafetyverifiedFrontier Lab LeaderResearcherSafetyProduct
Website
VIEW DOSSIER →
Tyler Neylon

Tyler Neylon

Organization
Anthropic

Position

Research contributor

Expertise
economic researchAI usage analysis
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's public safety-first and research-driven framework.

Notable Work
Measuring AI agent autonomy in practiceAnthropic Economic Index reports
researchResearcherSafety
VIEW DOSSIER →
Tyna Eloundou

Tyna Eloundou

Organization
OpenAI

Position

Member of Technical Staff / Research Scientist

Expertise
AI impactssafety evaluationseconomic researchai-impactssafety-governanceneeds-review
Alignment

Policy and governance operator

Safety Stance

Publicly associated with safety evaluations, democratic inputs, and economic-impact research.

Notable Work
democratic inputs to AIsafety evaluationseconomic impact work
ai-impactssafety-governanceneeds-reviewResearcherSafetyPolicy
VIEW DOSSIER →
Uday Ruddaraju

Uday Ruddaraju

Organization
X / xAI

Position

Engineering leader involved with xAI (reported)

Expertise
infrastructureengineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
infrastructureengineering
infrastructureengineeringResearcherSystems
VIEW DOSSIER →
Vahid Kazemi

Vahid Kazemi

Organization
xAI

Position

Member of Technical Staff (departed 2026)

Expertise
engineering
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineering
engineeringresearchResearcher
VIEW DOSSIER →
🇮🇳🇺🇸
Vibhu Mittal

Vibhu Mittal

Nationality
Indian-American

Organization
Inflection AI

Position

CTO, Inflection AI

Expertise
Natural Language ProcessingMachine TranslationGenerative AINLPEx-GoogleEmotional AI
Education

PhD, Computer ScienceUniversity of Southern California

Alignment

Research-focused technologist

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
NLPEx-GoogleEmotional AINatural Language ProcessingMachine TranslationGenerative AI
CTONLPEx-GoogleEmotional AICEOResearcher
VIEW DOSSIER →
🇷🇺🇨🇦
Victoria Krakovna

Victoria Krakovna

Nationality
Russian-Canadian

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
AI SafetyAI AlignmentSpecification GamingSide Effects Avoidance
Education

BS, Statistics and MathematicsUniversity of Toronto

MS, StatisticsUniversity of Toronto

PhD, Statistics and Machine LearningHarvard University

Alignment

AI Safety Advocate

Safety Stance

Strong advocate for AI safety research. Co-founded the Future of Life Institute to mitigate existential risks from advanced technology. Works on technical alignment to ensure AI systems behave as intended.

Notable Work
Specification gaming examples databaseAvoiding side effects in RL agentsGoal misgeneralizationDeceptive alignment evaluations
AI SafetyResearcherFounderFrontier Lab LeaderAcademicSafetySystems
@vkrakovnaWebsiteLinkedIn
VIEW DOSSIER →
Vijaye Raji

Vijaye Raji

Organization
OpenAI

Position

CTO of Applications

Expertise
product engineeringapplications infrastructurechatgpt engineeringexecutive-leadershipapplicationsengineering
Alignment

Frontier lab operator

Safety Stance

Official materials tie his role to product integrity and core systems.

Notable Work
Product engineering leadership for ChatGPT and Codex
executive-leadershipapplicationsengineeringCEOFrontier Lab LeaderResearcherSystemsProduct
VIEW DOSSIER →
Vitaly Gudanets

Vitaly Gudanets

Organization
Anthropic

Position

Chief Information Security Officer

Expertise
securityenterprise trustgovernanceleadership
Alignment

Policy and governance operator

Safety Stance

Supports Anthropic's security- and reliability-focused deployment posture.

Notable Work
securityleadershipFrontier Lab Leaderenterprise trustgovernance
securityleadershipFrontier Lab LeaderResearcherPolicyProduct
VIEW DOSSIER →
Vitchyr Pong

Vitchyr Pong

Organization
OpenAI

Expertise
deep researchreinforcement learninggpt-4 alignment workreasoning-researchreinforcement-learningneeds-review
Alignment

Safety-aligned researcher

Safety Stance

Public profile centers on safety, evaluation, and reliability work around advanced AI systems.

Notable Work
deep researchGPT-4 alignment work
reasoning-researchreinforcement-learningneeds-reviewResearcherSafety
VIEW DOSSIER →
Vivek Natarajan

Vivek Natarajan

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
medical AIscientific machine learninglanguage models for medicinehealthmedical-aiscience
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Med-PaLMMed-PaLM 2AI for medicine
healthmedical-aiscienceverifiedResearcher
Website
VIEW DOSSIER →
Volodymyr Mnih

Volodymyr Mnih

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
reinforcement learningworld modelsdeep learningworld-models
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
DQNgeneralist agentsworld-model leadership
researchrlworld-modelsResearcher
Website
VIEW DOSSIER →
Wei Xia

Wei Xia

Organization
Google DeepMind

Position

Researcher and Engineer, Google DeepMind

Expertise
audiolarge language modelsmultimodal R&Dmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
audio + LLM systemsmultimodal product-facing researchspeech and media AI
researchaudiomultimodalResearcherSystemsProduct
Website
VIEW DOSSIER →
🇺🇸
Winston Weinberg

Winston Weinberg

Nationality
American

Organization
Harvey AI

Position

Co-Founder & CEO, Harvey AI

Expertise
Legal AILarge Language ModelsEnterprise AI
Education

BA, Liberal ArtsKenyon College

JD, LawUSC Gould School of Law

Alignment

Capital allocator

Safety Stance

Public safety posture is not yet fully documented; this profile currently reflects role, organization, and research area.

Notable Work
Legal AIEnterprise AILarge Language Models
FounderCEOLegal AIEnterprise AILab LeaderResearcherInvestor
VIEW DOSSIER →
Wyatt Thompson

Wyatt Thompson

Organization
OpenAI

Expertise
deep researchmultimodal and reasoning systemsevaluationreasoning-researchneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchChatGPT Images
reasoning-researchevaluationneeds-reviewResearcherSystems
VIEW DOSSIER →
Xingchen Wan

Xingchen Wan

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
machine learningprobabilistic modelingdecision-makingprobabilistic-mltheory
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational ML researchprobabilistic methodslearning systems
researchprobabilistic-mltheoryResearcher
Website
VIEW DOSSIER →
Xingyou Song

Xingyou Song

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
machine learning theoryoptimizationagentstheory
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational ML researchoptimization methodsagent learning
researchtheoryoptimizationResearcher
Website
VIEW DOSSIER →
YaGuang Li

YaGuang Li

Organization
Google DeepMind

Position

Senior Staff Research Engineer, Google DeepMind

Expertise
model tuningserving efficiencylarge language modelsengineeringllmsgemini
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Gemini 1.5 finetuningLaMDAPaLM 2
engineeringllmsgeminiResearcher
Website
VIEW DOSSIER →
Yann Dubois

Yann Dubois

Organization
OpenAI

Expertise
deep researchreasoning systemsgpt-5reasoning-researchlanguage-modelsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchGPT-5
reasoning-researchlanguage-modelsneeds-reviewResearcherSystems
VIEW DOSSIER →
🇫🇷
Yann LeCun

Yann LeCun

Nationality
French

Organization
AMI Labs

Position

Founder & Executive Chairman, AMI Labs (Advanced Machine Intelligence)

Expertise
Deep LearningConvolutional Neural NetworksComputer VisionWorld ModelsTuring AwardOpen Source
Education

PhD, Computer SciencePierre and Marie Curie University

Alignment

Open-source AI advocate

Safety Stance

Skeptic of existential risk framing. Believes AI safety concerns are overblown and that open-source is the safest path.

Notable Work
Convolutional Neural NetworksLeNetSelf-Supervised LearningJEPAV-JEPA 2
FounderResearcherTuring AwardOpen SourceWorld ModelsLab LeaderAcademicSafety
@ylecunWebsiteLinkedIn
VIEW DOSSIER →
Yasmin Razavi

Yasmin Razavi

Organization
Anthropic

Position

Board Member

Expertise
corporate governanceboard oversightboardgovernance
Alignment

Policy and governance operator

Safety Stance

Governance role supporting Anthropic's public-benefit mission.

Notable Work
boardgovernancecorporate governanceboard oversight
boardgovernanceResearcherPolicy
VIEW DOSSIER →
🇰🇷🇺🇸
Yejin Choi

Yejin Choi

Nationality
South Korean-American

Organization
Stanford University

Position

Dieter Schwarz Foundation HAI Professor and Professor of Computer Science, Stanford University

Expertise
Natural Language ProcessingCommonsense ReasoningAI EthicsLanguage GenerationNLP
Education

BS, Computer EngineeringSeoul National University

PhD, Computer ScienceCornell University

Alignment

Thoughtful centrist on AI policy

Safety Stance

Advocates for building AI systems that understand human values and commonsense norms. Concerned about the gap between language fluency and actual understanding in LLMs.

Notable Work
COMET (Commonsense Transformers)Delphi (moral reasoning)ATOMIC knowledge graphNeural language degeneration researchCommonsense knowledge and reasoning
NLPAcademicResearcherPolicySystems
@YejinChoinkaWebsiteLinkedIn
VIEW DOSSIER →
Yinxiao Li

Yinxiao Li

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
image generationpost-trainingmultimodal modelsimage-generationmultimodal
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Imagen 4 contributionspost-training for generative modelsNano Banana-related work
researchimage-generationmultimodalResearcher
Website
VIEW DOSSIER →
Yong Cheng

Yong Cheng

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
machine translationNLPmachine intelligencenlptranslation
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
multilingual modelingtranslation researchscientific and multimodal NLP
researchnlptranslationResearcher
Website
VIEW DOSSIER →
Yuhuai (Tony) Wu

Yuhuai (Tony) Wu

Organization
xAI

Position

Co-founder (departed 2026)

Expertise
reasoning
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
reasoningFrontier Lab Leader
founderreasoningFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Yushi Wang

Yushi Wang

Organization
OpenAI

Expertise
deep researchoperator systemsreasoningreasoning-researchagentsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchOperator
reasoning-researchagentsneeds-reviewResearcherSystems
VIEW DOSSIER →
Zack Lee

Zack Lee

Organization
Anthropic

Position

Education / technical support contributor

Expertise
technical supportAI fluencyeducation research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Education Report: The AI Fluency Index
education researchResearcherSafety
VIEW DOSSIER →
Zhaohan Dong

Zhaohan Dong

Organization
xAI

Position

SDK/cookbook contributor (xai-org)

Expertise
developer_experience
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
developer_experience
developer_experienceResearcher
VIEW DOSSIER →
Zhen Qin

Zhen Qin

Organization
Google DeepMind

Position

Staff Research Scientist, Google DeepMind

Expertise
foundation modelsranking and retrievalNLPllmsretrieval
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
foundational large modelsretrieval-related MLindustrial NLP
researchllmsretrievalResearcher
Website
VIEW DOSSIER →
Zhiqing Sun

Zhiqing Sun

Organization
OpenAI

Position

Research Lead, Deep Research

Expertise
deep researchagentic systemsresearchreasoning-researchagentsneeds-review
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
deep researchOperator
reasoning-researchagentsneeds-reviewFrontier Lab LeaderResearcherSystems
VIEW DOSSIER →
Zhiwei Deng

Zhiwei Deng

Organization
Google DeepMind

Position

Research Scientist, Google DeepMind

Expertise
intelligent agentsself-supervised learningmemory systemsagentsmemory
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
system-1/system-2 style learningworld modelsmemory-augmented networks
researchagentsmemoryResearcherSystems
Website
VIEW DOSSIER →
Zicheng Zhou

Zicheng Zhou

Organization
xAI

Position

Member of Technical Staff

Expertise
engineeringmts
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
engineeringmts
engineeringmtsResearcher
LinkedIn
VIEW DOSSIER →
Zihang Dai

Zihang Dai

Organization
xAI

Position

Co-founder

Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
Frontier Lab Leader
founderresearchFounderFrontier Lab LeaderResearcher
VIEW DOSSIER →
Zoe Ludwig

Zoe Ludwig

Organization
Anthropic

Position

Education research contributor

Expertise
AI fluencyeducationeducation research
Alignment

Safety-aligned researcher

Safety Stance

Works within Anthropic's privacy-preserving, safety-first research framework.

Notable Work
Anthropic Education Report: The AI Fluency Index
education researchResearcherSafety
VIEW DOSSIER →
Zoubin Ghahramani

Zoubin Ghahramani

Organization
Google / Google DeepMind

Position

VP of Research, Google; member of Google DeepMind research leadership

Expertise
probabilistic machine learningresearch leadershipBayesian methodsleadershipprobabilistic-mltheory
Alignment

Frontier lab operator

Safety Stance

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Notable Work
probabilistic MLGoogle Brain leadershipresearch strategy
leadershipprobabilistic-mltheoryFrontier Lab LeaderResearcher
Website
VIEW DOSSIER →