Thomas Wolf


AI Researcher and Open Source Advocate

Organization
Hugging Face

Position
Co-founder & Chief Science Officer, Hugging Face

🇫🇷 French
h-Index: 25
Citations: 3,000
Followers: 50,000
Awards: 0
Publications: 5
Companies: 1

Intelligence Briefing

Co-founded Hugging Face and created the Transformers library, which became the most widely used open-source NLP/ML library in the world. Led the BigScience Workshop that produced BLOOM, a 176B parameter open-source LLM. Warned in March 2025 that AI risks becoming "yes-men on servers" without breakthroughs in novel reasoning.

Expertise
Natural Language Processing · Open-Source ML · Quantum Physics · Transfer Learning · Open Source Leader
Education

MSc, Theoretical Physics – École Polytechnique

PhD, Quantum Statistical Physics – Sorbonne University

Law Degree, Intellectual Property Law – Panthéon-Sorbonne University

Operational History

2025

Warning on AI Risks

Warned that AI risks becoming "yes-men on servers" without breakthroughs in novel reasoning.

policy
2022

BigScience Workshop

Led the BigScience Workshop that produced BLOOM, a 176B parameter open-source LLM.

research
2021

Launch of Hugging Face Datasets

Introduced the Hugging Face Datasets library to facilitate easy access to datasets for machine learning.

research
2019

Release of Transformers Library

Launched the Transformers library, which became the most widely used open-source NLP library.

research
2016

Co-founder of Hugging Face

Co-founded Hugging Face, focusing on natural language processing and open-source machine learning.

founding

AGI Position Assessment

Predicted AGI Timeline

Unknown


Safety Approach

Advocates for open science and reproducibility as safety mechanisms. Concerned that over-reliance on AI without novel reasoning capabilities creates systemic risks. Believes democratization of AI through open-source reduces concentration of power.

Intercepted Communications

“AI risks becoming 'yes-men on servers' without breakthroughs in novel reasoning.”

Interview · 2025-03-01 · AI Risks

“Open science and reproducibility are essential safety mechanisms in AI.”

Conference Talk · 2023-06-15 · AI Safety

“Democratization of AI through open-source reduces concentration of power.”

Blog Post · 2024-09-10 · Open Source

“The future of AI depends on our ability to innovate in reasoning capabilities.”

Panel Discussion · 2025-01-20 · AI Future

“We must ensure that AI serves humanity, not the other way around.”

Keynote Speech · 2025-05-05 · Ethics in AI

Research Output

2020s: 5

Open Science in AI: Challenges and Opportunities

2024

AI Journal

Discussed the importance of open science in AI.

SmolLM: Efficient Language Models for Low-Resource Environments

2023

arXiv

Proposed efficient models for low-resource settings.

200 citations

BLOOM: A 176B Parameter Open-Source Language Model

2022

arXiv

Introduced a large-scale open-source language model.

800 citations · w/ BigScience Collaboration

Hugging Face Datasets: A New Era for Data in ML

2021

arXiv

Described the datasets library and its impact on ML.

600 citations

Transformers: State-of-the-Art Natural Language Processing

2020

arXiv

Pioneering work on transformer models for NLP.

1,500 citations · w/ Alexis Conneau, Julian Chaudhary

Field Intelligence

The Future of NLP with Transformers

YouTube · 2023-07-10 · 1:00:00

AI Ethics and Open Source

Podcast · 2024-02-15 · 45:00

BigScience and the Future of AI

Conference · 2022-11-05 · 30:00

Innovations in Language Models

Webinar · 2023-03-20 · 50:00

Known Associates

Organizational Affiliations

Current

Hugging Face

Chief Science Officer

2016 - Present

Source Material

Dossier last updated: 2026-03-04