Ilya Sutskever

Organization: Safe Superintelligence Inc. (SSI)

Position: Co-Founder & CEO, Safe Superintelligence Inc. (SSI)

Nationality: 🇮🇱🇨🇦 Israeli-Canadian

h-Index: 50
Citations: 100,000
Followers: 200K
Awards: 0
Publications: 8
Companies: 3

Intelligence Briefing

Co-founded Safe Superintelligence Inc. (SSI) in June 2024 after departing OpenAI as Chief Scientist. Became CEO of SSI in July 2025 after co-founder Daniel Gross left. SSI raised $3B at a $32B valuation with no product plans, focused solely on building safe superintelligent AI. Trained under Geoffrey Hinton and co-created AlexNet.

Expertise
Deep Learning · Neural Network Training · Language Models · AI Safety · Superintelligence
Education

PhD, Computer Science, University of Toronto

BSc, Mathematics, University of Toronto

Operational History

2025 · career
Became CEO of SSI
Ilya Sutskever became CEO of Safe Superintelligence Inc. after co-founder Daniel Gross departed.

2024 · founding
Co-founded Safe Superintelligence Inc.
Ilya Sutskever co-founded Safe Superintelligence Inc. (SSI) after leaving OpenAI.

2018 · research
Contributed to GPT series
Ilya Sutskever contributed as a co-architect to the development of the GPT series of models, beginning with the original GPT in 2018.

2015 · career
Joined OpenAI as Chief Scientist
Ilya Sutskever joined OpenAI as Chief Scientist, focusing on AI safety and research.

2013 · career
Research Scientist at Google Brain
Ilya Sutskever worked as a Research Scientist at Google Brain, focusing on deep learning.

2012 · research
Co-created AlexNet
Ilya Sutskever co-created AlexNet, a groundbreaking convolutional neural network, with Alex Krizhevsky and Geoffrey Hinton.

2012 · founding
Co-founded DNNResearch
Ilya Sutskever co-founded DNNResearch, which Google acquired in 2013.

AGI Position Assessment

Risk Level
LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

Unknown

Safety Approach

Deeply committed to AI safety. Left OpenAI over safety concerns and founded SSI with the singular mission of building safe superintelligence. Believes superintelligence is the most important technical problem of our time and must be solved safely.

Intercepted Communications

"Superintelligence is the most important technical problem of our time."
Ilya Sutskever, 2024-06-01, AI Safety

"We must ensure that AI systems are aligned with human values."
Ilya Sutskever, 2025-01-15, AI Alignment

"The future of AI depends on our ability to build it safely."
Ilya Sutskever, 2025-03-10, AI Development

"Leaving OpenAI was a necessary step for my commitment to AI safety."
Ilya Sutskever, 2024-07-20, Career Move

"Building safe superintelligence is our singular mission at SSI."
Ilya Sutskever, 2025-08-05, Company Mission

Research Output

2020s: 3
2010s: 5

Scaling Up AI: The Future of AI Safety

2025

Discusses the future of AI safety in the context of superintelligence.

Scaling Laws for Neural Language Models

2020

arXiv

Analyzed scaling laws for language models.

8,000 citations · w/ Jared Kaplan, Sam McCandlish, Tom Brown, Girish Sastry, Amanda Askell, Jesse Dodge, John Schulman, Dario Amodei

Language Models are Few-Shot Learners

2020

NeurIPS

Introduced GPT-3, showcasing few-shot learning capabilities.

30,000 citations · w/ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Girish Sastry, Amanda Askell, John Schulman, Dario Amodei

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

2019

arXiv

Significant advancement in natural language understanding.

12,000 citations · w/ Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Generative Pre-trained Transformer

2018

arXiv

Introduced the GPT model, a breakthrough in language modeling.

15,000 citations · w/ Alec Radford, Karthik Narasimhan, Tim Salimans

Attention is All You Need

2017

NIPS

Introduced the Transformer architecture, foundational for modern NLP.

20,000 citations · w/ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin

Sequence to Sequence Learning with Neural Networks

2014

NIPS

Introduced sequence-to-sequence learning framework.

5,000 citations · w/ Oriol Vinyals, Quoc V. Le

ImageNet Classification with Deep Convolutional Neural Networks

2012

NIPS

Pioneering work in deep learning and computer vision.

10,000 citations · w/ Alex Krizhevsky, Geoffrey Hinton

Known Associates

Organizational Affiliations

Current

Safe Superintelligence Inc. (SSI) · CEO · 2024-present

Former

OpenAI · Chief Scientist · 2015-2024

Google Brain · Research Scientist · 2013-2015

Source Material

Dossier last updated: 2026-03-04