
Intelligence Briefing
Co-founded Safe Superintelligence Inc. (SSI) in June 2024 after departing OpenAI, where he served as Chief Scientist. Became CEO of SSI in July 2025 after co-founder Daniel Gross departed. SSI has raised roughly $3B, most recently at a $32B valuation, with no near-term product plans; its sole focus is building safe superintelligent AI. Trained under Geoffrey Hinton at the University of Toronto and co-created AlexNet.
PhD, Computer Science — University of Toronto
BSc, Mathematics — University of Toronto
Operational History
Became CEO of SSI (career)
Ilya Sutskever became the CEO of Safe Superintelligence Inc. after co-founder Daniel Gross departed.
Co-founded Safe Superintelligence Inc. (founding)
Ilya Sutskever co-founded Safe Superintelligence Inc. (SSI) after leaving OpenAI.
Contributed to GPT series (research)
Ilya Sutskever contributed as a co-architect to the development of the GPT series of models.
Joined OpenAI as Chief Scientist (career)
Ilya Sutskever joined OpenAI as Chief Scientist, focusing on AI safety and research.
Research Scientist at Google Brain (career)
Ilya Sutskever worked as a Research Scientist at Google Brain, focusing on deep learning.
Co-created AlexNet (research)
Ilya Sutskever co-created AlexNet, a groundbreaking convolutional neural network.
Co-founded DNNResearch (founding)
Ilya Sutskever co-founded DNNResearch, which was later acquired by Google.
AGI Position Assessment
Unknown
Deeply committed to AI safety. Left OpenAI over safety concerns and founded SSI with the singular mission of building safe superintelligence. Believes superintelligence is the most important technical problem of our time and must be solved safely.
Intercepted Communications
“Superintelligence is the most important technical problem of our time.”
“We must ensure that AI systems are aligned with human values.”
“The future of AI depends on our ability to build it safely.”
“Leaving OpenAI was a necessary step for my commitment to AI safety.”
“Building safe superintelligence is our singular mission at SSI.”
Research Output
Scaling Up AI: The Future of AI Safety (2025)
Discusses the future of AI safety in the context of superintelligence.
Scaling Laws for Neural Language Models (2020, arXiv)
Analyzed scaling laws for language models.
Language Models are Few-Shot Learners (2020, NeurIPS)
Introduced GPT-3, showcasing few-shot learning capabilities.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2019, arXiv)
A significant advancement in natural language understanding.
Generative Pre-trained Transformer (2018, arXiv)
Introduced the GPT model, a breakthrough in language modeling.
Attention Is All You Need (2017, NIPS)
Introduced the Transformer architecture, foundational for modern NLP.
Sequence to Sequence Learning with Neural Networks (2014, NIPS)
Introduced the sequence-to-sequence learning framework.
ImageNet Classification with Deep Convolutional Neural Networks (2012, NIPS)
Pioneering work in deep learning and computer vision.
Known Associates
Geoffrey Hinton (mentor)
Mentored Ilya Sutskever during his PhD studies at the University of Toronto.
Daniel Gross (co-founder)
Co-founded Safe Superintelligence Inc. with Ilya Sutskever.
Sam Altman (colleague)
Worked together at OpenAI, where Sam Altman served as CEO.
Alex Krizhevsky (collaborator)
Collaborated on the development of AlexNet.
Organizational Affiliations
Current
Safe Superintelligence Inc. (SSI)
CEO
2024-present
Former
OpenAI
Chief Scientist
2015-2024
Google Brain
Research Scientist
2013-2015
Source Material
Dossier last updated: 2026-03-04