
Lukasz Kaiser
AI Researcher
Organization
OpenAI
Position
Research Scientist
Intelligence Briefing
Lukasz Kaiser is a prominent AI researcher, best known for co-authoring the Transformer architecture and for his work on context and sequence modeling in natural language processing.
Lukasz Kaiser has made significant contributions to the field of artificial intelligence, particularly in the development and understanding of Transformer models, which have revolutionized natural language processing. He is currently a Research Scientist at OpenAI, where he continues to advance large language models. Prior to joining OpenAI, he worked on Google's AI research teams, where he co-authored 'Attention is All You Need' and contributed to a range of natural language processing projects.
BS, University of Wrocław
PhD, RWTH Aachen University
Operational History
Keynote Speaker at NeurIPS (award)
Delivered a keynote address on the future of Transformers in AI.
Publication of 'Language Models are Few-Shot Learners' (research)
Co-authored a paper that demonstrated the few-shot learning capabilities of large language models.
Publication of 'The Power of Scale for Parameter-Efficient Prompt Tuning' (research)
Investigated methods for efficient tuning of large language models.
Joining OpenAI (career)
Transitioned to OpenAI as a Research Scientist, continuing work on advanced AI models.
Publication of 'Transformers for Natural Language Processing' (research)
Contributed to a comprehensive overview of Transformer architectures and their applications.
Publication of 'Scaling Laws for Neural Language Models' (research)
Published research on the scaling laws that govern the performance of neural language models.
Joining Google (career)
Became a part of Google's AI research team, focusing on natural language processing.
Publication of 'Attention is All You Need' (research)
Co-authored the seminal paper that introduced the Transformer model, which has become foundational in NLP.
AGI Position Assessment
Unknown
Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.
Intercepted Communications
“Transformers have changed the landscape of natural language processing and continue to evolve.”
“The future of AI lies in understanding context and how it shapes language.”
“Scaling models is not just about size; it's about understanding the underlying principles.”
“AI must be developed responsibly, with a focus on safety and ethics.”
“Collaboration across disciplines is key to advancing AI research.”
Research Output
The Power of Scale for Parameter-Efficient Prompt Tuning
2021, EMNLP
Investigated efficient tuning methods for large models.
Scaling Laws for Neural Language Models
2020, arXiv
Explored the relationship between model size and performance.
Language Models are Few-Shot Learners
2020, NeurIPS
Demonstrated few-shot learning capabilities of large models.
Attention is All You Need
2017, NeurIPS
Introduced the Transformer architecture.
Field Intelligence
The Future of Transformers
Understanding Context in AI
Known Associates
Ashish Vaswani (collaborator)
Co-authored the paper 'Attention is All You Need'.
Dario Amodei (collaborator)
Worked together on scaling laws research.
Sam Altman (colleague)
Collaborates at OpenAI.
Noam Shazeer (collaborator)
Co-authored multiple influential papers in NLP.
Organizational Affiliations
Current
OpenAI
Research Scientist
2021-present
Former
Google
AI Researcher
2019-2021
Dossier last updated: 2026-03-04