Lukasz Kaiser


AI Researcher

Organization
OpenAI

Position
Research Scientist

h-Index: 25
Citations: 12,000
Followers: --
Awards: 0
Publications: 4
Companies: 2

Intelligence Briefing

Lukasz Kaiser is a prominent AI researcher known for his work on the Transformer architecture and on context modeling in natural language processing.

Lukasz Kaiser has made significant contributions to the field of artificial intelligence, particularly in the development and understanding of Transformer models, which have revolutionized natural language processing. He is currently a Research Scientist at OpenAI, where he continues to advance AI technologies. Prior to his role at OpenAI, he worked at Google, contributing to various AI projects.

Expertise
Transformers · Context
Education

BS, University of Wroclaw

PhD, RWTH Aachen University

Operational History

2023 · award
Keynote Speaker at NeurIPS
Delivered a keynote address on the future of Transformers in AI.

2021 · career
Joining OpenAI
Transitioned to OpenAI as a Research Scientist, continuing work on advanced AI models.

2021 · research
Publication of 'The Power of Scale for Parameter-Efficient Prompt Tuning'
Investigated methods for efficient tuning of large language models.

2021 · research
Publication of 'Transformers for Natural Language Processing'
Contributed to a comprehensive overview of Transformer architectures and their applications.

2020 · research
Publication of 'Language Models are Few-Shot Learners'
Co-authored a paper that demonstrated the few-shot learning capabilities of large language models.

2020 · research
Publication of 'Scaling Laws for Neural Language Models'
Published research on the scaling laws that govern the performance of neural language models.

2017 · research
Publication of 'Attention is All You Need'
Co-authored the seminal paper that introduced the Transformer model, which has become foundational in NLP.

2013 · career
Joining Google
Became a part of Google's AI research team, focusing on natural language processing.

AGI Position Assessment

Risk Level
LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

Unknown


Safety Approach

Primarily capability-focused public profile; safety posture here is inferred from frontier-model development and launch-readiness work rather than standalone public advocacy.

Intercepted Communications

"Transformers have changed the landscape of natural language processing and continue to evolve."
Interview with AI Weekly · 2022-05-10 · Transformers

"The future of AI lies in understanding context and how it shapes language."
Talk at AI Summit · 2023-03-15 · Context

"Scaling models is not just about size; it's about understanding the underlying principles."
Research Presentation · 2021-11-20 · Scaling Laws

"AI must be developed responsibly, with a focus on safety and ethics."
Panel Discussion · 2023-01-30 · AI Safety

"Collaboration across disciplines is key to advancing AI research."
Keynote Speech · 2022-09-05 · Collaboration

Research Output

2020s: 3
2010s: 1

The Power of Scale for Parameter-Efficient Prompt Tuning
2021 · EMNLP
Investigated efficient tuning methods for large models.
w/ Lukasz Kaiser, others
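The core idea of prompt tuning is to keep the language model frozen and learn only a handful of "soft prompt" vectors that are prepended to the input embeddings. A minimal sketch of that input construction (all dimensions here are illustrative, not values from the paper):

```python
import numpy as np

# Sketch of prompt tuning's input construction: trainable soft-prompt
# vectors are concatenated in front of the frozen model's token
# embeddings; only `soft_prompt` would receive gradient updates.
d_model, prompt_len, seq_len = 16, 4, 6  # illustrative sizes

rng = np.random.default_rng(0)
soft_prompt = rng.normal(size=(prompt_len, d_model))    # trainable parameters
token_embeddings = rng.normal(size=(seq_len, d_model))  # frozen embedding output

model_input = np.concatenate([soft_prompt, token_embeddings], axis=0)
print(model_input.shape)  # (10, 16)
```

The trainable parameter count is just `prompt_len * d_model`, which is why the method is called parameter-efficient: the frozen model can be shared across tasks that differ only in their learned prompts.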

Scaling Laws for Neural Language Models
2020 · arXiv
Explored the relationship between model size and performance.
500 citations · w/ Jared Kaplan, Sam McCandlish, et al.
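The paper's headline result is that test loss falls as a power law in model size. A sketch of the fitted form for non-embedding parameter count N (the constants are the paper's reported fit, quoted here as illustrative values):

```python
# Power-law scaling of loss with non-embedding parameter count N:
#   L(N) = (N_c / N) ** alpha_N
# Constants below are the paper's fitted values, used illustratively.
N_C = 8.8e13      # scale constant of the fit
ALPHA_N = 0.076   # fitted exponent

def loss_vs_params(n_params: float) -> float:
    """Predicted test loss for a model with n_params non-embedding parameters."""
    return (N_C / n_params) ** ALPHA_N

# Doubling model size multiplies loss by 2**-alpha_N (~5% reduction).
for n in (1e8, 1e9, 1e10):
    print(f"N={n:.0e}  predicted loss={loss_vs_params(n):.3f}")
```

The practical reading is that each doubling of parameters buys a fixed multiplicative improvement in loss, so gains are steady but ever more expensive.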

Language Models are Few-Shot Learners
2020 · NeurIPS
Demonstrated few-shot learning capabilities of large models.
8,000 citations · w/ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al.
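"Few-shot" here means conditioning the model on a handful of worked examples placed directly in the prompt, with no gradient updates. A minimal sketch of such a prompt (the template is illustrative, though the sea-otter pair echoes the paper's translation demo):

```python
# Few-shot "in-context learning": k demonstrations are placed in the
# prompt and the model is asked to complete the final line. No weights
# are updated; the examples steer behavior purely through conditioning.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "mint"

prompt = "Translate English to French.\n"
for en, fr in examples:           # k = 2 demonstrations
    prompt += f"{en} => {fr}\n"
prompt += f"{query} =>"           # the model would complete this line

print(prompt)
```

The paper's finding was that as models grow, completion quality from such prompts improves sharply, often approaching fine-tuned baselines.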

Attention is All You Need
2017 · NeurIPS
Introduced the Transformer architecture.
10,000 citations · w/ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
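The paper's central building block is scaled dot-product attention, softmax(QKᵀ/√d_k)V. A self-contained sketch (the tensor shapes are illustrative, not the paper's settings):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

The 1/√d_k scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients; the full architecture stacks many such attention heads with feed-forward layers.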

Field Intelligence

The Future of Transformers

AI Conference · 2023-02-10 · 45 minutes

Understanding Context in AI

Tech Podcast · 2022-08-20 · 30 minutes

Known Associates

Organizational Affiliations

Current

OpenAI

AI Researcher

2021-present

Former

Google

AI Researcher

2013-2021

Dossier last updated: 2026-03-04