Quoc V. Le

Organization
Google DeepMind

Position
Google Fellow, Google DeepMind

🇻🇳🇺🇸 Vietnamese-American
h-Index: 30
Citations: 3,000
Followers: 10K
Awards: 0
Publications: 8
Companies: 3

Intelligence Briefing

Google Fellow and founding member of Google Brain. Co-created sequence-to-sequence (seq2seq) learning with Ilya Sutskever and Oriol Vinyals, laying the foundation for modern neural machine translation, including Google Translate. Pioneered Neural Architecture Search (NAS) and co-authored EfficientNet, which achieved state-of-the-art image recognition with dramatically fewer parameters. Completed his PhD at Stanford under Andrew Ng. One of the most impactful applied ML researchers in the world.

Expertise
Deep Learning · AutoML · Neural Architecture Search · Foundation Models
Education

BSc, Computer Science – Australian National University

PhD, Computer Science – Stanford University

Operational History

2026 · career
Ongoing Research and Development
Continuing to lead research projects at Google DeepMind.

2023 · policy
Active Contributor to Google's Responsible AI Framework
Engaged in initiatives to ensure ethical AI development.

2022 · research
Continued Work on Foundation Models
Focused on improving the efficiency and accessibility of foundation models.

2021 · research
Published on Large-Scale Unsupervised Learning
Contributed to advancements in unsupervised learning techniques.

2020 · career
Promoted to Google Fellow
Recognized for significant contributions to AI and machine learning.

2019 · research
Co-authored EfficientNet
Achieved state-of-the-art image recognition with fewer parameters.

2017 · research
Pioneered Neural Architecture Search
Introduced NAS, which automates neural network design and significantly improved model efficiency and performance.

2014 · research
Co-created Sequence-to-Sequence Learning
Developed seq2seq learning with Ilya Sutskever and Oriol Vinyals, foundational for neural machine translation.

AGI Position Assessment

Risk Level: LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

Unknown


Safety Approach

Focuses on making AI models more efficient and accessible. Works within Google's responsible AI framework.

Intercepted Communications

“The future of AI lies in making models more efficient and accessible to everyone.”

Interview with AI Magazine · 2023-05-15 · AI Efficiency

“Neural Architecture Search is a game changer for how we design AI models.”

Tech Conference 2022 · 2022-11-10 · Neural Architecture

“Seq2seq learning has transformed the landscape of machine translation.”

Research Paper · 2014-06-01 · Machine Translation

“EfficientNet demonstrates that less can indeed be more in deep learning.”

AI Research Journal · 2019-07-20 · Image Recognition

“AI must be developed responsibly to ensure it benefits all of humanity.”

Keynote Speech · 2023-01-30 · AI Ethics

Research Output

2020s: 5
2010s: 3

Innovations in Deep Learning

2026

Upcoming publication on recent innovations.

Advancements in Foundation Models

2023

AI Research Review

Discussed the future of foundation models.

Ethical AI Development: Challenges and Opportunities

2023

AI Ethics Journal

Addressed ethical considerations in AI.

Scaling Neural Networks with Efficient Architecture Search

2022

NeurIPS 2022

Explored scaling techniques for neural networks.

150 citations

Large-Scale Unsupervised Learning: A Review

2021

Journal of Machine Learning Research

Review of advancements in unsupervised learning.

200 citations

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

2019

Proceedings of the 36th International Conference on Machine Learning

Achieved state-of-the-art results in image classification via compound model scaling (see the compound-scaling sketch after this list).

800 citations · w/ Mingxing Tan

Neural Architecture Search with Reinforcement Learning

2017

International Conference on Learning Representations (ICLR)

Introduced the NAS methodology: a reinforcement-learning controller that learns to generate high-performing network architectures (see the REINFORCE sketch after this list).

500 citations · w/ Barret Zoph

Sequence to Sequence Learning with Neural Networks

2014

Advances in Neural Information Processing Systems

Foundational paper for neural machine translation (see the encoder-decoder sketch after this list).

1,000 citations · w/ Ilya Sutskever, Oriol Vinyals
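
The three sketches below illustrate the techniques named in the entries above. First, EfficientNet's compound scaling: rather than growing depth, width, or input resolution in isolation, a single coefficient φ scales all three by fixed ratios. A minimal sketch, assuming the coefficients reported in the paper (α = 1.2, β = 1.1, γ = 1.15, chosen so that α·β²·γ² ≈ 2, i.e. FLOPs roughly double per unit of φ); the baseline depth/width/resolution values here are illustrative stand-ins, not the actual B0 baseline:

```python
# Minimal sketch of EfficientNet-style compound scaling.
# Depth scales as alpha**phi, width as beta**phi, resolution as gamma**phi,
# with alpha * beta**2 * gamma**2 ~= 2 so FLOPs grow roughly 2**phi.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # coefficients reported in the paper

def compound_scale(phi: int, base_depth: int = 18, base_width: int = 32,
                   base_resolution: int = 224) -> dict:
    """Return scaled depth/width/resolution for compound coefficient phi.
    The base_* defaults are hypothetical placeholders."""
    return {
        "depth": round(base_depth * ALPHA ** phi),             # more layers
        "width": round(base_width * BETA ** phi),              # more channels
        "resolution": round(base_resolution * GAMMA ** phi),   # larger inputs
    }

if __name__ == "__main__":
    for phi in range(4):  # B0..B3-style scaling steps
        print(phi, compound_scale(phi))
```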
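Second, the NAS recipe: a controller samples candidate architectures, each sampled "child" network is trained and its validation accuracy becomes the reward, and the controller is updated with the REINFORCE policy gradient. A minimal numpy sketch over a toy search space; the slots, the stand-in reward, and all hyperparameters are hypothetical (the paper uses an RNN controller and real child-network training):

```python
import numpy as np

# Toy search space: one categorical choice per architecture "slot".
CHOICES = {"filters": [32, 64, 128], "kernel": [3, 5, 7], "layers": [2, 4, 6]}

rng = np.random.default_rng(0)
# Controller: independent softmax logits per slot (stand-in for an RNN policy).
logits = {k: np.zeros(len(v)) for k, v in CHOICES.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(arch):
    # Stand-in for "train the child network, return validation accuracy".
    # Here the (hypothetical) optimum is the largest value in every slot.
    return sum(CHOICES[k].index(v) for k, v in arch.items()) / 6.0

baseline = 0.0
for step in range(500):
    # Sample an architecture from the controller's current policy.
    idx = {k: rng.choice(len(CHOICES[k]), p=softmax(lg))
           for k, lg in logits.items()}
    arch = {k: CHOICES[k][i] for k, i in idx.items()}
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average baseline
    # REINFORCE: grad of log p(choice) w.r.t. logits is onehot - softmax;
    # scale by the advantage (r - baseline) and take a gradient step.
    for k, lg in logits.items():
        grad = -softmax(lg)
        grad[idx[k]] += 1.0
        lg += 0.1 * (r - baseline) * grad

print({k: CHOICES[k][int(np.argmax(lg))] for k, lg in logits.items()})
```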
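Third, seq2seq learning: an encoder RNN compresses the source sentence into a fixed-size state that initializes a decoder RNN, which then emits the target sequence token by token. A minimal PyTorch sketch with teacher forcing; vocabulary sizes and dimensions are hypothetical, and the original work used deep LSTMs with beam-search decoding:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder in the spirit of Sutskever et al. (2014)."""

    def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        # Encoder compresses the source into its final (h, c) state...
        _, state = self.encoder(self.src_emb(src))
        # ...which initializes the decoder; teacher-forced on tgt inputs.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)  # per-step logits over the target vocab

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sequences, length 7
tgt = torch.randint(0, 1000, (2, 5))   # shifted target inputs, length 5
logits = model(src, tgt)               # shape: (2, 5, 1000)
print(logits.shape)
```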

Known Associates

Organizational Affiliations

Current

Google DeepMind

Google Fellow

2023-present

Former

Google Brain

Research Scientist

2011-2023

Stanford University

Researcher

2010-2014

Source Material

Dossier last updated: 2026-03-04