
Intelligence Briefing
Google Fellow and founding member of Google Brain. Co-created sequence-to-sequence (seq2seq) learning with Ilya Sutskever and Oriol Vinyals, laying the foundation for modern machine translation (Google Translate). Pioneered Neural Architecture Search (NAS) and co-authored EfficientNet, achieving state-of-the-art image recognition with dramatically fewer parameters. Earned a PhD at Stanford under Andrew Ng. One of the most impactful applied ML researchers in the world.
BSc, Computer Science – Australian National University
PhD, Computer Science – Stanford University
Operational History
Ongoing Research and Development (career)
Continuing to lead innovative projects at Google DeepMind.
Active Contributor to Google's Responsible AI Framework (policy)
Engaged in initiatives to ensure ethical AI development.
Continued Work on Foundation Models (research)
Focused on improving the efficiency and accessibility of foundation models.
Published on Large-Scale Unsupervised Learning (research)
Contributed to advancements in unsupervised learning techniques.
Promoted to Google Fellow (career)
Recognized for significant contributions to AI and machine learning.
Co-authored EfficientNet (research)
Achieved state-of-the-art image recognition with fewer parameters.
Pioneered Neural Architecture Search (research)
Introduced NAS, significantly improving model efficiency and performance.
Co-created Sequence-to-Sequence Learning (research)
Developed seq2seq learning with Ilya Sutskever and Oriol Vinyals; foundational for machine translation.
AGI Position Assessment
Unknown
Focuses on making AI models more efficient and accessible. Works within Google's responsible AI framework.
Intercepted Communications
"The future of AI lies in making models more efficient and accessible to everyone."
"Neural Architecture Search is a game changer for how we design AI models."
"Seq2seq learning has transformed the landscape of machine translation."
"EfficientNet demonstrates that less can indeed be more in deep learning."
"AI must be developed responsibly to ensure it benefits all of humanity."
Research Output
Innovations in Deep Learning
2026
Upcoming publication on recent innovations.
Advancements in Foundation Models
2023, AI Research Review
Discussed the future of foundation models.
Ethical AI Development: Challenges and Opportunities
2023, AI Ethics Journal
Addressed ethical considerations in AI.
Scaling Neural Networks with Efficient Architecture Search
2022, NeurIPS
Explored scaling techniques for neural networks.
Large-Scale Unsupervised Learning: A Review
2021, Journal of Machine Learning Research
Review of advancements in unsupervised learning.
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
2019, Proceedings of the 36th International Conference on Machine Learning
Achieved state-of-the-art results in image classification.
Neural Architecture Search with Reinforcement Learning
2017, International Conference on Learning Representations (ICLR)
Introduced NAS methodology.
Sequence to Sequence Learning with Neural Networks
2014, Advances in Neural Information Processing Systems
Foundational paper for machine translation.
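The seq2seq paper listed above introduced the encoder-decoder pattern: an encoder folds a variable-length input sequence into one fixed context vector, and a decoder emits output tokens conditioned on that vector. A minimal sketch of the idea, assuming a toy single-layer vanilla RNN with random, untrained weights rather than the paper's deep LSTMs; every size and parameter name below is illustrative, not from the paper:

```python
# Toy sketch of the seq2seq encoder-decoder idea (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 5, 4  # hypothetical tiny vocabulary and state size

# Random stand-in parameters; a real model learns these by backpropagation.
W_in = rng.normal(size=(VOCAB, HIDDEN))    # input token embedding
W_rec = rng.normal(size=(HIDDEN, HIDDEN))  # recurrent weights (shared here for brevity)
W_out = rng.normal(size=(HIDDEN, VOCAB))   # hidden state -> output logits

def encode(tokens):
    """Fold the whole input sequence into a single fixed context vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(W_in[t] + W_rec @ h)
    return h

def decode(context, steps):
    """Greedily emit `steps` output tokens, conditioned only on the context."""
    h, out = context, []
    for _ in range(steps):
        h = np.tanh(W_rec @ h)
        out.append(int(np.argmax(h @ W_out)))  # pick highest-scoring token
    return out

# Example: map a 3-token source sequence to 3 output tokens.
context = encode([1, 2, 3])
print(decode(context, steps=3))
```

The key design point the paper established is that the context vector is the only channel between input and output, which lets the two sequences differ in length and vocabulary, as in translation.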
Known Associates
Ilya Sutskever (collaborator)
Co-created seq2seq learning and collaborated on various research projects.
Oriol Vinyals (collaborator)
Worked together on seq2seq learning and other AI research.
Andrew Ng (mentor)
PhD advisor at Stanford University.
Mingxing Tan (collaborator)
Co-authored EfficientNet and worked on NAS.
Organizational Affiliations
Current
Google DeepMind
Research Scientist
2019-present
Former
Google Brain
Research Scientist
2011-2019
Stanford University
Researcher
2010-2014
Source Material
Dossier last updated: 2026-03-04