
Intelligence Briefing
Author of "AI Snake Oil" (2024, with Sayash Kapoor), a widely acclaimed book distinguishing genuine AI capabilities from hype. Director of Princeton's Center for Information Technology Policy. Named to the TIME100 AI list in 2023. His work focuses on holding AI systems accountable and on debunking inflated claims about AI capabilities.
BTech, Computer Science and Engineering — Indian Institute of Technology Madras
PhD, Computer Science — University of Texas at Austin
Operational History
Named to TIME100 AI List
Recognized for contributions to AI accountability and technology policy.
Director of CITP
Became the Director of the Center for Information Technology Policy at Princeton University.
Research on AI Accountability
Published multiple papers on the accountability of AI systems and their societal impacts.
Visiting Researcher at Microsoft Research
Conducted research on privacy and security in AI systems.
Postdoctoral Researcher at Stanford University
Focused on algorithmic fairness and transparency in machine learning.
Research on De-anonymization
Published significant findings on the de-anonymization of large datasets.
Blockchain Analysis Research
Investigated the implications of blockchain technology on privacy and security.
AGI Position Assessment
Unknown
Skeptical of both AI hype and existential risk framing. Focuses on distinguishing genuine AI capabilities from "snake oil." Advocates for empirical accountability — testing AI claims against evidence rather than speculation. Warns about predictive AI systems that don't work but are deployed anyway.
Intercepted Communications
“We need to hold AI systems accountable for their claims and impacts.”
“The hype around AI often overshadows the real capabilities and risks.”
“Empirical testing is essential to validate AI claims.”
“Predictive AI systems must be scrutinized before deployment.”
“We must differentiate between genuine innovation and 'snake oil' in AI.”
Research Output
AI Snake Oil
2024
A critical examination of AI capabilities versus hype.
The Role of AI in Society
2023, AI Ethics Journal
Examined the societal impacts of AI technologies.
Predictive AI Auditing
2022, AI & Society
Discussed methods for auditing predictive AI systems.
Privacy and Security in AI Systems
2022, IEEE Transactions on Information Forensics and Security
Discussed the challenges of ensuring privacy in AI.
Web Transparency and Accountability
2021, Proceedings of the ACM Conference
Explored the need for accountability in web technologies.
Algorithmic Fairness in Machine Learning
2020, Journal of Machine Learning Research
Analyzed fairness issues in machine learning algorithms.
Blockchain Analysis: Implications for Privacy
2019, Journal of Information Security
Investigated privacy concerns related to blockchain technology.
De-anonymization of Large Datasets
2018, Journal of Privacy and Confidentiality
Research on the risks of de-anonymization in data sharing.
Known Associates
Organizational Affiliations
Current
Princeton University
Professor of Computer Science
2016-Present
Former
Stanford University
Postdoctoral Researcher
2019-2021
Microsoft Research
Visiting Researcher
2020
Source Material
Dossier last updated: 2026-03-04