Arvind Narayanan


Organization
Princeton University

Position
Professor of Computer Science & Director of CITP, Princeton University

Nationality: Indian-American 🇮🇳🇺🇸
Publications: 8 · Companies: 3

Intelligence Briefing

Author of "AI Snake Oil" (2024, with Sayash Kapoor), a widely acclaimed book distinguishing genuine AI capabilities from hype. Director of Princeton's Center for Information Technology Policy. Named to TIME100 AI list in 2023. His work focuses on holding AI systems accountable and debunking inflated claims about AI capabilities.

Expertise
AI Accountability · Algorithmic Fairness · Privacy · Information Security · Technology Policy
Education

BTech, Computer Science and Engineering · Indian Institute of Technology Madras

PhD, Computer Science · University of Texas at Austin

Operational History

2023 · Award

Named to TIME100 AI List

Recognized for contributions to AI accountability and technology policy.

2022 · Career

Director of CITP

Became the Director of the Center for Information Technology Policy at Princeton University.

2021 · Research

Research on AI Accountability

Published multiple papers on the accountability of AI systems and their societal impacts.

2020 · Career

Visiting Researcher at Microsoft Research

Conducted research on privacy and security in AI systems.

2019 · Career

Postdoctoral Researcher at Stanford University

Focused on algorithmic fairness and transparency in machine learning.

2018 · Research

Research on De-anonymization

Published significant findings on the de-anonymization of large datasets.

2017 · Research

Blockchain Analysis Research

Investigated the implications of blockchain technology on privacy and security.

AGI Position Assessment

Risk Level

Not specified (rated on a LOW / MODERATE / HIGH / CRITICAL scale)
Predicted AGI Timeline

Unknown


Safety Approach

Skeptical of both AI hype and existential risk framing. Focuses on distinguishing genuine AI capabilities from "snake oil." Advocates for empirical accountability — testing AI claims against evidence rather than speculation. Warns about predictive AI systems that don't work but are deployed anyway.

Intercepted Communications

We need to hold AI systems accountable for their claims and impacts.

Interview with AI Weekly · 2023-05-15 · AI Accountability

The hype around AI often overshadows the real capabilities and risks.

Keynote at AI Ethics Conference · 2023-09-10 · AI Hype

Empirical testing is essential to validate AI claims.

Panel Discussion on AI Safety · 2023-11-02 · AI Safety

Predictive AI systems must be scrutinized before deployment.

Podcast Interview · 2023-12-01 · Predictive AI

We must differentiate between genuine innovation and 'snake oil' in AI.

Book Launch for 'AI Snake Oil' · 2024-01-15 · AI Innovation

Research Output

2020s: 6
2010s: 2

AI Snake Oil

2024

A critical examination of AI capabilities versus hype.

w/ Sayash Kapoor

The Role of AI in Society

2023

AI Ethics Journal

Examined the societal impacts of AI technologies.

Predictive AI Auditing

2022

AI & Society

Discussed methods for auditing predictive AI systems.

Privacy and Security in AI Systems

2022

IEEE Transactions on Information Forensics and Security

Discussed the challenges of ensuring privacy in AI.

Web Transparency and Accountability

2021

Proceedings of the ACM Conference

Explored the need for accountability in web technologies.

Algorithmic Fairness in Machine Learning

2020

Journal of Machine Learning Research

Analyzed fairness issues in machine learning algorithms.

Blockchain Analysis: Implications for Privacy

2019

Journal of Information Security

Investigated privacy concerns related to blockchain technology.

De-anonymization of Large Datasets

2018

Journal of Privacy and Confidentiality

Research on the risks of de-anonymization in data sharing.

Known Associates

Organizational Affiliations

Current

Princeton University

Professor of Computer Science

2012-Present

Former

Stanford University

Postdoctoral Researcher

2019-2021

Microsoft Research

Visiting Researcher

2020

Dossier last updated: 2026-03-04