Max Tegmark

Organization
MIT / Future of Life Institute

Position
Professor of Physics, MIT; President, Future of Life Institute

Nationality
Swedish-American
h-Index: --
Citations: --
Followers: 100K+
Awards: 0
Publications: 4
Companies: 3

Intelligence Briefing

MIT physics professor and co-founder of the Future of Life Institute (FLI), a nonprofit focused on reducing existential risks from advanced technologies. Author of "Life 3.0: Being Human in the Age of Artificial Intelligence" (2017). FLI organized the influential open letter calling for a 6-month pause on giant AI experiments (2023). In October 2025, co-authored the "Statement on Superintelligence" calling for a prohibition on superintelligence development until safety is proven. Active at Davos 2026 on AI governance.

Expertise
AI Safety · Cosmology · Physics · AI Governance · Existential Risk · Physicist
Education

BSc, Physics β€” Royal Institute of Technology (KTH), Stockholm

BA, Economics β€” Stockholm School of Economics

PhD, Physics β€” University of California, Berkeley

Operational History

2026

Participation at Davos

Active discussions on AI governance at the World Economic Forum in Davos.

Career
2025

Statement on Superintelligence

Co-authored a statement advocating for a prohibition on superintelligence development until safety is proven.

Policy
2023

Open Letter on AI Pause

FLI organized an open letter calling for a 6-month pause on giant AI experiments.

Policy

AGI Position Assessment

Risk Level: LOW · MODERATE · HIGH · CRITICAL
Predicted AGI Timeline

2030s

Advocates for a cautious approach to AGI development, emphasizing safety and ethical considerations.

Key Beliefs
  • AGI must be developed with safety as a priority.
  • Proactive governance is essential to mitigate risks.
Safety Approach

Promotes rigorous safety standards and international cooperation.

Intercepted Communications

β€œWe need to ensure that AI is developed safely and responsibly.”

Interview at MIT · 2026-01-15 · AI Safety

β€œThe future of humanity depends on how we handle AI.”

Davos 2026 Speech · 2026-01-20 · AI Governance

β€œWe must prioritize safety over speed in AI development.”

FLI Conference 2025 · 2025-11-10 · AI Safety

β€œSuperintelligence poses existential risks that we cannot ignore.”

Statement on Superintelligence · 2025-10-01 · Existential Risk

β€œAI governance must be proactive, not reactive.”

Public Lecture · 2026-02-05 · AI Governance

Research Output

2020s: 1
2010s: 2
2000s: 1

AI Safety Policy Frameworks

2021

Frameworks for ensuring AI safety.

Life 3.0: Being Human in the Age of Artificial Intelligence

2017

Discusses the implications of AI for society.

Cosmic Microwave Background Analysis

2010

Research on the cosmic microwave background.

The Mathematical Universe

2007

Proposes the Mathematical Universe Hypothesis.

Known Associates

Organizational Affiliations

Current

MIT

Professor of Physics

2003 - Present

Future of Life Institute

Co-founder and President

2014 - Present

Former

Institute for Advanced Study

Researcher

2001 - 2003

Source Material

Dossier last updated: 2026-03-04