Future of Life Institute Podcast

On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models capable of long-term planning. Toward the end, we dig into Moravec's Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.

You can learn more about Ege's work at https://epoch.ai  

Timestamps:

00:00:00 – Preview and Introduction

00:02:59 – Compute Scaling and Automation: The GATE Model

00:13:12 – Evolution, Brain Efficiency, and AGI Compute Requirements 

00:29:49 – Broad Automation vs. R&D-Focused AI Deployment 

00:47:19 – AI, Wages, and Labor Market Transitions 

00:59:54 – Training Agentic Models and Long-Term Planning Capabilities 

01:06:56 – Moravec's Paradox and Automation of Human Skills 

01:13:59 – Which Jobs Are Most Vulnerable to AI? 

01:33:00 – Timeline Extremes: What Could Change AI Forecasts?

What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.