Future of Life Institute Podcast

On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai   

AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and still experience the extraordinary benefits of Tool AI.

Timestamps:  

00:00 What situation is humanity in? 

05:00 Why AI progress is fast  

09:56 Tool AI instead of AGI 

15:56 The incentives of AI companies  

19:13 Governments can coordinate a slowdown 

25:20 The need for international coordination  

31:59 Monitoring training runs  

39:10 Do reasoning models undermine compute governance?  

49:09 Why isn't alignment enough?  

59:42 How do we decide if we want AGI?  

01:02:18 Disagreement about AI  

01:11:12 The early days of AI risk

What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.