Future of Life Institute Podcast

On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity's uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI's growing influence in financial trading.  

You can follow Zvi's excellent blog here: https://thezvi.substack.com  

Timestamps:  

00:00:00 Preview and introduction  

00:02:01 Sycophantic AIs  

00:07:28 Bottlenecks for AI agents  

00:21:26 Are benchmarks useful?  

00:32:39 AI agent time horizons  

00:44:18 Impact of automating research  

00:53:00 Limits to scaling inference compute  

01:02:51 Will the future go well for humanity?  

01:12:22 A good plan for safe AI  

01:26:03 What makes AI different?  

01:31:29 AI in trading


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.