Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.
CHAPTERS:
(01:05) Introduction and Book Discussion
(03:34) Psychology of AI Alarmism
(07:52) Intelligence Threshold Effects
(11:38) Growing vs Crafting AI
(18:23) Illusion of AI Control
(26:45) Why Iteration Won't Work
(34:35) The No Retries Problem
(38:22) Computer Security Lessons
(49:13) The Cursed Problem
(59:32) Multiple Curses and Complications
(01:09:44) AI's Infrastructure Advantage
What is Future of Life Institute Podcast?
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.