Future of Life Institute Podcast

Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.   

Learn more about Esben's work at: https://blog.kran.ai  

00:00 – Intro and preview 

01:13 – AGI security vs traditional cybersecurity 

02:36 – Rebuilding societal infrastructure for embedded security 

03:33 – Sentware: adaptive, self-improving malware 

04:59 – New attack surfaces 

05:38 – Social media as misaligned AI 

06:46 – Personal vs societal defenses 

09:13 – Why private companies underinvest in security 

13:01 – Security as the foundation for any AI deployment 

14:15 – Oversight without a surveillance state 

17:19 – Protocols for safe agent communication 

20:25 – The expensive internet hypothesis 

23:30 – Distributed safety for companies and governments 

28:20 – Cloudflare's "agent labyrinth" example 

31:08 – Positive vision for distributed security 

33:49 – Human value when labor is automated 

41:19 – Encoding law for machines: contracts and enforcement 

44:36 – DarkBench: detecting manipulative LLM behavior 

55:22 – The AGI endgame: default path vs designed future 

57:37 – Powerful tool AI 

01:09:55 – Fast takeoff risk 

01:16:09 – Realistic optimism


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.