Future of Life Institute Podcast

Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.



CHAPTERS:

(00:00) Episode Preview

(00:56) Education to existential risk

(03:03) Lessons from social media

(08:41) Attachment systems and AI

(18:42) AI companions and attachment

(27:23) Anthropomorphism and user disempowerment

(36:06) Cognitive atrophy and tools

(45:54) Children, toys, and attachment

(57:38) AI psychosis and selfhood

(01:10:31) Cognitive security and parenting

(01:26:15) Education, collapse, and speciation

(01:36:40) Preserving humanity and values



PRODUCED BY:

https://aipodcast.ing



SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.