{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Future of Life Institute Podcast","title":"Brain-like AGI and why it's Dangerous (with Steven Byrnes)","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/77451317\"></iframe>","width":"100%","height":180,"duration":4394,"description":"On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.  \n\nYou can learn more about Steven's work at: https://sjbyrnes.com/agi.html  \n\nTimestamps:  \n\n00:00 Preview  \n\n00:54 Brain-like AGI Safety \n\n13:16 Controlled AGI versus Social-instinct AGI  \n\n19:12 Learning from the brain  \n\n28:36 Why is brain-like AI the most likely path to AGI?  \n\n39:23 Honesty in AI models  \n\n44:02 How to help with brain-like AGI safety  \n\n53:36 AI traits with both positive and negative effects  \n\n01:02:44 Different AI safety strategies","thumbnail_url":"https://img.transistorcdn.com/fFhIC-s2qSlHXzmJI7qMGts2WuLwImi4tWmRLH9EdPg/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MmU5/MDZjZGQ5OTI0MDc5/YTk2ZTAxYTgwYTNk/M2VlOC5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}