{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Future of Life Institute Podcast","title":"David Dalrymple on Safeguarded, Transformative AI","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/a15119be\"></iframe>","width":"100%","height":180,"duration":6007,"description":"David \"davidad\" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.  \n\nYou can learn more about David's work at ARIA here:   \n\nhttps://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/   \n\nTimestamps:  \n\n00:00 What is Safeguarded AI?  \n\n16:28 Implementing Safeguarded AI \n\n22:58 Can we trust Safeguarded AIs?  \n\n31:00 Formalizing more of the world  \n\n37:34 The performance cost of verified AI  \n\n47:58 Changing attitudes towards AI  \n\n52:39 Flexible Hardware-Enabled Guarantees \n\n01:24:15 Mind uploading  \n\n01:36:14 Lessons from David's early life","thumbnail_url":"https://img.transistorcdn.com/fFhIC-s2qSlHXzmJI7qMGts2WuLwImi4tWmRLH9EdPg/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MmU5/MDZjZGQ5OTI0MDc5/YTk2ZTAxYTgwYTNk/M2VlOC5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}