{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Future of Life Institute Podcast","title":"Daniela and Dario Amodei on Anthropic","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/f4f5adff\"></iframe>","width":"100%","height":180,"duration":7288,"description":"Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Topics discussed in this episode include: -Anthropic's mission and research strategy -Recent research and papers by Anthropic -Anthropic's structure as a \"public benefit corporation\" -Career opportunities You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/ Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A Careers at Anthropic: https://www.anthropic.com/#careers Anthropic's Transformer Circuits research: https://transformer-circuits.pub/ Follow Anthropic on Twitter: https://twitter.com/AnthropicAI microCOVID Project: https://www.microcovid.org/ Follow Lucas on Twitter: https://twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:44 What was the intention behind forming Anthropic? 6:28 Do the founders of Anthropic share a similar view on AI? 7:55 What is Anthropic's focused research bet? 11:10 Does AI existential safety fit into Anthropic's work and thinking? 14:14 Examples of AI models today that have properties relevant to future AI existential safety 16:12 Why work on large scale models? 20:02 What does it mean for a model to lie? 22:44 Safety concerns around the open-endedness of large models 29:01 How does safety work fit into race dynamics to more and more powerful AI? 36:16 Anthropic's mission and how it fits into AI alignment 38:40 Why explore large models for AI safety and scaling to more intelligent systems? 43:24 Is Anthropic's research strategy a form of prosaic alignment? 46:22 Anthropic's recent research and papers 49:52 How difficult is it to interpret current AI models? 52:40 Anthropic's research on alignment and societal impact 55:35 Why did you decide to release tools and videos...","thumbnail_url":"https://img.transistorcdn.com/fFhIC-s2qSlHXzmJI7qMGts2WuLwImi4tWmRLH9EdPg/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MmU5/MDZjZGQ5OTI0MDc5/YTk2ZTAxYTgwYTNk/M2VlOC5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}