{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Accidental Gods","title":"AI: Integral to the future or existential risk? (or both) - conversations on current evolution with Daniel Thorson","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/9c8f5ddb\"></iframe>","width":"100%","height":180,"duration":4666,"description":"How dangerous is AI? Are Large Language Models likely to subvert our children? Is Generalised AI going to wipe out all life on the planet? I don't know the answers to these. It may be that nobody knows, but this week's guest was my go-to when I needed someone with total integrity to help unravel one of the most existential crises of our time, to lay it out as simply as we can without losing the essence of complexity, to help us see the worst cases - and their likelihood - and the best cases, and then to navigate a route past the first and onto the second. Daniel Thorson is an activist - he was active in the early days of the Occupy movement and in Extinction Rebellion. He is far more technologically literate than I am - he was involved early on in Buddhist Geeks. He is a soulful, thoughtful, heartful person who lives at and works with the Monastic Academy for the Preservation of Life on Earth in Vermont. And he's host of the Emerge podcast, Making Sense of What's Next. So in every way, when I wanted to explore the existential risks, and perhaps the potential, of Artificial Intelligence with someone I could trust, and whose views I could bring to you unfiltered, Daniel was my first thought, and I'm genuinely thrilled that he agreed to come back onto the podcast to talk about what's going on right now.
My first query was triggered by the interview with Eliezer Yudkowsky on the Bankless podcast - Eliezer talked about the dangers of Generalised AI, or Artificial General Intelligence (AGI), and the reasons why it is so hard - he would say impossible - to align the intentions of a silicon-based intelligence with our human values, even if we knew what those values were and could define them clearly. Listening to that was what prompted me to write to Daniel. Since then, I have listened many times to two of Daniel's own recent podcasts: one with the educational philosopher Zak Stein on the dangers of AI Tutors and one with Jill Nephew, the founder of Inqwire, Public...","thumbnail_url":"https://img.transistorcdn.com/2fOWMRnTk9Jq1cMNEdZ2P6L9hSacKWpQNA4zTc1F1F4/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jNjRl/ZmU1NTg1MWQ2NmFl/MzkzZGIzNjlhYTU4/OTM0NS5qcGVn.webp","thumbnail_width":300,"thumbnail_height":300}