{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Chaos Lever Podcast","title":"The Legacy of LaMDA | Chaos Lever","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/9acbad91\"></iframe>","width":"100%","height":180,"duration":1879,"description":"What happens when a Google engineer thinks his chatbot has developed a soul? Three years ago, we covered the LaMDA saga, and now it's back—because someone forgot to turn off the AI. In this rebroadcast episode, Chris and Ned re-examine the wild story of Blake Lemoine, who believed his creation had achieved sentience. It... uh, didn't. 🤖 The duo digs deep into what AI really is, why self-awareness isn't a prerequisite, and how anthropomorphizing code gets us into philosophical hot water. They also break down the Turing Test, IBM’s thoughts on AGI, and why AI in a self-driving car doesn’t need a conscience—it needs to not crash. 🧠 Come for the snark, stay for the thought-provoking discussion about consciousness, ethics, and the real role of AI in society. Also, IKEA lamps. And a chatbot that maybe just wanted to talk. 🔗 LINKS- A Google engineer has been making some wild claims about a chat bot he was working on- How easy it is to make people get emotional about inanimate objects such as an IKEA lamp- Trying to find a way to describe AI that includes self-awareness- The interview that Blake and co did with LaMDA- There is a website called DALL-E mini- In 2019 some researchers tried to get AI to invent a sport","thumbnail_url":"https://img.transistorcdn.com/ecwKB0KfOAqThx8XJn3uyhrnsTE0w2H6WtKhFKuXWso/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9zaG93/LzQ4ODQxLzE3MDU2/MTUyOTctYXJ0d29y/ay5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}