{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Crazy Wisdom","title":"Futurescape: Dialogues in Digital Consciousness","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/5c739fa6\"></iframe>","width":"100%","height":180,"duration":3013,"description":"Introduction: David Hundley is a Machine Learning Engineer who has been deeply involved in experimenting with Large Language Models (LLMs). Follow along on his Twitter. Key Insights & Discussions:  Discoveries with LLMs: David recently explored a unique function of LLMs that acted as a 'dummy agent'. This function would prompt the LLM to search the internet for a current movie, bypassing the limits of its training data. David attempted to use this function to generate trivia questions, envisioning a trivia game powered by the LLM. However, he faced challenges in getting the agent to converge on the desired output; parsing the LLM's responses into a structured output proved especially difficult.  Autonomous Agents & AGI: David believes that AGI (Artificial General Intelligence) essentially comprises autonomous agents. The prospect of these agents executing commands directly on one's computer can be unnerving. However, when LLMs run code, they operate within a contained environment, ensuring safety.  Perceptions of AI: There's a constant cycle of learning and revisiting motivations and goals in the realm of AI. David warns against anthropomorphizing LLMs, as they don't possess human motivations. He stresses that the math underpinning AI doesn't align with human emotions or motivations.  Emergent Behavior & Consciousness: David postulates that everything in the universe sums to a collective result. There's debate over whether living organisms possess true consciousness, and what that means for AGI. The concept of AGI emulating human intelligence is complex. The human psyche is shaped by countless historical experiences and stimuli, so if AGI were to truly replicate human thought, it would require vast amounts of multimodal input. A challenging question raised is how one tests for consciousness in AGI. David believes that as we continue to push technological boundaries, our definition of consciousness will keep shifting.  Rights & Ethics of AI: With...","thumbnail_url":"https://img.transistorcdn.com/UZbrDrlO5VTfDNcq188THwbv0T09vcmLyzx3BcPI9bs/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81Y2Rj/OGFiMTYyMGFkNTM5/N2NjOWI2MWM5YzQ1/YTc2Ny5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}