{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Building the Open Metaverse","title":"Liz Rothman, Unpacking the AI Policy & Governance Landscape","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/2d09e4eb\"></iframe>","width":"100%","height":180,"duration":1708,"description":"In this episode of Building the Open Metaverse, hosts Patrick Cozzi and Marc Petit discuss artificial intelligence ethics, safety, and governance with attorney Liz Rothman. Rothman provides an overview of the key issues being debated regarding responsible AI development and use. \n\nShe explains that conversations are focused on three main areas: algorithmic bias and fairness, workforce impacts from automation, and existential risks from advanced AI. On the topic of bias, Rothman emphasizes the need for equity, transparency, and privacy protections, as AI can amplify discrimination and tracking. She notes that committees globally are trying to assess and plan for the transformative effects AI could have on human labor and jobs. Existentially, questions arise about human identity and agency if AI reaches more generalized intelligence. \n\nAccording to Rothman, urgent current issues involve building trust, safety, and privacy around AI systems, as synthetic media makes determining truth more difficult. She states that maintaining personal autonomy requires responsible innovation. Rothman advocates for transparency standards, such as documentation methods that expose AI model provenance and training-data sourcing. However, she acknowledges that commercial interests often prevent full transparency.\n\nThe discussion covers intellectual property rules for AI-generated content, which vary between countries. While US law presently requires human authorship, the UK protects some AI output. Rothman suggests IP law could incentivize AI development if adapted with disclosure requirements rather than outright restrictions. However, problematic training data cannot be removed from models once deployed, posing challenges for liability. \n\nRothman expresses hope in rising international dialogues but concern that regulatory solutions are struggling to match the rapid pace of technological change. She highlights organizations pursuing multidisciplinary cooperation on AI ethics and governance, like...","thumbnail_url":"https://img.transistorcdn.com/fue1pN_FUrEPTYlaSr9OrcqmGkxDXcbRwZM2lfUj8qU/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS9jYzJk/OGM0ZjkxMTEwNjUw/NjM3MGYzYmQ2ZjE4/NTZkYS5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}