Future of Life Institute Podcast

Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed's decision to resign from Stability AI, the industry's attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.  

Learn more about Ed's work here: https://ed.newtonrex.com  

Timestamps:  

00:00:00 Preview and intro  

00:04:18 AI-generated music  

00:12:15 Resigning from Stability AI  

00:16:20 AI industry attitudes towards rights 

00:26:22 Fairly Trained  

00:37:16 Special kinds of training data  

00:50:42 The longer-term future of AI  

00:56:09 Will AI improve living standards?  

01:03:10 AI versions of artists  

01:13:28 Authenticity and art  

01:18:45 Competitive pressures in AI 

01:24:06 Priorities going forward


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.