Future of Life Institute Podcast

On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.  

You can follow Jeff's work here: https://jeffsebo.net/  

Timestamps:  

00:00:00 Preview and intro 

00:02:56 Imagining artificial consciousness  

00:07:51 Substrate-independence? 

00:11:26 Are we making progress?  

00:18:03 Intuitions about explanations  

00:24:43 AI risk and AI consciousness  

00:40:01 Consciousness and cognitive complexity  

00:51:20 Intuition versus intellect 

00:58:48 AIs as companions  

01:05:24 AI rights  

01:13:00 Acting under time pressure 

01:20:16 Measuring consciousness  

01:32:11 How can you help?

What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.