Future of Life Institute Podcast

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini is a security researcher at Google DeepMind who has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He also examines the difficulties of defending against such attacks, the role of human intuition in his approach, the safety of open-source AI, and the potential for scaling AI security research.

00:00 Nicholas Carlini's contributions to cybersecurity

08:19 Understanding attack strategies 

29:39 High-dimensional spaces and attack intuitions 

51:00 Challenges in open-source model safety 

01:00:11 Unlearning and fact editing in models 

01:10:55 Adversarial examples and human robustness 

01:37:03 Cryptography and AI robustness 

01:55:51 Scaling AI security research


What is Future of Life Institute Podcast?

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.