{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Future of Life Institute Podcast","title":"Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/ad4d0ce8\"></iframe>","width":"100%","height":180,"duration":8593,"description":"In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.  \n\n00:00 Nicholas Carlini's contributions to cybersecurity\n\n08:19 Understanding attack strategies \n\n29:39 High-dimensional spaces and attack intuitions \n\n51:00 Challenges in open-source model safety \n\n01:00:11 Unlearning and fact editing in models \n\n01:10:55 Adversarial examples and human robustness \n\n01:37:03 Cryptography and AI robustness \n\n01:55:51 Scaling AI security research","thumbnail_url":"https://img.transistorcdn.com/fFhIC-s2qSlHXzmJI7qMGts2WuLwImi4tWmRLH9EdPg/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS81MmU5/MDZjZGQ5OTI0MDc5/YTk2ZTAxYTgwYTNk/M2VlOC5qcGc.webp","thumbnail_width":300,"thumbnail_height":300}