Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, risks from model deception, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, and hiring needs, and offers advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org
CHAPTERS:
(00:00) Episode Preview
(00:20) Introductions and AGI timelines
(10:13) Deception, values, and control
(23:20) Dual use and alignment
(32:22) Frontier labs and governance
(44:12) MATS tracks and mentors
(58:14) Talent archetypes and demand
(01:12:30) Applicant profiles and selection
(01:20:04) Applications, breadth, and growth
(01:29:44) Careers, resources, and ideas
(01:45:49) Final thanks and wrap
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.