Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.
You can find Nathan's podcast here: https://www.cognitiverevolution.ai
Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.