Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.
Follow Benjamin's work at: https://benjamintodd.substack.com
Timestamps:
00:00 What are reasoning models?
04:04 Reinforcement learning supercharges reasoning
05:06 Reasoning models vs. agents
10:04 Economic impact of automated math/code
12:14 Compute as a bottleneck
15:20 Shift from giant pre-training to post-training/agents
17:02 Three feedback loops: algorithms, chips, robots
20:33 How fast could an algorithmic loop run?
22:03 Chip design and production acceleration
23:42 Industrial/robotics loop and growth dynamics
29:52 Society's slow reaction; "warning shots"
33:03 Robotics: software and hardware bottlenecks
35:05 Scaling robot production
38:12 Robots at ~$0.20/hour?
43:13 Regulation and humans-in-the-loop
49:06 Personal prep: why it still matters
52:04 Build an information network
55:01 Save more money
58:58 Land, real estate, and scarcity in an AI world
01:02:15 Valuable skills: get close to AI, or far from it
01:06:49 Fame, relationships, citizenship
01:10:01 Redistribution, welfare, and politics under AI
01:12:04 Try to become more resilient
01:14:36 Information hygiene
01:22:16 Seven-year horizon and scaling limits by ~2030
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.