80,000 Hours Podcast

It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or you’ll have to at least *consider* the idea that the world is about to get — at a minimum — incredibly weird.
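To make the compounding mechanism concrete, here’s a minimal sketch in Python. It is emphatically *not* Tom’s actual model: every parameter value is invented purely for illustration. It just shows how a feedback loop, in which AI capability speeds up AI research, can snowball past 1,000x within a single year:

```python
import math

# A toy feedback loop, NOT the actual model from Tom's report: capability
# compounds faster as accumulated capability is fed back into AI R&D.
# Every parameter value below is hypothetical, chosen purely for illustration.

def simulate(months=12, base_monthly_gain=1.45, feedback_strength=0.6):
    effective_compute = 1.0  # capability level, normalised to 1 at the start
    research_speed = 1.0     # how much faster AI R&D runs than it does today
    for _ in range(months):
        # This month's gain is the baseline, amplified by how much AI is
        # currently accelerating AI research.
        effective_compute *= base_monthly_gain ** research_speed
        # Feedback: each order of magnitude of capability speeds up R&D further.
        research_speed = 1.0 + feedback_strength * math.log10(effective_compute)
    return effective_compute

# With these made-up numbers the multiple passes 1,000x before the year
# is out, ending around 2,400x.
print(f"{simulate():,.0f}x after one year")
```

The specific numbers don’t matter; the point is that once progress feeds on itself, even modest monthly gains compound explosively.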

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades.

But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task than automating everything else — especially given that the researchers training the AI are AI researchers themselves, so they know the target job intimately.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.
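Some rough arithmetic shows why “billions of copies round the clock” carries so much weight. The inputs below are hypothetical round numbers rather than figures from Tom’s report, but the conclusion barely depends on them:

```python
# Back-of-envelope arithmetic. These are hypothetical round numbers,
# not estimates from Tom's report.
human_researchers = 30_000       # rough guess at the global AI R&D workforce
human_hours_per_year = 2_000     # a standard full-time working year
ai_copies = 1_000_000_000        # "billions of copies"
ai_hours_per_year = 24 * 365     # running round the clock

human_effort = human_researchers * human_hours_per_year  # 6.0e7 hours/year
ai_effort = ai_copies * ai_hours_per_year                # 8.76e12 hours/year

print(f"{ai_effort / human_effort:,.0f}x today's total research effort")
# ~146,000x -- and that's before counting any serial speed advantage
# the AI copies might have over human researchers.
```

Even if each copy were merely as good as a typical researcher today, that would be an enormous jump in total research effort.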

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii.

Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.

Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:

• How we might go from GPT-4 to AI disaster
• Tom’s journey from finding AI risk kind of scary to finding it really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of the computer chips used for AI are increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore

Show Notes

Chapters:
  • Rob’s intro (00:00:00)
  • The interview begins (00:04:53)
  • How we might go from GPT-4 to disaster (00:13:50)
  • Explosive economic growth (00:24:15)
  • Are there any limits for AI scientists? (00:33:17)
  • This seems really crazy (00:44:16)
  • How is this going to go for humanity? (00:50:49)
  • Why AI won’t go the way of nuclear power (01:00:13)
  • Can we definitely not come up with an international treaty? (01:05:24)
  • How quickly we should expect AI to “take off” (01:08:41)
  • Tom’s report on AI takeoff speeds (01:22:28)
  • How quickly will we go from 20% to 100% of tasks being automated by AI systems? (01:28:34)
  • What percent of cognitive tasks AI can currently perform (01:34:27)
  • Compute (01:39:48)
  • Using effective compute to predict AI takeoff speeds (01:48:01)
  • How quickly effective compute might increase (02:00:59)
  • How quickly chips and algorithms might improve (02:12:31)
  • How to check whether large AI models have dangerous capabilities (02:21:22)
  • Reasons AI takeoff might take longer (02:28:39)
  • Why AI takeoff might be very fast (02:31:52)
  • Fast AI takeoff speeds probably mean shorter AI timelines (02:34:44)
  • Going from human-level AI to superhuman AI (02:41:34)
  • Going from AGI to AI deployment (02:46:59)
  • Were these arguments ever far-fetched to Tom? (02:49:54)
  • What ants can teach us about AI (02:52:45)
  • Rob’s outro (03:00:32)

What is the 80,000 Hours Podcast?

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.