80,000 Hours Podcast

You wake up in a mysterious box, and hear the booming voice of God:

“I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it.

If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box.

To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box.

‘3’. Huh. Now you don’t know what to believe.

If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928?
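One way to see why that little '3' is so unsettling is to run the two updates explicitly. The sketch below assumes the 'more observers means more weight' reading of the wake-up evidence (roughly what philosophers call the self-indication assumption); that labelling is an assumption here, and it's only one contested way of doing the anthropics:

```python
from fractions import Fraction

# Prior: a fair coin, so heads (10 boxes) and tails (10 billion boxes) start at 1/2 each.
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
boxes = {"heads": 10, "tails": 10_000_000_000}

# Update 1: "the fact that you woke up at all".
# On the assumed self-indication reading, worlds with more observers get
# proportionally more weight, so tails gets a billion-fold boost over heads.
after_waking = {w: prior[w] * boxes[w] for w in prior}
total = sum(after_waking.values())
after_waking = {w: p / total for w, p in after_waking.items()}
print(float(after_waking["tails"]))   # ~0.999999999, so bet on tails

# Update 2: you walk outside and see the label '3'.
# Given a world, each label is equally likely, so P(seeing '3' | world) = 1 / number of boxes.
after_label = {w: after_waking[w] * Fraction(1, boxes[w]) for w in after_waking}
total = sum(after_label.values())
after_label = {w: p / total for w, p in after_label.items()}
print(float(after_label["heads"]), float(after_label["tails"]))   # 0.5 and 0.5
```

On that reading the two updates cancel exactly, which is why seeing a low number leaves you back at 50/50; other anthropic rules give different verdicts, which is part of why the area is so contested.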

In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.

Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.
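To put rough numbers on that suspicion, suppose you reason as though you're a random draw from everyone who will ever live (roughly the 'self-sampling' style of reasoning, and again only one of the contested options). A minimal sketch using the figures above:

```python
from fractions import Fraction

# Two candidate futures, with the headcounts from the text.
prior = {"doom_soon": Fraction(1, 2), "big_future": Fraction(1, 2)}
total_humans = {"doom_soon": 10**11, "big_future": 10**15}   # ~100 billion vs 1,000 trillion
early = 10**11                                               # roughly how many humans have existed so far

# P(finding yourself among the first ~100 billion | future), if you're a uniform
# random draw from everyone who ever lives.
likelihood = {w: Fraction(min(early, n), n) for w, n in total_humans.items()}

posterior = {w: prior[w] * likelihood[w] for w in prior}
norm = sum(posterior.values())
posterior = {w: p / norm for w, p in posterior.items()}
print(float(posterior["doom_soon"]))   # ~0.9999
```

Starting from even odds, being born this early pushes the probability of the short future to around 99.99% on that way of counting; whether that's the right way to count yourself is exactly what's in dispute.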

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone.

If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

The 'doomsday argument' has many critics, and it may simply not work as a matter of logic. Figuring out whether it does is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

• Which worldviews Open Phil finds most plausible, and how it balances them
• How hard it is to get to other solar systems
• The 'simulation argument'
• When transformative AI might actually arrive
• And much more

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

What is 80,000 Hours Podcast?

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin and Luisa Rodriguez.