80,000 Hours Podcast

Today’s bonus release is a pilot for a new podcast called ‘The Worst Ideas in the History of the World’, created by Keiran Harris — producer of the 80,000 Hours Podcast.

If you have strong opinions about this one way or another, please email us at podcast@80000hours.org to help us figure out whether more of this ought to exist.

What is 80,000 Hours Podcast?

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin and Luisa Rodriguez.

Rob Wiblin:
Hey, listeners. Rob here. I recently realised that we could be doing more to promote other podcasts with similar goals to those that we have on The 80,000 Hours Podcast. And so we're occasionally going to cross-post episodes from other shows, especially when they're just starting out. And our first example comes from the producer of this very podcast, Keiran Harris, who's been working on a pilot for a new show in his spare time.

I'm actually about to go overseas on holiday, which unfortunately means I'm recording this intro now before I've actually had a chance to listen to any of what it is that he's going to go ahead and make. My only concern is that he might produce something so similar to The 80,000 Hours Podcast, given how closely he and I are aligned, that it could make it difficult for people to think of them as two truly distinct shows. Maybe we'll be able to distinguish them with different theme music or something. But anyway, I hope you stick around and enjoy whatever thing he comes up with. Here's Keiran Harris.

[music]
We’ll meet again. Don't know where, don't know when, but I know we’ll meet again some sunny day.

Keiran Harris:
So I thought I'd open this show with a little bit of background about how I came to be hosting my own podcast. Okay. So in 2017, I was working at a florist, and one night I got blackout drunk. And when I came to, I found myself burning down what I thought was my boss's house. And I don't really want to get into, you know, the reasons behind why I might have wanted to burn down this middle-aged florist's house. But, you know, suffice to say, I had my reasons.

But it turned out that, you know, I actually wasn't burning down the house of my boss at the time. I was actually burning down the house of 80,000 Hours' director of research, Rob Wiblin. And, you know, he came out and he caught me, and, you know, I was there with my gasoline and my, you know, my matches. And he said that arson of this kind carried a maximum sentence of ten years behind bars. And he said that if I was willing to work for him for ten years instead, he wouldn't press charges.

And so that's the story of how 80,000 Hours became my own personal prison, really. And so then when Rob Wiblin asked me to host my own podcast, you know, I was confused. I said, “Why would you want me to do that? You already know full well that I think the work you do at 80,000 Hours has really no merit at all.”

And he said, and I quote, “I don't care. Every extra listener is a step closer to me owning my own speedboat, the only thing I've ever cared about.” So, you know, that gives you some insight into my life over the last six years. And, you know, we don't even have advertisers on the show, so, you know, I don't know.

Anyway, so I had to come up with a premise for a show. And I thought, look, since all these people in this effective altruism community seem to like criticism so much, I thought maybe I could stake out kind of a position as a vocal yet respectful critic of their work. And so I've done that with a show that I call The Worst Ideas in the History of the World.

And I thought there could be no better place to start than the idea of longtermism, which is, very briefly, the idea that helping future generations is a key moral priority of our time.

So I should probably start by giving you a little bit of background on my personal history with longtermism. Now, I was very close with my grandfather, and my grandfather was a kind of amateur scholar and poet. And my grandfather used to say to anyone who would listen, “The only good future person is a dead future person.” Yeah, he did. And look, I fully accept that his view in 2023 is seen as very extreme. It's something that we've moved beyond as a society.

At the same time, I do think there's a little bit of truth to it. Not the whole truth, but there is some. There's some partial... Now, look, I'm not, before you write in any angry emails, you know, I am not saying that the only good future person is a dead future person. You know, I'm not saying that. I'm just saying that he was considered universally wise on virtually every topic. And so it is somewhat strange that he would be right on so many things and then wrong here. But, you know, obviously, if that's what the Oxbridge mafia are telling us, then obviously that's right.

And look, I'm not saying, as my wise grandfather did, that future people are stealing our women. However, it is just a fact that the phenomenon of cougars is on the rise. You know, older women who seek out younger men. And it is just a fact that if you take, you know, women who are my age today, a lot of the men that they'll eventually seek out, if they get into this cougar lifestyle, a lot of those men won't have been born yet, will they? Whereas if we didn't have to deal with future people, our women wouldn't actually have a choice but to... I'm not saying that they shouldn't have a choice. You know, I'm as progressive as anyone you're gonna meet. I'm just saying that, you know, there might be some benefits to closing our time borders. That's all I'm saying.

Now, I've noticed that when longtermists respond to critiques of their work, you know, thoughtful, reasonable critiques that clearly have no ulterior motive, I've noticed that sometimes they say that those critiques are unfair, that they don't accurately represent their work. And I truly believe that longtermist ideas are misguided to the point where you simply don't need to misrepresent them.

And so I thought for this show, what I'd do is I would play a series of clips of longtermists making arguments for their case in their own words, that they endorse, and then I will respond with my thoughts, with my critiques. And at the end of the episode, you can work out which side had the more reasonable arguments. Couldn't be fairer, right?

Now, I should add a disclaimer that all of these clips are from The 80,000 Hours Podcast.


I need to stress that doesn't mean that I'm implying that The 80,000 Hours Podcast is a quality show, because obviously it isn't. It's just that by working at 80,000 Hours, I have access to the archives, and so I don't need to email anyone for permission to use these clips. Saves me a few minutes. That's the only reason I would ever do anything that could be seen as promoting this show.

Okay, so arguably, the most prominent longtermist in the world is Will MacAskill. He wrote the book What We Owe the Future, which is all about longtermism. So it makes sense to start with him. I'm now going to play two short clips from episode number 136 of The 80,000 Hours Podcast. In the first one, Will MacAskill outlines the basic case for longtermism and explains why the future will be really big. Okay, here we go.

Will MacAskill:
I think the core argument is very simple. It's that future people matter morally. It's that there could be enormous numbers of future people. And then finally, that we can make a difference to the world they inhabit, so we really can make a difference to all of those lives that may be lived. Homo sapiens have been around for about 300,000 years. If we live as long as a typical mammal species, we will survive for hundreds of thousands of years. If we last until the Earth is no longer habitable, we will last for hundreds of millions of years. If one day we take to the stars and have a civilisation that is interstellar, then we could survive for hundreds of trillions of years.

Even on the low estimates, such as us living as long as a typical mammal species, the future is truly vast. On that low estimate, there are about 1,000 people in the future for every person alive today. When we look at those longer timescales that civilisation could last for, there are millions, billions, or even trillions of people to come for every person alive today.

Keiran Harris:
Okay, and here's the second clip, where Will MacAskill makes the case for why future people matter morally.

Will MacAskill:
Imagine you're hiking on a trail, you drop some glass, and suppose you know that in 100 years’ time, someone will cut themselves on that glass. Is it any reason at all for, you know, not taking the time to clean up after yourself that the person who will be harmed lives in 100 years’ time?

Rob Wiblin:
Or hasn't been born yet.

Will MacAskill:
Or maybe hasn't even been born. And it seems like the answer is no. Or if you could prevent a genocide in, you know, 1,000 years versus 10,000 years versus 100,000 years, and it will kill 100,000 people, does it make any difference when those lives will be lived? It just seems like intuitively not. Harm is harm, wherever it occurs. And in that way, distance in time is quite like distance in space. The fact that someone will suffer is bad in and of itself, even if they live on the other side of the world. The fact that someone will suffer is bad in and of itself, even if they will live 10,000 years from now. So I think when we reflect on thought experiments like this, we see that, yeah, we want to give a lot of moral weight to future people.

Keiran Harris:
So I thought we'd go through the points in Will MacAskill's simple three-point case for longtermism one by one. I agree with number two, that the future could be very big and very long. And I agree with number three, that we can reasonably hope to influence whether future people exist and how good or bad their lives are.

But I have serious reservations about number one, which is where we'll start, with my list: five strong reasons for why future people might not matter morally. Okay, so I've written these down on a piece of paper. I'm going to read the headlines and then I'll give you some colour. Here we go.

Reason number one for why future people might not matter morally: they won't get our film and TV references. Okay, so let's imagine that you're hanging out with a future friend and something surprising happens and you feel you're in the mood to be a little bit whimsical and nostalgic and you decide you want to reference The Simpsons. So you'll say to your future friend, you'll say, “Ay, caramba!” and the future friend will just stare at you blankly. Now, MacAskill wants to call that future person staring blankly, he wants to call them as morally valuable as you or I, who, upon hearing “Ay, caramba!” (at least when delivered correctly), would break into hysterics, wouldn't we?

Number two on reasons why future people might not matter morally: they'll be able to travel through time and they'll never choose to come to the 2020s. You know, they'll go to the building of the pyramids and they'll visit ancient Greece, but they'll never come to the opening of a new M&Ms World, will they?

Okay, on to the next one. I'm sure you've all heard the common adage directed at longtermists, “If you love future people so much, why don't you marry them?” And if you've ever said this to a longtermist, I'm sure you've probably noticed a certain glint in their eye, you know, almost as if they're taking it as a personal challenge. Now, I don't want to call this a fetish. You know, I don't know if that's the right word. I'm not an expert. I'm not an expert, so you'd have to ask one. But, you know, certainly something odd is going on in the case of these cross-temporal sexual preferences. You know, something is.

Okay, so my third idea for why future people might not matter morally is: by inevitably marrying normal people, future people will destroy the sanctity of marriage. Look, if future people are allowed to marry present people, what's next? A toaster from the past marrying an iguana from an alternate reality? It'll never end.

Okay, back to the list. Reason number four for why future people might not matter morally: they'll know the exact time and manner of your death, which is the perfect combination of smugness and creepiness, isn't it? I mean, you're sitting there eating a ham sandwich and they're thinking, “That's right, go ahead and contribute to the heart attack that'll kill you on November 4, 2045, when you're trying on novelty hats in a hat store. I can't wait.”

Which leads me perfectly into our final counterpoint to the thesis of longtermism: future people are walking spoiler alerts. If there's one thing I can't stand, it's people spoiling the endings of movies, or sports results, or the exact time and manner of your death. But it doesn't end there with future people, does it? No. I mean, imagine being with someone you thought was a future friend. And you're a bit vulnerable with them, and you share one of your hopes for the future. And you say, “I really hope I make it to the time when they can make a toothpaste that tastes exactly the same as chocolate ice cream.” And the future person says, “Oh, I think you'll be pleasantly surprised!”

And the amount of heartbreak you feel in that moment for them having spoiled this moment for you, it's this combination, it's this heartbreaking combination of joy at the fact that you're going to get to have this amazing thing, and this absolute despair that you're not going to be able to experience the surprise of it. You know, you'll be sitting there years later in your floating mansion, watching your holographic TV, and you'll ask NewsGPT for the headlines. And NewsGPT will say, “Headline number one: Colgate makes a special announcement...” And your head will be buried in your pillow, as the tears stream down your face, and you'll feel that ambivalence all over again: the best moment of your life, ruined by the worst friend you could ever have.

So there you have it. Five strong reasons for why Will MacAskill's thesis doesn't go through. I'm not saying these are the absolute strongest arguments against longtermism; I'm just saying they're the strongest we've come up with so far.

Okay, for the next section, let's move on to this idea that we can reasonably hope to influence whether future people exist and how good or bad their lives will be. Now, the most prominent idea that longtermists have come up with for increasing the welfare of future people is to reduce what they call existential risk. And again, to be completely fair to these people, I'm going to play a clip of Toby Ord, the author of the book The Precipice, from episode number 72 of The 80,000 Hours Podcast, where he introduces why longtermists care about this stuff. Okay, here it is.

Toby Ord:
The book is called The Precipice: Existential Risk and the Future of Humanity. And it's about how humanity has been around for 2,000 centuries so far, and how long and great our future might be and how soaring our potential is. But how all of this is at risk. There have been natural risks of human extinction, things like asteroids that could potentially wipe us out, as they have many other species, and there's been this background rate of such risks.

But with humanity's increasing power over time, a rise of technological power, we reach this stage where we may have the power to destroy ourselves, leading to the destruction not only of the present, but of the entire future and everything that we could hope to achieve. And this is something where I now think that the risk is high enough that this century it's about one in six. And I think that either the risk is going to go higher and we fail out of this time period, or we get our act together and we lower these risks.

Keiran Harris:
Now, if you share Toby Ord's views, perhaps you would like to dedicate your career to reducing existential risks. But what if instead, you share the views of my wise grandfather, you know, about the only good future person being a dead future person? What should you do then with your career? And I think it's common sense to roughly take the inverse of Toby Ord's approach and to, as subtly as possible, increase existential risks. And I say as subtly as possible, because if you were too obvious about this, you'd probably be committing crimes and someone might stop you. So you've got to be a bit clever about it.

Now, I want to remind you that I don't agree with my grandfather personally, okay? But I have jotted down a few ideas over the years about subtly and creatively increasing existential risks in my notebook. So I thought I'd share a few with you now.

Toby Ord says that the risk of an existential disaster this century is one in six. That's a nice place to start. But there's no doubt we could get that higher if we showed a little ambition. And my first idea for increasing existential risk concerns great power wars. So, in keeping with the theme of being fair, I thought I'd play you a clip of Chris Blattman from episode number 128 of The 80,000 Hours Podcast.

Chris Blattman:
I think, if anything, there's a common set of very difficult-to-resolve routes to great power wars, which is that in a city, we can imagine a criminal superstructure organising the gangs and being that third party guarantor of peace and security and commitment, and punishing unchecked leaders and solving the five problems. And at a city or even a national level, states perform those functions pretty well, and we've proven pretty good at constructing those institutions. And as an individual, if I decided I wanted to really make a contribution to those, I could probably make a difference on the margin.

On the other hand, it's very hard for anybody, even the US president, to think, how could we change the fundamental structure of the UN Security Council? And so there's a scale of human institutions where we don't yet have the solutions and they're so big and slow and subject to lots of forces and competing actors that they're hard to manipulate. So there's this fundamental problem of anarchy, meaning there's nobody above to sort of settle these disputes and there's a difficulty of tackling that. We still have tools, but we have to be willing to work on these low-return margins.

Keiran Harris:
Okay, so my first idea for subtly and creatively increasing existential risks is called the World Series of World Wars. So this is playing off what society did with the World Series of Poker, which took something previously unwatchable on TV and turned it into a spectacle on ESPN, only now with world wars. So the idea here would be that you broadcast it like an exciting international event, and you have exciting opening ceremonies, you have exciting graphics and stats and graphs. You have, you know, trophy ceremonies for the winners. You know, just maximise the incentives for countries to start world wars.

And like the World Series of Poker, the World Series of World Wars could have a funny co-commentator who makes jokes about his ex-wives. I don't know, he could say something like, “Why, that country folded slower than my wife on laundry day!”

That's actually pretty good, isn't it? That was just improvised. But it's perfect joke structure, isn't it? You know, I hadn't actually planned to get any laughs with this show. You know, these are obviously very serious topics. I'm definitely not a comedian. But there was this one time when I was at a restaurant with my friends, and about halfway through the meal this lady came up to me and she said, “Excuse me, are you Bret from Flight of the Conchords, only shorter and fatter?” And I thought, that's all right, isn't it? Because obviously she must have heard me joking around with my friends and mistaken my comedy stylings for those of one of New Zealand's most talented musical funnymen.

Okay, my second idea relates to nuclear weapons, so I'm going to play a clip from episode 43 of The 80,000 Hours Podcast with Daniel Ellsberg. Here we go.

Daniel Ellsberg:
Our policy has actually been the threat of an insane action. An action that essentially we now know for the last 35 years has involved killing nearly everyone on Earth by the smoke from the burning cities that are planned to be hit in our war plans. And that smoke, we now know, on the nuclear winter calculations, would be lofted into the stratosphere, would spread around the world globally. I'm talking now about a war between the US and Russia, where thousands of weapons would be involved.

And a few hundred of those weapons on cities which are targeted would be enough to cause smoke that would reduce the sunlight reaching the Earth's surface by about 70%, killing all the harvests worldwide for a period as long as a decade. But that wouldn't be necessary: killing all the harvests for about a year or even less would exhaust our food supplies, which globally are about 60 days. And nearly everyone would starve to death, except for a small fraction, perhaps 1%, a little more or less, of humans who would survive in Australia or New Zealand, as the southern hemisphere is somewhat less affected, eating fish and mollusks.

And that could be a sizable number of people. 1% is 70 million people, but 99% gone. And virtually all the larger animals other than humans, they're not as adaptable as we are. And they can't move thousands of miles and wear clothes, make fires, have houses. They would go extinct altogether, as they did when an asteroid hit the Earth 67 or 65 million years ago, which created a very similar effect, blotting out sunlight with the dust that was sent up.

Even the word evil seems just overwhelmed by what we're talking about, which is the destruction of most large life and most humans on Earth. Something that was simply not possible 100 years ago.

Keiran Harris:
Okay, my second idea for subtly and creatively increasing existential risks is called Nuclear Armageddon Park. And this was inspired in part by a wonderful documentary I saw on HBO called Class Action Park, which was about a place in New Jersey in the ‘80s that was known as the most dangerous theme park in the world. You know, the rides were genuinely dangerous there, and there were countless injuries and even a few deaths as a result of the park.

And so I thought, what if we built something similar, only with a theme of nuclear weapons? And so I'm calling it Nuclear Armageddon Park. Now, come on, don't say no until you've heard me out. Because we've already had many of what are called nuclear near misses since the development of nuclear weapons. And my grandfather always used to say that “near miss” is just another term for failure. So I do think we can do better here.

Okay, so what kind of rides might we have in Nuclear Armageddon Park? I just have some early, rough thoughts. I want to make it clear that these are not, like, final or anything. These are just early drafts. But I think they're promising. Okay, so what if we had a water slide where people slide down kind of this long tube, classic water slide design. But in the pool at the bottom there you have a nuclear submarine.

Now, the key to this is that you have a real active crew of military folks working in the nuclear sub, but they don't know where they are. You haven't told them that they're in a theme park. And so every time that one of the parkgoers crashes into the side of the sub, you know, they might think they're under attack. This could work with, you know, military personnel from the US or Russia or China; maybe you even have, like, officers from all three countries in different slides throughout the park.

But anyway, the idea would be that eventually one of them would make the decision that they're under attack and they might retaliate using their nukes. It's not gonna happen every time, but you're just ramping up the percentages there, aren't you?

Okay, another idea. You've probably seen pictures of people sort of pretending to ride nuclear weapons, right? And so I thought, what if you allow people to do that for real in Nuclear Armageddon Park, dropping out of mini planes throughout the park? You know, you can reenact that scene from the end of Dr. Strangelove where something like that happens. And apologies if you haven't seen that yet, and I've been a real future person by spoiling that for you.

And the last idea I had was just to directly rebuild the exact rides from Class Action Park, except that you give a 50% discount to any parkgoer who happens to work directly with nuclear weapons in their regular career, with the thinking that some percentage of those parkgoers will at some point develop brain damage as a result of the rides. And...

Now, look, I know this is not all rainbows and unicorns. You know, I don't think this would be easy. I'm not a terrible person. I just think that sometimes doing what's right for the world isn't easy. And anyway, we don't have to do all of these. I'm just starting the conversation.

Okay, my next one definitely needs a bit of background info. So for years, 80,000 Hours has been very whiny about the threat of future pandemics. And one subsection of their concern relates to what's called gain-of-function research. So to kind of explain that, I'm going to play a clip from episode 112 of The 80,000 Hours Podcast with Carl Shulman.

Rob Wiblin:
Okay, so a completely different style of programme, other than these bioweapons, is scientific research, biomedical research, in which scientists try to make new viruses that have additional or different capabilities than wild viruses do. I guess that's so-called gain-of-function research. It sounds like even labs that are reasonably well run still have a rate of leaking samples, leaking viruses out of them, that is something on the order of, I guess you mentioned, one in 500 lab years earlier. I guess maybe we could say if things have gotten a bit better now, maybe it's more like one in 1,000 or one in 2,000 lab years.

But it still suggests that if these leaks are happening at all, at that kind of rate, then it might not be so wise to be creating and storing viruses that might be capable of causing a global pandemic in any labs at all. Because perhaps we just haven't yet figured out how to reach the standard of safety that would be required to ensure that there's no way that they can escape into the wild and cause huge amounts of damage.

Carl Shulman:
Yeah. So the safety still seems poor. It's not something that has gone away in the last decade or two. Like, there have been a number of mishaps just in recent years, for example, those multiple releases of SARS infections, one after it had been extirpated in the wild. If we have an infection rate of one in 100 per worker year or one in 500 per laboratory year, and given an infection with a new pandemic pathogen, in a lot of these leaks someone else was infected, though usually not many, because they don't have a high enough R0. So yeah, you might say on the order of one in 1,000 per year of work with this kind of thing for an escape. And then there's only a handful of effective labs doing this kind of thing.

So you wouldn't have expected any catastrophic releases to have happened yet, reliably. But also, if you scale this up and had hundreds of labs doing pandemic pathogen gain-of-function kind of work, where they were actually making things that would themselves be ready to cause a pandemic directly, that cumulative threat could get pretty high.

Keiran Harris:
Okay, so my third idea for subtly and creatively increasing existential risks involves making a TV show. Did you ever see the old show The Dating Game? So, the format for this one is that there was a bachelorette who would question three bachelors whom she couldn't see, because they were behind a wall. And at the end of the show, she would pick one of them to accompany her on a date, and the show would pay for all their expenses.

So I had an idea for a new dating show called The Dating Gain. So this is not exactly the same format as The Dating Game. So I'll explain how this one works. Okay, so, a normal bachelorette or bachelor, they come on the show, and like The Dating Game, there are three people behind the wall that they could potentially date. But one of those people is working on dangerous gain-of-function research. And if that scientist doesn't get picked to go on the date at the end, they have to release their dangerous pathogens into the wild.

So, you know, I'm certainly not claiming that every show would lead to something as bad as or worse than COVID. You know, I'm just saying it would give us kind of a fighting chance.

Okay, so, those are the three ideas I thought I'd share today. You know, I don't want to overwhelm you, but, you know, there are a few other ideas in my notebook, so do reach out if you'd like to hear about more of those. Maybe we could do them in some future shows.

Now, I wanted to wrap this episode up by making something very clear. This whole show is about the ideas that surround longtermism and effective altruism. It's not about the people in the effective altruism and longtermism communities. You know, just because you disagree very strongly with specific ideas, that doesn't mean that the people who do believe those ideas are anything other than intelligent, warm, lovely people worthy of immense respect.

I should, however, flag one thing in the interest of transparency. I have noticed in my years around these communities that most of their adherents are vegetarians or vegans. I think almost all of them love dogs, and I know many of them to be avid painters. And I personally don't believe in guilt by association at all, but I think it would be remiss of me not to mention that, as far as I know, the only historical figure to have loved dogs, loved painting, and refused to eat animals at all was the Führer himself, Adolf Hitler.

So, yeah, look, I really, genuinely don't want to influence your thoughts one way or another towards determining whether these are communities that you'd like to be associated with. I just feel like it's important to be completely honest about this kind of thing. Alright, good night everybody.

[music]
We’ll meet again, don't know where, don't know when, but I know we’ll meet again some sunny day. Keep smiling through, just like...

Rob Wiblin:
If you had strong opinions about that one way or another, please email us at podcast@80000hours.org to help us figure out whether more of that ought to exist or not. We'll be back with a regular interview soon.