Resources on how to do good with your career — and anything else we here at 80,000 Hours feel like releasing.
Matt:
Hi, listeners, and welcome to Roomies, a podcast where one of Matt's roommates from [address redacted] comes and joins the podcast, like all of our other guests, right before they leave London for the United States. By the way, happy birthday, America. I'm Matt Reardon from the one-on-one team.
Bella:
I'm Bella from the marketing team.
Huon:
I'm Huon from the one-on-one team.
Matt:
And our special guest today is Ms. Chi Nguyen. I have a question for you, Chi.
Chi:
How do I pronounce my last name?
Matt:
It's a good first question. Well, how do I pronounce your last name?
Chi:
To be honest, I can't really do it myself. But not the way you did it.
Matt:
OK. Well, I'm grateful that you chimed in and spoke a bit because people who are sensitive to accents might have noticed that you're German. Chi, my question for you is, are you the most cooperative German?
Chi:
No, that's Caspar Oesterheld.
Bella:
OK, we know the first and last name of the most cooperative German.
Chi:
Yeah. Who's Caspar Oesterheld? So the joke here is that I work on this thing called Evidential Cooperation in Large Worlds. There's a lot of theory behind it, but I'll try not to go into that. Basically... man, I really don't know how to say it. Let's say the theory is like, oh, maybe you should cooperate with people even if you cannot causally affect them. So, including people who might be in other parts of the universe, or just, you know, in other ways very far away or very different. And I've been working on that. I'm pretty much one of the only people in the world, well, on Earth, working on this that we know of.
Matt:
The hope is there are others.
Chi:
Yeah, there are others, but I'm not the most cooperative German, because Caspar Oesterheld is actually the person who came up with this and also kind of mentored me for a while. And generally, I don't know, you should just read his work. It's very good. I guess I could make this a promotional podcast episode for Caspar Oesterheld. Shout out to Caspar Oesterheld. So I'm not the most cooperative German.
Matt:
I was calling back to our conversation last night, of course, when you expressed some nervousness about coming on the show, because you might in the future have some very publicly prominent role, and you may have said something wrong tonight.
Chi:
Don't sell me out like that.
Matt:
And so, well, we just tried to think through what are the very publicly prominent roles that she might hold one day. I suggested Chancellor of Germany, which she thought was funny. And my theory for how that might come to be is: you're the ECL expert, and perhaps if cooperation is the thing we prize in the future, the most cooperative people get to hold power and make decisions, and maybe that could then be Chi. But no, she set me straight.
Chi:
Yeah, sorry. Actually, maybe I'm like, I'm just doing this to show how cooperative I am. It's like a fake out.
Bella:
Do you have to say yes to everything that you're asked to do to be maximally cooperative? I feel like no.
Chi:
Do I say yes or no to those now to be maximally cooperative?
Bella:
I mean, you should maybe glomarize.
Chi:
I don't. I don't know what that means.
Bella:
Oh, yeah. Can I define it? I think it's when you decline to answer a question, but not in a way that's like, oh, my answer to this question would be the bad one, the one that makes me look bad. Instead, it's that you have a policy of declining to answer certain kinds of questions. You have to actually do this rather than just say it, because otherwise you are just in fact declining to answer because the answer would make you look bad. But there are some kinds of questions you should systematically not give an answer to.
Chi:
I couldn't possibly comment on this.
Bella:
OK.
Chi:
But more seriously, on the object level, for those of you who are interested in this evidential cooperation stuff, or have maybe heard about it: even though you end up cooperating with lots of kind of weird value systems, and it should generally make you more pluralistic and cooperative and so on, you are still discerning about who you cooperate with. It's not like, OK, I'm now a doormat and I'm just giving everybody everything they want. There are definitely criteria. And in the end, it is a theory about how to get more of what you yourself want. So it's not just about playing nice.
Matt:
How did you get into all this?
Chi:
Oh, dear. So when I got into EA in 2017 (real OG now; people who can see me on YouTube will probably wonder how old I was back then, because I look like I'm 17, so that'll be like when you were 10, but anyway, not going more into this). So in 2017, when I got into EA, I was still in Germany, and back then the organisation now called the Center on Long-Term Risk was still called Stiftung für Effektiven Altruismus (I can't get the English out of my German), which translates to Foundation for Effective Altruism. And they are really interested in what they call S-risk, suffering risk, something I'm still pretty interested in. Listeners to this are probably familiar with x-risk, existential risk, the risk of going extinct.
And S-risk isn't really concerned with that. It also takes a long-term perspective and talks about what happens in the future and so on, but it's more concerned with what all the sources of long-term suffering are that could come about.
And they were also into pretty wacky ideas. And it just so happened that Evidential Cooperation in Large Worlds kind of started at CLR. And I guess part of the reason here is also something like: hey, by default maybe our universe won't be filled with lots of suffering, which is great, but maybe there are other places where it's different, and can we, you know, do something about that? Can we get some gains from trade going?
I don't really know. I honestly think that it's just historical path dependency. This is kind of a topic that everybody should be concerned with. But maybe you can reason yourself into why these suffering-focused people ended up with acausal trade. I think it's just because they were basically thinking about every possible weird crux. I mean, now they're doing much more down-to-earth things like cooperative AI and whatever, but in the early days the website was like, oh, we do AI and string theory and quantum stuff, and it was like, what is going on with this place? But then I guess people realised, OK, most of these things are pretty weird, and the acausal stuff stuck. And I don't know. For me, it was actually the thing that I kind of ignored for a while, because it was too weird for me. But then, I guess because I was in that community, over time you become curious through osmosis, and so on and so forth, and then they got me.
Huon:
I think this maybe ties in a little bit. Like you said earlier that you felt that it was pretty path dependent, that the suffering focused people end up doing the ECL thing. I think that there's this common cause about how much you're interested in just following the logic wherever it leads. And then if it leads you to a crazy sounding place, you're like, well, I guess I'm at a crazy place. And probably that willingness to follow arguments explains a lot of this history.
I personally struggle with this quite a lot. I feel quite torn, where on the one hand, logic seems pretty logical; there's something that really pulls at me to try and iron out those inconsistencies. And yet if you do that too much, you end up with different inconsistencies: I don't know, I feel like I end up doing sort of mental gymnastics to justify my behaviour, so that it ends up being more in line with what people around me are doing.
Chi:
Yeah, I think I get the not wanting to wager between completely different philosophical worldviews thing. But my hypothesis is that the scepticism about doing this wager-type reasoning in other situations is usually based on doubting that you got your credences right. Just doubting that you got some specifics right or wrong, that maybe the options are wrong, the specifics are wrong. With "just always follow the logic and you end up at a weird place," I think not wanting to do that is really legit, but it's more based on there being quite a few reasoning steps that are not empirically validated. You can't really observe facts about them; they're just deduction. And humans are very often kind of wrong about that, or do this poorly; there are too many conditionals or whatever.
And so even if the way you write it down says, I don't know, 10% or whatever, and I can't immediately argue with it, I can still have the hunch that if I actually did this properly, I would come out with a much, much lower probability. Or maybe I could make symmetric arguments for the exact opposite being true and also having 10% probability. And I think that instinct is good; I buy those kinds of counterarguments. But I don't think that there's something wrong with wager-based reasoning in and of itself.
For example, one type of reasoning that's more popular, at least within EA (and I'm not necessarily agreeing with the whole move, so I'm not sure if I buy the premises at all), is this: a lot of people in the EA community believe in human-level AI or transformative AI happening at some point, and there's some uncertainty over whether it will happen in five years or 50 years or 70 years, or maybe even in 200 years.
And some people make the argument that, well, you know, even if I'm 50/50 on these options, I should act as if it's soon, because in those worlds I have higher impact. Now, I'm not entirely sure it's true that in those worlds you have higher impact and so on, but I think that type of reasoning is actually really common, we do it all the time, and I think that makes sense.
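(A toy version of that wager, with made-up numbers that are purely illustrative rather than anything from the conversation: say you are 50/50 between transformative AI arriving soon or late, work aimed at the soon world is worth 10 units if it arrives soon and 1 unit if it arrives late, and work aimed at the late world is worth 0 units if it arrives soon and 3 units if it arrives late. Then

$$0.5 \times 10 + 0.5 \times 1 = 5.5 \quad\text{vs.}\quad 0.5 \times 0 + 0.5 \times 3 = 1.5,$$

so acting as if timelines are short comes out ahead even at 50/50 credence.)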
Bella:
I totally agree with everything you said, but I want to put an asterisk on it: that type of reasoning is really common within EA.
Chi:
Yeah.
Bella:
And the point I'm trying to make here is that if you go up to somebody on the street, they will not assign credences to different worldviews. It will not happen. So it's not going to be a gotcha for many people, basically.
Chi:
Is that true? But like if, what about like health insurance or something? Like I'm just like, well you don't...
Bella:
Yeah, that's true.
Chi:
It's not quite the same thing to be fair. It's like, it's a bit different but like, yeah, I don't know.
Bella:
People do things to mitigate risk, and people do things that have differential payouts in different eventualities. That's true. But I don't think they're like, oh, let me look at different ways that I could act and how that would correspond to different outcomes, compared to how likely I think those outcomes are. People are just not going to do that; they're not going to be summing over those probabilities. It's not going to happen.
Chi:
Yeah, yeah, of course. I mean, nobody outside of EA or rationality explicitly sets things out in terms of probabilities.
Bella:
That's the point I'm making.
Matt:
Yeah. Chi, on this point, I think. I know you're not on Twitter, but I will just refer you to the reply section of any tweet that Nate Silver sends out into the world.
Chi:
Oh, no. OK, OK.
Matt:
People saying his projections are meaningless.
Bella:
Can I circle back to the thing about believing in weird arguments and getting off the crazy train, and that kind of thing we were talking about earlier? I just feel so confused about what to do about all of this. And one reason is that, yeah, like Huon said, well, we can question all of these chains of logic, but there's some sense in which logic is compelling. And then also there's another sense in which what comes out sounding really weird and crazy is determined in part by whatever people at large think. And I think people at large have some pretty abhorrent views on a lot of topics, which seems kind of wild to me.
And then I'm like, well, I have this perspective of mine from which it seems really weird that the world seems to think, for example, that it's OK to torture and kill animals for relatively small gains, or that it's just obvious that their government should mostly be promoting the interests of people in their own country, that kind of stuff. And I'm like, that just seems so wild to me. But also, if I follow a chain of reasoning that leads to that conclusion, I'm doing something kind of crazy here. And so then I'm like, this kind of seems like just a big mess.
Huon:
Yeah.
Bella:
And I don't really know what to do.
Huon:
I think "big mess" does seem like a good summary of the situation. My personal journey on this question: when I was an undergrad studying philosophy, I was very bullet-bitey. I was like, why would we be dealing with these theories that don't work in the theoretically clean cases? Clearly we should just throw them out at that point. And then the more I moved out of the tutorial room and into situations where I was actually making decisions, you realise how much more your reasoning looks like the theories that have poor theoretical properties, just by necessity.
Because like, unfortunately I'm not a perfectly rational agent. I don't have a well defined set of priors that I have access to and can do all of the reasoning that I ought to.
Bella:
And I can't go in some extremely neat chain from epistemology to practical ethics.
Huon:
Yeah. And then I think there's this real danger with trying to copy the ideal, because there are plenty of cases where being similar to the ideal but not quite there is a lot worse than some other local maximum. And I think that's roughly my justification for getting off the crazy train before it's fully crazy: I can't actually trust myself to do this reasoning super well. I think it's going to point me towards important considerations that maybe other people are going to miss, and I'm going to try and move in that direction. But there's also something really important to the fact that society at large has managed to not kill itself yet. And there's, I think, some evolutionary pressure towards...
Chi:
I'm tempted to go into anthropics, of course, but let's move on.
Huon:
Like, how do we. We don't know how to deal with anthropics either. I don't know, it just seems like moderation seems pretty pragmatic to me.
Chi:
The interesting thing here is that I actually agree with almost everything you said. So I feel like that's not actually the crux. I'm wondering whether there's a thing where I think coherence and transitivity and so on are kind of overrated, because I actually really agree with you that being nearby to the ideal can be much worse than being at the ideal.
The thing where I really have trouble getting off the crazy train, and that's maybe just because I have an anxious personality, is when it's arguments of the form: oh, you know, the commonsensical thing is this, but here's this galaxy-brained argument for why it's actually really bad. And it just goes all the way up, and I'm just like, oh man.
That feels really hard to throw out, because in some ways EA is almost based on the idea that things that sound intuitively good are in fact not always actually good. And the nice thing is that in the beginning of EA, we could check things empirically, because people mostly thought about that kind of stuff. And now we've gotten into the whole longtermism mess and theoretical mess. But it's this similar structure of, well, you can do A, which seems really good commonsensically, but it might be really bad. And I think I'm happy to be incoherent and whatever, but that's kind of where I'm like, man... the crazy train is so compelling. This actually just makes so much sense to me.
Bella:
Yeah. I think this is the thing I was trying to say about animals, where I feel unwilling to abandon this belief, even if many people would be like, wow, what? You think that invertebrates might be having experiences? And I'm like, I don't know, seems extremely obvious to me. And extremely bad if it's true. And the other people who think that it's fine to just treat them however we want are wrong. And so I somehow can't abandon it.
Chi:
Yeah. By the way, this whole podcast started with Matt being like, people are here to hear interesting takes on EA topics. And I was like, I'm here to make a game show out of this podcast. And I prepared lots of games and stuff like that. And I was so ready to derail and not talk about object level things at all. And that did not work.
Huon:
Yeah, I think. I think that's a great segue into it.
Matt:
Just beginning to derail it.
Chi:
Oh, OK. Yeah. Do you want to be derailed?
Matt:
Oh, we have to opt in. I wanted to talk more about evidential decision theory.
Chi:
Oh, you do?
Matt:
Which we covered in a previous episode. So I don't. I want to be derailed.
Chi:
OK, let's go. Let's see.
Bella:
So that's the one with Julian for everybody who wants to now go look at the one where we talked about evidential decision theory, but probably you should just Google those words instead.
Matt:
That's number four with Julian Hazell.
Chi:
Anyway, so I have a quiz. Oh, yeah, don't look at the screen. Guys, I have a quiz: is it this or that? I will present you two options. I have three questions total. Oh, maybe we should keep scores. And the winner gets...
Bella:
Can you keep scores?
Chi:
Because you get to win my dash water.
Bella:
I don't want your dash water.
Chi:
I haven't drunk from it yet.
Bella:
Yeah, I still don't want your dash water.
Chi:
OK, so three questions, and I'll give you... it's self-explanatory, whatever. So the first question is: I have a quote, and you have to guess whether it's a Donald Trump quote or random stuff that ChatGPT made up. Now, to be fair, I tried to fact-check a little bit, but I didn't fact-check super hard whether these quotes are in fact real, because ChatGPT generated the question. So, according to ChatGPT, is this a real Donald Trump quote, or is it something that ChatGPT made up?
Matt:
So they could both be.
Chi:
They could both be made up.
Matt:
ChatGPT is strictly dominant.
Chi:
No, no, no, no. Does ChatGPT think this is something it made up, or does ChatGPT think this is something Donald Trump said?
Bella:
OK. I’m ready.
Chi:
Here we go. The beauty of me is that I'm very rich.
Matt:
ChatGPT.
Bella:
I think I'm gonna go ChatGPT as well.
Huon:
I'll. I'll take the other side, I reckon Trump said it.
Chi:
And it's Donald Trump, guys.
Bella:
OK, great.
Matt:
Wow. According to ChatGPT.
Chi:
I did Google this, and there were Google hits for it. OK, but, I mean, to be clear, all of these quotes and whatever I'm giving you, they're obviously massively taken out of context, and not entirely...
Bella:
I mean, I don't know in what context you could say "the beauty of me is that I'm very rich" such that it didn't mean that the beauty of me is that I'm very rich.
Huon:
I mean, I think that's great first date material, you know, it's a great context for that quote.
Chi:
Maybe it's like from a Hinge dating app.
Huon:
I would pay quite a bit of money to see Donald Trump's Hinge profile.
Chi:
Do you think he has a dating app profile?
Matt:
No.
Bella:
OK, well, but like, one that he made.
Matt:
Yeah, yeah, he's on the League.
Bella:
What's the League?
Matt:
You guys don't know about the League?
Bella:
No. What's the League?
Matt:
The League was a dating app that was conceived only for elites. I think you had to have a certain amount of income or have gone to a certain school or something like that.
Chi:
OK, well, anyway: the beauty of me is that I'm very rich. I can't say that about myself. But anyway, the next one is the same genre. Is it something that ChatGPT thinks it made up, or is it something that ChatGPT thinks Joe Biden said? Here we go: "You cannot go to a 7-Eleven or a Dunkin' Donuts unless you have a slight Indian accent." You cannot go to a 7-Eleven or a Dunkin' Donuts unless you have a slight Indian accent.
Bella:
I think I'm gonna go ChatGPT again.
Huon:
I'm feeling like it's a crazy thing to have said, and so I'm tempted to say that it's fake. But it's not a thing that ChatGPT would come up with; it feels too weird to be a lie. So I'm gonna go with Biden.
Matt:
Yeah, I'll go with Biden as well.
Chi:
And it's Biden, again. Or, like... I mean, again. But, yeah, I don't know what the context was, but I love it. I googled it, and it seems like he actually said it. It seems like in context, he afterwards said, it's not a joke. So I don't know. I mean, I looked this up before the podcast. Maybe I didn't have to.
Bella:
He also says it's not a joke?
Chi:
He said it's not a joke. This is not a joke. Maybe it was about a very specific place in America. I don't know.
Huon:
Surely him saying it's not a joke is also part of the joke. Like, that feels too absurd.
Chi:
No, you can't.
Matt:
No, that is something that, you know, an old Midwestern dude would say: "No, I'm not kidding."
Huon:
Rob Wiblin does this all the time. He refuses --
Chi:
Not a joke when it's a joke?
Huon:
Like, if he's said something ridiculous for satire and people are dragging him on Twitter, he will just double down and say more ridiculous things.
Chi:
But as a politician?
Huon:
I feel like I don't know much about Biden's personality, but I think he enjoys being a bit of a memer sometimes, right?
Chi:
He does?
Huon:
I don't know. I feel like I see some of. Like, I think that the Biden content that I see is filtered for him being a little bit more edgy.
Bella:
What was that really serious announcement he made while holding an ice cream?
Matt:
I forget. But I think I can kind of picture it.
Bella:
It was great. It was something about Israel, Palestine. I forget. Anyway, whatever. He is a little bit of a memer.
Chi:
OK.
Matt:
All right, number three.
Chi:
Last one, guys. So this one is a completely different genre. I have two different statements, and you have to figure out whether they are taken from A, the OpenAI web page, or B, the Open Asteroid Impact web page. Now, for those listeners who don't know what Open Asteroid Impact is, which is probably many: there was a satirical April Fools' website that, I guess, tried to make fun of OpenAI, and it just promoted redirecting big asteroids to hit Earth for mining.
Matt:
It was an asteroid mining company, and they realised that most of these resources are out in asteroids, but it's very inefficient to go and send spaceships to get the asteroids. So you bring the asteroids to Earth.
Chi:
Yeah.
Matt:
However, the opportunity is so big, it's very important that we put safety first and that safety first companies are the first to bring asteroids to Earth.
Chi:
That's true. Because, yeah, it could go horrifically wrong. Because, you know, the asteroid thing didn't go so well in the past. Anyway, so, two quotes.
Bella:
Oh, oh, OK. Like dinosaurs.
Chi:
Yeah, yeah. I was like, wait, what did you think?
Matt:
That last asteroid. Jeez.
Bella:
Yeah, I was just trying to remember if there was some catastrophic asteroid impact that I had forgotten about or didn't know about.
Chi:
I mean. Anyway, so quote number one, this technology will profoundly transform how we live and we can guide its trajectory, limit abuse and secure broadly beneficial outcomes. That's quote number one. Quote number two, what does the Open in Open Asteroid Impact stand for? Initially, we wanted to open source all of our asteroid redirection software designs and schematics. But our safety advisors have raised concerns that asteroid redirection and mining operations could endanger Earth. Indeed, were someone to redirect an asteroid badly, it might cause massive damage. Something doubtless many terrorist groups are already aware of. Terrorists can even reuse our redirection schematics to make high energy kinetic weapons. Crazy, right? This is why we no longer open source our software and models. Instead we rent out our machines to whoever is willing to pay us money for safety.
So now OPEN stands for the Openness of space. Our legal counsel has advised us that our name has always had this meaning. So please consider the previous paragraphs as fictional exploration only. So which one was OpenAI and which one was Open Asteroid Impact?
Bella:
I think I have a guess.
Chi:
Really? It's a tricky one. I made them more and more difficult.
Bella:
Yeah, they did get harder as it went on.
Chi:
They did get harder as it went on. Matt looks confused.
Matt:
What are you doing?
Chi:
I thought it was funny.
Bella:
The joke is just that it's obvious.
Huon:
I think I'll say that the first one was OpenAI. And I will be having your dash water.
Chi:
Oh, you got the second one. Did you get the second one right?
Huon:
Yeah, I think so.
Chi:
Do you actually want my dash water?
Huon:
Not particularly, but I do want to win.
Chi:
OK, look. OK, great. You win, you win.
Matt:
And Huon has won. He was untouchable going into the final question, in fact, because there were only three.
Chi:
Yeah, yeah.
Huon:
No, we could have tied and then we could have split the dash water. It would have been terrible.
Matt:
Well, Chi, you almost brought us back to one of the topics that I wanted to cover, because we are recording this on the 4th of July. Happy birthday, America. Happy birthday, America. And also, happy election day in the United Kingdom. Did you not know there was an election on today?
Chi:
No, I did know there was an election.
Bella:
Have you voted?
Chi:
I looked up how close it was going to be in my borough and then I decided not to vote.
Matt:
One fun thing to know about me and my roomie is that we live in Keir Starmer's district. I don't think the PM to be is going to have any trouble winning his district.
Bella:
But if he did it would be such a huge deal.
Matt:
If he lost, you mean?
Bella:
Yeah.
Matt:
Yes. Because Chi and I were definitely gonna go line up and vote Conservative.
Bella:
Oh, no, no, I wasn't suggesting that. I was saying if you didn't vote for him and then he lost, you'd feel really terrible, especially if he lost by two votes.
Matt:
I see.
Chi:
Yeah, he's probably not gonna lose by two votes.
Bella:
Probably. But it, you know, you have until 10pm so there's actually still time.
Huon:
Surely you would have had to register to vote already.
Chi:
Wait, I'm registered to vote.
Huon:
Oh, OK.
Chi:
Yeah. Anyway, so that's why I looked a bit caught out when you said it's election day.
Matt:
I'm a disfavored immigrant. I'm not allowed to vote.
Chi:
Oh, you're not?
Bella:
Wait, what?
Matt:
My country broke apart from this one by blood.
Bella:
Wait, is that actually the reason why, or is it not just that she's been in the UK longer than you?
Huon:
No, I can. I can vote. And I've been here less time than that because I'm Commonwealth.
Chi:
Just because you're Commonwealth?
Huon:
Yeah.
Bella:
So why can you vote?
Chi:
Why can I vote?
Bella:
Yeah.
Chi:
I don't know.
Matt:
You have pre-settled EU status.
Chi:
I have settled status.
Bella:
And you're gonna throw it all away.
Matt:
Wait, I don't know the difference between pre-settled and settled.
Chi:
It means it's forever. Well, it's forever unless I don't enter the country within the next five years again for at least a day. Or an hour.
Matt:
What about half an hour?
Chi:
Yeah, I think that's probably also fine. I mean, I don't. I don't really know. It's not super well specified, but I think if you read it, technically you can just be here for a minute and it's fine.
Matt:
And you just have to get your passport stamped or whatever.
Chi:
No, no, I think. I think you just have to have an entry. I'm not even sure. Like, I think you just. I don't know how this works.
Matt:
Anyway, at a more conceptual level: elections. Have any of you ever felt, whether thinking about decision theories or not, that your willingness and actual execution of a vote for a party was some real signal that that party would do better or get more votes because of your willingness? Do you know what I'm getting at?
Bella:
Yeah.
Huon:
Yeah. I just don't think that I'm particularly similar to the swing voters. And so I don't take my actions to be particularly representative of the people who matter.
Matt:
Interesting. Are you a passionate voter?
Huon:
I mean, not in this election, because I haven't voted.
Bella:
What the hell?
Huon:
I also looked up my electorate and it was a safe Labour seat.
Chi:
I'm sorry, Bella. So you voted?
Bella:
I did. I went out of my way. I went to a nice local primary school and I put in my vote.
Chi:
Look, all listeners, I think this: you should all go vote.
Matt:
But you're playing into my exact point, right? The fact that you're like, "I should vote, but I'm not gonna," suggests that you're the kind of voter who can turn an election.
Chi:
Yeah, but not this one.
Huon:
Not this one.
Bella:
But should you be reasoning about the specifics of this particular circumstance, or should you be using policies and decision rules? I think people in general should vote. That's why I voted, not because I think it will particularly swing it this time.
Chi:
I think it's very virtuous of you.
Matt:
Yeah. So, I mean, one of the reasons I brought this up is because we have this old article on 80K. Well, it was originally written in 2016 and then kind of revamped in 2020. And I remember reading the 2016 version of it and being really unimpressed, because it did this move. It's an article by Rob Wiblin, our favourite person to talk about on the B podcast here, where he talked about the consequentialist case for voting and the idea that, under certain circumstances, you might have something like a 1 in 1 million to 1 in 10 million chance of swinging a national election. And given the size of government budgets (he was using the United States as an example), the expected value of that vote could reasonably be in the tens of thousands or hundreds of thousands of dollars.
And that's a pretty good use of what he called an hour of your time. And I think in the 2020 update he's like, well, thanks for all your nitpicking, everyone, and he made a much more robust article. Have you guys read this one?
Huon:
I think I've read the 2016 one.
Bella:
I think I've read the 2021 one, but maybe not very closely. So this is going to go great.
Matt:
Yeah. Did you find it compelling, Huon?
Huon:
The step where you're getting the 1 in 40,000 chance of swinging an election or something, because you're trusting a Monte Carlo analysis, I'm a little bit more sceptical of that. I don't know, it feels like it's using the model in a way where I don't have good reason to believe the model is representing reality well. But yeah, I found it fairly compelling.
Matt:
Yeah. My sense is the 1 in 40,000 got walked back to 1 in a million to 1 in 10 million.
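(A rough sketch of the arithmetic behind those figures, with illustrative numbers rather than the article's own: if a single vote has a 1 in 10 million chance of deciding the election, and the better administration would redirect on the order of \$100 billion of spending towards better uses, then the expected value is

$$\frac{1}{10{,}000{,}000} \times \$100{,}000{,}000{,}000 = \$10{,}000,$$

which is how a tiny probability of mattering can still leave a vote worth tens of thousands of dollars in expectation.)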
Huon:
Great. I feel good about having chosen that as a place to nitpick.
Matt:
Yeah, I mean, it was taken up and adopted. I guess there's this interesting thing at the end, though, because I think the real nut of it comes down to: how do you know you're making the right decision? There's just so much going on. The specific example that Rob gives is the George W. Bush administration. The casual follower of politics will be like, oh, Iraq War, hundreds of thousands of dead, trillions wasted on bad intelligence for no good reason, this is terrible. And the article mentions that there's also this thing called PEPFAR, where the Bush administration launched this anti-AIDS programme in Africa that saved millions of lives, which no one ever talks about.
And so it's actually really unclear whether that would have happened in a Gore administration, and whether the trade-off between that and Iraq is a trade-off that you want to make, and how much research you would have to do to know that's possible and in the pipeline. And maybe you should also just take as evidence the fact that these elections are close in the cases where your vote is going to matter: that, oh, society is kind of split on this question. And yeah, I wonder how to deal with both the volume of research you'd have to do to feel good at all about where you're coming down, and then what evidence you take from other people voting differently than you.
Huon:
We're allowed to be pretty political, right?
Bella:
Yeah. Just as long as you say oh, this is not the position of 80,000 Hours.
Huon:
Great. This is not the position of 80,000 Hours. This is me, Huon, speaking in my personal capacity. I feel like in the upcoming US election, it is much easier than in most elections, because I feel just very good about a heuristic like "protect US democracy." And I feel like there is one party that is threatening that and one party that isn't. So I don't think you need to go into the details of utilitarian calculus. I think that maybe this applies more to other elections, where it is a little bit harder to say, because most voters aren't paying attention to the issues that actually matter most, and so the issues that actually matter most are sort of more randomly determined.
Chi:
Are you saying you're not a single issue voter for AI?
Huon:
I am saying that I'm not a single issue voter for AI.
Chi:
OK. Spicy.
Matt:
He's got diverse views. Though, in the US case, they go the same way.
Huon:
Sure.
Matt:
However, an interesting take that I'm just gonna get shot down on, but I'm gonna raise it anyway. One of the things in the article is the risk of nuclear war. Take that alone, and your vote could, in expectation, at 1 in 10 million, be worth like 25 grand, according to some analysis that Rob did, which seemed kind of reasonable. Scott Alexander has this take that Trump may be preferable from a nuclear war perspective, just because people don't want to escalate against him, because he's so unpredictable, and therefore the net effect is that he's de-escalatory.
Bella:
Yeah, I do feel confused about these kinds of considerations, where it's like, maybe you want somebody who's gonna do a bunch of crazy shit but generally won't be fucked with as much, because they're, yeah, a worse person.
Chi:
It does feel like an uncooperative race to the bottom, though.
Bella:
Yeah. And also a bit galaxy brained.
Huon:
And maybe too short-termist. Isn't there something about the relationship between the US and China having had serious damage done during Trump's presidency? And then you get into this path dependence thing, because there's tit for tat. And plausibly that's where most of the nuclear risk, at least the major nuclear risk, comes from over the coming decades. So it's actually that relationship, and trying to keep it cooperative, that matters more than, oh, was Putin less likely to invade Ukraine?
Chi:
I also think there's this... what am I saying? Oh, yeah. Because of this race-to-the-bottom thing. I guess you can exploit being the most hawkish or whatever, as long as you're the only one. But, you know, if some other country just draws another crazy state leader somewhere else, then you're like, well, now we have two crazy people. That's great. So you're really relying on other people being sane, or what you consider sane, and responding the way you want them to respond.
Bella:
Yeah. Or like, just this weighing heavily in their calculus, which it might not.
Matt:
Yeah, it's all pretty compelling stuff. Maybe turning things over to Trump's opponent. Again, we're recording on July 4th, and it's unclear whether the Democratic Party wants to put Joe Biden forward for election or still support him; he's obviously the presumptive candidate for that party. But he had this debate performance which was really terrible. And the claim, which I think is quite reasonable, is that it was revealing of his age and mental acuity: he really trailed off and lost the plot and didn't look focused and engaged for much of it. This has been a running concern among the public, and now polling is really turning against him, and there's this thought that he might be imminently replaced. And it made me think of an old LessWrong post called, I think, "Politics is far too meta," or "too meta." Do you guys know this one?
Chi:
No, I don't think I know it. No.
Matt:
OK, well, great. It's going to be a riveting discussion. So I think part of the dynamic here is... you know, when I think about myself as a voter (and again, I'll just break the policy, which maybe means we have to cut this), there was this great interview with Pennsylvania moderate independent voters or something like that. And they do a close-up on this one guy, and he says, if it's Trump versus Biden, I would vote for Biden even if he was dead. And I was like, true take. And true for me, in the sense that I don't put too much weight on just the one individual and what they do; I care a lot more about the administration that they put together and what kind of policies predictably come out of that. And I feel pretty good about Biden on that score. So I'm less concerned about this age stuff.
However, out in public, the thing that people know and talk about is: is he too old, or is he not too old? And then if he does something that suggests he's too old, now I, being someone who might have some weight on the cultural conversation about whether he ought to be replaced or not, need to do this strategic, tactical thing where I'm like, oh, well, has he shown himself to be too old such that he won't win now? And winning is the important thing. So even though I would vote for him, and I have no problem with him being old insofar as he's shown himself to be old, should I now be against Biden as the candidate? And is this just how everyone's treating it? And that's the only reason this is happening.
Do you guys see what I'm kind of driving at? It's like we're having this meta conversation about how other people would react.
Bella:
The thought is that your decisions do or perhaps should depend on how other people reason about things that are taking place. So you're like kind of replacing your opinions a bit with what you expect the opinions of others to be. Is that roughly it?
Matt:
Yes, that's a description of the reality. And the post that I reference is a criticism of this. He's like, we should all just be honest about the thing that we'd really do and not like, speculate on what other people are going to do and make our position out of that.
Chi:
I mean, OK, I know it's about the principle and not the object level, but in your particular case, isn't it that you would vote Biden over Trump, but that doesn't say anything about whether you would vote Biden over some other Democratic candidate? So you can totally both be honest about "yes, I would vote Biden" and also be anti-Biden as the candidate.
Matt:
Yeah. So there's this interesting trade off though, where criticising Biden or saying they should replace the nominee is hurting Biden and Biden may just still be the nominee. And so I'm willing to make this trade off.
Bella:
Because you think it will increase the chances that Biden gets replaced, which is good because most people care that he's old, even though Matt doesn't care that he's old.
Yeah. I mean, so I haven't read this post, and I don't even know who wrote it. But I do feel like some of the LessWrong memeplex, if we're going to use that term, does this move quite often of being like: here's this thing that happens a lot in the world, and it seems bad, so please, all of you, just act differently. Unfortunately, many of these things are the way they are because of large, intractable social trends that aren't going to change because a LessWrong poster would like them to.
And like, even if everybody who reads this post were to then change their behaviour, the broad dominant social incentives would still be what they are. And so I feel like a lot of these kind of calls to action often seem to me uncompelling because I'm like, well, that would be nice, wouldn't it? But unfortunately I still have to deal with some particular reality that is just not gonna go this way or something.
Chi:
I find that interesting. I would like to hear your take on how you think this differs from your reason for voting. Because your reason for voting was like, well, I don't really do the calculus of "will this actually lead to the best consequences compared to me doing some other thing." It's more like, it would be good if everybody voted.
Bella:
Yeah.
Chi:
And so I vote. And I'm just like, well, it would be really great if people just said what they think, and if we could not play signalling games and just, you know, try to be honest. But it seems like you take a different view; it seems like you're sometimes very sceptical of that.
Bella:
Yeah, this is a great point. And I think maybe those two things are in tension to some degree, and I haven't really considered this before. Maybe I can try and work it out. So maybe one thing that's going on with the voting for me is that even if people don't generally do what I do, the thing that I did is still good and productive, because it's still doing this Rob Wiblin argument: I've marginally increased the chance that the candidate that I want gets in. And also, it's not very costly. Whereas maybe some of the stuff that I'm obliquely referring to here, the LessWrong "don't do what people do, because what people do is bad in some way" stuff, would still be kind of swimming against the tide in some important sense, would be really costly to do, and maybe in individual cases just freaks people out and makes them have a bad time. And so that could be a difference. I mean, it could also just be that my views are pretty inconsistent and not well worked out here. Yeah.
Chi:
So then do you think you would still vote if God comes down and is like: Bella, if you go vote now, I can just guarantee you that over the course of your lifetime you'll definitely get four hours less of work done, or something? I don't know how much time it takes to vote, three hours or two hours or whatever. And I'm assuming you think that you doing your work is more valuable than you voting. I don't even know what the value of voting per hour would be, because you can't really do that comparison. Yeah.
Matt:
Well, if you're in a close district in a rich country, Rob Wiblin says it could be tens of thousands of dollars.
Chi:
That's true. That's true. That's true. OK. In Bella's particular case, although you said it was kind of close.
Bella:
Not close. Benjamin estimated a 10% chance that Labour doesn't win and this other guy does instead, which I think is decent enough. But I don't really know where he got that figure from.
Matt:
But also, the marginal Labour seat is not going to matter.
Bella:
That's true. But I think there's something else going on for me here, which is that I heard some stuff about the other person who might win, and he just seemed generally, all around, not very good. There were a lot of fairly credible corruption allegations. I also don't generally like this voting-bloc-on-ethnic-grounds thing. Anyway, I think that kind of thing can be pretty pernicious, even though I have a strong view in this particular case. Yeah, there's some extra stuff going on as well.
Huon:
In the worlds where this marginal seat has flipped, that is some evidence that the wave in general has been less... doesn't it?
Matt:
Overstated. Yeah, yeah. It depends on where it falls on the continuum of like contested seats.
Huon:
But like the only thing that we're stipulating is that we're in this 10% world where this seat has flipped. I think that should make you think that the election in general is tighter than we are expecting it to be.
Chi:
To be clear, just because I'm the resident ECL person, you might assume, oh, she's probably all into this evidential thinking here, which, you know, I am. But I'm actually not convinced that these kinds of considerations matter in the voting context, just in case the listener might assume otherwise. I feel like I'm going to be forced to go into the object level here. I'm OK with that, and I'm also happy not to, but I just wanted to put it out there: I'm not sure that these galaxy-brained takes work for voting. But anyway.
Bella:
I don't think it’s galaxy brained in the voting case. Yeah, I think that's mainly what's going on in my view where I'm like --
Chi:
You just think that you observing yourself going to vote actually makes you much more --
Bella:
No, I think the non-galaxy-brained version of it (well, I put some credence on the galaxy-brained version now) is just this: I think there are a lot of people like me who care about stuff, but who disdain very normie forms of political engagement, of which voting is the prime example. For example, because none of the candidates actually represent their views: maybe they're insufficiently radical, maybe they don't care about the issues closest to their heart, maybe they say stuff that really irritates you. And so you're just like, ugh, forget normie politics. And I'm like, if all of those people forget normie politics, we will have a meaningfully worse world.
It feels quite important to me that I and people like me don't do that. And whether that's via this evidential route, or whether it's just that now I'm going to talk about it (I've been talking about it to people today, and I've talked to you guys about it) and that's a more straightforward causal route, that kind of thing, I think it makes sense to me.
Chi:
I mean, I obviously buy this. I don't know how the numbers will work out, but the second argument is obviously straightforward and makes sense: you can just walk around and tell other people that you voted, and so on and so forth. And yeah, maybe for those people, their counterfactual wouldn't be more valuable anyway. I mean, this is a total digression, but in general, I feel like often when people ask, hey, is this or that worth it, is this worth your time or whatever, it feels pretty fake to me. There are some people, I know some people, who are extremely dedicated.
And in fact, what they do all day is: they work, they sleep, and they maximally rest or relax so that they can work again. Right? Those people exist. But most people are not those people. And then when we ask the question, hey, you could do this good thing, like pick up rubbish on the street or something like that, and then make this calculus of, oh, is that worth it compared to you working that time? It's like, bro, there's no trade-off here. This is completely wrong. It's just: either you do this thing or you don't.
Bella:
So on some level, I completely agree with you on this. I'm like, yes, in fact, people are not maximising their lives for the total amount of work that they do. Unless you're Kit Harris or whatever.
Chi:
Oh, dear.
Bella:
You know, but, sorry, unless you're, you know, other people. But for most of us, yeah, you know, I probably spent a collective eight hours this week playing the New York Times puzzle games, which are completely pointless, because I like them and they put me into a trance. But I also think that there's some finite resource that you have, which is caring or trying. And even though the time might not actually replace some work that I would do, I did put some mental effort into, OK, where's my polling station, I'm gonna go. And it took me 15 minutes total. But there was some mental effort there that I think might be finite.
I'm a little bit confused about whether it's finite or whether... Joey Savoie has this really nice post about how it's not, right? Like, there are some things you can do that seem to increase your capacity to do more altruism and help and improve the world more than if you didn't do them, even though they seem kind of worky or something. But I think I maybe buy that, on average, there's a finite resource that I'm depleting, one that isn't exactly time but is still depleted by things like this.
Chi:
Yeah, I think I agree. I'm fully on board with the general take that you should have the general policy of not sweating the individual thing so much. Fun fact: before I was in EA, I was an environmentalist, and I hated grocery shopping, because there were three different stores. Back then... I mean, now I just think organic food is often just worse, like, just worse for the environment and all that stuff, or I don't believe in this, I don't believe in that. So don't take me as endorsing any of these. But back then I was just like, I want it to be fair trade, I want it to be not packaged,
I want it to be organic, and I still want to be able to afford it. And ideally it's seasonal or whatever. And then I would run around between these three different supermarkets just to compare the different products: OK, so this one is organic, but it's packaged, and this one is not packaged, but it's not fair trade, or it's twice as expensive and I don't have that much money. It's absolutely crazy. I mean, everybody in this room probably thinks that it's very obviously taking it too far. Yeah, that was not fun.
Bella:
I mean, yeah. Can I stand up for past Chi a little bit?
Chi:
Sure.
Bella:
I think that what you were doing was probably not a good altruistic bang for the buck. I think you were not getting a good return on the effort you were putting in to try and get this right. But I also think you were doing something that is, in some sense, noble or virtuous, which I kind of endorse: taking issues that you know about and that seem salient to you, trying to do best by the people affected by them, caring, and putting that into action. I think that's pretty cool and kind of worth celebrating, even though I don't think I would go and do the things that you were doing then.
Chi:
Thank you. It did make me wonder, just now while you were saying that and while I was reflecting, what would be more effective. Haha, this is also the perfect... if we're done with this topic, this is the perfect segue into my next game. The question is: would it be better if everybody in the world was suddenly more willing to actually put effort in and do things for what they think is good, or be more altruistic or self-sacrificing or whatever, versus if I could make everybody, I don't know, just adopt some methods or tools, like, for example, actually researching a bit whether the things they think are good are good?
I didn't do that, and that was sad: I advocated for a lot of things when I was young that, even by my values back then, were just bad. Or, I don't know what other tools there are. I just don't know which one is actually better: do I actually want more people who are noble and virtuous in that particular way, or do I want more people who have done the thinking and then maybe don't act on their convictions?
Huon:
So is it about whether we want more altruism on the margin or more effectiveness on the margin?
Chi:
Oh no. I mean, maybe. And I think if you go to the extremes, the answer is maybe easy, because if you dial the altruism all the way up, you'll get lots of extremists and weird people doing crazy terrorism stuff and completely crazy things. But, you know, on the margin: would we like people to be more caring, or to act more on their beliefs, or would we like people to have better beliefs?
Bella:
I feel pretty firm that I want people to be more caring and to act more on their beliefs. And maybe this is just me wanting to be consistent with the thing I said earlier, but I'm not sure; I think I just actually think this is right. Because I think that world would be a massive improvement on the current one, if only because of the small, casual, everyday cruelties that actually add up to quite a lot, in my opinion. And I think eliminating or even reducing those would just be really cool.
Chi:
I mean, getting rid of factory farming would obviously also be like, yeah, pretty big.
Bella:
I don't know if this would achieve that, because I think a lot of people's intuitive values are like, yeah, this seems fine. But I also think it's more like, this seems fine if we treat them well. And the step people don't take is asking: is the food that I eat actually from animals who've been treated well? Or at least they don't do that very consistently.
Chi:
Yeah.
Huon:
I think I'm on board with marginally more altruism. I guess more effectiveness probably includes a bunch of people self-enriching in ways that actually make the world worse. Whereas with more altruism... there's a bunch of really rich people who are not doing a tonne.
Matt:
I mean, the bog-standard conventional example is NIMBYs in developed countries, who have increased the value of their property and cost their economies dearly. If they cared a little bit, they'd not do that. Tactically it's smart from their selfish perspective; they've made so much money.
Bella:
Yeah, but sorry, that's not the argument they give. Right. They're not like, oh, NIMBYism is good because it will increase my wealth. They're like, NIMBYism is good because it preserves the character of the area, which is a public good that I don't want to be degraded.
Chi:
I think people do say that. Sorry, I literally know people who are like, oh, you should be against this because our property values will go down. I think this is very explicit in people's minds.
Bella:
OK.
Chi:
I mean, yeah, they might not say it. Maybe it's not what you lobby with.
Bella:
Yeah.
Chi:
It's what you lobby your neighbours with. You tell your neighbours to be NIMBYs because of the value of their property. Then you tell the politicians: look, it's about the neighbourhood.
Bella:
Do I know any NIMBYs?
Matt:
I know lots.
Bella:
I don't even think my parents are NIMBYs. Oh no, maybe my mom is.
Chi:
My parents are really not NIMBYs.
Matt:
Importantly, I think the thrust of it is just a handful of activists in little areas, but they have a lot of tacit support, where people will nod along and not contest them.
Chi:
Yeah, yeah. I realised that maybe the whole effectiveness versus altruism thing, as we called it, better beliefs versus acting more on your values, maybe it's kind of hard to keep them apart as well. Because I was going to say something in defence of changing beliefs, and then I was like, actually, I don't really know if this is changing beliefs or making people's values better or whatever.
Because I was actually thinking, when you said the thing about how if people were more effective maybe they would be doing much worse things, because they would do things that are self-enriching: I think actually if people were more clear-headed about what actually makes them happy, they would end up doing better things. Or at least commonsensically better things. I think it might be bad for economic growth and innovation, but yeah. I don't know.
Hot take: I think continental Europeans just have it more figured out. Or at least, whenever I go back to Germany, I just feel like, wow, people have figured out how to live life so much more.
Bella:
Yeah.
Chi:
It's also one of the countries where people work the least hours every week and so on and so forth, and the German economy seems to be doing fine somehow. But it's probably not great for innovation and growth necessarily, because if you go back to Germany, nobody there has ambition. I sometimes joke that there are zero people in Germany who have ambition, because everybody who does has moved out of the country. But yeah, I don't know. When I go back, and it might just be because it's where I'm from and I feel comfy and so on, it's just so nice. Everything feels kind of slowed down. It's kind of chill.
I don't know, maybe it's also just because sometimes when I visit, not every time, I go to Berlin. And in Berlin, really, I was there once and arrived at like 9am, and the train station just looked like it was 4am at night. Or like 6am. Sorry, 5am, I guess, since in London people work at 6, maybe even at 5. So let's stick with 4. It was deserted. It was crazy. The only people walking around were people coming back from clubs and the odd person going to work. It was so weird. But yeah, maybe it was also 8. Maybe I'm overplaying it, but really, it was a very reasonable time.
Matt:
I mean, famously there's this graph of all these countries in Western Europe and North America, showing the effect on their economies of losing their capital city. And basically every country has a significant negative effect, ranging from something like 5% to 15%. In Germany, on a per capita basis, the economy improves when you take Berlin out of the picture.
Bella:
Wait, what?
Chi:
Yeah, yeah, I mean that seems right.
Bella:
What's happening in Berlin?
Matt:
Just a bunch of lazy nightclubbing artists.
Bella:
It's lazy nightclubbing artists or is it poverty?
Chi:
I don't know, because there's obviously a thing where, I mean, people might not know, but I come from a kind of middle class, upper middle class family. So that's mostly the environment I'm in, so I wouldn't really know. And I don't look at the stats so much, but from my personal perception, it's lazy clubbing artists for sure. Lazy clubbing artists.
But there's not any particular thing you think of when you think of Berlin. Like, I don't even know. If you want to do finance, you go to Frankfurt. I moved out of Germany a while ago, so I might get all my stereotypes wrong. The car industry is obviously a thing in Germany, but you don't go to Berlin for the car industry. For startups, maybe you go to Berlin. Maybe you also just go to Munich or whatever.
Engineering, you don't really go to Berlin for, I think. They have good doctors. I think another thing is also that lots of other countries have a lot of their population concentrated in their capitals. So for example, in the UK, if you have so much of your population in London, and then London is not doing well economically, that's pretty bad. Kind of sucks. But in Germany... I think Germany has around 80 million people, and Berlin has maybe two or three million of those. So it's not... I don't know, it's just another city, kind of.
Matt: Did we ruin the segue to your game?
Chi: Oh, yeah, my segue. So segue is the next game is This or That.
Matt: Is it? Is that Would You Rather?
Chi:
Oh, fuck. Oh. Oh. Am I allowed to curse? Yes, you're right. We already did This or That. This one is Would You Rather. So we already had our first question with Berlin.
Matt:
Wait. What was the segue gonna be?
Huon:
Would you rather about effectiveness or altruism.
Chi:
Yeah.
Huon:
Which is a beautiful segue that was left by the wayside to rust.
Matt:
Well, we had to shit on Berlin.
Bella:
Like the Berlin economy.
Chi:
Look, I love Berlin. Here is one reason why I love Berlin, which also maybe indicates something: you can go to any coffee shop, buy one coffee, and then sit there for the rest of the day and do work or whatever, and you don't even feel bad about it. I mean, I do feel bad about it, so I buy more, but nobody's making me feel bad about it.
Bella:
Isn't that the case here?
Chi:
What? No, I definitely feel like I have to, I can't buy one coffee and sit somewhere for five hours.
Bella:
Who is exerting this pressure on you?
Matt:
The hustle and bustle, generally.
Bella:
What does that mean?
Matt:
People are walking fast around you. You're like, oh, God, someone wants a seat. You know?
Chi:
OK. I feel... OK. The thing is, I already feel kind of bad about it internally, because I'm like, oh, you know, maybe they don't want this or something. The reason why I don't feel bad about it in Berlin is because there's an entire row of people sitting next to me doing the same thing. And I'm just like, oh dear. Well, if they can do it, then I'm allowed to do it.
OK. Yeah. Anyway. OK, let's do it. Would you rather turn back time to go and remove the Nick Bostrom poem shitting on Oxford that he used to have on the about page of the Future of Humanity Institute, which, by the way, used to be an institute at the University of Oxford? Or would you go back and remove Holden Karnofsky astroturfing GiveWell, pretending to be other people to say how great GiveWell and their articles are?
Matt:
Do I lose the benefits of Holden's astroturfing?
Bella:
Wait, what are the benefits of Holden's astroturfing?
Matt:
Maybe it helped GiveWell get its early kind of donors.
Bella:
Surely not. Surely this was pretty trivial. I think that's the reason why it was such a bad idea. Is that like, it was dishonest and it didn't even really do much.
Matt:
Well, Bostrom’s thing clearly did nothing.
Chi:
I mean, it didn't have a positive effect. But what do you mean by "clearly did nothing"? OK, it was a poem talking about how shit Oxford is, and how FHI is much better, or like the only good thing about it. And then, sorry, not stipulating a causal relationship here, FHI was at some point shut down. Not exactly because they wanted to. But we also know that lots of other things happened, and we don't know at all whether this poem was ever read by anybody.
Bella:
I think there were many more factors in the Oxford-FHI relationship breakdown than just that Bostrom was shitting on them on the website. Like, I have not heard very much about this, but what I have heard suggests that it was pretty overdetermined that it would go badly.
Matt:
I think Holden's reputation matters more. Academics can be weirdos and dumbasses and do a bunch of weird dumb things and pretty much get a pass. And people will still read their books and papers and stuff. But I think Holden is much more engaged in the front lines of the world and so he'd want a stronger reputation. I agree that it's all kind of low stakes.
Bella:
Galaxy brained: What if Holden doing the astroturfing was good because he then apologised for it and was a public example of a person taking accountability for their mistakes?
Matt:
Not implausible.
Bella:
Setting the kind of culture that we have these days where people, when they shut down their org, are like, here's all the ways that I might have caused my org to be bad.
Matt: Listeners, I apologise for all of these episodes in advance.
Chi: Do you know, did Holden apologise for the astroturfing because he was called out? Like, did people catch him or did he just admit to it?
Huon: I think he got caught.
Bella: Yeah, I think he got caught.
Huon: I mean, yeah, this was before my time, but.
Chi:
I feel like that's not much of a... Like, OK, I didn't read his apology. Maybe it was really good, or maybe there were extenuating circumstances or something. Or maybe he was caught out but it wasn't really widely known, and he made it widely known. But I'm not sure if saying sorry online after people call you out is setting a good example.
Bella:
So I kind of think it is, because online discourse is really bad, and it's somewhat common for people to just double down and then throw accusations at their enemies and just be a general nuisance.
Huon:
I feel like the thing in this vein that I'm more sympathetic to is not the setting-a-good-example thing, and more that Holden learned an important lesson and actually realised what goes wrong if you're too much of a naive utilitarian about things, and that you should be more moderate. And I feel like that is very much the spirit of Open Philanthropy today, at least as I understand it. And, I mean, my guess is that the impact of this particular experience on that is fairly minimal. But it does seem good to me.
Chi:
So I guess the question then is also, when we go back and remove the astroturfing, I didn't specify this: do we just cause him not to want to do it for whatever reason, or does he still do it? He kind of does do it, but it just disappears, it just doesn't work somehow. Yeah, that makes sense. And I guess that does make a difference. I mean, if I could make him not the kind of person to act like that, even back then, probably I take that one. Yeah, I don't know how much it actually... Anyway, people's verdicts. I am the game show host master, so I can demand answers without giving my own.
Huon:
Fix Holden.
Bella:
Yeah, me too.
Chi:
All fixing Holden?
Matt:
Yeah.
Chi:
OK. All right, let's go. Thank you for listening. That was it with Would You Rather this week's episode.
Matt:
Would you rather? All right, guys, you ready for this? Would you rather spend the better part of two days prepping for an interview on BBC Radio 4 to talk about art or spend the better part of two days fighting with people about the Manifest drama on the EA Forum?
Chi: The first one.
Bella: Yeah, the first one.
Huon: Yeah.
Matt: Huon, did you follow the Manifest drama?
Huon: I did not.
Matt: Well done.
Huon: I actually only learned about it in the prep doc for this podcast.
Bella: Whoa. You didn't know it happened?
Huon: I didn't know it happened and I still don't really know what it is. So I can be the audience insert who needs an explanation.
Chi: Can we do like a slow clap for Huon?
Bella: OK.
Matt: Pathetic slow clap for everything.
Huon: I hated this both times that we recorded it.
Bella: My heart was really not in it. I'm sorry.
Chi: For people who notice a difference in audio or video. All of our cameras just died.
Matt: Well, two out of three. And mine and Bella's close up camera is now the main feed, which is all you'll get from here on out, which is good news because that means the episode is almost over.
Bella: OK, so we were going to talk about the fact that you went on the BBC to talk about art and why we should defund all art and that you have a hostile attitude to... What did she say? She was like, why do you degrade...
Huon:
I thought you were gonna say a general disdain for humanity.
Bella:
That was it. General disdain for humanity.
Matt:
General disdain for humanity was the term that people used to describe my views on art.
Chi: Because you cannot see the... It's too late, guys. I don't know what I want to say.
Matt:
So I will link to that interview for those who are really curious. But anyway, the idea is: I went on Radio 4 and had a nine-minute segment where the debate of the evening was, basically, should we have big public subsidies for art? There were roughly four people put into four positions. One was a staunch libertarian who said no way, that's not essential for the administration of the state. Another was an art professor talking about how art is so good for the soul. I think another was a moral philosophy professor who talked about how art might have intrinsic value. And then I was kind of the EA, weirdly thrust into the position of saying, well, funding art might have opportunity costs.
And, you know, with my 80,000 Hours hat on, I was taking the tack that it's especially important that very talented, and maybe especially meaning-seeking, types of people might be exactly the kind of people whom big government art subsidies or programmes lead into art, whereas they could be dealing with the biggest problems in the world. And the responses were in line with what you guys kind of put to other people.
Bella: Yeah, I think it was just so wild how you were being forced to justify your position in terms of, you know, what are the moral costs of the thing that you're advocating? What are the downsides? What are the ways in which it goes against human nature? That was brought up a couple of times. But the presumptive status of the let's-fund-art view was just: yes, this is obviously morally good and has no downsides or costs. It's just crazy. Sorry, maybe that's an exaggeration. It felt unfair to me, the framing or status of your argumentative position here.
Matt: I certainly felt that. I think one interesting thing that maybe kind of put this into focus to a certain degree. Also, hello, friends outside.
Huon: Should I close the window?
Bella: Yeah, it's getting hotter.
Matt: Yeah. I think one thing that brought this into relief was at the end of the episode -- it didn't help very much. It's OK, we'll just see how it sounds, and some audio feed will maybe be OK -- at the end of the episode, the main panellists who were having the debate recapped what they had heard from the other witnesses. And one of the panellists framed my position in a way that was really strange. He was like, he's pointing to these trade-offs, but those aren't the real trade-offs. Those aren't the real decisions that we make. Nobody's asking to take away from foreign aid budgets to give to art. And I felt like that was a good segue into a new segment that we have: Bad Billionaires.
So on an episode with Maria, which is the last one to come up before this one, I think, oh wait, it might have been two episodes ago.
Chi:
I actually thought that parts of the recap were kind of good. But I maybe have to relisten to it. Just wanted to throw it out there. OK, Bad Billionaires.
Matt: Well, this really interesting piece of news broke, I think two days after I was on the BBC: that Warren Buffett was no longer going to give the vast majority of his $100-billion-plus wealth to the Bill and Melinda Gates Foundation, which in large part works on neglected tropical diseases and things that we think are a big deal and save lots of lives, and was instead going to give it to a foundation run by his three children, who all run foundations of their own now, aimed at various things. And I thought it was fun to just look at what those were about.
Bella:
Yeah, I'm excited.
Matt:
And one of them specifically was just kind of intensely local: let's build the community in all the ways that we can think to build the community. So it was a hodgepodge of different things, but art and culture was a big part of that. Art and culture in Nebraska, specifically, is an important thing that we need to advance. And, I don't know, is that competitive with the kind of vaccine and disease work happening in the developing world that the Gates Foundation does? We can leave that as an exercise to the reader.
Chi: Nebraska! Nebraska! Nebraska!
Matt: Well, it's art and Nebraska.
Bella: I think it's that it's late and so I'm getting argumentative and a little bit hot-takey. But I am a little bit like, how are people not more mad about this? I get that, yeah, people are generally mad about this. And I think the headline was something like "Warren Buffett to donate $100 billion to nothing in particular," which was quite funny. But this just seems actually unjustifiable. I think basically the problem we have in the art debate case is that people don't engage with trade-offs. They're like, oh, well, funding art is good, and I don't have to engage with the question of whether it's better than alternatives, because nobody's actually asking whether we should shift foreign aid budgets to art.
Here, that is literally what's happening. And why are people not more angry? This just seems unjustifiable.
Huon: Yeah, I mean, I guess we maybe think of charity as being pretty discretionary in general, and so following through on the promise wasn't something Warren Buffett owed to those causes.
Bella: I mean, maybe he made no legally binding commitment here. Clearly he didn't. But it does seem like, on a kind of obligation framing: yeah, charity is discretionary and private and you can do what you want with it. But at some point you kind of made a commitment to the prospective beneficiaries of the Bill and Melinda Gates Foundation that you're now reneging on. You were going to provide them some kind of care, respite, alleviation of the unceasing struggles of poverty and whatever, and now you're just not going to do that. And doesn't that seem kind of crappy, even from this non-utilitarian framing?
Chi: I agree with you that it seems crappy from that perspective. But it also seems a bit... at least for me it would feel... well, I actually care about commitments, so maybe not. But I still think the thing that's actually bad, or really bad, isn't necessarily that this person broke a commitment, which I agree is bad. I would guess that the thing you're more upset about is that this money was going to do a lot of good, and now it's arguably going to something else. And I think one issue here is that people generally just don't...
Yeah, I guess it's like what Huon said. Some people do go pretty hard against billionaires, but that's mostly because they're like, oh, too much power and blah, blah. But nobody is like, what the fuck, why are you not using your money for good things? I feel like there's a standard in society that we're all meant to not be above some certain threshold of everyday assholeness, but there's no expectation of actually being a good person.
Bella: But we have standards to compare them to, right? There are many very wealthy people who are trying their best to actually give to causes that will do a lot of good, and are, you know, talking publicly about that. So why are we not holding people to this standard? Why is this not a thing that people talk about? I don't know. Anyway, sorry, I'm mostly just ranting.
Chi:
No, I mean, I mostly agree. I also would like to hold people to a higher standard. I'm just saying, descriptively, society in fact just doesn't seem to hold people to that standard. It's like, as soon as you do anything for other people: good person. And I was like, well...
Bella:
I think that's not necessarily true, because of this Copenhagen interpretation of ethics thing, where actually sometimes when you do some good, people get more mad at you than if you'd done nothing at all.
Matt:
Indeed, what are we doing right now?
Bella:
That's true. That's true.
Matt:
I think there's something to it, though, some justification for this. I think one thing you can say is, for some people, there was no chance they were ever going to do something important or really understand the arguments or the nature of the decision they were making. It's much harder to blame people who don't even have the faintest idea what's going on or what decisions are before them. Then maybe you can say they're a bit reckless for not realising they have great privilege and should think, even at a meta level, about what to do with it.
But in the case of Warren Buffett, it's like: you obviously understood exactly what the Gates Foundation was doing, exactly what their model was, and exactly how many lives it saved, and that that was valuable. And now you're doing something which... in fact, I think in earlier interviews you can find Buffett saying, oh, my kids are going to be fine, I think it's really silly that all the wealthy people leave all their money to their kids. And now, all right, you're leaving your money to your kids to run foundations. But how different is this, really?
Bella: Yeah, it's not. It's to let them do whatever, you know, pet project they have, basically.
Chi: Yeah.
Matt: And so it's like, I should have known better kind of situation.
Chi: OK, I know, I agree.
Bella: I have run to your position.
Chi: OK, OK, wait. I do think there's something where, you know, if every time somebody wants to donate lots of money, lots of people start shouting at them, nobody will want to donate lots of money. And, I mean, yeah. Anyway, also something about how people like to be really upset about what Open Philanthropy spends their money on. Open Philanthropy's money is, of course, from Dustin Moskovitz and Cari Tuna. And yeah, people are appreciative, but people are also kind of shouting at them. And I'm just like, yeah, maybe you just don't want to be shouted at. And for some reason... yeah, I think you were right with the Copenhagen stuff. People don't shout at you if you just don't do it. If you're just openly a selfish prick, that's just, oh yeah, I guess that's cool. Well, you're not even openly a selfish prick. You're just not charitable, and that's fine.
A thing that I found really interesting, also about us, that I saw online and wanted to get your takes on: somebody made the point that we seem to think this thing of billionaires giving their money to their children or whatever is, I mean, I know people have lots of takes on inheritance and so on and so forth, but broadly fine-ish or something, at least not from a utilitarian perspective and so on and so forth. But most people would think that monarchies, or anything where you just get a lot of power because you're somebody's child...
Although, I think for a lot of monarchies that's actually not how you become the monarch. Anyway, it doesn't really matter. But yeah, most people think that's kind of fucked up, or kind of weird. And arguably, if you're Warren Buffett, you actually just have a lot more power than the leader of whatever country. But still, we would think it's not good if countries had this hereditary passing of power. There is a difference, because countries can use force in a way that's generally seen as legitimate, which billionaires, I guess, can't. But they have a lot of other power. Just having $130 billion or something is so much money. You can do a lot of stuff with it.
Matt:
I should have mentioned the other options. So I mentioned art because that was one part of the purview of one of Warren's three children. The other children... I mean, it's funny that I call them children, because they're in their 60s and 70s. Descendants. His other heirs, as they've become. I think one of them works on earth-centred ways of being, and healing is the main theme of their charity, which I think has some kind of cultural and artistic resonance to it. And the other person works on border patrolling and countering human trafficking with what appears to be force. There are lots of military people on that website, people under arms photographed on the front page.
So yeah, I'll link to the article that kind of explains the transaction here in the show notes.
But I don't know, I think it speaks to this broader point about what you sex up in the media. Either from a, I don't know, maybe more familiar to our listeners, kind of leftist perspective, art and stuff is great; or from the right-wing perspective, border patrol stuff. You know, these things matter, and decisions get made based on what you sex up. So trade-offs really are kind of everywhere.
Bella:
Yeah, they are everywhere.
Chi:
Well, if we were giving fewer government subsidies to art, do you think there would be less or more philanthropy to the arts, though?
Matt:
Oh, oh, great question. This is interesting. Could government funding make the field less sexy?
Bella:
Oh, I don't know that that's what Chi was saying. I think she was saying maybe government subsidies are crowding out private donations, such that if you removed the government subsidies, the arts would get the same amount of funding, and something else random, like whatever the average philanthropic dollar funds, would get less.
Chi:
Less money to give fancy buildings your name.
Bella:
Yeah, maybe that's the counterfactual. Or maybe it's like local youth engagement, something. I don't know. That's like my caricature of American philanthropy.
Matt:
So I kind of suspect that the fight over how much government subsidy for the arts there is, and whether the arts are really getting short shrift, does just call more attention to it and make it feel more salient and more urgent. So what you'd have to do is provide such big subsidies that there's no fight and no question, and we all feel like it's adequate. Which, you know, different people have different views of adequacy. But I think it would probably take quite a lot. Yeah.
Matt: Chi, is ECL adequately funded?
Chi: No, why? I mean, currently...
Matt: Well, this was your chance to appeal to our audience.
Chi: Sorry, sorry. Dear listeners: I know I didn't do a great job explaining what I'm working on, like, at all. In fact, I kind of intentionally didn't explain; I just started talking about it in a very high-level way. But if you want to, you can read my stuff. It's online on LessWrong and the EA Forum; I'm Chi. You can read about ECL, and if you want to fund some work in the area, you can reach out to me and I can connect you with all five people in the world who consider working on this topic.
And yeah, funding currently is OK. But by "currently", I just mean for the next half year or so, or the next year; afterwards it's looking a bit dire. There were multiple large funding sources that were making noises about liking the area and then pulled out, at least one of them for reasons other than no longer being convinced. The other one, to be fair, was more a case of not being convinced, which people can take as some evidence or whatever.
So if you want to jump in, especially since I think institutional donors have some problem with funding this kind of stuff because it's kind of weird: if you want to help out a friend that you've never met before, and her four colleagues in the field, let me know.
Matt: ECL: it's the latest thing, folks. Our guest tonight is Chi Nguyen. Take care, all.
Chi: Bye. Am I not getting a real clap?