80,000 Hours Podcast

Effective altruism, in a slogan, aims to 'do the most good.' Utilitarianism, in a slogan, says we should act to 'produce the greatest good for the greatest number.' It's clear enough why utilitarians should be interested in the project of effective altruism. But what about the many people who reject utilitarianism?

Today's guest, Andreas Mogensen — senior research fellow at Oxford University's Global Priorities Institute — rejects utilitarianism, but as he explains, this does little to dampen his enthusiasm for the project of effective altruism.

Links to learn more, summary and full transcript.

Andreas leans towards 'deontological' or rule-based theories of ethics, rather than 'consequentialist' theories like utilitarianism which look exclusively at the effects of a person's actions.

Like most people involved in effective altruism, he parts ways with utilitarianism in rejecting its maximal level of demandingness, the idea that the ends justify the means, and the notion that the only moral reason for action is to benefit everyone in the world considered impartially.

However, Andreas believes any plausible theory of morality must give some weight to the harms and benefits we provide to other people. If we can improve a stranger's wellbeing enormously at negligible cost to ourselves and without violating any other moral prohibition, that must be at minimum a praiseworthy thing to do.

In a world as full of preventable suffering as our own, this simple 'principle of beneficence' is probably the only premise one needs to grant for the effective altruist project of identifying the most impactful ways to help others to be of great moral interest and importance.

As an illustrative example Andreas refers to the Giving What We Can pledge to donate 10% of one's income to the most impactful charities available, a pledge he took in 2009. Many effective altruism enthusiasts have taken such a pledge, while others spend their careers trying to figure out the most cost-effective places pledgers can give, where they'll get the biggest 'bang for buck'.

For someone living in a world as unequal as our own, this pledge at a very minimum gives an upper-middle class person in a rich country the chance to transfer money to someone living on about 1% as much as they do. The benefit an extremely poor recipient receives from the money is likely far more than the donor could get spending it on themselves.

What arguments could a non-utilitarian moral theory mount against such giving?

Many approaches to morality will say it's permissible not to give away 10% of your income to help others as effectively as possible. But since almost all of them will regard it as praiseworthy to benefit others without giving up something else of equivalent moral value, Andreas argues they should nonetheless be enthusiastic about effective altruism as an intellectual and practical project.

In this conversation, Andreas and Rob discuss how robust the above line of argument is, and also cover:

• Should we treat thought experiments that feature very large numbers with great suspicion?
• If the only way to save someone's life were to interrupt the broadcast of the World Cup final to the world, would it be permissible to let them die instead?
• What might a virtue ethicist regard as 'doing the most good'?
• If a deontological theory of morality parted ways with common effective altruist practices, where would the disagreement most likely arise?
• If we can explain how we came to hold a view on a moral issue by referring to evolutionary selective pressures, should we disbelieve that view?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Show Notes


Chapters:
  • Rob’s intro (00:00:00)
  • The interview begins (00:01:36)
  • Deontology and effective altruism (00:04:59)
  • Giving What We Can (00:28:56)
  • Longtermism without consequentialism (00:38:01)
  • Further differences between deontologists and consequentialists (00:44:13)
  • Virtue ethics and effective altruism (01:08:15)
  • Is Andreas really a deontologist? (01:13:26)
  • Large number scepticism (01:21:11)
  • Evolutionary debunking arguments (01:58:48)
  • How Andreas’s views have changed (02:12:18)
  • Derek Parfit’s influence on Andreas (02:17:27)


What is 80,000 Hours Podcast?

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin and Luisa Rodriguez.

Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and why you should wait for the commercial break to save my life. I’m Rob Wiblin, Head of Research at 80,000 Hours.

If there’s one group of people you subscribers consistently enjoy listening to it’s philosophers. I find philosophers, or at least the ones I hang out with, tend to be highly reflective, clear reasoners, and good communicators, even by the standards of academics.

Andreas Mogensen is one of the sharpest people I know and brings all of those philosophical virtues in spades.

We discuss whether ethical theories other than consequentialism should embrace or reject effective altruism, and whether we should be skeptical of philosophical thought experiments that involve incredibly large numbers, among other topics.

Before we get to that though I really wanted to encourage you to check out an interview I did which was just released on our other show, called 80k After Hours.

I speak with Andrés Jiménez Zorrilla who co-founded an innovative new company called the Shrimp Welfare Project. Humans farm hundreds of billions of shrimp or prawns each year, and including those caught at sea we kill them by the tens of trillions. I knew next to nothing about shrimp let alone shrimp farming, and so I was fascinated to download a whole lot of empirical information about both.

The evidence that prawns are conscious and can suffer is pretty good, and Andrés’ team think they’ve found a way to improve the wellbeing of farmed shrimp that won’t cost the shrimp farming industry much if anything, and so might actually be picked up.

I’m just really happy with how the interview turned out and wouldn’t want you to miss it, so go have a listen on our other feed 80k After Hours.

Alright, with that out of the way, I bring you Andreas Mogensen.

Rob Wiblin: Today I’m speaking with Andreas Mogensen. Andreas is a senior research fellow at Oxford University’s Global Priorities Institute. A philosopher by training, Andreas studied at Cambridge before doing his PhD at Oxford, and then winning an Examination Fellowship at Oxford’s All Souls College, one of the most competitive postgraduate placements in the whole world.

Rob Wiblin: His current research interests are primarily normative and applied ethics, but he has previously published on metaethics and moral epistemology — in particular, so-called “evolutionary debunking” arguments. Some of his many publications include “Against large number scepticism,” “Giving isn’t demanding,” “Do evolutionary debunking arguments rest on a mistake about evolutionary explanations?,” “Moral demands and the far future,” and “Why the Asymmetry does not support antinatalism.”

Rob Wiblin: He’s known among my friends for having an incredibly broad general knowledge, and was among the first people to take the Giving What We Can pledge to give 10% of his income to the best charities that he could find, all the way back in 2010. Thanks for coming on the podcast, Andreas.

Andreas Mogensen: Thanks. I’m very happy to be here.

Rob Wiblin: I hope we’ll get to talk about deontological moral theories — but first, what are you working on at the moment, and why do you think it’s important?

Andreas Mogensen: The most recent research project I’ve been working on is to do with the so-called hinge of history hypothesis. As I’m sure listeners to the show might well know, the hinge of history hypothesis asserts that we are living at a uniquely important moment in human history. The practical importance of this relates to this question about optimal timing. So if we think the current time is unexceptional and maybe there’ll be some much more important events that occur much later in human history, you might think the best thing for us is to be patient, invest our resources, and wait for this decisive moment. But if the current time is uniquely important, then maybe we should be burning through all our resources in order to influence what happens now at this decisive point in time.

Andreas Mogensen: Will MacAskill published a paper relatively recently in this volume Ethics and Existence arguing against the hinge of history hypothesis. One thing I’ve been working on is a reply to Will’s paper. That reply is not especially constructive. It just explains why I’m not satisfied with Will’s arguments. The thing I’ve been sort of trying to think about recently is whether I’ve got something more positive to say about this issue, rather than just being critical of Will’s arguments.

Rob Wiblin: How’s that going?

Andreas Mogensen: I haven’t made that much progress.

Rob Wiblin: It’s a tricky question.

Andreas Mogensen: Yeah. I think one worry I have is that the debate as it currently exists is a little bit sort of conceptually muddled. And I think these ideas — like what it means for a time or a person to be important, or the distinction between direct expenditure and investment — are quite unclear. So the first thing I’ve been trying to work on is just seeing if a bit of old-school philosophical analysis can maybe help us get a better conceptual handle on the best way of framing these issues. But I haven’t made much headway on that yet.

Rob Wiblin: Yeah. I did enjoy that response to Will’s piece. Is that actually up on the web at the moment, or is it still in writing?

Andreas Mogensen: It’s still in writing, yeah. But hopefully it should go up soon.

Rob Wiblin: Yeah. I did enjoy that one, but it definitely was in the business of explaining why Will is confused rather than staking out any affirmative view. So I think it’s great that you’re going to make an attempt to put your own view forward, so that then Will can potentially try to pull your own views down, and it’s equal on both sides.

Andreas Mogensen: It would be an honour to have my own views pulled apart with the same scrutiny.

Rob Wiblin: That same methodical rigour that you brought to Will, yeah.

Rob Wiblin: OK, pushing on. The first major topic of today that we want to talk about is deontology and its relationship to effective altruism. Some listeners will know that something that’s a little bit unusual about you as an academic and a global priorities researcher is that, of course, you’re doing a lot of work on themes and topics that are super relevant to effective altruism and longtermism as schools of thought. But unlike many people, or possibly the majority of people who are doing that, you lean towards endorsing and researching deontological approaches to ethics rather than consequentialist ones, which certainly are very overrepresented among people who work on longtermism and effective altruism.

Rob Wiblin: Now, my personal view is that the relationship between ethical theories and the notion that it would be good for us to try to do as much good as we can with our career or with our life, I think that that’s something that many people are confused about. I’m curious to see to what extent you agree.

Rob Wiblin: But before we get to that one, we’re going to be talking about some complicated philosophy here. So I’m going to give you licence to dumb things down a little bit, or to explain things in very broad strokes rather than be completely accurate. In that spirit, can you give a really brief, dumbed-down explanation for listeners of the difference between deontology and consequentialism?

Andreas Mogensen: Yeah. Roughly speaking, consequentialism is the view that the defining property of morally right actions is that they suitably promote the good. The best-known example of a view of this kind is a sort of totalist utilitarianism, which says that an act is right if and only if it maximises total welfare.

Andreas Mogensen: Nonconsequentialist theories needn’t deny that promoting the good is morally important, but at the very least, they’ll say that other things matter in determining which acts are right and wrong. So promoting the good might be one of the considerations that go together to make an act right, but other things — like respect for rights, honouring one’s promises, partiality to one’s nearest and dearest — these are other morally relevant considerations that bear on right action.

Rob Wiblin: So I suppose consequentialists have this challenge: when you’re trying to evaluate what consequences are good, how do you tell what is good? Some people think about it in terms of preference satisfaction. Other people lean towards thinking about pleasure and pain and things like that. Other people might think about wellbeing in some more general sense, talk about people flourishing and so on.

Rob Wiblin: Deontologists, on the other hand, have this question of: how would you know what principles of right action might constrain our actions? Or what actions would be permissible or impermissible, and why? What sort of theories of permissibility and impermissibility do deontologists play with?

Andreas Mogensen: The best-known theory in the deontological camp is Kant’s moral theory. He offers many different principles which are supposed to be equivalent, but his best-known claim is that an act is right if and only if you can will the maxim of that action as a universal law. There’s a lot of scholarly controversy about how exactly to interpret this. What on Earth is a maxim? What does it mean to will a maxim of one’s action as a universal law?

Andreas Mogensen: The association between deontology and Kant’s moral theory is, from my perspective, somewhat unfortunate, because I’m not especially sympathetic towards Kant’s moral theory. Because as I said, on the one hand, it’s difficult to know exactly what this general principle means. Kant in general is not a very clear writer, but this feels almost especially unclear.

Andreas Mogensen: But at least on some natural interpretations of what it means, it seems to just have some absurd implications. It sort of seems to suggest that you’re not permitted to act in some way unless all rational agents could act similarly. As Parfit pointed out, this seems to suggest that it’s impermissible to aim to give more to charity than the average person, since not everyone could give more to charity than the average person.

Rob Wiblin: I see. Right, right.

Andreas Mogensen: So then you need some interpretation of that principle that I just described that avoids that implication.

Rob Wiblin: I see. And then things end up getting complicated. What’s an approach to deontology that’s not Kantian that you think is more plausible?

Andreas Mogensen: I’m a lot more sympathetic to a view that departs from both Kantianism and utilitarianism or consequentialism by denying that there is a single defining property of all right actions. That sort of holy grail for a lot of moral philosophers is to find some single property that separates right actions from wrong actions. For Kant, it’s this thing about universal law. For utilitarians, it’s maximising aggregate welfare.

Andreas Mogensen: A view that I’m quite attracted to, especially identified with the work of the philosopher W. D. Ross, rejects this assumption that there’s a single defining property of right actions. It’s the view that there are many distinct properties that contribute to making an act right or wrong. So there are many sources of moral reasons; there isn’t just one. It might be that promoting the good is in and of itself always a reason in favour of doing some action, but there are other sources of moral reasons — like respect for others’ rights, keeping one’s promises, things of that nature.

Rob Wiblin: I see. It’s a somewhat more pluralistic approach, or it’s an approach that’s happy to kind of bless the mess, or accept that there’s a whole lot of competing different things that we need to worry about. Both in terms of consequences, which you’re taking into account, but also in terms of are you following other principles of good action — like honesty, nonviolence, and so on.

Andreas Mogensen: Yeah. That’s right.

Rob Wiblin: How do you figure out what those principles of good action are? How do we get evidence about that?

Andreas Mogensen: It’s obviously a subject of significant philosophical disagreement, but I tend to favour a relatively mainstream approach to moral theorising, which emphasises intuitions about particular cases — typically thought experiments — to make sure that everything is sort of clean and as uncomplicated as possible. And then trying to build up theoretical principles based on these intuitions about these cases — balancing these intuitions about particular cases with more abstract theoretical intuitions about what moral principles ought to be like, and seeking to bring these various judgements into coherence with one another, because they very often turn out to be contradictory. Which is very annoying, but helps keep moral philosophers in business trying to sort out what to do with these contradictions that we very readily discover in our moral thinking.

Rob Wiblin: Yeah. We’ll have some examples of thought experiments and different intuitions that people have throughout the conversation. But you’re saying the classic issue with this approach is that very often people end up with quite strongly conflicting intuitions. I guess it would be one thing to have conflicting intuitions across people. But when you have conflicting intuitions within people — where you change something that seems like it shouldn’t be morally relevant, and the person themselves judges it is not necessarily morally decisive, but then their intuition about the case changes a lot — how do you try to reconcile those using this approach?

Andreas Mogensen: That’s a very good question. This is really something we’ll probably get on later, but one approach that I think people are increasingly interested in is the idea that when we find our intuitions or our beliefs in disagreement, an especially useful thing is to step back from those intuitions and beliefs for the time being, and just think about looking for psychological explanations, or even deeper explanations — say, evolutionary explanations — for why we might have particular beliefs or particular intuitions. And try to see if we can find some kind of discrediting explanation for some of these intuitions, or other reasons to suggest that some of these intuitions might come from a less reliable source than others.

Rob Wiblin: So one approach is to reflect on the competing intuitions, and then as you try to figure out where those intuitions are coming from, perhaps internally, you yourself might realise that you think some of them are more reliable than others, and then you end up in a kind of reflective equilibrium that is somewhat more coherent than what you had to start with.

Andreas Mogensen: Yeah. That would be the hope, at the very least. Unfortunately, it doesn’t often play out that way. I think we’re often left with a lot more uncertainty.

Rob Wiblin: Yeah. But sometimes. OK, we’ll come back to that later. Now though, what’s a reason why you kind of prefer a deontological approach to ethics over a purely consequentialist one?

Andreas Mogensen: Related to what we were just discussing, to a large extent, I think the strongest motivations for rejecting consequentialism and endorsing a nonconsequentialist or deontological approach to ethics derive from just having intuitions about particular cases.

So one very well-known case that some of your listeners might already be familiar with is the so-called transplant case, which I believe was originally devised by Judith Thomson. It goes like this: Imagine that there are five people and they’re dying from various kinds of organ failure. It’s possible to kill a healthy young man and redistribute his organs to save the five from dying. We stipulate that there are no additional adverse consequences from killing the one and redistributing his organs, over and above the loss of his life, the mourning of his loved ones and so forth — all burdens that would otherwise fall on the five who are dying.

Andreas Mogensen: On its face, actually, utilitarianism says that the right act is to kill the one and redistribute his organs. Intuitively, doing so is impermissible because this infringes on the right of the one not to be killed. If we take that intuition at face value, then that might suggest to us that consequentialism is, in a sense, too permissive. It allows us to do certain things that are in fact wrong because it apparently mistakenly claims that the ends always justify the means.

Rob Wiblin: Yeah. This is a pretty standard objection, and because you’re in the business of taking moral intuitions about cases seriously, it’s not that hard to find hypothetical thought experiments where utilitarianism or consequentialism alone gives pretty odd results — where it suggests that we should do things that we don’t actually feel that we should do, commit seemingly grave wrongs like murder in order to produce some good outcome. That’s unacceptable.

Rob Wiblin: So I guess you’re trying to find some reconciliation of how far one can go in terms of doing good. What are the boundaries on the things that we can’t do in pursuit of making the world a better place, because they violate principles of good action that are just too strong?

Andreas Mogensen: Yeah. That’s right.

Rob Wiblin: Many people associate the kind of mentality that 80,000 Hours has about doing as much good as possible very strongly with consequentialism and utilitarianism. I guess that’s natural, because certainly those ideas are very overrepresented among our staff relative to the general population. Relative to most people, I’m pretty keen on utilitarianism. It certainly does seem that utilitarianism should imply something like you should try to do as much good as you can with your life. Do you think that’s causing people to overstate the intellectual relationship between utilitarianism and trying to do as much good as possible?

Andreas Mogensen: Yeah. Maybe a bit. As I said earlier, if you’re a deontological moral philosopher, you don’t necessarily deny that promoting the good is an important source of moral reasons. You need only deny that this is the only source of moral reasons that we have. So you can definitely believe that we have very strong duties of beneficence and those should play a very strong role in shaping how we live our lives.

Andreas Mogensen: I think specifically this formulation of doing as much good as possible without any additional qualifiers might be something that only consequentialists can really sign onto. Because nonconsequentialists are going to think that pursuit of the impartial good has to be capable of being constrained by other moral considerations, like respect for rights or maybe partiality towards one’s nearest and dearest.

Andreas Mogensen: I also slightly suspect that possibly you don’t really intend the message in that sense. When you say that 80,000 Hours is about doing as much good as possible, you’re probably bracketing the possibility that there are some perhaps purely hypothetical scenarios where doing as much good as possible might require intentionally killing a person. You might think those cases are just actually very, very unrealistic, or you as an organisation don’t want to take a stand on whether the ends always justify the means in that sense.

Rob Wiblin: Yeah. We actually do have a bunch of pages on the website where we talk about this issue and we take the stance that the ends don’t always justify the means.

Rob Wiblin: But setting that aside, if it’s the case that most people who are doing research or philosophy within a deontological ethical framework do still care about consequences — they just don’t believe that they’re the only thing that matters — if people give significant weight to benefits and harms that are getting done, then it seems like the interest of those moral philosophers in how you can do as much impartial good as possible should be pretty substantial. Unless you think that the constraints on people’s actions arising from these other principles of right action are either strong or numerous or onerous in some way in practice.

Andreas Mogensen: Yeah. I think that’s broadly correct. One area where nonconsequentialist moral philosophers or standard nonconsequentialist views may come into some degree of conflict with this idea of doing as much good as possible relates to the utilitarian theory of “interpersonal aggregation.” Roughly, this is the question of how one should add up and weigh and balance harms and benefits to different people in order to work out whom one should help when you can’t help everyone.

Andreas Mogensen: So set aside those kinds of things we were just talking about, and just focus on cases where there’s no question of harmfully imposing on someone without their consent. You’re not being asked to make some very great sacrifice of your own interests, and you don’t have any special relationships with anyone. Even in these kinds of cases, where you’re just asking, “How should I allocate a benefit between different people?,” I think many nonconsequentialist philosophers are likely to disagree with the claim that we should in these contexts maximise the good — at least if the good is total welfare, measured as, say, just units of pleasure and pain or something along those lines.

Rob Wiblin: Why would they object to that? Because we’ve said there are no other issues being raised here, like close relationships that you have with people or needing to undo harms that you’ve done to someone. It’s a case where you’re considering different ways of benefiting complete strangers. Why, in that case, would you not just think about the aggregate amount of happiness?

Andreas Mogensen: One point of contention is how one should weigh very significant burdens that might fall on one person against trivial benefits that might accrue to other people. An example, which I think we might discuss a bit later on in greater depth, is suppose I can save someone from a premature death, but I can otherwise provide arbitrarily many people with some trivial benefit, like they get to lick a tasty lollipop. They each get their own. They don’t have to share.

Andreas Mogensen: On its standard interpretation, utilitarianism entails that if you’ve got enough people to whom you can grant this very trivial benefit — an additional lick of a tasty lollipop — those together can outweigh even something as important as sparing a person from premature death.

Rob Wiblin: I see.

Andreas Mogensen: Whilst you don’t, strictly speaking, need to reject consequentialism in order to support the opposite verdict about that sort of case, I think that the opposite verdict — that there are certain sufficiently bad harms, such that we ought to spare one person from that grave harm rather than providing trivial benefits to arbitrarily many people — is a view that nonconsequentialist philosophers are especially likely to accept.

Rob Wiblin: OK, I suppose there are issues about distribution of harms and benefits that the deontologist might be interested in. Whether that’s being disproportionately concerned about the worst-off person, or focusing on particularly severe harms rather than small ones, perhaps. Where a consequentialist might be more neutral about those things.

Andreas Mogensen: Yeah. Though in principle, there’s no contradiction between being a consequentialist and being especially concerned with the worse off, in that you might have a so-called prioritarian theory of good outcomes. Roughly speaking, prioritarianism says that benefits matter more when they’re received by people who are worse off in absolute terms. So it departs from utilitarianism, which says that it’s equally important to benefit anyone no matter how well off they are, in saying that benefits for the worse off matter morally more. But you can cash this out as a theory of which outcomes are good and bad. So you needn’t in principle disagree with consequentialism in order to have a view of that kind.

Rob Wiblin: Yeah. A view that you hear from time to time is people who will hear about 80,000 Hours or effective altruism — all of this research that’s being done, all this thought that’s been given to the question of how would one do as much good as possible from an impartial point of view — and they’ll say, “Well, I personally don’t endorse consequentialism or utilitarianism, so this is of little interest to me.” It sounds like you think that is misguided. Can you unpack why?

Andreas Mogensen: Yeah. I agree that’s misguided. Well, it might not be totally misguided in that there may be some misunderstanding or some ways in which certain expressions that are used to communicate what effective altruism is about might inadvertently build in consequentialist assumptions. So as I said, if you talk about the importance of doing as much good as possible without any qualification, it sounds very much like you are just asserting consequentialism.

Andreas Mogensen: It’s pretty difficult to say exactly what effective altruism is as a kind of position. Like all social movements, it’s pretty hard to pick out some definitive statements that all and only effective altruists accept. But there’s certainly no in-principle reason why one couldn’t reject consequentialism but nonetheless believe that we have very strong obligations of beneficence, very strong obligations to help others and promote the welfare of others, and that there are certain principles for aggregating harms and benefits across people that bear on this question such that we ought to be especially concerned about helping more people as opposed to fewer.

Andreas Mogensen: So there is what I think of as a kind of extreme nonconsequentialist or deontological view: a fully non-aggregative theory of ethics which says that even if you could save the lives of five people or save the life of one other person, and these lives are otherwise equivalent — these people will live for roughly as long as one another, they have roughly similar dependents and people who care about them — it’s not true that one ought to save the greater number.

Andreas Mogensen: Some people, most famously John Taurek, have argued that in such cases, one ought simply to flip a coin because that gives each person an equal chance of being saved. But I think that’s a minority view amongst nonconsequentialist philosophers and amongst philosophers generally. Most people think that you ought to save the greater number in such cases.

Andreas Mogensen: Where people might have more qualms is in the kind of cases that I described earlier, where you’re not trading off roughly similar harms to different people, but rather very significant harms to a smaller number of people versus relatively insignificant benefits or harms to a much, much greater number of people. Those are the sorts of questions about aggregation where I think many people are going to get off the boat, so to speak. They’re going to be unwilling to follow the principles of utilitarian cost-effectiveness analysis, basically.

Andreas Mogensen: But I think it’s somewhat difficult to come up with real-world practical cases, where that sort of choice is one that would overturn concrete recommendations about where one ought to donate, for example. I didn’t look into this all that closely, but I don’t think amongst the sort of organisations that are recommended by GiveWell, for example, there’s any point where one might worry that a recommendation of a particular organisation derives from giving weight to very, very small benefits and aggregating those according to the number of people who can be benefited. I think all of the people who are benefited by the organisations, or at least the primary beneficiaries, are people who are having their lives transformed quite significantly.

Rob Wiblin: I see. It sounds like the thread that connects all of these different ideas is beneficence. Definitely consequentialists, and it sounds like most deontologists, in general believe that one either has duties of beneficence, or that beneficence is a good thing even if you’re not obligated to do it. And beneficence is helping people in an impartial way, providing benefits to people and avoiding harms.

Rob Wiblin: Where does beneficence feature, how prominently does it feature, and what form does it take in mainstream deontological thinking?

Andreas Mogensen: There are maybe two specific debates about the nature of our obligations of beneficence. On the one hand, there’s this question of just how strong these obligations of beneficence are. I mentioned earlier that one standard objection to utilitarianism is that it’s too permissive: it allows us to do certain things that intuitively we aren’t permitted to do. Another objection is that it’s too demanding of us: it requires us to make sacrifices of our own interests for the sake of the general good that intuitively we aren’t required to make.

Andreas Mogensen: Nonetheless, it’s possible to make a fairly good case, without assuming utilitarianism or consequentialism, that we have very strong obligations of beneficence. This is the kind of line of discussion which, at least in the modern era, originates with Peter Singer’s article “Famine, affluence, and morality,” and the subsequent debate that leads on from that paper.

Andreas Mogensen: Roughly speaking, in that paper Singer argues that, relying on moral ideas or moral principles that virtually everyone should be happy to accept, we have very strong obligations to aid people who are in need or whose lives are threatened by the effects of severe poverty, for example. So that suggests that, even if we are not always required to sacrifice our own interests just so long as that would do slightly more to benefit somebody else, we may nonetheless have very strong obligations of beneficence — very strong obligations to make significant sacrifices of our own wellbeing to help people who would otherwise be very badly off and to make their lives better.

Andreas Mogensen: There’s a lot of debate about exactly what kind of principles should govern obligations of beneficence, whether Singer is in fact right that obligations of beneficence are quite as strong as he claims they are in that paper. There are various discussions about various factors that might explain, for example, why we ought to be willing to make much greater sacrifices to help this famous example of a child who’s drowning in a shallow pond right at our feet, as opposed to helping, say, distant strangers or distant children whose lives we might be able to save by donating to cost-effective charities.

Andreas Mogensen: So there’s a lot of debate about that issue and various attempts to explain why there might be a morally relevant difference between these two examples: saving the child drowning in a shallow pond right at your feet, or saving a life through donations in some far away country, say. So that’s one subject of debate.

Andreas Mogensen: The other major subject of debate is these questions about aggregation that I suggested, and the ways in which we should balance gains and losses of different sizes to different people, especially when we have to make a decision about whom to help and we can’t help everybody.

Rob Wiblin: Yeah. Maybe a useful applied case here is your decision to join Giving What We Can, committing to give 10% of your income to the charities that would do the best from an impartial beneficent point of view. Can you talk through how you think about that decision within your ethical framework?

Andreas Mogensen: Yeah. I joined Giving What We Can, I think, almost immediately after it was started up. I was very convinced by this argument that you find in “Famine, affluence, and morality,” and also in a subsequent book by Peter Unger called Living High and Letting Die. So I was very much convinced of this conclusion that we have very strong obligations to help other people and to spare them from these very significant harms that arise from severe poverty. I think I was on board with this idea as soon as it was proposed.

Andreas Mogensen: Perhaps at the time I wasn’t quite as focused on the issue of effectiveness of helping the greatest number of people possible, perhaps in large part because I think it hadn’t occurred to me that there might be significant differences in the cost-effectiveness of different interventions. But as I said, it’s very standard for nonconsequentialists to think that if you can help different people who might otherwise suffer roughly similar kinds of harms, then you ought to help the greater number.

Rob Wiblin: Basically, your philosophy is that it is good to make the world a better place in the way that a consequentialist would think about it — which is people having flourishing lives and not having awful lives — so long as you’re not violating some other principle of right action. Then when you think about specific things, like giving 10% of your income to charity, you notice that the benefit to the recipients is much larger than the cost to you. So you’ve got the beneficence aspect there, and then you might stop and think, “Does this violate some other principle of right action? Can I think of thought experiments that are related to this topic that would say that, no, even though it’s producing desirable outcomes, I’m not permitted to give away this money, or it wouldn’t be desirable, or there’s important offsetting factors?” Basically, it sounds like you thought about it and you noticed that there were not any of those.

Andreas Mogensen: Yes. I think that’s right. Obviously, there are some worries about the harms that would be caused by various charitable interventions, and especially harms that might be caused to societies by aid. So those might in principle weigh as reasons against — and perhaps as reasons against that would give rise to some kind of constraint on right action. But I think the conversations that circle around these questions of whether we might inadvertently be doing more harm than good in, say, donating or in providing aid don’t have a distinctively nonconsequentialist slant. The question is roughly, as I said, “Are we doing more harm than good? Or are we in fact doing more good than harm?”

Andreas Mogensen: My view is that these questions about whether there are constraints on ways in which we can permissibly harm other people for the sake of bringing about the greater good, those don’t really seem to be practically relevant to this issue — and I suspect not really to any issues that are very much at the forefront of the effective altruist debate at the moment.

Rob Wiblin: Yeah. It sounds like very strong reasoning. So whether you come from a purely consequentialist point of view or this mixed deontological beneficence point of view, it seems like the ethical case is very strong for giving 10% of your income away through whatever method you think will produce the best impartial good.

Rob Wiblin: But my impression is that there’s not a tonne of interest in that kind of project, analysing different approaches for doing the most impartial good, from people who approach philosophy from a deontological point of view. That is not a big research interest, and possibly not a big interest among people who just have a deontological frame of mind in their day-to-day life. Do you have any explanation for why that is the case, given that beneficence is a feature of most of these theories, and it seems like the case is powerful that this is an example of required beneficence?

Andreas Mogensen: I’m not sure. On the one hand, I’m not totally sure that there isn’t interest from nonconsequentialist philosophers. I mean, I am such a person. I think if you look at the list of people who joined Giving What We Can early on, I think you find quite a number of moral or political philosophers who I believe reject consequentialism. Like Adam Smith. No, Adam Swift, not Adam Smith.

Rob Wiblin: Adam Smith would be a good get at this stage.

Andreas Mogensen: Yeah. Alex Voorhoeve at LSE. I think they both joined before you did.

Rob Wiblin: Yeah. Yeah. Yeah.

Andreas Mogensen: One of the things you might think is that, historically, there hasn’t really been that much interest amongst consequentialists and utilitarians in figuring out how to really do the very most good.

Rob Wiblin: Very true.

Andreas Mogensen: Amongst the classical utilitarians, Bentham and Mill, for example, to my knowledge at least, you don’t find the kind of sustained inquiry into cause prioritisation that characterises the contemporary effective altruist movement, in spite of the fact that these were avowed utilitarians or avowed consequentialists. So it may be that in some sense this is just a question that it’s unusual for moral philosophers or thinkers of many different moral persuasions to become heavily involved with. There might be no particular puzzle about why nonconsequentialists in particular haven’t been especially involved with this question.

Andreas Mogensen: It might also just be a kind of division-of-labour-type issue. If you’re a nonconsequentialist philosopher, you probably think the place where you can add value to the conversation is in picking up on the kind of considerations that consequentialists are apt to miss in their singular focus on promoting the good. That might therefore create an environment where it tends to be consequentialist moral philosophers — who care the most, in some sense, about promoting the good, in that they care about nothing else — who are most likely to focus on these sort of practical questions about cause prioritisation.

Andreas Mogensen: But yeah, I don’t really have a good explanation for why this should be so, except for the obvious point that I think you brought up: that utilitarianism or consequentialism entails straightforwardly that you should be very concerned about this project. Nonetheless, I believe that there are good arguments for being very interested in these questions, but there is more room to resist this sort of concern for doing the most good as an overriding moral concern, or as a central moral concern in how one lives one’s life.

Rob Wiblin: Yeah. It is a very good point that so many philosophers who avowed consequentialism in one form or another seem to have taken barely a passing interest in the question of what actions would actually do the maximum good — which you would think would be almost a monomaniacal focus for those folks. That’s exactly the kind of way that you would think that they would be inclined to think, and exactly the kind of thing that they would focus on. So perhaps it’s not so surprising that people who are applying different moral frameworks haven’t taken a huge interest in that either, given that it’s somewhat further afield in terms of the style of reasoning and the kind of questions that you might ask.

Rob Wiblin: So far, I’ve kind of been eliding the difference between something being desirable and something being mandatory. Is it possible to give a rough survey of deontologists on the whole? Would they think that taking the Giving What We Can pledge or giving 10% of your income to the best charities is desirable, or mandatory, or neither? What fraction might fall into those different camps?

Andreas Mogensen: I suspect that most, perhaps all, would view this as morally praiseworthy. As I said, they might have some qualms if the procedure that is used for prioritising among projects violates certain principles that should restrict the way in which we aggregate minor harms against very significant harms, or, for example, considerations of fairness that might guide how we allocate goods amongst competing claimants. But I think it’s actually somewhat difficult to identify practically relevant cases where that is an issue in the kind of interventions or organisations that effective altruists recommend, especially in the global health and wellbeing space.

Andreas Mogensen: I would expect that virtually all would think of this as something that’s morally praiseworthy. Whether most would think it’s obligatory? I wouldn’t be surprised if indeed most people thought it was obligatory to donate 10% of one’s income to help the worst off, if one is, say, a middle-class person in a developed country. I expect you would probably find quite a lot of people who would regard that as obligatory.

Rob Wiblin: OK. That is cool to know. I’m slightly saying that because it’s a little bit surprising that there’s not more uptake, but I suppose as you pointed out, almost no one has taken up this kind of approach. So it’s not surprising to find that in any particular group of people, mostly they haven’t taken it up either.

Rob Wiblin: A separate research interest that you have is in longtermism — this idea that, arguably, the best actions that we could take would be the ones that would have the best consequences far in the future, after our normal lifespans are over. This is another one of these ideas that seems strictly entailed by utilitarianism, but might well be entailed — perhaps with some variations — by many other moral philosophies that one might follow. What ethical reasons cause you to take a big interest in longtermism?

Andreas Mogensen: So, as I said, it’s characteristic of contemporary nonconsequentialist philosophers to think that promoting the good is an important source of moral reasons, but not the only one. If we understand longtermism in the first instance as a claim about value — about what kind of actions best promote the good — this suggests a very natural connection, or a reason why one would be interested in this question, right? If you think that reasons to promote the good are an important class of moral reasons, then it’s important to know in which direction those reasons point and what kind of actions they direct us towards.

Andreas Mogensen: So from that perspective, it would be very natural to be interested in longtermism, if you take seriously that there are stringent obligations of beneficence — even if you deny that that’s the whole of morality. Once you get into longtermism, you get into population ethics, and here a whole new set of controversies arise where there’s additional scope for disagreement about how outcomes that are associated with creating additional happy lives bear on what we morally ought to do.

Andreas Mogensen: Whilst I think longtermism can be supported from within a range of different theories of the good, it perhaps has a most natural home within this kind of totalist utilitarianism that says one outcome is better than another, just in case it contains a greater sum total of welfare. So in particular on this view, you can make the outcome better by creating additional happy lives.

Andreas Mogensen: I think many people are intuitively sceptical of the idea that we have moral reasons to bring additional happy lives into existence. You can resist this from within consequentialism by adopting a particular theory of the good that, say, rules out total utilitarianism. But it is, again, an idea where perhaps nonconsequentialists are especially apt to deny that there could be a moral obligation to create additional happy lives, as opposed to benefiting already existing people, or people who will exist regardless of what action we take.

Rob Wiblin: I see. So I suppose the reason to take an interest in longtermism is very similar to the reason to take an interest in effective altruism, because longtermism is just one particular school of thought that one might have about how one might pursue beneficence, how one might do the most good from an impartial point of view. I guess you’re saying for many deontologists who would be enthusiastic about the idea that it is great to give 10% of your income to help extremely badly off people overseas, one reason they might get off the boat on the trip to longtermism and thinking about people very far in the future is that they may have different views on population ethics.

Rob Wiblin: Population ethics is this whole research area where people will compare different scenarios that have different numbers of people in them and different identities of people in them, and try to figure out which of these scenarios is better to create. Deontologists, for whatever reason, are perhaps somewhat more inclined to resist the idea that a scenario can be much better just because it has way more people in it, people who don’t exist right now. That’s a potential parting of ways.

Andreas Mogensen: Yeah, I think so. I think many people find this idea intuitively bizarre, let’s say — and that includes many consequentialists or utilitarians, or people who identify as such. So it’s certainly not particular to nonconsequentialism to think that, as Jan Narveson put it, we should be in favour of making people happy, but neutral about making happy people. As I recall, Narveson sets up this claim in a paper in which he’s otherwise developing a utilitarian approach to thinking about population. So, one can be sceptical of the value of adding happy lives, whilst otherwise endorsing a utilitarian or consequentialist approach to ethics.

Andreas Mogensen: But I think there’s a sense in which consequentialism perhaps lends itself especially easily to a view on which we could be obligated to create additional happy lives. It says that ultimately what matters morally is to promote the good, and so if we identify the good with welfare, for example, one can very naturally fall into thinking that we should be in favour of having as much welfare as possible — even if that would not be increasing the welfare of any existing person or any person who might exist, regardless of which action we take.

Andreas Mogensen: Whereas nonconsequentialists are more likely to take this line of thought — which is developed especially well, I think, in some recent papers by Johann Frick — that morality is ultimately rooted in concern for people, so our overriding concern should be to ensure that if a person comes into existence, then their life goes well for them. And welfare is not otherwise a source of moral demands; its moral significance can’t be detached from our concern for individuals.

Andreas Mogensen: I think consequentialism — especially in this totalist utilitarian form — has sometimes seemed to open itself up to an objection that it treats people as if they were vessels, or a receptacle for something valuable. That concern is especially likely to arise when we contemplate this idea that we should be in the business of creating more people so that there can be more welfare in total. It seems as if we’re sort of valuing people as means to the production of utility.

Andreas Mogensen: As I said, there are ways of amending or otherwise devising one’s theory of the good such that one might avoid the implication that it improves the outcome to add people who have good lives. But, again, this is a case where nonconsequentialists are perhaps more likely to get off the boat than consequentialists for perhaps roughly the kind of reasons I’ve tried to set out.

Rob Wiblin: Yeah, yeah. I’m keen to pull more on what the potential differences are in views that people would have. So this population ethics case is one. Another that you’ve mentioned a few times is this issue that deontologists might have a different view on very big harms versus very small harms, and how you weigh those up against one another.

Rob Wiblin: What are some other ways in which someone with your moral perspective might have a different idea about how one ought to go about doing the most good? Or how they might have different priorities from an effective altruist point of view than someone who was more just a straight utilitarian?

Andreas Mogensen: One other example of these sorts of questions about allocative or distributive ethics, where there might be a tendency for consequentialists and nonconsequentialists to disagree, might relate to this issue of fairness. You can imagine cases where you could provide a roughly similar benefit to one of two people, but a slightly greater benefit to one of them. In these cases, utilitarians, and perhaps consequentialists generally, will think that one ought to help the person that one can benefit the most. But I think many nonconsequentialists are actively sceptical of this, at least in cases where the difference in the benefit that one can provide isn’t especially great.

Andreas Mogensen: So for example, suppose that we can choose between saving the lives of two people. One of these people will have some postoperative pain and the other won’t. Otherwise, their lives will be similar from then on. I think it’s perhaps somewhat natural to think that it’s unfair to the person who would have some postoperative pain that they should entirely lose any claim to our help just because they won’t be benefited quite as much as the other person. You might think it’s fairer still in this case to, say, flip a coin to decide between the two.

Rob Wiblin: OK, so you’ve got issues around distribution of benefits to people.

Andreas Mogensen: You might think this is actually roughly the same kind of issue as the one we were discussing earlier, about how to weigh very significant benefits against relatively trivial benefits: although here we’re not dealing with very large numbers of people, in some sense we’re asking how the provision of a comparatively minor benefit should determine how we decide who is spared from a very significant harm — in this case, losing their life. So you might think these are actually reflective of the same underlying issue. But nonetheless, it doesn’t involve these questions about very, very large numbers. Here we just have one person and one other person who can be benefited slightly more.

Rob Wiblin: Are there any other ways that the deontologists might think about these things differently?

Andreas Mogensen: Yeah. One other issue relates to a paper I published recently on discounting. There’s a kind of standard practice within the economic evaluation of long-term investments which involves discounting future welfare. And most people, when they reflect on this, think that this can’t morally be justified. It may be that people in fact discount their own future welfare because they’re impatient, but having less moral concern for people merely because they’re more distant in time seems indefensible. That’s very much the position to which many philosophers and economists, insofar as they’ve thought about this from a normative perspective, have tended to gravitate.

Andreas Mogensen: So one issue that I was interested in exploring was whether one could justify something that at least looks like a discount rate by appeal to the moral importance of partiality. Consequentialists typically adopt the view that, as I said, you should maximise the good. And the good here is the good considered impartially: it’s not what’s good or bad from my perspective or from your perspective, but what’s good as considered from what is sometimes put as “the point of view of the universe.”

Andreas Mogensen: Most people, I think, have a pre-theoretic belief at the very least that we ought to be especially concerned about the welfare of some people and not others. We ought not to be completely impartial between the welfare of all the different people who exist. So for example, I ought to be, and am indeed, more concerned about the welfare of my wife than about your welfare, Rob. Sorry.

Rob Wiblin: Yeah. It’s all right.

Andreas Mogensen: So we think that there are these reasons to be especially concerned with the welfare of some people and not others, that relate to the particular relationships in which we stand to some people and not to others. And this, I think, is thought of as a distinctively nonconsequentialist moral consideration of partiality.

Andreas Mogensen: Then the idea to be explored is that we might claim that we stand in relations to our nearer descendants that give us reasons to care more about their welfare than our more distant descendants. It’s an idea that’s been floated at a couple of points in the literature on discounting. It’s discussed by Derek Parfit in his 1984 book Reasons and Persons. A number of economists have mentioned it: Thomas Schelling, I believe, mentions this as an idea; Nick Stern characterises this as something like the best ethical argument for a positive discount rate.

Andreas Mogensen: But there’s been surprisingly little sustained philosophical inquiry into whether one can indeed justify something like a positive discount rate on future wellbeing in something like this way. In the paper, I’ve roughly argued that you can indeed justify something that looks somewhat like a positive rate of pure intergenerational time preference in this way. But, nonetheless, you probably can’t justify the very extreme conclusions about discounting that you can derive from the ordinary way in which economists approach discounting — where they apply a kind of constant exponential discount rate to future wellbeing, such that if you go sufficiently far forward into the future, then people’s welfare essentially counts for nothing.
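The “essentially counts for nothing” point about constant exponential discounting can be seen with a tiny numerical sketch. This is only an illustration of the standard economic practice being criticised, not of anything in Mogensen’s paper, and the 3% annual rate is an arbitrary assumption:

```python
def discount_factor(rate: float, years: int) -> float:
    """Weight given to one unit of welfare `years` from now,
    under a constant exponential (compound) discount rate."""
    return 1.0 / (1.0 + rate) ** years

# With a 3% annual rate of pure time preference, the weight on
# future welfare shrinks geometrically:
for years in (10, 100, 500):
    print(years, discount_factor(0.03, years))
```

At 100 years out, a unit of welfare already counts for only about a twentieth of a present unit; at 500 years, for less than a millionth — which is why constant exponential discounting effectively ignores the far future, whatever is at stake there.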

Rob Wiblin: So the idea here is that you might have a deontological theory that says, yes, impartial beneficence is good, but we also have other moral reasons, which are partial reasons to benefit people. So we have particular reasons to help our friends and family and people who are close to us. And you point out in this paper that this can potentially give us something like an increasing indifference to the future as it carries on, because our connection to people as time passes becomes weaker and weaker and so our partial reasons to help our great-great-grandchildren might be less than those to help our grandchildren. So we gradually just care less about how the future is going, basically.

Andreas Mogensen: Yes. That’s the idea. Exactly.

Rob Wiblin: Yeah. Cool, yeah. We’ll stick up a link to that paper. Pushing on this idea of competing moral obligations, if I try to imagine a deontologist who thinks that it’s, say, possibly even bad or not desirable to take the Giving What We Can pledge, my guess is that they would say something like, “You have other moral priorities that should be taking up your money and your resources and your attention and so on, that are not about this neutral beneficence to help strangers. Maybe you should be trying to push for justice in your society. Or you should be spending all of that money on your kid, because your partial reasons to help your own child are so strong that they have a claim on almost all of your resources and giving it to a stranger is the wrong idea.”

Rob Wiblin: What moral motivations or reasons for action might a deontologist think are competitive with the ability to really cheaply provide a huge benefit to a stranger?

Andreas Mogensen: On the point you raised about obligations to our child, I think people would be unlikely to take this view, at least if we’re assuming that people are already quite affluent. The other view that you mentioned, that there may be requirements of justice on us to be especially concerned about ameliorating injustices in which we are personally implicated or otherwise involved, I think that’s probably a more common line of resistance. So people might resist the idea that one should allocate one’s beneficence in a purely impartial way, or in a way that completely disregards the particular relationships in which one is embedded.

Rob Wiblin: Another thread that I’ve heard is that possibly one has obligations to undo past injustices that might be so important that they’re competitive with the need to help strangers, or the desirability to help strangers. Have you heard of any specific cases of that that people bring up?

Andreas Mogensen: I similarly can't think of any specific instances of people raising this objection in any publications, but it's certainly an idea that's come up in discussion, very similar to the other idea that you mentioned: that we, as beneficiaries of past injustices, may have especially strong obligations to rectify those injustices — for example, to compensate the contemporary victims of those injustices, or the descendants of the victims of historical injustices.

Andreas Mogensen: I think that's definitely a very significant consideration to take into account, and it then becomes very difficult to say exactly how one should weigh these competing considerations — in respect of the value of justice and the value of rectifying past injustices — against the potentially significantly greater benefits that we can provide for people who, for example, suffer from the adverse effects of extreme poverty.

Andreas Mogensen: Though this may be less of an issue if the question is a choice between focusing on, let’s say, the rectification of historical or contemporary injustices that obtain within one’s own country or society, and helping some of the poorest people in the world — in that you might think that we are also the beneficiaries of an international economic system that unjustly has impoverished some of the poorest people in the world.

Andreas Mogensen: So these considerations that relate to rectifying past or contemporary injustices may not tell one way or the other so decisively, if the choice is between rectifying injustices within our own society versus helping individuals who are badly off in developing countries, say, if we live in a developed country. In that we might think the network of injustices from which we've benefited spans the globe. So these considerations of rectifying injustices in which we may be complicit, or from which we otherwise benefit, may also be thought to apply to helping people in severe poverty.

Andreas Mogensen: It might become more of a significant challenge if we think about issues that are concerned with promoting a good long-term future, where it might be less straightforward to frame the kind of activities in which one would be engaged in this way. Though I think Toby Ord in The Precipice does take up this issue, and connects longtermist concerns to the rectification of past injustice: the thought being in a very straightforward sense that if we all die in some extinction catastrophe, then we’ll never be in a position to right past wrongs.

Rob Wiblin: I suppose one might also think that if a particular group of people through their own negligence end up killing everyone, that that is incredibly unjust, as well as being harmful. So it’s important to prevent that injustice.

Andreas Mogensen: Yes, definitely.

Rob Wiblin: Yeah. Bringing all of this together a little bit, it seems like your perspective on cause priorities might actually differ barely at all from the mainstream within effective altruism. Because even other folks who are not active researchers in deontological ethical theories — even those like me who in general are most enthusiastic and most excited to talk about consequentialist ideas — almost all think that in practice there are side constraints on your actions. Constraints like: you can't steal money from other people and then give it to charity.

Rob Wiblin: And there are various different reasons for that. One is moral uncertainty: those things might just be prohibited; we might have really strong nonconsequentialist reasons not to do them. Or alternatively, even if you can't identify the enormous harm that would be caused right now, in practice we think that most of these rules are very sound and allow people to get along well, and it would be a disaster if people started violating all of these social norms whenever their particular calculation suggested it was good.

Rob Wiblin: Is there actually anywhere in practice where the actions that would be most good from an effective altruist point of view would be different than the mainstream — within the sort of things that you might find people talking about on the Effective Altruism Forum — because of your particular take on ethics?

Andreas Mogensen: Yes. Probably I’m more sceptical of longtermism than others, or at least I’m more sceptical of the position that we might call “deontic longtermism.” “Deontic” here is a bit of moral philosophy jargon which roughly means “to do with what one is obligated or permitted to do.”

Andreas Mogensen: So we can distinguish two sorts of claim. The first is often called "axiological strong longtermism" — "axiology" is another piece of moral philosophy jargon, meaning "having to do with the theory of the good." So, axiological strong longtermism is, very roughly, the claim that those acts that best promote the good do so by virtue of their potential effects on the long-run future.

Andreas Mogensen: And then “deontic strong longtermism” would be the claim that those acts that are right are, by and large, right by virtue of the fact that they would potentially have significantly beneficial effects concerning the long-run development of Earth-originating intelligent life, or something along those lines.

Andreas Mogensen: There’s a kind of obvious sense in which I think deontologists have more room to resist deontic longtermism than consequentialists, in that deontic longtermism follows quite straightforwardly from axiological longtermism, if you believe consequentialism.

Rob Wiblin: So you’re saying that the connection between what is good and then what you ought to do in consequentialism is kind of a one-to-one correspondence: that if it produces a better outcome, then you ought to do it. So axiological and deontic is not a very important distinction within consequentialism, but within deontology these things come apart much more.

Andreas Mogensen: Yeah. So, as I mentioned earlier, in particular you might be sceptical that we can have moral obligations to bring into existence individuals who have lives worth living, just for the sake of there being more welfare in the world. And that’s a view to which I’m quite attracted: that there are no such obligations.

Andreas Mogensen: I'm similarly attracted to a significantly less intuitive claim: that if we can choose which of two individuals to bring into existence, and both will have lives worth living, we are not obligated to bring into existence the person who would have the better life. I think that's a position many nonconsequentialists would reject — they hold that we are obligated to bring into existence the person who would have the better life.

Andreas Mogensen: So for example, suppose we can choose between bringing into existence someone who will live a good life — a full human life; they would live to, say, 80 years — or someone who would live a life that is very short for a human being: perhaps they'll be born with some congenital condition such that they die at the age of 30. Many people, I think, intuitively feel that we morally ought to choose to bring into existence the person who would have the better life. And whilst I agree it would be better, I'm inclined towards the view that it would not be wrong to choose to bring into existence the person who would have the shorter, less good life instead. That is a lot more controversial, certainly amongst philosophers, I suspect.

Rob Wiblin: That’s a controversial issue, or a much debated topic.

Andreas Mogensen: Yes, exactly. This is related to the classic nonidentity problem, popularised by Derek Parfit in his 1984 book Reasons and Persons. In some sense, the nonidentity problem refers to the theoretical paradox that arises in these cases where our actions change the identity of the person we might bring into existence in a way that seems to imply that no one is made better or worse off — or certainly that nobody’s made worse off as a result of our choosing an action that seems to bring about a worse outcome.

Andreas Mogensen: For example, bringing into existence the person who would live for only 30 years, as opposed to the person who would live for 80 years: the paradox is that a lot of people think it is morally better to bring into existence the person who would live for 80 years, and we ought to do this rather than bring into existence the person who will die prematurely. But bringing into existence the person who will die prematurely instead is not worse for anyone. It’s not worse for that person; they get to live 30 happy years. It’s not worse for the person who could have been brought into existence and would have died at 80, since there is no such person because we chose not to create them. And we can stipulate that it’s not worse for us.

Andreas Mogensen: So this is a well-known paradox in practical ethics, and there are different ways of reacting to this paradox. I’m perhaps unusually sympathetic to the view — which I think is best defended by David Boonin in his recent book on the nonidentity problem — that while it is better to bring into existence the person who would live to 80 years rather than the person who would die prematurely, it isn’t obligatory to bring such a life into existence.

Rob Wiblin: OK, yeah, we’ll stick up a link to that paper. I’ve read a lot of your papers in prepping for this, and I know that you talk about this issue in one of them. So I’ll figure out which one of them it is and we’ll link to that in the blog post associated with the episode.

Rob Wiblin: I’m keen to dig a little bit more on trying to find deontological ideas that are in tension with effective altruism or impartial beneficence. It seems like some moral philosophers have had a degree of hostility to the mentality that many effective altruists would have, and possibly this hostility could extend all the way to feeling odd about someone taking a pledge to give 10% of their income to the best charities, even if they’re quite comfortable. Although maybe they’d feel that that’s OK because it’s not such an extreme action.

Rob Wiblin: But some of it seems to come from this sense that one ought not to treat oneself as a means to an end, or that there's something good about being authentic, I guess, or pursuing one's own interests and not just trying to help others. One famous thinker who reasoned along these lines was Ayn Rand. She almost actively thought that charity was bad, unless it was what one was most passionate about. She almost advocated an aesthetic of selfishness, or certainly self-focusedness. There's also Bernard Williams, who has kind of run this line that there's something wrong with the disposition of just thinking about your life as a way to aid others.

Rob Wiblin: Could you flesh that out a little bit? Because I imagine you know a tonne more about this issue than I do.

Andreas Mogensen: Yes, you're right. It's difficult to conceptualise this naturally as a claim that it's morally wrong, for example, to engage in impartial beneficence. But there might be a belief that, when we think in a wider sense about how we ought to live, morality is only one consideration that bears on this. There are other considerations — that might be thought not to be specifically moral considerations — that bear on how we ought to live our lives, and what sort of careers we ought to choose, and perhaps even where we ought to choose to donate.

Andreas Mogensen: So for example, you mentioned authenticity. You might think a lot of people believe that people, in some sense, ought to be authentic. I think it’s less clear that they believe one morally ought to be authentic. But one is, nonetheless, in some sense criticisable if one isn’t authentic — if one doesn’t live in accordance with one’s deepest principles, aims and ambitions, beliefs, and so on and so forth.

Andreas Mogensen: You might well believe — and I think, in a rough sense, this is what Bernard Williams believed — that utilitarianism, by virtue of asking a person to be willing to subordinate their own interests to achieving any marginally greater benefit for others, asks a person to give up their [authenticity]. Usually Williams puts it in terms of "losing their integrity," but I think it's very natural to understand what Williams means by integrity as something very closely connected with an ideal of personal authenticity.

Andreas Mogensen: And more generally, there's also a very famous paper by Susan Wolf called "Moral Saints," where Wolf argues, very roughly, that a life that is overwhelmingly governed by concern to do what is morally best, or even overwhelmingly governed by concern to do what is morally right, is not a very attractive kind of life. I think she says something along the lines of, "It's not the kind of person that we want our children to turn out to be." There are certain distinctive goods that might be lost if we were in some sense overly moral, overly scrupulous, overly concerned, or exclusively concerned with doing whatever was morally best in any given situation.

Andreas Mogensen: One of the examples she gives is something like no one could have a wry or cynical sense of humour if we were always trying to be as morally good as possible. We would be these kind of —

Rob Wiblin: Very dour. Very Protestant.

Andreas Mogensen: Yeah. Sort of meek, insipid characters, or something along those lines.

Rob Wiblin: I see, yeah.

Andreas Mogensen: So, yes, I definitely think one could have concerns about that. But I think they’re somewhat difficult to bring to bear, except on something like a full-blown utilitarianism, which says one ought always to bring about the outcome that is best, considered impartially. If we’re just talking about the idea that one should donate 10% of one’s income if one is already quite well off, living in a developed country, middle class, et cetera, I think it’s very hard to make the case — unless one is in quite unusual circumstances — that this would violate one’s integrity or authenticity to such an extent that one ought not to do this.

Andreas Mogensen: Perhaps there can be some resistance to the idea that one should be guided by impartial concern in choosing where to give. Perhaps that might be a place where these concerns about authenticity could become more significant or offer a sharper line of resistance to standard effective altruist claims. The thought being that if one just donates to wherever GiveWell recommends that one donates, then one loses something of the authenticity that can be achieved by giving to a cause that is personally meaningful in a way that perhaps GiveWell’s top recommended charities are not personally meaningful to someone.

Andreas Mogensen: I think that definitely could be a consideration that would lead one to resist certain kinds of effective altruist conclusions. It's not something that weighs especially heavily with me. I think I'm on record now as very sceptical about the legitimacy of authenticity as a personal ideal. I have a paper in the European Journal of Philosophy arguing, roughly, that there is no such thing as the deep or central self, such that one ought to express the traits that belong to that deep or central self in one's actions and in the way one lives one's life. So I'm quite sceptical of that.

Andreas Mogensen: But, yes, certainly authenticity is in some sense a very widely adopted, very widely accepted personal ideal that many people, I think, treat as an important part of the answer to this very general ethical question of how one ought to live — which you might think is not decided, or shouldn't be decided, entirely by concern for what would be morally best, or what one morally ought to do.

Rob Wiblin: OK, pushing on from deontology. In terms of thinking about other schools of thought within ethics and what they would make of effective altruism, the third big school of thought within ethics is virtue ethics, which I guess approaches ethics from the perspective of cultivating personal characteristics that are considered desirable, like kindness or honesty and so on. Or at least that's how I would very briefly describe it. I don't want to spend too long on this one because it isn't your area so much, but do you have any idea what virtue ethicists would think of effective altruism as a community and an activity?

Andreas Mogensen: I could try some guesses. I think it’s important to distinguish different varieties of virtue ethics. As you suggested, the thing that perhaps unites all virtue ethicists at the broadest or most abstract level is some kind of belief that the most important questions in moral inquiry or in ethical inquiry relate to the nature and development of the virtues. And then you can take this thought in different directions.

Andreas Mogensen: So some people within the virtue ethical school believe that this line of inquiry or this way of thinking provides us with the materials for a distinctively virtue ethical criterion of right action — in that you could sort of believe the first thing and, nonetheless, believe utilitarianism, for example. You might think if we really want to achieve good outcomes, the thing that we should really focus on is cultivating the right character traits in people. So in that sense, one could have a virtue-theoretic or a virtue-focused consequentialism.

Andreas Mogensen: You might think that the thing we should really be focusing on, if we want to make the world a better place, is developing the right character traits in people, as opposed to developing the right algorithmic decision procedures, or developing the right social institutions. But you could also believe in a distinctively virtue ethical criterion of right action. Perhaps the best-known example of this is due to Rosalind Hursthouse, and it says that an act is right if and only if and because it is what a virtuous person would characteristically do. That isn't especially informative unless you can provide a self-standing theory of what the virtues are.

Rob Wiblin: Yeah. It raises that question.

Andreas Mogensen: Yeah. This then provides another point where we go deeper and look at deeper varieties of virtue ethics. Perhaps the most prominent theory of the virtues amongst contemporary virtue ethicists is still a neo-Aristotelian account of the virtues — where, roughly, we understand the virtues as those character traits that are necessary in order to lead a flourishing human life. Unfortunately, since I’m to some extent uncertain about what character traits are necessary in order to lead a flourishing, characteristically human life, I’m somewhat unsure how this school of thinking about right action would bear on the sort of questions that we’re addressing.

Rob Wiblin: It’s just unclear, I suppose.

Andreas Mogensen: Yes. At least it’s unclear to me. There’s certainly been work from a virtue ethical perspective on, say, environmental ethics, which has a natural relationship perhaps to questions about longtermism. But unfortunately, it’s not something I know too much about.

Andreas Mogensen: I did once toy with the idea of making some kind of argument that effective altruism is best conceived of as a form of virtue ethics, in that, as I said earlier, I think it’s very hard to pick out some kind of claim about what makes an act right or wrong that would be agreed upon by all people who are part of the effective altruist movement. Similarly, it might be very hard to pick out a theory of the good that all members of this community would agree on. I had a sort of hunch that you might find more agreement.

Rob Wiblin: About what character traits are good.

Andreas Mogensen: If you thought about what kind of mental dispositions, at the very least. You might think a mental disposition doesn’t suffice to yield a character trait, but it’s sort of moving in this direction of thinking about what kind of traits of a person should we wish to cultivate. So, for example, this idea of cultivating a scout mindset.

Rob Wiblin: Intellectual integrity and curiosity, and kindness, and concern for the wellbeing of strangers. Yeah.

Andreas Mogensen: Yeah. So I think I had the hunch that maybe it was easier to find common ground amongst effective altruists in terms of what kind of broad dispositions or broad traits it’s desirable for a person to have than about these more fine-grained questions about right action or good outcomes — over which there is indeed a lot of disagreement.

Andreas Mogensen: I'm not sure if this really works out. I liked the title more because it's very provocative and sounds very counterintuitive; once you spell out the argument, it's perhaps not so counterintuitive. But yes, that would be virtue ethical only in the very broad sense of prioritising, as central ethical questions, questions about what kind of character traits we ought to recognise as virtues and develop in ourselves and in others and in our children. It wouldn't be presupposing any distinctively virtue ethical criterion of right action.

Rob Wiblin: You mentioned before we started recording that you were slightly nervous about this section because you identify as a deontologist, but some of your colleagues within philosophy question whether you truly, deeply are, at least on their conception. Why is that? Why would they not think of you as a fellow traveller?

Andreas Mogensen: I suspect largely because, very roughly, my view is that promoting the good matters a lot and there are many other sources of moral reasons that can in principle bind us and constrain us and direct us to perform various actions. But, as we discussed, certainly those constraints on harming or killing others in ways that might promote the overall good are not really practically significant in my life. For this reason, I think perhaps a large part of the answer for how we ought to live can be derived by thinking about what we can do to promote the good.

Andreas Mogensen: So in that sense, you might think I’m very much being drawn in the direction of a consequentialist moral view. As you mentioned, the practical conclusions to which I’m attracted, the concrete conclusions about how we, in practice, ought to live our life, will tend often to coincide with those that are favoured by consequentialists.

Andreas Mogensen: As you also mentioned, consequentialists often have various theories about why, in practice, the dictates of consequentialism coincide significantly with those of common-sense morality. I might believe that even in principle one shouldn't kill one person in order to save the lives of five others. You might think that in the thought experiment set up in the transplant case — where it's stipulated that there are no other adverse consequences from my choice — in that case, yes, I ought to kill the one and save the five. But you probably think that any real-world case that might resemble this is actually one in which there would be significant indirect adverse consequences, such that one ought to do the thing that I would claim is what one ought in principle to do.

Andreas Mogensen: So there is certainly a school of thought within moral philosophy that sees these different moral theories as — at least in the kind of circumstances in which we find ourselves now — to a large extent, converging on the practical conclusions that we ought to draw.

Andreas Mogensen: There are some nonconsequentialists, I should mention, who are sceptical of the very idea of promoting the good as a source of moral reasons. So they would deny that the fact that something would promote the good — at least in the distinctive, specialised sense that they take consequentialists to be invoking — is, in fact, a morally significant consideration.

Andreas Mogensen: Or a very closely related view — perhaps, ultimately, the same view — is one which sort of denies that the good is a kind of independent foundational source of moral reasons, that stands apart from and can in principle counterbalance other sources of moral reasons that might derive from, say, respect for the rights of others, or a desire to be able to justify one’s actions in terms that no reasonable person could reject.

Andreas Mogensen: I, by contrast, am more inclined to a view on which the good can be conceived as a self-standing source of moral reasons. This might be a sense in which I am a half-hearted nonconsequentialist. Certainly there are flavours of deontology that are much more hostile to the kind of moral considerations that primarily animate consequentialists than the view I accept. So it's perhaps not surprising that I place myself closer to the consequentialist camp than many other philosophers who otherwise reject consequentialism would place themselves.

Rob Wiblin: OK, to just sum up my take on this whole section, it seems to me like effective altruism — as a practice and as a community and as a culture — to some extent has already absorbed and adjusted to many of the critiques that deontologists might make. Almost from the outset, people were trying to be very clear that effective altruism is not utilitarianism in two ways. One is that we do take very seriously ethical side constraints, like saying, “Even if you think it will have good consequences, you shouldn’t go out lying and stealing and cheating people and so on.” That is really built very deeply into the culture.

Rob Wiblin: And then also around demandingness, people have a wide range of views on this, but effective altruism in practice, relative to utilitarianism, is much more inclined to say something like, “It’s good to be doing more good, and it is reasonable maybe to ask yourself to make some sacrifices to do that. But we don’t in practice want to ask people to make massive sacrifices for very long, or in general, or if they don’t think they’re really up to it or they’re excited about doing it.”

Rob Wiblin: And at 80,000 Hours, on both points: we have a page where we talk about ethical side constraints, and we also encourage people not to make huge sacrifices when deciding what career to take — both for pragmatic reasons and because we think it might not be ethically required, and so on. So at least those points have kind of already been absorbed and taken seriously.

Rob Wiblin: But then an area where perhaps they do come apart is that many people coming from a deontological point of view might think that our really pressing ethical priorities are to prevent injustices that are occurring in the world. It might be cases where injustices are going on and they’ll place extra weight on how bad they are, above and beyond the negative consequences.

Rob Wiblin: So of course, consequentialists are going to hate modern slavery because of its negative consequences. But a deontologist might hate it both because of that and above and beyond that, because it’s an injustice committed against a person. So that kind of cause area might show up more highly in their prioritisation than it would for a typical person in the effective altruism community.

Rob Wiblin: Then of course there’s the population ethics issues, which are controversial for many different people from many different schools of thought.

Rob Wiblin: That’s my summary of the main areas where I see these ideas are going partially in separate directions. Does that seem kind of right?

Andreas Mogensen: Yes, I think so. With respect to the issue of justice, I think the disagreement is more likely to concern injustices in which we especially are complicit, or otherwise involved, or of which we are beneficiaries. In that, in principle, consequentialists can believe that injustice is, in and of itself, something that makes an outcome worse overall.

Andreas Mogensen: The considerations about injustices in which we in particular are implicated is more likely to bring out a disagreement between consequentialists and nonconsequentialists, in that consequentialists roughly favour a completely impartial conception of morality, whereas nonconsequentialists are more likely to recognise obligations that I incur in virtue of the particular relationships in which I stand to some people and not to others — and one such particular relationship in which I might stand to some people and not to others is that I am the beneficiary of an injustice that they have suffered or their ancestors have suffered and from which they continue to suffer today.

Andreas Mogensen: I think it’s also worth highlighting this issue about aggregation that I mentioned. The kind of cost-effectiveness analysis that effective altruist organisations tend to use I think is probably of a kind that nonconsequentialists might disagree with as a matter of principle, insofar as it doesn’t build in any kind of explicit unwillingness to weigh many, many small benefits against the prevention of a single serious harm.

Andreas Mogensen: Though, as I mentioned, I think in practice it’s much, much harder to point to any practical decisions about prioritisation where these disagreements about how to aggregate benefits and harms across different individuals are really a live issue. Though certainly we could imagine that such issues will or could arise.

Rob Wiblin: Yeah. Speaking of this issue of aggregation, another thing that I was really keen to talk about is this paper you have on large number scepticism. Basically, you have this great paper arguing that we shouldn't necessarily distrust our intuitions about thought experiments just because they involve huge numbers of things that are very hard to comprehend. That sceptical position is what you call "large number scepticism" — after, I guess, other authors who've advocated for it. Large number scepticism is something that many people have advocated over the years — including me, I imagine, on this show a couple of times.

Rob Wiblin: First, to help get people to care about this issue at all, can you lay out the electrocution at the World Cup final thought experiment?

Andreas Mogensen: Yeah. So this is a thought experiment that originally derives from the book What We Owe to Each Other by T. M. Scanlon. He asked us to imagine: There’s a World Cup game in progress, and we are at a television station that’s broadcasting this game. And then we discover that there’s been an accident. A man called Jones has fallen. There’s some equipment that’s collapsed on top of him. His arm is crushed and he’s currently receiving extremely painful electrical shocks. And we can, of course, alleviate his pain. We can rescue him from under this equipment that’s fallen on top of him, but unfortunately the only way to do so will require us to shut off the broadcast. And so we’ll have to deprive a large number of people who are watching the match of the pleasure — of the enjoyment of watching this football match.

Andreas Mogensen: So the question is: ought we to rescue Jones, or ought we to allow the broadcast to go on? I think many people have the intuition that we ought to rescue Jones, no matter how many people are watching this broadcast. So again, this is the thought that no number of such relatively insignificant benefits should be allowed to outweigh the single terrible harm that otherwise befalls this poor man who's been trapped under the equipment in our television station.

Rob Wiblin: I suppose if there’s a small number of viewers, then we won’t feel very much tension about this. We won’t find it necessarily to be a difficult question at all. But the reason we chose the World Cup final is that potentially you can say this has happened at the central broadcasting place at the stadium, and there might be 500 million people watching the game — I guess they claim that a billion people watch the World Cup final, but I think that includes delayed transmission, so maybe only the 500 million watching live would actually be affected if they cut it off.

Rob Wiblin: But yeah, there’s a whole lot of different thought experiments that kind of have this flavour, where you’ve got a bunch of modest benefits to a very large number of people and then a very large cost to a single identifiable individual. And then people have this intuition that there’s just no number of people who can get the smaller benefit that is sufficient to offset the other case.

Rob Wiblin: I suppose these are assorted variations on the Repugnant Conclusion: the case where you compare two worlds, one where you have a small number of people all having fantastically interesting and flourishing lives, versus extremely large numbers of people who have lives that are just barely worth living. What’s the modern example that people use? I guess “potatoes and Muzak” is one summary of these lives that are barely worth living. Another one is lizards just sunning themselves on a rock, as an alternative. I don’t know how much lizards enjoy sunning themselves on a rock, but we’ll assume it’s not very much. Can you give us a taste of the case in favour of large number scepticism?

Andreas Mogensen: Yeah. So the basic argument is that we shouldn’t trust our intuitive reactions to these cases because they involve very, very, very large numbers, as you mentioned. And we are intuitively poorly equipped to grasp very large numbers. That’s the sort of basic claim.

Rob Wiblin: Yeah, in a nutshell. I guess critics might say that these cases have been almost contrived in an interesting way in order to take advantage of this fact that we can’t picture the vast numbers of lizards sunning themselves on rocks. Or we can’t picture simultaneously all of the people watching the World Cup final. And so we can’t grasp the amount of good that is at stake in those cases. Whereas it’s so easy to picture the person being electrocuted right in front of you and to fully grasp the gravity of that. So they would say, no, we need to come up with different thought experiments that don’t involve such large numbers and things that are so hard to comprehend.

Andreas Mogensen: Yeah, that’s definitely one way of putting it. There are many different arguments that have been put forward for highlighting ways in which we are supposed to be especially bad at grasping very, very large numbers. For example, perhaps one of the most famous instances of somebody who argued for large number scepticism in the literature is John Broome. And to illustrate the difficulty that we face in grasping very, very large numbers, Broome highlights people’s intuitive resistance to the theory of evolution by natural selection.

Andreas Mogensen: His thought is that people find it intuitively impossible that the process of natural selection, however many years it goes on for, should be able to take us from self-replicating RNA to something like us, that’s blessed with consciousness and intelligence and rationality. His claim is that this reflects our inability to understand just how much time is involved, just how much time is available for mutation and selection to do its job and take us from something that seems relatively simple to something as complex as a human being.

Rob Wiblin: Yeah. Do you want to explain why you don’t buy this?

Andreas Mogensen: The reason I’m sceptical of this is, in part, that I don’t recall Broome citing any additional evidence to suggest that this is indeed the explanation for people’s resistance. It might seem to be a plausible explanation for why people would be resistant to the theory of natural selection. But one reason to suppose that it in fact is not a plausible explanation is that questions about the age of the Earth have actually played quite a significant role in controversies relating to the theory of evolution and in attacks on it. There’s a well-known controversy that arose in the 19th century when Lord Kelvin produced estimates of the age of the Earth which seemed to suggest that the Earth was not old enough for Darwin’s theory to work.

Andreas Mogensen: And I think many creationists actually treat the age of the Earth as a pivotal issue in their attacks on the scientific consensus. The specific age that our planet has and the specific time that has in fact been available for natural selection to do its work is a subject of controversy. Well, it’s certainly something that people pick up on, insofar as they’re sceptical of the theory of natural selection. And that’s, to some extent, surprising if people were supposed to be guided by the intuition that any amount of time would be insufficient for natural selection to yield creatures with intelligence and consciousness. So it wouldn’t matter if the Earth was 10,000 years old, a million years old, or 4.5 billion years old. There would be no point in disputing this issue if one thought it was just in principle impossible.

Rob Wiblin: Yeah. So to give a little bit more context, I suppose we wouldn’t be surprised if people said that in order to justify allowing someone to be electrocuted, you needed to have a very large number of people watching the World Cup final. But perhaps what is a little bit more mysterious is people’s moral intuition that there’s no imaginable number of people who could be watching it that would be sufficient to justify allowing the person to be electrocuted.

Rob Wiblin: Basically it seems like over the decades, people have had the intuition that folks are not going to be very good at thinking about this case, that something must be going wrong when people picture this in their minds, and that that helps explain why they have this reaction. And they’ve made some seemingly quite sloppy arguments for dismissing this intuition, and basically you go through them one by one and pull them apart. Judging by your facial expression, maybe you’re more sympathetic to them. But I guess they range from not quite right to reasonably sloppy.

Rob Wiblin: Let’s go through a couple of them. So that was one about evolution there. What’s another argument that people have made that you reject?

Andreas Mogensen: So Michael Huemer has this paper, I think it came out in 2008, “In Defence of Repugnance,” where he mounts a number of arguments for large number scepticism. The anecdote is as follows: Imagine that an astronomer is giving a public lecture, and she mentions that the Sun is going to die in 5 billion years. Then imagine that some member of the audience becomes noticeably agitated, and the astronomer tries to reassure this person, stressing the Sun will not burn out for 5 billion years. The audience member then sighs with relief, and they say, “Oh, 5 billion. I thought you said 5 million years.”

Andreas Mogensen: Intuitively, this would be a very strange reaction for somebody to have. And the thought that we’re supposed to take from this is that mentally we are unable to sufficiently distinguish between 5 million and 5 billion. These very large numbers in our minds blur together in such a way that we are unable to appreciate the significance of the very, very large differences between these very, very large numbers.

Rob Wiblin: Yeah. A thousandfold difference. But you don’t buy it. Why is that?

Andreas Mogensen: I’m sceptical that the weirdness of the imagined audience member’s reaction reflects something that’s specific to large numbers. I think it instead reflects something specific to our attitudes towards time. So one thing you might expect is roughly that either people engage in a form of discounting — where ultimately they end up discounting the very far future into complete insignificance, so events that are 5 million years from now are viewed as more or less the same as events that are 5 billion years from now — or they resist discounting and they’re impartial across times, and then they care exactly as much about events that are 5 million years from now as events that are 5 billion years from now.

Andreas Mogensen: Those I think are the most psychologically realistic attitudes towards time that people might adopt. So it would be very odd for someone to be very concerned about events that would occur in 5 million years’ time, but not at all concerned about events that would occur in 5 billion years’ time.

Andreas Mogensen: One reason to suggest that it’s specifically the dimension of time that is doing some kind of work here is to just think about a similar kind of case where similar kinds of numbers are in play, but there’s no issue about time. The case I suggest in the paper is: Imagine that I came to believe it was certain that the US and China will go to war in this century. Now, it’s one thing to learn that 5 million people would be killed in such a conflict — obviously a catastrophic tragedy. But I would react very differently if you told me that 5 billion people were to die in such a war.

Andreas Mogensen: And I’d be kind of surprised if you were surprised by the fact that I was significantly more worried upon learning that the casualties would be 5 billion, than that there would be 5 million. But of course, these are the same kind of numbers that are in play in Huemer’s example with the astronomer. So that, to me, suggests that perhaps the issue of distance in time might be the thing that explains the oddity of the audience member’s reaction in Huemer’s vignette, as opposed to something that specifically concerns our mental ability to grasp large numbers.

Rob Wiblin: In the case where 5 billion people were dying, if that was going to happen soon, that would suggest you have a two-in-three chance of dying in this war. Whereas in the 5 million case, it’s only two in 3,000. So I guess you have a personal selfish motivation to treat them quite differently. But yeah, it does seem like time is probably the key issue in that case, and people’s discounting and so on. It’s not really about the large numbers per se. Yeah. What’s another argument people have made that you reject?
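Rob’s odds here can be checked with a quick sketch. The 7.5 billion world population is an assumption inferred from his figures (5 billion deaths implying a two-in-three chance), not something stated in the conversation:

```python
# Rough check of the odds quoted above. The world population figure is an
# assumption inferred from Rob's numbers, not stated in the conversation.
population = 7_500_000_000

big_war = 5_000_000_000 / population   # 2/3: a two-in-three chance of dying
small_war = 5_000_000 / population     # 1/1500, i.e. "two in 3,000"

print(big_war, small_war)
```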

Andreas Mogensen: There are so many. Another thing is that some people have claimed that evidence of what’s known as “scope insensitivity” in the literature on what’s known as “contingent valuation” supports this kind of large number scepticism.

Andreas Mogensen: I’ll first have to explain what this means. So contingent valuation is roughly a method that’s used to assign monetary values to various goods that don’t ordinarily have market prices — typically by asking people about their willingness to pay to provide or protect this good, or the compensation that they would be willing to accept if this good were not to be provided.

Andreas Mogensen: So that’s contingent valuation, and there are various methodological concerns that arise in relation to this that have been discussed extensively amongst economists. One especially significant one is the phenomenon that’s known as scope insensitivity, which is roughly the fact that people’s willingness to pay for various goods doesn’t increase appropriately in response to an increase in the size or scope of the good to be provided.

Andreas Mogensen: A very, very famous example of this, and the one that almost always is cited, is this study by William Desvousges and colleagues, where respondents were asked how much their household would agree to pay each year in higher prices for wire net covers to save a certain number of birds from drowning in uncovered waste oil holding ponds.

Andreas Mogensen: One group of respondents was asked about their willingness to pay to prevent 2,000 birds dying each year. A different group of respondents were asked about their willingness to pay to prevent 20,000 birds from dying each year. And a third group were asked about their willingness to pay to prevent 200,000 birds from dying each year. And it turns out that these very large differences in numbers made virtually no difference in people’s willingness to pay. So the sample mean in these three groups was $80 for the 2,000 birds, $78 for the 20,000 birds, and $88 for the 200,000 birds. The sample median was $25 for each of the three groups.
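To see what scope insensitivity amounts to here, one can divide the quoted sample means by the number of birds; a minimal sketch, taking the figures above at face value:

```python
# Implied willingness to pay per bird saved, using the sample means quoted
# above from the Desvousges et al. study.
means = {2_000: 80, 20_000: 78, 200_000: 88}  # birds saved -> mean WTP in $

for birds, wtp in means.items():
    # Near-flat total WTP means the implied per-bird value falls ~100-fold.
    print(f"{birds:>7,} birds: ${wtp / birds:.5f} per bird")
```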

Rob Wiblin: So it seems like we should value 100 times as many birds significantly more. We should be willing to pay substantially more for that, but in practice we’re willing to pay barely any different amount at all.

Andreas Mogensen: Yes. Though there are ways one might push back on that reading. One way in which you might criticise this particular study as supporting large number scepticism is by highlighting that, although I focused on the very large numbers that were mentioned, in the materials that were provided to people, some quite small numbers were also mentioned. In particular, it’s mentioned that 2,000 birds is much less than 1% of the total population of migratory waterfowl, 20,000 birds is less than 1%, and 200,000 birds is about 2% — so respondents were told.

Andreas Mogensen: So there were also some small numbers involved, and I’ve seen some suggestion that it’s possible that in some sense, what people actually care about is fractions of the bird population or something along those lines; they’re particularly concerned that we don’t lose a very large fraction of the population of birds. So in some sense, they were dealing with small numbers, because there were only small percentages involved. Now you might very reasonably think that’s just the wrong way to value birds, but…

Rob Wiblin: That’s an alternative explanation for what’s going on.

Andreas Mogensen: Yeah, exactly.

Rob Wiblin: But it seems like if you have this view, this large number scepticism thing — where you think people can’t conceive of differences between really large amounts, so they kind of value them all the same — it seems like it might be very natural to also have small number scepticism, where you think people can’t tell the difference between one thousandth of the birds, one ten-thousandth of the birds, and one hundred-thousandth of the birds. Or a one-in-1,000 chance versus a one-in-10,000 chance: famously, people are really bad at evaluating risk once the odds get really low.

Rob Wiblin: You might just think people can only evaluate things around 50%, because that’s what we have the most experience of, and it’s easiest to see large percentage differences, I guess. So in a sense, large number and small number scepticism seem like two sides of the same coin to me.

Andreas Mogensen: Yeah, though these feel like in some sense numbers that aren’t that small. One versus two, or less than one versus two, or something along those lines. The broader issue that’s kind of in the background here is that in some sense you might well believe that there’s no uniquely correct way to number something. Anything could be regarded as an instance of a larger… I think Frege has this example of a deck of cards, which is one deck of cards, but it’s also 50… How many cards are in a deck?

Rob Wiblin: Fifty-two, I think.

Andreas Mogensen: Fifty-two. So you could attach the number one to this object or the number 52 to this object. And in some sense, what we want to know is: What’s the morally significant way to number objects or to impose numbers on this choice that people are faced with? Or more specifically: What is the way that people perceive as morally important for attaching numbers to this kind of problem?

Andreas Mogensen: But there are other examples of this scope insensitivity that in some sense deal with quite small numbers. So there’s also this paper by Peter Diamond and colleagues, which is about protecting wilderness areas in Colorado, Idaho, Montana, and Wyoming. And here the choices are between protecting one, two, or three out of a total of 57.

Rob Wiblin: And did they similarly find there that people are willing to pay about the same amount for one, two, or three?

Andreas Mogensen: Yeah, that’s right. So again here, in some sense, large numbers are also in play, because if you look at how many acres are involved, it’s a large number of acres.

Rob Wiblin: So there’s a bunch of other empirical evidence that people have brought forward claiming that it supports large number scepticism, which you go through in the paper. And there’s either alternative interpretations that you think are equally or more plausible, or in fact they’ve just made a conceptual error and the evidence doesn’t really support large number scepticism at all.

Rob Wiblin: But after reading the paper, I still am a large number sceptic, I think. And by “large number,” I mean two or more. I think basically what goes wrong when people evaluate these cases is something along the lines of, you ask them to say how much they’re willing to pay to prevent these birds from drowning in an oil spill or whatever. You can ask people how much they are willing to pay for one bird or 10 birds or 100 birds. But the decision procedure is they picture a bird covered in oil or a bird suffering. And then they think intuitively, “How bad is this? How much is my heart being pulled by this image of one bird?” And then they decide how much to pay based on that.

Rob Wiblin: Now it’s not exactly like that. I expect you’d see some variation as the numbers went up from one to 10 to 100, say. Nonetheless, most of the work is being done by the mental picture of that one specific thing. And I think that this can help explain why people have the view that no number of people watching the World Cup would justify allowing someone to be electrocuted: they picture one person watching the World Cup and they picture one person being electrocuted. And they think, well, clearly the electrocution is way worse.

Rob Wiblin: And then they try to picture one person being electrocuted against 500 million people watching the World Cup. But all they do is, again, picture one person watching the World Cup, because that’s all you can really mentally imagine; you can only ever be one person at a time. And then they notice, well, this is just the same. This feels just as bad. Even though I’ve increased the number massively, my evaluation hasn’t shifted. And so extrapolating upwards, there would be no number of people watching the World Cup that would change my judgement.

Rob Wiblin: In a moment I’m going to explain that I don’t share this judgement at all; I don’t have this intuition. I’m trying to imagine what might cause me to think this. But what do you make of that picture?

Andreas Mogensen: Yeah. So one concern I might raise is that the psychological theory you’ve proposed might also seem to suggest that people should, say, think it no more important or no better to save five lives as opposed to one life. The thought being, on the kind of story you told: I imagine the people who are over here, I can save this one person over here, and I imagine what their life would be like. And then when confronted with the five people over there, I imagine what one of their lives would be like. And I’m sort of mentally unable to multiply these lives. In some sense, I’m still in the grip of just thinking about the single life and what that life would be like. But I think in fact, people do believe it’s more important to prioritise saving the five lives over the one life. So it must be some specific problem with their inability to grasp very large numbers that we have to invoke.

Rob Wiblin: Yeah. These experiments that you mention in the paper, they use this strategy where they would randomly ask different people how much they’d be willing to pay for different numbers of birds. So the direct comparison couldn’t be made. So when you think about 20,000 birds, you still just picture one, or at least you don’t even know how much you value one. So you start out by picturing the one.

Rob Wiblin: In the case with the same person, if you one after another put directly one person or five people, I think people can immediately see that there’s something wrong with valuing one the same as the five. It allows them to engage a different mental process, where they weigh them up against one another directly and say, the five should be five times as valuable. So there’s probably a different mental procedure going on here. I suppose then you would say, well, why don’t they do that with the World Cup case? Because they do have a chance to reflect still and think that there’s way more, right?

Andreas Mogensen: Yeah. That’s right. So one of my particular criticisms of using this contingent valuation literature to support large number scepticism is that these really dramatic examples of scope insensitivity — like the case of saving the birds from drowning in these waste oil holding ponds — those all arise, as you said, when the experimental design involves separate evaluation: different people (or sometimes the same person at different times, but typically different people) are asked to state their willingness to pay to provide a benefit of a certain size or scope. You don’t get these effects if people are asked to state their willingness to pay under joint evaluation, where they have the two options side by side and can easily make comparisons.

Andreas Mogensen: And as you mentioned, the intuitions that we are dealing with about this World Cup case or the Repugnant Conclusion, these do involve joint evaluation: we’re simultaneously considering the two options that we can bring about. And for various reasons, there’s a view that joint evaluation facilitates the comparison of different options. It makes the problem informationally easier, and judgements in joint evaluation are therefore more likely to be reliable, you might think.

Andreas Mogensen: So there’s a question about how much one can learn by looking at people’s reactions to choices under conditions of separate evaluation and extrapolate from those cases to make judgements about what we should expect under conditions of joint evaluation — which are the kind of conditions we are dealing with when we reflect on cases like the World Cup case or the Repugnant Conclusion.

Rob Wiblin: OK, so trying to bring back in the big number thing. You can imagine that maybe the first-pass thing is to imagine just the individual case, but then you’re like, I actually want to try to weigh up the number. And so you try to picture a larger number of people watching the World Cup and think about how good that is. And you strain to imagine a big crowd of people all enjoying and watching the World Cup and having a great time.

Rob Wiblin: And you’re like, no, that’s still not enough to offset it, even though you might begin to feel some pull as the crowd gets bigger. But at some point you actually just can’t picture any more. You’ve maxed out the number of people watching the World Cup in some enormous crowd. And it’s still not enough. Maybe you give up trying to picture any greater good there, because it just becomes so difficult to mentally imagine. Do you think that could be a contributor to people then concluding that there’s no number that would be sufficient to offset it? Or am I being too cynical about people’s mental processes?

Andreas Mogensen: Yeah, I think something like that might be true, but I’m unconvinced of the extent to which this ability to pictorially imagine a situation is going to be the deciding factor in people’s judgements. So I might think if I ask people, “Is it better to save a trillion people, or a trillion and one persons?,” I think most people would say that it’s better to save the trillion and one than the trillion people.

Rob Wiblin: But it feels easier to analytically reason through why that must be the case without needing to picture the two different things. I imagine the decision procedure that people use to spit out the answer that a trillion and one is better is quite different from the one where they’re trying to picture these almost maximally different goods?

Andreas Mogensen: So the conclusion of the paper is merely that the case for large number scepticism is as yet inconclusive. And I end the paper by encouraging people to do more research that would address this question more directly. It might be that hypotheses along the lines you suggested are correct and explain people’s intuitions about these cases. But it is ultimately an empirical question, and part of the claim I make is that the existing empirical literature does not warrant a conclusion of large number scepticism.

Andreas Mogensen: But I certainly don’t want to rule out that additional investigations could find more definitive evidence that would support large number scepticism. In particular, definitive evidence that would suggest some kind of inappropriate upper bound in people’s ability to respond to very, very large numbers. But I do think it’s then an empirical question to which I don’t think we know the answer: When people reason about these large numbers, what is the relative contribution of their ability to picture in their minds this large collection of individuals, versus their ability to recognise that in a more abstract or intellectual sense, what kind of numbers of individuals are involved and to be guided by those considerations?

Rob Wiblin: As you can tell, I agreed with you that the arguments that people had put forward for large number scepticism in the past were much more wanting than I’d appreciated when I’d seen them before. I’d kind of been taken in by them. I guess because I found the conclusion so intuitively plausible, I wasn’t that tempted to scrutinise the arguments people were making maybe as much as I should have.

Rob Wiblin: I’d be curious to know what a survey would actually find about what a wide selection of randomly chosen people from the public make of cases like this. Because at least for myself, as I mentioned, I actually don’t share this intuition at all, that there’s no number of people who could watch the World Cup where it would be justified to allow someone to die by electrocution. And in fact, I think that intuition that there’s no number is actually crazy and ridiculous and completely inconsistent with other actions that we take all the time.

Rob Wiblin: If you think about the World Cup final, about 500 million people watching live and all of the amount of effort that has gone into allowing this event to go ahead: many people have died in order to cause this event to go ahead in the construction of the stadiums. It’s very common for people to die. I mean, famously in Qatar, tonnes of people have died in the construction of all of these stadiums. But even just in a normal case, when you’re doing big construction works, people die. Having people come in to the stadium, probably some people died in car crashes. At least one person might have died in a car crash so that everyone could get there. Or at least if that happened, we wouldn’t think that was a decisive argument against holding the World Cup final, that someone might well die in a car crash as a result of all of the traffic and all the travel that’s involved in it going ahead.

Rob Wiblin: Let alone the number of lifetimes that are spent effectively watching the game. I think I calculated this out at some point; I think many lifetimes basically are spent just watching this individual game, because people don’t live 500 million hours. We live substantially less than that. And then on top of that, just think about the opportunity cost of hosting the World Cup, or even just the final: we’re talking about tens of millions, maybe hundreds of millions of dollars that go into allowing this event. Amounts that could have saved thousands of people’s lives.

Rob Wiblin: Now, we’re apparently comfortable with that, or at least every time the World Cup happens, there’s not a flurry of people saying that it’s a moral outrage that this was allowed to happen rather than lives being saved. On this view, it would be an outrage that things happen in general, because sometimes people die as a result. Sometimes there’s industrial accidents. And so we’d just have to shut the whole thing down. Indeed, shut down all entertainment as far as I can tell, because the death of one individual is given absolute priority over the entertainment of an unlimited number.

Rob Wiblin: So anyway, that’s my rant about this case, where I’m saying people who think that you should definitely, no matter what, always shut things off in order to save the person from electrocution, I think need to think about what implications that would have for society as a whole and how basically it would necessitate the ending of basically everything that makes life worth living.

Andreas Mogensen: Yeah. This is a relatively widely discussed objection to this kind of principle. I think the key issue that it raises is how one should formulate this view for conditions of uncertainty. So at least for the cases of people dying in construction accidents in building stadiums that you mentioned, you might think the salient characteristic of these cases is that nobody knows in advance who will die. Nobody knows in advance that this particular person is going to lose their life so that these people can enjoy this sporting event. Whereas in the case that I described, it is certain that Jones will suffer these terrible harms. And it is also certain, as we stipulate, that these many people will otherwise enjoy this benefit from watching the event.

Andreas Mogensen: So there’s this question of, although it is uncertain who will die, we nonetheless, in some sense, know the outcome: that someone or other will lose their life. But we can, at this cost, allow people to enjoy these relatively minor goods.

Andreas Mogensen: I think the salient question that it raises for this view — and this is an ongoing subject of controversy in the current literature on this problem — is how does this view work under risk? And in particular, should we think that a view of this kind must recognise something like a morally significant difference between identified and statistical victims? There are indeed people who have argued that, roughly speaking, that is how we should understand a view of this kind as playing out in cases of social risk-taking.

Andreas Mogensen: There’s a great paper by Johann Frick on this, which appeared in Philosophy & Public Affairs a couple of years ago. It’s very much addressed to this issue, which I think is a recognised objection to these views, which deny that we can aggregate very many minor benefits so as to outweigh a single significant harm. I think Elizabeth Ashford was probably one of the first people to really press this point: that this view is overly demanding on us — or may be perceived as overly demanding on us, in roughly the ways that you described.

Rob Wiblin: So just while you were talking, I got to calculate how many lifetimes are spent watching the World Cup final. And I think if it goes for two hours and there’s half a billion people watching, then roughly 1,437 lifetimes are spent just watching the game. Many multiples of one person’s lifetime.
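Rob’s back-of-envelope figure can be reproduced as follows; the roughly 79-year lifespan is an assumption chosen here to match, not a figure stated in the conversation:

```python
# Check of the "roughly 1,437 lifetimes" figure: 500 million viewers each
# watching a 2-hour match, converted into human lifetimes.
viewers = 500_000_000
match_hours = 2
total_person_hours = viewers * match_hours  # 1 billion person-hours

# Assumed average lifespan of ~79.4 years (an assumption, not from the talk).
hours_per_lifetime = 79.4 * 365.25 * 24  # ~696,000 hours

print(round(total_person_hours / hours_per_lifetime))  # ~1437
```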

Rob Wiblin: So yeah, the statistical life versus identifiable life is kind of a classic issue. I suppose inasmuch as things become impermissible once we can identify victims, it would suggest that hypothetically, if we could get very good at predicting the future — let’s say we built a fantastic model of the world that would allow us to identify what people would die when you build a building rather than being unsure about it — then suddenly most of modern civilisation would become impermissible, because now we’ve turned statistical victims into individual victims.

Rob Wiblin: I guess we also have to suppose that we couldn’t act on this information. So it’s like a Cassandra model, I suppose, that can tell us who will die if we go ahead, but we won’t be able to stop them. Then in principle, suddenly vast numbers of activities that were previously allowed would now become unallowed. It wouldn’t matter how great the benefits were, as long as there wasn’t a sufficiently large benefit to a specific other identifiable individual.

Andreas Mogensen: Yes. Another point that has been raised in relation to this is something like, if we imagine the full-scale effect of the many prohibitions that would have to be enacted in order to comply with this kind of regime that you’re imagining, then cumulatively the burdens on particular individuals might indeed be very significant.

Rob Wiblin: Yeah. Do you want to maybe unpack the intuition about the weighing of harms versus benefits when they’re very small versus very large? If you personally feel this intuition that there’s no number of small benefits that would outweigh a massive harm to an identifiable individual? I’m not sure whether you actually do feel that intuition personally.

Andreas Mogensen: I definitely feel the intuition. I’m not sure that I ultimately accept this view, in large part because it’s very difficult to make this view work in conditions of uncertainty. At one point, I thought it could be made to work. Then I had a conversation over dinner with a philosopher from UCL, Joe Horton, who’s very much against this kind of view. And he very much made me see that under conditions of risk, it’s very difficult to get plausible answers from this kind of view.

Andreas Mogensen: I’m still sort of uncertain, but I think there are many good reasons to be sceptical of this intuition in these cases — involving how it operates under risk, and its potential to be extremely demanding of us if we, in some sense, don’t have the right take on how to deal with cases of risk. I think these are excellent reasons to query this intuition, but I certainly feel it very strongly.

Rob Wiblin: I see. And the risk issue is kind of a variation on the rant that I was giving, where you’re saying there’s many different things that we do that run the risk of some identifiable individual suffering some calamity. And so this would prohibit too much of our behaviour.

Andreas Mogensen: Yeah, yeah. Definitely. The people who die building stadiums is actually an issue that’s been raised in this literature specifically.

Rob Wiblin: Oh, right. Oh, great.

Andreas Mogensen: There are other examples. Take water fluoridation, for example. Very rarely there’s an accident where fluoridation of the water causes someone to die from an excess of fluoride. I think this happened last time in Alaska in the ’90s or something like that.

Rob Wiblin: OK, so very rare. Yeah.

Andreas Mogensen: Yeah. Or at least this is the last time it happened in the US or something like that.

Rob Wiblin: And it’s like, how many cavities do you have to prevent? It’s a similar case.

Andreas Mogensen: Yeah.

Rob Wiblin: I suppose in that case, you don’t have the intuition that fluoridation has to be impermissible because at some point someone might die.

Andreas Mogensen: Yeah.

Rob Wiblin: It’s very interesting. I wonder what is going on with the World Cup case that makes it different. I wonder if there’s something about the frivolity of the World Cup somehow that raises different intuitions than the healthcare-versus-healthcare case in the fluoridation of water?

Andreas Mogensen: Well, insofar as you want to defend a view of this kind about aggregation, I think the line you would take is that under conditions of risk, each person’s claim on you to be provided with some benefits needs to be discounted by the probability that they in particular will be benefited if you take some action. So under conditions of full certainty — where one person will certainly die or suffer some very terrible harm unless we aid them, and these many other people could otherwise certainly be provided with some comparatively trivial benefits — that person’s very significant, very strong claim on us to be aided is not discounted at all, because it’s fully certain that we would aid them significantly if we tried.

Andreas Mogensen: By contrast, in the cases where if we undertake some action, a random process will be initiated that we are very confident, perhaps certain, will significantly harm someone or other, but will with certainty provide comparatively minor benefits to a very large number of people, each person’s claim not to be significantly burdened by that very bad outcome that we know will be imposed on someone or other needs to be discounted by the probability that they in particular will be harmed. And therefore each person’s claim ultimately against you is not very strong, and is not significantly stronger and perhaps not stronger at all than each person’s claim on you to provide them with this relatively minor benefit with certainty.

Andreas Mogensen: The broader picture here is that ultimately what we need to do is think about the strength of people’s claims on us to provide them with particular benefits. We think that a person has a very strong claim on us to be provided with a very significant benefit with certainty, and such a claim cannot be outweighed by any number of minor claims that people have on us to be benefited in some modest way with certainty.

Andreas Mogensen: But under conditions where everything is uncertain and there’s no identified individual whom we know will be harmed by this random process that we initiate — although we know someone or other will be seriously harmed — each of these people who may be harmed in fact has only a relatively modest claim on us against undertaking this action, which is discounted in proportion to the probability that they in particular will be harmed. Therefore their claim is roughly of the same category of moral seriousness as the claims that others — or perhaps even that person themselves — might have to be benefited, more or less certainly, in this otherwise relatively modest way.

Andreas Mogensen: That’s the broad theoretical picture that’s developed by Johann Frick in this paper. And I think something along these lines is what you would wish to accept insofar as you want to defend a view that says you shouldn’t aggregate the many minor benefits in the World Cup case, but you should be willing for us to run these societal risks that inevitably result in some person or other dying, but provide these large numbers of relatively modest benefits.
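[The discounting idea Andreas describes can be made vivid with a toy calculation. The numbers below are arbitrary illustrative assumptions, not from Frick’s paper: they just show how discounting each claim by the probability that this particular person is affected separates the identified-victim case from the statistical one.]

```python
# Toy sketch of probability-discounted claims. All numbers are
# illustrative assumptions, not drawn from the literature.
DEATH = 1_000_000   # strength of a claim not to suffer a terrible harm
CAVITY = 1          # strength of a claim to a comparatively trivial benefit

def claim_strength(stakes, probability):
    """A person's claim is their stakes, discounted by the chance
    that they in particular will be the one affected."""
    return stakes * probability

# Identified-victim case: one particular person will certainly die
# unless aided, so their claim is not discounted at all.
identified = claim_strength(DEATH, 1.0)

# Statistical case: 10 million people each face a 1-in-10-million
# chance of being the one harmed, while each certainly receives
# the trivial benefit.
n = 10_000_000
statistical = claim_strength(DEATH, 1 / n)  # heavily discounted
benefit = claim_strength(CAVITY, 1.0)       # not discounted

# On this view: no number of trivial claims outweighs the identified
# claim, but each statistical claim is weaker than each trivial-benefit
# claim, so the risky social activity can still be permissible.
print(identified, statistical, benefit)
```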

Rob Wiblin: Yeah. Nice. Zooming out a little bit, the arguments that people have raised about our inability to properly conceive of very large numbers of things might be called “psychological debunking arguments,” kind of deconstructing how we think about things and using that to show that our moral intuitions are wrong or not trustworthy.

Rob Wiblin: But another class of debunking arguments that you’ve spent quite a lot of time looking at are evolutionary debunking arguments, where basically people will reason that as a result of evolutionary pressures, we are inclined to form a particular view because that was selected for in the ancestral environment. So the fact that we have these moral intuitions as a result of these evolutionary pressures shouldn’t be regarded as a very strong reason to endorse those intuitions. Was that an OK explanation of them?

Andreas Mogensen: Yeah. In some sense, the key issue isn’t merely that they were selected for — because you might think that all kinds of mental dispositions that we have were selected for, but most of those belief-forming dispositions are reliable. So, the key claim that people typically advance in order to run these evolutionary debunking arguments is that there was selection for particular moral beliefs or the disposition to adopt particular moral beliefs, and these selection pressures can be explained without making any assumptions about the truth — or, for that matter, falsity — of these moral beliefs.

Andreas Mogensen: In some sense, natural selection is indifferent to the truth or falsity of the moral beliefs that are favoured by natural selection, in a way that natural selection is not indifferent to the truth or falsity of the kind of beliefs…

Rob Wiblin: “Does that tree exist?”

Andreas Mogensen: Yes, exactly. So ordinary perceptual beliefs, introspective beliefs, inductive inferences from the past to the future — all of those kinds of cognitive dispositions you would expect are favoured only insofar as they are reliable guides to the truth. But people have tended to think that this doesn’t seem very plausible for the case of the operation of natural selection on our moral psychology.

Rob Wiblin: Yeah. That kind of argument makes intuitive sense to me, and I think a lot of people. But you think that it shouldn’t be regarded as a super strong argument. Can you flesh out why?

Andreas Mogensen: Yeah. In large part I’m simply unconvinced by the arguments that have been taken to show that if these truth-indifferent selection pressures have operated in our past, then this sort of undermines our reasons for taking our moral intuitions or our evolved moral beliefs seriously. I think there are three broad classes of arguments that have been used here. These arguments sort of invoke three different kinds of epistemological principles — principles that I think should govern our beliefs, and, in particular, govern when we should revise our beliefs in response to additional evidence.

Andreas Mogensen: So one kind of argument that people sometimes make is a kind of Occam’s razor–style argument. So they claim, very roughly, that we should accept all and only the things that we are committed to as part of the best available explanation of our experience. And because they think the evolutionary debunking argument suggests that we can explain why we have the moral beliefs that we do without invoking any moral facts or any moral properties, this provides an argument by appeal to Occam’s razor for supposing that, in fact, there are no such moral facts or moral properties corresponding to the moral beliefs that we have.

Andreas Mogensen: A very closely related argument invokes a kind of principle of what’s often called “sensitivity” as governing our beliefs. Very roughly, this is the claim that you ought to give up a certain belief that you hold if you discover that you would hold this belief even if it were false. People think this applies in this case for the reason that the facts about natural selection that we’ve rehearsed — namely, natural selection’s indifference to the truth or falsity of the moral beliefs that it favours — should lead us to suppose that the moral beliefs that we hold are moral beliefs that we would hold even if they were false. And therefore, by this sort of principle that I suggested earlier — that we only hold beliefs that we don’t have reason to think are insensitive to their own truth or falsity — we ought to give up these beliefs.

Andreas Mogensen: So these are sort of two key lines of arguments. On the one hand, you can question each of the principles that I’ve just outlined. For example, one worry that you might have about both of the principles that I’ve just suggested is that they can lend themselves somewhat straightforwardly to a very general kind of scepticism about the external world.

Andreas Mogensen: So take the principle that we should accept only that which is required in the explanation of our experiences. I might worry that a principle of this kind gives us no reason to believe that there is an external world composed of familiar three-dimensional objects — as opposed to, say, that we are brains in vats whose experience is purely simulated and in fact we don’t have any hands, and virtually everything we believe about the external world is false.

Andreas Mogensen: Similarly, the principle that one ought not to hold a belief if one knew one would hold this belief even if it were false, suggests that one ought not to believe that one isn’t a brain in a vat of the kind that I just described. For, if we were, in fact, merely envatted brains and not people who have bodies, we would nonetheless believe that we are people who have bodies and not envatted brains who are trapped in a kind of computer simulation.

Andreas Mogensen: The issue this obviously raises is what attitude you take towards these sceptical hypotheses in general. I think most philosophers are inclined to believe that the correct theory of knowledge, the correct epistemology, should not license this kind of broad scepticism about the external world that seems to be supported by these principles.

Andreas Mogensen: Even if you think that external-world scepticism is actually somewhat plausible, one thing you might suggest here is that at the very least, insofar as these arguments rely on these principles, they perhaps suggest that our beliefs about right and wrong are in no worse a position than our beliefs that we have hands, say. If you’re just calling more or less everything into question, then the fact that you would also call our moral beliefs into question might not be too surprising. But it doesn’t seem to suggest any special or particular problem with our moral beliefs.

Rob Wiblin: I see. Couldn’t I react and say, I am not sure whether the external world exists, and I’m not sure whether I’m a brain in a vat, and I agree I would have similar experiences in either case. And likewise, I’m just not sure about what moral facts are either, because I think that I would take this or that other position because of evolution either way. Because in this case, you’re not asserting that the external world doesn’t exist, you’re just saying, it’s hard for me to know. To some degree it’s unknowable. And it seems like it’s a very plausible thing to both say that and to say that moral facts are unknowable as well.

Andreas Mogensen: Yeah, certainly. I mean, being uncertain is one thing. I’d take the sort of fully fledged scepticism that we’re dealing with here as a view in which I have no reason to believe that I have hands, say, or I have no reason to believe that it is wrong to torture innocent people or something like that. It’s not merely a matter of being uncertain, but a matter of, in some sense, completely suspending belief, because one thinks that one has no reason to believe any of these things. That’s a little bit more extreme and therefore might seem a little bit less inviting.

Andreas Mogensen: But yes, one could in principle be willing to embrace a kind of general scepticism about many of our beliefs, including our beliefs about the external world. But certainly many of those who have formulated and pressed these evolutionary debunking arguments are not fans of a kind of generalised scepticism, and want to try to highlight some specific problem that applies to our moral beliefs.

Andreas Mogensen: So one problem is that the kind of epistemic principles that I’ve just outlined have this problem that they lend themselves perhaps somewhat too easily to constructing general sceptical arguments. So that’s one kind of worry that we might have about these arguments.

Andreas Mogensen: Then a second worry that I emphasise more is: so take this view that says we should rely on Occam’s razor and we shouldn’t be willing to admit anything into our picture of the world which doesn’t feature in the best explanation of our experiences. There’s a big controversy that started in 1977 about whether moral facts ever explain our moral intuitions or our moral beliefs, which begins with this very famous thought experiment by Gilbert Harman: You turn down an alley and you see some children setting a cat on fire, and you instinctively form the belief that what they’re doing is wrong. And the question that Harman wanted to discuss was: does the wrongness of what the children are doing explain why you believe that what they’re doing is wrong?

Andreas Mogensen: And people take different stands on this debate. Some people have said no and some people have said yes. Most of the sort of people who identify with the kind of naturalist realist view of ethics have taken the view that yes, indeed, the wrongness of burning the cat explains why you think that burning the cat is wrong. A naturalist realist view is roughly a view on which indeed there are objective facts about right and wrong, but these are natural facts similar to the kind of facts that are investigable by the empirical sciences, or they are themselves facts of the kind that are amenable to empirical investigation.

Andreas Mogensen: There’s been this longstanding debate about whether moral facts enter into the explanation of our moral beliefs. And you might think that this discussion about evolutionary debunking arguments is sometimes raising just the same issue, because at least certainly when the appeal is made to Occam’s razor, the idea is that we can, in some sense, show that moral facts are not ever needed to explain our moral beliefs.

Andreas Mogensen: So one sort of worry that I raised for that line of argument is that, in and of themselves, evolutionary facts do not show that moral considerations are irrelevant in explaining why we have the moral beliefs we do. They only show that such facts are explanatorily irrelevant in accounting for why those beliefs were selected for in the first place.

Andreas Mogensen: So it can, in principle, be true that some fact figures in the explanation of why particular organisms within their lifetimes acquire a certain trait, without being part of the explanation for why those traits were selectively advantageous — in the sense that had they arisen from any other cause within the organism’s lifetime, they would nonetheless have been equally advantageous.

Andreas Mogensen: So in this paper, I use this very stylised example — which is wholly fictional, so far as I know — of insects that acquire a green coloration by eating a certain kind of moss during a juvenile stage. Having this green coloration is very important for their survival, because it allows them to become camouflaged, but the fact that one becomes green by eating this kind of moss is completely irrelevant to the explanation of why it is important to be green. And so the connection to evolutionary debunking arguments — which is perhaps a little bit hard to see —

Rob Wiblin: I see it Andreas, but can you explain it for the audience?

Andreas Mogensen: So it could in principle be true that moral facts enter into the explanation of our moral beliefs by virtue of explaining why we acquire the beliefs that we do in the course of our lifetimes, without entering into the explanation for why those beliefs may have been selectively advantageous to hold over evolutionary time. And nothing in the evolutionary debunking argument suggests that this is not the case.

Andreas Mogensen: And basically this earlier debate that arose from this argument by Gilbert Harman, and this example with the cat being set on fire, was all about this question: “Do moral facts figure into the explanation for why we adopt the moral beliefs that we do in the course of our lives and in the course of our experience?” Some philosophers thought the answer to that question was yes. And at least I claimed that the kind of evolutionary debunking arguments that invoke, say, Occam’s razor provide no reason for those philosophers to think that they were wrong about that. Though, of course, you might think they’re wrong for independent reasons.

Rob Wiblin: OK, right. We are running out of time and I’ve got another section, so we’ll have to push on from this one. But is there something that people could go and read that somewhat sums up your take on evolutionary debunking arguments?

Andreas Mogensen: People can read my PhD thesis online, which is the best summary of what I believe. But the kind of points that I’ve just been discussing are best captured in this paper that I published in Analysis called “Evolutionary debunking arguments and the proximate/ultimate distinction.”

Rob Wiblin: OK, we’ll stick up a link to both of those. And if anyone properly reads Andreas’s PhD thesis, then I guess drop him an email and let him know.

Rob Wiblin: Yeah. What’s an important way your philosophical views have changed over the last 10 or 15 years?

Andreas Mogensen: Perhaps those of my views that have changed the most, or the most frequently at least, are my views about metaethics — very roughly, my views about the sort of ultimate metaphysical nature of right and wrong. I think about 15 years ago, I was probably an undergraduate still, and at that point I think I was very strongly sold on some form of antirealism — in particular, some form of moral error theory or nihilism seemed very plausible to me.

Andreas Mogensen: And then I became a naturalist realist of the kind that I described earlier — so I thought that there probably were objective facts about right and wrong, and these were ordinary, natural facts of the same kind as the kind of facts studied in the empirical sciences.

Andreas Mogensen: Then I became a non-cognitivist for a while. This is roughly the view that moral judgements are not beliefs that purport to represent the world as being a particular way, but more like desires — more like a way of taking up a practical stance to the world. And I held onto that view for quite a long time.

Andreas Mogensen: And then, more recently, I switched over to non-naturalist realism — the view that there are objective moral facts, and the fundamental principles governing these facts are not of a kind amenable to empirical investigation.

Rob Wiblin: I see. Metaethics sounds hard. What could listeners look at if they wanted to understand your current view of metaethics?

Andreas Mogensen: I think Michael Huemer’s book, Ethical Intuitionism, is a very good defence of the kind of view that I’m quite attracted to at this point.

Rob Wiblin: Yeah, OK. We might get Huemer on the show at some point; he seems like a really fun character.

Rob Wiblin: You wrote nine or so years ago saying that you didn’t think giving 10% of one’s income was likely to be negative for someone’s wellbeing if they had the kind of middle-class existence in a rich country like you do. How do you think that stands up today? Does it match your experience as well over the years?

Andreas Mogensen: Yeah. It definitely seems to match my own experience of having taken this Giving What We Can pledge. I think to a large extent that hasn’t really been a burden for me at all, in large part because it’s made me part of this fantastic community of people that I wouldn’t otherwise have been a part of, which wasn’t really the focus of the kind of research to which I was referring way back then.

Andreas Mogensen: It’s been a while since I looked into this literature, but I remember at one point worrying that the replication crisis had probably felled many of the results on which I relied, and going to check whether this was the case — and not, in fact, finding significant instances of failed replications of the research on which I was relying in that paper. I think that was a couple of years ago at this point, so things may have changed since, but my impression at that time was that much of the research I was relying on still held up.

Rob Wiblin: What’s a question or two in the field of global priorities research that you are excited about that might not be already very prominent among the audience?

Andreas Mogensen: One thing that’s a kind of work in progress that I am especially excited about is a paper that’s being worked on by Jacob Barrett and Loren Fryxell at the Global Priorities Institute, where I’m based. This is about the neglectedness heuristic that’s used by 80,000 Hours in prioritising amongst different projects, and is widely accepted or widely used amongst effective altruists.

Andreas Mogensen: So, roughly, the neglectedness heuristic says that we should prioritise working on causes that are neglected, typically because it’s assumed that there are diminishing returns in different cause areas. And there’s been this concern that this may not always be true — and in particular may not be true in political contexts, where it’s often extremely important that one is able to reach some kind of tipping point, where if enough people contribute then some kind of threshold is crossed and some kind of desirable outcome is brought about in that kind of setting.

Andreas Mogensen: And even in that case, where you sort of know that there are such tipping points, there are arguments that you should think that, in expectation, benefits decrease at the margin. I think there’s some early work by Owen Cotton-Barratt on this, and there’s lots of stuff to go into on this. But Jacob and Loren are working on this paper where, as I recall from when this was presented at a workshop, they are going to show that in fact this neglectedness heuristic does mislead us in some important sociopolitical contexts, and that in such contexts it can be more important, or better, to follow that kind of bandwagoning heuristic of joining in where many other people are already piling on.
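[The contrast Andreas gestures at can be sketched with a toy model. Everything here is an illustrative assumption added for this transcript — the functional forms and the threshold of 100 are arbitrary: under smoothly diminishing returns the neglectedness heuristic works as intended, while near a tipping point the marginal contributor can matter most precisely where many people have already piled on.]

```python
import math

# Toy model, purely illustrative: the value of a cause area as a
# function of how many people already work on it.

def diminishing(n):
    # Logarithmic returns: each extra contributor adds less than the last.
    return math.log(1 + n)

def tipping_point(n, threshold=100):
    # All-or-nothing: value arrives only once the threshold is crossed.
    return 1.0 if n >= threshold else 0.0

def marginal(value_fn, n):
    # Value added by the (n+1)th contributor.
    return value_fn(n + 1) - value_fn(n)

# Under diminishing returns, the neglectedness heuristic holds:
# an early contributor to a neglected cause adds more than a late one.
assert marginal(diminishing, 1) > marginal(diminishing, 1000)

# Near a tipping point the heuristic reverses: the 100th contributor
# is decisive, even though 99 people have already "piled on",
# while the 51st contributor (in a neglected cause) adds nothing.
assert marginal(tipping_point, 50) == 0.0
assert marginal(tipping_point, 99) == 1.0
```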

Rob Wiblin: Fantastic. If that’s online, we’ll find it and stick up a link to it.

Andreas Mogensen: It’s definitely not out yet.

Rob Wiblin: Oh, it’s definitely not out. OK.

Andreas Mogensen: Yeah, so it’s definitely work that they’re still working on. I’ve seen a conference presentation of this, but I don’t think any paper is publicly available or shared yet.

Rob Wiblin: OK. I think there’s other critiques of the neglectedness heuristic that are out there, so maybe we’ll try to find the best one, and stick up a link to that instead.

Rob Wiblin: Well, it’s almost 8:30, so regrettably it’s time to release you back into the wild. One final question is another one from the audience: “On a more personal note, I’d be curious to know what influence Derek Parfit has had on Andreas, both intellectually and personally. As far as I know, they were both at All Souls, although I’m not sure whether they were there at the same time. But generally Parfit seems to be referenced a lot in GPI papers, so I’d be curious to know what influence he’s had.”

Andreas Mogensen: Unfortunately, Derek retired from All Souls the year that I became a fellow of All Souls, so I didn’t actually have a chance to interact with him very much. Through his work philosophically, he’s been an extraordinary influence on me, but I didn’t get that much chance to interact with him in person.

Andreas Mogensen: One year we did set the philosophy paper for the Examination Fellowship together, which was an interesting experience in that his approach to setting exam questions was, as I recall, that we should try by and large to recycle past questions, and avoid, so far as possible, raising questions that are supposed to be tricky or require you to approach a familiar topic from an unfamiliar angle — which I think are otherwise questions of a kind that are beloved of people who set philosophy exams, at least in Oxford.

Andreas Mogensen: I think his view was something like, the best philosophers are not always quick on their feet and so exams shouldn’t require them to reason about a topic in a completely unfamiliar way, as opposed to giving them an opportunity to just tell us the arguments and ideas they’ve already developed and internalised. At the time, I thought this was sort of bizarre, but over time I’ve become more and more inclined towards this kind of view, and have become more and more inclined to rely on this sort of approach in cases where I’ve been called upon to set exams.

Rob Wiblin: I guess giving people quirky questions that throw them off maybe introduces noise into the examination process in a way as well. I suppose that’s kind of the idea about people thinking quickly on their feet, is that some people will get lucky and have good ideas quickly on their feet and other people won’t, and that’s not really measuring their underlying philosophical ability, which maybe requires a bit more time to come to fruition.

Andreas Mogensen: Yeah.

Rob Wiblin: All right, my guest today has been Andreas Mogensen. Thanks so much for coming on The 80,000 Hours Podcast, Andreas.

Andreas Mogensen: Thanks so much for having me.

Rob Wiblin: Three quick notices today.

We’re currently working to hire a full stack web developer for our job board. You can find out all about that role at 80000hours.org/latest, or check out the job board as it currently exists at 80000hours.org/jobs. Applications for that close on the 26th September.

Second, we’ve started putting out weekly newsletters covering some of the most important things we learned about that week. A lot of that content isn’t available anywhere else, so if you’d like to keep track of what the research team here is thinking about you can join the list at 80000hours.org/newsletter.

Finally, I’ve mentioned the show Real Dictators before as one of my favourite podcasts to listen to for pleasure. But I just wanted to remind people about it because I’ve so enjoyed listening to their recent biographies of Napoleon and Pol Pot among others.

Another podcast you might be interested in is The Most Interesting People I Know, made by my friend Garrison Lovely. It’s a long-form interview show like this one and attempts to cover similar themes but more through the lens of US progressive politics, seeing as how Garrison is American and personally feels a lot of affinity for the left.

It’s inherently interesting to see how the ideas we talk about here are interpreted overseas and by people with diverse political backgrounds. Sadly Garrison isn’t actively making new episodes just now but maybe he’ll resume some time in future. One of my favourite episodes was episode 16, Andrés Gómez Emilsson on solving consciousness and being happy all the time.

And speaking of Andréses, don’t miss my interview with Andrés Jiménez Zorilla about the Shrimp Welfare Project over on our other show, 80k After Hours. It’s just very good.

All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.

Audio mastering and technical editing for this episode by Ben Cordell and Beppe Rådvik.

Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.

Thanks for joining, talk to you again soon.