80,000 Hours Podcast

A casino offers you a game. A coin will be tossed repeatedly until it comes up heads. If it comes up heads on the first flip you win $2. If it first comes up heads on the second flip you win $4; if on the third, $8; the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for 0.5 × $2 = $1 in expected earnings. A 25% chance of winning $4, for 0.25 × $4 = $1 in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount!
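To see the divergence concretely, here is a minimal Python sketch (an editorial illustration, not from the episode) that computes the expected value of the game truncated after n flips. Each term contributes exactly $1, so the truncated expected value is $n and grows without bound:

```python
# Expected value of the St. Petersburg game truncated after n flips.
# The game ends on flip k (first heads) with probability 0.5**k,
# paying $2**k, so each term contributes exactly $1 to the sum.

def truncated_ev(n: int) -> float:
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n + 1))

for n in (1, 10, 100, 1000):
    print(f"EV of game truncated at {n} flips: ${truncated_ev(n):,.0f}")
# EV of game truncated at 1 flips: $1
# EV of game truncated at 10 flips: $10
# EV of game truncated at 100 flips: $100
# EV of game truncated at 1000 flips: $1,000
```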

Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

Links to learn more, summary and full transcript.

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good.

Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a 0.000095% chance? Expected value says this final offer is better than the others — over 3,000 times better, in fact.
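The arithmetic is easy to check. In this sketch (again an editorial illustration), step n offers 3^n lives with probability 0.5^n, so the expected number of lives saved is 1.5^n:

```python
# Step n of the gamble: save 3**n lives with probability 0.5**n.
# Expected lives saved is (3 * 0.5)**n = 1.5**n, which grows without
# bound even as the probability of saving anyone shrinks towards zero.
for n in (1, 2, 3, 20):
    lives, prob = 3 ** n, 0.5 ** n
    print(f"step {n}: {lives:,} lives at {prob:.7%} -> expected {lives * prob:,.1f}")
# step 1: 3 lives at 50.0000000% -> expected 1.5
# step 2: 9 lives at 25.0000000% -> expected 2.2
# step 3: 27 lives at 12.5000000% -> expected 3.4
# step 20: 3,486,784,401 lives at 0.0000954% -> expected 3,325.3
```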

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.

In today's conversation, Alan and Rob explore these issues and many others:

• Simple rules of thumb for having philosophical insights
• A key flaw that hid in Pascal's wager from the very beginning
• Whether we have to simply ignore infinities because they mess everything up
• What fundamentally is 'probability'?
• Some of the many reasons 'frequentism' doesn't work as an account of probability
• Why the standard account of counterfactuals in philosophy is deeply flawed
• And why counterfactuals present a fatal problem for one sort of consequentialism

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

What is 80,000 Hours Podcast?

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin and Luisa Rodriguez.

Rob Wiblin: Hi listeners, this is The 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and whether if I were at least as handsome as Ryan Gosling, I’d be exactly as handsome as Ryan Gosling. I’m Rob Wiblin, Head of Research at 80,000 Hours.

Many people have asked me to interview Alan Hájek over the years, and this year I finally got to do it when I went home for Effective Altruism Global Australia back in July.

Alan is one of the most prominent philosophers focused on resolving puzzles in probability and expected value, issues that turn out to crop up shockingly often when you try doing research to figure out how to do the most good.

We did the first hour on stage at the conference, then another few hours in Alan’s office.

The first hour is very accessible, but I won’t lie — the later part of the conversation is among the more advanced content we’ve had on the show.

To help you out, I’ve recorded a number of cut-ins to quickly define terms — like ‘epicycles’ or ‘Dutch books’ — which are familiar to philosophers but not so familiar to the rest of us.

Katy Moore has also gone through the transcript and added a lot of links that will take you to articles that will elaborate on concepts and arguments as they come up.

People familiar with these topics should find this episode super informative, while I hope those who aren’t so familiar will learn a lot even if they don’t follow every point being made.

Some of the things we cover are:

• Simple tricks for doing philosophy well
• Why frequentism is misguided
• What probability really is
• Problems with using expected value, and whether we should use it anyway
• A fundamental problem with Pascal’s wager
• And why the dominant theory of counterfactuals in philosophy is unworkable

A quick notice is that 80,000 Hours is currently hiring a recruiter to help grow our team, with applications for that role closing soon on November 2. I’ll say a little more about that in the outro, but if you’d like to know more, you can get a full job description on our job board at 80000hours.org/jobs.

All right, without further ado I bring you Alan Hájek.

Rob Wiblin: Today I’m speaking with Alan Hájek. Alan is a professor of philosophy at the Australian National University. Years ago, he did his PhD at Princeton, where he won a fellowship for displaying, in the judgement of the university, the highest scholarly excellence of all graduate students across all of their disciplines. These days, he has broad-ranging interests across epistemology, philosophy of language, and philosophical methodology.

Rob Wiblin: His work is of great interest to me and the effective altruism community, because he is one of the world’s top experts on the philosophy of probability, Bayesianism, decision theory, expected value, and counterfactuals. Some of his more memorably titled papers include: Fifteen Arguments Against Hypothetical Frequentism, Waging War on Pascal’s Wager, Vexing Expectations, Most Counterfactuals Are False, and a follow-up paper titled Most Counterfactuals Are Still False. He’s currently working on a book on counterfactuals, titled ‘Would’ Work.

Rob Wiblin: This is our first-ever live recording of the show, so for once, I should also welcome all of you in the audience as well. And before we quieten down for the recording, let’s have a round of applause for Alan.

Alan Hájek: Thank you. Great to be here, Rob. Thanks so much for having me on the show.

Rob Wiblin: I hope we’re going to get to talk about whether Bayesianism is the final big evolution in probability, and why you think objective consequentialism is even wronger than many people imagine. But first, you started out studying maths and statistics, which are kind of practical fields in which you could have been gainfully employed doing something of obvious value. What made you switch into philosophy after you finished your undergrad degree?

Alan Hájek: Yeah. I was studying, as you say, maths and statistics, and I went to lots of courses on probability. And my professors would write equations on the board — “P of this equals P of that” — and I was wondering, “What is this ‘P’? What does probability mean?” And I asked one of them, “What does probability mean?” And he looked at me like I needed medication, and he said, “That’s a philosophical question.”

Alan Hájek: And I began a master’s in statistics, and I got as far as photocopying the article that my supervisor wanted me to read. I sat down, started reading it, and I went, “Ugh.” I don’t know if you’ve seen Pulp Fiction, but there’s a moment where Jules the hitman has a moment of clarity. And I had a moment of clarity: I realised I didn’t want to go on with statistics. But what to do instead?

Alan Hájek: I travelled the world, hoping to find myself on the road. And I did. It happened in western Ontario; a friend of mine, who’d had his moment of clarity, was studying philosophy there. He showed me the courses that he was studying, that he was looking at, and I thought, “Wow, how cool is that?” And you could hear the penny drop several provinces away. That was my second moment of clarity: “That’s what I want to do. Philosophy.”

Rob Wiblin: What was it that bugged you? I mean, obviously most statisticians don’t care about these deeper questions about what probability is, given that they can just do operations on it. Why couldn’t you let it go?

Alan Hájek: In a way, I was less interested in the practical payoffs. I was just wondering, where does probability fit into my conceptual scheme? How does it relate to other things? What does it really mean? When I say something like, “The probability of rain tomorrow is 0.3,” what have I said? I understand the stuff about rain, but the probability stuff I didn’t understand. And here I am, I’m still asking that question: What does ‘P’ mean?

Rob Wiblin: Yeah, OK. We’ll get to what ‘P’ means later on. But what are your main focuses at the moment? I guess you’re working on counterfactuals primarily?

Alan Hájek: Yeah, that’s right. And the book ‘Would’ Work — thanks for the advertising! And, as always, I’m working on probability. I still ask “What is ‘P’?” And that’s related to decision theory, something that I think quite a lot about. That got me into things like Pascal’s wager. I’m still thinking about that. We’ll probably talk about my heuristics, and some of that comes up as well.

Rob Wiblin: Yeah, makes sense. So we’re going to get into the details of probability and counterfactuals later on, but first I wanted to dive into this other passion of yours, which is philosophical methodology. How did you first get into that topic?

Alan Hájek: It began when I was a graduate student at Princeton. I was surrounded by these really good philosophers. I wanted one day to be a good philosopher, or as good as I could be. And I noticed there were these recurring patterns of thought — sometimes in their work, sometimes in conversations, sometimes in Q&A. For example, I would hear a really good question at Q&A. Three weeks later, the same person would ask a similar question. I thought, “It worked last time. It worked again. There’s a recurring pattern here. I’ll internalise this.” So I started to make a list of these recurring patterns of thought — philosophical techniques that seem to be fertile — and now my list is hundreds long.

Rob Wiblin: Why isn’t this just already an obvious thing? You would think you would start a philosophy PhD, and here’s all the tools in the toolkit — just go and do this.

Alan Hájek: Yeah. It’s funny, I think that these are strangely underexplored. We teach our students logic, for example. That’s certainly one tool in the toolbox. But I think we have all these other techniques, and we don’t discuss them or think about them nearly enough.

Alan Hájek: Then I went on to teach at Caltech, and I had these very smart students for 10 weeks. How do I convey to them how philosophy’s done? Of course, I had them read the classics — Descartes, and Hume, and so on. But along the way, I would occasionally drop in these philosophical heuristics, these techniques — partly just to show them how philosophy is done, but also just that it helps you do philosophy.

Alan Hájek: And philosophy’s hard — I think we could use all the help that we can get. Especially when you’ve just written some philosophical paper, some view of your own. It’s curiously hard to be your own critic, and other people are lining up to point out the problems with your view. But these heuristics, I think, help guide your mind to finding the problems before others gleefully point them out.

Rob Wiblin: Before others do it for you.

Alan Hájek: Yeah, that’s right.

Rob Wiblin: Yeah. In one of your papers on this topic, you sound extremely defensive — in a sense that you’re worried that other philosophers are going to judge you for daring to write down a bunch of heuristics. Do you want to explain what’s going on with that?

Alan Hájek: Yes. It’s as if I’m the magician who gives away the tricks of the trade, and I find this a very strange attitude. Think of some other area where there are heuristics, like chess. There’s no problem with a chess book actually giving you some advice — castle early and often, avoid isolated pawns. Or in maths there are various fertile heuristics. But I don’t know, somehow in philosophy, some people view it with a bit of suspicion — that this isn’t really depth; this isn’t really getting to the profundity of philosophy.

Alan Hájek: And I’m not saying you should just follow these heuristics mindlessly — just like when you’re playing chess, you shouldn’t just make the moves mindlessly. But I think they help. I think they get you closer to your targets. And I think that they actually help creativity too: as I said, philosophy is hard, so these are just techniques for breaking down a hard problem into easier sub-problems, and then you can make progress on those.

Rob Wiblin: OK, so yeah, let’s hear one of these heuristics. What’s one that really stands out as especially useful?

Alan Hájek: Yeah. I like the one I call “Check extreme cases.” You’re in some domain. Extreme cases are things like the first case or the last case, or the biggest or the smallest, or the best or the worst, or the smelliest, or what have you. Now you’ve got this huge search space, and someone gives a big philosophical thesis. Suppose you want to stress-test it: are there counterexamples? Hard problem: somewhere in this search space, find trouble, find counterexamples. Easier sub-problem: go to the corner cases, go to the extreme cases. Often the trouble lurks there if it lurks anywhere, and it’s a smaller search space. So that’s the technique. I could give you some examples if you like.

Rob Wiblin: Yeah, yeah. Give us an example or two.

Alan Hájek: All right. Grandiose philosophical thesis: “Every event has a cause.” At first you might think, “Gee, I don’t know. Is that true or false? It’s kind of hard to tell.” All right, hard problem: come up with a counterexample to “Every event has a cause.” Easier sub-problem: consider extreme cases of events. For example, the first event. Call it the Big Bang. The Big Bang didn’t have a cause. Counterexample.

Alan Hájek: Or philosophers sometimes say that you should only believe in entities that have “causal efficacy” — they have some oomph. That’s maybe a reason to be suspicious of numbers: maybe numbers don’t exist because they don’t cause anything. And then Lewis has us imagine: “Well, what about the entity which is the whole of history?”

Rob Wiblin: Everything all together.

Alan Hájek: There is causation within it, but that doesn’t cause anything, so according to this principle, you shouldn’t believe in the whole of history. So there the heuristic is doing negative work: it’s destructive, shooting down some position. But I think it could also be constructive.

Rob Wiblin: Yeah, maybe it’s worth explaining a little bit of one of your theories of what philosophy is.

Alan Hájek: Yeah, I think you’re thinking of: “A lot of philosophy is the demolition of common sense followed by damage control.”

Rob Wiblin: Yeah, I love that quote.

Alan Hájek: Yeah. Philosophy often comes up with some radical claim like, “We don’t know anything.” But then we try to soften the blow a bit, and we find some way —

Rob Wiblin: Maybe we know a little bit.

Alan Hájek: We know a little bit, or we have to understand knowledge the right way. Anyway, this extreme-cases heuristic was somewhat negative: it was pointing out a counterexample to some big thesis. I think it could also be constructive.

Alan Hájek: Maybe longtermism could be thought of in this way. Maybe the thing that comes naturally to us is to focus on the short-term consequences of what we do, and we think that’s what matters. Then you push that out a bit, and then an extreme case would be, “Well, gosh, our actions have consequences until the end of time, for the rest of history, so maybe we should be more focused on that.” And that’s now the beginning of a more positive movement.

Rob Wiblin: Yeah. So the philosophical question there might be, “For how long should we consider the consequences?” or “What should be the scope of our moral consideration?” And here you say, “Well, let’s consider the extreme possibility. We should consider all space and all time forever.”

Alan Hájek: That’s right. So I started with “Check extreme cases.” Then sometimes you might just check near-extreme cases — so you back off a bit and they’re a little bit more plausible. So maybe we don’t need to look until the end of time, but still look far ahead, and that is still at some odds with initial common sense.

Rob Wiblin: Yeah. I guess people might often come back and say, “Well sure, in the extreme situation it doesn’t work. Lots of things don’t work in extremes. It’s more sensible to focus on the middle cases, and so this isn’t actually such a powerful objection.” What do you think of that?

Alan Hájek: I think it’s for that very reason that this is a fertile heuristic, because we spend our lives mostly living among the normal cases, so extreme cases don’t come so naturally to us, even though they may well be trouble for some philosophical position. In fact, maybe especially because they’re extreme, they’re more trouble than the middle cases.

Rob Wiblin: Yeah, I guess it depends on whether the claim is a more pragmatic one about how you ought to do things every day, or whether you’re trying to claim “I’ve discovered some transcendent fundamental truth” — and you’ll be like, “Well, it doesn’t work in this one case. You were claiming that this was something that should cover everything and now it doesn’t.”

Alan Hájek: And philosophers often do that. They have these grandiose claims — “Every event has a cause” or what have you — and this is a good way of stress-testing such claims.

Rob Wiblin: OK, yeah. What’s another heuristic?

Alan Hájek: Yeah, I like to focus on the word “the.” I say, see the word “the” in neon lights — because it typically comes with a presupposition. “The X” typically presupposes there’s exactly one X. There are two ways that could go wrong: there could be multiple Xs, or there could be none at all. So an example of that: “Do the right thing.” It just rolls off the tongue.

Rob Wiblin: Sounds straightforward enough.

Alan Hájek: The right thing. Now that sounds like there’s exactly one right thing to do. Well, two ways we could challenge that. There could be multiple right things, and maybe it’s OK if you do any one of them. But still we’re challenging the presupposition that there’s exactly one. I don’t know if we’ll get to talk about Pascal’s wager at some point, but it turns out that there are many ways to follow Pascal’s advice.

Alan Hájek: Going in the other direction, maybe there’s no right thing. Think of a moral dilemma, like Sophie’s Choice — there’s no right thing to do. Or Sartre’s case of the student who’s torn between fighting in the war or staying home with his mother. What’s the right thing to do in this moral dilemma? It’s not clear that there is one.

Rob Wiblin: Or at least if you start saying that there is one thing, then you’re making a claim maybe without even realising that you’re making a claim. You’ve slipped in an assumption by using “the.”

Alan Hájek: That’s right. Yeah.

Rob Wiblin: Any other examples of that one? Of the “the” or people just assuming that there’s one answer?

Alan Hájek: Yeah, maximising expected utility. We’ll probably talk about expected utility later on, and here we’re being told, “Do the thing that maximises expected utility.” If you hear it that way, then there are problems on each side. There could be ties for expected utility, or there could be nothing that maximises it — things just get better and better without end.

Rob Wiblin: I guess it’s not the same, but it reminds me of this saying that in philosophy you can have two problems. One is to have no answer to a question, and the other is to have too many explanations and you can’t tell which one’s the right one.

Alan Hájek: That’s right, and this corresponds to the two ways. And again, it’s breaking down a harder problem into easier sub-problems. So you look on each side, are there too many of these things or are there not enough?

Rob Wiblin: OK, yeah, what’s another one?

Alan Hájek: Philosophers love to talk about what’s possible. They love to say, “X is possible.”

Rob Wiblin: Yeah, I guess a classic case of that is the zombie thing, Chalmers’s zombies.

Alan Hájek: Exactly. That’s right: Are zombies possible?

Rob Wiblin: That is, beings that would act like people but have no conscious experience.

Alan Hájek: That’s exactly right. And philosophers love to distinguish different senses of possibility — logical, metaphysical, nomic (what’s consistent with the laws of nature), doxastic (what’s consistent with one’s beliefs), epistemic, deontic, and so on. And there are various techniques for generating possibilities, or for arguing that something — call it X — is possible.

Alan Hájek: And when you think about it, there are two moving parts to that: there’s “X” and “is possible.” So focus on “is possible” first. One kind of technique says to look at some other property of X, and it follows from X having this other property that X is possible.

Alan Hájek: Let’s just do a few of those. “X is actual.” Suppose X is actual. So for a lot of the modalities I just talked about, the possibilities, that’s a good way to establish that X is possible.

Alan Hájek: Or you gave a good one: “X is conceivable.” Dave Chalmers appeals to that one in the zombie argument. X is conceivable, so it’s possible.

Rob Wiblin: So it’s possible to imagine it, and therefore you start reasoning from that.

Alan Hájek: Yeah, that’s right.

Rob Wiblin: So what are some examples of cases where philosophers make this argument? I guess we’ve got the zombie one, but it seems like it’s maybe a slightly dubious style of argument, where you’re saying, “Something can be imagined, therefore I’m going to conclude things from that.”

Alan Hájek: Yeah, that’s right. And of course philosophers love thought experiments, and I guess soon we’ll be talking about some pretty recherché thought experiments. And even if these things aren’t actual, like the St. Petersburg game or something, still we might say, “Look, we can conceive of this. In that sense it’s possible and we should take it seriously and test our theories against it.”

Alan Hájek: Now that was, by the way, just one way to show that X is possible. And remember, there’s the other moving part. X is possible. How about we look at some other object — let’s call it Y, some other thing. And Y is possible, we could all agree on that. And X is appropriately related to Y, and we conclude that X must be possible too.

Rob Wiblin: So related in what way?

Alan Hájek: So here are some ways. If Y is possible and Y entails X, then it seems pretty safe that X is possible too. If they’re compatible, then it seems that X is possible too. Here’s one of my favourite ones: “Almost X is possible.” So let Y be “almost X” — it’s very similar to X in some appropriate sense. And then you say, “Well, the small difference between Y (which is almost X) and X won’t make a difference to possibility.”

Alan Hájek: An example of that, according to behaviourism: “It’s not possible to have a mental life and no behavioural manifestation of it whatsoever.” Reply: Yes, it is possible because it’s possible to have almost no behavioural manifestation of your mental life. Think of Stephen Hawking, sadly no longer with us. Towards the end, I guess he had just very minimal behaviour: movement of his finger, not much more. Obviously he had a rich mental life. And now imagine just that finger movement stopping, so he loses that last bit of behaviour. It’s not like the lights suddenly go out for Stephen Hawking. So it is possible.

Rob Wiblin: This is reminding me of another line of argument that some people might have heard. Some people will say, “You couldn’t create a mechanical mind” or “You couldn’t create a mind out of machines.” And you’ll be like, “Well, let’s take a human brain and just replace one neuron with a mechanism that does just what that neuron does. Are you not a person now? Are you not conscious?” And they’ll be like, “No. No, not really.” So you’re like, “What if we replace another one?” And then the person is either just forced to say that you’re becoming gradually and gradually less conscious, or that there’s some sharp cutoff, which also seems implausible.

Alan Hájek: Excellent. This is another of the heuristics of this kind. Let’s call it “extrapolation.” So you have a sequence of things and you think that each of them is possible. Now let’s just go to the next step. That should be possible too. Or “interpolation”: start with two things that you know are possible — maybe because they’re actual — and now interpolate on some whatever gradable scale. Hume’s missing shade of blue is like this: take two shades of blue that are actual — therefore possible — and now imagine a missing shade of blue. It’s not actual, but it’s somehow between those two on some natural scale. Well, that seems to be possible too.

Rob Wiblin: Couldn’t someone respond that it’s not possible? That actually they’re too close, there’s no shade in between?

Alan Hájek: They could say that, and you’ve got to be careful with your scales too. And sometimes extrapolating or interpolating will give you wrong answers. Think of the sequence (1/n): 1/2, 1/3, 1/4, et cetera — is it possible for that sequence to reach zero? Well, no, it gets as close as you like, so almost reaching zero is possible, but it never actually hits zero. That’s not possible for that sequence.

Rob Wiblin: What’s an example of the thing where you repeat the sequence again and again and again? I guess that’s an example where you’re kind of extrapolating. Are there examples where you take some principle and you just keep operating it, and then you get some absurd conclusion?

Alan Hájek: Maybe the lottery paradox is like this? Is that what you’re thinking of?

Rob Wiblin: Yeah, yeah.

Alan Hájek: Or even better, maybe the preface paradox.

Rob Wiblin: Oh, I don’t know that one.

Alan Hájek: Yeah, let’s do the preface paradox. This puts pressure on the idea that belief is closed under conjunction.

Rob Wiblin: OK.

Alan Hájek: You’ve just written a long book and then you write the following preface: “I believe everything I say in this book, but I’m confident that there’s a mistake somewhere in the book. I did my best to get sentence number one right. I believe sentence number one. I believe sentence number two…” all the way to: “I believe the final sentence of the book. But I don’t believe the conjunction of the book. On the contrary, I know that in long books like this, there’s always a mistake somewhere.” So this puts pressure on the idea that belief, even rational belief, is closed under conjunction.
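A toy calculation makes the preface paradox vivid. Assume, purely for illustration (the paradox itself doesn't require this), that the book has 1,000 sentences and you have independent 99.9% confidence in each:

```python
# Preface paradox, toy numbers: 99.9% confidence in each of 1,000
# sentences, treated as independent, leaves only ~37% confidence
# that the whole book (the conjunction) is error-free.
p_sentence, n_sentences = 0.999, 1000
p_conjunction = p_sentence ** n_sentences
print(f"P(each sentence true) = {p_sentence}")
print(f"P(every sentence true) = {p_conjunction:.2f}")
# P(each sentence true) = 0.999
# P(every sentence true) = 0.37
```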

Rob Wiblin: Yeah. A different approach that sometimes I apply is trying to subdivide things super closely. Every so often people say to me that it’s not possible to put a probability on things that have never happened before. So it’s fine to say that the probability of a coin flip being heads is 50%. But if you say something like the probability of a nuclear war in 2027 is 1.5%, then this is just a bunch of silliness because it’s never happened before. But then of course you can say that if you define all events sufficiently closely, then none of them have ever happened before: all events are unprecedented once you specify exactly what they are.

Alan Hájek: That’s right. If you specify the events too precisely, then they’re all unique. If you specify them too loosely, then of course anything can be described as “something happens.” We have many instances of that, and now you want to find the right level of grain that describes this event in an informative way and such that it’s susceptible, let’s hope, to some probability.

Rob Wiblin: Now that I think about it, that potentially creates trouble for the coin being 50%, because you say, “Each coin flip is unique in its own way,” and in any specific case, it depends on how they flipped it. And if you specify that, then it’s 100% and 0%.

Alan Hájek: Yeah, maybe we’ll soon be talking about frequentism, but this is related to a famous problem for the frequentist account of probability: the problem of the single case. At some level of specification, every event only happens once. Then for some events, that is quite a natural thing to do: this year’s AFL Grand Final — that’s Australian rules football, for the non-Australians — that seems to be a unique event, and there may be an issue about how we assign probabilities to it. But it seems as far as frequencies go, it’s a one-off thing: it’s either one or zero for, say, Collingwood (my team) winning.

Rob Wiblin: Yeah, yeah. Are there any other heuristics that people should keep in mind if they want to outwit their friends in conversation, before we move on?

Alan Hájek: Oh, I like to think about putting contrastive stress on different words in a claim, and that makes you realise there’s a contrast class: this as opposed to that. Let’s take for example: “Smoking a pack a day causes lung cancer.” That seems reasonable when you just say it like that: Smoking as opposed to other things you could do with a cigarette, like stroking it. Yes, it seems like the smoking is relevant. Smoking a pack a day, smoking one pack a day: as opposed to none, yes, that seems to cause lung cancer. But what about smoking one pack a day as opposed to three packs a day? Now it’s not so clear that it’s the one pack a day that causes the lung cancer as opposed to three. And this now makes you think that explanation must be a contrastive matter.

Rob Wiblin: Yeah, I see.

Alan Hájek: Yeah. This as opposed to what?

Rob Wiblin: I mean, people slip in counterfactuals or alternatives. They’re saying “one as opposed to zero,” but we don’t say that most of the time, and so you could accidentally end up having a counterfactual that you haven’t really properly thought through.

Alan Hájek: And yes, this makes you be honest: what am I contrasting this thing with? Maybe that was implicit and not obvious. And this technique, where I mentally italicise different words in a claim and ask in each case, “This as opposed to what?”, helps me realise when I was making a presupposition or something like that.

Rob Wiblin: Yeah. So I suppose all of these heuristics can potentially provide stress cases, or considerations against conclusions that people are putting forward. In general, do you think that there’s very often decisive arguments against a position? Or is it more often that all of these weaknesses kind of accumulate, and then you end up with a view that there’s too many problems here for this to be quite the right theory? I’m just thinking most theories you can throw something at them and be like, “It doesn’t feel quite right in this case.”

Alan Hájek: Yeah, Lewis famously said that philosophical positions are rarely refuted. Gettier may have done it. That’s the justified-true-belief analysis of knowledge. Gödel might have done it. But it’s rare.

Rob Wiblin: That’s about it.

Alan Hájek: I think there are some killer problems for various positions. We might talk about some later. But yes, often it’s more a matter of accumulation. There are some smaller problems, but they start adding up and overall the position doesn’t seem tenable. It doesn’t seem to best capture the data points.

Rob Wiblin: Do people sometimes give up on them? Because it feels like you could patch the first three things kind of, but then just more and more cracks start appearing, and you’re trying to patch it up everywhere and it starts to feel more complicated to patch the theory than just to come up with a new one.

Alan Hájek: Yeah, that’s right. Some might say that happened to the justified-true-belief account of knowledge. That can’t be quite right, it seems, because of Gettier and so on. Then we add an epicycle and another one and another one, and if it goes too far, it starts to feel like a degenerating research programme.

Rob Wiblin: Yeah, the explanation is now so long it just lost its original intuitive appeal. It’s no longer simple either.

Alan Hájek: That’s right. And even if it turned out all right — I can’t come up with any counterexamples to this tremendously long, complicated analysis — do I really feel illuminated? Suppose that knowledge is this thing with many epicycles: do I feel like I understand it better? Not really.

Rob Wiblin: OK, what are some common moves that philosophers make trying to debunk other philosophers’ ideas that you think are overrated? That are not as strong an objection as people make out?

Alan Hájek: Well, there’s a whole movement in philosophy that I now distance myself from, and I think this speaks to your question. Philosophers love to appeal to other worlds, and in particular similar worlds. This may come up when we talk about counterfactuals, that the standard analysis of counterfactuals appeals to most similar possible worlds. But not just counterfactuals — knowledge. We often talk about so-called safety and sensitivity, and that’s a matter of what’s going on in nearby worlds — where you believe something, or where the thing you believe is true or not true.

Alan Hájek: I used to just talk this way too. I used to love all this talk of similar worlds, and other worlds in general. And now I’ve jumped off the bus, and I think one should not be casting everything in terms of other worlds and similarity relations. I prefer to do it in terms of probability, and stay in the actual world — probability is anchored here and it will give my preferred accounts of things. Doesn’t quite speak to your question, but I think it’s close.

Rob Wiblin: OK, any other advice for budding young philosophers in the audience, before we push onto concrete questions?

Alan Hájek: A high value for me in philosophy is clarity. I really care about clarity. And these heuristics I think help in that mission as well: trying to just really get clear on what a position is, and what its possible defects are. And as I think Searle said, “If you can’t say it clearly, you don’t really understand what you’re saying.” And I would begin with that.

Rob Wiblin: Yes. I’m not sure I 100% agree with that. Philosophers do tend to just be absolutely obsessed with making sure that their thing applies to every case. When you read philosophy papers, you get this impression that it’s a long series of paragraphs added to consider every objection that some philosopher might raise, so there’s nothing that can be said in response. I suppose in one sense that’s extremely clear. In another sense, it feels like maybe you are losing the forest for the trees sometimes, in considering often clever, small objections — rather than asking: is the core part of the thesis sound?

Alan Hájek: I see. Yeah, maybe being too pedantic.

Rob Wiblin: Yeah.

Alan Hájek: “Come on, you know roughly what I meant.”

Rob Wiblin: It seems to some degree philosophers are kind of professional pedants. That’s their whole thing, is to find the edge cases.

Alan Hájek: That’s right. And I’m sure you know the way that philosophers often write is often in this very defensive style, because you are preparing yourself for the most uncharitable objection, because you didn’t quite nail the point. Given your wording, there’s some counterexample. And this should be tempered with the principle of charity: that you should try to give a charitable interpretation of what’s being said, and look for more interesting, more profound, deeper objections.

Rob Wiblin: Yeah, it is the most challenging part of interviewing philosophers, I think: that they really want to be right. I’m just like, “Can you, for the love of God, say something wrong, but in fewer words, please?”

Alan Hájek: Yeah, right. And yeah, you ask a question and, “Well, that’s not quite right the way you said it.”

Rob Wiblin: Yeah.

Alan Hájek: But yeah, if at least the spirit of it is conveyed, that is often good enough.

Rob Wiblin: Yeah. I guess if you get too sloppy in that direction, then you can’t do philosophy so much anymore. Then you just become like ordinary people who accept common sense, and are not looking for the little ways that, when you investigate more deeply, it does actually demolish common sense, when you think about it properly.

Rob Wiblin: OK, let’s push on to a topic that is dear to all of our hearts: probability. The core question I really want to ask here is whether Bayesianism, as it’s often practised by people in the audience, is true in some sense. But before I ask that, we should probably wind back a step. At a high level, what are the different theories of what probability is? What ideas do people have about it?

Alan Hájek: Yeah, this is what my professor should’ve said when I asked the question, “What is ‘P’?” and he thought I needed medication. This would be the beginning of a reply.

Alan Hájek: Let’s distinguish three main kinds of probability. The first, let’s call it “objective chance.” Out there in the world, there’s probability, and it’s mind-independent: it doesn’t care what we think. So think maybe of radioactive decay, the radium atom has a decay law. It’s got a half-life of 1,600 years, which is a probabilistic fact. It doesn’t care what anyone thinks. It’s just out there in the world. It’s like mass and charge; it’s some property out there. That’s the first kind.

Rob Wiblin: Yeah, makes sense. Under that one, do we have quantum mechanics issues — like the probability of an electron being here rather than there — is that in the same category?

Alan Hájek: Yeah, and that’s a big question in the interpretation of quantum mechanics: whether there really are objective probabilities, or whether in fact everything is deterministic, so that the chances are all ones and zeros, as one view has it.

Rob Wiblin: Right. Well, let’s skip that.

Alan Hájek: But that’s a debate to be had. OK, second interpretation, let’s call it “subjective probability.” And that’s now more about rational agents, people like us — call it “credence” if you like — and it’s degree of confidence. Sometimes we outright believe things, but often our degrees of confidence are more nuanced than that. We believe some things more than others, as we might say. We have degrees of belief. And that would be the second interpretation. But here, not just anything goes. It seems these are rational degrees of belief, and it’ll turn out, according to Bayesianism, they’re constrained by probability theory.

Rob Wiblin: So would there be like the correct credence?

Alan Hájek: Well, excellent. This will get us to a big issue in Bayesianism. At one extreme, you’ve got a very permissive, radical subjectivism, where the only constraint is to obey the probability calculus. “Thou shalt obey Kolmogorov’s axioms,” something like that.

Rob Wiblin: It’s like everything has got to sum to one, or the probability of something happening is one?

Alan Hájek: That’s right. Probabilities are non-negative, they have a top value of one, they add in a certain way. Now that seems very permissive. I can assign probability, even one, to a meteor quantum tunnelling through this lecture theatre before we are finished and going out the other side, as long as I give probability zero to it not doing so, and so on. And very high probability to the sun not rising tomorrow, and so on.

Rob Wiblin: It doesn’t violate any fundamental axioms of probability?

Alan Hájek: That’s right. But it seems somehow too unconstrained. Now at the other end of the spectrum, this is the so-called uniqueness thesis: that your evidence uniquely constrains you, and there’s exactly one credence you should have. And in fact, there was — let’s call it an “ur-prior” —

Rob Wiblin: The correct thing to think before you saw anything at all.

Alan Hájek: Yeah. Prior probability before any evidence. You should start there, and just update it on your evidence as it comes in. Now that seems too extreme perhaps in the other direction: that given my evidence, there’s a sharp probability I should give to it raining tomorrow.

Rob Wiblin: And if I said 71.2 rather than 71.1, I was wrong to think that.

Alan Hájek: Yes, that’s right. And you’d think that different people could disagree to some extent, but not according to this thesis — not if they’re rational. Then you’ve got all the positions in between in this spectrum. And we could then have discussions about just how much credences are constrained.

Alan Hájek: Actually, this nicely brings us to the third main interpretation. Let’s call it “evidential probability.” The thought is that evidence does put some constraints on hypotheses or propositions — “degrees of support,” let’s call them, and probabilities measure these degrees of support. So for example, maybe, given your evidence, it’s very probable that the sun will rise tomorrow. Then we might say the evidential probability of sunrise, given your evidence, is high.

Rob Wiblin: I haven’t quite tracked that. How is this different than the previous ones?

Alan Hájek: So the previous one, we started with just subjective probability, and notice how we morphed into the third one. But first we said we had the permissive, radical subjectivism, that said it’s up to you, your choice.

Rob Wiblin: You can believe whatever you like.

Alan Hájek: Yeah, your credence, just run with it, and then update it appropriately. That definitely was not evidential probability. Then, when we get to the other extreme, the uniqueness thesis, it does seem that it would be something very much like evidential probability that constrains you.

Rob Wiblin: I see.

Alan Hájek: And then there are the positions in between. I don’t want to make a sharp division between subjective and evidential probability — really it’s a spectrum, and this corresponds to what you might call subjective Bayesianism, or more objective Bayesianism, and degrees of that as we move down the spectrum.

Rob Wiblin: OK, so the spectrum between the subjective Bayesianism and this evidential Bayesianism: where you can believe anything, and where you have to believe one specific thing. Both of them feel pretty terrible in their own way. You’re saying maybe the most plausible thing, or the most intuitive thing, is going to be somewhere in the middle, where there’s a range of stuff that you can believe that’s not stupid. But you can go too far. And it’s also not just one specific thing. You’re not an idiot if you don’t believe the exact truth. I guess whenever you have a spectrum like that, it’s going to feel a little bit arbitrary, whatever point you draw on there. So how would we even begin to know what is the correct point along that spectrum?

Alan Hájek: Well, we start adding some constraints to the radical subjectivism. Maybe for example, the principle of indifference — you might think that’s a kind of evidential probability.

Rob Wiblin: Can you explain that?

Alan Hájek: Yeah. When your evidence bears symmetrically on a range of possibilities — not favouring any over any other — then you should give equal probability to these cases. It’s most plausible in gambling cases, like you toss a coin: heads, tails. My evidence doesn’t favour heads over tails. It seems I should give half-half to those possibilities. And then we can complicate that, and fine-grain some problem, and then say either you have no evidence (so you should give equal probability to the cases), or you do have evidence, but it bears symmetrically on them, and you should give equal probabilities.

Alan Hájek: Now a big discussion could be had. There are serious problems with the principle of indifference as I just stated it, and various people think it’s bankrupt because of these problems. But it’s funny how people often reach for it intuitively, even when they’ve disavowed it a minute ago. For example, in the Monty Hall problem. Maybe you know that?

Rob Wiblin: Yeah.

Alan Hájek: Behind one of three doors there’s some prize: what probability should you give to it being behind a particular door? And then the problem continues. Now no one says, “I give probability 1/17 to the prize being behind door number one, because I’m just unconstrained, and that’s what I feel like.” 1/17? You feel the pull of the principle of indifference.

Rob Wiblin: Yeah, I see.

Alan Hájek: I should give third, third, third, even if you’ve just a moment ago disavowed the principle of indifference. So it seems like, in some restricted cases at least, it has some pull.
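For readers who want to check the Monty Hall answer empirically, here is a small simulation of the standard setup, in which the host, who knows where the prize is, always opens a different, empty door and offers a switch (an editorial sketch, not from the episode):

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the prize."""
    doors = [0, 1, 2]
    prize, pick = random.choice(doors), random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# switch=False: win rate ~ 0.333  (the principle-of-indifference 1/3)
# switch=True: win rate ~ 0.667
```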

Rob Wiblin: I suppose we’ve been talking here about what you should believe, which smuggles in the idea that there is anything you should believe. We’ve practically brought ethical considerations into what you should believe. Maybe that’ll be too much to get into here.

Alan Hájek: I think so. We may be getting into ethics later too, which is not my area of specialisation, but I’m always happy to try.

Rob Wiblin: OK, with that kind of scene-setting about what probability might be out of the way, the real question I want to ask: me and my friends, when we’re hanging out, when we’re chatting, we’re watching the news, or talking about events in the world, I might say something like, “I think there’s an 80% chance that Boris Johnson is going to resign in the next week or two.” And someone else might say, “No, I think it’s 70%.” And then we go and check the news, and more ministers have resigned, and we’re like, “Okay, I think it’s like 85% now.”

Alan Hájek: 100% now.

Rob Wiblin: 100% now, yeah. But are we doing the right thing? Is this the correct thing to do? Or are we just playing some game that’s fun for us, and is not really any better than any other approach? Or at least not uniquely privileged as the correct approach?

Alan Hájek: So I am a big fan of Bayesianism. I guess I say on Mondays, Wednesdays, Fridays, I call myself a Bayesian. Tuesdays, Thursdays, Saturdays, I’m not so sure. Today’s Saturday, right?

Rob Wiblin: It is a Saturday, yeah.

Alan Hájek: So maybe I’m not so sure, but I think there’s something right about this. We have these degrees of confidence. I think we have to allow this. It’s not just all-or-nothing belief. It does seem there are better and worse ways to manage these degrees of confidence. There are various good reasons to think that they should obey probability theory. We may talk about that — Dutch books, for example. And there are rules for how they should be updated: so-called conditionalisation.
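A Dutch book, mentioned just above, is a set of bets, each individually acceptable given your credences, that together guarantee you a loss. Here is a minimal sketch (an editorial example): suppose you violate the axioms by giving credence 0.6 to an event and 0.6 to its negation, and you regard a $1 bet on an event as fair at any price up to your credence in it:

```python
# Credences P(A) = 0.6 and P(not-A) = 0.6 violate the axioms
# (they sum to 1.2). A bookie sells you a $1 bet on A for $0.60
# and a $1 bet on not-A for $0.60; exactly one bet pays out,
# whatever happens, so you lose $0.20 come what may.
price_a, price_not_a, stake = 0.60, 0.60, 1.00

for a_occurs in (True, False):
    payout = stake  # one and only one of the two bets wins
    net = payout - (price_a + price_not_a)
    print(f"A occurs: {a_occurs}, net: ${net:+.2f}")
# A occurs: True, net: $-0.20
# A occurs: False, net: $-0.20
```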

Alan Hájek: And we can then have disputes about how much constraint there should be on the priors that you have, and we had a bit of that discussion before. But it seems like a very good model, and it’s very fertile too. Lots of good things are closely related to this. Maybe soon we’ll talk about decision theory, another thing that I’m fond of, and probabilities figure centrally in that. So I think it’s a good place to start.

Rob Wiblin: OK, a couple questions. One is: what are the live alternatives? And then maybe I’m curious to know: why are you, for some of the week, not into Bayesianism, or not into this approach? Let’s do that one first. What are the reservations that you have?

Alan Hájek: Yeah, we start with some axiomatisation of probability, and Kolmogorov is the standard. And he axiomatised unconditional probability first, and then conditional probabilities, defined in terms of unconditional probabilities. The probability of A, given B, is a ratio: probability of A and B divided by the probability of B, assuming that the probability of B is positive.
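Here is the ratio definition in a minimal sketch (an editorial example using a fair die), including the way it goes silent when the conditioning event has probability zero:

```python
from fractions import Fraction

# Kolmogorov's ratio definition for a fair six-sided die:
# P(A | B) = P(A and B) / P(B), undefined when P(B) = 0.
def P(outcomes: set) -> Fraction:
    return Fraction(len(outcomes), 6)

evens, big = {2, 4, 6}, {4, 5, 6}
print(P(evens & big) / P(big))  # P(even | greater than three) = 2/3

impossible = set()  # "the die shows seven" has probability zero
# P(evens & impossible) / P(impossible) would raise ZeroDivisionError:
# the ratio definition is simply silent about conditioning on it.
```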

Alan Hájek: I actually think it should be the other way around. I think the conditional probability is the basic notion, the fundamental notion. I think there are problems with Kolmogorov’s axiomatisation, as I’ve just put it. He did have further subtleties, which I can’t get into here. But there are two kinds of problem. Remember I had this proviso — provided that the bottom line of that ratio is positive; you can’t divide by zero. And here’s another problematic feature of the standard theory: with real-valued probabilities, it seems that probability-zero events can happen. Now that’s not intuitive; we normally think if something can happen, it must have positive probability.

Rob Wiblin: Yeah, what’s an example?

Alan Hájek: Think for example of the radium atom decaying exactly at noon tomorrow. What’s the probability of that on the standard theory, where probabilities are real-valued — in this case, given by an exponential decay law? The probability of it decaying exactly at noon is zero. Now, it decaying in some infinitesimal region around noon, that could be something more, but noon itself, that point gets probability zero. Or throw a dart at a dartboard randomly: what’s the probability that it hits the exact centre point?

Rob Wiblin: So for people who didn’t do much maths at university, this is this classic thing. Imagine that you’re going to randomly choose any number between zero and one. The probability of it being any specific number is zero, because there are uncountably infinitely many numbers. Indeed, I think the probability of you randomly picking a number that could ever even be written down is zero, right?

Alan Hájek: Well, are you thinking of computable numbers?

Rob Wiblin: Yeah, it’s not computable. Right.

Alan Hájek: And there are uncountably many of those.

Rob Wiblin: OK, so you’re saying each of these specific numbers has a probability of being chosen of zero, or any specific example that you pick is zero, and yet we know that one of them will be chosen?

Alan Hájek: Yes.

Rob Wiblin: OK, yeah. It’s a bit like the lottery paradox.

Alan Hájek: It is. It’s like an uncountable lottery paradox. And this is a puzzling feature. There is this rather strong intuition: “Come on, if it can happen, it should get some positive probability.” We want to distinguish it from an impossible event. And it seems the way to do that is, well, it’s got greater probability.
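One way to see “probability zero but possible” numerically (an editorial sketch): for a uniform draw from [0, 1], the chance of landing within ε of a given interior point is 2ε, which vanishes as ε shrinks:

```python
# X drawn uniformly from [0, 1]: the chance of landing within eps of an
# interior point x is 2 * eps, which vanishes as eps shrinks. So the exact
# point x gets probability zero, even though some such point always occurs.
x = 0.5
for eps in (0.1, 0.01, 0.001, 1e-6):
    print(f"P(|X - {x}| <= {eps:g}) = {2 * eps:g}")
# P(|X - 0.5| <= 0.1) = 0.2
# ...
# P(|X - 0.5| <= 1e-06) = 2e-06, approaching zero in the limit
```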

Alan Hájek: Actually this is another way in which we might go away from this numerical Kolmogorov axiomatisation. You might say more fundamental is comparative probability. And now the primitive is something like “X is more probable than Y.” Or maybe better still, “W, given X, is more probable than Y, given Z,” or at least as probable. And now make that the fundamental notion, and then we can maybe recover the numerical theory out of certain axioms and so on, on those fundamental comparative probabilities.

Rob Wiblin: Yeah, I’m just thinking about the “choosing a specific number” case. It’s actually quite a bad situation. So normally with Bayesianism, we’re thinking that we’ve got a prior probability that any number be chosen, and then we update a lot based on the evidence. But of course our prior on any number is going to be zero. When you multiply anything by zero, it’s still zero. So a random number gets chosen. You have your prior, you observe the outcome, and you still think it wasn’t chosen, because it was impossible for any number to be chosen, whatever it was. It’s quite bad.

Alan Hájek: Well, impossible in this probabilistic sense. And that’s the trouble, that it seems that probability, so to speak, has blurry vision. It does not distinguish between the genuine impossibilities and these very unlikely — but in some good sense, possible — cases.

Alan Hájek: And this is related to my worry about the ratio formula. Again, you can’t divide by zero. Now what happens in this experiment, where you randomly choose a real number from zero to one. You learn its value, but it’s something you antecedently gave probability zero. If you just looked at the ratio as standardly understood as involving a ratio of real numbers, it was zero divided by zero, it was zero that you’d get that particular number.

Alan Hájek: Now, how do you update on that? How do you, as we say, conditionalise on something you originally gave probability zero? Now I have to say that this quickly gets us into very sophisticated mathematical refinements. Kolmogorov himself was well aware of this problem, and he had a sophistication. People bring on hyperreal numbers; they bring on richer number systems that are meant to preserve this intuition that if something’s possible, it’d better have positive probability. And we can let that debate unfold. But it’s just to say, now this is Saturday, and I’m just trying to bring out that there are various problems with just this simple statement.

Rob Wiblin: It’s not as straightforward as it may seem.

Alan Hájek: Just follow Kolmogorov: probabilism says your probabilities should obey these axioms. Well, it’s more complicated. Should we take conditional probability as primitive? Do we have to worry about the probability zeros? Do we enrich the number system beyond the reals? Should we take comparative probabilities as basic? Lots of debates to be had.

Rob Wiblin: Yeah, this might be a stupid, frivolous question, but is it possible to have a negative probability, or a probability above one? I’m just thinking that obviously we have imaginary numbers now, off the real number line. Is any of this coherent?

Alan Hájek: Well some people think so. Feynman thought there could be negative probabilities. And of course you’re not obeying the Kolmogorov axiomatisation, “Thou shalt have non-negative probabilities.” There are some thoughts along these lines, that there’s something that behaves probability-like, some function that seems to have probabilistic properties, and then you see that it sometimes goes negative. There are meant to be some physical examples of this, and then you say, “Ah, well those are the negative probabilities.”

Alan Hájek: There are some problems with this. Just to take one: consider the usual formula for independence. Two events, A and B, are independent just in case the probability of A and B equals the probability of A times the probability of B. Now suppose you have two independent negative-probability events: the probability of their conjunction, something negative times something negative, is positive. And that seems to be a problem if you thought the conjunction should itself be the sort of thing that can go negative. How does independence pan out? Huge issue we could get into there too.
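
As a tiny worked illustration of that worry, with entirely made-up numbers: suppose events A and B each had “probability” −1/2 and were independent in the standard sense. Then:

```latex
P(A \cap B) = P(A)\,P(B) = \left(-\tfrac{1}{2}\right)\!\left(-\tfrac{1}{2}\right) = \tfrac{1}{4} > 0
```

The conjunction of two negative-“probability” events comes out positive, which sits oddly with whatever reading made the individual events negative in the first place.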

Rob Wiblin: Are there any other downsides to Bayesianism that seem important, or weigh on you at all?

Alan Hájek: Here’s one thing that just bothers me a bit, and I’ll throw it out there. As a slogan, I’ll say, “Subjective Bayesianism is anchoring and adjustment” — and I need to explain what I mean by that. “Anchoring and adjustment” is a heuristic that people often use when estimating some quantity. They’re given a so-called “anchor” — some starting point for thinking about the value of that quantity — and then they adjust until they reach an estimate that they find plausible. The trouble is that sometimes the anchor is entirely irrelevant to the quantity, and it just should be ignored, yet it still influences the final estimate — the adjustment is insufficient.

Alan Hájek: There are a couple of classic examples I can give you. Tversky and Kahneman had a famous study. They asked people to watch the spin of a roulette wheel, which was rigged to land on either 10 or 65, and then they were asked whether the percentage of African countries in the United Nations was higher or lower than the number that they saw. And then they were asked to estimate the percentage. Those who saw a low number tended to give substantially lower estimates for the percentage than those who saw a high number. Of course they knew that the roulette number, the anchor, provided no information whatsoever about the percentage, yet it still influenced their estimate. And that just seems absurd, that just seems crazy.

Alan Hájek: There’s another famous study from Ariely et al. They asked MBA students at MIT to write down the last two digits of their social security number. And then they were asked whether they would pay this number of dollars for some product — say, a bottle of wine or a box of fancy chocolates, and so on. And then they were asked what was the maximum amount they were willing to pay for the product. Those who wrote down higher two-digit numbers were willing to pay substantially more. And of course, they knew that their social security number was completely uninformative about the value of the product, but still, they anchored on it, and it influenced their final valuation. So the idea is that the residue of the anchor remained, even after the adjustment of thinking, “Well, how valuable is this product, really?”

Alan Hájek: Now these seem to be paradigm cases of irrationality, but now consider a putative paradigm of rationality: subjective Bayesianism. Here you start with a prior — that’s your initial probability distribution before you get any information. And the only constraint on this is that it obeys the probability calculus — that’s the version I’m thinking of. That’s your anchor. Your prior is your anchor. And then you get some information and you update by conditionalising on it, as we say. So, your new probabilities are your old probabilities, conditional on that information. That’s your adjustment.

Alan Hájek: But the trouble is that your prior has no evidential value; it’s not based on any information. And you know this — that’s what makes it a prior. And often its residue remains, even after the adjustment. Now, we can imagine that your prior was even determined by the spin of a roulette wheel, or by your social security number — as long as it obeys the probability calculus — and still it influences your final probabilities, your posterior probabilities, as we say. Now the worry is: why isn’t that just as absurd as before? We were laughing at the people in the United Nations experiment, or the wine and chocolate experiment. What’s the relevant difference? And look, there are things that one can say, but I just put that out there as something that needs some attention.

Rob Wiblin: I think, in theory, if you get a lot of empirical information over time, then the influence of your starting point gradually becomes more and more irrelevant, as it’s washed out by all of these updates you’re making, conditionalising on things that you’ve observed. So that’s one thing — at least if you’re around long enough and collecting enough evidence to move away from the very first prior that you started with.

Rob Wiblin: Another difference is that, in theory, you’re meant to start with something like an uninformed prior — which is to say, a prior that is extremely agnostic, that isn’t really pretending to know all that much. Now, I think it’s a bit hard to define exactly what the appropriate uninformed prior is, but the hope is that your views are going to be very flexible initially. Because it would be foolish to think that before you’ve looked at any evidence, you should have a really strong view about things. So maybe that’s one way in which the prior should hopefully be doing not that much work, even if we concede that it is largely arbitrary.

Alan Hájek: Yes. Excellent. Both good replies. The first one was the washing out of the priors in the long run. And there are these famous convergence theorems for Bayesianism that, in the limit, under certain appropriate conditions, effectively the prior is completely washed out. And that’s good, but then, as Keynes said, “In the long run, we are all dead.” At any point in our finite lives, we will not have reached that limit, and the worry is that the residue of the prior will remain at all of these finite stages, which is all we ever have. So it’s nice to know that there are these theorems that we’ll get there in the end, but the problem is we’re never at the end.
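
A quick simulation makes both points vivid: the residue of the prior at every finite stage, and its washing out in the long run. This is just an illustrative sketch using conjugate Beta–Bernoulli updating; the Beta(50, 1) “anchor” prior and the trial counts are arbitrary choices for the example:

```python
import random

random.seed(0)

def posterior_mean(alpha, beta, flips):
    """Mean of the Beta posterior for P(heads), given a Beta(alpha, beta)
    prior and a list of observed flips (1 = heads, 0 = tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return (alpha + heads) / (alpha + heads + beta + tails)

# Simulate tosses of a fair coin
flips = [1 if random.random() < 0.5 else 0 for _ in range(10_000)]

for n in [0, 10, 100, 1_000, 10_000]:
    flat = posterior_mean(1, 1, flips[:n])       # maximally flat prior
    anchored = posterior_mean(50, 1, flips[:n])  # heavily biased "anchor" prior
    print(f"n={n:>6}: flat -> {flat:.3f}, anchored -> {anchored:.3f}")
```

At every finite n the two posteriors differ — the anchor’s residue — but the gap shrinks as evidence accumulates, which is the convergence behaviour Alan mentions.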

Alan Hájek: Regarding the second reply, and that’s certainly a good one: yes, I think actually that’s the way to go, that you want some sort of constraints beyond the probability calculus on the priors. And as you said, maybe uninformative priors, as we might say, maybe there are some constraints on what the priors should be — not just anything goes. And in fact, there’s a so-called uniqueness thesis that says there’s even exactly one — there’s only one rational starting point, which is somehow maximally uninformative, you might say. For example, the principle of indifference might kick in at that point. And now we get into a good debate about just how constrained priors should be.

Alan Hájek: So anyway, I raised my anchoring and adjustment worry as a concern for that completely unconstrained version of Bayesianism that just said, “Thou shalt obey the probability calculus — but beyond that, it’s OK, whatever you do.” It seems that you need to be a bit more constrained than that. And then, as you say, the analogy to anchoring and adjustment starts to go away. For example, we can’t just spin a roulette wheel, or we can’t just get your social security number and make that your prior. That’s not the right starting point.

Rob Wiblin: Yeah, we’ve only got five minutes left on stage. So I wanted to give you a chance to offer the audience a few criticisms of frequentism. So first thing: what is frequentism? And secondly: what are some of the problems that we haven’t yet mentioned with it?

Alan Hájek: OK, frequentism. Remember my taxonomy of objective probability, subjective probability, evidential probability? Frequentism would be a version of objective probability: out there in the world, mind-independent, there are these probabilities. What are they? They’re relative frequencies.

Alan Hájek: Take a simple case: what’s the objective probability that the coin lands heads? Let’s suppose it’s a fair coin. Well, you toss the coin a number of times, and see how it lands, and count the number of heads over the grand total. If that number turns out to be a half, then bingo, you’ve got a chance of 1/2 for heads. And now we generalise that. So in general, the probability of some attribute, relative to some reference class — attribute A relative to reference class R — is the relative frequency of As in that reference class.

Rob Wiblin: Makes sense, I think.

Alan Hájek: I think it may still be the most popular account among scientists.

Rob Wiblin: Yeah, it’s always scientists, isn’t it?

Alan Hájek: They need to talk to philosophers.

Rob Wiblin: Yeah, absolutely. On so many things. But OK, so this has an intuitive appeal, and it’s a bit like how you’re taught about probability at high school, or something like that.

Alan Hájek: Yes, that’s right. And we all know that there’s got to be some close connection between probability and frequency. Frequentism posits the tightest connection of all: identity. And now we could distinguish two kinds of frequentism. You might call it “actual frequentism”: you just look at what actually happens. You just in fact toss the coin some number of times, count the number of heads, divide by the total of trials, you’re done.

Alan Hájek: Then we could go hypothetical and say, “Well no, I really meant some long run of trials.” Maybe I didn’t get a long run, so I go hypothetical. I imagine, counterfactually, a long run of trials, and it’s the relative frequency I would get — maybe in the limit, if I have infinitely many trials.

Alan Hájek: We have two versions. And you probably want to ask why I don’t like frequentism. I think any version of frequentism has the following problem: I think it’s just built into the very notion of probability that fixing the probability of something is compatible with just any pattern, and any frequency of the corresponding outcomes.

Alan Hájek: Let’s do it for the coin toss: let’s suppose we have a fair coin, by which I mean the chance of heads is 1/2. I say that’s compatible with any distribution of heads and tails, including heads on every toss, tails on every toss, and everything in between. I can even tell you what the probabilities of those outcomes are: it’s (1/2)^n for each exact sequence of n trials.

Alan Hájek: But it seems you can’t say that if you’re a frequentist, because if the coin lands heads every time, then that’s probability 1, according to frequentism. So there are various problems. We talked about the problem of the single case earlier. There’s the problem of the double case: if I toss a coin twice, then I can only have probabilities of zero, half, and one. Offhand, I would have thought there could be other biases. And as I like to point out, it turns out that you cannot toss a fair coin an odd number of times according to frequentism — at least this actual frequentism — because just by definition, according to them, if it’s an odd number of times, you can’t have exactly a ratio of 1/2 of heads.
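
Here is a minimal check of that odd-number point — for an odd number of tosses, no sequence yields a heads frequency of exactly 1/2:

```python
from fractions import Fraction
from itertools import product

n = 3  # any odd number of tosses
# Enumerate all 2**n sequences; for a fair coin each has probability (1/2)**n
frequencies = sorted({Fraction(sum(seq), n) for seq in product([0, 1], repeat=n)})
print(frequencies)  # [Fraction(0, 1), Fraction(1, 3), Fraction(2, 3), Fraction(1, 1)]
# 1/2 never appears: an actual frequentist cannot toss a fair coin 3 times
```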

Rob Wiblin: It seems pretty weird.

Alan Hájek: That seems pretty weird. You’ll also have, I guess, a version of the gambler’s fallacy: suppose you somehow know that the chance of heads on this coin is 1/2, and you start tossing the coin, and you see a surprisingly long run of heads, as could happen. But if you know — if somehow God tells you — that the chance of heads is 1/2, then according to frequentism you know that tails must be coming, because they’ve got to make up for the run of heads that’s happened so far.

Alan Hájek: OK, that was actual frequentism. Now let’s go to the other kind, the hypothetical frequentism. But now we have some counterfactual. This is not what actually happens. If you were to toss the coin some large number of times, maybe infinitely many times, this is what would happen.

Alan Hájek: I think for starters, this is a very weird counterfactual. Imagine tossing an actual coin, like the 20-cent coin in my pocket, infinitely many times. What would that even mean? This coin would disintegrate before then, so we’d have to violate the laws of nature so that it would survive infinitely long. I think it’s very strange to think there’s a fact of the matter of exactly how the coin would land in this infinite sequence, and even what its frequencies would be.

Rob Wiblin: Yeah, so it’s trying to be this very practical theory — that’s kind of its virtue — but then it seems to involve this bizarre scenario that could never happen to back it up?

Alan Hájek: Yeah, exactly. And I’ll give actual frequentism credit: at least it was practical, anchored in the world, gave you numbers that you could ascertain. Now when the problems start piling up, and I gave you a few of them, you start retreating to this other version, hypothetical frequentism. But this seems to have other problems: it’s not practical, it’s not ascertainable, I can’t make sense of these counterfactuals. They seem to be about the wrong thing, about some idealised coin that would never disintegrate, and what have you.

Alan Hájek: And so put it all together and I think frequentism is looking pretty bad. I know there are lots of frequentists out there. Read my papers: Fifteen Arguments Against Finite Frequentism and Fifteen Arguments Against Hypothetical Frequentism. I think it’s in a bit of trouble.

Rob Wiblin: All right, well, we have to go upstairs, because other people need the stage here. But yeah, we’re going to go chat about ways of breaking expected value, counterfactuals, and your many objections to objective utilitarianism.

Alan Hájek: Cool.

Rob Wiblin: But for now, can everyone give a round of applause to Alan?

Alan Hájek: Thanks so much.

Rob Wiblin: All right, we’re back in your office, Alan. I was just saying, before we started recording, that in general I have a prejudice against doing interviews in front of live audiences, because my experience listening to other shows is that guests tend to kind of pander to the crowd a little bit. Especially if they’re talking about charged issues. They don’t quite have the courage to say things that the audience wouldn’t like.

Alan Hájek: I hope I didn’t pander too much to the crowd.

Rob Wiblin: No, fortunately we’re mostly safe on these topics, except perhaps frequentism. Although I see you probably have an unusually sympathetic crowd to your criticisms of frequentism.

Alan Hájek: Hopefully, yes.

Rob Wiblin: Cool. But now we can do the really hard-hitting, difficult, challenging stuff that we’re here for.

Alan Hájek: Go for it. I’m ready.

Rob Wiblin: So before we came in here, we were chatting about criticisms of frequentism. I guess one that you didn’t bring up that jumps to mind for me is that it seems very odd that frequentism is saying: we’ve got one particular coin; what is the chance that this coin is going to land heads or tails? It seems like it depends then on all of these other coins, or the same coin at other points in time. Why should the properties of this coin, or our knowledge of them, depend on things far away in space and time? It’s very odd in that respect.

Alan Hájek: Yeah. This radium atom: point to a particular atom, it decays with probability a half in 1,600 years. That seems to be an intrinsic property of this atom. Seems a little odd that its probability depends on how all these other atoms, maybe very far away in space and time, happen to go — decaying or not.

Rob Wiblin: I guess the atom case sharpens it, because with a coin you can flip it many times, but each radium atom can only decay once.

Alan Hájek: That’s it.

Rob Wiblin: And so you can imagine a scenario where there was only one radium atom. Or what if there were lots of radium atoms and then you’ve got some frequency, but then you shrink it down such that now there’s only one left. I guess the frequentists have to say now there’s no probability left anymore.

Alan Hájek: It’s just one or zero at that point.

Rob Wiblin: Yeah, exactly.

Alan Hájek: Yeah. It seems very strange that probability depends on these very extraneous facts. You’d think it’s just that the protagonist is right here and now: it’s this atom we are talking about.

Rob Wiblin: Yeah.

Alan Hájek: But it reminds me a bit of Hume’s theory of causation, about constant conjunction. Take a paradigm case of causation: like I put my hand in a flame by accident and I feel pain. It seems somewhat odd to me to say that what makes that causal claim true is a fact about these very disparate events across space and time: whether putting hands in flames was followed by pains across space and time. No, it seems like the protagonists are right here and now.

Rob Wiblin: I see. So this is an account of causality where you’re saying the flame causes the pain if, across sufficiently many cases — actual or hypothetical — the flame and the pain are correlated really strongly. And I’m saying, “What has that got to do with it?”

Alan Hájek: So Hume had a version of causation like that: his account involving constant conjunction. I think that frequentism, actual frequentism especially, is quite a lot like that. And I have the same reaction just as you did, that “Look, we should just be staring at my poor hand and the flame or the radium atom. It seems odd that its chance depends on these maybe very distant facts and how they pan out.”

Rob Wiblin: Yeah. OK, I guess causality is its own kettle of fish.

Alan Hájek: Another big topic here.

Rob Wiblin: Let’s go back to probability. I think that’s probably enough of frequentism; people can go and check out your papers demolishing it — 30 different ways, and maybe some extra ones.

Alan Hájek: My original paper was called “30 Arguments Against Frequentism.” And I sent it in and I was told it was a good paper, but it was much too long. In fact, it was twice as long as what they could publish. So they said that you need to cut it. Well, it was easy: Fifteen Arguments Against Finite Frequentism and Fifteen Arguments Against Hypothetical Frequentism.

Rob Wiblin: Brilliant. I guess people will often cite both and you get double the citations potentially.

Alan Hájek: Absolutely. I should have divided it some more.

Rob Wiblin: Well, apparently this was a big problem in genetics, where you would have people doing whole-genome studies — studies of the effects of different genes — and they would just break them into one paper for every chromosome.

Alan Hájek: There you go.

Rob Wiblin: So they could potentially get 30 different papers out of basically exactly the same study.

Alan Hájek: That’s how you pump up your citation count, your various markers of productivity.

Rob Wiblin: Yeah. Pushing on from frequentism, I’m wondering, are there other ways that I could in practice start reasoning differently? Start, in my daily life, thinking about the probability of Boris Johnson being deposed in different ways? The adjustment that I’m most familiar with seeing people make is going from these point estimates of probabilities to ranges and so on.

Alan Hájek: Yeah, good. I’d be glad to talk about that. That’s right, and you might say this sharp probabilism — where you’re supposed to assign a sharp real number — is just psychologically implausible. It’s not something that we could do.

Alan Hájek: Take again the case of my probability of rain tomorrow: it’s not sharp to infinitely many decimal places. What do I say? What’s the probability of rain? Well, it’s 0.6-ish. It’s in the region of 0.6. Now the first move you might make is to say, well, the probabilities should be intervals, maybe 0.5 to 0.7 in my case. But in a way, that just makes the problem of precision worse. So now you’ve got two precise numbers, the end points of the interval: 0.50000… and 0.70000… That doesn’t seem to quite fix the problem.

Rob Wiblin: Yeah. I suppose if you wanted to defend it, you’d say you’ve got two numbers now that you’ve chosen, but they’re not as important somehow. No, you’re not looking convinced.

Alan Hájek: Well, you could say that, or you could say these probabilities get determined by something else. Maybe there are the judgements that I make — van Fraassen would say this — and now you just look at all of the probability functions that respect my judgements. And they’ll form a set, probably not just a singleton set, but the set itself will have some sharp boundaries — and it could well work out that that set has a boundary of 0.5 and 0.7, and we represent my credence with an interval.

Rob Wiblin: I see, yeah.

Alan Hájek: And yeah, I might not be able to access these probabilities so easily, either. Maybe I’m bad at introspecting. As Williamson would say: maybe my credences are not “luminous” to myself, but I might still have them, and they might have these forms, like intervals or sets.

Rob Wiblin: I guess something that’s appealing about saying the probability of rain in Canberra tomorrow is between 0.5 and 0.7 in a practical sense is that it helps to indicate your level of uncertainty to other people. Whereas if you just say it’s 0.6, well, firstly people sometimes laugh at you because it sounds so precise. It sounds ridiculous to be so sure. They’re less likely to laugh at you if you give a range.

Alan Hájek: Yeah.

Rob Wiblin: I guess actually it does potentially indicate something technical, which is how quickly you would update your beliefs as you get new information. If you have a very wide range, you’re saying, “Well, a wide range of point estimates would be plausible and reasonable, and so as I see the weather report, I’m going to shift a lot.” Whereas it’s interesting if you say it’s 0.6, in some sense, you’re making a claim that you are 100% sure that it’s 0.6, and you’ll just never change your mind about that even after it rains or doesn’t.

Alan Hájek: Oh, I’m not sure you’re committed to that.

Rob Wiblin: No? OK.

Alan Hájek: I think it’s OK. You’ve got the 0.6 credence initially, and then you conditionalise, as we say — you update that as the evidence comes in. Maybe you see the rain, for example.

Rob Wiblin: I see, right.

Alan Hájek: That becomes a new sharp value of 1. That’s OK.

Rob Wiblin: But I suppose it would be a misunderstanding to interpret someone as saying the probability of it raining in Canberra tomorrow is 60% with 100% probability. That’s not what’s being claimed.

Alan Hájek: Well that brings us to another issue, with another choice point for varieties of Bayesianism: whether you allow higher-order probabilities. Do you have, for example, credences of credences, probabilities of probabilities in general? You could say no, that doesn’t make sense, or that maybe they collapse down to just the first-order probabilities. But you could say, “I have probabilities about various ways the world could be, including what my credences are.” Because that’s part of the world. Why not have intermediate credences for that too?

Rob Wiblin: I suppose the thing that it feels like you’re doing when you introspect, and try to give these ranges to actual questions, is you plug in different numbers in your mind and then see how much the mind revolts at that number. So if you say the probability of rain tomorrow is 1%, then you’re like, “No, that’s too crazy.” Then you have a level of craziness to all these different numbers, and then that gives you some sort of distribution over what numbers are plausible. But because you have kind of imperfect access to what you think you ought to believe, that’s maybe what you’re measuring.

Alan Hájek: And maybe we could compare your credences to, say, lotteries. What do you think is more probable: rain tomorrow, or a 100-ticket lottery where one of tickets 1 to 60 is the winner? It feels maybe slightly higher than that, but now I make it 1 to 70, or maybe slightly lower than that, and maybe I can somehow home in on what my probability is. That’s a way of eliciting credence from myself.

Rob Wiblin: So are philosophers working on any adjustments to how we do these things that could plausibly affect how I reason about uncertain events on a day-to-day basis that might actually help me live better?

Alan Hájek: Starting, I suppose, with the imprecise credences. Maybe it’s too much to ask of you to have the sharp probabilities and update by conditionalising all the time, but this was meant to be a friendly amendment. Jeffrey calls it “Bayesianism with a human face,” and it seems more psychologically plausible that you’ve got these ranges of probabilities — intervals or sets. So that’s one kind of humanising.

Rob Wiblin: Yeah. You offered the objection that now you’ve just chosen two numbers. You’ve made it so that now you’ve got double the problems. Is that a good objection in your view?

Alan Hájek: Well, I think it has to be taken seriously, but now maybe things start getting worse. I don’t want to have this exact sharp interval from 0.5 to 0.7 in my example. Maybe what I should have is a probability distribution over numbers.

Rob Wiblin: So it’s smooth.

Alan Hájek: Smooth, and maybe it hits a peak somewhere in the middle, near 0.6, and it tapers off towards the edges. And maybe it doesn’t just stop sharply at 0.5 and 0.7. But now we just raise the problem again: do I really have this sharp — exactly that — probability function over the range of values?

Rob Wiblin: Surely you’re unsure about the value of the probability that you should give to each probability — and so infinite recursion.

Alan Hájek: Yeah, and we have uncertainty all the way up or all the way down. And then one reply might be, “Look, we are just representing things; we’re providing models. So don’t reify this so much. Don’t take it all so literally, that this has to be in your head.” But these models, with various levels of sophistication, may better or worse represent what’s going on in your head.

Rob Wiblin: Things my brain is doing.

Alan Hájek: Something like that.

Rob Wiblin: Right. So infinite recursion, actually, it’s not always a problem, right? You could just say, “There is just uncertainty distributions all the way up.”

Alan Hájek: Yeah, right.

Rob Wiblin: And maybe that’s actually fine and it cashes out. I suppose you could in fact represent those uncertainties as far as you like, so that they could all cash out to this point estimate. But if you like, you can go one level up and represent uncertainty there, and you can go another one. At some point, it doesn’t feel like it’s adding any value to do it any further. Maybe it does at first.

Alan Hájek: Maybe things stop at a fixed point. Or maybe you get a bit more information at each level you go back. This is another of the heuristics, by the way: infinite regress is a technique often used. And often infinite regress is thought to be a bad thing, and it’s a fatal problem for a view if it faces such a regress. But that’s not always clear. Some notions seem to be well understood in terms of infinite hierarchies like that.

Alan Hájek: Lewis’s notion of convention is like that: it’s understood in terms of common knowledge or common belief, which is a recursive thing — each party knowing what the others believe, and so on for higher orders. And we don’t just say, “Well, that can’t be right because we have an infinite regress.”

Rob Wiblin: Especially if the regress converges on something where it just becomes the same every time.

Alan Hájek: Yeah, that’s a fixed point.

Rob Wiblin: Oh, the fixed point. And then you’re like, “Well, that’s just fine.”

Alan Hájek: Yeah, that’s right.

Rob Wiblin: Interesting. OK, so we’ve got these — what are these called again? The range of probabilities? What’s the term for that? The probability distribution on the probability.

Alan Hájek: Sometimes we call it the “representor.” The representor is the set of probability functions that represent your credences. They’re meant to be all of the precisifications of your imprecise credence — functions that are faithful to it, but that fill it in in all of the precise ways.

Rob Wiblin: So this really does feel like a friendly amendment to me. I feel it’s the same spirit, and we’re just going to make it a little bit better or represent high levels of uncertainty. All good.

Alan Hájek: That’s right. And again, this takes me back to earlier, when I was saying that I’m a Bayesian on Mondays, Wednesdays, Fridays, and so on. Look, so many things deserve the name Bayesian and I shouldn’t really say that I’ve jumped ship on Tuesdays and today, Saturday, just because I make certain choices in that tree of choice points.

Rob Wiblin: Yeah, makes sense. Are there any more radical departures? Or are there people who are probability nihilists?

Alan Hájek: Yes. I suppose you could say, for a start, that when it comes to credences, all there is are these all-or-nothing states — beliefs, as we might say.

Rob Wiblin: Well, sometimes there are the kind of people who are just like, “There is what is actual, and that has probability one. Everything else is probability zero.” And that’s kind of all that there is to say about probability: you should believe the thing that will happen or the thing that is true, and you shouldn’t believe the thing that is false, and that’s kind of the end of the story.

Alan Hájek: And that just shows you that objective norms are sometimes tough to live up to.

Rob Wiblin: Yeah, right? Not easy.

Alan Hájek: You ought to be omniscient. It’s tough to be omniscient, but that’s not the fault of the norm. It’s just that the norm is hard to live up to. Well, that does seem a little extreme, I guess.

Rob Wiblin: It’s unhelpful even if it’s right.

Alan Hájek: Yeah. It’s certainly not giving you good advice, for example. I think you’re asking are there alternatives?

Rob Wiblin: Yeah. Are there any more radical departures? So let’s say we come back in 500 years, philosophy has advanced a tonne. Are there any research directions that could take us quite away from our current practices regarding probability? Or have we explored a decent fraction of the space here, and now it’s more refinements?

Alan Hájek: Now I’m very immersed in the probabilistic paradigm, but there are definitely alternatives, ways of representing belief. In fact, we could divide them into two camps: representations of full belief, as we might say — all-or-nothing belief — and then representations of partial belief, the thing that comes in degrees. And on the first side, the full belief, you’ve got things like so-called AGM, this is belief revision theory.

Rob Wiblin: I don’t think I’ve heard of this at all.

Alan Hájek: That’ll take seriously just having these all-or-nothing attitudes: beliefs that you add to some corpus that you have.

Rob Wiblin: Oh, so is this where you know some things with certainty, and then everything else you don’t know?

Alan Hájek: Or believe them at least.

Rob Wiblin: OK.

Alan Hájek: Epistemic logic is like that — the logic of knowledge — all or nothing. And then on the side of this graded thing, the partial beliefs, you’ve got rivals to the probability-based account like Dempster-Shafer theory, possibility theory, plausibility theory, ranking theory. So there’s a lot of schools of thought. And I have to confess my own immersion in one camp, and that’s the one I’ve studied. But I have to acknowledge and celebrate the existence of these alternatives. They’re important in their ways too.

Rob Wiblin: OK, so that is quite a lot on probability. Let’s move down the track now to an application of probability estimates, which is expected value. We’ve got a whole cluster of questions around that. To start though: what is expected value? It’s a term that we throw around a lot, but I think it’s actually quite technical, in a way.

Alan Hájek: Yeah, that’s right. And that’ll quickly turn into a discussion of expected utility. It comes up especially in decision theory, expected utility theory. You have a choice among certain options: what should you do? Let’s assume that the world could be in various ways — there are various states of the world, and you don’t have control over what they are, but you assign probabilities to the various states. And the combination of a particular action of yours and the state of the world together determines an outcome. You can value the outcomes more or less, and we could put numbers which measure how much you value them.

Alan Hájek: Now, the expected value is a weighted average of these values, where the weights are the probabilities. So, that turns out to be a sum of products: you take probability multiplied by value and add across all of the states, and that weighted average is the expected value. And now think of that as a figure of merit. That’s how choice-worthy each of your actions is.

Rob Wiblin: “Choice-worthy” is another one of these things that no normal, non-philosopher would say.

Alan Hájek: That shows you where I’ve been.

Rob Wiblin: Yeah.

Alan Hájek: And now you should maximise that quantity: do whatever action — or actions, because maybe it’s not one — will yield the highest value along that score.

Rob Wiblin: Cool. So, I guess to make that concrete and really simple, if you had a bet where you got $1 if a coin comes up heads, $2 if a coin comes up tails — and you think it’s a fair coin — then the expected value, in dollars in this case, is $1.50.

Alan Hájek: Yes. That’s right.
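
In code, that weighted-average definition is a one-liner. A minimal sketch of Rob’s coin bet — the function name and representation here are just for illustration:

```python
def expected_value(prospect):
    # prospect: list of (probability, payoff) pairs over exclusive states
    return sum(p * x for p, x in prospect)

# $1 on heads, $2 on tails, fair coin
print(expected_value([(0.5, 1.0), (0.5, 2.0)]))  # 1.5
```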

Rob Wiblin: And of course, normally we would rather talk about utility or wellbeing or something, because we don’t value dollars all the same, especially as the amounts become larger. Or maybe you don’t value dollars at all. So utility is the thing where it ultimately cashes out.

Alan Hájek: That’s right. And very soon we’ll talk about some motivations for why we should really shift from payoffs, just measured in terms of dollars, to this other thing: utility.

Rob Wiblin: Expected value is just super important to the project of effective altruism and doing good, because any project you take on is going to have a very wide range of possible outcomes, at least if you’re doing anything remotely interesting. It could include bad outcomes, potentially lots of neutral outcomes, possibly things go super well and you have a really good outcome.

Rob Wiblin: And you’re just going to be absolutely screwed, absolutely paralysed, if you don’t have a way of making decisions under that uncertainty where you’re able to weigh up, “What should I do, given that things might go badly, or might go well, or might be neutral?” And expected value provides this extremely natural way of weighing those things up, where it says, “Something is twice as likely is twice as important. Something that’s good, if it’s twice as likely, is twice as good. And that should get twice as much weight in your decision.”

Rob Wiblin: So, in everyday situations — like what to buy at the shops, what to watch on Netflix — the expected value approach produces extremely natural answers that seem sensible to basically everyone, at least in principle. Even though, of course, we’re almost never consciously calculating expected value. But I’ve actually heard from neuroscientists who have been doing research on how we make decisions, and apparently there is a process in the brain that effectively represents expected value. Where particular neurons will fire based on their expected reward, and then the one that fires most frequently or hardest in some way basically ends up winning the decision at some choice point in the brain. Anyway, I probably just said something that’s completely wrong.

Alan Hájek: Well, no. In fact, I think it goes even further. I think it’s not just people, I think even bees are thought to do this too.

Rob Wiblin: Oh, absolutely. It maybe makes sense that it has to be this way, because evolution’s going to push us towards doing things that get us the most food or whatever. And obviously when you have different options on the table, the brain’s got to choose the bigger one.

Alan Hájek: Yep, and you’ll be selected against if you don’t make the right choices that roughly maximise expected value, or close to it.

Rob Wiblin: So maybe we can say expected value is probably deeply embedded in how we’re wired in some sense, even though we’re not actually doing the math most of the time. It’s embedded in our instincts about what risks to take and which ones not to.

Rob Wiblin: But philosophers have looked at a whole bunch of much stranger situations to see if they can make expected value break — whether they can make it stop working, or at least arguably stop working. One of these that you are really into is the St. Petersburg paradox, which some people will have heard of, but other people won’t have. Can you lay out what the St. Petersburg paradox is?

Alan Hájek: Absolutely. A fair coin will be tossed repeatedly until it lands heads for the first time, and you will get escalating rewards — the longer it takes, the better for you — as follows: if the coin lands heads immediately, you’ll get $2; if it takes two trials for the first heads, so it’s tails and then heads, you’ll get $4; if it takes three trials, you’ll get $8. In general, if it takes n trials for the first heads, you’ll get 2^n dollars, and so on.

Alan Hájek: All right. How good is that, at least in terms of expected dollar amount? Well, let’s do the calculation. With probability 1/2, you get $2, and multiply those: 1/2 times 2 is 1. With probability 1/4, you get $4: 1/4 times 4 is 1. And now we keep adding terms like that. It’s 1 + 1 + 1. 1/2^n times 2^n is 1, forever. 1 + 1 + 1 added up forever is infinity. So the expected value of this game in terms of dollar amount is infinity.

Alan Hájek: So, prima facie, it looks like you should be prepared to pay any finite amount to play the game just once. And you should think you’re getting a fantastic deal paying just a finite amount. But that’s very unintuitive, because, very probably, you’ll make a rather small amount. For example, most people wouldn’t pay $100 to play this game, it seems, because very probably they will make a lot less than that.
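
Both halves of the puzzle are easy to see numerically: the expected value grows without bound, while the actual winnings are typically tiny. An illustrative sketch — the seed, the 30-term cutoff, and the number of simulated plays are arbitrary choices:

```python
import random

random.seed(1)

# Partial sums of the expected value: each term (1/2**n) * 2**n contributes 1
print(sum((0.5 ** n) * (2 ** n) for n in range(1, 31)))  # 30.0 after 30 terms

def play_st_petersburg():
    """Toss a fair coin until the first heads; pay $2**n if it took n tosses."""
    n = 1
    while random.random() < 0.5:  # the coin keeps landing tails
        n += 1
    return 2 ** n

payoffs = sorted(play_st_petersburg() for _ in range(100_000))
print(payoffs[len(payoffs) // 2])  # median payoff -- typically $2 or $4
```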

Alan Hájek: And therein lies a paradox. There are various paradoxical aspects of this game, and it is paradoxical that every possible outcome is finite. You know you’ll get a finite outcome, finite payoff, and yet you value it infinitely. How did that happen?

Rob Wiblin: Something’s gone wrong here, it seems.

Alan Hájek: Yeah. It reminds me of Garrison Keillor — is it Lake Wobegon? — where “all of the children are above average.” So in this case, all of the payoffs are below average, so to speak.

Rob Wiblin: I see. Right, right. It reminds me of the paradox that almost all numbers are large: no matter what cutoff you choose for counting a number as small, almost all numbers are larger than that.

Rob Wiblin: Yeah. OK, so the St. Petersburg paradox seems weird. It seems like we should value this thing infinitely, but we do not. How can we fix it?

Alan Hájek: Absolutely. Just look at the moving parts in expected value theory, and we could tweak each of them. I think there are five different ways of replying to the St. Petersburg paradox.

Alan Hájek: We could do something to the probabilities. So don’t just stick in the probabilities as they were before — maybe some function of them. And Lara Buchak has a very nice risk-weighted expected utility theory, which will tweak the probabilities.

Alan Hájek: Now look at the payoffs. Look at these rewards, which were dollar amounts. Well, maybe we should tweak that. And this was Bernoulli’s solution, actually: that how much you value things is not just a linear function — in this case, dollar amount. There’s what we call diminishing marginal utility: the richer you are, the less you value some incremental amount like an extra dollar. In fact, he thought that how much you really value the money goes more like by the logarithm of the dollar amount rather than by the face value, the dollars themselves. And it turns out if you replace the dollar amounts by the logarithm of their amounts, then you get a convergent series. You get a sum that is finite.
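
For the record, the convergence Alan mentions is easy to verify. With logarithmic utility, the St. Petersburg sum becomes:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n}\,\log(2^n)
  \;=\; \log 2 \sum_{n=1}^{\infty} \frac{n}{2^n}
  \;=\; 2\log 2 \;\approx\; 1.39
```

A finite value — so Bernoulli’s move does tame the original game, though not, as comes up shortly, the “revenge” version with faster-growing payoffs.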

Alan Hájek: Then we can tweak the formula, the combination rule. So, previously it was this weighted average. Maybe we could do something to that. Well, there would be more radical departures like maximin. It’s got nothing to do with expected value, but maybe that’s the decision rule we should follow.

Alan Hájek: An alternative that I like was introduced by an economist, Chew, and studied and developed further in joint work by Chris Bottomley, a former ANU student, and a former PhD student of mine, Tim Williamson. (Not the famous epistemologist at Oxford, Tim Williamson, but funnily enough, the younger Tim Williamson is also at Oxford now, and I’m sure he’ll be famous soon too!). The theory is called weighted linear utility theory.

Rob Wiblin: What are the other options on the table?

Alan Hájek: Now, hold fixed the combination rule, which was this sum of products, and change the operation you apply to it. So, previously it was maximising that expectation. Well, maybe we don’t have to maximise it. An alternative would be to minimise it. That would be a pretty crazy theory. But more plausible would be to satisfice — so it’s just good enough to get a sufficiently high expected value. You don’t have to literally maximise it.

Rob Wiblin: I can see the appeal of satisficing. Satisficing would be maximising the probability of being above some particular value. Is that right?

Alan Hájek: Well, it may not necessarily maximise it, but just provided you’re maybe sufficiently high up in the ordering of your actions, that’s good enough.

Rob Wiblin: Oh, so then you stop valuing any improvement beyond that?

Alan Hájek: Yeah, something like that.

Rob Wiblin: So is this the same as having bounded utility in a way? Bounded utility would be, you say, “My wellbeing can’t go above some particular level, so any money beyond that would be worthless.”

Alan Hájek: I was thinking it was different, that maybe you were allowing unbounded utility, but you’re not insisting on maximising this overall quantity. You just say “near enough is good enough.” Benson & Hedges cigarettes used to have this slogan, “When only the best will do.” And satisficing says, “Oh no, you don’t need the best.” And in fact, Voltaire said that the best is the enemy of the good, and maybe you don’t always have to strive for the best.

Rob Wiblin: So this would get us out of the St. Petersburg paradox, because at some point, rather than just adding 1 + 1 + 1, you’d say, “Well, beyond this point, I would be above my satisficing level, and now I don’t value any additional wins.”

Alan Hájek: And if I instead just received $30, maybe that would be good enough, and I don’t have to play the game.

Rob Wiblin: So changing the decision rule. What other options are there?

Alan Hájek: And I’ve left for last the one that I’ve actually argued for in print, so I’d better say something for it, which is: biting the bullet. And how do you do that? Well, maybe it’s not so crazy after all to value the St. Petersburg game infinitely. And here’s an argument for that. By the way, I’ve made a switch to expected utility theory, where I’m now replacing the dollar amounts, the values, with how much you value them in the utility sense — and that’s the quantity of expected utility that you’re maximising.

Alan Hájek: Let’s agree that expected utility theory is plausible, at least for the finite cases, and somehow it went wrong, it seems, in the infinite case. Well, did it? Imagine various truncations of the St. Petersburg game. For example, if the first heads does not happen by the 10th toss, we call the game off and we go home. That’s the end of the game. Well, it seems that the value of that game is 10. All right?

Rob Wiblin: Yeah.

Alan Hájek: Now truncate at the 11th toss. It seems the value of that’s 11. Truncate at the 12th toss, the value is 12, and so on. Now, the St. Petersburg game is strictly better than all of these. This is so-called “dominance reasoning” that, come what may, the St. Petersburg game is at least as good as each of these, and with some probability it’s genuinely better. So, the St. Petersburg game, it seems, is better than 10. It’s better than 11. It’s better than 12… Keep going. It’s better than all of them.

Rob Wiblin: Yeah. It’s better than any integer.

Alan Hájek: Better than any integer. So that seems a reason to value it infinitely. And that’s a way of biting the bullet.
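
A quick check of the truncated values in that dominance argument — the game called off at toss k (with nothing paid if heads hasn’t arrived by then) has expected value exactly k:

```python
def truncated_ev(k):
    # St. Petersburg truncated at toss k: the first k terms, each worth 1
    return sum((0.5 ** n) * (2 ** n) for n in range(1, k + 1))

for k in (10, 11, 12):
    print(k, truncated_ev(k))  # 10 10.0 / 11 11.0 / 12 12.0
```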

Alan Hájek: But there’s a revenge problem. Maybe we’ve solved the original St. Petersburg problem, but there’s a St. Petersburg-like game that we can easily introduce which will thwart even this proposal. Now let the payoffs go up not exponentially as they did in St. Petersburg — let them go up super-exponentially. The payoffs are not 2^n. They’re 2^(2^n). Now, take logs as Bernoulli would have us do. Take the log of 2^(2^n), and you get 2^n. You’re right back where you started.
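
Spelled out — using log base 2 for simplicity — the revenge calculation runs:

```latex
\sum_{n=1}^{\infty} \frac{1}{2^n}\,\log_2\!\left(2^{2^n}\right)
  \;=\; \sum_{n=1}^{\infty} \frac{2^n}{2^n}
  \;=\; \sum_{n=1}^{\infty} 1
  \;=\; \infty
```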

Rob Wiblin: So, now it’s a sequence again of adding up 1 + 1 + 1 + 1… utility this time.

Alan Hájek: Yeah, that’s right. So, it’s not enough just to have diminishing marginal utility. It seems really what this solution is asking for is bounded utility, so I can’t just keep on ramping up the payoff sufficiently to get arbitrarily high utility. That’s the solution now to be discussed. Should utility be bounded? And Arrow and Hardin and Aumann, and various luminaries have advocated this solution. And it’s actually implicit in various theories too, that really, utility is bounded.

Rob Wiblin: So, this seems like a super-compelling response, that we don’t value each dollar equally as much. And this captures, I think, the intuition for why people don’t want to pay so much. Because they’re like, “Even if there’s some infinitesimal probability of me winning an enormous amount of money, I just don’t value the money enough to make it worth it. By then I’m so unfathomably rich, more money is not worth very much to me.”

Alan Hájek: And that may be true of humans. We probably have a saturation point, and there’s only so much we can value things given our finite heads and so on. But there’s still, I think, an in-principle problem. You might ask, why should utility be bounded? Various other quantities are not bounded: length is not bounded, volume is not bounded, time, spacetime curvature — various things are not bounded. And why should utility be? Normally when you do have a bounded quantity, you can say why it is and you can say what the bound is. Think of, say, angle: if you think of it one way, angle is bounded by 0 to 360 degrees, and it’s easy to explain that. Probability is bounded with a top value of 1, bottom value of 0. Not so easy to say it in the case of utility.

Alan Hájek: And the problem gets worse if we make utility depend on one of these other unbounded quantities in what seems to be an unbounded way. Here’s a poignant example: I had a student at Caltech who hated his hometown so much that he said that, for him, utility was distance from his hometown. The further away the better. And then length, being unbounded, then gave him unbounded utility. He was joking, but you see the point, that in principle there could be these relationships between utility and some unbounded quantity that would yield unbounded utility.

Alan Hájek: Also, in this case, we’re talking about the rationality of individual action: what should you do? But fairly soon expected utility–like reasoning applies to, say, population ethics. And you can imagine a St. Petersburg–like paradox where the loci of value are different people in a population, and we can run a paradox for that. Now the analogical replies don’t seem so good — diminishing marginal value of people doesn’t sound so good. You want each new person to count equally. And you certainly don’t want to bound, it seems, the total value across people as the population grows.

Rob Wiblin: Yeah. I mean to start with, that has the counterintuitive conclusion that how valuable it might be to create an extra person on Earth might depend on how many aliens there are or something like that. Or it’s like you have to know how many beings there are in the entire universe in order to tell how good it is to add an extra one, which seems odd.

Alan Hájek: Yeah. That’s it. So, for all these reasons, I don’t like the bounding utility solution. And that’s why I look elsewhere, and even bite the ultimate bullet, at least in print, that maybe that verdict from expected utility theory is not so bad.

Rob Wiblin: OK, so you tend towards just biting the bullet and saying, “We should maybe just accept this.” But it doesn’t seem like in an actual, practical situation — “Would you pay an infinite amount? Would you pay any finite amount if someone actually came up and offered this to you?” — it doesn’t seem very action-guiding to bite the bullet, in a sense.

Alan Hájek: And yet I found something attractive about that dominance reasoning that was iterated.

Rob Wiblin: It makes sense. Yeah, I see the argument. But yeah, where does it leave us if you’re willing to bite the bullet on something like that?

Alan Hájek: Yeah.

Rob Wiblin: I suppose one thing you might say is that, in any practical situation where someone came up and offered you this game, you wouldn’t believe them.

Alan Hájek: Now we get into Pascal’s mugging, I think.

Rob Wiblin: Yeah. So then it starts to approach this other thing, which some people in the audience will be familiar with, this Pascal’s mugging situation. But this might be an escape or a way of reconciling biting the bullet with not actually playing the game in practice: they would say, “Well, if I was 100% sure that I was playing the St. Petersburg paradox game with a particular kind of setup, then yes, I would value it at infinite dollars. But because I don’t believe that I ever am, because I don’t even think it’s possible in the universe for someone to deliver these gains, then I’m not actually compelled in a real situation to take it.”

Alan Hájek: Let me make the St. Petersburg game more paradoxical.

Rob Wiblin: OK.

Alan Hájek: So, in the first telling of the story, you just genuinely believe this offer and you think you should take it at any price. Richard Jeffrey famously says, “Anyone who offers you the St. Petersburg game is a liar, because they’re pretending to have an indefinitely large bank account.” And I suppose that’s true, that we would think someone who offers you this game is a liar. But I don’t think that gets us out of the paradox so easily, because the paradox hits you with its full force as long as you would assign just some positive probability to the offer being genuine.

Alan Hájek: Suppose someone comes up to you and offers you the St. Petersburg game, and you give probability 0.000000000001 to them telling the truth. As we would ordinarily say, you think they’re a liar. You’d give overwhelming probability that they’re not telling the truth.

Alan Hájek: But now think about that 0.000000000001 and the paradox hits you with its full force, because now you multiply that by infinity. And even that extremely low probability of the St. Petersburg game has infinite expected value for you, given your assigning positive credence, however small. One in a googolplex will still keep the game alive enough that you should value that prospect infinitely. So, it seems the only way to escape the paradox is to genuinely give probability zero to it. This brings us back to our earlier discussion that our probability zero should be reserved for impossibility, and that anything that’s possible should get positive probability — perhaps one in a googolplex, but something positive. All right?

Rob Wiblin: I guess some people might be inclined to say, “Well, an infinite payoff or infinite value is impossible in the universe as it is.” But we don’t know that.

Alan Hájek: Maybe it’s impossible. But that’s right: are you so sure that you really can zero it out? And now the way I like to put this is it’s like a dilemma: you can either be, so to speak, practically irrational or theoretically irrational. Now, if you assign any positive probability whatsoever to the St. Petersburg game, then the infinity clobbers you, and you think you’re already enjoying infinite expected utility right now. And that seems practically irrational. You’ll think pretty much anything you do is infinitely good because there’s some prospect that you’ll be playing the St. Petersburg game at the end. In terms of expected value, it’s infinitely good. That’s a worry on the practical side.

Alan Hájek: Now, the solution I’m imagining is you give probability zero, but now the worry is that that’s theoretically irrational, because your evidence is not so decisive. I mean, can you really rule it out with such confidence — namely, giving it the same probability as a genuine impossibility? It seems, well, it’s a contingent matter.

Rob Wiblin: It’s conceivable.

Alan Hájek: Yeah, it’s conceivable. You’ve got all this evidence against the St. Petersburg game, but not so decisive, it seems, that it just absolutely rules it out.

Rob Wiblin: Yeah. It feels like structurally, this path down to solving the St. Petersburg paradox has become extremely similar to Pascal’s wager.

Alan Hájek: Absolutely.

Rob Wiblin: OK, so I guess many people will have heard of Pascal’s wager at some stage, but maybe do you want to repeat it? Give everyone a refresher?

Alan Hájek: Delighted to, because I’ve thought a lot about Pascal’s wager. This is Pascal’s argument for why you should believe in God or cultivate belief in God. And just to locate it historically, we should contrast Pascal’s wager to predecessors, which purported to establish the existence of God, prove the existence of God. I’m thinking of things like the ontological argument: St. Anselm and Descartes had one, Thomas Aquinas had five ways, Descartes had a cosmological argument. And there the conclusion was: God exists.

Alan Hájek: Pascal won’t have a bar of this. He says, “Reason can decide nothing here.” You can’t just by some clever proof establish the existence of God. But he turned his attention to the attitude we should have to the existence of God. Should you believe in God or not? That’s now a decision problem, and that’s why it’s relevant to our discussion about decision theory. And he argued that you should believe in God, or at least wager for God, as he said. Think of that as “cultivate belief in God.” The short version of the argument is because it’s the best bet. And in fact, Hacking writes that this was the first-ever exercise of decision theory.

Rob Wiblin: Oh, right. Huh.

Alan Hájek: Which is interesting, because of all the cases, this is such a problematic case for decision theory, which is ironic.

Rob Wiblin: Yeah. We’re opening with the paradox, basically.

Alan Hájek: Opening with the paradox. Anyway, Lakatos said that every research programme is born refuted, and maybe you could say that of this very case: decision theory was born refuted with a problematic case.

Alan Hájek: Here’s how the argument goes. There are two ways the world could be: God exists, or God does not exist. Two things you could choose to do: Believe in God, or not believe — or, as Pascal says, wager for God and wager against God. And here are the payoffs: If God exists and you believe in God, you get salvation. Let’s call it that. Infinite reward, infinite utility — “An infinity of infinitely happy life,” as Pascal says. And now, in every other case — where God does not exist or you don’t believe in God — you get some finite payoff. And there’s some controversy about the case where God does exist and you don’t believe: maybe you get negative infinity, maybe you have infinite damnation.

Rob Wiblin: It’s sufficient to put zero there. Isn’t it? Or not?

Alan Hájek: I think Pascal himself in the text is telling us that really, that’s a finite term: the case where you wager against God, or don’t believe in God, and God exists is only finite in utility, not infinitely bad.

Alan Hájek: OK, so that’s the first premise. That’s the decision matrix, as we say. Those are the utilities: infinity for believe in God, God exists; or wager for God, God exists. Finite everywhere else. Then, the premise about the probability: the probability that God exists should be positive. So your credence, as we would say, should be positive.

Rob Wiblin: Nonzero.

Alan Hájek: Nonzero, as we might say. It’s possible that God exists, so you should respect that by giving a positive probability. This theme keeps coming up. And now Pascal does what we recognise as an expected utility calculation, and just does the sum. You’ve got infinity times some positive probability, plus some finite stuff. Add it up, you get infinity. So, it looks like wagering for God, believing in God, has infinite expected utility. And wagering against God, not believing in God, the expected value was some finite stuff plus some finite stuff, which is finite. Infinity beats finite. Therefore, you should believe in God. That’s Pascal’s wager.
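
To make the arithmetic concrete, here is a minimal Python sketch of the wager as just described. This is an illustration, not anything from the conversation: the credence and the finite payoffs are placeholder numbers, with float("inf") standing in for Pascal’s infinite utility.

```python
# Illustrative sketch of Pascal's decision matrix. The finite payoffs
# and the credence p are placeholders; float("inf") plays the role of
# Pascal's infinite utility.

INF = float("inf")

def expected_utility(p_god, u_if_god, u_if_no_god):
    """Expected utility of an act, given credence p_god that God exists."""
    return p_god * u_if_god + (1 - p_god) * u_if_no_god

p = 1e-100  # any positive credence at all, however tiny

eu_wager_for = expected_utility(p, INF, 10)     # salvation if God exists
eu_wager_against = expected_utility(p, -5, 20)  # finite payoffs everywhere

print(eu_wager_for)      # inf: infinity swamps the tiny probability
print(eu_wager_against)  # ~20: finite
```

However small the positive credence, multiplying it by infinity returns infinity, so wagering for God comes out ahead of any finite-valued act.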

Rob Wiblin: I see. So, is this structurally analogous to the St. Petersburg paradox, as you’re biting the bullet on it? Or are there any differences here that are important?

Alan Hájek: Interesting. It’s structurally similar in that infinite utility is what you get in the punchline. Notice we got to it in a different way in Pascal’s wager from St. Petersburg. In St. Petersburg, we were adding finite terms and every possible payoff was finite, but just because of the way they’re summed, you get infinity. In Pascal’s wager, it’s different: you just get this single hit of infinity; this one possible outcome that just gets you the infinite utility in one shot. That’s a structural difference, but I think there are other parallels here.

Rob Wiblin: Yeah. I guess in both cases, the infinity is really messing us up here, because even a tiny possibility of an infinity just swamps everything else.

Alan Hájek: Absolutely.

Rob Wiblin: Unless you literally assign zero probability to it, then it’s basically the only thing that matters in the entire process.

Alan Hájek: Exactly. Yeah, yeah. Now let’s see how this plays out. A strict atheist who gives probability zero to God’s existence is not going to be moved by Pascal’s wager. But that may seem overly dogmatic. Surely you should give at least some probability to God’s existence. Maybe extremely small; maybe one in a googolplex, as you might say, though surely not really that small. And as soon as you give it positive probability…

Rob Wiblin: You’re toast.

Alan Hájek: You’re toast. Or you have to wager for God.

Rob Wiblin: Yeah.

Alan Hájek: But I think there is now a revenge problem for Pascal, as I like to point out. Pascal’s wager is invalid. Let’s grant him his premises. So we grant him the matrix of utilities, as I said before. Grant him you should give positive probability to God’s existence. Grant him that you should maximise expected utility. Fine. Does not follow that you should wager for God by those lights. Why not?

Rob Wiblin: Yeah. Why’s that?

Alan Hájek: Here’s an alternative strategy, and Pascal’s definitely not recommending this. Toss a coin. Heads, you believe in God. Tails, you don’t. OK?

Rob Wiblin: Yeah.

Alan Hájek: What’s the expected utility by Pascal’s lights for that? With probability 1/2, the coin lands heads, and then you get the infinite expectation that Pascal was talking about. With probability 1/2 you get some finite expectation. The expectation of this “mixed strategy,” as we say, is still infinity. So, by the lights of expected utility theory, this is equally good. This is another way to get infinite expected value.

Rob Wiblin: Hoisted on his own petard.

Alan Hájek: Well, that’s it. That’s right. And I’ve just started.

Rob Wiblin: Yeah, I was going to say, won’t you have a problem that’s like, you flip the coin and you get whatever result says that you shouldn’t believe in God. Now it seems like you want to flip the coin again.

Alan Hájek: Yeah. And that’s often a problem with mixed strategies. Maybe you don’t like the way the coin lands and you want to have another shot. But it is a strategy that is alternative to Pascal’s. He was certainly not recommending…

Rob Wiblin: It has the same payoff.

Alan Hájek: It has the same expected utility. And now we just run down the slippery slope. Now, suppose you wager for God if and only if your lottery ticket wins in the next lottery. And let’s suppose there are a billion tickets in the lottery. One in a billion times infinity is still infinity. And so do the calculation: infinite expectation for that strategy, the lottery ticket. Or I wait to see whether a meteor quantum tunnels through this room before the end of our interview. Some tiny probability of this happening — I don’t know, one in a googolplex, call it — multiply that by infinity, and I have infinite expected utility for this strategy: wager for God if and only if the meteor happens. And now it starts to look like whatever I do, there’s some positive probability that I will get the infinite payoff.
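
The slippery slope can be put in the same toy terms. In this sketch (again with placeholder probabilities), any strategy that reaches the wager with positive probability, whether by coin toss, lottery ticket, or tunnelling meteor, inherits the full infinite expected utility:

```python
# Illustrative sketch: every "mixed strategy" that leads to the wager
# with positive probability has infinite expected utility. All the
# probabilities and the finite fallback value are placeholders.

INF = float("inf")

def mixed_strategy_eu(p_reach_wager, finite_eu_otherwise=20):
    """EU of: wager for God if some chancy event occurs, otherwise don't."""
    return p_reach_wager * INF + (1 - p_reach_wager) * finite_eu_otherwise

print(mixed_strategy_eu(0.5))     # coin toss               -> inf
print(mixed_strategy_eu(1e-9))    # one-in-a-billion ticket -> inf
print(mixed_strategy_eu(1e-100))  # tunnelling meteor       -> inf
```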

Rob Wiblin: Yeah, infinite payoff.

Alan Hájek: And that has infinite expected utility. Even if I try to avoid belief in God, there’s some probability that I’ll fail.

Rob Wiblin: Yeah, you’ll accidentally believe in God.

Alan Hájek: Accidentally believe in God, and then I get the infinite expected utility at that point. And so my conclusion was not only is Pascal’s wager invalid — namely the conclusion that you should wager for God does not follow from the premises — it’s invalid in, so to speak, the worst possible way.

Rob Wiblin: It recommends everything equally.

Alan Hájek: Everything by those lights should be equally good. Whatever you do has infinite expected utility.

Rob Wiblin: Yeah. It is interesting that the infinity creating the paradox here is so powerful that it also destroys the paradox, or just ends up producing the garbage result that everything is permissible.

Alan Hájek: That’s it. So the great strength of Pascal’s wager was that you didn’t need to worry about the probability that you assigned to God’s existence: infinity just swamps it, as long as it’s positive. And now the revenge problem is, well, that swamping effect of infinity now affects just anything you choose. There’ll be some probability that you wind up wagering for God, and away we go.

Rob Wiblin: I guess you don’t even have to say that you necessarily get the infinite payoff via the original envisaged path to believing in God. It could be just anything you do might pay off in infinity.

Alan Hájek: Right. That’s right.

Rob Wiblin: There’s all kinds of different ways that could happen.

Alan Hájek: That’s it. And the St. Petersburg game may be waiting down the road.

Rob Wiblin: Exactly. You go past the church, go to the casino and… Yeah.

Alan Hájek: Yeah.

Rob Wiblin: OK, it seems like there’s a very natural response here, which is to say, in situations where you have different possible paths to infinite positive value, wouldn’t it be very natural to choose, as a tiebreaker, the one that has the highest probability of the infinite?

Alan Hájek: Absolutely.

Rob Wiblin: Now I’m weighing up the casino with the St. Petersburg game and the church, and then I have to make a decision: which one has a higher probability of getting me infinite value?

Alan Hájek: That’s right, yeah. Very natural reply, and a few people have made it. Schlesinger, for example, has a version of that, and it’s certainly very plausible. First point, notice you’ve added an extra premise. Maybe you could put it this way, it’s like a lexical ordering: when some action uniquely maximises expected utility, do that. When you have ties, you need a tiebreaker. And you offered a tiebreaker: now look at the probabilities and maximise the probability of, in this case, the infinite payoff. That’s OK, that’s fine. But I think my point still stood from before that the argument as stated was invalid, and it needed help from this extra premise, this tiebreaker, when you have the ties.

Alan Hájek: By the way, this takes us back to our discussion of heuristics before. You’ve got multiple ways of realising, in this case, the maximal expected value. And just to say a bit more about this reply, it’s very intuitive, what you said. It’d be nice to generalise: what really is the rule? So, in this case, we had something of infinite utility. But what’s the general rule that we’re applying?

Rob Wiblin: Because it seems like, you said “lexical,” which indicates that there are two steps here. It seems like the rule is: if infinite utility is possible, maximise the probability of the infinite payoff; if it’s not, then do normal expected value.

Alan Hájek: Yeah.

Rob Wiblin: I guess a trouble here is going to be that infinite value is on the table, because we’ve said we assign it nonzero probability for any action. And so, actually, we never get to the second step: normal expected value never arises, because we’re always stuck on the first step, just trying to maximise the probability of infinite payoff.
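
Here is a minimal sketch of the two-step lexical rule Rob is describing, with made-up acts and placeholder numbers. It also shows the trouble he raises: once every act gives infinity some chance, the second step never fires.

```python
# Illustrative sketch of a lexical decision rule. Each act is a tuple
# (name, probability of an infinite payoff, ordinary finite expected
# value); the acts and numbers below are invented for the example.

def choose(acts):
    if any(p_inf > 0 for _, p_inf, _ in acts):
        # Step 1: some act gives infinity a chance -- maximise that chance.
        return max(acts, key=lambda act: act[1])
    # Step 2: no infinite payoffs in play -- ordinary expected value.
    return max(acts, key=lambda act: act[2])

acts = [
    ("church", 1e-6, 10.0),
    ("casino (St. Petersburg)", 1e-9, 50.0),
    ("cinema", 1e-100, 30.0),
]

# Every act has some positive probability of the infinite payoff, so
# step 2 is unreachable and the finite values never matter.
print(choose(acts)[0])  # church
```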

Alan Hájek: And then there are so many routes to getting this infinite expected utility, and it seems like you’re going to be reaching for the tiebreaker all the time now, because there are just too many ties. And it’d be nice to really clarify what the rule is, this lexical priority rule. It made sense in Pascal’s wager as you said it, but that seems to be a special case of some more general thing, and it would be nice to give that some foundations.

Alan Hájek: Expected utility theory has, it seems, some firm foundations in terms of preference axioms and so-called representation theorems, which a lot of people appeal to. It’d be nice to have something parallel to that for this enhanced decision theory, where you have this lexical rule. I’ll just put in a plug for Paul Bartha, who has offered something like that. I think he calls it “relative utility theory”, which is kind of a generalisation of the familiar expected utility theory. It involves ratios, utility ratios.

Rob Wiblin: I see. It’s a generalisation into the infinite cases?

Alan Hájek: That’s right.

Rob Wiblin: Or it’s robust to them?

Alan Hájek: Yes. So he, I think, can deliver your intuition, and Schlesinger’s rule will fall out of it. Which is good, because now it’s giving more foundational support for this rule.

Rob Wiblin: So zooming out for a minute: We’ve got this St. Petersburg game, we’ve got Pascal’s wager. They’re both introducing infinities by different routes, and then they seem to just really create an awful lot of trouble for expected value. So, I want to go out tonight and choose what movie to watch and make decisions based on expected value, just weighting things linearly by their probabilities.

Alan Hájek: I know, yeah.

Rob Wiblin: I want to feel like I’m doing the right thing here, but all these people are coming up and saying, “Hey, I’ve got these paradoxes for expected value that produce garbage results.” Or at least, that require totally rethinking it. How comfortable should I feel when I use expected value to make decisions in life? Are these wacky cases with convergences to infinities and putting infinities in things, are they fundamentally a problem? Or are they just more curiosities?

Alan Hájek: Well, one solution is — I guess I mentioned it earlier — you really do just zero out these crazy cases. You don’t even give them one-in-a-googolplex credence. And that would certainly quarantine them. I have raised versions of this worry in a few places: how even everyday decisions seem to be contaminated by, in this case, infinity. I’ve also talked about it in relation to a game that has no expectation at all, the so-called Pasadena game. The game itself may seem pathological, but if you give it any credence, then even a simple choice like “Where should I go out for dinner tonight? Will it be Chinese or pizza?” gets infected too, once you give some probability to the crazy stuff happening at the end. So I guess you have to do the dogmatic thing and just say, “Look, I’m just zeroing out…”

Rob Wiblin: I suppose you can choose your dogmatism. You can either say, “When things become sufficiently weird, I give them zero probability,” which just seems dogmatic. Or you can say, “I refuse to consider infinities”: just give them some finite positive value and leave it at that. Or you just have to become a fanatic who pursues infinite values all the time.

Alan Hájek: Well, and you heard me before putting in an argument for the crazy thing. That’s right. And so for practical purposes, I think you have to be dogmatic. And maybe even in some cases, not just being dogmatic in giving probability zero to these scenarios. In some cases, you just don’t even consider them. They’re just not even in your space of possibilities to begin with. It’s not that you recognise it and give it probability zero. This is one statistician’s reply that I’ve heard: “You just don’t even put it in your model of the world.”

Rob Wiblin: I see. OK. So to speak up for being crazy for a minute, imagine that we really did think that infinite utility was a live possibility. Let’s say that we didn’t, for example, think that the universe was going to peter out (either spreading out to be very big or collapsing to be very small), such that we’re in a steady-state universe. And maybe you could set up a system where you do live forever. There’s nothing that interferes with your life. And so maybe you could get an infinite utility that way.

Rob Wiblin: So, we have some theory that makes it feel not infinitesimally likely, but maybe 1-in-1,000 likely. Then it feels less crazy to say you should orient your life around trying to do that, trying to get the infinite utility by living forever because the universe permits that. So maybe we can bite the bullet.

Alan Hájek: Another way to go is to give infinity a more nuanced treatment. So far, I was imagining — it’ll be hard to convey this just over the podcast — but I’m sort of drawing the figure eight of infinity. It’s the figure eight on its side, infinity. And that’s the un-nuanced infinity that seems to have these problems. If you halve it or you multiply it by one in a googolplex, you still get the same sideways figure-eight infinity back. But if you had a more mathematically nuanced treatment of infinity, where halving something or multiplying it by one in a googolplex made a difference, then we might get the ordering that we want again. This is another way of handling the problem, by the way, which led to your lexical rule. Maybe if we just distinguish among different infinities…

Rob Wiblin: Oh God, I’m scared of it. This just seems like it’s going to create more problems.

Alan Hájek: And it’s also scary: just the sheer mathematics of it is formidable. But it turns out that there are these systems — for example, the surreal numbers, hyperreal numbers — where you have infinities, and multiplying them makes a difference. Multiplying by 1/2 or what have you will change the value, will make it smaller in this case. And so maybe now you get the ordering that you are hoping for, and you can choose Chinese over pizza after all, if you keep track of the sizes of all of these infinities. And that’s been a bit of a cottage industry too, of doing these highly technical, highly sophisticated refinements of decision theory.
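
A toy illustration of the contrast (my own sketch, not a faithful implementation of the surreal or hyperreal numbers): if utilities track a coefficient on an infinite unit, then multiplying by a probability changes the value, and an ordering comes back.

```python
# With plain float infinity, scaling changes nothing, so everything
# ties. A toy "nuanced" utility that carries a coefficient on an
# infinite unit makes the scaled values comparable again. This is an
# invented illustration, not real surreal/hyperreal arithmetic.

from dataclasses import dataclass

print(0.5 * float("inf") == 1e-100 * float("inf"))  # True: all tied

@dataclass(frozen=True, order=True)
class Utility:
    infinite_part: float  # coefficient on the infinite unit
    finite_part: float = 0.0

    def scale(self, k):
        return Utility(k * self.infinite_part, k * self.finite_part)

church = Utility(1.0).scale(0.5)     # a 50% shot at the infinite payoff
casino = Utility(1.0).scale(1e-100)  # a one-in-a-googolplex-ish shot

print(church > casino)  # True: probabilities make a difference again
```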

Rob Wiblin: I see. Let me make another line of argument here. Infinities mess shit up. Some listeners might be familiar with the Banach-Tarski paradox. Basically, you take a solid sphere and divide it into an infinite number of points — the mathematicians in the audience might be annoyed by this — then move those points around in some special way. And it seems like you can get two full spheres out of the matter or the volume of the original sphere. It’s like you’ve doubled the amount of volume that you have just by splitting something into infinite points and then putting it back together again.

Rob Wiblin: I don’t think that that could happen in the universe, probably. It doesn’t seem like that happens. And it’s like, maybe just whenever we put infinities into these decisions, we’re just going to find lots of problems and lots of things that will never happen in the real world. And so we should be OK to dismiss infinities and throw them out, just on the basis that they make life unlivable.

Alan Hájek: I know, great. Feynman was told about the Banach-Tarski paradox, and it was presented to him involving an orange. You’ve got an orange of a certain size, and by suitably cutting it up, you can create two oranges of that size, and in fact you can keep multiplying them. And Feynman bet that that was just nonsense, that wasn’t true. And then someone explained to him how you do it — “There’s this infinitely precise surgery that involves non-measurable sets,” and so on — and Feynman said, “Come on, I thought you meant a real orange.”

Alan Hájek: Now, of course we understand that reaction. But I feel like saying, “Yeah, but that doesn’t really solve the paradox.” The reply amounts to: “Well, thank God we can’t do infinitely precise surgery on oranges; hence our theory of measure is safe.” You feel like saying no: of course it’s highly implausible that you can actually do this, but aren’t you worried that there’s something wrong with our theory of measure, that it seems to allow this result?

Alan Hájek: And I feel like saying something similar about decision theory. Notice that Richard Jeffrey’s reply was rather like Feynman’s regarding Banach-Tarski. Jeffrey said with regard to the St. Petersburg paradox: anyone who offers you the St. Petersburg game is a liar. And of course, that’s true. No one in the real world is going to offer you the St. Petersburg game genuinely. But I still have that niggling feeling, too. Look, there’s still something wrong with our theory of measure in the Banach-Tarski case, of expected utility and rational decision in the case of St. Petersburg. And it’d be nice to solve that problem. But maybe that’s now the philosopher in me, rather than the physicist or the engineer in me.

Rob Wiblin: It’s a very common theme, I guess, in philosophy: that one flips between the sublime realm of ideas, if you like, and highly idealised situations. And then you bring it back into the world and you have to say, “Is this still relevant?” You do a bunch of maths and you’re like, “Does this apply to the universe?” And I guess people sometimes do have different judgements on whether it’s still relevant, as you’ve made it stranger and stranger.

Alan Hájek: Yeah, that’s right. Philosophers often have highly fanciful thought experiments to make some philosophical points. Like Frank Jackson imagined Mary in a room, and she knows all the physical facts, but she’s never seen red. And when she sees red for the first time, it seems that she’s learned something. The Chinese room from Searle is a famous thought experiment. Putnam had Twin Earth, and so on.

Alan Hájek: Now, it seems to me philosophically unsatisfying to reply, “Well, there’s no such room. There’s no room with Mary in it. There’s no Chinese room, Twin Earth.”

Rob Wiblin: “This is all rubbish.”

Alan Hájek: “This is all rubbish. There’s no Twin Earth.” Yeah, of course we know that. We never said there was. But these thought experiments — and St. Petersburg I’ll put in the same category, and Banach-Tarski — they’re putting pressure on some entrenched notion of ours.

Rob Wiblin: Yeah, because you want your theories of things to apply in all cases, to not be contingent on specific empirical facts about how finely you can cut things.

Alan Hájek: Exactly, yeah. In the case of decision theory, fingers crossed that there are no sufficiently large bank accounts. And then you look at the world. OK, we’re good. In the case of Banach-Tarski, fingers crossed that you can’t do that surgery on oranges. Phew! Turns out you can’t. All is good.

Alan Hájek: No, you still think that the mere possibility, the conceivability, as we were talking about before, already is enough to make one worry.

Rob Wiblin: Is enough to break it.

Alan Hájek: The thought experiments are problem enough.

Rob Wiblin: Well, we should move on from expected value. Many people, they hear about these cases and they’re like, “I’m going to now have serious reservations about using expected value to make decisions in my life on what to do.” I guess I probably know a very outsized fraction of the number of people in the world who actually do this. But do you have any advice for them? It feels like this field hasn’t wrapped up yet. We haven’t answered this one.

Alan Hájek: Yeah. By the way, we’ve talked about some revisions of expected utility theory that might provide some therapy, like the risk-weighted theory or the weighted linear utility theory. That might provide some relief.

Rob Wiblin: I’m just guessing that all of these are going to have their own paradoxes that result.

Alan Hájek: In fact, let’s just cut to the chase: they do. There’ll be either reformulations of these games or negative versions (like a negative St. Petersburg game) that are trouble for some of these theories. So problems are still lurking. But in a way I think we can quarantine the problem. It’s a little bit like what Hume said, something along these lines: when he leaves his philosophising and goes out into the world, plays billiards or whatever, he leaves his philosophical problems behind. And of course, we shouldn’t be paralysed by the St. Petersburg game or what have you. But I think they’re important to think about conceptually, while we are being philosophers. In the real world, we will not write out a decision matrix that has these problematic features. Pascal’s wager is actually interesting on that score.

Rob Wiblin: Well, there is a possibility of creating new universes. Infinities do seem possible, maybe, in the universe as it is. Sorry, maybe that’s not the point you were making.

Alan Hájek: Yeah. And also, people really do take seriously what Pascal said.

Rob Wiblin: Yeah, of course, people do act on it.

Alan Hájek: There are lots of Christians who say that Pascal said it exactly right. And not just Christians. There’ll be other religions which will acknowledge some source of infinite utility. So now this becomes a practical problem. This is not like Banach-Tarski and cutting up oranges — given certain beliefs people really have in the real world, this seems like a live issue.

Rob Wiblin: Yeah. It seems like we at least don’t have an impossibility proof. We don’t have an impossibility theorem yet that shows that we can’t have a satisfying decision theory. So, the dream remains.

Alan Hájek: That’s right. Keep working and keep philosophers employed. Keep GPI funded.

Rob Wiblin: More research is needed. We’ve got to pay higher salaries to get better people into those disciplines.

Alan Hájek: Absolutely. Look how hard the problems are.

Rob Wiblin: All right. Let’s push on and talk now about counterfactuals, which have been one of your big recent passions.

Alan Hájek: Yeah.

Rob Wiblin: To start with, can you explain: what are counterfactuals? We use the term loosely quite a lot, but precisely, what are they?

Alan Hájek: They’re conditionals. They’re ‘if-then’ statements. And we typically express them with a subjunctive conditional: “If it were the case that P, it would be the case that Q,” or “If it had been the case that P, it would’ve been the case that Q.” And typically, they presuppose that the first bit — what we call the antecedent, the P bit — is false. And then they have us say something about a hypothetical scenario. I mean, sometimes we might allow the antecedent to be true, but the typical case is where P, in “if P, then Q,” is false.

Rob Wiblin: So, we can say, “If I were able to fly, I could travel to New York.” The antecedent is “I were able to fly.”

Alan Hájek: Yeah. The antecedent is a proposition that’s false in the actual world: that I can fly. And then we imagine a situation in which it is realised.

Rob Wiblin: Yes, I see. OK, that’s the antecedent.

Alan Hájek: The second bit is the consequent. If P, then Q: P is the antecedent, Q is the consequent.

Rob Wiblin: Is there like a paradigm example of a counterfactual that is used in philosophy when you’re teaching this to students?

Alan Hájek: Yes. I’ll give you a couple. Actually, this is a good way of bringing out the difference between two kinds of conditionals: the counterfactual, as opposed to what’s called the indicative conditional.

Alan Hájek: “If Oswald didn’t kill Kennedy, then someone else did.” Now, that seems true, and this is an indicative conditional. So we are assuming that Kennedy really was killed. We know that, but we are not sure who did it. If Oswald didn’t kill Kennedy, then someone else did — because one way or another, he was killed.

Rob Wiblin: I see. Yeah.

Alan Hájek: OK, now compare that to: “If Oswald hadn’t killed Kennedy, someone else would have.” Now that’s quite different, and maybe far less plausible. That suggests there was a backup assassin, maybe some conspiracy, and we could easily deny that.

Rob Wiblin: Yes.

Alan Hájek: Or maybe even easier, how about we’re not completely sure who wrote Hamlet. “If Shakespeare didn’t write Hamlet, then someone else did.” That seems true because we know Hamlet was written. “If Shakespeare hadn’t written Hamlet, someone else would have.”

Rob Wiblin: Sounds crazy.

Alan Hájek: That sounds crazy, as if Hamlet was just fated to be written and Shakespeare happened to be the vehicle for it, but someone else would’ve stepped in if need be.

Rob Wiblin: God would’ve seeded it in their mind. So, what’s the name for these two different kinds?

Alan Hájek: Indicative and counterfactual. We use the subjunctive conditional, that mood, typically to express the counterfactual. We’ll call those counterfactuals, where typically there’s the presupposition that the antecedent is false.

Rob Wiblin: Yeah. What’s the history of the study of counterfactuals? I mean, people have been making statements of this kind since humanity began speaking, I imagine, but it seems like discussion of this kind of “would, could, if, then” — it doesn’t seem like that’s in the classic pantheon of old-school philosophers. They didn’t seem to think about this very much.

Alan Hájek: I think you’re right. They did talk about conditionals. For example, the Stoics talked about conditionals — Diodorus, Chrysippus — but often they were talking more about what we’d now call the indicative conditional. For example, they had what we would now call the material conditional that has a certain truth table. Basically, it’s true however P and Q turn out, except where P is true and Q is false, then ‘if P then Q’ is false. Otherwise, true in every combination.

Rob Wiblin: Why did they care about that?

Alan Hájek: They were doing logic. They cared about logic more generally, and certainly the material conditional is a standard part of logic, and they got onto it early.

Rob Wiblin: So when did the study of this kind of stuff flourish?

Alan Hájek: Counterfactuals?

Rob Wiblin: Yeah.

Alan Hájek: I’d say it started to hit its heyday in the ’40s, when Chisholm and Goodman began to write about it. Goodman wrote some classic stuff, especially in Fact, Fiction, and Forecast. That book contains a classic treatment of counterfactuals.

Alan Hájek: And then I think the real heyday for counterfactuals came a bit later — in the ’60s, perhaps towards the end of the ’60s — especially Stalnaker and Lewis and their classic possible-worlds accounts. Roughly, the idea is that “if it were the case that P, it would be the case that Q” is true just in case, at the closest P world, Q is the case.

Rob Wiblin: I see.

Alan Hájek: By “P world,” I mean a world where P is true. And then they had this famous debate about, well, is there a unique closest P world? Lewis thought not. And by the way, notice this is a use of a heuristic from earlier, when Stalnaker talks about the closest P world. There are two ways you could challenge that: there could be ties among the closest P worlds, or there could be no closest P world at all. If Bizet and Verdi were compatriots, would they have been French? Would they have been Italian? It seems maybe they’re equally close possibilities.

Alan Hájek: And going in the other direction, maybe there’s no closest world, just ever-closer worlds. I’ll give you an example of that. It could be relevant later. Lewis imagines the following case. If I were taller than seven feet, how tall would I be? And let’s imagine that the closer I am hypothetically to my actual height, the better for closeness, in the sense that’s relevant here. Would I be seven-foot-one? Well, Lewis’s thought is that that’s a gratuitous departure from my actual height. Seven foot, half an inch would be closer. Seven foot, quarter of an inch closer still. An infinite sequence, a bit like Zeno’s paradox: ever-closer worlds, none closest. And that’s meant to be trouble for Stalnaker. Perhaps later on, I’ll argue it’ll turn out to be trouble for Lewis. That was a classic period in the study of counterfactuals.

Rob Wiblin: Is this under the banner of modal logic? Is this basically what this is?

Alan Hájek: Yeah. You could certainly say that. Certainly counterfactuals seem to have a modal element.

Rob Wiblin: Well, yeah. What does “modal” mean here?

Alan Hájek: It means something like it’s not just a matter of how things are actually — somehow possibility is involved, or perhaps necessity. I would call probability a modality too. Not just things as they actually turn out.

Rob Wiblin: OK, so why should we care about counterfactuals and conditionals and so on? From one point of view, it seems all pretty straightforward. We use these terms all the time. We don’t get confused. Everything seems fine. Yeah, how does it relate to actually important questions?

Alan Hájek: Very good. For a start, it’s philosophically important to study counterfactuals because so many philosophers reach for counterfactuals in the study of other things. Things like causation, dispositions, explanation, laws of nature, free will, perception, confirmation — philosophers are often analysing things, or at least referring to counterfactuals in the study of these other important philosophical concepts. So, starting with philosophy, it’s all over the place.

Alan Hájek: Then science, I think, traffics in counterfactuals in various ways. You could ask a question like, “If I were to drill a hole through the Earth and drop a ball, what would happen?” “It would be a harmonic oscillator,” the physics textbook tells you. That would be a counterfactual.

Alan Hájek: I think the social sciences traffic in counterfactuals. History, for example: “If the archduke hadn’t been assassinated, there wouldn’t have been World War I,” you might say. Economics worries about the incremental benefit of some commodity or the change to the economy.

Alan Hájek: In psychology, counterfactuals are very important. Think of, say, regret. Regret is often informed by counterfactuals. Actually, I’m somewhat prone to regret. And when I psychoanalyse myself, I think that’s why I got so interested in counterfactuals, and that’s why I especially want them to come out false: because then the counterfactuals that underpin my regret I could banish. And think of relief: “Thank God that this happened, because if it hadn’t happened, this bad thing would’ve happened.” So I think psychology is permeated with counterfactuals.

Rob Wiblin: Yeah. OK, so it’s important in answering lots of other relevant philosophical questions. And I guess also it’s such a core part of human reasoning all the time. We constantly have to think about how things would have been in order to establish causation and think about what is good and bad relative to other stuff.

Alan Hájek: Exactly. And that was going to be my next point. Just in daily life, it seems we need to be thinking about counterfactuals, about decision-making. And I could add that to the philosophical list too — that rational decision traffics in counterfactuals — but never mind the philosophy, just common sense is permeated with thinking, “Well, if I were to do this, this would happen. If I were to do that, something else would happen. What should I do?” Daily life is riddled with counterfactuals.

Rob Wiblin: When philosophers first look at counterfactuals, what is the problem that they identify? Why are they interesting or potentially challenging?

Alan Hájek: For a start, they can’t just be given the straightforward truth conditions of, for example, the material conditional. That might be your first stab at analysing “if it were the case that P, it would be that Q.”

Rob Wiblin: What would that be?

Alan Hájek: It would be: “if P then Q” is true in every case except where P is true and Q is false. But that would be a disaster. Because now all counterfactuals — where they’re genuinely counterfactual (the antecedent is false) — would just come out true, and you wouldn’t be able to make distinctions. Consider a counterfactual of the form, “If I were to let go of the cup, it would fall.” And I don’t actually let go of the cup. Now, on this material conditional analysis, that comes out true, because it has a false antecedent. So far so good, I guess. But now: “If I were to let go of the cup, I would finish up on the Moon” — that would come out true as well, because it still has a false antecedent. So obviously, we want to make distinctions among these conditionals with false antecedents.
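
For concreteness, here is the material conditional as a tiny truth table in Python; the cup examples are the ones from the conversation, and the point is just that a false antecedent makes the conditional come out true no matter what the consequent says.

```python
# The material conditional: "if P then Q" is false only when P is true
# and Q is false.

def material_conditional(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} ->", material_conditional(p, q))

# I don't actually let go of the cup, so P is false, and both the
# sensible and the crazy counterfactual come out true:
print(material_conditional(False, True))   # "...it would fall"
print(material_conditional(False, False))  # "...I'd finish up on the Moon"
```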

Rob Wiblin: Yeah, I see.

Alan Hájek: Some of them true, some of them false. We are going to need some more sophisticated machinery.

Rob Wiblin: Got it. OK. With a statement like, “If I let go of the cup, then this other thing will follow,” if you never let go of the cup, then plausibly anything could come after without the statement being false, because the ‘if’ condition is never met: you did not do it.

Alan Hájek: Well, and intuitively that’s the wrong answer. We want to be able to say, “These are the true ones and these are the false ones” — not just anything goes if some false antecedent were the case.

Rob Wiblin: Right, right.

Alan Hájek: Now we need to have some subtle way of distinguishing between the true ones and the false ones. It can’t just be the material conditional.

Rob Wiblin: So how would you try to do that?

Alan Hájek: Well, now philosophers like to reach for possible worlds. And possible worlds have been very influential, successful in the study of modal logic — like necessity, possibility. For example, we say that something’s necessary if it’s true in all possible worlds. Something’s possible if it’s true in some possible world. We may have to restrict the worlds suitably, but that’s the first stab.

Alan Hájek: And now the thought is, let’s do something similar to that for counterfactuals. And the thought is that we don’t just want to look at all the worlds. Let’s look at certain privileged worlds: the ones that matter. And the way we say that is, “the most similar worlds where the antecedent is true.” Roughly, this style of analysis says: ‘P would Q’ (‘if P were the case, Q would be the case’) is true just in case the most similar P worlds are Q. And maybe all of them: all of the most similar P worlds are Q. And then let the debate begin.

Rob Wiblin: I see. This might make things clearer. You’ve got an issue where you want to say, “If I let go of the cup, then it would fall down.” But the scenarios in which you let go of the cup form a very wide range of possibilities. Lots of other things could vary in them as well.

Alan Hájek: That’s right.

Rob Wiblin: For example, what if you let go of the cup, and also suddenly a table appeared underneath the cup to catch it? Then it wouldn’t fall down and hit the floor.

Alan Hájek: I make a lot of that very point actually. Yeah, that’s right.

Rob Wiblin: Then you have to be like, so which worlds — out of the vast space of possible counterfactual worlds in which you drop the cup — are we actually talking about when we make some counterfactual claim like this? And you’re saying the standard account in philosophy is to say, “Well, it’s similar to the actual world in every respect, except for this one change where you let go of the cup,” which sounds very intuitive.

Alan Hájek: Yes. You have to allow for some ramifications. You can’t just tweak this one fact and keep everything else as it was.

Rob Wiblin: Because you might end up with some inconsistency?

Alan Hájek: That’s right. You can’t just insert the one change — like, I release a cup where in fact I didn’t — because there’ll be all sorts of ramifications of that, ripple effects that you have to take into account. But now, the most similar worlds where all of that stuff is taken care of, the thought is those are the worlds that matter to the evaluation of the counterfactual.

Rob Wiblin: So I imagine many people listening will be like, “Yeah, obviously you want to make the minimal change when you are producing a counterfactual. If the operation you’re doing is on the counterfactual that changes that I let go of the cup, you shouldn’t be adding a table underneath it as well. That’s ridiculous.” But I guess you think this “most similar worlds” account has problems.

Alan Hájek: Well, there are three things I don’t like about the most similar worlds accounts, namely: “the most,” “similar,” and “worlds.” But otherwise, I’m right on board! I should say, for years, I’d just assumed this philosophical… let’s call it “orthodoxy” — but more recently I’ve come to have my misgivings, which I will now gladly share with you.

Alan Hájek: So, let’s go through them. “The most”: that means that we are supposed to just attend to the closest worlds — the ones that are first in this ordering — and we don’t look further back. We only look at the front row, so to speak, of the antecedent worlds. And I say no, sometimes you’ve got to look further back. I’ll give you an example. Consider the last US election and consider this counterfactual: “If Trump or Biden had won the election, the president would be a Democrat.” Does that sound true?

Rob Wiblin: That sounds intuitively wrong because it seems like in the cases where Trump won, it wouldn’t be true.

Alan Hájek: Exactly. Spot on. OK, but now let’s run it through the similarity semantics. What’s the most similar world where the antecedent is true?

Rob Wiblin: I guess one where Biden won, right? That’s more similar.

Alan Hájek: It’s right here. It’s the actual world. We are standing in it. And it is true in the actual world that the president is a Democrat. So, this should come out true according to this similar account where, as does seem intuitive, the most similar world to the actual world is itself.
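
A toy model of the similarity semantics makes the worry vivid. This is an invented illustration, not anyone’s official theory; the worlds and similarity scores are made up, with the actual world maximally similar to itself.

```python
# Toy Stalnaker/Lewis-style evaluation: "if P were the case, Q would
# be" is true iff all the most similar P-worlds are Q-worlds.

def would(worlds, antecedent, consequent):
    p_worlds = [w for w in worlds if antecedent(w)]
    if not p_worlds:
        return True  # vacuous: no antecedent-worlds at all
    best = max(w["similarity"] for w in p_worlds)
    front_row = [w for w in p_worlds if w["similarity"] == best]
    return all(consequent(w) for w in front_row)

worlds = [
    {"winner": "Biden", "similarity": 1.0},  # the actual world itself
    {"winner": "Trump", "similarity": 0.8},
]

# "If Trump or Biden had won, the president would be a Democrat":
print(would(worlds,
            lambda w: w["winner"] in ("Trump", "Biden"),
            lambda w: w["winner"] == "Biden"))  # True -- the bad verdict
```

The Trump world never gets consulted, because only the single most similar antecedent-world, the actual one, makes it into the front row.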

Rob Wiblin: So, it seems like in that case, when you’re saying, “If Trump or Biden won,” you’re trying to make a claim about all the worlds in which either of those two facts hold, and not just about the one world that’s more similar.

Alan Hájek: Yeah. So, as you say, you’ve also got to consider the Trump worlds, and that will push you further back from the front row.

Rob Wiblin: Oh, and then I guess you’ve got a question of how deep? Including all of them, including the ones where the world exploded for no reason?

Alan Hájek: Now, on a strict conditional account, you have to look at all of the worlds. And maybe that’s going to be contextually circumscribed in some way, but that does give you some impetus to look beyond just the front row. Not just the closest.

Rob Wiblin: Yeah. OK.

Alan Hájek: Let me give you another case. And I’m glad we talked about Lewis’s seven-foot example earlier, because now I think it’s going to backfire. Remember, according to him, in that example, what matters to similarity is just closeness to my actual height. And he had this sequence of ever-closer worlds getting closer and closer to my actual height. None closest. All right. Now let me make the smallest tweak to Lewis’s apparent counterexample. And I think it backfires on him. If I were at least seven feet tall, how tall would I be? So, if I were greater than or equal to seven feet tall, how tall would I be?

Rob Wiblin: Well, then it’s a hard bound, so it’s easy. It’s seven feet.

Alan Hájek: Well, there you go. According to that ordering — which he used, so I think it’s fair for me to now use it against him — the closest worlds are going to be exactly seven foot, 7.0000 … to infinitely many decimal places. He told us. That’s closest in the ordering. So, by his lights, if I were at least seven feet tall, I would be 7.00 … to infinitely many decimal places tall.

Rob Wiblin: Exactly seven feet tall.

Alan Hájek: Exactly. And I say, “Whoa, really? That comes as a surprise to me.” If anything, I would say, “Well, I suppose I might be, but it’s highly unlikely I’d be exactly so precisely seven foot. I might be a little bit more than seven foot.” I think in these cases, the similarity account is giving implausibly specific verdicts. It’s committed to the truth of implausibly specific counterfactuals.

Alan Hájek: In the seven-foot case, it was: “If I were at least seven feet tall, I’d be exactly seven feet tall.” Really? So specific? In the Trump or Biden case, it was the specific verdict that the president would be a Democrat, which doesn’t take into account the Trump possibility. And again, I think this is just symptomatic of only looking at the front row of the worlds — the closest worlds. Sometimes you need to look a bit further back: to worlds where, for example, I’m a bit taller than seven feet, or where the alternative in the disjunction, Trump winning the election, has to be taken seriously.

Rob Wiblin: Right. So, it seems like when we’re describing lots of these antecedents — if Trump or Biden won, if I were at least seven feet tall — we’re actually trying to indicate a range of different possible worlds. And the most similar just picks out one of them, somewhat arbitrarily it seems.

Alan Hájek: Yeah. Exactly. That’s one thing that might push us towards the strict conditional account, where you look at all of the antecedent worlds, perhaps contextually restricted in some way. And by the way, earlier I should have mentioned von Fintel and Gillies as proponents of that kind of view.

Alan Hájek: Or you could go another way — this would be my way, and this might be getting to my positive view later — where I would say you have to look at the worlds that have positive chance at the relevant time. And that will take us sometimes further back than just the most similar worlds.

Rob Wiblin: Yeah, Okay. You’ve objected to “the,” you’ve objected to “most.” Let’s now object to “similar.”

Alan Hájek: “Similar” — lots of problems there, I think. In the early days of this similarity approach, it was assumed that similarity was a commonsensical thing. It’s what the folk would regard as resemblance. And Kit Fine came up with what seems to be a devastating counterexample to that understanding of similarity.

Alan Hájek: His example was: Cast your mind back to the ’60s, the Cold War, and consider the counterfactual: “If Nixon had pressed the button on the nuclear bomb, there would’ve been a holocaust.” That seems intuitively true. Let’s say we want that to come out true. But holocausts make a big difference. Holocaust worlds are not similar to our world where the holocaust didn’t happen. More similar would be a world where Nixon presses the button and then the mechanism just fizzles. No bomb is triggered and it’s business pretty much as usual.

Rob Wiblin: OK. So whatever disturbance is created by the button not working is much, much smaller than the disturbance to the actual world created by the holocaust versus not.

Alan Hájek: That’s right. So, if you are assuming commonsensical similarity, it seems that we are going to get the wrong verdict. We’ll judge that counterfactual to be false by the lights of commonsensical resemblance.

Rob Wiblin: Yeah. In fact, you could confidently make the statement that if Nixon had pressed the nuclear button, it wouldn’t have worked.

Alan Hájek: Yeah, exactly. It would’ve fizzled. Because that would be the most similar way of realising Nixon pressing the button. Wrong answer. Lewis took this very seriously and then fashioned a set of priorities of what matters to similarity. Well, really, to handle this Fine case, the Nixon example. And in fact, you might worry that what he comes up with is ad hoc, just reverse engineered to this particular case, and you wonder how much it’ll generalise. And in fact, as we’ll see in a moment, it doesn’t seem to generalise so well.

Alan Hájek: Anyway, here’s what Lewis said: The first priority is to avoid big miracles, as we might say: widespread, diverse violations of law. Second priority: maximise perfect match of history. Third priority: avoid small miracles, small violations of law — there’s a bit more, but that’s the main idea. This was supposed to handle the deterministic case. Interestingly, Lewis assumes determinism in a way I don’t want him to, because…

Rob Wiblin: Seems like the universe isn’t deterministic.

Alan Hájek: Yes, and in fact, he himself, in other work, thinks that the actual world we live in is indeterministic. So, one doesn’t want to assume that it’s deterministic to handle this case. And to be fair to him, he did then later also consider the priorities for indeterministic worlds, and he introduces the notion of a “quasi-miracle.” I should tell you a bit about that. A quasi-miracle is something that — while it’s consistent with the laws of nature (it’s not a genuine miracle) — is somehow remarkable. It’s as if a pattern of outcomes is conspiring in a surprising way.

Alan Hájek: Now, it’s a little bit hard to pin that down exactly. And in fact, that’s perhaps a problem that this very notion of quasi-miracles is a little bit shaky. But anyway, to give you the sense of it, go back to the Nixon example and imagine a world where all traces of Nixon pressing the button are just erased. There’s no trace that he pressed the button. Now, in a deterministic world, it seems that would take a big miracle, because you’d have to erase this trace and that trace and another one over there. You’d need widespread miracles to remove all of those. But in a chancy world, you don’t need any miracles. Quite lawfully, a sequence of chancy events could collectively erase all traces of the button pressing — just by chance, they all vanish.

Rob Wiblin: Yeah. A thermodynamic miracle — that sort of thing.

Alan Hájek: That sort of thing, quantum mechanical, or whatever. And then Lewis wants to say that quasi-miracles detract from similarity. So, we don’t want to say, for example, “If Nixon had pressed the button, then all traces would’ve been erased.” And we want to somehow banish the quasi-miracles.

Rob Wiblin: Things that are consistent with the laws of physics, but I guess make up only a tiny fraction of the worlds that are consistent with them.

Alan Hájek: And they’re somehow remarkable, and so they detract from similarity in virtue of that. All right? And so, now we have the more complicated package.

Rob Wiblin: We’ve got: no big miracles, then perfect match of history, then no small miracles, then the fourth priority is no quasi-miracles.

Alan Hájek: Yeah. And actually, you might now wonder how these priorities interact with each other, the indeterministic and the deterministic ones. So, avoid the big miracles, and then interspersed between that and the avoid small miracles, we’ve got this other priority: maximise perfect match. Already, you might think that’s a bit strange — like, where do I insert “avoid medium-sized miracles”?

Rob Wiblin: It’s definitely feeling a bit arbitrary.

Alan Hájek: You’d think there’s a continuum among the miracles, and it’s a little odd that they’re interrupted by this different kind of priority. But now, how do we square that stuff with the quasi-miracle stuff? So, how do we prioritise “avoid big quasi-miracles” as opposed to “avoid medium-sized genuine miracles” and so on? It gets more complicated. And one point I want to make is: notice we’ve come some distance from the original commonsensical resemblance. Now we’ve got this quite complicated theory of how similarity works for counterfactuals.

Rob Wiblin: Taking a step back and looking at the progression here, we started out with this kind of appealing, very simple, very clean statement: the most similar world. If that worked in all the cases, it would be beautiful, so elegant; it’s very parsimonious. Let’s just keep that.

Rob Wiblin: Now we’ve got this far more cumbersome theory. We’ve got like four different stages, and you’re beginning to suspect that if we looked at more cases, we’d find more problems, and we’re going to have seven different steps in order to try to make it work.

Rob Wiblin: Then you have a question as a philosopher. I suppose you can say, on the one hand, maybe just the way humans use language with counterfactuals is super messy and complicated — and in fact, even seven wouldn’t be enough because there’ll be some weird cases with that. And maybe in order to actually know what people are referring to when they use counterfactuals, you just have to have the full intuition of a human mind to understand what is being referred to.

Rob Wiblin: On the other hand, you might think, “Actually, counterfactuals should be cleaner than this. There should be a more simple statement.” And now we’re in epicycles, where we’re trying to fix this theory that was broken fundamentally at its core and we need a different approach. Is this right? That you could see what’s going on in two different ways maybe?

Alan Hájek: I think that’s a good way to put it. I think, even when we add the epicycles, as we’ve been doing, we’re still not done. In fact, let’s —

Rob Wiblin: Yeah, let’s do some.

Alan Hájek: Here’s more trouble. Let’s take Lewis’s priorities and consider the following counterfactual. The story is that I did not scratch my finger yesterday. Let yesterday be the 24-hour period that ended at midnight last night, and I’ll include midnight as part of yesterday. “I did not scratch my finger yesterday. If I had scratched my finger, I would’ve done so at midnight. Precisely at midnight and no earlier.”

Alan Hájek: It seems this comes out true on the Lewisian priorities. Why is that? Well, we don’t need any big miracles to get the antecedent to come out true. So, we move to the second priority. We now want to maximise perfect match of history. Well, we get to do that by delaying my hypothetical scratch as late as possible because we get more and more match of what actually happened. That means that the maximal match will happen if I delay the scratch until midnight.

Rob Wiblin: So, exactly midnight.

Alan Hájek: That’s when I would’ve scratched. Exactly.

Rob Wiblin: You’re saying, “If I had scratched my finger yesterday, I would have done it at midnight exactly.” Seems crazy.

Alan Hájek: That seems crazy. Notice it’s another example of what I call implausible specificity. Why then exactly? Now, to make things even worse, let’s suppose that it’s a somewhat probabilistic matter when I scratch. I start the day a little bit itchy. I’m more likely to scratch earlier in the day. And as the day progresses, it becomes less and less likely that I’ll scratch. And in fact, midnight is the least likely time for me to scratch. Nevertheless, by those priorities, it seems to come out true that if I’d scratched yesterday, I would’ve done so at midnight, namely the least probable time. And I think that makes the problem even worse.

Rob Wiblin: So, we’ve tried patching it and now we’ve got this other problem that’s appeared, another case of excessive specificity. Are there any other issues with the Lewis attempted patch of this theory?

Alan Hájek: Yeah. Look, good on him for going out on a limb and telling us what matters to similarity. The trouble is I think he’ll get some counterexamples, and they’ll have this form of implausible specificity. Now you might retreat again, and say, “Well, similarity is context dependent, and it’s vague, and it’s complicated,” and decline to really say what matters to similarity. And Goodman, by the way, said similarity is insidious. It’s an impostor. It’s a quack. And he thinks that similarity is not suitable to ground any philosophical project, actually.

Rob Wiblin: Oh wow. Well, let’s come back to this in a second. Yeah, what are some other counterexamples?

Alan Hájek: So now I’m imagining that we retreat and we just say, “Similarity, I won’t tell you much about it. It’s this context dependent, vague, complicated thing.” Well, now I guess it’s harder to come up with counterexamples, but now it’s not clear what the theory’s saying. It’s not making predictions. It’s not providing any explanation. I like to put this in the form of a dilemma. And to explain this, I should just say something about Harry Potter.

Rob Wiblin: Yeah, go for it.

Alan Hájek: I went to one of the Harry Potter movies. I don’t remember which one. This was a long time ago. And to be honest, I didn’t like it as much as everyone else seemed to, and for the following reason: early on in the movie, Harry has got his magic wand, and you see him doing all these cool magic tricks. Then later on in the same movie, Harry’s in danger, and we in the audience are supposed to feel scared for him. But I felt like yelling at the screen, “Just use your bloody wand!” And my complaint was I felt we weren’t properly told the rules of the game. What could the wand do and not do? I didn’t know that suddenly the wand couldn’t get him out of this difficult situation.

Rob Wiblin: Yeah. It’s a common problem with fantasy and fiction I think.

Alan Hájek: That’s it. Now, I think this is a worry about the similarity account if you do this retreat. If you just say, “Similarity, well, I won’t tell you much about it. I won’t tell you the rules of the game. It’s just whatever it needs to be to make the counterfactuals come out true where we want them to come out true.” Well, that’s like this Harry Potter wand theory. It’s a “magic wand” similarity.

Rob Wiblin: If ever you find yourself in a corner, you just change the rules and say, “No, I meant something else.”

Alan Hájek: Yeah, that’s right. Or you just don’t specify the rules in the first place and so there are no counterexamples. That’s the Harry Potter horn of a dilemma. Or if you do the more, I think, philosophically honest thing, like Lewis did, and just say, “Look, I’m going to try hard to tell you what matters to similarity,” then I worry there are going to be counterexamples, like the scratching the finger case.

Rob Wiblin: OK, so some people in the audience might understandably think we’re being awfully fussy here about exactly which counterfactual we’re referring to in these cases — where in actual reality, if two people were having a conversation and made statements like this, there would be no confusion about what they’re referring to.

Rob Wiblin: To defend the fussiness for a minute, the challenge here is that in these everyday cases — where someone says, “If Nixon pressed the button, then Y” or “If I’d scratched my finger yesterday, then Z” — intuitively we know what we’re communicating through all of the context, but we’re going to try to develop rules about the nature of counterfactuals and what reasonable logic you can apply to that from these cases. And then we’re going to start applying it to very unusual cases — to strange things like, “If we can conceive of P zombies, then this other thing,” where we are not going to have the same intuition about what the situation really is or what logic can reasonably apply.

Rob Wiblin: And if we develop rules that, as it turns out, don’t actually work in all of the cases where we do have an intuition about what works and what doesn’t, then we could extend this incorrect logic to all these other cases where we’re not going to be able to see the error. I think that is one reason why we really would ideally like to pin down what is and isn’t legitimate when we’re doing counterfactual reasoning in the cases where we might be able to see mistakes. So then in cases where we can’t see the mistakes, we feel we’re on more solid ground.

Alan Hájek: That’s right. By the way, this is a very general problem in philosophy. I think that we often fashion our conceptual analyses to familiar cases, and then we hope that they still apply to these maybe more recherché cases. And sometimes philosophers say, “Well, it’s spoils to the victor” — in some far-fetched cases, we haven’t earned the right to some firm intuition about these cases because they’re strange, and just let the theory dictate what we should say about those cases. On the other hand, we talked about this earlier too, sometimes we really do look to those strange cases like St. Petersburg to stress test an analysis that was formed to handle more familiar cases.

Rob Wiblin: But let me take the other angle for a minute. We’ll say language is always messy. People are constantly saying things that technically aren’t true, but they communicate the broad thrust of the argument, and people know what they’re saying. And here you are applying a real fine-toothed comb to everyone’s statements and saying, “Oh, this isn’t precisely right” — but that’s because you’re taking the kinds of things that people might say in ordinary life and then treating them as if they’re factual to a level that they were never designed to be.

Alan Hájek: That’s an important part of my overall view about counterfactuals. I think that the things we say — the counterfactuals that we use in daily life — are mostly loose talk. And let me say something about that. I think that they’re false, but approximately true and close enough to the truth that they convey useful information. And I think this is perfectly familiar.

Alan Hájek: Perhaps soon I’ll say something about why I think most counterfactuals are false, and people think this is some crazy radical view, and I say, “Come on!” Most of what we say could be false for completely intelligible reasons. Sometimes we are joking, sometimes we are being ironic. Sometimes we’re exaggerating a bit. Loose talk, I think, is a common explanation. Just think of other cases not involving counterfactuals. We say things like “Six and a half million people have died of COVID.” Sadly. But do we really mean exactly 6,500,000 people?

Rob Wiblin: It’s awfully specific, isn’t it?

Alan Hájek: No, and of course we didn’t intend to convey that. What we meant was something like: roughly six and a half million people have died, and that’s approximately true, even though the exact truth is something slightly different.

Rob Wiblin: Or someone says, “I’ll be there at 11:00.” And you’re like, “Liar! The probability of arriving at any specific instant is zero.”

Alan Hájek: Exactly. That’s it. See what a familiar phenomenon this is. No one truly arrived at 11:00 when they said they’d arrive at 11. Of course we charitably understand them: “Roughly 11:00.” Or think of the claim, say, “tennis balls are spherical.” Well, they’re not spheres in the mathematical sense, because that has a very specific meaning. They are approximately spherical, and close enough to spherical that we can treat them as if they’re spheres for most practical purposes. For example, I’m packing a box with tennis balls and I want to calculate how many tennis balls I can fit in the box. I won’t go far wrong if I treat the balls as perfect spheres, because they approximately are. Then I do the calculation using perfect spheres and it’ll work well enough for the approximate spheres that the tennis balls really are.

Rob Wiblin: OK, so if we kind of relax our attempts to come up with a super-precise theory of counterfactuals, like the most similar world, and we accept that they’re fuzzy and a bit messy and contextual and so on, where does that leave us as philosophers, or as people using counterfactual reasoning?

Alan Hájek: First I’ve distinguished the truth values of the counterfactuals from the way we use them: the assertability, the acceptability of them. Should I go into each of those aspects?

Rob Wiblin: Yeah. I’m not sure what those mean.

Alan Hájek: Good. Maybe first I’ll hit you with the shocker that most counterfactuals are false, and people will think I’ve lost my philosophical marbles, and then I’ll soften the blow by saying something about assertability and acceptability.

Rob Wiblin: Yeah, yeah. Go for it.

Alan Hájek: Great. For a start, most counterfactuals are false. Consider the coin in my pocket. Let’s assume it’s a fair coin. I’ll never toss it. If I were to toss it, it would land heads. Not tails; it would land heads.

Rob Wiblin: Doesn’t seem right.

Alan Hájek: That doesn’t seem right. Thank you. I don’t think that’s right. I think that’s false. And why? Well, it’s a chancy coin I’m imagining, and if I were to toss it, it might land heads, it might land tails. All right. Now let’s make the coin heavily biased to heads. Let’s say 99% chance of heads, 1% chance of tails. If I were to toss the coin, it would land heads, not tails. Still bad, I say: it still might land tails.

Alan Hájek: Consider a huge lottery — let’s say it has a million tickets — that’s never played. “If the lottery were played, ticket number 1 would lose.” I say no. And notice, by the way, the problem there: if you say that of ticket number 1, it seems you’d better say it of ticket 2, ticket 3, blah, blah, blah, up to ticket number one million. It seems like you’re committed to saying every ticket would lose.

Rob Wiblin: One has to win.

Alan Hájek: There’s got to be a winning ticket. So in fact you’d contradict yourself if you said all of that. And now consider your favourite intuitive commonsensical counterfactual. I’m holding a cup. If I were to release the cup, it would fall. Now I know it’s very tempting to say that’s true. I still say it’s false — because it’s a lottery, it’s a chance process, I say. If the cup were released, it might not fall because someone might quickly place a table under it. A very surprising updraft of air might suddenly lift it rather than letting it fall. Physics tells us that has some positive chance, and so on. So these things might happen. I know some of them are extremely improbable. I don’t mind. Just as in the lottery case, it was extremely improbable that ticket number 1 would be the winner.

Rob Wiblin: So these things aren’t absolutely certain. It’s not true in every possible counterfactual world that the cup does fall. I guess some people might wonder, does it really matter that in some infinitesimal fraction of possible counterfactual worlds, the consequent doesn’t actually occur? Or are you being a bit obtuse here about this?

Alan Hájek: I get that a lot. And maybe I am obtuse. Well, I’m being pedantic, but I do think that our use of counterfactuals commits us to this kind of pedantry in various ways. For example, look at the logic of counterfactuals: modus ponens seems plausible — that’s the rule “If P, then Q. P, therefore Q.” But modus ponens will fail, it seems, if you lower the bar for the conditional chance below 1.

Rob Wiblin: So it’s like going from “If P, then probably Q” to “If P, then Q”, and that doesn’t go through.

Alan Hájek: Yeah, that’s it, right. If you thought all you needed for the truth of the counterfactual was probability — high probability of Q given P, something like that — then you could easily have a case where P is true, the probability is high, and yet Q didn’t happen. It was very probable that ticket number 1 would lose in the lottery. But sometimes ticket number 1 wins. Or to put it better: the ticket that in fact wins a lottery, it was very improbable that it would win. But you don’t want to say that if the lottery were played, that ticket would lose, just because its probability of losing was high.

Rob Wiblin: Maybe this is a misplaced rant. But in general, I wonder whether formal logic, as it’s taught — exactly things like if P, then Q; P, so Q — is as useful as people like to think, because the world so rarely affords us the opportunity to use these strict logical rules. If you’re in computer science or something, maybe fine, but in actual everyday life, almost all we ever have is “If P then probably Q; P, so probably Q.” That kind of more informal reasoning is basically all the luxury we’re ever afforded. And it means that, for example, strict logical fallacies are not so interesting. Instead you want to be looking at probabilistic fallacies — ways in which arguments are not as powerful as they seem like they might be, and so on.

Alan Hájek: Yeah, that’s right. Let’s consider one logical rule: the way that conjunction behaves. If P is true and Q is true, then the conjunction P and Q is true and vice versa. Earlier we talked about some problems to do with conjunction — that was the lottery paradox and the preface paradox — where we have the interaction of say, rational belief and conjunction. So you might say in that case that rational belief is not bound by this strict conjunction rule, because we saw a case where you could rationally believe each of the sentences in your book, but not rationally believe the conjunction of them.

Alan Hájek: Actually, something very parallel to this is relevant to counterfactuals: the way there seems to be a sort of conjunction introduction rule for counterfactuals. If I were to release the cup, it would fall. If I were to release the cup, it would break. Therefore, if I were to release the cup, it would fall and break. So that’s a conjunction introduction in the consequent. I think that’s valid and I want to respect it.

Alan Hájek: And actually notice how if you set the bar for probability less than 1 for the truth of the counterfactuals, you’ll violate this conjunction rule in the consequent. I did it, in fact, for the lottery: “If the lottery were played, ticket number 1 would lose,” says someone who thinks that high probability is good enough for truth. “Ticket number 2 would lose, …” and now I conjoin: all of the tickets would lose. And notice I did that and you didn’t blink. You didn’t stop me and say, “Hey Al, you can’t conjoin those consequents like that.” I think it’s very intuitive that you can. I did. And then you get the absurdity that if the lottery were played, every ticket would lose, which we all agree is false.
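A quick aside for readers who like to see the arithmetic: here is a minimal Python sketch of Alan’s lottery point (an editorial illustration, not part of the conversation). It shows that each individual “ticket i would lose” claim has probability very close to 1, while their conjunction (“every ticket would lose”) has probability 0, which is exactly why a “high probability is good enough for truth” view breaks the conjunction rule in the consequent.

```python
# Editorial illustration: Alan's million-ticket lottery.
# Exactly one ticket wins, so "every ticket loses" is impossible.

N = 1_000_000  # number of tickets, as in the conversation

# For any individual ticket i, the chance it loses is extremely high:
p_single_ticket_loses = 1 - 1 / N
print(f"P(ticket i loses) = {p_single_ticket_loses:.6f}")  # 0.999999

# A "high probability suffices for truth" view therefore endorses, for
# every i: "If the lottery were played, ticket i would lose."
# But conjoining all million consequents gives "every ticket would lose",
# whose chance is zero, since some ticket must win:
p_every_ticket_loses = 0.0
print(f"P(every ticket loses) = {p_every_ticket_loses}")
```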

Rob Wiblin: So most counterfactuals are false in this respect. What does that imply for people who are using counterfactuals in their reasoning?

Alan Hájek: Great. And this brings me to the next bit I wanted to talk about: the assertability and the acceptability of counterfactuals. When I go around saying that most counterfactuals are false, a lot of philosophers think I’ve lost my philosophical marbles. This is just crazy.

Alan Hájek: Actually, just a little sociological observation. Some of them think I’m crazy and some of them think I’ve just given good arguments and I’m exactly right. The former — the ones who think I’m crazy — tend to be more philosophers of language and philosophers of mind, who think that principles of charity and humanity are operative, and I’m attributing to competent speakers some deep confusion or something. And I must be just getting how semantics works wrong. The latter — who think that I’ve given good arguments, the conclusion is right — tend to be more philosophers of science or maybe metaphysicians who just take seriously that this is what chance means, what it implies for counterfactuals. We live in a chancy world. That’s just what you get.

Alan Hájek: Now, how do I soften the blow? I don’t think falsehood is so scary. I don’t think it’s so troubling if stuff that we say turns out false.

Rob Wiblin: Lots of things we say are.

Alan Hájek: Lots of things. And in fact, Gil Harman and Donald Davidson apparently did a bit of early X-phi, a bit of experimental philosophy — I don’t know if they ever published this — where they eavesdropped on conversations in bars and cafes, just listening to how much truth and falsehood people say in normal conversation. And they concluded that much or most of what we say is false for completely understandable reasons. I mentioned some before. Sometimes we are joking, we’re exaggerating. Loose talk is very important here. Sometimes we just have false beliefs. Sometimes we have true beliefs, but we just take shortcuts. We just don’t want to say the full thing; it would be wordy, and so we just cut to the chase.

Alan Hájek: I think it’s like that with counterfactuals. I think they are in fact false, but they’re assertable, the ones that we think are good. And I have a story about that. First I should tell you my truth conditions for counterfactuals themselves and then you’ll see the rest.

Rob Wiblin: Sure, go for it.

Alan Hájek: I think that ‘if P were the case, Q would be the case’ is true just in case the chance of Q, given P, is 1 at a suitable time. And that suitable time is when P was not yet settled. Things could have gone either way with respect to P: it had positive chance, and not-P had positive chance. Now that’s a very demanding truth condition. And that’s why I get results like, “If I were to let go of the cup, it would fall” is false — because the chance isn’t 1 of falling given releasing it. It’s very close to 1. It’s approximately 1.

Alan Hájek: That’s the key to my understanding of the acceptability of counterfactuals: they’re acceptable if they’re approximately true, if the conditional probability is close enough to 1. And that’s very much like the stuff I said about the tennis balls and about the COVID cases. These claims were false, pretty obviously, but they were useful. They were approximately true, good to reason with, acceptable — acceptability is a story about what’s useful to reason with — and they’re assertable.

Alan Hájek: And now let me give the story about assertability for counterfactuals. We take some possibilities seriously in our conversations, and that’s a context-sensitive matter. And I say that “if P were the case, Q would be the case” is assertable just in case the conditional chance is 1. And it’s the following conditional chance: Q, given P and the serious possibilities — the stuff that we treat as live possibilities in our conversation. That’s context dependent. I don’t think that the truth of counterfactuals is context dependent. Nearly everyone says that but me. But I think that the assertability is, and I think it —

Rob Wiblin: Depends on adding additional conditions.

Alan Hájek: That’s it. An extra condition about what possibilities we’re taking seriously. In a normal context, we don’t take seriously the released cup suddenly being saved by a table or an updraft. But if I then draw your attention to these possibilities, then the context shifts.

Rob Wiblin: You start adding a bunch of conjunctions saying, “…and no thermodynamic updraft, and no one catches it, and…” yeah.

Alan Hájek: That’s it. And then it becomes even unassertable, because in that context, we’re taking seriously these rather odd possibilities. But I now locate the context sensitivity in the assertability conditions and not the truth. And I think we can explain away the intuitions that these counterfactuals are true. My error theory is they’re false, but they’re assertable, they’re acceptable, they’re good to reason with. And that’s because they’re approximately true. And that’s because the conditional chance is close to 1. It’s like the number of COVID deaths is close to 6.5 million.
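For those who want Alan’s three notions side by side, here is a small Python sketch (again an editorial illustration; the numeric tolerance for “close enough to 1” is our assumption, since Alan deliberately leaves it vague, and the example chances are invented):

```python
# Editorial sketch of Alan's distinction between the truth, acceptability,
# and assertability of "if P were the case, Q would be the case".

def is_true(chance_q_given_p: float) -> bool:
    # Truth: the conditional chance of Q given P is exactly 1,
    # at a suitable time when P was not yet settled.
    return chance_q_given_p == 1.0

def is_acceptable(chance_q_given_p: float, tolerance: float = 1e-6) -> bool:
    # Acceptability: approximately true, i.e. conditional chance close
    # enough to 1. The tolerance is a stand-in for a vague standard.
    return chance_q_given_p >= 1.0 - tolerance

def is_assertable(chance_q_given_p_and_context: float) -> bool:
    # Assertability: the conditional chance of Q is 1 once we condition on
    # P *and* the possibilities taken seriously in this conversation.
    # Context dependence lives here, not in the truth condition.
    return chance_q_given_p_and_context == 1.0

# The released cup: the bare chance of falling given release is a whisker
# below 1 (updrafts, tables), so the counterfactual is false but acceptable.
chance_fall_given_release = 1 - 1e-12
print(is_true(chance_fall_given_release))        # False
print(is_acceptable(chance_fall_given_release))  # True

# Once the far-fetched possibilities are excluded from the context,
# the conditional chance is 1, so the counterfactual is assertable:
print(is_assertable(1.0))                        # True
```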

Rob Wiblin: So a lot of this is currently resting on the view that we live in an indeterministic world, which I guess we have good reason to think that we do. But hypothetically, if we — starting to make counterfactuals here — if we lived in a deterministic world, would a bunch of this problem go away?

Alan Hájek: I think it doesn’t.

Rob Wiblin: Oh, OK.

Alan Hájek: I think even under determinism, our counterfactuals will come out mostly false. And it’s actually a live debate. And some physicists actually think the world is deterministic. Even quantum mechanics can be given a deterministic interpretation.

Rob Wiblin: Is this the idea that things turn out to collapse for reasons we don’t yet understand, and there’s only one path?

Alan Hájek: Or certain interpretations, like Bohm’s, are deterministic. That’s a live debate. And some physicists might say I’ve taken the wrong turn here.

Alan Hájek: But now let’s take the deterministic case. Now I think there’s a different problem. It’s not so much chanciness. It’s what I would call “unspecificity.” Let’s take a case that philosophers are fond of talking about. Sophie sadly missed the big parade of, I think, baseball players. At a certain point, Pedro danced. She would’ve loved to see that. Sadly, she missed that. She didn’t go to the parade. If Sophie had gone to the parade, she would have seen Pedro dance. All right, now let this be a deterministic case. There’s no chanciness.

Rob Wiblin: We don’t know that. She might have gone to the bathroom.

Alan Hájek: She might have gone to the bathroom, exactly, at just the wrong time. Or the case that is usually considered: she might have got stuck behind a tall person. If Sophie had gone to the parade and got stuck behind a tall person, she would have seen Pedro dance — that seems false. She wouldn’t have seen. OK, now I want to make a lot of that. If she’d gone to the parade somehow or other — that was the antecedent — she would have seen specifically Pedro dance. I say, no, it depends on how the antecedent is realised. If she’d gone to the parade and got stuck behind a tall person, if she’d gone to the parade and gone to the bathroom at the wrong time, it’s not true that she would’ve seen Pedro dance. That might have happened. These are ways of realising the antecedent.

Rob Wiblin: For the same reason, you want to add in, I’m guessing, a whole bunch of additional conditions to the antecedent — like “and she wasn’t stuck behind a tall person and she didn’t go to the bathroom” — all of the stuff that is being assumed in normal language, because we engage in loose talk, but actually should be there if we wanted to formalise it.

Alan Hájek: Exactly right. And now some people will say to me, “Well, context will determine what should be added or not.” And I say that’s a matter of assertability, but as far as the truth goes, I take the counterfactuals at face value. You said if she’d gone to the parade, she would have seen Pedro dance. Now you’re telling me that a defeater of her seeing Pedro dance is getting stuck behind a tall person. And she might get stuck behind a tall person. So I say that it’s not true that she would have seen Pedro dance. She might not have. She might have got stuck.

Rob Wiblin: Are there any other approaches to counterfactuals that it’s worth people having in mind, or alternative ideas that people have put forward to make sense of all of this?

Alan Hájek: Sure. An important alternative approach to counterfactuals involves conditional probabilities, and I like this approach. I’m thinking of Adams, Edgington, Skyrms. Now, first they think that counterfactuals don’t have truth values, but they can be assigned probabilities.

Alan Hájek: According to Adams, for example, your probability for “if P, would Q” is your conditional probability of Q, given P, before learning that P was false. And for Edgington, the correct probability is the conditional chance of Q, given P, just before it turned out that P was false. And Skyrms identifies that counterfactual’s assertability for you with your expectation of the conditional chance. According to these accounts, counterfactuals don’t have truth conditions. Leitgeb does give them truth conditions: he says “if P, would Q” is true just in case the conditional chance is high.

Alan Hájek: And my own account is influenced by all of these. I like conditional probabilities. I really like conditional chances, for counterfactuals, and I really, really like truth conditions, given in terms of those conditional probabilities, those conditional chances. So that’s an important alternative.
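To spell out the Adams-style idea in code for readers who find that helpful (an editorial sketch; the possible worlds and their probabilities are invented for illustration): the probability you attach to “if P, would Q” is simply your conditional probability of Q given P.

```python
# Editorial sketch: an Adams-style probability for "if P, would Q",
# computed as P(Q | P) over a toy distribution of worlds.

# Each entry: (P true?, Q true?, probability of this world)
worlds = [
    (True,  True,  0.18),
    (True,  False, 0.02),
    (False, True,  0.40),
    (False, False, 0.40),
]

p_of_p = sum(pr for p, q, pr in worlds if p)              # P(P) = 0.20
p_of_p_and_q = sum(pr for p, q, pr in worlds if p and q)  # P(P & Q) = 0.18

prob_if_p_would_q = p_of_p_and_q / p_of_p                 # P(Q | P) = 0.90
print(f"Probability of 'if P, would Q' = {prob_if_p_would_q:.2f}")

# On this approach the 0.9 is not a probability of the conditional being
# true (the conditional has no truth value); it measures how acceptable
# or assertable the counterfactual is.
```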

Rob Wiblin: The basic idea here is taking counterfactuals away from the realm of strict logic — where you’re like, “If P, then Q” — and bringing it more into the realm of probability, or Bayesianism, or making claims about correlations, or associations between things. So it’s causal relationships, maybe, where you’re saying, “If P, then Q is more likely,” or something like that?

Alan Hájek: And it’s also a good approach to reasoning and arguing with counterfactuals. And this goes hand in hand with Adams’s alternative approach to thinking about reasoning. So don’t think in terms of classical validity, which is truth-preservation, because again, the thought is that these conditionals — in this case, counterfactuals — don’t have truth values.

Alan Hájek: But we want to reason in such a way that we go from high-probability premises to high-probability conclusions. We don’t want to have our premises being probable and our conclusion being improbable. And Adams has this nice theory of, so to speak, high-probability preservation, and this fits in with that whole alternative programme. So it’s not classical validity, it’s not truth values — it’s high probabilities.

Rob Wiblin: Yeah. OK, let’s push on and think about an application of this set of objections. I think you reckon that this sort of reasoning about counterfactuals, or recognising the trouble that comes with counterfactuals, can potentially present a problem for a flavour of utilitarianism called objective utilitarianism, or I guess objective consequentialism of any kind. Most people have heard of consequentialism in some form: it’s when you judge the value or goodness of actions, or what would be right to do, based on the consequences that they have. But what is objective consequentialism?

Alan Hájek: Yeah. Roughly this: Action one is objectively better than action two if and only if the consequences of action one are better than those of action two. I think here we are really imagining the long-term consequences — not just the immediate consequences, but perhaps all the way to the end of history.

Alan Hájek: And now, we get into a big discussion — which I know is close to the hearts of many listeners — about the long-term consequences of what we do. But anyway, what I’m about to say I think will generalise beyond just objective consequentialism, but that’s a good place to start.

Alan Hájek: All right. So let’s take a case. You have a choice: you could help the old lady cross the street, or go to the pub. What should you do? What’s the right thing to do? Now, let’s suppose, in fact, you take the old lady across the street. You help her. I don’t have any problem with taking a total of all of the goodness — whatever, the happiness or the welfare — after that. I’m happy to allow there’s a fact of the matter of the total value, the total goodness, the consequences of that. But what about the thing you didn’t do? You did not go to the pub. That’s where my worry is going to kick in.

Alan Hájek: First thing, we should make clear that this is a counterfactual. The way I just stated it before, notice the carelessness of it: “Action one is objectively better than action two if and only if the consequences of action one are better than those of action two.” Well, in this case, action two didn’t happen. It was non-actual. It didn’t have any consequences, so we must be talking about counterfactual consequences. And now, my worries about counterfactuals are going to start to kick in.

Alan Hájek: All right, so let’s take the thing you didn’t do: you didn’t go to the pub. Case one: the world is chancy. Let’s consider the very first chancy coin toss that never happened. How would it have landed? “If that coin had been tossed, it would’ve landed heads — not tails, heads.” No, no, I say. That I find implausible; it might have landed tails. Consider the first lottery that never happened. “If the lottery had taken place, ticket number 17 would have won.” No, I say. We can’t say of any ticket that it would have won. Some other ticket might have won instead.

Alan Hájek: All right, but I’ve hardly started. Now I know in the cluelessness industry, this worry about consequentialism, there’s a lot of discussion of how our actions have these far-reaching consequences. There are these ripple effects, but it’s not like ripples in a pond that tend to dampen down as you go further out. No, these just keep on rippling for the rest of history. Which children would or wouldn’t have been born depends acutely sensitively on very minor changes in what we do.

Rob Wiblin: Precise timings of conception.

Alan Hájek: All right, so now let’s go back to the hypothetical visit to the pub, and the first child to have been born thereafter. Which child would hypothetically be conceived depends on which sperm fertilises the egg, and it’s a lottery which sperm wins the race. So there would have to be a fact of the matter of which sperm wins the lottery to fertilise the egg, making it this child that would’ve been conceived and not some other child that would have resulted from a different sperm winning the race. And I’ve still barely started. That’s the first child. But now consider that child’s children and grandchildren and great-grandchildren, and now the rest of history.

Rob Wiblin: And all of the people who they interact with…

Alan Hájek: All of that. All of that, that’s right.

Alan Hájek: Now, I find it wildly implausible that there is a fact of the matter (we’re still considering the chancy case) of all of these chancy processes being resolved in one particular way and no other. But it makes a huge difference to how we evaluate these counterfactual histories which way things go.

Alan Hájek: So in one hypothetical scenario, the children that happen to be conceived are latter-day Gandhis and Einsteins, and a wonderful world follows. And with just a small tweak, we now get a different counterfactual history with a latter-day Hitler, followed by a latter-day Stalin, and a horrible world. And everything in between. And all of this, again, is acutely sensitive to how things are initiated and also how the chance processes go.

Rob Wiblin: OK, so there’s a lot here. Let’s take it step by step. So we’re considering the ethical question: should you help this old lady cross the road, or should you go to the pub? And it turns out that from a total, forever consequentialist point of view, how good the counterfactual of you going to the pub would be is going to depend on the minute timing of exactly when you go to the pub, which cars you happen to walk in front of and slightly delay, when you go and order a beer versus not — because we think that, given the setup of the world, that will ultimately end up affecting the identities of potentially all future people, since it will change the exact moments of fertilisation and so on.

Alan Hájek: Yeah.

Rob Wiblin: Now, given that the action of going to the pub could be extremely good or extremely bad based on a one-second difference of when you do it, you’re going to say that we need to specify much more precisely what the counterfactual is. We need to say this extremely precisely specified set of actions that you’re going to go and engage in, because otherwise it’s just not defined.

Alan Hájek: Yeah.

Alan Hájek: OK. Now, so far I’ve been assuming indeterminism, but now let’s assume determinism, which I think is the best hope for objective consequentialism. And now we’re imagining that, given a precise specification of the initial conditions and the laws of nature, we get an entire history that’s determined thereafter. That’s the best case. And now, maybe, it’s more plausible that there’s a fact of the matter of what would have happened had I gone to the pub.

Alan Hájek: But now the problem of unspecificity kicks in. This is not chanciness at all. Under determinism, if I’d gone to the pub, somehow or other, how exactly would I have gone to the pub? And it makes a difference. Would I have entered the pub at 6:03 and 17 milliseconds rather than 18 milliseconds? And given the acute sensitivity of what follows thereafter, it matters which of these initial conditions is realised.

Alan Hájek: So I say — this parallels what I said about Sophie a moment ago — even under determinism, I just don’t think there’s a fact of the matter given this loosely specified antecedent — I go to the pub, somehow or other, counterfactually — that you get this very specific consequent. It’d be this realisation, the 17-millisecond one perhaps, not some other one. And this, by the way, is another example of implausible specificity. Even under determinism, I think it’s implausibly specific. Loose antecedent, all too tight consequent.

Rob Wiblin: OK, so I’m going to try to rescue it here. So the person says, “Well, when I said ‘go to the pub,’ what I meant was this whole portfolio of possible ways that you could do that.” And you’ll say, “Yeah, but that’s a massive mixture of extremely good and extremely bad outcomes that are depending on exactly how you specified those actions.” Then we’ll say, “OK, let’s say that I could exactly specify the exact path — hypothetically, we could write out in this book perfect instructions of the exact moments I should make each of these movements.” On its face, that might rescue this issue of indeterminacy, although it is now a bit weird.

Alan Hájek: And I should say more about the weirdness.

Rob Wiblin: OK. Yeah.

Alan Hájek: So now let’s very precisely specify the way I go to the pub. We are assuming determinism still, which I’d rather not, but let’s spot that. So now, given the initial condition precisely specified, determinism, I am happy enough that there is an exact history that follows, the one that’s determined.

Alan Hájek: But now we have a different problem. I’ll call it “clumsiness.” And I’m trying to riff off “cluelessness.” Cluelessness, by the way, that’s an epistemic problem — that somehow it’s hard for us to know or to have confidence, reasonable belief, in which of these histories would be realised. I think the problem’s much worse than that. I think it’s a metaphysical problem that there’s just no fact of the matter of what would be.

Rob Wiblin: But we fixed that one.

Alan Hájek: And we fixed that one with the determinism and the exact initial condition. And I say the problem is, I’ll call it “clumsiness”: it’s just not under our control to realise the initial conditions in one very precise way rather than another. To enter the pub exactly at 17 milliseconds after 6:03 rather than 18 is just not something…

Rob Wiblin: You’d be as likely to do the good one as the bad one because they’re so adjacent to one another.

Alan Hájek: That’s basically it, that it’s not under my volition to so finely tune exactly what happens in this case down to the millisecond — let alone all of the extra stuff, exactly when I order the beer and so on. Even if I knew what I ought to do — “This is the trajectory I want to get on: the 17 millisecond one, not the 18 one” — it’s just not something that’s an option for me. I can’t choose 17 rather than 18 milliseconds after 6:03.

Alan Hájek: So to step back, put this in the form of a dilemma. Now, what are the things that we are evaluating? What are the objects of moral evaluation? On the one hand, let them be these rather coarse-grained statements, like, “I help an old lady as opposed to I go to the pub, somehow or other.” So now it’s pretty plausible these really are my options, but now I say it’s not plausible that there’s an entire counterfactual history that would be realised given that unspecificity of the option. That was the first horn of the dilemma.

Alan Hájek: Second horn: now, let the “options” (in inverted commas) be these very precisely specified things, the exact millisecond I enter the pub and so on. Now, I think it’s more plausible that there is a fact of the matter of the entire counterfactual history, but it’s not plausible that that’s an option…

Rob Wiblin: That anyone could actually act on it. It’s irrelevant.

Alan Hájek: Exactly. That’s right.

Rob Wiblin: Indeterminacy or irrelevancy.

Alan Hájek: Yeah. It’s not the appropriate evaluation of a moral theory. It’s not something that I can realise as an act of my volition.

Rob Wiblin: So one way of dealing with this problem would be to specify extremely precisely what you’re going to do. Let’s say you’re going to go to the pub at this exact time, with these exact movements, or help the person in these very exact ways. And maybe that’s overly precise, but it might at least allow you then to say, in principle, that there is a specific consequence. Again, always assuming determinism here, for the sake of the conversation.

Alan Hájek: That’s right. Well, for a start, what does determinism mean? It means given an entire time slice of the world and the laws of nature, the rest of history follows. But I don’t think that it’s enough just to be told the details of your going to the pub. For example, even an exact specification of your pubbing, I say, doesn’t determine a unique history thereafter, even under determinism. Because you’re just a tiny part of the world. In the sweep of world history, you’re just a speck. I’m sorry if this comes as news to you, but even describing precisely the details of your going to the pub, the exact millisecond of your arrival and exactly your movements thereafter, that just falls way short of determining the entire world’s initial conditions at that time. And under determinism, it’s the initial conditions of the entire world, and the laws, that entail the rest of history.

Alan Hájek: So, to put it colourfully, according to determinism, a snapshot of the world at the time will give you the rest of history. But I’m imagining we’ve only just got a selfie! We’ve only just got this little part of the world: you entering the pub. And that’s not enough to entail the rest of history. You’re just a tiny part of a portion of a small bit of a fragment of a time slice of history.

Rob Wiblin: So in order to specify it sufficiently closely, it would have to not only talk about your actions, but I guess specify the entire initial conditions of the entire universe, or all of the different atoms that could affect all of the different ones — which is then, I guess, becoming an absurdly bloated set of instructions.

Alan Hájek: We need not just information about you — we need to know about the hypothetical other people, and the hypothetical dogs and frogs, and bees and trees, and photons and electrons, and everything else, hypothetical pandemics, natural disasters. And I claim you don’t get all of that for free just out of the specification of your going to the pub, even precisely specified.

Rob Wiblin: OK, so this is all very clever, but it does feel —

Alan Hájek: Well that’s a start. I’ll take that.

Rob Wiblin: It does feel like some kind of trick’s being played here. Like surely, we can patch this theory. So let’s just try to do it in a kind of common-sense way.

Rob Wiblin: So what did someone really mean when they said, “If I’d gone to the pub, it would be better”? What they were talking about was some portfolio of different scenarios in which you went to the pub. And we might sample from them, and then hypothetically we could see what the full long-term consequences of each would be. You’re going to choose one of these different pathways somewhat at random from this portfolio of things that plausibly match going to the pub. And then we’re going to average the expected value of the different ones that you might sample. And then you should do it if the expected value across the different options you might in practice end up taking is positive rather than negative. That seems to fix it.
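Rob’s patch is essentially a Monte Carlo move: treat the coarse option as a portfolio of precise realisations, sample them, and compare averages. Here is a minimal Python sketch of that idea (an editorial illustration: the value function, its wild sensitivity, and all the numbers are invented stand-ins):

```python
# Editorial sketch of Rob's proposed fix: evaluate a coarse-grained option
# ("go to the pub") by averaging over sampled precise realisations of it.

import random

def long_run_value(realisation_seed: int) -> float:
    # Stand-in for the total long-term value of one precise realisation.
    # Different seeds (the millisecond you arrive, say) swing the outcome
    # wildly, mimicking the acute sensitivity Alan describes.
    rng = random.Random(realisation_seed)
    return rng.gauss(mu=1.0, sigma=1000.0)

def expected_value(option_seed: int, n_samples: int = 100_000) -> float:
    # Sample precisifications of the coarse option and average their values.
    total = 0.0
    for i in range(n_samples):
        total += long_run_value(option_seed * n_samples + i)
    return total / n_samples

print(f"EV(go to the pub)     = {expected_value(option_seed=0):.2f}")
print(f"EV(help the old lady) = {expected_value(option_seed=1):.2f}")

# Individual realisations range from wonderful to horrible, but the
# comparison between options is made at the level of expectations.
```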

Alan Hájek: I think it’s a big improvement, but notice you really had to change the view. Remember, the original view said one should perform the action that has the best consequences. It wasn’t any probabilistic thing; it wasn’t an expected value thing — it was just the best consequences. And I’ve been challenging that thought. And in particular, the thought that there’s a fact of the matter of the consequences for something you didn’t actually do.

Alan Hájek: Now, I like this much better, where you somehow probabilify this thing — you take the expected consequences, where that’s a probabilistic notion. And by the way, we’re going to have to talk about what the nature of probability is. I like how various topics that we’ve been talking about converge. Now, the interpretation of probability matters. I think here, the right way to go is: it’s some sort of reasonable subjective probability that we are putting in at this point. But notice we have gone a long way from the original objective consequentialism.

Rob Wiblin: For practical purposes, maybe it feels quite similar, but now, at this quite foundational level of the theory, we’ve started sticking in subjective probability judgements and expected value calculations, things that originally seemed like distinct philosophical issues. Now our moral theory is this combination of consequentialism, plus expected value, plus some view on how people ought to form views about what will happen.

Rob Wiblin: So an unpleasant consequence of this is that if we identify problems with subjective opinion-forming, or with how we aggregate those opinions into expected values, those problems sit really deep in the moral theory, and so they’re going to create a problem for this subjective utilitarianism as well.

Alan Hájek: Yeah. Spot on. My objection to objective consequentialism was that it presupposes this bizarre metaphysics I find implausible. To be fair, I should say very quickly, that some really serious good philosophers do, I think, believe that there are what Orri Stefansson calls counterfacts: facts of the matter of what would happen given any antecedent for the entire world.

Alan Hájek: This goes back to the Molinists, who thought that God has this knowledge of these counterfactuals. And it seems to go pretty well with the early Stalnaker view that there’s a unique closest possible world where an antecedent is realised. John Hawthorne runs with that view. Sarah Moss, I mentioned Orri Stefansson, Richard Bradley. There are a number of people who have views, I think, in this neighbourhood, so I have to take this view very seriously, even though I’ve said I don’t buy it in the end.

Alan Hájek: So far, I’ve been making the point that objective consequentialism presupposes this very questionable metaphysics. And even if it turns out the metaphysics is correct and it’s defended by all of these really good philosophers, we should be upfront that this ethical theory, this theory about morality, has this deep metaphysical commitment.

Alan Hájek: Now, I think you are making the next point that if we fix the theory, as I think we should, and we go probabilistic and we get expected value into the calculations, now the foundations are probability theory and expected value theory. And we’ve talked earlier about some of the problems there. And again, the moral theory has got these foundational presuppositions. We want to make sure they’re in good order.

Rob Wiblin: Yeah, I see. So it makes solving the problems with that even more important.

Alan Hájek: That’s absolutely right.

Rob Wiblin: Because without that, consequentialism just doesn’t even really make sense.

Alan Hájek: Yeah. And quite a while back, you asked me, “Why should we care about counterfactuals? And why should we care about probability and expected value theory?” And now we see yet another application: some moral theories are going to depend on how counterfactuals pan out, and some of them will now depend on how probability and expected value turn out.

Alan Hájek: And by the way, just to finish an earlier thought, I began by saying, I’ll put my objection as an objection to objective consequentialism — but I actually think it will generalise beyond that, namely any moral theory that takes consequences at least somewhat seriously. So even a deontologist should take some account of consequences, especially when the stakes are sufficiently high.

Rob Wiblin: I think virtually all do think that the consequences matter as well. Yeah.

Alan Hájek: Yeah. And especially, given this more recent movement about how our actions have these huge consequences for the rest of history.

Rob Wiblin: Right. Yeah. Someone’s like, “Oh, I shouldn’t steal this thing.” Of course, that ends up changing everyone’s identity for the same reason as going to the pub potentially does. So it’s like massive consequences.

Alan Hájek: That’s right. Yeah. That’s it, the deontologist who says, “I ought to keep a promise,” but now imagine some history where —

Rob Wiblin: In so doing, creating the next Hitler.

Alan Hájek: Hitlers and Stalins later are created. So even a deontologist has to take some account of consequences. Similarly, a virtue ethicist. Virtue ethics is partly a matter of, I hope, promoting good consequences for people, but now we have to take seriously these worries about counterfactual consequences. And much of the stuff I said before kicks in again.

Rob Wiblin: Yeah. OK, so it’s really going to impact any theories that give some weight to consequences.

Alan Hájek: Yeah.

Rob Wiblin: I suppose if you totally constrain your moral picture to just being like, “It’s wrong to steal. You shouldn’t steal here and now,” that’s sufficiently constrained that it’s not going to create a problem, because you can say, “If I took this course of action, I would be much less likely to steal.” Although I suppose now we’re into likelihoods. But you could say, “Well, I’m going to specify a path of action that doesn’t include stealing. And I can follow that set of actions sufficiently closely that I’ll be confident that I won’t steal.” And so that’s safe. But as soon as you start considering the broader issues that are at play, then this is going to come back.

Alan Hájek: Exactly. That’s right. And now, this takes us back to longtermism. So maybe if we are myopic and just look at the very short-term consequences of what we do, the considerations that I’ve mentioned don’t kick in. But if we take a broader horizon, then I think it’s a very live issue.

Rob Wiblin: Slightly reminding me of the paralysis argument that I talked about with Will MacAskill in my second interview with him a couple of years ago. I guess we don’t have time to go into that now, but it turns out if you’re a deontologist, and you place different values on harms that you actively cause versus harms that you prevent through your actions, then you can end up in a very sticky situation where potentially lots and lots of actions are prohibited to you because of their foreseeable unintended consequences basically. It has a similar flavour.

Alan Hájek: Yeah, that’s right. Although even being paralysed and just locking yourself up at home so that you don’t have any effects on all these other people that you can’t control and you can’t foresee, there’ll still be consequences of your locking yourself up at home. Unforeseen, there’ll be cluelessness there. Clumsiness, too, by the way.

Rob Wiblin: That’s right.

Alan Hájek: The exact way you quarantine yourself. I think you just can’t avoid it. What are you supposed to do? Just stay at home and not even move? And then, you die? And then, people are going to come and have to get your body. And that’s going to have consequences after all.

Rob Wiblin: I see. So it becomes a bit unclear what it’s stating.

Alan Hájek: Yeah.

Rob Wiblin: Yeah. I think that this is a huge kettle of fish. People who want to understand what we were just talking about probably have to listen to my second interview with Will.

Alan Hájek: Just to summarise what we’ve said here: I think it’s underappreciated how these ethical positions — whether objective consequentialism or some other kinds of consequentialism, maybe subjective with expected value, and even, at the end, deontology and virtue ethics — may involve some very questionable metaphysics, I say, about how the counterfactuals pan out. Or if we now probabilify them in terms of expected value, then the foundations of probability and expected utility theory are very relevant.

Rob Wiblin: Just as a reminder for people who might have forgotten: deontology is the class of rule-based ethical theories — like you shouldn’t lie, you shouldn’t steal — classically, that kind of thing. Especially prohibitions, although usually prohibitions that can sometimes be swamped by other considerations that might be really important.

Alan Hájek: Yeah. That’s right. Thank you.

Rob Wiblin: And virtue ethics is the theory that puts front and centre the cultivation of good qualities in the actor: you should try to be a kind person, things like that.

Alan Hájek: Yeah, that’s right. And then, when you spell out these things so that they’re somewhat sensible, consequences will have to be part of the picture. And my worry is about the non-actual consequences, as before.

Rob Wiblin: OK, so this has been a very nice segue to bring together almost all the different threads that we talked about through the interview. It’s been a lot.

Alan Hájek: It’s been lots of fun, I have to say.

Rob Wiblin: I would not be surprised if some people, including me, might have to listen to this through twice to fully grasp everything that’s gone ahead.

Alan Hájek: Me too.

Rob Wiblin: We should wrap up, though. We’ve got other sessions at the conference to get to.

Alan Hájek: Yeah.

Rob Wiblin: Final question. We managed to get through our thing on stage earlier without too much trouble. We didn’t do anything too embarrassing. Or if we did, I guess listeners won’t know because we’ll have cut it out.

Alan Hájek: Yeah.

Rob Wiblin: You’ve also done tonnes of presentations, tonnes of stuff on stage, over the years.

Alan Hájek: Sure.

Rob Wiblin: Have things ever gone either incredibly well or incredibly poorly for you?

Alan Hájek: I hope some of them have gone incredibly well. That’s for others to judge. I can tell you about one that went very surprisingly, shall we say. This happened in Argentina. I was at a conference and I gave a talk. And as is often the case, there was an empty glass in front of me and a filled bottle. Towards the end of the talk, I was getting thirsty. Q&A began and I poured the bottle into the glass, and then I had a nice big drink from the glass, and my head nearly exploded. My first reaction was just to spit over the audience. And I thought, “I can’t do that,” so I just swallowed.

Rob Wiblin: What was it?

Alan Hájek: Well, it was as if my head had been hit by a sledgehammer. I thought, “What the hell do they put in their water here in Argentina?” Well, I have to explain. Behind me was a whiteboard, and you were supposed to use marker pens. And then you erase what you’ve written on the board with a bit of eraser fluid, which is 100% pure ethanol.

Rob Wiblin: Jesus Christ.

Alan Hájek: So I had just had a big drink of pure ethanol, so I was smashed.

Rob Wiblin: Did people notice? Surely. I mean, how would you avoid making a scene?

Alan Hájek: The first question is still coming at me. And now, the room is just spinning for me. And I said, “Hey folks, just a word of caution. Next time you put an empty glass and a bottle next to the glass, make sure it’s not cleaner fluid. Water would be better.” And in a way, it sort of worked out. Now normally, I maybe get a little nervous at talks, and maybe in Q&A I perhaps jump in a little too fast or something. But I was so mellow. I was so relaxed because I was smashed. “I’m just going to answer the question when I damn well feel like it.”

Alan Hájek: So now, I’m thinking maybe in the future when I give a talk at a conference, I should bring along a little hip flask of pure ethanol, just to relax me a bit. Don’t worry, I’m not serious! But anyway, that was one of the more surprising things to happen when I gave a talk.

Rob Wiblin: Yeah. Well, I’m glad you didn’t hurt yourself. Isn’t it dangerous to…

Alan Hájek: I think it is dangerous. I was OK. The story does end pretty well.

Rob Wiblin: OK. Yeah.

Alan Hájek: I think! I don’t think there was permanent damage, but it was a little scary at the time.

Rob Wiblin: Yeah. I think we can say if there was any brain damage done, it has not been apparent to me here.

Alan Hájek: Right. Thanks so much. And fortunately, we didn’t have such a mishap doing this interview.

Rob Wiblin: My guest today has been Alan Hájek. Thanks so much for coming on The 80,000 Hours Podcast, Alan.

Alan Hájek: Thanks so much for having me, Rob. It’s been a blast.

Rob Wiblin: If you liked that episode, you might also like some of our other more challenging episodes, such as #98 — Christian Tarsney on future bias and a possible solution to moral fanaticism or #86 — Hilary Greaves on Pascal’s mugging, strong longtermism, and whether existing can be good for us.

Sticking with the practical philosophy theme, if you’ve made it this far there’s a decent chance you’ve heard of the Oxford philosopher Nick Bostrom, who has pioneered thinking about existential risk and humanity’s potentially grand futures.

A friend of mine has started a podcast feed called Radio Bostrom that features professional audio readings of Bostrom’s papers and articles on those topics, as well as wise philanthropy, the ethics of AI, transhumanism, and the case for speeding up some investments in technology while slowing down others.

If that sounds interesting to you, just search for ‘Radio Bostrom’ in any podcasting app, or visit radiobostrom.com to learn more.

Finally, as I mentioned in the intro, if you’re interested in working with us at 80,000 Hours, we’re currently hiring a recruiter to help grow our team!

Not being able to hire fast enough is one of our biggest bottlenecks as an organisation. The person in this role will run hiring rounds, source candidates and help to improve our recruiting processes — and will be key to increasing our impact over the coming years.

This might be a particularly great role for someone excited about building a career in operations or HR at high-impact organisations, and you don’t need any previous experience with recruiting to apply.

Applications close on November 2, and you can find out more at https://80000hours.org/jobs/.

All right, The 80,000 Hours Podcast is produced and edited by Keiran Harris.

Audio mastering and technical editing by Ben Cordell and Ryan Kessler.

Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.

Thanks for joining, talk to you again soon.