Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner?
Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.
Let’s work through one here: how likely is it that you’ll enjoy listening to this episode?
The first step is to figure out your ‘prior probability’: what’s your estimate of how likely you are to enjoy the interview before getting any further evidence?
Other than applying common sense, one way to figure this out is called reference class forecasting: looking at similar cases and seeing how often something is true, on average.
Spencer is our first ever return guest, so one reference class might be: how many previous Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed? Being this specific limits bias in your answer, but with a sample size of at most one, you’d probably want to add more data points to reduce variability.
Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let’s say you’ve listened to 10, and enjoyed 8 of them. If so, 8 out of 10 might be your prior probability.
But maybe the two you didn’t enjoy had something in common. If this episode resembles ones you’ve liked in the past, you’d update towards expecting to enjoy it; if it resembles ones you’ve disliked, you’d update the other way.
You can zoom out further: what fraction of long-form interview podcasts have you ever enjoyed?
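To make that concrete, here’s a minimal sketch in Python of how you might blend nested reference classes into a single prior, giving more weight to the classes most similar to the case at hand. The counts and weights are made up for illustration, and the weighting scheme is an assumption rather than Spencer’s actual method:

```python
# Minimal sketch of reference class forecasting with made-up numbers.
# Each reference class contributes (enjoyed, listened) counts; broader,
# less similar classes get smaller weights. The weights are illustrative.

reference_classes = [
    # (enjoyed, listened, weight)
    (1, 1, 3.0),    # previous Spencer Greenberg episodes (tiny sample)
    (8, 10, 2.0),   # 80,000 Hours Podcast episodes you've heard
    (30, 50, 1.0),  # long-form interview podcasts in general
]

weighted_enjoyed = sum(w * e for e, n, w in reference_classes)
weighted_total = sum(w * n for e, n, w in reference_classes)

prior = weighted_enjoyed / weighted_total
print(f"Prior probability of enjoying the episode: {prior:.2f}")
```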
Then you’d look to update whenever new information became available. Do the topics seem interesting? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?
Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we’d invite him back for a second episode?
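Reading that question as a likelihood ratio — how probable the evidence is if the hypothesis is true versus if it’s false — here’s a minimal sketch of the resulting Bayesian update in odds form. The probabilities below are illustrative assumptions, not figures from the episode:

```python
# Minimal sketch of updating a prior with Bayes' rule in odds form.
# The "Question of Evidence": how likely is this evidence if the
# hypothesis is true vs. if it's false? All numbers are illustrative.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.8  # e.g. you enjoyed 8 of the last 10 episodes

# Evidence: Spencer was invited back for a second episode. Suppose that's
# 90% likely if he's worth listening to, but only 20% likely if he isn't.
posterior = update(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.2)
print(f"Posterior probability of enjoying the episode: {posterior:.2f}")
```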
Links to learn more, summary and full transcript.
We’ll run through several diverse examples, and show how to actually work out the changing probabilities as you update. But that’s only a fraction of the conversation. We also discuss:
* How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
* What do people actually value? How do EAs differ from non-EAs?
* Why should we care about the distinction between intrinsic and instrumental values?
* Would hedonic utilitarians really want to hook themselves up to happiness machines?
* What types of activities are people generally under-confident about? Why?
* When should you give a lot of weight to your prior belief?
* When should we trust common sense?
* Does power posing have any effect?
* Are resumes worthless?
* Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
* What’s the probability that China and the US go to war in the 21st century?
* How should we treat claims of expertise on diets?
* Why were Spencer’s friends suspicious of Theranos for years?
* How should we think about the placebo effect?
* Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.
The 80,000 Hours podcast is produced by Keiran Harris.