10-Minute Talks

In a post-truth world, can we always trust data? And what about our human biases? Walking us through ‘the ladder of misinference’, Alex Edmans FBA outlines how statistics and studies can feed misinformation, and the tools we need to resist and make informed choices. 

You can learn more from Professor Edmans on the subject in his recent book ‘May Contain Lies’ https://www.penguin.co.uk/books/455479/may-contain-lies-by-edmans-alex/9780241630181  

Speaker: Alex Edmans FBA, Professor of Finance, London Business School   

This podcast is for informational and educational purposes.  

10-Minute Talks are a series of pre-recorded talks from Fellows of the British Academy screened on YouTube and also available on all podcasting platforms. https://podcasts.apple.com/gb/podcast/10-minute-talks/id1530020476    

This podcast was produced by https://www.haychdigital.com/  

Find out more about the British Academy: https://www.thebritishacademy.ac.uk/    

For future events, visit our website: https://www.thebritishacademy.ac.uk/events/    

Subscribe to our email newsletter: https://email.thebritishacademy.ac.uk/p/6P7Q-5PO/newsletter

What is 10-Minute Talks?

The world’s leading professors explain the latest thinking in the humanities and social sciences in just 10 minutes.

My name's Alex Edmans. I'm a professor of finance at London Business School and a Fellow of the British Academy, and my talk is entitled 'Why Truth Is Not Enough'.

The opioid epidemic has killed 650,000 people in the US and 2 million people around the world. Why were opioids prescribed so liberally when they're highly addictive? It's because many people thought they were not addictive at all. A study published in the prestigious New England Journal of Medicine concluded that addiction is rare in patients treated with narcotics. It's been cited 1,700 times.

So let's look at the study. Well, actually, it's not a study at all. It's a letter to the editor. It hasn't been peer reviewed or checked by anybody. And even if everything in the letter were true, it studied just one narcotic preparation; maybe one dose of narcotics doesn't make you addicted, but four or five might. It also looked at hospitalized patients. Maybe opioids are okay when given in a controlled environment, but when prescribed to outpatients, they can be deadly. So this is an example of why truth is not enough.

We know we live in a post-truth world, and we think the solution is to check the facts, or to ask social media companies to fact-check for us. But the punchline of my talk is that even if something is 100% accurate, it may well be misleading unless we check the context in which the evidence was gathered and what was actually studied.

So why do we make these mistakes? It's not that we're bad people who want to get others addicted. It's that we're people, and we have our biases, and one key bias is confirmation bias. If there's something we really want to be true, such as 'there's a cure for chronic pain and it won't get you addicted', we will lap it up uncritically without checking whether it's actually accurate. In our brains we have a striatum, and when we see something that we want to be true, it releases dopamine, switches off our critical thinking faculties, and we lap everything up uncritically. In my recent book, 'May Contain Lies', I go through all the ways that we can be misled by truth, and I illustrate this in something I call the ladder of misinference.

So why a ladder? Well, when we start from some statements and draw conclusions from them, we climb up the ladder, but the higher rungs are broken, because sometimes the inferences that we make are not valid. What I've just given is an example of the first misstep up the ladder: a statement is not fact. It may not be accurate. We like to quote statements all the time. 'Addiction is rare in patients treated with narcotics.' 'Culture eats strategy for breakfast.' '10,000 hours is the secret to success.' But without actually looking at the evidence behind the statement, and what was actually studied in that evidence, it may well be misleading.

So we might think the solution is just a more thorough fact check: check not only the evidence, but also what was actually measured. Unfortunately, that's not enough, because of the second misstep up the ladder: a fact is not data. It may not be representative. Even if a fact is cast-iron accurate, it might be cherry-picked. It might be the exception that does not prove the rule.

The third most viewed TED Talk of all time, by Simon Sinek, claims that starting with 'why' is the secret to success, which we want to be true. We want to believe that you can do anything you put your mind to. And he gives lots of facts to back this up: companies like Apple, organisations like Wikipedia and people like the Wright brothers are all highly successful. That is cast-iron fact. But there might be hundreds of other companies, organisations or people who also started with why and failed, and we never hear about them. Simon Sinek never tells us about them because they don't support his thesis. What we would really want is something like a clinical trial. We take some patients, give some of them the drug and see how many get better and how many get worse. We give others a placebo and see how many get better and how many get worse, and we compare the two.

Now, importantly, our data set needs to contain people who took the drug and still got worse: those are companies that started with why and failed. But you'll never hear about such companies in a Simon Sinek book or in any book of that type; they will never give you counterexamples. The data set also needs to contain people who took the placebo, yet still recovered: those are companies that did not start with why and yet still succeeded. Again, we will never hear about them in these types of books. So even if all the examples they give are 100% accurate, it may well be that the real action is in the data that we don't see. You might think the solution is just to get all the data, to see the full picture. But unfortunately, that is also not enough, because of the third misstep up the ladder: data is not evidence. It may not be conclusive. Data is just a collection of facts.
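To make those missing cells concrete, here is a minimal sketch in Python with entirely hypothetical numbers (the group labels and counts are invented for illustration, not taken from the talk or from any real data). It contrasts the storyteller's view, which counts only showcased successes, with the clinical-trial view, which compares success rates across the full 2x2 table:

```python
# Hypothetical counts, invented purely for illustration.
# Rows: whether a company "started with why"; columns: outcome.
counts = {
    ("with_why", "succeeded"): 30,    # the examples the books showcase
    ("with_why", "failed"):    270,   # counterexamples we never hear about
    ("no_why",   "succeeded"): 25,    # the "placebo" group that recovered anyway
    ("no_why",   "failed"):    275,
}

def success_rate(group):
    """Success rate within one group, using the full 2x2 table."""
    wins = counts[(group, "succeeded")]
    losses = counts[(group, "failed")]
    return wins / (wins + losses)

# Storyteller's view: 30 shining successes, thesis "confirmed".
# Full-picture view: the two success rates are almost identical.
print(f"started with why: {success_rate('with_why'):.1%}")  # 10.0%
print(f"did not:          {success_rate('no_why'):.1%}")    # 8.3%
```

With only the top-left cell visible, 'start with why' looks like a law of success; with all four cells, it barely moves the needle.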

Evidence is data that supports one conclusion while also ruling out alternative explanations. A study took 5,500 children, lots of data, and found that the ones who were breastfed as babies had a higher IQ than the ones who were bottle-fed. One explanation is that breastfeeding causes that higher child IQ, and indeed that's the interpretation of the World Health Organization, which recommends exclusive breastfeeding for the first six months. But there are alternative explanations. Whether a child is breastfed or not is not random. Breastfeeding is tough; it's difficult to do without family support. So maybe the babies with a more stable home environment were the ones who were breastfed, and that stable home environment was behind the higher IQ, not the breast milk itself. And indeed, when you control for the home environment, when you strip out the effect on IQ of these other factors, the feeding method does not matter at all. This is really important, because mothers are often told 'breast is best', that you're a bad mother if you don't exclusively breastfeed your babies. But this is not supported by the evidence. It guilt-trips mothers into only breastfeeding when, correctly applied, bottle feeding can be just as effective.
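As a toy illustration of what 'controlling for the home environment' means, here is a small Python simulation (all numbers invented; this is not the actual study's data). By construction, the feeding method has zero effect on IQ, yet a raw comparison shows a gap, which disappears once we compare like with like:

```python
import random

random.seed(0)

# Simulated children: a stable home raises both the chance of
# breastfeeding and IQ; feeding itself has zero effect by construction.
children = []
for _ in range(5500):
    stable_home = random.random() < 0.5
    p_breastfed = 0.8 if stable_home else 0.3   # confounder drives feeding
    breastfed = random.random() < p_breastfed
    iq = 100 + (8 if stable_home else 0) + random.gauss(0, 10)
    children.append((breastfed, stable_home, iq))

def mean_iq(rows):
    return sum(iq for _, _, iq in rows) / len(rows)

breast = [c for c in children if c[0]]
bottle = [c for c in children if not c[0]]
print(f"raw gap: {mean_iq(breast) - mean_iq(bottle):+.1f} IQ points")  # ~ +4, spurious

# "Controlling" for home environment: compare within each stratum.
for home in (True, False):
    b = [c for c in children if c[0] and c[1] == home]
    f = [c for c in children if not c[0] and c[1] == home]
    print(f"stable_home={home}: gap {mean_iq(b) - mean_iq(f):+.1f}")  # ~ 0
```

The raw gap is genuinely there in the data, yet entirely produced by the confounder; within each home-environment stratum, the feeding gap is roughly zero.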

Now, we know deep down that correlation is not causation. So why do we make these mistakes? Again, it's confirmation bias. We want to believe that breast milk is better: it's natural, so it must be superior to a formula concocted by a giant corporation. So how do we, as busy people who don't have time to look up every single footnote, figure out whether something is correlation or causation? Here's a simple tip: if there is a result that we really want to be true, imagine it was the opposite result, and see how we would pick it apart. So let's apply that.

Let's say the study found that breastfed babies had lower IQ. We don't want that to be true, because we think breast milk is natural, it's better, so we'd want to pick it apart. How would we do so? Well, we'd say breastfed kids might be from poorer families which cannot afford formula. Now that we've pointed to an alternative explanation, poverty and family background, as a possible driver of the differences in IQ, we can ask ourselves whether that alternative explanation still arises even when the result is in our favour. The reason I think the idea of 'imagine the opposite' is so powerful is that it highlights that the skills for discernment are already within us. We don't need to do a PhD in statistics in order to discern misinformation; that's just not practical. We already have natural discernment. Whenever I see a study posted on LinkedIn which people don't like the sound of, there's no shortage of comments as to why it's a cherry-picked example, or why correlation does not imply causation. So we know how to ask these questions. But when people see something posted that they do want to believe, they lap it up uncritically. The idea of 'imagine the opposite' is to get us to be as discerning about a study we do like as about one that we don't.

The fourth and final misstep is that evidence is not proof. It may not be universal. Even if evidence is cast-iron accurate and shows causation, not correlation, it may only be true in the context in which it was gathered, unlike a proof, which applies in all situations. When Archimedes proved the area of a circle, that was true not only in ancient Greece in the third century BC, when he proved it, but around the world and in London today.

Now, another famous TED Talk, which led to another famous book, was by Angela Duckworth, and it's on the power of grit: passion and perseverance, again qualities we believe to be really important. Her most famous study took men and women who got into West Point, the United States Military Academy. Now, actually, getting into West Point isn't enough: you haven't qualified until you complete a difficult six-week training course known as Beast Barracks. Angela Duckworth wanted to find out what led to cadets successfully completing Beast Barracks, and she found that, despite this being such a physical challenge, grit was even more important than fitness, which is why it was such a striking result. However, the problem with her study is something known as restriction of range. In order to get into West Point to begin with, you have to be extremely fit. And if there are diminishing returns to fitness, then it's not surprising that fitness doesn't matter in completing Beast Barracks, because everybody was already fit enough. So something else, like grit, may be more important. But the average man or woman on the street dreaming of joining the military might be better off working on their fitness, not their grit.
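Here is an equally toy Python simulation of restriction of range (invented numbers, not Duckworth's data). Fitness is built in as the stronger driver of completing the course, with diminishing returns above a cap; among a group pre-selected to be above that cap, fitness stops predicting anything and grit looks dominant:

```python
import random

random.seed(1)

# Toy model: completing the course depends on fitness (with diminishing
# returns above a cap) and on grit. All numbers are invented.
CAP = 1.5  # beyond this, extra fitness no longer helps

population = []
for _ in range(100_000):
    fitness = random.gauss(0, 1)
    grit = random.gauss(0, 1)
    completes = 2 * min(fitness, CAP) + grit + random.gauss(0, 1) > 2.5
    population.append((fitness, grit, completes))

def corr(rows, idx):
    """Correlation between trait idx (0 = fitness, 1 = grit) and completion."""
    xs = [r[idx] for r in rows]
    ys = [1.0 if r[2] else 0.0 for r in rows]
    n = len(rows)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Only the extremely fit get in; past the cap, fitness has nothing left to add.
admitted = [r for r in population if r[0] > CAP]

print(f"everyone: fitness r = {corr(population, 0):+.2f}, grit r = {corr(population, 1):+.2f}")
print(f"admitted: fitness r = {corr(admitted, 0):+.2f}, grit r = {corr(admitted, 1):+.2f}")
# Typical output: fitness predicts strongly in the full population but is
# near zero among the admitted, where grit becomes the stronger predictor.
```

Nothing about the underlying causal weights changes between the two samples; only the range of fitness does, which is exactly why a result from the restricted group cannot be extrapolated to the general population.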

What's true in a very small subset of extremely fit people might not be true in the general population. We can't extrapolate from one small context to make general claims, like a universal proof. To hammer this home, there was a study which looked at whether having a parachute helped you survive falling out of an airplane. They got some volunteers and had them jump out of an airplane. Half of them were lucky and got parachutes; the other half weren't so lucky and had to jump without one. How is this ethical? Didn't this lead to deaths? Well, what they found was that parachutes had no effect on the number of people who died. Why? Because the planes were grounded: the volunteers jumped just two feet onto the ground. Clearly this was a satirical study, but it highlights the importance of range. Just because parachutes don't make a difference for a two-foot jump, it doesn't mean they're ineffective at 10,000 feet.

So let's sum up all the ways that we can be misled by truth, and go through four simple questions we can ask ourselves to ensure that we're not deceived. The first misstep is that a statement is not fact: it may not be accurate. So we should ask ourselves: what is the source? What was studied? What was actually measured? The second misstep is that facts are not data: they may not be representative. Ask ourselves: are we being shown just a few cherry-picked examples? What does the full picture look like? Third, data is not evidence: it may not be conclusive. Ask ourselves: are there rival theories, alternative explanations for the same data? Fourth, evidence is not proof: it may not be universal. Ask ourselves: what was the context in which the evidence was gathered, and might the findings not apply to the context that we care about? By asking ourselves these simple questions, we can navigate the minefields of misinformation out there.