Welcome to UCL Brain Stories, the monthly podcast series from the UCL Neuroscience Domain presented by Caswell Barry (UCL Division of Biosciences), Steve Fleming (UCL Division of Psychology & Language Sciences) and Selina Wray (UCL Queen Square Institute of Neurology). UCL Brain Stories aims to showcase the best of UCL Neuroscience, highlighting the wide range of cutting-edge research going on within the Neuroscience Domain as well as bringing you the people behind the research to share their journey of how they ended up here. Each month we’ll be joined by a leading neuroscientist to offer their perspective on the big questions and challenges in Neuroscience research, to find out what stimulated their fascination with the brain and hear how they ended up becoming part of the UCL Neuroscience community.
For more information and to access the transcript: https://www.ucl.ac.uk/research/domains/neuroscience/brain-stories-podcast
Tali Sharot - UCL Brain Stories Episode 20
Tue, Aug 13, 2024 3:38PM • 49:49
SUMMARY KEYWORDS
optimism bias, bias, true, brain, humans, people, beliefs, call, information, study, misinformation, wrote, platforms, interesting, reward, ucl, imagine, book, habituation, shown
SPEAKERS
Speaker 1, Caswell Barry
Speaker 1 00:01
Hello and welcome to Brain Stories. I'm Caswell Barry, and I'm here with my co-host, Selina Wray. On Brain Stories we aim to provide a behind-the-scenes profile of the latest and greatest work in neuroscience, highlighting the stories and the scientists who are making this field. We don't just ask about the science; we ask how the scientists got to where they are today and where they think their field is going in the future. And it's a huge pleasure today to be joined by Professor Tali Sharot. Tali is the director of the Affective Brain Lab. She's a professor of cognitive neuroscience in the Department of Experimental Psychology and the Max Planck UCL Centre for Computational Psychiatry at University College London. She's also part of the faculty of the Department of Brain and Cognitive Sciences at MIT. Tali's research integrates neuroscience, behavioral economics and psychology to study how emotion and motivation influence people's beliefs and decisions. Tali has two award-winning books, The Optimism Bias and The Influential Mind, and at the end of February she co-authored a book with Cass Sunstein called Look Again. She has also given two TED talks that have been viewed more than 15 million times in total, and has written multiple op-eds for the New York Times, Time, the Guardian, the Washington Post, CNN and others. Tali, welcome and thank you for joining us today. Thank you for having me.
01:31
So maybe we could start by just hearing a very brief overview, in your own words, of what your research is all about. So, it's quite broad. I try to understand what drives human behavior, and specifically how humans make decisions, and how emotion and motivation drive those actions and decisions,
01:57
and in relation to that, how we decide what information to seek out, how we use that information to update our beliefs and the interactions between humans and humans, and also now humans and technology.
02:13
And we look at this at really different levels, so all the way from behavior to neural systems, looking at pharmacology as well, and using a lot of different methods to do so. So from brain imaging to
02:34
looking at how people interact with the web, so using things like natural language processing, and more traditional experimental psychology. Wow, that sounds fascinating. Just to make this as real as possible, I think I have an idea of what's going on, but could you describe a typical experiment? Say you put people in the brain scanner, what sorts of things are you asking them to do? For example, in a recent study that we're doing now, we're showing people pieces of information, and we ask them simply to read that information and just to tell us how much they like it. That's all. Then, outside of the scanner, we ask them to rate each piece of information on dimensions that we think are important for how people value information. So, for example, how useful it is, in terms of: can I use this information somehow to direct my actions? But also things that are related to what we call intrinsic reward value, for example, how beautiful, how meaningful it is. And then, while you're reading that piece of information, we're looking at signals in the brain that track the value of information along these two dimensions independently. So, what we call instrumental utility, can this drive my action, is it useful, versus: is it beautiful, is it meaningful?
03:57
And we're interested in seeing not only how these values are represented, but whether they modulate activity in other parts of the brain. For example, maybe they're modulating activity in the language system. So which parts of the brain do you see being involved in these kinds of processes, and what sort of factors influence them? For example, I might find a piece of information useful, but Caswell might not, or find it beautiful, but Caswell might not. What influences those differences? So we are usually interested in the reward system in the brain; that's a system that we almost always look at. The reward system in the brain is quite broad as well. It includes some frontal regions like the orbitofrontal cortex, and subcortical regions such as the striatum, which is thought to be, you know, some people talk about it as a reward center of sorts, and all
05:00
the way down to the midbrain. So looking at, for example,
05:04
areas called the VTA and SN which are thought to be rich in dopamine, so it goes all the way from the midbrain all the way to the frontal cortex. So these are some of the regions within the reward system that we're interested in.
05:18
And the second question is, what will drive your evaluation. So it is usually a combination of your past experience, right? If your past experience, obviously, is different from caswells, that will influence the type of things that you find useful or beautiful or interesting, right? One example is that we have in one of our theory paper. And in fact, this theory paper I wrote together with Cass Sunstein, where we talked about how people value information and decide what to seek. And one example that we gave is
05:52
a sentence saying that dogs are related to wolves; it was something along those lines, right?
06:00
And I find that not so interesting, because I don't have a dog, and I'm not so interested in dogs. But for Cass, this is really interesting. He has dogs, he grew up with dogs; it's something that he thinks about a lot. So there is a clear differentiation here, based on what I see in my world on a daily basis versus what he sees in his world on a daily basis. Are there things that are common to all humans? Are there things that we can agree on that have, you know, pretty much equal utility to people, or are equally beautiful? Or is that just not a thing? I don't think it's equal. It's very hard to find something where you can say, oh, for every human being it will be equal. But there are certainly things that
06:48
you know, a lot of people, most people, will find beautiful, or a lot of people, most people, will find useful. But take even things like knowing whether the stock market is going to go up or down. That's pretty useful, and I think most people will say it's pretty useful. However, if you have absolutely no money to put in the stock market and no way to gain from that, then of course you will find it not useful.
Caswell Barry 07:13
Less useful, that is, than someone who has a little bit of money, or a way to gain money, to put in the stock market. And so these two different dimensions, utility and beautifulness, if that's a word: are they mapped onto different brain regions, or are they represented in the same area? So we call the two things intrinsic and extrinsic. The idea of intrinsic value is that it's not necessarily something you evaluate by how it can change things external to yourself; it's more intrinsic, right? And that will include things like how meaningful it is, how beautiful, how interesting, maybe how it relates to the self, versus extrinsic. So yeah, that is a question that we're trying to answer; that's exactly one of the questions we're trying to look at. Are these things separate? If they are separate, how and where are they encoded? And then how do they integrate to drive your decisions in, you know, what information to seek out? If I might stick in a quick machine learning perspective on this: the concept of intrinsic reward is really exciting to the machine learning community, because reinforcement learning, which drives a lot of how we think animals behave, but also how you might fine-tune a large language model or something, relies upon having these objective rewards, and they just don't really exist in the real world, you know, except when the mouse gets the cheese. Sure, there's some reward. But most of the actions we conduct in the real world don't have an immediate external reward. And so the ability to give things intrinsic value as you go along, so you can seek them, is kind of really interesting and really exciting. And I guess, I don't know, I assume it's built in at a low level into brains, just to make them function. I'm not sure there's a question there, but I'm curious what you think. Yeah.
And in fact, to go back to your question about whether people agree when it comes to pieces of information: there is quite nice agreement, actually, in our sample, on what pieces of information people find useful and what pieces of information they find intrinsically rewarding. And so, given that, we think, and we haven't tested this yet, that we could easily train an LLM to distinguish between these things. In fact, probably, and we haven't tested this, if we give something like ChatGPT a piece of information and ask it to rate it on these dimensions, we believe it will give ratings that are consistent with human ratings. We haven't tried that. Now, I haven't told you this, but we do it not only for this type of language information. We've also done this for things
10:00
like art. So, art: it turns out that humans can tell you which pieces of art are useful to them. And a piece could be useful because, oh, I can put it somewhere in my home where it will look good, or it reminds me to go out and jog, right? So it's useful in that way. Or it could be, oh, it's meaningful, and so on. It turns out that for art we don't see agreement across individuals. So,
10:30
yeah, absolutely no agreement. And so for this kind of thing, if there is no agreement, you can't really train an AI to distinguish between these things,
Caswell Barry 10:43
that's really interesting. Could I ask about accuracy of information in that case? Because I guess we can hear a piece of information and decide whether it's useful to us or not, but there's also an element of judging whether that information is true. Does your work touch on this? Do we take into account a judgment of accuracy when we decide something is useful or not?
11:14
So, first of all, just in the theoretical realm of it,
11:20
our theory is that whether a piece of information or a belief is useful can be associated with accuracy, but doesn't have to be. Why? For example, there are beliefs that will gain you some rewards. In some states, if you believe in God, you're more likely to get a job; if you believe in God, you're more likely to get social support. In fact, that's true in many places. And so that belief is useful regardless of whether it is accurate or not. And there are many beliefs like that, in fact. So the way that we think about why people hold certain beliefs is that people hold certain beliefs based on whether they will gain them the most rewards and avoid losses, right? And those rewards and losses can be external to oneself: will I get social support if I believe that? Will I be more healthy? If I believe vaccines are safe, I'm more likely to get vaccinated, and more likely to stay healthy. So that kind of belief will actually reward me in the health domain.
12:27
A belief can also be rewarding for internal reasons. If I think I'm going to get that job, that puts me in a positive mood, gives me confidence, right? So on its own, regardless of whether I get the job or not, that's already rewarding internally; it's good for my mental health. Now, on top of that, when I go to the interview, because I believe I'm going to get it, I might be more likely to get it, because I exude confidence, right? So beliefs will have high or low utility sometimes regardless of their accuracy; sometimes there is a relationship between the two, and sometimes there isn't. And so, to answer your question, a belief can be very useful even if it's not accurate. Now, that creates somewhat of a problem in some domains, because it also means that I'm more likely to believe some things because of these rewarding aspects, despite the fact that they're false. I may then share these false beliefs with others, and so now we have misinformation spread, which can then lead to all sorts of other problems: polarization,
Speaker 1 13:36
violence, racism. Would I be right in thinking this might be one of the links to one of your latest books? I haven't read it, but I'm guessing from the title, maybe.
13:46
Or is that not true? I think it's everywhere, yeah. This question of belief, why we believe what we believe, whether it's true or not, and how that matters, is actually a thread in almost all of my work and in all of my books, right? So the first book is The Optimism Bias. The optimism bias is all about how we have these positive beliefs that are not necessarily true, and what the consequences of that are, the positive and negative consequences. The second one is called The Influential Mind. It actually has the most to say about
14:24
how to try to communicate information in the way that's more likely to be believed.
14:30
And again, why do people believe things that are not true? Things like confirmation bias: being more likely to seek out and believe information that confirms what I already believe
14:41
than information that goes against it, right? Which creates a problem: how do we talk to people, given that we know they have a confirmation bias? And then, Look Again.
14:52
So Look Again is about habituation, our tendency to respond less and less to things that are repeated. And we
15:00
do have one chapter about misinformation, and the reason we think the overall phenomenon of habituation is related to misinformation is this phenomenon known as the illusory truth effect. The illusory truth effect is the effect by which, if you hear something more than once, you're more likely to believe it. This is a super strong effect. Any time you try it, in any kind of way, you always get it; the effect is very, very strong. People believe things they hear once less than things they hear twice.
15:37
And one of the reasons is that if you hear something for the first time, your brain does a lot of processing. So if I tell you a shrimp's gut is in its head, your brain goes, whoa: imagine the gut in the head, imagine the last time you ate shrimp, right? It does a lot of processing. Now, the second time that you hear a shrimp's gut is in its head, much less processing is going on, and the third time even less. And when there's less processing going on, less of a surprise signal, we tend to assume things are right. Unconsciously, we don't stop to think or to interrogate it; we just assume it's correct. And so this can be a real problem on social media, because the likelihood of us hearing something that is incorrect multiple times is huge, because misinformation is shared so much, and we don't necessarily remember where we heard it from, right? We don't necessarily remember the source: is it reliable, is it not reliable?
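The mechanism described here, progressively less processing on each repetition with the low surprise signal read as truth, can be captured in a toy model. This is purely an illustration of the idea; the decay rate and the mapping from processing fluency to believability are made-up assumptions, not parameters from any study discussed in this episode.

```python
def perceived_truth(n_exposures, decay=0.5, base=0.5):
    """Toy illusory-truth model: the surprise signal shrinks with each
    repetition, and lower surprise (higher fluency) is read as truth.
    `decay` and the fluency-to-truth mapping are illustrative guesses."""
    surprise = decay ** (n_exposures - 1)   # full processing on first hearing
    fluency = 1 - surprise                  # less processing on each repeat
    return base + (1 - base) * fluency      # believability in [base, 1)

# A claim feels more true each time it is repeated, whether or not it is.
ratings = [perceived_truth(n) for n in (1, 2, 3, 5)]
print(ratings)  # rises toward 1.0 with repetition
```

In this sketch a statement heard once sits at a neutral baseline, and repetition alone pushes the rating up regardless of the statement's actual truth, which is exactly why repetition-heavy feeds are a problem.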
16:33
So yeah, that's, I guess, the answer to the question of whether it's related to the last book. That is
Caswell Barry 16:41
the connection there. Yeah. Interestingly, when you were speaking, this reminded me of something that I heard was true, at least in previous iterations of ChatGPT: that it's also subject to this illusory truth effect, that you can induce it to believe things by repeating them multiple times. So apparently it doesn't just share our feelings about intrinsic value, but also our failures in judging truth in a fair and unbiased fashion. Whatever that tells you, I don't know. So, we actually do have work now looking at
17:21
AI in general, and the bias that AI gets from humans and the potential feedback loop,
17:31
and the idea that AI reflects human bias, all the way from biases related to gender and race. But in fact there are many studies showing that if you go on ChatGPT, it has an optimism bias, right? That's been shown by Eric Schulz. Or the bat-and-ball problem, right? That's been shown by Stefano Palminteri,
17:58
so almost any bias that humans have, you can see it in AI, and in different AIs; it doesn't have to be an LLM, other AIs too have these biases, if they are trained on human data that has a bias, right? And that happens a lot,
Caswell Barry 18:14
and that's been well known. But in new work by my postdoc, Moshe Glickman, what he is showing is that, okay, you get human data, which is biased, and you feed that into the AI. The AI then not only reflects the bias of the human, but in fact amplifies it, because it uses the bias as a signal, right? So it amplifies this bias. Now we let humans interact with this AI, and the humans become more biased, because they're interacting with an AI which has a larger bias. And he's shown that with simple biases like perceptual biases, with emotion perception biases, and with social biases as well. That's really fascinating. I wanted to come back to this concept of habituation and ask: is there an advantage to us, a reason why this might exist? But also, are there certain groups of people, or perhaps certain conditions, where this happens more or less? What influences our susceptibility to, I can't pronounce it, I'm sorry, illusory truth?
19:23
Yeah, thank you.
19:25
Okay, so habituation is much broader than the illusory truth effect, so maybe I'll talk about habituation first. Habituation is a very fundamental aspect of biology, right? It just means that neurons respond less and less to stimuli that are repeated or constant. You see that in absolutely every animal, all the way down to unicellular organisms, and you can see it in almost every neuron in the brain and every neural system.
20:00
All the way from perception. In one of the visual illusions that we talk about, and you can actually see it on the inside cover of the book, there are these blobs of color, red, green, blue, yellow, with a little fixation point in the middle. And if you keep your eyes fixed on the fixation point, within about 30 seconds the colors become gray, if you're not moving your eyes, and if you're really good at it, it will in fact become white, so it disappears completely. And the reason is that the same exact neurons are getting the same exact color input, and so after a while they stop responding, right? And we call it habituation. There's a bit of a question of whether you call it neural adaptation or habituation and what the difference is, and I know scientists are listening, but I'm just going to use one overall term for this,
20:52
and so that's perception, but it's also true of emotion. You see something scary: a dog is barking. The first time you see the dog barking, you're really scared; the second time, less so; the third time, less so. That's emotional habituation, right? And so it really impacts every system. Olfaction is a well-known example: you go into a room full of smoke, and studies show that within 20 minutes you can't detect the smoke.
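The pattern described here, a response that shrinks with each repetition of the same stimulus but returns at full strength for a novel one, is easy to sketch as a toy model. This is a minimal illustration only; the starting response and decay factor are arbitrary assumptions, not measured values.

```python
def habituating_responses(stimuli, r0=1.0, decay=0.6):
    """Toy habituation: each repeat of a stimulus evokes a weaker
    response; a novel stimulus evokes the full response again.
    `r0` and `decay` are arbitrary illustrative values."""
    seen = {}
    out = []
    for s in stimuli:
        seen[s] = seen.get(s, 0) + 1
        out.append(r0 * decay ** (seen[s] - 1))
    return out

# The barking dog startles less each time; smoke, being new, gets a
# full response even though the barks have already habituated.
print(habituating_responses(["bark", "bark", "bark", "smoke"]))
```

The same shrinking-response rule, applied per stimulus, covers both the barking dog and the smoke-filled room described in the conversation.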
21:18
So that is indeed in every human, every animal, every species. Regarding the illusory truth effect, that's also something you see in almost every individual. It has been tested extensively; there are so many studies on the illusory truth effect, and it's been tested on every population you can think of. The one population that doesn't show it is people with Alzheimer's.
21:44
So I think that's because they don't remember what you said, and they're probably not encoding it.
21:50
So there you don't see it.
Caswell Barry 21:53
That's really interesting. Is it people who are in sort of full-blown Alzheimer's, or is it something that comes on earlier? I don't remember; the study is by Jason Mitchell from Harvard University, and I'm just trying to think how the experiment worked. But, aside from humans and LLMs, which I guess are trained by humans in a very real sense, well, they're trained on human output, do these phenomena exist in other animals? I'm just trying to think how I could create an experiment that would induce an illusory truth effect in dogs, since we were talking about dogs earlier. I guess I could keep showing them things. Maybe it doesn't exist; maybe it is a uniquely language-based concept. What do you think?
22:40
Yeah, it's a good question. I mean, all the studies I can think of are language-based; it's always a piece of information and the idea of what's true and not. I mean, do animals have an idea of what's true and not? It's a really interesting question. I don't know. I mean, they do show a lot of other biases, right? So, for example, the optimism bias has been shown in almost every species,
23:17
and there are other biases that have been shown as well; loss aversion, I think, has been shown. So yeah, we'd have to think about it a little bit, because first you'd have to think about how you show it without language, which you could probably figure out relatively easily, and then in a way that matters to the animal, which would be tricky. Maybe that's a challenge for you as well.
Caswell Barry 23:46
We'll do it. We'll show it in rats. That'll be fantastic. So, I mean, I'm pretty sure I remember you telling me at some point that the optimism bias is useful, right, useful up to some point, at which point it becomes pathological. I guess these other biases, I presume, also have some general utility; there's a reason we do them. Or are they all just terrible failures of the human brain? No, absolutely not. For every single one, it's relatively easy to imagine why it's useful.
24:23
So take cognitive biases like the illusory truth effect: you know, most of the stuff that you hear is true. Nowadays we're concentrating on all these things that people disagree about and all these false beliefs that people have, but they are a small fraction of what we actually believe, right? We have a pretty good model of the world; most of our beliefs are true, and most of the things that we hear are true. And so it makes perfect sense that we have evolved such that if we hear something more than once, you know, from multiple sources of information, we think it's probably true. Totally rational. Same thing with confirmation bias. Most of the things that you believe are true, so it makes sense that if someone says something different, you would be a little bit suspicious, because most of your beliefs are true. But of course it's not always the case, and it is those interesting exceptions that we are focusing on. And of course nowadays, with technology and social media, and social media actually amplifying these biases, right, and kind of using them to get people to be more engaged, we start thinking, oh, humans are unsophisticated creatures, they have all these false beliefs. When in fact, mostly we don't. And I'm not saying the false beliefs that we have are not problematic; they are hugely problematic, right? Not believing that climate change is human-made can destroy us. So it's not that it's not important to think about. But it makes sense that we have all of these biases,
26:06
and for some biases, I can give the example of the optimism bias. The optimism bias is our tendency to overestimate our likelihood of experiencing positive events, such as having talented kids, getting promotions, getting a grant, and to underestimate the likelihood of negative events: having a manuscript rejected, right, getting divorced.
26:30
And you can ask, well, is that a good thing or a bad thing? The common view for many decades, I would say, was that yes, it's probably adaptive, because, for example, it enhances your motivation. If you think you're going to get that grant, if you think you're going to get the paper published, you put more effort into it, right? If you think, well, I'm not going to get the grant, you don't even try. So it kind of becomes a self-fulfilling prophecy, right? It also maintains your well-being and, you know, your happiness: if I think there are positive things for me in the future, that's good for my well-being, and as a consequence also good for your physical health. On the negative side, however, you might underestimate your risk, or maybe you're going to put a lot of time into a grant that there's no way you can get,
27:19
or not wear a helmet when you bike, and so on. Most scholars, including Daniel Dennett, who recently passed away, saw it this way; he actually called it a design feature, right? He said there's a reason for it, because of all these benefits. But since then there's been quite interesting computational work, in fact now more than a decade old, showing that this is only true in certain environments. In a relatively safe environment, like you and I sitting today in our office, yes, the benefits will outweigh the negative parts. However, if you are in a very unsafe place, lots of danger, lots of threat, then potentially this is not going to get us to a good place, right? If there are lions coming your way, if there are rockets all around, an optimism bias is probably not the best thing. So what we have actually found since, and this is quite interesting, is that the optimism bias, and the processes that lead to the optimism bias, can be turned on and off based on the environment. If you're in a threatening environment, there's a stress reaction, and the stress reaction leads to a different way of processing information that makes the optimism bias less likely, which is beneficial in these threatening environments. But if you're relatively calm, stress is low, then you're processing information in an asymmetric way, taking in the positive more than the negative, creating an optimism bias, which is beneficial in these
Caswell Barry 28:50
environments. So our brain is, I think, even more sophisticated than we think. It's not just that we have a bias that is adaptive on average; we have a bias that can come and go in response to the environment, in a way that's adaptive. That's fascinating. In fact, that reminds me of something, possibly something I read in The Optimism Bias, or maybe something you told me a long time ago, that I'm going to suggest we turn on its head. The thing you may have told me is that, in general, people are optimistic, but people who are depressed aren't pessimistic; they're just closer to the truth. Is it possible, based on what you were just saying, that those people are just stuck in a cycle? That you could understand depression as the normal processes for winding your optimism bias up or down based on your environment having gone slightly awry, so that you've permanently wound it down to the baseline level, and you're now just estimating things at chance? Yeah, so first of all, it is the case that people with mild depression don't have an optimism bias, but
30:00
it's not that they're necessarily more accurate. That means they can make a lot of mistakes, but in both directions: they could be overly optimistic sometimes, overly pessimistic other times, so overall it just balances out. So it's not necessarily that they're more accurate. Having a bias means that your mistakes are systematic; you can be very inaccurate and simply have mistakes that are not systematic. So those are people with mild depression. People with severe depression, who are usually, though not necessarily, hospitalized, do show a pessimistic bias: they actually are more likely to believe that negative events will happen than the data in front of
30:43
them warrants. So what you're saying is: people with depression have an unrealistic view of their environment. They see their environment as more threatening, which increases the stress reaction, or in general they are more sensitive to stressful events and have high stress, and that then leads them to process information in a way that makes them hypervigilant to the negative, and therefore creates this pessimistic view.
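The asymmetric updating described a moment ago, weighting good news more than bad news when calm but updating symmetrically under stress, can be sketched as a simple learning rule. This is an illustrative sketch only: the learning rates, and the convention that a lower risk estimate counts as good news, are assumptions made for the example, not values from the studies discussed.

```python
def update_risk_belief(estimate, evidence, stressed=False,
                       lr_good=0.6, lr_bad=0.2):
    """Toy asymmetric belief update for a risk estimate (in %).
    Evidence below the current estimate is 'good news' and, when calm,
    is weighted more heavily than bad news, producing optimistic drift.
    Under stress the asymmetry is switched off. All rates are made up."""
    if stressed:
        lr_good = lr_bad = 0.4          # symmetric updating under threat
    error = evidence - estimate
    lr = lr_good if error < 0 else lr_bad
    return estimate + lr * error

# Calm: good news moves the belief more than equally sized bad news.
print(update_risk_belief(40, 30))                  # shifts down by 6
print(update_risk_belief(40, 50))                  # shifts up by only 2
# Stressed: the same news moves the belief symmetrically.
print(update_risk_belief(40, 30, stressed=True))   # shifts down by 4
print(update_risk_belief(40, 50, stressed=True))   # shifts up by 4
```

Run repeatedly in a calm state, this rule drifts beliefs below the evidence (an optimism bias); under stress the drift disappears, matching the on/off switching described above.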
31:07
Yeah, that's exactly the idea. Of course, we don't know the direction of cause and effect. It could be that they process information taking in more of the negative, which creates a pessimistic view, which then causes stress. And probably the truth is both; it's sort of a feedback loop, right? I wanted to come back to this idea again of our
Caswell Barry 31:32
tendency to believe things if we hear them more than once. You said earlier that social media is a really big challenge in that scenario, because of the misinformation that is out there. What do you think needs to change? How can we regulate that? Is it a case of trying to regulate the platforms, or actually helping people to process that information differently?
32:00
Yeah, so I am more on the side of regulation,
32:07
so I think it's the platforms that need to change. There are ways to change, and at the moment they don't have enough incentives to make those changes, so there has to be pressure from the government to change the platforms, to put pressure on the companies,
32:26
you know, through regulation, monetary incentives
32:31
and punishments. And the reason I think that's the way to do it, rather than, I mean, I'm not against educating people.
32:40
You know, there's nothing bad in that. The problem is, I don't think it's really going to be a solution of the magnitude that we're interested in. The reason is, first, you're going to have to get people to be willing to be educated, right?
32:58
And who are those people? Well, you can do it in schools, because you have the students there, so you can teach them; sure, that covers children. But what about all the adults, the people who are not in school? And if you think about who would be willing to spend time getting trained and educated, it's probably not the people we're really worried about spreading misinformation. The second problem is that the people and companies who are putting out misinformation change their strategies all the time. So it's a bit of a cat and mouse thing: you educate people that this is the way they usually do it, and by the time you get that message out, they've changed their strategies. And AI is developing really fast for all of this, with deepfakes and so on.
33:55
So that's a problem. And
33:58
as I said, on one hand, it's great to educate people. The problem I see is that the institutions who have money to put into this problem,
34:11
understandably, use the solution that seems the easiest but not necessarily the most effective. The easiest thing to do is to say, let's put it all on the consumer, on the user, right? We'll put all our money into these education programs, because that's what we can control.
34:32
And so that's what they're doing, and by doing that, they kind of pat themselves on the back and say, okay, we've done something. But in fact, there is very little evidence that this is even working, extremely little evidence that it has any impact. And I think it's holding back the other solution, which is to put it on the platforms: have them
35:00
Make changes, and also put in money to find out what changes will work on the platform side, even small changes. So we have one study where we show that if you add trust and distrust buttons to social media platforms, people use trust and distrust to distinguish between true and false information more than they use, for example, like and dislike. And once people are on a platform where they realize they can get carrots in the form of trust and sticks in the form of distrust, they're more likely to post true information and less likely to post misinformation. And finally, once they're on this platform, their beliefs become more accurate. Now, we did this in our simulated platforms. Will it work on Twitter and Facebook? We don't know, because in order to try that out we need the cooperation of Twitter and Facebook, which we don't have. But these are relatively simple things that could be tested, right? And in our platforms, it reduces misinformation by half. I'm sure that's not going to happen in the real world, but if we don't try it out, we don't even know what impact it will have. Because any impact, even if it's 1%, is huge, given it's such a global problem. Sorry, that's my little rant. No, it's huge, and it feels relevant and urgent, particularly in this mammoth year of elections that we have this year; it feels like it needs to happen right now. So yeah, it's good to hear your thoughts on that, and I'm sure Elon Musk is one of our regular listeners, so
Caswell Barry 36:40
get in touch with Tali to sort it out. I mean, Tali, Selina went through your biography at the beginning of this, and it seems you're living the dream: you're working on both sides of the Atlantic, you're a practicing scientist, you're writing really exciting books for public consumption, you're doing TED Talks. How on earth did you end up in this position, doing so much across so broad a swathe of things? I guess, you know, I was doing my usual thing. I did my PhD at NYU, then I came to do a postdoc at UCL,
37:17
and I had a paper about the optimism bias that was published just at the end of my PhD.
37:26
And I then had an agent
37:30
email me and ask if I was interested in writing a book about this, about the optimism bias. I certainly did not think about writing a book at all, and definitely not at that early stage. I don't think I had written anything apart from scientific papers; I never wrote a story or anything like that in my life. But he said, well, let's try it out and see what happens. And so I said okay. I think that's probably the key.
38:00
If something sounds like it might be interesting and might take me in different directions, I usually say, let's try it out.
38:09
So then I wrote a proposal, and it turns out I actually really enjoyed writing in this
38:15
kind of way. You know, it's about science, but it's much more free. When you write an article, you have to obsess about every single word and every single sentence. When you're writing a book, you can speculate. You can say, well, this is what I think, this is what I believe. You know
38:31
a lot of things. It's not just that you're interpreting a piece of data; you are much more free to express your opinion based on everything that you've read and everything that you know, and to tell stories, which is fun. So it's fine to tell an anecdote right alongside what you know from your studies. Yeah, so that went well.
38:56
You know, the book was sold
38:59
to a publisher. So then I had to write it.
39:04
And so I wrote the book while I was doing a postdoc, but I did it on weekends and on holidays. I didn't have a family at the time, so that was possible to do. And that then led to everything else: the book was published, it was on the cover of Time Magazine, which then led to the TED people being aware of it. So I did the TED Talk in 2012
39:34
and from there, it just kind of
Caswell Barry 39:37
grew. That's amazing. That's not what I expected. I just assumed that you'd always planned it, that it was always the strategy to do this public-facing thing. But it was happenstance? Yeah, yeah, no, absolutely not. And I wasn't sure whether to go ahead with the book, so I asked four people,
40:00
And I think two said yes, and two said no.
40:05
I now only remember three of them, but two of them were people who wrote books, different kinds of books, you know, both academic books and more of this kind of popular science book.
40:18
Yeah, so it wasn't even that I straight away said yes. You know, did I have time? I was starting my postdoc, so I was thinking, well, can I do all of this?
40:28
But turns out, you have to try.
Caswell Barry 40:31
I think it's so inspiring. A lot of our listeners are undergrads or people thinking about taking their first steps in science, and I think for them it's so valuable to hear that you can take a chance on things, like you say, on something that might lead you somewhere interesting. And it seems it's just opened up so many avenues for you, to a variety of roles and experiences. So I think that's really cool for our listeners to hear. I wonder if I could take a step even further back, to ask what made you interested in this area of research in the first place? What was your motivation for the subject area that you chose? Yeah, so I actually did my PhD on emotion and memory
41:22
and
41:24
that was, well, I was always interested in memory, but I think you know, when I was doing my undergrad,
41:31
decision making was not even an area of research in neuroscience. You were just starting to hear about Kahneman and Tversky.
41:44
But the idea of taking that and combining it with neuroscience, that was not something that anyone did. So within the neuroscience courses you didn't really hear about it, and even in psychology there wasn't a decision-making course.
42:00
And for some reason, I found memory the most interesting out of what I was studying. So I went to NYU, and there Liz Phelps was the only one who was doing memory in humans, and she was studying emotional memory. So that's how I started my research on emotional memory,
42:21
and I was specifically interested in traumatic memories. And after I did my PhD with her, I went to do a really short, well, it wasn't supposed to be short, but ended up being short, postdoc at Harvard.
42:36
And when I got there, they were really interested in how the way we remember the past is similar to the way we imagine the future, right? It was kind of the beginning of thinking, oh, it's the same system: the hippocampus is also involved in imagining future events. I found that really interesting, and I said, oh, it would be interesting to study how emotion impacts this imagination of future events.
43:04
And in fact, that was also sort of chance. I was sitting in the fMRI room, just helping another postdoc. I had no idea what her study was; she just asked me to sit with her, because you usually need two people in the room. So I did. I was just sitting with her, doing my own things, and then I looked at the screen, and I could see that she had words like "future" and, I don't know, "apple", "past".
43:30
And
43:32
this was Rose Addis, who was working with Schacter on what is now quite a seminal paper about how the same system in the brain that's important for memory is also important for future thinking. And when I saw that, I remember thinking, wow, this is so amazing and so interesting. That's when I thought, oh, let's see how this relates to emotion. So that's when I started my first study. The first study was supposed to have people imagine negative and traumatic events happening to them. But when I started the study, it turns out people don't like that. They don't like to imagine negative and traumatic events happening in the future. What they did is they took these negative events and kind of resolved them and made them positive. So I'd say, imagine breaking up with your partner, and they would say, I broke up with my girlfriend, and then I found a better one.
44:31
Or even, you know, I also had a neutral condition, because I was going to compare to a neutral event. But even for the neutral events, which were like, imagine getting a haircut, which I thought was very neutral, someone said, I imagined going to my favorite hairdresser's, and I was donating my hair to Locks of Love, and then we went celebrating with my friends. Those are a few examples, but it was very common for people to start imagining these magnificent things in the future. And now it seems a bit ridiculous, but
45:00
it is true that I was trying to fight this. I was saying, no, you can't do this, because the whole point is imagining these negative events, so I was trying to give them different prompts. I did this for a couple of months, until I realized, in fact, this is super interesting: what's leading to it? That led me to find this thing about the optimism bias, right? And from that point on, I really became interested in all this behavioral economics,
Caswell Barry 45:27
and that led to neuroeconomics and all of that. So that was kind of the segue. That's amazing. I feel very optimistic for the future based on what you've just said. Indeed, maybe I should turn that into a question. So where do you see the future from here, for you? Is it more of these two sort of parallel streams, or is there something new on the horizon?
45:53
But they're not really parallel. They are very much intertwined.
46:01
And in fact, you know, a lot of my ideas about new research come, for example, from giving talks. I especially remember once a lay person asking a question, right? Of course, it also happens in academic talks. And having to write books means that you learn so much, because you have to read not only about your specific
46:24
research area but very broadly. So you think about new ideas, and then you write about what you're doing. So they're very much intertwined with each other.
46:36
Yeah. So I definitely like it much better
46:41
than just focusing on academia, because it's just more interesting. You kind of get to talk to a lot of different people and learn about a lot of different industries.
Caswell Barry 46:51
And I get bored quite easily, so it's good for me to have this kind of way of living. I can identify with that. I think that's where this podcast might have come from, for me, Steve and Selina.
47:05
Well, thank you so much, Tali, that's been fascinating. We're almost out of time, so we need to wrap up, but we always finish by asking all of our guests the same question, which is: can you tell us your favorite fact about the brain? I think one of them is the idea that we don't see what's in front of us, right? What we're perceiving is different from objective reality. And that's true not only with vision and seeing, but the reason I find it really interesting in vision is that you can show it to people,
47:41
right? You can actually do demonstrations. So this is not necessarily a demonstration, but first of all, you know, we have this blind spot in our field of view, and what our brain does is predict what's in that blind spot. So you're actually seeing, at all times, something that's not necessarily there, but what your brain believes is likely to be there.
48:03
And then you have all of these really fun visual illusions. This is a podcast, so it's hard to
48:12
demonstrate, but take the Thatcher illusion; go look it up. The Thatcher illusion is really interesting.
48:19
And what is great about it is that you show it to people, and they think, well, okay, that totally makes sense, I'm perceiving something. And then you kind of move the picture to show them that what they're perceiving is not what's actually there, and the brain goes, whoa, wow, that's crazy.
48:39
And that is true for a lot of different things, right? Our decision making, our emotions: these are all interpretations of what is in front of us. So I like it a lot, because it gives you humility, right? What you are so sure is true
Caswell Barry 49:02
is only partially true. That's great, I really like that. Well, that was just a fascinating discussion. Thank you, Tali Sharot, for joining us on this episode of Brain Stories, and to our audience out there, we'll see you next time. Thank you guys, this was fun. We'd like to thank Patrick Robinson and UCL Digital Education for editing and mixing, and the UCL Neuroscience Domain for funding the podcast. Follow us on X, formerly known as Twitter, at UCL Brain Stories for updates and information about forthcoming episodes.