Corey and Steve talk with Joe Cesario about his recent work showing that, contrary to many activist claims and media reports, there is no widespread racial bias in police shootings.
Steve Hsu is Professor of Theoretical Physics and Computational Mathematics, Science, and Engineering at Michigan State University. Join him for wide-ranging conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Corey: Hello and welcome to Manifold, where we look at today’s scientific issues from varying points of view. I’m your host, Corey Washington. My co-host, Steve Hsu, is sitting a little way away across the desk because he is infectious.
Steve: I’m a little under the weather. I’m going to be probably a little bit quiet during this episode, but I will serve as the audience ombudsman.
Corey: Our guest today is Joe Cesario. Joe, welcome to Manifold.
Joe: Thanks for having me, I appreciate it.
Corey: Can you tell us a little bit about yourself?
Joe: Sure. I’m associate professor of psychology here at MSU. Basic research interests are automatic behaviors, automatic cognition, fast decisions under uncertainty, how they go right, how they go wrong, where systemic bias might play into or might not play into those kinds of decisions.
Corey: Today we’re going to talk to Joe about police decision-making and racial bias in deadly force decisions. Joe is the author of a recent paper on the topic called “Is There Evidence of Racial Disparity in Police Use of Deadly Force?” which I expect will spark some controversy as it becomes more well known. Joe, though your office is only about a quarter mile away, I have to say I only heard about your paper because a friend of mine from North Carolina texted me the link, and I was really interested in your findings. We’ll put a link to the paper up on the episode’s website, and we’ll also talk about a few of your other, as-yet-unpublished papers, which we’ll link as they come out. I should add that you’re also a critic of your field, which I assume makes life a little uncomfortable at conferences.
Joe: Yeah sometimes, yeah.
Corey: You’re a critic of the reliability of the results, often, and of how they’re applied outside the field. You’re the editor of a journal too — is this one of the topics you’ve raised in the journal you edit?
Joe: The journal that I edit, which is Comprehensive Results in Social Psychology, is the first preregistration-only journal in the field, and so that’s focused more on publication practices and publication writeup rather than questions about applicability or external validity. Those come up, but principally what we’re concerned with is the publication end of things and how that’s impacted the problems that we’ve had in social psychology.
Steve: Joe I hate to interrupt, but on behalf of the audience, perhaps you could define what preregistration is for experimental psychology.
Joe: Absolutely. So for our journal, we’re actually a peer-reviewed preregistration journal, so it’s even slightly more complicated than just explaining preregistration. But basically, in standard publication practice in many areas of science, particularly in the social sciences, you do some research, you find your results, you write them up, and you submit that final write-up to a journal; it gets reviewed by peers in the field, and then maybe it gets published or maybe it gets rejected. In peer-reviewed preregistration, what happens is that you submit a proposal for what you are going to do. It looks just like a regular research report would, except there are no results. Instead you’re saying: here’s what I’m going to do, here’s the number of subjects I’m going to collect, here are the analyses I’m going to run, and then we’ll see what happens when I do that. That proposal then goes out for peer review.
Steve: Right.
Joe: And so…
Steve: And for our audience’s sake, what this design is meant to combat is a kind of hacking of results in which you actually are testing a whole bunch of things, maybe there are ten different things that are kind of going on in the topic that you’re exploring, and you wait until you get one of those ten things to reach a p-value or statistical significance which looks good, and then you submit just that result out of the ten possible things maybe that you are measuring as part of your procedure. And then of course that’s sort of cherry-picking, right, it looks like it’s statistically significant, but the reviewer doesn’t really know what else you are up to during the experiment, and so this preregistration of the design helps prevent that kind of p-hacking or statistical hacking.
Joe: Yeah, what people call researcher degrees of freedom, right. That’s one element of it, but there are a couple others also. One is the publication bias in science in terms of publishing only positive results, right, and so this prevents that from happening because reviewers have to evaluate the quality of the work prior to the results being known.
Steve: Right.
Joe: So that’s one thing. The other thing — I try to at least, there sometimes I try to be optimistic — even though I am critical of our field and often pessimistic — I try to be optimistic about it or frame it positively. The other thing that it does, I think, is produce better research, so you’re getting feedback from other experts in the field before you do the study, right, which is exactly when you want that kind of evaluation and that kind of information. So I think actually the quality of what comes out of that is much higher in the dynamic between the author and the reviewer. It turns out to be, when you do it that way, really constructive and positive, and people have actually a really good experience doing that.
Corey: There’s a basic statistical principle at work here: you accept a result as significant, and perhaps publishable, if the odds of it occurring given the null hypothesis — the hypothesis that nothing interesting is going on — are roughly 1 in 20. So if you run enough tests on enough different hypotheses, about 1 in every 20 will come out significant by chance. And if you just do that consistently, you can publish papers on almost any topic.
Joe: You can have whole careers.
Steve: Yeah, even if the null is always true.
Joe: The null is always true, exactly.
Steve: You can still publish a bunch of papers.
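The multiple-testing problem the hosts are describing is easy to demonstrate with a short simulation (illustrative only; not from the episode). Run a two-sample test on pure noise many times, and roughly 1 in 20 comparisons comes out “significant” at p < 0.05 even though the null is always true:

```python
import math
import random

random.seed(0)

def null_p_value(n=30):
    """Two samples drawn from the SAME distribution, so the null is true
    by construction; returns an approximate two-sided p-value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    # two-sided p-value from the normal CDF (a z-approximation; fine here)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 10_000
false_positives = sum(null_p_value() < 0.05 for _ in range(trials))
# close to 0.05: about 1 in 20 tests "works" on pure noise
print(false_positives / trials)
```

Repeat this across enough hypotheses and cherry-pick the hits, and you can, as Joe puts it, have a whole career on a true null.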
Corey: And that’s what this is designed to prevent. So let’s talk about disparities in police shootings. This issue of police shootings of black men burst onto the national stage with the shooting of Michael Brown in Ferguson, Missouri in 2014. Brown’s death led to months of protests, and it was a catalyst for the creation of the Black Lives Matter movement. It led Colin Kaepernick, one of my favorite NFL players, and Eric Reid to begin kneeling in protest during the national anthem at NFL games, and Donald Trump to attack them and other NFL players, with consequences that are still reverberating throughout our society. A basic assumption behind the protests and the BLM movement is that there are racial disparities in police shootings and other uses of force, but these views are not universally accepted. So in your recent paper you examined data on fatal police shootings at the national level. What did you find regarding the role of race in the shooting of black and white people in the US?
Joe: Sure. So the basic question is whether there exists evidence that police officers are racially biased in their decision to shoot — the classic quote, “do officers have one trigger finger for whites and another trigger finger for blacks,” is really the meat of the issue we’re trying to get at. The debate concerns how exactly you measure that and how you answer that question. The standard way of doing it is to compare the proportion at which some group is shot relative to their proportion in the overall population. So black citizens represent about 13% of the citizens in America, but they are roughly 35% of those shot by the police — they’re over-represented given their population proportion. You do that, you compare it to a different race, and you can compute some odds ratios, and you find, for instance, that black citizens are two-and-a-half times more likely to be shot than white citizens, given the proportions of blacks and whites in the US. Our argument — and we’re not the first to make this argument, although I think we have some unique takes on the data — is that the population proportion is actually the wrong comparison standard, what you would call the wrong benchmark. In fact, I would argue that the population proportion is the wrong benchmark in virtually every study of group outcomes you might be interested in: women in STEM, racial or ethnic group outcomes in income and health, whatever it might be. Comparing to a group’s population share is almost never going to tell you anything about bias in the decision-making process. For police shootings, the argument is that what you really want to know is whether black or white citizens are more likely to be shot given their exposure to the police — that that’s actually the correct benchmark, rather than a group’s population proportion.
When you do that, which is what we did by indexing exposure through a range of different crime statistics, we find no evidence of anti-black bias in police shootings at all.
Corey: So step us through this argument. You think that proportion of the population is the wrong metric to use when trying to calibrate whether there’s a bias in police shootings. So for example, you think the 13% — the fact that blacks make up 13% of the population — is not the relevant number. So tell me exactly why you think exposure to the police is the actual benchmark, and what kind of data did you use to get it?
Joe: Sure. To the first question: comparing a rate to the population proportion basically carries with it one of two assumptions — one of two things has to be true for that to be a meaningful measure. Either, in the case of police shootings, police are randomly shooting people on the streets — if they were simply firing their guns at random, you would expect people to get shot at roughly the rate at which they exist in the population, and we know that’s not true. Or, if police are more likely to use deadly force in some situations rather than others, it has to be the case that different racial groups are represented in those situations evenly, relative to their population proportions, if you’re going to use the population proportion as a metric. If not, then the population proportion becomes irrelevant. An example, removed from the policing context: imagine you’re looking at some new cancer treatment, the most effective treatment available, and you find that among all those who get that treatment, 13% are black Americans. Well, that looks great — unless you find out that among all cancer patients, blacks represent 50%. Then you would clearly conclude that blacks are being underserved with respect to that treatment, because the relevant value is the proportion of cancer patients — you don’t get cancer treatment unless you have cancer. And the same applies to the fatal police shooting argument.
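Joe’s cancer-treatment example can be worked through numerically (the figures are his hypotheticals, not real data). Against the population benchmark the treatment looks equitable; against the correct benchmark — the pool of cancer patients — the same group is badly underserved:

```python
def rate_ratio(group_share_of_outcome, group_benchmark_share,
               other_share_of_outcome, other_benchmark_share):
    """Relative rate of the outcome for one group vs. another,
    given a chosen benchmark (denominator) for each group."""
    return ((group_share_of_outcome / group_benchmark_share) /
            (other_share_of_outcome / other_benchmark_share))

# Hypothetical numbers from the example: blacks are 13% of treatment
# recipients and 13% of the population, but 50% of cancer patients
# (whites: 87% of recipients, 87% of the population, 50% of patients).
vs_population = rate_ratio(0.13, 0.13, 0.87, 0.87)  # 1.0 -> looks equitable
vs_patients   = rate_ratio(0.13, 0.50, 0.87, 0.50)  # ~0.15 -> badly underserved
print(vs_population, vs_patients)
```

Same outcome data, opposite conclusions — the entire difference is the choice of denominator, which is the benchmarking point the paper makes.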
Corey: So I take it your argument — you haven’t quite stated it — your assumption, or your inference from the data, is that black citizens have more encounters with the police, disproportionate to their representation in the population, and that is why it’s wrong to benchmark on the 13% figure.
Joe: That’s right. So the benchmarks that we use are crime rates, and it is the case that crime rates differ across racial and ethnic groups in the United States. When you benchmark across those crime rates, then what you find is, you don’t find evidence of anti-black disparities, right.
Corey: Now your analysis is actually similar to other analyses that have come out in the past. Roland Fryer had an analysis, I think it was in 2016 or 2015, that looked at data from New Orleans. Can you tell us a little bit about what he did, how it compares to what you did, and what his conclusions were?
Joe: Sure. One thing that was a little bit different across our approaches: he used New Orleans as one of the cities, but he actually had a number of different cities in the dataset he analyzed, and he also looked at non-lethal uses of force. So he had a smaller scope in terms of regional data, but a larger scope in terms of the kinds of policing behaviors he was looking at — both lethal and non-lethal force. He did something similar in a lot of the cases, which was to use various benchmarks — different neighborhood characteristics, precinct characteristics, citizen characteristics — to ask: once you control for those, do you find evidence of anti-black bias in policing? What we did that’s a little different, as you introduced it, is first of all look at national-level data — so a larger dataset in terms of the number of fatal police shootings — but only at lethal force, not non-lethal uses of force. That’s one difference. The other important thing about the two approaches is that we tried to get at some measures of crime beyond just police department reports of crime to the federal government, because, as I’m sure you know, one really important and common criticism is that policing data are biased to begin with: if police are simply stopping black citizens more often, arresting black citizens more often than their actual rates of crime, then the data we have are in some sense contaminated by that initial police bias. So what we did was look at some datasets that don’t have police bias in them, to see whether we still find the same effects.
Corey: So you’ve foreseen one of my lines of pushback against your argument. I think there’s a fairly large body of evidence suggesting that aspects of the criminal justice system are biased against black Americans. You know, I moved here from New York City, and there’s a lot of data about the stop-and-frisk policy that was in force in New York when I was there. In New York, about 54% of the stops were of black people, whereas black people made up 23% of the population. Once stopped, blacks were more often frisked and subject to violence. They were stopped more often for furtive movements but were less likely to have guns. And I’m giving data in part from Jennifer Eberhardt’s book: she analyzed over 28,000 stops in 2013-14 in Oakland, and blacks were more likely to be searched, handcuffed, and arrested. You can go on for traffic stops and other kinds of events. So I’d like to hear your attitude towards these kinds of figures for other engagements with the police, and how or why you think shootings are different, if you do.
Joe: Okay, so one thing about the stop data: you’re right that there are large racial disparities in stops. That’s true also for non-lethal uses of force — there are large relative disparities in use of the baton or use of the taser and so on. But a couple of things happen when you look at the data in more detail. One is that when you look at the absolute numbers, a lot of these racial disparities are not quite what they seem compared to the relative comparisons or odds ratios. The other is that once you adjust for the relevant variables, those odds ratios often drop really dramatically. In Fryer’s data, for example — he analyzed stop-and-frisk data, and there have been other reviews of police-stop data — he finds that when you adjust for all the various neighborhood characteristics and precinct characteristics and so on, those odds ratios drop down to 1.2, even sometimes 1.1, in terms of blacks being more likely to be stopped relative to whites. So what often happens is that once you control for the variables that we argue are really important, the odds are much less dramatic than they first appear.
Corey: But is there still bias?
Joe: There is, yeah. In some of our other data we actually find the same thing Fryer finds, which is that it’s in lethal force data that we don’t get anti-black bias. In non-lethal data — we have some data from the city of Cincinnati, for instance — you still do find anti-black bias, but the actual bias is quite small once you control for the relevant variables. One interesting example: the Center for Policing Equity, which is not in any way a pro-policing organization, has data on use of non-lethal force like taser use, and you find an odds ratio of about 1.5 — black citizens are about one and a half times as likely to be tased during an arrest relative to white citizens. But if you look at the actual numbers, that works out to about 6 black citizens tasered per 1,000 arrests versus about 4 white citizens per 1,000 arrests. Those are strikingly different impressions: odds ratios and percentages — “50% more likely” — sound much more dramatic than the absolute numbers do. So it’s not the case that police are totally unbiased; even Fryer’s work showed that a purely unbiased decision-maker couldn’t produce the data he had. It’s just that those levels of bias are often really misunderstood, even at the non-lethal force level.
Corey: So I want to hop into the role of audience ombudsman for a minute. Can you explain what you mean by odds ratio?
Joe: Sure. Stated simply, with an odds ratio you’re comparing the likelihood that some event happens for one group relative to another group. You can code these however you want, but in most of these calculations an odds ratio above 1 would mean that black citizens are more likely to have the outcome, let’s say, than white citizens. So when you say there’s an odds ratio of 2.5 for blacks to be fatally shot by the police relative to whites, what we’re saying is that, given the population proportions of black and white citizens in the United States, that outcome is two and a half times as likely for black citizens as for white citizens. And that calculation always depends on what the benchmark is, which is the point that we try to make.
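As a numerical sketch of the definition (using the approximate taser figures Joe quoted a moment ago — roughly 6 vs. 4 per 1,000 arrests — so the counts are illustrative):

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds of the outcome for group A divided by the odds for group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# ~6 black citizens tased per 1,000 arrests vs. ~4 white citizens per 1,000
or_taser = odds_ratio(6, 1000, 4, 1000)
risk_ratio = (6 / 1000) / (4 / 1000)
print(round(or_taser, 2), round(risk_ratio, 2))  # ~1.5 either way
```

The relative measure (about 1.5×) and the absolute gap (2 per 1,000 arrests) describe the same data; the point in the conversation is how different they sound.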
Steve: So Joe, just to make sure I understand what you’re saying. In Fryer’s work, for example, if you use non-racial categories — characteristics of the neighborhood, or other exogenous variables that are not race — and you try to control for those effects, then what’s left over, which is interpreted perhaps as racial bias, is there and perhaps significant, but it’s not as large as people think. Is that a fair characterization?
Joe: That’s a fair characterization. I would say in terms of the details, one of the things that he’s doing is something more similar to, let’s take a regression and throw all those things in there and see what the coefficient for race is.
Steve: Yeah.
Joe: So it’s a reasonable interpretation.
Steve: Right, yeah. But getting back to Corey’s question about your normalizing rates of police lethal force to other things — like, say, the probability that you’re stopped given that you’re black — I think a lot of people might say: couldn’t there just be an overall level of bias, so that all of those numbers are inflated roughly equally? And then when you take ratios of them, yes, you find the rate of lethal-force application is not disproportionate relative to the chance that you’re stopped given that you’re black — but that chance was also elevated because of racial bias. So it could be that there’s some uniform level of racial discrimination going on, and this method of computing its size just conspires to get rid of it.
Joe: Yeah, so there are two responses I would make to push back on that, to say that’s not what’s going on here. One is that two of the sources of data we use have no policing bias in them at all — and this is an advance over what others have done, like Fryer’s analysis. One of those is the National Crime Victimization Survey, which is a nationally representative self-report survey of Americans on their crime victimization. They report whether they were victimized in the last year, along with whatever characteristics they can recall, such as the race of the person who attacked them — the offending individual. There’s no policing bias in there, because it’s coming straight from the victims themselves, and when you use that benchmark you get the same effects. We don’t find anti-black bias even in those cases. If it had to do with a general inflation of policing data, we would see bias there.
Steve: Right. So just to make sure I understand, so if you look at witness accounts…
Joe: That’s right.
Steve: … of who committed the crime against them when they have some sense of the race of the person, there may be an inflation relative to the population fraction for certain groups. And then if you use that as the metric in the denominator, then you find no inflation of application of lethal force relative to that representation.
Joe: That’s right. And the other variable that we use, the other way of indexing that also then is, we use the CDC death-by-assault data, okay. So the CDC death-by-assault data are also uncontaminated by this kind of policing bias that you talk about. We can estimate, it’s a non-perfect estimate of crime…
Corey: Death by assault, how would you… Explain what you mean, do you mean they identified the perpetrator already?
Joe: No, it’s the race of the victim. Someone is killed, and the CDC has data about how they died. One of the categories is death by assault, so you can get the number of black Americans and the number of white Americans — and you can break that down even to the county level — who died by assault in a given year. What we do then is make the inference that, because most deaths by assault are within-race — it depends on what kind of death you’re looking at, but something like 90% are within race, not cross-race — that gives us an imperfect but still useful index of homicide rates. Again, you do that and we find the same thing: we still don’t see anti-black bias. The other thing that’s important here, Steve, with respect to your question: if stop data are inflated, and the worry is that that in turn inflates rates of fatal shootings for black citizens, I think that actually reflects one of the problems with this whole field, which is a misunderstanding of the nature of fatal police shootings. It isn’t the case that most fatal police shootings begin with that kind of stop — with a stop-and-frisk, or a traffic stop, or something like that. I think there’s a misperception in the public that most police shootings are of that nature. It’s a very small percentage.
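The benchmarking logic Joe describes can be sketched with hypothetical counts (illustrative numbers, not the paper’s actual data): the same shooting counts can show a large disparity against a population benchmark and none at all against an exposure proxy such as the CDC death-by-assault counts:

```python
def benchmarked_ratio(shot_a, exposure_a, shot_b, exposure_b):
    """Relative likelihood of being fatally shot per unit of the
    chosen benchmark (population share, or a crime/exposure proxy)."""
    return (shot_a / exposure_a) / (shot_b / exposure_b)

# Hypothetical illustration only -- NOT the paper's actual counts.
# Population benchmark: 13% vs. 62% of the US population.
pop_ratio = benchmarked_ratio(250, 0.13, 500, 0.62)       # ~2.4 "anti-black disparity"
# Exposure proxy (e.g., deaths by assault): suppose the proxy shows
# the same 1:2 relative exposure as the shooting counts themselves.
proxy_ratio = benchmarked_ratio(250, 9_000, 500, 18_000)  # 1.0: disparity disappears
print(round(pop_ratio, 2), proxy_ratio)
```

Nothing about the shootings changed between the two lines; only the denominator did — which is why the choice of benchmark, not the raw counts, drives the conclusion.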
Corey: Do you happen to know what percentage of shootings occur by simple stops where someone is unarmed?
Joe: It’s certainly under… Well, okay, you’ve asked two different questions. One is a simple stop and one is when they’re unarmed. Those aren’t the same.
Corey: Let’s do both those questions in sequence.
Joe: Okay, so let’s start with the initiation of the stop, or the initiation of the police contact — whether it was initiated by, say, a 911 call response, or whether it was a discretionary initiation on the part of the police officer. That’s one way to break those down. I can try to find the exact numbers; my recollection is that it’s 85% — certainly above 80% — non-discretionary stops. So the vast bulk of police shootings begin not in discretionary policing, not in the kind of stop-and-frisk.
Corey: You mean someone’s called 911.
Joe: That’s right.
Corey: Or that they see some kind of criminal activity ongoing.
Joe: Yes, exactly. If you ask about unarmed, the data are actually even more striking. It’s a really small minority of cases where an individual — and here we could break it down in different ways also — is unarmed and not currently aggressing against the officer in some way. We have to be a little careful here, because you can be unarmed and still be a deadly threat to an officer. There are cases, for instance — relatively speaking, in these datasets — where someone is kneeling on top of an officer, pounding their head into the ground, and the officer fears they’re going to lose consciousness. If we look at the number of cases per year where someone is not currently physically aggressing against an officer and is unarmed when they’re shot, then as a really, really liberal maximum, that’s probably something like 50 cases a year.
Steve: Yeah, I seem to recall someone claiming, that sounded about right to me, that your chances of being stopped in a discretionary action by a police officer and then shot while unarmed was less than your chance of being hit by lightning or something.
Joe: Yeah, that’s almost certainly the case, because if you add that on also, where they’re stopped in a discretionary…
Steve: Yeah, I mean the nightmare scenario, I think the emotional reaction people have from these few, these relatively rare news stories about which everybody becomes aware of, is the idea that I was minding my own business, maybe I was speeding or something, I get pulled over, and then through no fault of my own I end up shot by a police officer. But I think that’s like similar to being hit by lightning or something.
Joe: Right. And one thing I will say — not to play devil’s advocate against my own work — but one thing that’s important is that even though those are rare events, I think they have a really serious, outsized impact on police-community relations.
Corey: Quite rightly.
Joe: Yeah, quite rightly — no question at all about the importance of those events for those subsequent effects — but you’re right that the likelihood of them is extremely low. Recently I was interested in this, so I calculated what an interesting estimate of the error rate of police officers might be, and I said: let’s be as critical as possible toward the police, making the error rate look as bad as possible. As an absolute minimum, there are about 50 million police-citizen encounters a year — and that counts individual persons, so in terms of total person-police encounters it’s probably three or four times that in reality — but let’s say a minimum of 50 million encounters a year, and a maximum of, say, 50 errors that police make. By an error we mean the person is unarmed, they’re not doing anything, they’re not aggressing against the officer, and they end up being fatally shot — cases where we would clearly say the officer has made an error. That translates to about a 0.0001% error rate.
Steve: It’s like one in a million, it sounds like.
Joe: It’s an incredibly small error rate for what is an exceptionally dangerous and difficult decision that officers have to make. It’s probably actually, you know, half or even less than that error rate if we are being more generous to the police.
Steve: I mean, totally negligible compared to the chance that you’re just killed by another driver who’s drunk, or you screw up yourself and got killed, right.
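Joe’s back-of-the-envelope bound above is simple arithmetic, reproduced here with the numbers as he stated them (a deliberately low floor on encounters and a deliberately high ceiling on clear errors):

```python
# Joe's conservative error-rate bound, as stated in the episode:
encounters_per_year = 50_000_000  # deliberately low floor on police-citizen encounters
clear_errors_per_year = 50        # deliberately high ceiling on unambiguous wrongful shootings

error_rate = clear_errors_per_year / encounters_per_year
print(error_rate)           # 1e-06, i.e. one in a million
print(f"{error_rate:.4%}")  # 0.0001%
```

If the encounter count is really 3-4× higher, as Joe suggests, the bound shrinks proportionally.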
Corey: So let me raise three more potential arguments against your analyses — questions one can raise about them.
Joe: Yeah.
Corey: If you go through your data, you actually get a fairly wide range of odds ratios for the relative likelihood that someone is going to be killed by the police. I think in your NIBRS data you have whites being four times as likely as blacks, but in other data blacks are certainly more likely… I mean, a factor of 4 is not nothing, so why do you think the data vary so much — particularly the likelihood that someone of one race will be shot relative to someone of another race?
Joe: Yeah, so there are actually two sources of variation in those odds ratios. One is, if you look within a certain kind of shooting — so let’s just take all shootings, okay, so every fatal shooting that happens in the US, all right — you’re right that when you benchmark across different kinds of crime rates, those odds ratios are going to change around a little bit. It’s not obvious — I mean I know the statistical reason for why that happens — but it’s not obvious to me why those move around so much. It basically just is a difference in terms of what the relative rates of crime from those different data sets might be.
Corey: But what if there’s a potential error in those data sets, right, based on a sampling error?
Joe: It could be, but if you look… Yes, but it’s not clear that that actually changes the conclusion. When you look at all fatal shootings — which across a two-year period, for black and white citizens, is about 1,500 shootings — in no case are there significant anti-black disparities in fatal police shootings. The difference is just how much anti-white disparity there is: is there evidence of some, or is there evidence of equality in the outcome? But the other source of variation is what happens when you move from one kind of shooting to another. Once you move out of all fatal shootings and ask about things like the likelihood of being shot while unarmed, or the likelihood of an object you take out of your pocket being mistaken for a gun when it’s a harmless object like a wallet or a phone, the data just become really uncertain — the estimates are very, very uncertain — and those estimates may range from really strong anti-black disparity to very strong anti-white disparity; we just don’t know. So that is one source of serious uncertainty in the data. Once you start to drill down to those smaller categories of shootings, there are just so few of them that it’s not clear what the actual answer is.
Corey: Hopefully later on in our discussion we’ll have a chance to talk about Jennifer Eberhardt’s study, where she looked at the ability of subjects to identify an object as dangerous or not dangerous based on how clear it was, and she found disparities, with people more likely to identify the object as dangerous in the hands of a black person.
Steve: So it occurs to me that these kinds of simple calculations — where you look up one rate, perhaps distinguished by race, and another rate, and take the ratio — are kind of the obvious thing: if you’re quantitatively oriented and you start hearing stories about police shootings, you might do a few Google searches and calculate a few of these things. That seems totally innocuous to me, although one should be very, very careful about the inferences one draws from them. However, I get the feeling that in certain quarters, if you just show those numbers and compute those ratios — and maybe even say something a little bit tentative about the conclusion — you’re going to get crushed; you’re going to get a very emotional, angry response for daring to say something counter to the mainstream narrative on this. Is that your experience?
Corey: Have you been threatened since this paper came out, Joe?
Joe: [laughs] The response has been really unusual, in the sense that the work itself has been picked up by a lot of what you might call more politically conservative quarters and commentators and so on, and so there the response…
Steve: Is that because only they can understand the math, or is it because… [Joe laughs] is it because the other side understands the math but they don’t want to talk about what you want to talk about?
Joe: The latter I think, I assume it’s the latter. So you know, a lot of those responses have actually been very positive toward the work and people have shared it and liked it quite a lot, so…
Steve: What about in a departmental colloquium?
Joe: Well, our department is maybe somewhat unique in being a relatively objective group of individuals who have found the data to be convincing and have thought it to be very interesting — which isn’t to say that they haven’t asked hard questions about the data, but just that they’ve accepted the conclusions based on what the data are. That’s not going to be true for all quarters, certainly, but…
Steve: So at least in your experience, science works here in the academy.
Joe: Science works in some parts of the academy.
Steve: [laughs] Okay. Maybe Corey’s going to get to this later, I don’t want to derail the conversation.
Corey: Just a couple more points I want to get at. Your data is at the national level, so honestly, you probably can’t make inferences — or can you? I don’t want to put words in your mouth — can you make inferences about potential bias by local police units?
Joe: Sure yeah, so…
Corey: You’re from Chicago, that’s a fairly corrupt place.
Joe: Right.
Corey: So I want to hear about that.
Joe: [laughs] Well, what is important is that there is variation, actually quite a lot of variation, from county to county or from department to department, so that is an important thing. In this work, though, we’re looking just at the national level. That said, we do have other research where we look at county-to-county variation and county-to-county data, and ask whether there’s evidence of meaningful anti-black bias once we drill down to the county level. This was actually part of a bigger project that one of my collaborators, David Johnson, spearheaded, where what we were originally interested in was officer race. One thing that’s not well known is anything about the officers themselves, so we spent two years contacting all 700-some police departments that had a fatal shooting in 2015, asking them for data about all the officers who were involved in those shootings — officer race, officer sex, and officer years of experience. Through an iterative process of more and more involved requests, and then a team of undergraduates who were very good at internet sleuthing, we ended up getting about 85% of the officer data from all shootings in 2015, and so we could ask whether the race of the officer predicted the race of the citizen shot. But one of the things we were able to do there, because we had county-level information about where each shooting had taken place, was to look at county-level crime rates also, and there we found really quite remarkable specificity of county-level crime. So it turns out that the county-level white crime rate — and this is in terms of violent crimes — predicts the likelihood that a white citizen will be shot in that county, but not the likelihood that a black or Hispanic citizen will be shot.
The county-level black crime rate predicts the likelihood that a black citizen will be shot but not a white or Hispanic citizen, and the county-level Hispanic crime rate predicts the likelihood that a Hispanic citizen will be shot but not a black or white citizen. So we actually find really race-specific influences of crime on the likelihood that a citizen will be shot — which is not to say that there isn’t variation from place to place or department to department, but even when you go down to that local level, it does seem to hold that crime rates are a key variable here.
Corey: And that’s for every county you looked at? There were no counties that exhibited over-representation of blacks among people shot relative to crime rates?
Joe: No, there’s going to be variation there, yeah. So it is the case that some counties will show that kind of over-representation, yeah. I mean, it’s not a perfect measure, but overall we don’t see that disparity.
Corey: So the last critical pushback question. Your data is from 2015–2016, which is post-Michael Brown. Is it possible that there was an effect of that shooting on the police mindset, one that led officers to be more careful about when they use force than they had been prior to the shooting?
Joe: Yeah, it’s a great question. It’s really hard to know… First of all, what we know is at least from 2015 there’s been virtually no change in police shootings. So if there was an initial effect, that effect hasn’t moved at all over the last three years, so the numbers are almost identical across that period. Prior to that — the problem is, and the reason why we start with 2015, it was only in 2015 that we had any kind of complete data on fatal police shootings. So that year was the year that The Washington Post and The Guardian began to collect data on fatal police shootings in the US. Before then, all we had was the police departments’ voluntary reports to the federal government about who they had shot, okay. It turns out that once other non-police organizations began collecting data, that the police departments were under-reporting by about half of the number of citizens that they shot. So it’s really hard to do pre- and post-2015 comparisons in fatal police shootings for that reason. What we can say is that the importance of local crime rates had been shown prior to 2015, but there’s just no similar comparison where we can really get that kind of nice data. We just don’t know.
Steve: Joe, are you familiar with something called the Ferguson effect?
Joe: Yes.
Steve: Have you studied that?
Joe: Actually I have an undergraduate SROP student who began that project, looking at the Ferguson effect. The Ferguson effect, for listeners who don’t know — there are actually three or four different versions of it, but the broad strokes are that following the shooting of Michael Brown in 2014, police began to pull back and do less discretionary policing, and that pullback on the part of police led to an increase in crime, in particular violent crime. So that’s the general idea that’s been proposed. A few people have tried to test it, but the tests have looked either at one city pre- and post-2014, or at a couple of cities using only 2015 data. What we’re trying to do is to get, for the 50 largest police departments in the US, data over time from 2010 to 2018, okay, in terms of both crime rates and some discretionary policing measures. It turns out that’s really hard to do for a lot of reasons. So far we have about half of those cities — about 25 cities that we have data for — but there are a few problems. One is that different agencies report data in different ways: they break down the data in different ways, they categorize it in different ways, so you’re trying to figure out where you can get comparable crime-rate data across all of these years for these top cities. The other difficulty is actually finding discretionary policing data. So far we’ve been using the Police Data Initiative and the Stanford Policing Project, both of which collect discretionary policing data in terms of things like traffic stops by the police and so on. As I said, we’ve got about half the data right now. What it looks like so far is that there is variation in whether discretionary policing changed pre- and post-2014. Among the cities where there has been a decrease in discretionary policing, about 80% also saw increases in crime, okay.
So there does seem to be at least some tentative evidence that the Ferguson effect may be something real, but again, we’re only about halfway through trying to, you know, being able to get all those data.
Steve: Is Baltimore in your data set?
Joe: Baltimore is in our data set, yes.
Steve: Because I think, at least what I’ve read is that there was a particularly strong effect in Baltimore and it wasn’t necessarily Ferguson, it was actually some incidents that actually happened in Baltimore.
Joe: That’s right.
Corey: So I want to put forward an analogy and hear your reaction to it. Fatal police shootings are a little bit like the death penalty, administered in a kind of instantaneous way. You’re arguing that there’s no racial disparity in police shootings, presumably because police approach this decision somewhat differently than they approach other uses of force. Have you ever heard the term “death is different” in death-penalty jurisprudence?
Joe: No.
Corey: It refers to the idea that there are far more extensive procedural protections for defendants in death penalty cases: you have multiple layers of appeal, you’ve got bifurcated hearings… A lot of these came in after the Supreme Court’s moratorium on the death penalty in the early to mid ’70s. The general principle behind it is that, because the penalty is so severe, you’ve got to take extraordinary measures, and these measures will hopefully reduce both unfairness and — although this isn’t written in, because it’s a substantive consequence rather than a procedural one — potential errors. But we know in fact there’s pretty clear evidence of racial bias in the death penalty, especially as regards the race of the victim — if you kill someone white, you’re much more likely to get the death penalty than if you kill somebody black — but also some evidence of the race of the defendant having an effect. So I’m interested in the contrast: although the courts take these extensive measures to reduce bias through really rigorous review in death penalty cases, they haven’t managed to do it. Yet your suggestion is that in police shootings, somehow police, in the instantaneous decision to shoot, have managed to eliminate bias. What are your thoughts on that?
Joe: Sure. Let me be a little cautious and say it’s not clear that they have eliminated bias completely. It’s certainly going to be the case that there are some officers at some points in time who are exhibiting bias, okay. The question is, is there widespread systematic bias in that decision, okay. And I think one answer for why there doesn’t appear to be that kind of widespread systematic bias is that the use of lethal force is just qualitatively different from other kinds of use of force. People talk about the use-of-force continuum, where you might yell at a citizen, you might put your hands on them, you might use a baton, you might use a taser, and so on, and then the end of that continuum is the use of your service weapon. But that framing isn’t quite correct because, although fatally shooting someone is obviously more severe than using a taser, fatal force serves a different purpose, okay — there’s simply a different purpose that fatal force is supposed to meet, and therefore a different threshold for using it, than any other kind of non-lethal force. All the other kinds of non-lethal force are about compliance, okay; they’re there to compel the citizen to comply with the officer’s request — right or wrong, whether they should be doing that is a separate issue — but that’s the stated goal of those measures. So you’re putting your hands on the citizen, for instance, when they’re not complying with what you as the officer think is the request they should be complying with. That’s not why the gun comes out. The gun comes out for a very different purpose, which is to stop an imminent threat, okay, to one’s own life or the life of another citizen. So there’s simply a different kind of decision rule that operates at that stage versus other stages of use of force.
And so, whatever kinds of bias might creep in on the non-lethal force end — for instance, there’s evidence showing that you might interpret non-compliance on the part of a black citizen versus a white citizen as disrespect for yourself or for authority, which might compel you to move through that use-of-force continuum faster — those just aren’t the variables at play when you’re making the decision to use deadly force. Really strong factors, like does the person have a gun, are they firing the gun at someone, are the factors that impact that decision, and those may simply overwhelm whatever other kinds of bias might be present. So it doesn’t even have to be the case that officers have somehow eliminated bias in their thinking. It just has to be that there are other, more significant factors that overwhelm whatever amount of bias might have been creeping into that decision.
Corey: What would you say to someone like Colin Kaepernick, or to someone who’s protesting bias in the police use of force? If you were to tell them where they should be spending their energy to try to reduce fatal shootings of black men, what would your proposal be?
Joe: Well, if you take a strictly data-driven approach, the number one way to reduce fatal shootings of anybody — black, white, or Hispanic citizens, the three major categories of individuals who get shot — is a reduction in crime, okay. Not being involved in criminal activity is far and away the best way to not be shot by the police. Again, that doesn’t mean officers are perfect, that they’re making perfect decisions 100% of the time, but that is the major factor that would reduce fatal police shootings. There are very few things that would reduce it more than that in terms of sheer number of people shot, given the nature of fatal police shootings. So that would be number one, with all the difficult problems of achieving it. Number two would be something like investment in de-escalation training, which would probably also reduce fatal police shootings — not nearly as much as the first thing, but maybe to some degree. There might be cases where a scenario unfolds — even where a person is posing an imminent threat to others — in which other techniques might reduce shootings. But again, number one, far and away, would simply be criminal activity.
Corey: So you conduct not just what I like to call epidemiological research — where you mine data and look at ratios of police shootings relative to other factors that affect policing — but you also carry out experimental work, where you bring police officers into your laboratory. Actually, I think your laboratory is a traveling laboratory?
Joe: That’s right. Usually we bring it to them, because it’s hard for them to travel to us and easier for us to travel to them.
Corey: So your goal in these experiments is to begin to actually experimentally manipulate the factors that might affect police shooting, both to determine whether there’s racial bias and to probe different types of biases that might creep in. In discussions of these kinds of studies, people often use the term implicit bias, and I’d like if you could start off by telling us what implicit bias is.
Joe: Yeah, that depends on who you ask. At any point in time there are usually three or four different definitions of implicit bias floating around. I would say something that captures the essence of it is that implicit bias is supposed to be a kind of bias that people have that is either uncontrollable, or that they’re in some way unconscious of or unaware of. Those are two really different features — consciousness versus controllability of cognitive processes — but they tend to be the two things people bring up when they talk about implicit bias. One function of the implicit bias argument is to say, look, if you look over the decades, explicit prejudice has by almost every measure possible gone down, okay, and egalitarian attitudes have gone up, at least in the United States, but we still see that some groups get unequal outcomes, whether in criminal justice or health or whatever. Implicit bias has entered in a lot of ways as an explanation for those disparities. So people have said, it’s not that the police are explicitly biased or that they have explicit levels of prejudice — in other words, if you ask them, do you like black citizens and white citizens, they’ll report that in fact they like them to the same degree; that would be explicit prejudice — but instead there’s this kind of implicit bias that they just can’t control even if they wanted to, okay. And again, another version is that they might simply be unaware that they have that bias. So for whatever reason, the kinds of associations they might have — a standard argument would be that officers, just like everybody else, have some associations between, let’s say, black men and weapons, or black men and violence — are going to impact the decisions that they make in an uncontrollable way.
Let’s say they might be more likely to think that that wallet in the hands of a black citizen is a gun because of those implicit associations.
Steve: Is the issue that it’s uncontrollable or that it’s subconscious, that they’re not knowingly doing it?
Joe: Those are two different things, and again, depending on who you ask, they’re going to emphasize one or the other. And that even depends on, you know… For example, the Implicit Association Test — which is probably the most widely used implicit test, it may be the most widely used personality or psychology test in existence at this point —
Corey: Stop and explain what this test is, and the fact that anyone can find it on the Harvard website.
Joe: Sure. So you could go to Project Implicit and take any number of different kinds of implicit association tests that you’d like online, and basically what the test is doing is asking you to categorize different words when they’re paired with different categories. For instance, if you wanted to know a person’s implicit associations between, let’s say, women and math versus men and math, you might categorize math-related words when those words are paired with the category female or the category male, and it turns out that it’s harder for you to categorize words as math when they’re paired with female relative to male, okay. So that’s something like the implicit association test. Measures like that are used to support the idea that there is implicit bias in people, because if you ask people, look, can women do math, they’re going to say yes to you in an explicit way, but they still show that it’s hard — that they have these associations between men and math that are latent in their minds. So Steve, for your question, a task like the implicit association task is a controllability task, okay. What’s actually interesting about that task is that it’s really hard not to show that bias, even if you’re trying — even if you are really consciously trying to control your response and categorize the math word correctly when it’s paired with female, you just can’t do it. So that’s a controllability element, but there’s nothing unconscious about it. You have this really vivid subjective experience of the difficulty of categorizing it. So in that kind of case it’s all about controllability, not about conscious or unconscious. It just depends on who you’re asking as to what the key element is.
Steve: One of the striking aspects of the data revealed in the lawsuit against Harvard over Asian-American admissions was that the personality rating given to the Asian-American applicants by the Harvard admissions staff — who had never met them typically — was the worst of all groups, whereas the alumni interviewers who actually met these particular applicants rated them much more highly relative to the other groups. And so one could wonder whether that’s a kind of implicit bias, or maybe they just had to meet a certain quota.
Joe: So that would be the key difference in the implicit versus explicit bias argument, which is, if it’s the case that they have to meet some quota and so they are knowingly giving personality ratings that are lower for Asian applicants — which I think many of the analyses suggest they may actually have been doing that, right — if they’re doing that, then that’s not going to count as implicit bias. The implicit bias argument is that they don’t even know that the race of the applicant is causing them to downgrade their personality rating. That would be a consciousness element, where they’re just unaware of that.
Steve: Given the very progressive political dispositions of most of these admissions staff, I would imagine it’s got to be a combination of the two things going on here, actually.
Joe: Okay. In the policing case, the argument is about controllability, and so it’s that, even if the officer wants to make a perfect decision about what’s in the person’s hand, they’re not going to be able to control the influence of race on that decision.
Steve: Right.
Corey: So let’s turn to your experimental design and your simulator. Can you tell us first, what does a standard simulator study look like, and then how does your simulator differ from that.
Joe: Sure. So the standard way that social psychologists would study this decision in the lab is to bring people into the lab — in fact they almost never even use actual police officers, they use undergraduate, you know, introductory psychology students for this — sit them down at a computer with a keyboard in front of them, where one of the buttons might be labeled “shoot” and another “don’t shoot,” and show them pictures on the computer screen, each with a black or white individual who is holding either a gun or a harmless object like a phone or a coffee mug. The task for the participant is just to push the button on the keyboard that says “shoot” or “don’t shoot,” okay. That’s it. That’s how social psychologists have generally studied this — I would say upwards of 99% of the data in social psychology come from those kinds of tasks, okay. And you might go through hundreds of those trials, with a picture popping up every couple seconds and you trying to make that decision on the keyboard as quickly as possible. What we do instead, in the immersive simulator that we built, is different: we have a 12-foot by 8-foot projection screen, and videos are projected onto that screen. Officers are to interact with the citizens in those videos as they might normally in any kind of policing scenario, and the scenarios vary, as I’ll get to.
One important thing is the variation in the kinds of scenarios that we have: the officers might be called for a domestic dispute, they might pull over someone for swerving, there might be a report of a suspicious person breaking into a building. In any of those scenarios, the officer is going to, as best they can, interact with the citizen in the video — in psychology we call them targets, but I keep getting corrected by the police that citizen is the better word to use than target [laughter]. In their holster they have a handgun, which is a real handgun — we use a Glock 19 — modified so that it shoots an infrared laser out of the barrel instead of a bullet, and hooked up to a compressed-air cartridge. So it sounds like a real gun, it recoils like a real gun, it feels as if it’s a real gun, and in fact most of the officers we work with are impressed because it actually feels like they’re making that decision, given the handgun they’re using. So they’re going to watch these videos, and at some point in the videos we usually have a kind of threat moment where the citizen’s hands, for instance, go out of view — they might put their hands behind their back, or in a traffic pullover scene the citizen might reach back into their car after getting out of it, okay. So we have these threat moments, where the officer then has to make some decision, because the person’s hands will usually come out either with a gun, with some other harmless object, or with nothing at all, and the officer makes the decision to shoot by drawing the handgun from the holster and firing.
Corey: And why do you think it’s so important to have all these realistic features to your setup, compared to the other…
Joe: Yeah, so the reason why — and this was actually our main motivation — is that the features that officers report mattering in the actual decision to use deadly force are totally absent from the standard computer-screen, keyboard-button task, okay. Officers will report that the reason why they’re at the scene — the dispatch information, for instance — plays a critical role in how they approach that scene, okay. Do they know the person, have they had interactions with the person, what are the person’s hands doing, what’s the neighborhood like that they’re in — all of these are things that officers will say matter in making the decision to shoot. All of those are stripped away from the standard experimental task, and so our argument has been that race bias, which is very robust in at least untrained individuals in that standard computer task, is really quite meaningless, because you’ve taken out all of the really forceful elements of that decision to shoot. So we wanted to put those back in, in part to ask whether race bias was still there, but also to get some estimate of how much those elements really impact officers’ decisions — say, pulling over someone for speeding versus serving a warrant for armed robbery, do officers behave differently under those two conditions?
Corey: So before we get to your findings, I want to explore a little bit how police officers feel about working in this situation, because I think that probably accounts for their willingness to participate, and it may in fact shape the kind of officers who take part in your studies — just to flag a potential confound. First of all, how do officers feel about the experiments? Are they generally positive about taking part, and do some of them say no?
Joe: Yeah, so I would say our experience has been really quite positive with the officers that we’ve worked with, once they have — in my impression, anyway — some belief that you aren’t out to make them look bad, okay. Not that you’re out to make them look good, but just that you’re a trustworthy, honest person in that collaboration. Once you get over that and you have that kind of trust, we’ve found that officers are actually very positive, very willing to participate in our research, and they have positive things to say about the simulator experience. Now granted, that’s not all officers, right. Some officers just won’t do it one way or the other; they refuse. Some officers go through it and will report to us afterwards, you know, I really hate these kinds of simulators because I can’t do what I would normally do in the real world. So that’s absolutely a limitation, and officers will bring it up — in particular, there are times where an officer would use non-lethal physical force and they can’t do that in the simulator. So it’s not a perfect measure by any means, but we certainly get more positive responses from officers than negative ones. Some of them just like the practice of going through the simulator, because departments vary in whether they have this kind of professional equipment for that practice. Others sincerely believe that they make good decisions, and they’re actually happy to have an opportunity to demonstrate it. So our interactions with the police have in fact been pretty positive.
Corey: Have you found any systematic differences between the officers who agree to be part of your studies and the ones that say no?
Joe: Well, the problem is we don’t ever get the ones that say no, [laughs] so it’s certainly possible.
Corey: By race or something else?
Joe: No — by race, by years of experience, by sex, the samples that we get are fairly well matched to the overall samples in the departments that we’ve worked with, so that hasn’t been a problem. But yeah, there’s always the lingering concern that there really are some officers making terrible decisions and we just didn’t get them in there. That said, we try to have the actual policing staff, like the staff sergeants, encourage officers to participate in as uniform a way as possible, to try to reduce officers’ own, you know, unwillingness to be part of it.
Corey: So what did you find?
Joe: What we’ve found so far — and we’ve collected data with almost 700 police officers now, running them through our simulator — is two main things. One is that we don’t find any evidence of racial bias in the decision to shoot. We can measure with almost millisecond accuracy how quickly they fire, how many times they fire, and so on, and we don’t find racial bias in that decision, okay. Officers appear to treat unarmed black and unarmed white citizens in the videos in the same way and are not more likely to shoot unarmed black citizens. What we find instead is that most of the variation in whether officers shoot — including things like whether they put their hand on their gun, whether they draw their gun, all of these kinds of outcomes — comes down to two things. One is that some scenarios make officers more likely to shoot than others. Scenarios with high threat levels, like serving a warrant for armed robbery, make officers more likely to shoot than serving a warrant for failure to pay child support, okay. Officers put their hands on their guns sooner when they’re serving the armed robbery warrant versus the child support warrant, okay. So scenario-to-scenario variation matters a lot. And then the individual actors, irrespective of their race, matter a lot. Some actors in the videos behave in more threatening ways than others — in terms of where their hands are, whether they run from the police, how they’re moving their arms — and all of those things also impact officers’ decisions to shoot. So those are the two big things influencing the officers. And what’s interesting is that this matches officers’ reports about what matters to them in making the real decision.
Corey: So I’d like to ask a couple of questions about your experience working with the police, and what they’ve told you about their jobs. So you’ve got partnerships now with a couple of different police departments, and that’s actually pretty impressive, given a lot of the skepticism that, you know, police often have towards academics.
Joe: Yes.
Steve: So I’m curious about how you basically bridge that gap, how you made them feel comfortable enough with you to work openly and to make their officers available.
Joe: Yeah, that’s a good question. I should say it hasn’t always worked out. So we have had times where we’ve tried to work with different departments, and what has happened on at least one occasion is that lawyers for the department came in and stopped the research before we could really get it off the ground. So even there, it was less the actual officers and more people concerned with the liabilities of the departments that stopped that.
Corey: They were afraid you might find out something that was damaging to the department?
Joe: That’s right, and that then they would be held responsible.
Steve: I’m surprised the lawyers let you in there at all, because a future or ongoing lawsuit against the department could call your results in as evidence of bias. That would be extremely damaging to the department.
Joe: Well, we were able to talk with the leadership prior to the lawyers getting involved. And then before we were able to do any studies and collect any data, that’s exactly when the lawyers came in, and that was their logic for why they wouldn’t let us work with them. Even if we promised that everything would be anonymous, all the data would be anonymous, that still wasn’t enough for them, so…
Steve: But if your data, if your results had come out negatively in terms of racial bias, you know, even if they didn’t know whether the specific officer…
Joe: That’s right, yeah.
Steve: … was involved in the lawsuit was involved, there’s a general climate at this precinct where, you know, this is incredibly damaging.
Joe: Yeah. So in the cases where it has worked, I think part of it involves just being open and listening to officers. And so we listen to the officers talk, we present ourselves in a sincere way that we want to learn from the officers, we want them to tell us how they make these decisions. They’re the experts, they’re trained, they get thousands of hours of training to make this decision. And instead of us coming in and saying look, we know you’re biased and we’ve got all these laboratory data that say that you’re biased, and so we’re going to test whether you’re biased or not, we actually approach it by saying we’re interested in understanding the decision, okay, in understanding how it is that you make this decision, and overall you do a great job of making this decision. Help us understand that, and help us learn why you do what you do, or how you do what you do. And that seems to defuse the situation a little bit and get us on their side. And then it’s just a long process of talking with the leadership, talking with individual officers — and, you know, it doesn’t happen overnight, certainly, it’s just something that goes on. And then, you know, we probably could have collected about a third more data than we actually did if we had just stopped the officers from talking. So what would often happen is, they would go through the simulator — and we only had 20 minutes with each officer, okay — they’d go through the simulator, which was about 19 and a half minutes, and then they’d start talking. And they’d open up, and they’d talk about decisions that they had been in, maybe ones that were really similar to the scenarios. And I in particular never stopped them from talking, and so sometimes we would talk for an hour, and they would then, you know, get to feeling trusting of you and be able to open up.
So I think the students who were with me collecting the data were angry with me usually, because I was collecting at a lower rate than all the other people were — you know, we had three different simulators going at once — I would just never stop the officers, I always wanted to hear what they had to say. And I think that helps then, they talk to other officers, and people get the sense then, oh yeah, there are these researchers at MSU and, you know, they’re good people, you should participate in this and you should do this. And it’s also great because you get incredible stories, and you get some real insight into what the officers are going through.
Corey: I would just like to hear a little bit about what these officers told you, because my impression was they opened up to you in a pretty honest and direct way about particular experiences, and also the general feeling of being a police officer in the US, in a major city today.
Joe: Yeah. So the general feeling of being a police officer we found varies depending on the department, but it ends up being fairly consistent within a department. So in some departments we’ve worked with, officers have really pessimistic views of the direction of the city, for example, and they clearly have, you know, a lot of problems with being a police officer in terms of what they view the public’s view of policing might be. In other departments, officers have really positive views of policing and positive views of the cities that they’re in. So there seems to be important city-to-city or department-to-department variation. In the cities where officers were more negative about policing or about police-community relations, a lot of what came out was just the problem of not being understood, okay, and citizens not understanding what policing was all about, or the difficult decisions that they have to make and the kinds of stressors that they’re under. So that came out as a consistent theme. When officers would open up about the hardships and the difficulty of policing, what often came out was, people just don’t understand what it’s like to be a police officer: we’re expected to do everything, and we’re expected to do everything in an exactly perfect manner, okay. And again, just to be clear, it’s not to excuse when officers make mistakes or when officers aren’t willing to change if they are, you know, showing systematic biases.
But in a lot of officers there seemed to be this concern, which really affected them psychologically, that we’re asked — you know, one officer said, you know, we’re asked to be enforcing the law, we’re asked to know the law, of course, but then we’re asked to enforce the law, we’re also asked to be school supervisors and help students out, we’re asked to be parents for kids, we’re asked to be everything, we’re asked to be elderly caregivers, we’re just asked to provide every service imaginable, and they keep loading more and more things onto our job that we’re supposed to do — and by the way, we have to do them all perfectly, right. If we do something wrong or we make a mistake, you know, we’re concerned that it’s going to be the end of our careers, or there’s this perception that the public is against us when we make an error. And they understood that it wasn’t all people in a city or all of the public, but just that there were very vocal components within the public that would make their lives miserable if they couldn’t do everything exactly perfectly like they were being asked to do. So that was a consistent theme in terms of the experiences that some officers at some of the cities we’ve worked with have had.
Corey: Now I understand that while your results are sort of, I wouldn’t say anomalous, they’re fairly different from the mainstream of social psychology, you think they’re actually not very different from results the criminologists have arrived at over the past few decades. Could you say a little bit about the difference between your field and criminology?
Joe: Yeah, there are a lot of differences between the two. I think you’re right about that impression: in criminology, things like the importance of the neighborhood have been a central variable for decades and decades, people have studied that. I think one of the big differences actually between criminal justice and psychology is the starting point for understanding that decision. So something that criminal justice has done well relative to psychology is to start with the people actually making that decision, in the environment where they’re making those decisions. So, you know, ride-alongs have been done in criminal justice research for decades. In the ’80s there were researchers doing ride-alongs with actual police officers, coding the data as best they could. In psychology, for whatever reason — and there are probably a lot of reasons — we’ve started at a completely different side of trying to understand this decision, which is, we know that people are biased, okay, and we’re going to do the same laboratory studies that we do to study every decision to show that people are biased. And so it’s just a completely different way of studying the question, to the point where, like I said, we don’t even use police officers in most of the studies that we have in the psychology of police officer decision-making — which, when you think about it, is just insane. There’s no reason why we would study an expert group, okay, by looking at non-expert decision-makers. And again, we do so in these extraordinarily stripped-down, what I would call just meaningless decision environments. So part of the reason why there’s been some difference in terms of the appreciation of some findings is that criminal-justice people have just been studying it in a very different way — what I would say is a much better way — than social psychologists have.
Corey: So we’re about out of time, Joe. Steve, do you have any more questions for Joe on these two topics?
Steve: Well I think it’s been extremely enlightening. Maybe you could say a little bit more about specific reactions to your simulator research results, reactions from other people.
Joe: Sure. So there have been a few different reactions that we’ve had. Among police officers, obviously they’ve been, you know, fine with the results, that’s what they expected — and again, as I said, I think in part they’re willing to do this work because they believe that they make good decisions, and they see it as an opportunity to show that. Among typical academics, I could point out that it’s quite difficult to publish these data [laughs] and we’ve had some maybe questionable rejections of the work — although it’s always hard to know, work gets rejected all the time for different reasons — but there’s certainly been some questionable rejections of these data as we try to publish them.
Steve: And what would be a typical reason that the reviewer would give to reject the paper?
Joe: The common thing we keep getting is, we know that officers are biased, and so therefore something is wrong with your simulator, [laughs] right. You couldn’t have actually found this, you know, maybe the simulator isn’t the right decision environment to study this in, and it’s not a real environment anyway, it’s just a simulation — despite the fact that it’s, you know, orders of magnitude closer to the real decision than all of the social psychology work that they would accept as being correct. But there seems to be a persistent theme: we know that officers are biased, so something’s wrong with what you’re presenting — which is kind of not how science is supposed to work.
Steve: Can you tell whether, when they say we know that officers are biased, it’s based on the experiments done on students on campuses, or on, for example, the statistical data that was discussed in part one of this podcast?
Joe: It’s been both. So we’ve gotten both of those. It’s just more surprising when it comes from the experimental data based on, you know, naive undergraduates doing these simplified computer tasks. So that’s just really surprising when that happens, and frustrating.
Steve: Okay, there are sort of two levels of learning from your results. One is among other academic scientists. So imagine that your kind of experimentation becomes popular and well supported, and so huge amounts of data are gathered, presumably verifying your results. What would the impact of that be on the attitudes of, for example, these referees that you’re dealing with now, and then eventually the broader public?
Joe: Yeah, so one impact would be what I would hope for if we could imagine, sort of, what the big-picture impact would actually be — and this is what my long-term aim is — which is to change the field of experimental social psychology very broadly, okay. So what I would love to see is — and this is sort of leading to the third section, Corey, that you have listed out there — experimental social psychologists just doing research differently, broadly doing research very differently. When we’re interested in understanding, you know, women’s participation in STEM, we don’t bring undergraduates into the lab and have them do an IAT, okay. What we do is, we study the topic that we’re interested in, and we understand the factors that go into success at that topic, and then we pursue it starting from that point among the people who are actually making those kinds of decisions. So in the end, what I would hope for, big picture, are broad changes to standard experimental practice, where it becomes unacceptable to do research on specific populations by drawing undergraduates from our introductory psychology subject pools. I’d like to see that stop.
Steve: Okay, but in the process of getting from here to there, imagine that you’re invited to an august psychology department, say at Harvard University, to give a colloquium on part one and part two of what you presented today. How open are people in those departments to learning from your results? If I accept at face value that you are a skilled experimenter and that you honestly have presented your data and it’s been analyzed correctly, to me it sounds like strong evidence, in many ways stronger evidence than all these sort of classroom convenience sample studies, as to the true nature of decision-making under stress by these police officers. But do you find that reaction among other psychologists and criminologists?
Joe: That reaction has been positive among those people who don’t study this topic, and among those people who I don’t make the broad implications for, okay. So I’ve talked about this research in different universities, and people who don’t study fatal police shootings have been open to the data, okay, so it has been positive in that sense. It has not been positive among people who study fatal police shootings, okay, because it’s undermining what their research finds. It’s also not been positive if they think about the implications for their own research topics in saying that the way they’re going about studying is not acceptable.
Steve: It’s been said that it’s very hard to get someone to understand something when their salary depends on them not understanding it.
Joe: That’s right.
Corey: That’s Upton Sinclair, is that right?
Steve: Yeah. But so…
Joe: Actually, it’s salary and reputation, okay — and in some sense even more important, their life’s work.
Corey: So not to go on too long because we already have, but just one last question about if you want the public to know something about police officers that you’ve learned through your simulator studies, what would it be that you think they don’t know right now?
Joe: I would say an appreciation, actually, for the difficulty of the decision that officers are often placed in. We have sometimes taken untrained individuals — regular citizens, community members, also sometimes undergraduates — into the lab and had them do the simulator, and one thing that they very frequently come away with is just a different appreciation for what that decision is like. I don’t think people really understand what you have to do to identify, in fact in under a second, where an object is and what the object is, and make a decision about how to respond to that object. Under those really demanding, highly uncertain conditions, decision-making is really, really difficult — it’s really hard to do that well. And so having an appreciation for the difficulty of that problem, and then also for how good of a job officers actually do, day in and day out, at making that decision, I think would be something that people could come away with.
Steve: You know, you may have already done this, but it would be great to shoot some video and put it up on YouTube of people going through your simulator, maybe some responses, maybe some interviews with some police officers. I think it would give, as you were just saying, a lot of people insight into how difficult it is to be a police officer, and also how hard an average person would find it to be placed in those circumstances.
Joe: Yeah, that’s a great idea.
Corey: Well thanks Joe for now, I think we’ve more than taken up the amount of time from you that we can reasonably expect. But we hope you’ll come back, and that’s all for me.
Steve: Yeah, thanks for joining us, I hope we can have you back.
Joe: Yeah, thanks so much, I really appreciate it.