Health Affairs This Week

California Attorney General Rob Bonta is investigating hospitals' software algorithms to help identify potential racial biases in the systems. Listen to Health Affairs' Jessica Bylander and Rob Lott discuss the background and research behind racial biases in health care algorithms.

Show Notes


View the Anniversary Timeline celebrating 40+ years of advancing health policy.


Creators & Guests

Composer
Andrew Barnes
Andrew Barnes wrote the theme music for Health Affairs This Week. He writes music under the name Fake Fever.

What is Health Affairs This Week?

Health Affairs This Week places listeners at the center of health policy’s proverbial water cooler. Join editors from Health Affairs, the leading journal of health policy research, and special guests as they discuss this week’s most pressing health policy news. All in 15 minutes or less.

00;00;36;28 - 00;00;44;29
Jessica Bylander
Hello and welcome to “Health Affairs This Week”, the podcast where the staff of Health Affairs discuss the most pressing health policy news of the week. I'm Jessica Bylander.

00;00;45;13 - 00;01;20;16
Rob Lott
And I am Rob Lott. Now, Jess, before we get started, I wanted to let our listeners know that Health Affairs is celebrating its 40th year. To help celebrate, we are offering a 40% discount on digital-only subscriptions to the journal and Insider Unlimited memberships. The sale ends at 11:45 p.m. Eastern Time today, that is, if you're listening today, Friday, December 2nd. And to get that discount, use the code 40440.

00;01;20;24 - 00;01;27;23
Rob Lott
That's 40440. 40440. And we'll provide a link in the show notes.

00;01;28;02 - 00;02;03;26
Jessica Bylander
I can't believe it's been 40 plus years since Health Affairs was first published, and there's also a really cool digital timeline on our website now, and it shows kind of how our history as a journal and other products has intersected with major trends in health policy. So listeners can also check that out on our website. Well, getting down to business this week, a headline that caught our eye was about efforts in California by the attorney general, Rob Bonta, who recently won his election, to investigate racial and ethnic bias in health care algorithms.

00;02;04;13 - 00;02;31;09
Jessica Bylander
So a few months ago, Bonta sent letters to hospital CEOs in California asking how they and other providers were identifying and addressing racial and ethnic disparities in the decision making tools they were using. And that's a first step in a Department of Justice inquiry into whether algorithms--commercial health algorithms used in health care--have discriminatory impacts based on race and ethnicity.

00;02;31;29 - 00;02;56;08
Jessica Bylander
And we've been hearing more and more about this, about the potential bias and the potential for racism within clinical algorithms. So we thought we'd unpack that in today's episode and talk about some recent developments on that front. But first, Rob, understandably, not everyone is familiar with clinical algorithms. So what are they, and what are they used for currently?

00;02;56;15 - 00;03;17;01
Rob Lott
Yeah, Jess, so algorithms, that's one of those words, right, that I think a lot of people use and maybe not everyone really understands and maybe different people have different definitions for the term. But, you know, at its most basic, an algorithm is a set of rules or instructions for solving a complex problem or maybe even just answering a question.

00;03;17;13 - 00;03;49;28
Rob Lott
And it's essentially just a complex formula. So in many cases, these kinds of formulas are used to identify a subset of people, or maybe events or episodes, from within a much larger dataset or population, really. So just to use a simplistic example: if my credit card, Jess, were suddenly to be charged for 50 iPads purchased in Albuquerque, New Mexico, that system, the credit card system, will flag it,

00;03;49;28 - 00;04;12;20
Rob Lott
and I'll get a message asking if that was me making the purchase, or whether it's possible my credit card was stolen. Now, let's say I lived in Albuquerque instead of Chicago and ran an after-school program where I regularly purchased iPads for students. In that case, the system would not flag the purchase, because it would fit my pattern.

00;04;12;23 - 00;04;39;07
Rob Lott
And so the credit card company uses the system to catch these instances of fraud. Now, the credit card company obviously doesn't have a bunch of human beings sitting in a room reviewing everyone's purchases. It has an algorithm, which is essentially running on all the data they have on me, including past purchases, place of residence, and occupation. And they use all that data and that formula to spot when something is amiss.

00;04;40;06 - 00;05;04;14
Rob Lott
So lately, these formulas have gotten a lot more complex and sophisticated, using the so-called lessons learned from running the formula the first thousand times, for example, to inform how it's run the next thousand times, maybe to make it more accurate. And this is sometimes referred to as machine learning, or even artificial intelligence, depending on how sophisticated it gets.
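
That kind of rule-based flagging can be sketched in a few lines of Python. To be clear, the fields, rules, and thresholds below are invented for illustration; real credit card systems use far richer models trained on historical transaction data:

```python
# Toy rule-based fraud check. Every rule and threshold here is a
# made-up stand-in for the kind of signals a real system might use.

def flag_transaction(purchase, profile):
    """Return True if the purchase looks suspicious for this cardholder."""
    far_from_home = purchase["city"] != profile["home_city"]
    unusually_large = purchase["amount"] > 10 * profile["typical_amount"]
    out_of_pattern = purchase["merchant_type"] not in profile["usual_merchants"]
    # Flag only when several independent signals line up.
    return sum([far_from_home, unusually_large, out_of_pattern]) >= 2

# A hypothetical Chicago resident suddenly "buying" 50 iPads in Albuquerque:
profile = {
    "home_city": "Chicago",
    "typical_amount": 80.0,
    "usual_merchants": {"grocery", "restaurant"},
}
purchase = {"city": "Albuquerque", "amount": 25_000.0, "merchant_type": "electronics"}
print(flag_transaction(purchase, profile))  # True: flagged for review
```

The machine learning versions described above replace these hand-written rules with weights learned from past transactions, but the basic shape, data in, yes/no decision out, is the same.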

00;05;04;26 - 00;05;08;01
Jessica Bylander
Wow. And how does this work in health care generally?

00;05;08;16 - 00;05;30;04
Rob Lott
Yeah, well, in a lot of different ways. But in health care, among many other settings, these formulas are sometimes referred to as decision-making tools. I used that term at the top. And they've really come into that role, I think, based on two key virtues. One is that the formula can be applied to a large dataset. Especially with modern computing,

00;05;30;11 - 00;05;58;20
Rob Lott
you can now process a lot of individual data points very quickly. So imagine applying these formulas to an entire hospital's patient population with the hope of identifying those most likely to benefit from a care management program. Or imagine a public health department combining Medicaid data with county jail data and a bunch of other sources to predict who in the county might be most likely to suffer an opioid overdose.

00;05;59;07 - 00;06;32;22
Rob Lott
So these are real scenarios, and they get very complicated, very quickly. So that's one virtue, as I said. A second virtue of using algorithms is the appearance of removing human judgment, and all of its flaws, from a given question. So in health care, we're worried about variation, right, for example? And so what if we applied a set of rules to key decisions with the hope of reducing some of that variation, and therefore some low-value care?

00;06;33;07 - 00;07;04;24
Rob Lott
Now, I said it has the appearance of removing human judgment, Jess, but what our listeners didn't know at the time was that I was doing that air-quote thing with my fingers, because in fact it really just shifts where that judgment is applied. After all, someone has to design the algorithm in the first place, right? And it's in that design process where human judgment, including bias in many forms and other, often systemic, injustices, finds its way into the formulas anyway.

00;07;05;11 - 00;07;13;06
Rob Lott
And for that reason, they've really come under fire in recent years. Jess, can you talk a little more about some of those concerns?

00;07;13;23 - 00;07;43;12
Jessica Bylander
As you mentioned, it's very complicated and there's a lot of data at play, you know, so-called big data. And any time you're mining large datasets, big data, to make predictions or to help with decision making, experts warn that the algorithm is only as good as the data it works with, and also that data and algorithms may reflect widespread biases in society.

00;07;43;26 - 00;08;21;28
Jessica Bylander
So problems arise when these algorithms, which, as you mentioned, might determine who's a good candidate for care management or other procedures or treatments, result in disparate care or access for patients based on race or ethnicity. And that has been found to be the case in many of these algorithms. So back in 2019, a study published in Science by Ziad Obermeyer and colleagues made a really big splash when it found evidence of racial bias in an algorithm that's used for tens of millions of people, and it's used to target patients for a high-risk care management program.

00;08;21;29 - 00;08;52;28
Jessica Bylander
So these programs are designed to help patients with complex health needs by providing additional resources, including access to additional providers. But since the programs are very expensive, health systems use algorithms to predict which patients would benefit the most from them, so they can target the program at those who need it the most. Which is, you know, a noble cause in the sense that you want to make sure that if you have limited health care dollars, they're going to the right people.

00;08;53;05 - 00;09;23;12
Jessica Bylander
But what turned out to happen is Obermeyer and colleagues found that Black patients who were assigned the same level of risk by the algorithm were actually sicker than White patients. So in other words, fewer Black patients were identified for care management programs, even though they were equally as sick and could have benefited from them. And the authors found that if the algorithm weren't biased, the percentage of Black patients receiving help from the program would more than double.

00;09;23;25 - 00;09;27;24
Rob Lott
Oh wow. Did you learn why that was the case?

00;09;28;12 - 00;09;59;13
Jessica Bylander
Yeah. I mean, interestingly, the algorithm they studied didn't include race at all. It excluded race. So you might think, well, how could the algorithm be racially biased if race wasn't factored into it? Well, the issue was that the algorithm predicted health care needs based on patients' health care costs. However, due to a variety of factors, including factors that we know are linked to structural racism, Black patients have lower health care costs than equally sick White patients.

00;09;59;28 - 00;10;22;17
Jessica Bylander
So we know that Black patients historically don't have access to all the health care that they need, or they may forgo care or not be offered the care they need. So using health care costs as a proxy for how much health care you need can lead to the racial bias that the authors found in this algorithm. And that's not the only recent evidence of bias in algorithms.
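
The cost-as-proxy problem can be shown with a tiny synthetic example. To be clear, the patients and numbers below are made up, and this is not the actual algorithm Obermeyer and colleagues studied; it only illustrates the mechanism:

```python
# Two hypothetical patients with identical underlying illness burden.
# Structural barriers to access suppress one patient's observed spending,
# so a cost-based proxy under-ranks that patient's need.

patients = [
    {"name": "Patient A", "true_need": 8, "observed_cost": 12_000},
    # Same illness burden, but less care received, so lower recorded cost.
    {"name": "Patient B", "true_need": 8, "observed_cost": 6_000},
]

# The program has one slot; the algorithm picks the highest-"risk" patient,
# but "risk" is measured by cost, not by underlying need.
selected = max(patients, key=lambda p: p["observed_cost"])

print(selected["name"])  # Patient A gets the slot on the cost proxy,
# even though both patients are equally sick (true_need == 8 for both).
```

Swap the proxy (cost) for the thing you actually care about (need), and the two patients tie; the bias lives entirely in the choice of what the formula predicts.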

00;10;22;23 - 00;10;48;25
Jessica Bylander
In a New England Journal of Medicine article from August 2020, the authors compiled a list of 13 algorithms that adjust for race across medical specialties and found the potential for bias in all of them. So, for example, the American Heart Association has the Get With The Guidelines heart failure risk score, and that assigns three additional points to a patient who identifies as non-Black.

00;10;49;11 - 00;11;15;26
Jessica Bylander
And in doing so, that characterizes Black patients as being lower risk. So if you were following those guidelines, that could direct heart failure care away from Black patients. Another example is the algorithm that's used to predict how risky it would be to attempt a vaginal birth after you've previously had a cesarean birth. And that algorithm predicts a lower likelihood of success for anyone who identifies as Black or Hispanic.
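
To see how an additive race adjustment like that can shift care, here is a deliberately simplified sketch. The base score and cutoff are invented, and this is not the actual Get With The Guidelines scoring table; it only mimics the 3-point adjustment described above:

```python
def risk_score(clinical_points, identifies_as_black):
    # Simplified caricature: identical clinical inputs, plus a race
    # adjustment that adds 3 points only for non-Black patients.
    return clinical_points + (0 if identifies_as_black else 3)

# Two patients with identical clinical findings.
black_patient = risk_score(40, identifies_as_black=True)       # 40
non_black_patient = risk_score(40, identifies_as_black=False)  # 43

# With any fixed treatment threshold that falls between the two scores,
# the Black patient is classified as lower risk despite identical findings.
THRESHOLD = 42  # hypothetical cutoff for intensified heart failure care
print(black_patient >= THRESHOLD, non_black_patient >= THRESHOLD)  # False True
```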

00;11;15;26 - 00;11;56;05
Jessica Bylander
And that's problematic because, generally, vaginal birth is considered to have lower risks and some additional benefits. So even if you've had a prior cesarean birth, you might like to try what's called a VBAC. And so it turned out that women of color were seen as being worse candidates for a VBAC. And other algorithms that they looked at run the risk of, on one hand, steering patients of color away from cardiac procedures, or delaying access to kidney transplantation, or incorrectly classifying the severity of lung disease for racial and ethnic minorities.

00;11;56;29 - 00;12;25;15
Jessica Bylander
And, you know, we recognize now that race is a social construct rather than any indication of genetic or biological difference. So, as the authors of that study wrote, algorithms that change their output based on race or ethnicity can propagate false beliefs that race is biological or genetic. So clearly there are problems with these algorithms being biased, and there have been increasing calls and actions aimed at addressing this, including the actions by the California attorney general.

00;12;25;25 - 00;12;28;11
Jessica Bylander
So, Rob, can you say more about what's being done on this front?

00;12;28;21 - 00;12;55;29
Rob Lott
Yeah, Jess. Let me quickly point to a few examples of some ongoing policy efforts in this space. For one, Congress is getting in on the act. Earlier this year, Senators Ron Wyden and Cory Booker and Representative Yvette Clarke introduced the Algorithmic Accountability Act, which would require companies to conduct impact assessments for bias when using automated decision systems to make critical decisions.

00;12;57;00 - 00;13;53;21
Rob Lott
We also recently saw a new proposed rule from HHS focused on Section 1557 of the Affordable Care Act, which prohibits discrimination. The new proposed rule would explicitly extend these protections to the impact of clinical algorithms and prohibit the use of discriminatory algorithms in health care decision making, also creating new mechanisms to support impacted individuals. We actually recently ran an article in Health Affairs Forefront by authors Rohan Khazanchi, Aletha Maybank, and colleagues, which basically makes the argument for using this moment, this proposed rule, to extend those protections further and really zero in on ways to protect people from potential discrimination.

00;13;53;21 - 00;14;01;13
Rob Lott
So a lot going on here in the public sector to match some of the interesting stuff happening in the private sector as well.

00;14;02;08 - 00;14;34;15
Jessica Bylander
Yeah, it's a complicated field. And I know there's some debate, you know: should we just not have these algorithms? What should we do going forward? And it seems like the bottom line is that these are necessary in clinical practice, that it's not possible for individuals to make these decisions for millions of patients quickly, but that we really need to be careful about how they're applied and ensure that they don't increase inequities or reify problematic assumptions about race.

00;14;34;27 - 00;14;54;28
Rob Lott
Absolutely. A lot of work to do, but really important. And that sounds like a great place to wrap up. So to our listeners, thanks for checking out another episode of “Health Affairs This Week”. If you like this episode, tell a friend, leave a review and subscribe wherever you listen to podcasts. Thanks, Jess.

00;14;55;11 - 00;15;09;23
Jessica Bylander
Thanks, Rob.