Health Affairs This Week


Health Affairs' Jeff Byers welcomes Brian Anderson, President & CEO of the Coalition for Health AI (CHAI), to the program to discuss the adoption of artificial intelligence technology in health care and the future application of these tools.

Health Affairs is hosting an Insider-exclusive virtual event on March 19th examining the potential policy and administrative changes surrounding the Medicaid program and what they may mean in terms of coverage, operations, and financing. Sign up today.

Related Links:

Subscribe to UnitedHealthcare's Community & State newsletter.

What is Health Affairs This Week?

Health Affairs This Week places listeners at the center of health policy’s proverbial water cooler. Join editors from Health Affairs, the leading journal of health policy research, and special guests as they discuss this week’s most pressing health policy news. All in 15 minutes or less.

Jeff Byers:

Hello and welcome to Health Affairs This Week. I'm your host, Jeff Byers. We are recording on March 7, 2025. Before we begin, I just wanna give a quick shout-out that as a special Insider event on March 19, we will have an hour-long session on Medicaid's uncertain future. Check out the details in the show notes, and become an Insider so you can join us then.

Jeff Byers:

Today, I'm joined by Brian Anderson. Brian is the CEO of the Coalition for Health AI, or CHAI. It's a nonprofit focused on developing guidelines and best practices for AI use in health care. Brian, welcome to the program.

Brian Anderson:

It's so great to be here, Jeff. Thanks for having me.

Jeff Byers:

Yeah. So for listeners that may not know, are you able to give us just, like, a thirty-second pitch of what CHAI is all about and what you do?

Brian Anderson:

Yeah. So CHAI was started four years ago in the middle of the pandemic, when a number of private sector organizations were coming together. Of course, we were, you know, focused on solving the pandemic at the time, but we began appreciating the kind of impact you have when nontraditional companies that are inherently competitive work together. And so we asked, is there something we could be doing in AI? And we looked at the concept of responsible AI and at developing consensus-driven, technically specific best-practice frameworks on what responsible AI looks like for a developer developing it, for a deployer deploying it, be it a health system or a payer or a life science company, and on what it means to manage and monitor AI tools over time.

Brian Anderson:

And so, really, that's been the focus of CHAI since we started: developing consensus-driven best-practice frameworks for what responsible AI looks like in health.

Jeff Byers:

Thanks. Thanks for giving us that quick overview. Let's just get into what you're kinda seeing on the policy landscape. You know, in January, the then-Biden administration released a strategic plan for health care AI. This document has since been removed from the HHS website, as far as I can tell.

Jeff Byers:

The New York Times reported that FDA layoffs were decimating teams reviewing AI and food safety. Then today, I read a report that the FDA is trying to bring back some of those employees. So it's hard to tell exactly what the federal health care agenda is for AI. You know, what are you seeing in that landscape?

Brian Anderson:

Yeah. You know, it's obviously been a transition, and a period in which I think many of us are, you know, very interested in understanding where the current administration is going to take policy and its priorities. One thing that has certainly stood out to me, that I'm really excited about, honestly, is an interest in engaging private sector organizations like CHAI. CHAI is fundamentally led by private sector stakeholders, from technology companies to health systems to payers to patient community advocates, and in working with those communities to develop the kinds of guidelines and guardrails in our space, which of course is health AI, rather than having, you know, top-down regulation developed and instantiated by a regulator.

Brian Anderson:

And so, you know, I think there's appetite and interest in partnering with organizations like CHAI because that's our focus: a private sector-led effort to develop those kinds of internal, self-regulatory guidelines and guardrails, and we're excited about that. I think the second big area of focus that I'm also very interested in exploring further is an interest in investing in infrastructure and technology assets that help accelerate innovation to keep us economically competitive. And this, of course, is very core to the mission of CHAI, which is to develop AI that is responsible, that serves all of us. To do that, you need the kind of infrastructure investments across our nation to bring online different datasets, from rural communities in Appalachia to the heartland to urban environments as well. And so we're seeing, I think, a lot of interest at a policy level in exploring how to make those technology investments to spur innovation and drive economic competitiveness, and we're excited to be part of those conversations.

Jeff Byers:

Great. Thanks for that overview. Is there anything else that you can give us about, like, what you're seeing from federal policymaking?

Brian Anderson:

Yeah. I mean, to give some more specifics, one area that really excites me is this: there's always been this promise and hope that technologies are going to bend the cost curve in health care. And I would say, certainly during my lifetime, I have not seen that. In the fifteen or twenty years that I've been in digital health, the costs for health care delivery have only gone up. I think AI has a unique opportunity to potentially drive down costs in novel ways because it's such a powerful tool. And I see interest in exploring that at a policy level.

Brian Anderson:

And, certainly, I think there have been some announcements from various organizations that work closely with the Trump administration around exploring how AI tools can be leveraged in value-based care, looking at Medicare Advantage, even looking at Medicaid or, you know, ACOs, and how they can incentivize the use of AI tools in those value-based care spaces to potentially drive down costs. And I think AI is really well situated to potentially help in those areas. We already see private sector companies doing really impactful work with the patient lives that they cover in that space. So, you know, I'm excited to see that. I would love to see signals of costs coming down with greater adoption of AI.

Brian Anderson:

So, you know, that's, I think, an example of something specific that potentially has relevance to CMS. We'll see where it goes.

Jeff Byers:

So when we talk about AI, we're talking about everything, and it seems like everything under the sun can be AI these days. So when you think specifically about health care, what are the current major capabilities you're seeing in AI across health care?

Brian Anderson:

Yeah. Well, actually, let me first push back on that statement you just made. When we started CHAI, we intentionally chose the name health AI, not just health care AI, because health care, I think, connotes specifically looking at the traditional health care delivery system, with hospitals and health systems and clinics. CHAI is health AI because there is an additional area of focus that we have, which is the direct-to-consumer space. And I think this is where AI vendors building technologies can really have a dramatic impact as it relates to challenges in access to care for so many people.

Brian Anderson:

Rural communities, as an example, or inner-city communities. You can have AI tools that are meeting individuals where they are. And so, just to put one kind of marker in the sand: CHAI is very interested in and focused on partnering with technology companies, patients, and doctors and nurses in looking at different use cases that disrupt traditional health care delivery. You know, the patient goes to the doctor, waits in the waiting room, is seen by the doctor for fifteen minutes, and then comes back maybe three months, six months, a year later.

Brian Anderson:

I think some of the really exciting use cases that we're focused on supporting with best-practice development are in the direct-to-consumer space: empowering patients as consumers and creating the kinds of guidelines and guardrails that ensure the tools being used can be trustworthy, safe, and effective. For other use cases, certainly in the traditional AI space, I think there's a lot of excitement about robust, powerful expert classifiers helping with a variety of different clinical decision-making tools. There's also a unique interest, and this is something I'm really excited about, in the use of both traditional AI tools and generative AI tools on the administrative back end. Health systems are so challenged right now in terms of their margins and, you know, staying financially solvent. Developing and deploying tools that actually help them create efficiencies with coding, with billing, with scheduling, with staffing levels, I think those will be the tools that help them, hopefully, navigate this really financially challenging time. The generative AI space, you know, is really exciting.

Brian Anderson:

I can tell you that, as a physician who struggled with burnout, seeing some of the tools that are coming online, like AI scribes that help doctors connect better with their patients, really means a lot. You know, there's this stat that is really sobering that I like to remind listeners of, which is that in medicine today we have to graduate at least two, if not three, medical schools' worth of students every year to account for the providers who commit suicide every year. And that's sobering. In part, that is oftentimes because of the lifestyle challenges that providers have. They get burnt out, really burnt out.

Brian Anderson:

And we have an opportunity here with some of these generative AI tools to address some of those challenges in really meaningful ways: to help providers better connect with their patients and not spend inordinate amounts of time away from their spouses, their kids, and their families. Enabling AI tools to do that, I think, is gonna be a really exciting space. And, of course, there's the kind of improvement in clinical outcomes that generative AI can offer by taking complex medical records, and the research papers and publications that doctors can't stay on top of, and helping doctors digest all of that, which could help, I think, save lots of patient lives. We're already seeing a lot of anecdotal stories about patients taking their medical records, uploading them into one of these frontier models in a secure way, and getting results that really can help drive a diagnosis and treatment.

Jeff Byers:

Yeah. I wanna push back a little bit on that research thing you mentioned. I think people can keep up to date with health care if they subscribe to Health Affairs. Though I might be biased in that. Well, hey.

Brian Anderson:

I mean, we just published a recent article with the National Academy of Medicine with you guys, so I can't push back on that at all.

Jeff Byers:

That's right. Yeah. And I do wanna ask you about that specific article a little bit later on, but I'm glad you brought up the back-end and administrative uses for provider organizations. Because one of the things you hear about, and I'm gonna use AI a little broadly in this case, so feel free to help fill out the details, is whether these tools are generating the ROI for the cost that they're incurring.

Jeff Byers:

So, like, when it comes to provider organizations, hospitals, and practices, what are their challenges for cost versus ROI?

Brian Anderson:

Let me lay out, I think, two frameworks here. On the clinical delivery side, there is a real challenge in identifying financial ROI because there's so often a misalignment of incentives. As providers, obviously, we wanna drive clinical quality and improvement in patients' lives. But oftentimes that's not necessarily tied to financial return for the health system, particularly if they're in, you know, the fee-for-service space. So you have challenges in deploying tools that may help a provider diagnose a patient more accurately but aren't necessarily tied, with a very clear line, to an economic ROI for the health system that's, you know, laying out millions of dollars to procure that tool.

Brian Anderson:

On the administrative back-end side, I think it's a little bit easier. I'll take the use case of coding. Right? Physician organizations are oftentimes very concerned that they're missing, you know, a particular DRG or CPT code in a long narrated note or a complex hospital visit, so that you're leaving money on the table. We are hearing, and I haven't yet seen a wealth of knowledge published in this space, but we are hearing that there are, I think, easier ways to specifically identify where AI tools deployed in that kind of use case are driving better economic returns. Additionally, on the scheduling side.

Brian Anderson:

So, you know, I think one of the real challenges for physicians and the scheduling teams that they work with is driving schedule density and driving down no-shows. When you have tools that can help identify individuals who might have a high risk of no-showing, and that enable those health systems to take proactive steps to get that individual to come in for the visit, you see your schedule density go up. Those sorts of things are easy to measure. The closer you get to, you know, that billable moment, I think the easier it is to measure economic ROI. And so tools that help with scheduling and schedule density, and tools that help with administrative billing, capturing all the codes in a complex hospital visit or outpatient visit, are some areas where I'm seeing signals that are helping health systems, I think, get a little bit excited about some of these tools that they're using.
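To make the scheduling use case concrete, here is a minimal Python sketch of flagging high no-show-risk appointments for proactive outreach. The risk factors, weights, and threshold are illustrative assumptions, not a validated model or any vendor's actual product.

```python
# Illustrative sketch only: score upcoming appointments for no-show risk and
# flag high-risk patients for outreach (reminder call, transport help, etc.).
# A deployed tool would replace the toy scoring rule with a trained model.

from dataclasses import dataclass

@dataclass
class Appointment:
    patient_id: str
    prior_no_shows: int
    days_until_visit: int
    has_transport_barrier: bool

def no_show_risk(appt: Appointment) -> float:
    """Toy risk score in [0, 1], built from hypothetical weights."""
    score = 0.1
    score += 0.2 * min(appt.prior_no_shows, 3)      # history of missed visits
    score += 0.05 if appt.days_until_visit > 14 else 0.0  # far-out bookings slip more
    score += 0.2 if appt.has_transport_barrier else 0.0
    return min(score, 1.0)

def flag_for_outreach(appts: list[Appointment], threshold: float = 0.5) -> list[str]:
    """Return patient IDs whose predicted risk warrants proactive outreach,
    which is what lets schedule density go up and no-shows come down."""
    return [a.patient_id for a in appts if no_show_risk(a) >= threshold]

if __name__ == "__main__":
    appts = [
        Appointment("p1", prior_no_shows=2, days_until_visit=21, has_transport_barrier=True),
        Appointment("p2", prior_no_shows=0, days_until_visit=3, has_transport_barrier=False),
    ]
    print(flag_for_outreach(appts))  # -> ['p1']
```

The measurable outputs here (risk scores, flagged visits, resulting schedule density) are exactly why this use case is easier to tie to economic ROI than clinical decision-support tools.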

Jeff Byers:

You know, that made me wanna ask: a lot of concerns we've heard about with AI are about hallucinations over patient diagnoses or something along those lines. And I imagine those are real concerns, but I actually wanted to ask you about potential hallucinations for, like, administrative data. Is that possible when you're talking about coding and billing and things like that? Is that a concern at all?

Brian Anderson:

Well, anytime I think you're working with a nondeterministic model like a gen AI model, it is gonna be a concern. And so it's a risk. But what we're seeing as the space matures, as an example, is that when you take an ensemble approach, meaning I'm building an AI product and it's not just one model operating, it's multiple models, the cases where we're seeing a lot of reduction in hallucinations are where you have, you know, an agent or a supervisory model that is watching out for particular kinds of hallucinations. It's specifically trained to identify and catch the hallucinations that one of the earlier models in the ensemble might have created, and then stop that and reduce the risk of hallucination.

Brian Anderson:

That's a really exciting space, I think, where we're seeing a significant reduction. I would say, additionally, models trained more robustly on high-quality data, with the additional kinds of tuning and training steps that vendors, I think, are discovering, reduce those kinds of hallucinations and mitigate that risk. It doesn't drop it to zero, but it reduces it. And so, you know, I'm excited to see that. It does reinforce, though, that in some of these consequential use cases, like creating a claim bill, you still want a human in the loop to review things until these models become, I think, more accurate and we're more clearly able to measure how these models perform in certain use cases.
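Here is a minimal Python sketch of the ensemble pattern Brian describes: a drafting model proposes billing codes and a separate supervisory step accepts only codes it can ground in the source note, routing the rest to a human reviewer. The function names, codes, and keyword checks are hypothetical stand-ins, not any real coding model or CHAI specification.

```python
# Illustrative sketch only: a drafting "model" proposes CPT codes from a note,
# and a supervisory "model" rejects anything it cannot ground in the note text.
# Both are stub functions; in practice each would be a call to a deployed model.

def draft_codes(note: str) -> list[str]:
    """Hypothetical drafting model: proposes CPT codes from a clinical note."""
    proposals = []
    if "office visit" in note.lower():
        proposals.append("99213")
    proposals.append("93000")  # deliberately unsupported, to show the check
    return proposals

def verify_code(note: str, code: str) -> bool:
    """Hypothetical supervisory model: accepts a code only if it can point to
    supporting evidence in the note (here, a crude keyword lookup)."""
    evidence = {"99213": "office visit", "93000": "ecg"}
    keyword = evidence.get(code)
    return keyword is not None and keyword in note.lower()

def code_with_review(note: str) -> tuple[list[str], list[str]]:
    """Ensemble step: keep draft codes the verifier can ground in the note;
    anything rejected goes to a human-in-the-loop queue rather than the bill."""
    draft = draft_codes(note)
    accepted = [c for c in draft if verify_code(note, c)]
    needs_human_review = [c for c in draft if c not in accepted]
    return accepted, needs_human_review

if __name__ == "__main__":
    note = "Established patient office visit for hypertension follow-up."
    print(code_with_review(note))  # -> (['99213'], ['93000'])
```

The point of the design is that the supervisory check catches unsupported output from the drafting step; it reduces, but does not eliminate, the risk, which is why the rejected items still flow to human review.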

Jeff Byers:

So you mentioned you've been in the field for fifteen to twenty years. So I have to assume that you saw early stages of adoption for EHRs. Is that correct?

Brian Anderson:

Yeah. I mean, one of the first jobs I had in the digital health space was helping to lead some of the product development and clinical services that we offered at Athenahealth, the EHR company that I worked at.

Jeff Byers:

Do you see any parallels to the adoption of EHRs versus AI?

Brian Anderson:

Yeah. Well, certainly with, you know, any kind of digital tool, there's gonna be the early adopters, the folks in the middle, and then, you know, the late adopters. And I think we're certainly seeing a lot of similarities there in general. You have, you know, the technophiles that are excited about any kind of digital tool, leaning in and using them as much as they can. And you're gonna have the folks that aren't as excited about technology and will be, you know, later on in adopting these tools.

Brian Anderson:

Additionally, I think, you know, in the early days of EHRs, these were very expensive pieces of technology to deploy, configure, use, and train folks on. And so it was mostly limited to health systems that were well resourced and funded and had, you know, armies of IT behind them to help do that configuration and maintenance. I think we're seeing similar signs of that holding true here, meaning you have a lot of academic medical centers and big health systems with big IT teams that are some of the first movers to adopt and deploy AI tools and technologies in, you know, really significant ways. My hope, and this is one of the missions within CHAI, is to create the kinds of playbooks and guides to help the rural critical access hospital, or the federally qualified health center in an inner-city environment, that maybe doesn't even have a single person dedicated full time to IT. How do you help those communities get into the AI game?

Brian Anderson:

I think one of the hopes, and one of the things many of us are excited about with AI, is that it doesn't necessarily put as much of a burden on an IT team to deploy it. Right? You don't need an army of informaticians standing at the ready to do data interoperability because, you know, you have the magic of generative AI that might be able to just do all that for you. Where I think the real challenge still lies for these FQHCs or critical access hospitals is how they manage and maintain and govern these tools over time. There's still significant cost associated with that.

Brian Anderson:

And so, I think, one of the more existential questions that we're gonna have to address as a society, if we want these smaller-market clinics and rural clinics to be able to participate, is how we support them in AI governance and in monitoring and managing AI models over time, similar to the challenges that those clinics still have today with EHRs.

Jeff Byers:

So we have about five minutes left, and I wanna make sure I ask you this question. I'm really glad you brought up, you know, kind of an entry into a question about workforce. So, your paper in Health Affairs was part of the Vital Directions series. Listeners may have heard my interview with Victor Dzau a week or two ago. Please check that out if you haven't.

Jeff Byers:

If you haven't read the papers, check them out on our website, or in the February issue if you're a subscriber. You and your colleagues write about promoting the development of an AI-competent workforce, which makes me wanna ask, as we're still at somewhat early stages for a lot of this: Where do you see the workforce moving, or how do you see it changing? Does it need upskilling, and how will duties change? I know that's a very broad, sweeping question, but how do you see the health care workforce shaping up in the wake of AI?

Brian Anderson:

Let me start with kind of a look to the future. Within five years, I certainly see the standard of care for doctors and nurses, more often than not, involving AI tools. And if you look at it from that paradigm, where these are tools that the majority of doctors are gonna be using as part of the standard of care within five years, there is a very urgent set of questions that we need to educate nurses and doctors around. And one of the principal questions, for any doctor who uses any kind of tool, be it a stethoscope or a scalpel or, you know, an otoscope that we use as part of our routine care in examining patients, is a question that every doctor should be asking.

Brian Anderson:

Is this tool that I'm about to use on my patient appropriate for the patient I have in front of me? And in the space of AI tools or digital tools, there's a lot to unpack in being able to answer that specific, important question. Questions like, well, how was this AI model trained? Was it trained on patients like the one I have in front of me? Do I know how the model performs on patients like the one in front of me?

Brian Anderson:

What are the indications for the model? What are the limitations of the model? How does it perform right now? Right? If my health system deployed the tool three years ago and I know roughly what its performance was three years ago, does my health system know how it performs right now?

Brian Anderson:

Right? These are questions I can call out to you and share with your listeners because, you know, I've been in the AI space and understand the principles of responsible AI. But that's just because I've had the privilege of working alongside people in this space and being educated in it. Many doctors, frontline clinicians, and nurses don't have time. They haven't, you know, been able to take classes in this space.

Brian Anderson:

And so we have an urgent need to upskill our workforce to be able to ask and answer the right kinds of questions, to deliver care, and to use the right tools. The example I oftentimes give is that you don't use a pediatric stethoscope on an adult. You don't use an adult stethoscope on a pediatric patient. And so, similarly, you don't use an AI tool that was trained on a specific population on a patient who comes from a population that is not represented in the data the model was trained on. You just don't.

Brian Anderson:

That's just using the wrong tool. And so helping to educate doctors and nurses on that kind of information, and working with the vendor community to give physicians the transparency on where to get the answers to those questions, is gonna be really important. It's gonna take not just educating and upskilling nurses and doctors; it's gonna take partnering with the vendor community to ensure we have the kind of transparency to get those answers.

Jeff Byers:

Brian Anderson, anything else you wanna plug before heading out?

Brian Anderson:

No. This has been great, Jeff. Thanks for having me, and really appreciate the time.

Jeff Byers:

Yeah. Appreciate the time with you. Brian Anderson, CEO of CHAI. Thanks again for joining us today on Health Affairs This Week. And if you, the listener, enjoyed this episode, please send it to the AI skeptic in your life, and we will see you next week.

Jeff Byers:

Thanks all.