Each week, Health Affairs' Rob Lott brings you in-depth conversations with leading researchers and influencers shaping the big ideas in health policy and the health care industry.
A Health Podyssey goes beyond the pages of the health policy journal Health Affairs to tell stories behind the research and share policy implications. Learn how academics and economists frame their research questions and journey to the intersection of health, health care, and policy. Health policy nerds rejoice! This podcast is for you.
Rob Lott:The pages of Health Affairs Journal and Health Affairs Forefront, and plenty of other publications, are rife with commentary, op-eds, and analyses about artificial intelligence and health care. These pieces share lots of eye-opening anecdotes about providers' and patients' experiences with various AI applications. Some authors lead with optimism and others are more skeptical. Many offer frameworks and guidance for how to grapple with the complex professional, financial, and ethical implications of AI. And many warn of the dire consequences we might face if we fail to grapple with those implications.
Rob Lott:Less common is hard evidence. Less common are empirical, peer-reviewed studies on how implementing AI applications in the clinical setting can affect provider practice, patient outcomes, and spending. That makes sense. Of course, we've only just begun over the last few years to see AI applications taken up broadly across the system. But now the first wave of studies is reaching the pages of Health Affairs and other journals, and one of those papers is our subject on A Health Podyssey today.
Rob Lott:I'm here with Doctor Anna Zink, assistant professor in the department of community health at Tufts University. Together with coauthors, she has a new paper published as part of the March 2026 issue of Health Affairs. The paper looks specifically at, quote, practice pattern changes after adoption of diagnostic AI tool used in conjunction with cardiac imaging. Doctor Zink, welcome to A Health Podyssey.
Anna Zink:Fantastic. Thank you for having me here today.
Rob Lott:And if I may, just a quick programming note for our listeners: I wanted to remind folks that in addition to all your usual podcast platforms, we're also sharing A Health Podyssey on Health Affairs' YouTube channel, so if you want to watch a video of this conversation, you can check it out there. All right. Well, let's dive right in. Doctor Zink, I believe there are more than 1,300 FDA-approved AI applications for clinical use.
Rob Lott:Medicare meanwhile offers reimbursement for about a dozen of those applications, including the application you study in your paper. Can you give us some background? Kind of evidence or on what kind of evidence do FDA and CMS base their decisions when evaluating
Anna Zink:Great question. So the FDA and CMS, as regulators, are approaching AI, I think, with slightly different questions and incentives. The FDA is looking to approve AI applications that are safe and effective while also trying to support innovation in this space. And so most of the tools we're seeing that they've approved so far are getting approved through the 510(k) clearance pathway; basically, they just need to show that they are doing something similar to what's already on the market. And in terms of evidence, they tend to look at performance metrics from retrospective studies to show that they're benchmarking well against existing solutions.
Anna Zink:I think in an ideal world, they would love to have more prospective studies, like what we think of as an RCT for drugs, but that hasn't been the case so far. It's hard for hospitals to want to invest in such a study when the tool doesn't have FDA approval. So there's a bit of a loop in trying to get more of those prospective studies to bring evidence before the approval. They've very much been relying on retrospective data, thinking about how well the tools are predicting the thing they're trying to predict. CMS, on the other hand, is receiving a bunch of applications asking for reimbursement.
Anna Zink:They are only going to reimburse things that have already been FDA approved, but, as you say, they have over a thousand options to choose from. When they're thinking about coverage, they're really thinking about covering things that are reasonable and necessary for the diagnosis of illness and injury. That is kind of their specification for anything they cover. And so they'll be looking more at the peer-reviewed studies, potentially randomized clinical trials if they exist, to try to assess whether those tools are reasonable and necessary for treating patients. Some examples of what's been approved to date: in our paper, we're looking at computed tomography fractional flow reserve, or FFRCT.
Anna Zink:It's a bit of a mouthful, so I'll probably use the abbreviation throughout this conversation. And that tool is analyzing a medical image of the heart, which is quite common for a lot of the tools we're seeing. They're usually built on top of a particular medical image, and they're trying to predict something from that image that would be useful for doctors in making a decision about treatment or diagnosis. There's some other ones though that are quite cool. They've also reimbursed for a tool that takes images of an eye, to try to determine whether the patient is at risk for diabetic retinopathy.
Anna Zink:And the idea there is to help improve diabetic eye screenings, which are kind of below benchmark to date. And so, so far, I'd say they're sort of taking it on a case by case basis. There is some preliminary evidence that exists, but it's often single site or sponsored by the company. And so we're really excited here to look at real world evidence to understand how well these tools worked in practice and see whether those studies are really externally valid and how we think about bigger changes in practice patterns and spending, etcetera, all the things you listed.
Rob Lott:Well, let's talk a little bit about that technology, the specific application that you guys studied. I'll use the same abbreviation you did, just to save us some time. FFRCT, as you said, kind of analyzes an image of the heart and is specifically geared at people with symptoms of coronary artery disease. Can you walk us through: what did clinicians do in this situation before the AI application was an option, and how has the application been integrated into that process since it became available?
Anna Zink:Definitely. So a patient walks in with symptoms of coronary artery disease or CAD, maybe that's chest pain, and the doctor is trying to figure out what's going on and how severe it is. So they have a lot of options at their disposal. There's a lot of different cardiac tests available. They could go with stress testing or echocardiograms or CT scans of the heart, or some combination of those tests.
Anna Zink:There's also a relatively newer imaging test called coronary computed tomography angiography, or CCTA. It's basically a chest CT scan, but it has better imaging that requires updated software, which not everyone has or is using to date. And so after the doctor decides what combination of tests to run, they have to make a decision. Either they think the patient can be managed without further testing, and they'll discharge them, maybe with a prescription or no further treatment, or they might decide the patient needs more, in this case, potentially an invasive coronary angiography, or ICA. Sorry, there are a lot of abbreviations in this paper, you will find.
Anna Zink:And the ICA, it's an invasive test. It allows the doctor to get a better understanding of what's actually going on in the heart and even potentially do procedures while they're in there.
Rob Lott:When you say invasive, they're literally inserting something into the patient's body?
Anna Zink:Yes. And I don't know, I'm not a clinician, so I won't try to go into the details, but it is considered an invasive procedure. And so this new AI tool, FFRCT, is supposed to basically take that image from the CCTA, that newer image of the heart, and determine the level of stenosis of the coronary arteries. So how much are the arteries narrowing because of plaque buildup? And that's supposed to help the doctor determine kind of whether that invasive test is needed or not.
Anna Zink:Patients don't want invasive tests if they don't need them, and they cost a lot as well. And so it's really meant to be used for patients who are kind of in the in-between. Doctors know who to discharge, and they know who's really high risk, but you have the ones in the middle, those close calls. And FFRCT is supposed to help make the decision for those blurry calls.
Anna Zink:Is this the patient that you want to refer to the ICA, or is this someone you can discharge? And it's just giving you that little extra bit of evidence, that confirmatory test, to understand what to do with that patient.
Rob Lott:Got it. Okay. So the goal of your study was to measure the effect of FFRCT adoption on healthcare spending and health outcomes. And I believe you did this by exploiting the fact that some clinicians adopted the application at different times. What did you learn from your study?
Anna Zink:Yeah, a bunch of things. I'll just set up how we went into the study, kind of what we were thinking about as we started. So again, when it was first introduced, it was marketed as a tool that could analyze that CCTA image for patients you weren't sure about. Ideally, they were hoping that it would improve targeting of ICAs, those invasive tests. And because the invasive test is really expensive, it was also advertised as a less expensive, noninvasive option to avoid that testing.
Anna Zink:So this is ideal, right? You have a tool that promises to improve diagnostic reasoning and decrease healthcare costs. So my coauthors and I were very interested in learning whether that was the case, but also in how adoption changed diagnostic testing patterns for clinicians more broadly. As I mentioned, FFRCT requires CCTA. So if you're using FFRCT, you might also actually end up using CCTA more. Or, we assume that there's a lot of unnecessary invasive testing being done, but maybe there's actually not enough invasive testing done.
Anna Zink:So it could have increased the use of invasive testing. And so it wasn't clear to us how the dynamics of adoption would actually play out, despite how it was advertised. And that's why we wanted to do the study. We used a difference-in-differences design, where we examined trends in outcomes for doctors who adopted FFRCT versus those who didn't, to look at how adoption of FFRCT changed a bunch of things. The first was how testing itself changed, which tests the doctors chose to use; how it changed spending on those tests; and how it changed patient outcomes, as you mentioned, over the course of a year.
Anna Zink:And then we also asked some secondary questions around productivity, which I probably won't talk about today. And we found some very interesting things. The first was that diagnostic testing did change. No surprise, we saw an increase in the use of FFRCT among adopters, which is in some ways mechanical, though it was sustained. We also saw more CCTA use, indicating that doctors who adopted FFRCT were also using CCTA, that more advanced imaging, more often.
Anna Zink:And then we did see that promised reduction in the use of invasive testing. We did see invasive testing go down a bit. So then we wanted to look at spending, right? Each of those tests costs a different amount. And we actually noticed that diagnostic spending per newly diagnosed CAD patient went up.
Anna Zink:So while we had a decrease in spending from the invasive testing, it was actually offset by an increase in spending on FFRCT and CCTA, which are more expensive than some of those other testing options like stress testing and echos, etcetera. I do want to note that we were just looking at spending on cardiac testing itself, so we didn't incorporate potential offsets from months ahead if there were better health outcomes, and so on and so forth. And we did actually see that patient health outcomes looked slightly better for adopters. So there was a reduction in adverse patient outcomes, which was encouraging and something we're interested in looking into in further research.
Rob Lott:Gotcha. Well, I wanna ask a little more about the implications of those findings, but first, let's take a quick break.
Anna Zink:Great.
Rob Lott:And we're back. I'm here with Doctor Anna Zink, and we're talking about how clinicians' practice patterns change after adopting an AI diagnostic tool. Doctor Zink, when it comes to debates about health care spending, there's often this back and forth where one side points to spending increases as a big problem in and of itself, wringing their hands. The other side sometimes responds and says, well, if we're getting something of value for that spending, like better health, maybe that's worth taking into account.
Rob Lott:And so when we look at the findings of this study against that backdrop of that debate, I'm wondering how you sort of put it in that context.
Anna Zink:Yeah, that's a great question. It's not just about spending, right? It's about what we are getting for that spending. And we are seeing some encouraging results in our findings, right? We do see a decrease in the use of invasive testing.
Anna Zink:We also detect a reduction in adverse patient cardiac events. And so, taken together, I think that signals that FFRCT is providing value in the healthcare space. So maybe that makes sense for CMS to be reimbursing it. It seems like it does. But then the question we're really trying to get at is, how much should they be reimbursing for FFRCT?
Anna Zink:And we're really probing at that second question, I think, at the end of the paper. How does CMS figure out how to set the price for FFRCT, and then AI tools more broadly? In the last few years, CMS has increased reimbursement for both CCTA, the underlying image used for FFRCT, as well as FFRCT itself. So currently, I think they're reimbursing well over a thousand dollars per use for these tests. And often that's because users of the tool are basically saying that the current reimbursement is not covering their costs of the new tool, but it's hard.
Anna Zink:With something like FFRCT, for many years it was actually only one company that had received FDA approval. So they were really determining the price being set for these tools. We have more competitors coming in now, which will likely change the prices being set. But it's not clear that CMS wants to just be paying the price that the first mover is setting in this type of space. And so I think they still have a lot of questions to think through, in terms of whether they should be bundling it with CCTA, or thinking about pricing as a subscription rather than a per-use fee.
Anna Zink:And they need to think about this for FFRCT and for these tools more broadly. Yes, it offers value, but how do we think about the pricing structure moving forward, given the current setup for reimbursement in Medicare?
Rob Lott:Were you able to consider those changes in reimbursement during your study period, or did that sort of happen after the fact? And I'm curious if that's something you're thinking about potentially for future evaluation.
Anna Zink:Yeah, that's a great question, actually. We didn't in the current paper. We're planning on doing more with this work, and so I think it'll be a really opportune thing to look at, especially because it is happening. I think that price increased a year or two ago, so it should be something we can study with enough data in the next year.
Rob Lott:Great. Well, in my introduction, I alluded to the sort of glut of frameworks and prognostication about how we can prepare our system for AI's growth and evolution over the years to come. And I'm curious based on your experience doing this study, if you think that our current AI regulatory regime is both rigorous enough and flexible enough to meet both the growing demand for various AI applications and what we can assume will be unforeseen challenges. What do you expect in this space over the next few years?
Anna Zink:Yeah. I think so far we've seen that various regulatory bodies are trying to fit AI into existing regulatory structures and finding it doesn't fit well, or it fits very awkwardly. So you have CMS trying to figure out how to pay for AI. It's sort of thinking of it as software, but we already know that the existing fee schedule is running into issues making it fit nicely. And so there are a lot of problems with trying to pay for software under the current way that CMS sets reimbursement.
Anna Zink:There's a lot of good perspectives on this, including one in your journal, so I won't go into the weeds on it. But they're really having trouble trying to find the right way to reimburse, one that strikes the balance between promoting use and not overuse. And I don't think they've come up with a good solution so far. You also have the FDA, which is regulating AI under its software as a medical device category, but that's not working well with newer generative AI, where use is wide-ranging.
Anna Zink:Like, what indication do you approve ChatGPT for? So I think, conveniently, in terms of gen AI, they've kind of tucked a lot of the newer mental health chatbots, etcetera, under this category of wellness apps, which they don't regulate, but I'm not sure how long that's going to last. And so I think there's also this bigger issue that a lot of patients are starting to get health advice outside of the healthcare system. OpenAI has the new ChatGPT Health. And there's a question about how much the FDA wants to regulate that.
Anna Zink:Right now, they're not at all, but I don't know, it's unclear to me whether they will or not. And beyond patients, we also have doctors getting advice via generative AI all the time. You have OpenEvidence, a new AI tool for medical decision support that a lot of doctors use. It hasn't been cleared by the FDA, and it isn't reimbursed by CMS. It's giving medical advice and affecting patient care.
Anna Zink:So, are we going to regulate it? I don't know. So we're already seeing kind of existing issues with the regulatory frameworks we have now in trying to handle AI. And I think it's probably only going to get worse. I think just more broadly in thinking about how to regulate, there's this question of how do we balance incentives for innovation with minimizing risk?
Anna Zink:And so the first question is like, what's the right balance there for AI? And then the second is how do we do it? I think I'm worried that we'll decide how to regulate AI based on how well we're able to regulate within existing systems, which will kind of predetermine that question of the balance between innovation and risk, where we should probably be thinking more about what the right balance is and then coming up with the right regulatory solutions to achieve that balance.
Rob Lott:Well, a useful reframing for us as we look ahead. Doctor Anna Zink, thank you so much for taking the time to chat with us today. Really interesting paper and really interesting conversation. Thank you for being here today.
Anna Zink:Thank you so much for having me. It's been fun.
Rob Lott:To our listeners, thanks for tuning in. If you enjoyed this episode, leave a review, recommend it to a friend. Check us out on our YouTube channel. Subscribe there, and, of course, tune in next week. Thanks, everyone.