The podcast delves into the realm of health tech, highlighting a common trend: a focus primarily on the US, EU, and UK. However, it advocates for a broader perspective, urging listeners to look beyond this bubble and consider the innovations happening in low- and middle-income countries (LMICs). These regions face significant digital and non-digital healthcare challenges, leading to inventive solutions borne out of necessity. By exploring the work of those in LMICs, the podcast aims to uncover valuable lessons from their successes and obstacles.
Hosted by Shubs Upadhyay, a primary care physician with a wealth of experience spanning clinical practice, innovation, regulation and medical software engineering quality, the podcast offers a unique viewpoint. Through this lens, it reveals a stark disparity between technological advancements and their impact on underserved populations. With a focus on spotlighting individuals and organizations making a real impact in these communities, the podcast invites listeners to join the journey of discovery.
Shubhanan Upadhyay AI (00:00)
Welcome to this episode of the Global Perspectives on Digital Health podcast. This is the podcast that unpacks learnings from people implementing digital health and AI for underserved communities globally to help you break out of your bubble and give you fresh insights on meaningful impact at the last mile of healthcare.
Shubs Upadhyay (00:18)
Really excited today. So we're changing up the format a little bit. We've got two guests on the podcast.
We've got Rigveda Kadam, who is from FIND. And also we've got Andrew Muhire. He is the Chief Digital Officer of the Rwanda Ministry of Health. So having that political aspect covered in this discussion will be so valuable. And to me, it's a really good connector to the previous episode we've just done, talking to an innovator who was working to support mental health
with digital tools in Rwanda as well. So today's podcast is really about the layers around supporting that work. Like, what enablers do you need from a political and infrastructure perspective? So for me, that's really, really exciting, to be able to connect the dots on that. Let's get these insights.
Shubs Upadhyay (01:08)
Rigveda, Andrew, thank you so much for joining us on the podcast. It's a real pleasure to have you here. I think it's going to be a very rich discussion for anyone in the global digital health space. Let's start with intros. Let's start with you, Rigveda. I would love to hear about the work that FIND do and your role in there.
Rigveda Kadam (01:27)
Thanks, Shubs, and thanks for the opportunity to participate in this podcast. It's also my first time on a podcast, so I'll try to do my best. I lead the digital health and AI portfolio at FIND, Foundation for Innovative New Diagnostics. It is a nonprofit that was created more than 20 years ago now, and it's a WHO collaborating center for diagnostic evaluation and lab strengthening.
Rigveda Kadam (01:55)
The work that FIND does is essentially supporting product development and innovators, as well as then evidence generation and product introduction for diagnostics and diagnostic services in the context of low and middle income countries. So we are a not for profit and we are funded via grants and government aid. And we work with now, I would say 400 plus partners across multiple countries.
our biggest country offices being in Switzerland and in India. And more specifically to the topic of the podcast, the work that we do in digital health and AI, it's a cross-cutting unit. So we essentially work across a number of health areas, disease areas. For example, one of the things that we've done, especially in the context of the COVID pandemic, was supporting
countries in looking at their diagnostic data and responding on the ground at a primary and community health care level, but also looking at it at a national level to identify potential hotspots and how to essentially tailor their response in the context of all that data. So the digital health and AI work is to identify what are the current and potential problems impeding diagnostic service delivery, and then of course how digital health and AI, if it can and where it should, fit in to address those problems.
Shubs Upadhyay (03:16)
So are you working in particular with health ministries? Like where do you fit in terms of the layers of the ecosystem?
Rigveda Kadam (03:24)
So we work with Ministries of Health, national lab programs, as well as then in-country research partners. So for example, for product introduction or around policy shaping, we would work with Ministries of Health to look at how and if diagnostic products and diagnostic policies need to be modified and how they are currently being implemented. But then if it is, let's say an earlier stage, a relatively new product that needs
more evidence to be generated to understand if and where it has value, then we would work with in-country research partners to develop context-specific evidence that can guide overall policy development. And one big part of our collaboration is with diagnostic manufacturers and innovators themselves, who we want to make sure are supported adequately as they are working to address these important challenges.
Shubs Upadhyay (04:17)
So you sit in this layer between government organizations and then innovators and healthcare delivery and infrastructure.
Rigveda Kadam (04:23)
Yes, that's precisely right. And that's kind of reflective of this category of organizations called product development partnerships of which FIND is also a type.
Shubs Upadhyay (04:25)
Perfect, okay.
Perfect. Thank you for that comprehensive overview. Let's move over to you, Andrew. It's really, really a privilege to have representation from a health ministry, particularly of Rwanda. And by the way, the podcast that's been released just before this is an interview with an innovator in Rwanda around mental health solutions. They're called Y Labs. So I think this is so great, to kind of have the completing of the circle. So we've had an innovator in Rwanda working on the ground, and then we've got
FIND, who are like a layer between. And then we've got the government layer and the infrastructure layer, and decisions that need to be made on, I guess, what enables them. So please tell us about your role and what your focuses are at the moment. Thank you.
Andrew (05:16)
Yeah, thank you. Actually, I'm Andrew Muhire. I work for the government in the Ministry of Health, Rwanda. I'm the chief digital officer in charge of digitalization at the Ministry of Health. So, luckily, as background, I started working with the Ministry of Health in 2011. And since then, I was really contributing to the digitalization process. At that time, there was a pressing
area that the Ministry of Health was really establishing, to be able to have a health management information system. So from 2013, I was the head of the health management information system until now. So currently I'm heading digitalization, as I said. Actually, what I could say is that I am IT by background, but I also have epidemiology. So that makes me fit well
in the Ministry of Health and also digitalization. So what I could say about Rwanda or the Ministry of Health, actually AI is part of our digital priorities. We have seven priorities. Among seven priorities, we have AI. So the good thing with Rwanda is that we have already laid the environment to support AI. That's why you can see we have partners. Different partners are working with us to make sure that we explore how AI could really improve.
Shubs Upadhyay (06:11)
Perfect.
Andrew (06:38)
the processes at the health facilities, but also the treatment of patients. Yeah.
Shubs Upadhyay (06:43)
Perfect. And I think, I guess, one of the things I perceive from the outside is that, you know, Rwanda is seen, particularly in Africa, as one of the pioneers of how to invest in and think about digital infrastructure in general, and kind of creating these conditions to allow the right innovations to thrive. So it's really, really great to get your insights.
Okay. I think we've got lots of areas that we could talk about. But maybe we'll start with you, Rigveda. FIND work across many, many types of ecosystems, and particularly, you know, many types of low and middle income country settings. So,
as you sit in this layer between innovators and infrastructure, and then kind of enabling or helping ministries to make good decisions on implementation, you probably have quite a good insight on what drives good innovation, what challenges there are, and what the barriers are.
What's your read on kind of the situation globally? And maybe we can then get into some examples.
Rigveda Kadam (07:44)
Yeah, sure. So for the work that we do at FIND, I think what we try to use as our source of truth is essentially the insights that we try and collect from end users, ministries of health and all the partners that I mentioned, because especially when you're working on the innovation and the more sort of emerging tech side of things, it's often easy to get lost in the shininess of all the tech that comes towards you.
Shubs Upadhyay (08:13)
Mm-hmm.
Rigveda Kadam (08:14)
So what we did at FIND a few years ago is that, via 60-plus stakeholder interviews across four countries, along with 500-plus patient surveys, we mapped out what the pain points and challenges were, both at an individual user level, at a patient level, a healthcare worker level, and also at a national health program manager level, that were
stopping people from accessing diagnostic services or diagnostic data as and when they would want to. So we have used this framing to guide our work on what technologies or approaches should be prioritized. And the outcome from all of that, in the context of AI, is guiding our work across four pillars. So the first pillar is around the pre-point of care at an individual level,
where the biggest gap is individuals want to have access to trusted and quality information in a timely manner. So how can AI and digital try to address that in the diagnostic but also in the broader sense? The second category of our work is at a point of care level. So around supporting healthcare workers and clinicians in understanding what is the context specific up to date set of guidelines around screening, testing, treatment, monitoring.
that they should be using, and across different disease areas, because a person coming to see a community health worker does not come in like a TB workflow or a malaria workflow, they just come with a problem. The third category is around health program support. So how do we make sure that tech innovations are supporting health program managers to manage supply chains better, to conduct surveillance more effectively? And then finally, the fourth is more of an ecosystem
kind of an area where, especially in the case of AI, there is a lot of work that still needs to be done around regulatory and evidence generation support. So that, in a nutshell, is how we are structuring our work.
Shubs Upadhyay (10:20)
I really like that, because it starts with the on-the-ground challenges, particularly starting with access. And I like the way that it then mushrooms out from there. So start with, well, the beginning of the person's journey before they even seek care, information as a determinant of health, right? Access to high quality information.
And then, yeah, once someone has hopefully got to the right care at the right time, how do you enable that care to be as good as it can be? And then how does that fit into disease areas that might be a priority or affected at that time, at an almost public health level? And then the broadest ecosystem that supports that. So I think you've highlighted really well the different layers, and centering it around how this affects
and connects what's going on at patient level and then overall health system level. So that's a great outline. Is there anything else in terms of the challenges that you've seen?
Rigveda Kadam (11:18)
So maybe, to make it more concrete, let's use an example. So even while we start with this broader structure to help us figure out which problems we should be focusing on and prioritizing, I think there are still some non-AI-specific and also some AI-specific things that need to be done. For example, let's take radiology.
If you look at the approvals that the FDA has done for software as a medical device that includes AI, depending on the date that you look at, 70 to 85 percent of the total set of approvals are around radiology. If you look at the demand or the user side, the Lancet Commission on Diagnostics has flagged that there is a clear gap when it comes to access to imaging in low and middle income countries. In a multi-country survey,
only two out of 22 countries had imaging technologies available at the primary health care level. And this is despite the fact that if you were to rank the global burden of diseases by DALYs, by disability adjusted life years, seven out of the top 10 of these diseases have imaging as a tool that can help address that disease. So all that to say, you know, on the innovation side, there is innovation if, you know, if we were to take
Shubs Upadhyay (12:34)
Hmm.
Rigveda Kadam (12:41)
the FDA approvals as a proxy. On the need side, clearly there is a need. But then why is it that we are not moving as fast as we can, because X-rays and ultrasounds have been around for a while and now we of course have AI? So as I was saying earlier, there is a non-AI component to it that does not go away just because AI has arrived on the scene, so to speak: around infrastructure, and investing in really non-digital infrastructure, but also digital public infrastructure.
Shubs Upadhyay (13:02)
Mm-hmm.
Rigveda Kadam (13:10)
But then with AI, there is an opportunity to address some of the challenges around access to trained personnel. So that's the part where first making sure that you are designing with the user becomes critically important. Because what we are looking at, apart from the infrastructure part, is getting people to adopt something that's new and that has so many different perceptions out there. Will it take away my job? Is it something that people are going to hold me accountable for if it makes a mistake?
So these are some of the challenges that we do need to address even when it comes to AI.
Shubs Upadhyay (13:45)
I wanted to bring your perspective in here, Andrew. Do you have any examples from the Rwandan perspective of addressing some of the challenges of implementation?
Andrew (13:52)
Yeah, yeah. Actually, thank you, Rigveda. Actually, we're working together on different AI tools that we are really implementing in Rwanda, including the radiology one. So for our side, actually, I liked the idea of the patient journey. And most of the platforms that we're implementing, I mean, digital platforms, actually we are much more on the patient, we are centered on the patient. So looking at the Rwanda context, actually, you can see that the patient starts at the
community, then at the primary healthcare, which is the health facility, then at the tertiary on top. So through the whole flow, we are looking at the use cases that can really make it easier for the patient to have a better experience while they are moving through the flow. So the first one was looking at the community health workers, to see what could we do to make AI
Shubs Upadhyay (14:43)
Mm-hmm.
Andrew (14:48)
really help community health workers to interpret some of the samples or exams that they perform, and make it easier for them to take decisions. So we have one AI for community health workers that interprets an RDT (rapid diagnostic test). Then from that interpretation, it makes it easier for the community health workers to know what to do at the end of the day.
So that is one: it's like interpreting the guidelines and standards, and making sure that at least the system is able to interpret the image from the RDT. They just run the RDT, then take a picture of the output, and the system will say, do A, B, C, D, this is what happens. Then another one is reducing the overuse of antibiotics. That one also we have really implemented, and we have a proof of concept.
Now we are looking at how we can inject it into our overall electronic medical records. So maybe those are the two, alongside the radiology one that we are also looking at. But again, what I could say here, actually, is that using AI in the patient flow, and making it add value on the side of the clinician and the patient, reduces those challenges and the resistance that we're talking about. Yeah.
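To make that rapid-test workflow a bit more concrete, here is a minimal sketch in Python of the kind of pipeline being described: a community health worker photographs the RDT cassette, an image model classifies the result, and the result maps to a guideline-based next step. The classifier, labels, threshold and guidance text below are hypothetical placeholders for illustration, not the actual tool deployed in Rwanda; the point it illustrates is that the system suggests a next step rather than replacing the health worker's decision.

```python
# Hypothetical sketch of an RDT photo-reading assistant for community health
# workers: classify the cassette photo, then map the result to a next step.
# The "classifier" here is a dummy stand-in so the example runs end to end.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class RdtReading:
    label: str         # "positive", "negative", or "invalid"
    confidence: float  # model confidence in [0, 1]

def classify_rdt_photo(photo: Path) -> RdtReading:
    """Stand-in for a real image model trained on RDT cassette photos."""
    return RdtReading(label="positive", confidence=0.95)  # dummy output

# Guideline-style mapping from result to the CHW's next action (illustrative).
NEXT_STEP = {
    "positive": "Start treatment per national guidelines and notify the health centre.",
    "negative": "Do not treat for this condition; assess other causes or refer.",
    "invalid": "Repeat the test with a new cassette.",
}

def advise(photo: Path, min_confidence: float = 0.85) -> str:
    reading = classify_rdt_photo(photo)
    if reading.confidence < min_confidence:
        # Low-confidence reads go back to the human rather than auto-deciding.
        return "Result unclear from the photo; read the cassette visually or refer."
    return NEXT_STEP[reading.label]

print(advise(Path("cassette_001.jpg")))
```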
Shubs Upadhyay (16:11)
100%. And if we use the mapping that Rigveda mentioned at the beginning, of those four kind of layers of the ecosystem, I think the examples that we've mentioned here are really at the point of care, right? How do we enable and create value at the point of care? And so, I mean, that's really great: creating standardization of interpretation of rapid diagnostic tests, super important. And also, yeah, absolutely, a WHO global priority to reduce antibiotic overuse.
To me as a clinician as well, I wanted to get into this thing around how that is perceived on the ground by healthcare workers, so either community health workers or even physicians or nurses on the ground. Of course, at health system level, there are decisions to say, okay, we need to standardize and make sure that there's reduced antibiotic prescribing. Absolutely, that's important.
And most clinicians would say, yes, absolutely, that needs to happen. But then implementing a tool that at one end of the spectrum is assisting that decision, but at another end of the spectrum of interpretation is making that decision for me. How does that land on healthcare workers in Rwanda? Do you see challenges there? How is that
being adopted and being accepted as a part of a new way of delivering care.
Andrew (17:32)
Yeah, that's a good question. Actually, first of all, everything starts at the policy level, where you have a policy that you have to reduce the overuse of antibiotics and also enforce the standards of treatment. So in Rwanda, we already have the standards of antibiotic treatment that really guide clinicians on how they have to prescribe these antibiotics. So what we did was take the AI and
use it to enforce the policy. Then what happens is that the clinicians continue the way they are doing their practices, but the system at the end of the day will suggest and say, this is the medication you should give. It doesn't stop you from taking your decision, but at least at the end of the day it shows: these were the antibiotics that were given that were not supposed to be given, and these were the ones that were given that were supposed to be given. At least that helps them, when they are discussing internally,
to discuss and see what the overuse of antibiotics was. They have statistics, like 20% of the antibiotics that were prescribed in this facility where there was really no evidence for prescribing them. Then at least, you know, it was not appropriate for that situation. At least you can see that AI can go through all the decisions we're making and is able to tell us what we are doing that is really not aligned with the standards. So.
Shubs Upadhyay (18:43)
Yep, it was not appropriate for that situation, for example.
Andrew (18:57)
Coming back to clinicians, they see it as an added-value tool, because at least it's like another eye that tells them, even though you took this decision, you didn't consider A, B, C, D. So that's why at least we found that many of them were really happy to have a tool that is able to guide them. But again, on the other side, some of them see it as if we are forcing them not to do it the way they are supposed to do it.
Maybe some of them will even say that some of that 20% was supposed to be prescribed and maybe the tool didn't capture it. But again, the more you work with them, the more you bring them into the way the algorithms are being configured, most of the time they find that the tool is really adding value in what they're doing.
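As a rough illustration of the retrospective check Andrew is describing, here is a minimal sketch in Python: prescriptions are compared against a guideline lookup, and the share of antibiotic prescriptions that is not aligned is reported back to the facility. The guideline table, diagnoses and records are invented for illustration, not the actual Rwandan implementation or its national standard treatment guidelines.

```python
# Toy guideline-concordance check: flag antibiotic prescriptions that do not
# match a (hypothetical) first-line recommendation for the recorded diagnosis.
GUIDELINE = {
    "uncomplicated_uti": {"nitrofurantoin"},
    "community_pneumonia": {"amoxicillin"},
    "viral_uri": set(),  # no antibiotic recommended
}

def flag_prescriptions(records):
    """Return (flagged_records, percent of antibiotic prescriptions not aligned)."""
    with_antibiotic = [r for r in records if r["antibiotic"]]
    flagged = []
    for rec in with_antibiotic:
        recommended = GUIDELINE.get(rec["diagnosis"])
        if recommended is None:
            continue  # diagnosis not covered by this toy guideline table
        if rec["antibiotic"] not in recommended:
            flagged.append(rec)
    pct = 100 * len(flagged) / len(with_antibiotic) if with_antibiotic else 0.0
    return flagged, pct

# Example facility-level report, analogous to the "20% not aligned with the
# standards" statistic mentioned in the conversation (numbers are made up).
records = [
    {"patient": "A", "diagnosis": "viral_uri", "antibiotic": "azithromycin"},
    {"patient": "B", "diagnosis": "uncomplicated_uti", "antibiotic": "nitrofurantoin"},
    {"patient": "C", "diagnosis": "community_pneumonia", "antibiotic": "amoxicillin"},
    {"patient": "D", "diagnosis": "community_pneumonia", "antibiotic": None},
]
flagged, pct = flag_prescriptions(records)
print(f"{pct:.0f}% of antibiotic prescriptions not aligned with the guideline")
```

Note that, as in Andrew's description, nothing here blocks the clinician at the point of prescribing; the comparison runs after the fact and only produces data for internal discussion.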
Shubs Upadhyay (19:39)
Yeah, that's really useful, because there are a couple of things to pick up there. I think it relates to what Rigveda was saying earlier, which is, you really have to involve the end users, and end users are not just patients. They are the clinicians who are delivering care, particularly when you have clinical decision support systems, especially. Right. And so if you want a decision to be supported,
There's so many pillars to driving that trust, explainability and making sure that that makes sense in terms of the guidelines and how clinicians are able to interpret those results. I think ultimately for any healthcare worker, they would think, okay, yeah, all this technology is well and good, but
Like, the clinical liability falls on me. And so it's great to have these things that help me think of certain things that I might not have thought of, but ultimately the clinical decision comes down to me. So being able to see it and rethink how we make decisions, and where it fits in that decision flow, seems to me like an important enabler for healthcare workers to trust this.
Do you also get feedback where clinicians feel threatened by this kind of clinical decision support?
Andrew (21:00)
So actually, this is a new area that we are trying to bring into the existing practice. And actually, it's the change management. Some of them may feel that you're bringing another person to take decisions on their behalf, and then they feel maybe threatened that AI is taking over their role. But I think most of them, they are
Shubs Upadhyay (21:09)
Mm-hmm.
Andrew (21:22)
thinking that it's really simplifying the way they're treating patients. Because instead of taking much time to think and take a decision, there's another tool that immediately tells you, look at this, this, and this, and this is the interpretation of what you entered. So of course, it's something that we're trying to bring into a process that has existed for a long, long time. That's why working with them and having them
design and give input in the whole process makes it easier at the time we want them to use it. So that designing together, having them participate, actually reduces most of this resistance from the side of clinicians.
Shubs Upadhyay (21:57)
Absolutely,
And I think that comes full circle: you know, when you involve people, and that might be communities and patients, they're more likely to trust it. And the same goes for clinicians. So thinking about that as co-designing or co-developing or implementing together with people, rather than forcing things on people, means they're much more likely to adopt them.
And then you really do get the overall benefits because it's actually being used. Rigveda, do you have any other examples from other parts of the world of how this has either gone well or not so well?
Rigveda Kadam (22:46)
Sure, and while we're on the topic, I do also want to mention the Principles for Digital Development. So these principles were put together by a number of stakeholders working in the digital health space and are shepherded by DIAL. But in a sense, it's kind of like a handbook to ensure that digital innovations and work, including AI, follow a set of principles that have really been put together based on
Shubs Upadhyay (22:53)
Yes.
Rigveda Kadam (23:13)
years and years of successes and failures and lessons learned. And why I'm mentioning that is one of the key principles there is designing with the user. But then also how do we define relevant stakeholders and users? So to your point, it's not only the patients or individuals who are receiving care, but the people who will use the tools. And so in that context, in our work, we've seen a couple of models as to how country partners have chosen to engage with the stakeholders.
For example, both in India and in Kenya, partners have chosen to engage with sort of these stakeholder groups of physicians, to try and understand how they perceive, let's say, the use of AI in ultrasound, and what are some of the concerns that they have. For example, one of the concerns: even though task shifting is something that is possible with AI-enabled ultrasound, there are concerns around, okay, what does it mean
to task shift at a community health worker level, and what would then be the changes in policies and checks and balances that would be needed. So there is this model. And then the digital transformation office in Indonesia has taken another approach, which is looking more generally, without getting into product-specific use cases right away, at how do we engage with clinicians more broadly and educate them on just
broader applications of AI in health, and give them an opportunity to surface some of their concerns. So you've seen both of these working, but I think it's important to have that space, because we've also seen resistance to adopting, let's say, AI-enabled ultrasound in some contexts because of concerns that have been voiced in other areas but maybe weren't actually socialized or consulted on in the specific geographies where it's not happening.
Shubs Upadhyay (25:07)
Yeah, absolutely. And I think this is such an important takeaway, because, you know, ultimately, whether you're building or whether you have a top-down policy that you need to impose, to use the same analogy of top-down, you need the bottom-up buy-in. Otherwise it just doesn't happen. So aligning incentives, creating the conditions and involving people in it.
So, for example, with the example that you've used, Andrew, the top-down policy and the mandate that needed to come from the government was: we need to reduce the overuse of antibiotics. And then almost working with people: okay, we might have some opportunities with technology, but how do we work with you to do that? And there might be some other layers around the technology as well that help drive reduction, i.e. there might be other reasons
that are not solved with, you know, as Rigveda mentioned earlier, there are the non-AI aspects and then there are the AI aspects. And so there might be a lot of other reasons and drivers for why antibiotics may be prescribed. So for me it's: okay, if you've got this kind of initiative or priority, AI or any digital tool is one part of that, but actually to solve the problem, we need to think about all of the factors that contribute to it. Because otherwise, if you just deploy
the AI, other things are still happening. So maybe there's increased pressure from the community, or people aren't educated about what antibiotics do. And so they are insisting to the healthcare worker, who maybe just wants to say, okay, well, you know, this person is insisting, I don't want to get in trouble, just take it, kind of thing, right? And so that's another driver which won't necessarily be solved only by AI. So that was another reflection on
thinking holistically about the problems that we're trying to solve, with point of care and access to information, and ultimately to achieve the goals that we want. Any reflections on that from you, Andrew?
Andrew (27:06)
Yeah, actually, you are pointing at the real thing. Actually, it's not only AI, looking at the antibiotic example, because you will see that there is this pressure, there is patient pressure. When you discuss it, when we are discussing as patients, because I'm also a patient, you will see patients saying, this is the better doctor because he gave me this batch of medication. But this other one, when you go, he just says, no, no, no, no, go and take water and whatever.
Shubs Upadhyay (27:32)
Mm-hmm.
Andrew (27:35)
So I think there is that pressure, but again, here what we are talking about is, are we able to track the magnitude of the problem? So that if we are enforcing the policy, at least we have a tool that can tell us, even though there is this pressure, the pressure has this weight. Then when you are talking about awareness and bringing in the patient side and whatever it is, when you have data,
Shubs Upadhyay (27:44)
Mm-hmm. Yep.
Andrew (28:04)
it makes it easier for you to plan for awareness and to bring in different stakeholders to make sure that the problem is addressed from different corners. So the problem we had at that time was that there was no data on the magnitude of the overuse of antibiotics. So bringing in AI, at least it is able to pick up some cases
Shubs Upadhyay (28:20)
Mm-hmm.
Andrew (28:26)
from different corners, then at the end of the day, say it's at 20%, 10%, 5%. Then that way it makes it easier for the policymakers at the Ministry of Health to take decisions.
Shubs Upadhyay (28:36)
That's very useful. And I think it's related to the podcast that I did with Ruchit and with Khushi Baby, talking about community health workers in India. And that was that, first of all, sometimes we rush to, we need to reach this outcome, we've got this ambitious goal, let's do it and let's drive. And that's really important. But actually, first, we really need to work with people to actually get the real on-the-ground reality.
Right. And so I can translate the similar type of problem that Ruchit was talking about with TB: first needing to understand the ground reality of how bad the situation with TB was, rather than saying, okay, we need to get rid of it, so everyone reports that there is no TB. I feel like there are similarities or parallels here, because here, yes, it's important to reduce antibiotic overuse.
And if you come in very heavy from the top, then people are like, okay, well, I'll do that, and maybe not interact with it so well. But actually, if you involve people and say, okay, well, we need to do this, let's do this together, let's work with you to find out, through good data collection, what the situation with antibiotics is, and get a real sense of the reality and the shape of the problem. Which antibiotics are being prescribed? Are there particular types of
organ systems or types of illness where they're being over-prescribed? And then you might be able to have other types of campaigns, like patient education campaigns, etcetera. So, like you said, driving good government decision-making or health system decision-making by actually knowing what the shape of the problem is on the ground, through good data. So I just wanted to press on that, because I think that was really valuable and connected to something
on one of the previous podcasts. And like you said, AI is a tool to help us get to the goals that we are hoping for in healthcare, right? And in health overall. And so really thinking about and attaching it to the use cases across the healthcare system, through, I guess, the four layers that you talked about, Rigveda.
So I think that's a really nice way to kind of bring that all together. Okay. I wanted to move on if that's okay. One of the areas of work is evidence generation and
clinical evaluation, and I guess the overlap this has with that fourth layer is then also around regulation and the overall ecosystem.
Even in high income settings, sometimes measuring the right things and evaluating the things that matter are difficult. How is this in LMIC settings?
Rigveda Kadam (31:11)
So I would maybe just speak more narrowly to the work that we are doing, of course driven by what we've understood from the needs of our partners. But to your point, I completely agree that in terms of identifying measurable outcomes for AI-driven interventions, I think there's a lot of work still to be done as to what you are looking at. Are you looking at cost savings? Are you looking at improved quality of care? Are you looking at better
Shubs Upadhyay (31:16)
Yeah, sure.
Rigveda Kadam (31:39)
clinician and individual interaction? So that's one area that we are looking at, via focus group discussions with healthcare workers in partner countries. But the bulk of our work has been around supporting clinical evaluation of AI-driven diagnostic innovations. The problem there that we're trying to solve is, if you look at the AI product life cycle
and if you look at generally the non-AI evaluation life cycle, there is a huge mismatch: if you look at IVDs (in vitro diagnostics) or any other health product, and if you look at how much time it takes to generate evidence, it's measured in numbers of years. And by that time you will probably be on version six or seven of the AI product. So what we've been doing is working with partners and the WHO's tuberculosis program.
Shubs Upadhyay (32:23)
Mm-hmm.
Rigveda Kadam (32:31)
And we have set up this rapid technological evaluation platform, where partners can install AI software and where we then have been able to collect, again with our research partners, sets of data. So chest X-ray images, let's say, which have been collected from different WHO regions; they have been annotated and have metadata. And then these software products are rapidly evaluated on these data sets.
And then each of the evaluation points is also compared with expert human interpretation. So the entire process tries to rapidly look at how the AI software performs against the standard of care in this context, which is human beings reading the X-rays. So that's been something that we've been working on for two, three years now, and it's been really a pioneering effort. And we are supporting the WHO to
apply that for the chest X-ray image analysis software developers for tuberculosis screening. So that's been one really great learning experience for us as well.
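To make the kind of comparison Rigveda describes more tangible, here is a simplified sketch in Python of scoring an AI chest X-ray product against expert human reads on annotated datasets from different regions, reporting per-region sensitivity and specificity. The data structure, region codes and numbers are illustrative assumptions, not FIND's actual platform, datasets or pipeline.

```python
# Score an AI product's per-case outputs against expert reference reads,
# broken down by region, to see whether performance holds across contexts.
from collections import defaultdict

def score_by_region(cases, threshold=0.5):
    """cases: iterable of dicts with region, ai_score (0-1), expert_label (0/1)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for c in cases:
        pred = 1 if c["ai_score"] >= threshold else 0
        truth = c["expert_label"]
        key = ("tp" if truth else "fp") if pred else ("fn" if truth else "tn")
        counts[c["region"]][key] += 1
    results = {}
    for region, k in counts.items():
        sens = k["tp"] / (k["tp"] + k["fn"]) if (k["tp"] + k["fn"]) else None
        spec = k["tn"] / (k["tn"] + k["fp"]) if (k["tn"] + k["fp"]) else None
        results[region] = {"sensitivity": sens, "specificity": spec}
    return results

# Tiny illustrative run: the same software can perform differently by region,
# which is exactly why context-specific validation matters.
cases = [
    {"region": "AFRO", "ai_score": 0.9, "expert_label": 1},
    {"region": "AFRO", "ai_score": 0.3, "expert_label": 1},
    {"region": "AFRO", "ai_score": 0.2, "expert_label": 0},
    {"region": "SEARO", "ai_score": 0.8, "expert_label": 1},
    {"region": "SEARO", "ai_score": 0.6, "expert_label": 0},
    {"region": "SEARO", "ai_score": 0.1, "expert_label": 0},
]
print(score_by_region(cases))
```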
Shubs Upadhyay (33:40)
And I think that's particularly important, because, you know, we live in a world where an AI algorithm is developed in one context, and sometimes people think that it's very easy to then just copy and paste it into another context. And in some ways, some principles apply and are generalizable, but some things need to be adapted to the local context. So it sounds like what you're doing is being this
validation layer that helps people prior to something being implemented in a certain context, and having some independent, objective view on: okay, based on this context, how is your model or how is your product faring?
I guess, to then give assurances to decision makers like Andrew, to say, okay, it's normal, it's fine that something is developed in another context, but I want some assurances that this actually works in my local context, that it is appropriate, that the outputs are appropriate and appreciate the nuances and the issues around the population, and that the product itself is not only clinically adapted, but also locally and
Shubs Upadhyay (34:44)
culturally perhaps adapted as well. So, and I think that's an important thing. you know, the WHO have done a lot of work on this, like the ITU/WHO focus group, AI for Health have kind of been trying to do this in other modalities as well. And this, think this is an important piece of work. So if you can give us a link to that platform and that work, if anyone is in this kind of TB space and wants to kind of look at that, then they'd be able to access that, I guess.
Rigveda Kadam (35:13)
Yes, definitely. And incidentally, we call this platform the validation platform. So to your point, that is around AI software validation.
Shubs Upadhyay (35:21)
Yeah. And I think rapid evaluation and kind of getting these assurances is really important. When we talk about outcomes, and we talked about how things affect the people who are delivering care and patients, part of this, because you also mentioned the differences between, you know, hardware medical devices or medications, for example, drugs,
and then the nuance of difference with software as a medical device. And part of the measurement of success is also, well, what impact does this have for people? And so for me, there's also this kind of qualitative aspect, which I think traditionally academia maybe scoffs at as less good, because we want hard numbers, right?
But it seems to me in discussions that I've had, that actually is very, very important to have, especially when we're thinking about adoption and trust, to have this qualitative aspect. Do you have any insights around measuring qualitatively the experience of healthcare professionals or patients in terms of their use and interaction with digital tools? Because ultimately that's going to be a marker of success.
Rigveda Kadam (36:34)
Yeah, definitely. I also want to say that, I mean, if you look at the private sector, so some of my friends are working in these big FMCG companies. There, for any product launch, focus group discussion and qualitative research is a key part of the product launch strategy. Because to understand contextual drivers of adoption and barriers to adoption, these tools are really like a key part of making sure that the product that is being launched is actually
Shubs Upadhyay (36:46)
Mm-hmm.
Rigveda Kadam (37:04)
fit for purpose. For us, for example, for a digital health, sorry, a digitized community-level integrated disease screening algorithm, we conducted, with our partner in Kenya, Jomo Kenyatta University, focus group discussions with community health workers to understand how and if they would want kind of an integrated, digitized approach to going into the communities and screening.
And there were some really great feedback points. So one was, you know, just being able to understand that, at least in that population group, they didn't have any resistance or aversion or fear of having the tool come in and sort of substitute what they're doing. But they did have very concrete feedback, which is, one, that it needed to be very agile, something they should be able to work with very quickly. And second, and this was more on the
policy side, they felt that the current community-level guidelines did not cover the needs of the communities. So why are non-communicable diseases not included in the digitization? So really, I think that was a good way to, maybe not uncover brand new insights, but to have a way to document and share: this is what is needed, and it's for us to then work on.
Shubs Upadhyay (38:12)
Mm-hmm.
Sometimes we're just so focused on the optics and the success that we want to create that we miss opportunities to learn. And then it becomes a very black and white thing: was this a success or was this a failure? Whereas actually it's a continuous learning process. So for me, this part that you've mentioned is such an integral part of, how do you build in those insights that you get, and how do you set yourself up as an organization or a health system or an implementation partner
to know that, okay, we need a mechanism to actively seek out these insights and plug them into the improvement cycle that we're going to have. So I think that's really key. Another thing that you mentioned, Rigveda, was cost effectiveness, and, you know, everyone talks about cost effectiveness. And from a ministry perspective, Andrew, you're thinking about, okay, I'm going to be investing in the infrastructure for this or investing in this innovation.
You as a ministry or a government need to see that it's cost effective, and perhaps have some ROI that you can then report back. How do you think about this? Because sometimes it's not just the intervention itself that will give you ROI or be cost effective right at the beginning. How do you think about this, especially with Rwanda being kind of more forward-thinking, knowing that you have to invest in the conditions and it can take some time before you see the big upside gains?
So yeah, it would be great to get your reflections on this, Andrew.
Andrew (39:55)
Yeah, thank you. Actually, it's a good question for researchers and academia. But maybe at the Ministry of Health level, actually, when you look at the health care environment, it is a complex environment. And again, to solve or improve the processes there, you need to have a systems thinking approach. It's many components connected to contribute to something you want to achieve.
So maybe bringing in the AI perspective: the way we are handling AI, we are not handling AI as a tool to come and connect all the components. I mean, when you bring in the systems thinking approach, we bring AI into the area where we feel it can improve and contribute to the overall environment.
For us, our way is really embracing technology and AI
And then it leads to the care, the quality care we are talking about. But for us, on our side, actually, the reason why we invest is because we take a task. We don't take the overall care, we just take a task. And we say, when we look at the whole process, we can see that this task, if it can be taken by AI, it can be
performed better than by a human, or it can contribute to the human. So our evaluation will just go into this small task where we really injected AI, to see how it was before and how it is after. Then we can compare. As I always say, if it was maybe taking longer to interpret the images,
and we find maybe it takes one hour to interpret an image, and we put in AI and we find it taking 30 minutes, we just say the contribution of AI is 50%, then including the other patient flow. So that is how we do it. If it's screening, and we knew that we didn't have people to go to each village and do screening, and we put in an AI machine and it's able to screen at least at the village level, we just say: before there was no screening at the village, but now we have screening at the village. That is how we are
really interpreting it and that's how we evaluate it. But again, it's not a question of AI only. I think even when you read scientifically, there is less evidence on how digitalization contributes. It's not easy to attribute the contribution of digitalization, including AI, to overall quality of care.
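As a tiny worked sketch of the task-level framing Andrew lays out, the "contribution" of AI to a single task can be reported as the relative change in that task before versus after. The numbers below are just the illustrative ones from the conversation plus a hypothetical screening count, not real measurements.

```python
# Task-level evaluation sketch: compare one task before and after AI and
# report the relative change, rather than trying to attribute overall
# quality-of-care outcomes to the tool.

def contribution(before_minutes: float, after_minutes: float) -> float:
    """Relative reduction in task time, as a percentage."""
    return 100 * (before_minutes - after_minutes) / before_minutes

# Image interpretation: 60 minutes before, 30 minutes with AI -> 50%.
print(f"Radiology read time contribution: {contribution(60, 30):.0f}%")

# Screening coverage framed the same way: no villages screened before,
# some screened now, so the change is reported as new coverage rather
# than a percentage reduction.
villages_before, villages_after = 0, 12  # hypothetical counts
print(f"Village-level screening: {villages_before} -> {villages_after} villages")
```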
Shubs Upadhyay (42:28)
I think you've summed up, in a nutshell, the real challenge, right? You kind of have this intervention that everyone conveniently wants to ask, okay, did this thing work, right? And therefore you need a baseline. But the paradox for me is that, well, yeah, it's not just this thing in isolation. It changes so many things
in the whole interaction, in the whole way that we deliver care. There's this old adage: a rising tide lifts all boats. So the way that might translate is, okay, we have this initiative, we're going to have AI, and therefore we're going to do all these other things to make sure that the AI works. And so I like the way you thought about attribution or contribution, and recognizing that some of it will be directly due to the tool and the implementation of the tool,
but some of it was just related to how people changed what they were doing around that as well. And so I like the way that Andrew talked about it along attribution and contribution lines. And it's okay to think about it like that. Because people want to put it, I guess, neatly into a box. But yeah, medicine and healthcare in general, and life, is just much more messy than that.
Shubs Upadhyay (43:42)
I really liked the way this thinking embraces that. Even though it's complex, it still gives you some way to know how things are doing. And it relates, I think, for me, to Stephen Gilbert's podcast, where he was saying that even in terms of regulation and evaluation, we should be thinking about systems overall, and not just individual components of AI. So I think that's a really great point. Okay.
Moving on. For me, this next area that I wanted to discuss is a bit more zoomed out. We've got Rigveda, your view on kind of the global ecosystem, and then Andrew, we've got your kind of super local context. There are lots of challenges in the global digital health space.
Let's start with Rigveda and the zoomed-out view. Do you have a wish list, or an area of priorities that you see, for the ecosystem as a whole or this community as a whole to address?
Rigveda Kadam (44:38)
Sure, Shubs. And I would say, although we focus a lot on the challenges, I think it's also great that, you know, with the whole overall buzz and hype around AI, it's not that healthcare has been left behind, which sometimes happens. And I'm really glad that we are seeing so much investment as well as innovator interest. So for me, the wish list would be all around making sure that, you know, in this phase
we can maximize the gains for the intended users and the communities. So I would say the first point on my rather long, but I'll keep it short, wish list would be that it would be great to have a normative body like WHO sort of pull together stakeholders and provide, let's say, a priority R&D kind of roadmap for where they think AI can add most value.
And this is something WHO has already done in other areas, like the priority R&D roadmap for Disease X or for primary healthcare. So I would really want to have that, mostly from an end user perspective, but of course we can contribute from the FIND side as well, because I think that will help so much in targeting the ongoing interest and investment, as well as innovator effort, towards areas that are in dire need of these things. So that would be point one.
Shubs Upadhyay (45:39)
Mm-hmm.
Rigveda Kadam (46:02)
And then point two would be, again, more on the previous discussion around regulatory and evidence generation support. I think one key enabler for ensuring that innovators keep coming and working on these problems is how easy it is for them to get to market, while also ensuring that the interests of the end users and individuals are protected. And, I know there's already a lot of work that has happened, but
taking the examples and successes of partners like Rwanda and showcasing them more globally, like this is working well, this is not working well, and moving more rapidly on that would also be a great wish list item.
Shubs Upadhyay (46:45)
Great, okay, thanks for sharing those. And Andrew, from your context, being part of the digital strategy in Rwanda, and from your perspective at the Ministry of Health, what do you want to say to the global community around what you want to see in this space?
Andrew (47:06)
Yeah, thank you. As I said, actually, the health care environment is complex. I think that is the area where AI can play well, and it makes it even easier for us to evaluate it. Another thing that I could say is that the government of Rwanda, not only the Ministry of Health, the government of Rwanda is really promoting the use of AI. So from the Ministry of Health's perspective, I don't see the future of medicine without AI.
That's why actually we are trying to bring together all the academia, researchers, stakeholders, private sector, to see how we could work together to have all these innovations well coordinated and also channeled well into the patient flow. And again, I always say the patient flow because we are patient-centered, because whatever contribution digitalization, and AI, can make is around the patient.
But the last one that I could really echo, from the rest of my colleagues around the community and other places: I think something that we need to build, maybe to learn from Rwanda, is having an environment that really promotes and incentivizes these startups and innovations, as Rigveda said, where if you have an innovation around this area we are talking about,
at least we have that infrastructure and environment that really promotes and incentivizes these innovations. So that is what I can say. But for the Rwanda context, for us, the priority actually is the use of AI in the complex setting of health care.
Shubs Upadhyay (48:48)
Yeah, thank you for sharing those insights and that message. I was thinking of wrapping up there, but one of the things that you mentioned that I wanted to get your insights on, if you could, is the global picture on regulation, because obviously it's important to incentivize and make sure that innovation can happen.
And then there's the regulatory side of things, to make sure, in this complex and really important field, that regulation has the checks and balances and quality and safety aspects. And different countries have different risk appetites and approaches to regulation. So, for example, Europe is seen to take a very conservative approach,
and at the moment, at least, other places like the US are not taking as conservative an approach as the EU, if I were to put it diplomatically. How do you take this in terms of your own
government's approach to regulation?
Andrew (49:43)
Actually, for the Rwanda context, we value the solutions that bring value to what we're doing. And actually, we don't think so much about AI or digital or whatever; it's much more about what we can do
that can solve the problem we have currently, and maybe it's cost effective, as you said. So the reason why we are talking about AI is actually making sure that we simplify the processes and services we are providing,
and also that we simplify some complexity in the flow. So that is why actually Rwanda is embracing the use of AI, to make sure that at least we improve the quality of service, we improve the quality of care, we improve the quality of what we are doing currently. But again, also scientifically, we contribute to the process of treating patients. So that is why I think, for Rwanda, when you put the risks and the benefits, we really balance them
Shubs Upadhyay (50:37)
Mm-hmm.
Andrew (50:44)
and we try to mitigate the risks. So we try to make sure that we improve the safety of the patient, we improve everything that is on the risk side. But again, we make sure that the benefits are the ones that we are promoting, and making sure that at least we are really benefiting from the output of the innovation around the flow of the patient or healthcare. That is what actually we are promoting currently: just the solving of the problems,
and then we mitigate the risks.
Shubs Upadhyay (51:14)
And does Rwanda have its own medical device regulatory framework that, for example, innovators would have to go through, and certain documentation and processes that they would have to follow, or certifications? Does that exist?
Andrew (51:30)
Currently we have a Ministry of ICT AI policy that really defines all the implementation of AI in different areas and how we can collaborate. So at least these policies have attracted the private sector and different stakeholders to work with us. But again, also having it in our strategy that it is the priority: if we have like seven priorities and say AI is part of the priorities, it makes it easy for Rigveda to check on the priorities and say, me, I'm fitting here.
Shubs Upadhyay (51:47)
Great.
Mm-hmm.
Andrew (52:00)
But if it's not in the priorities, some people may not be motivated. I think motivation, I mean, is around policies, strategies, and making sure that the environment really promotes the use of AI.
Shubs Upadhyay (52:02)
Mm-hmm.
Yeah, that makes a lot of sense. So I wanted to bring things together as we wrap up. I think the main things I take away from both of your insights: the first one is, if we look at that continuum of the patient journey, there's pre point of care, then adding value at the point of care, then you've got the health program or public health or population health level, and then you've got the whole infrastructure and the ecosystem around that,
regulatory, evidence generation, data infrastructure, at that level, and then thinking about how that propagates. I think one of the key messages I took is, in terms of the changes that we want and the outcomes that we want, we really need to zone in and narrow down on the first two steps of that. Like, how do we improve things for patients in terms of how they experience care and how things are improved for them day to day?
And also, how do we enable healthcare workers to be able to deliver that quality care, and then think through that continuum about what use cases can add value. And I think that's a key thing, because, you know, everyone thinks, yeah, AI is this shiny new tool, let's think about where we can shoehorn it in.
I think it's about thinking, okay, well, what are the overall healthcare problems that people are having on the ground that we need to solve? And then there are lots of ways you can be innovative, even in terms of processes or giving people good information and non-AI things, as you mentioned, Rigveda, and then how do we augment this with technology, right? And then I really also like this concept of systems thinking, and thinking about, okay, when we generate evidence,
And when we try to think about the overall outcomes that we're trying to achieve, and not thinking about the tool itself in isolation, but also how it interacts with people and all of those layers around it,
And we know that there's lots of things that might contribute to the overall outcomes and also the way that the tool is adopted. So I've got those as takeaways. Any kind of final messages from either of you?
Rigveda Kadam (54:11)
Maybe just one reiteration: I think involving intended users and beneficiaries in the design consultation would be super important.
Shubs Upadhyay (54:20)
Yes.
Yeah, going back to the first concept we were talking about: super, super important. And yeah, not thinking about end users just as patients, as passive receivers of technology, or even clinicians or healthcare workers as passive minions who have to use this technology, but actually involving them and saying, hey, here are the big, big challenges that we want your help in solving, so how do we do this together, and involving them in those steps. Yeah, absolutely. Andrew, how about from your side?
Andrew (54:50)
Patient-centered design.
Shubs Upadhyay (54:52)
Perfect, yeah, absolutely.
It's been so, so valuable to have this level of discussion with both of you, for people who are building, for, I guess, the donor community, and also for researchers and academics as well.
Thank you so much for your insights. I'm really looking forward to seeing how things evolve in the Rwanda ecosystem and reading more about the work that FIND are doing as well. And I hope that your collaboration that you're doing together in Rwanda goes fruitfully as well. Thank you so much.
Rigveda Kadam (55:22)
This was a pleasure to be on.
Shubs Upadhyay (55:26)
Thank you.