To understand why it improved care, if it did, was as important, if not more important, than understanding that it did.
Because that will help us to understand how we need to potentially modify or adapt the intervention either in the same place or in future places to be able to achieve the same success.
🌍 Global Perspectives on Digital Health
A podcast unpacking the stories, insights, and innovation shaping health systems and underserved communities.
🎧 Listen on Apple, Spotify. Watch on YouTube
Shubhanan Upadhyay (00:00)
You are tuned in to the Global Perspectives on Digital Health podcast. If you're new to this podcast, here's a little bit about it. The angle is really to create genuine two-way learning between different contexts. And to give people, especially who are based in the EU,
the US, and the UK (I'm based in Europe), a bigger-picture view on digital health. The idea is really to share the insights of people who have been implementing, researching, and building policy particularly aimed at underserved communities, building with underserved communities,
and understanding what it takes to have good implementation of digital health and AI that serves everybody. I think this fills a gap that's sorely missing in our digital health industry discourse and content, which is hyper-fixated on so-called high-income settings.
And so it's really to share learning across contexts in addressing these problems. And on that, today we have a really, really experienced guest. We talk to people across the layers of the ecosystem. Today we're talking to Shay Soremekun. She's an assistant professor at the London School of Hygiene and Tropical Medicine. What I'm excited about in this episode is
that particularly in AI discourse, everyone's talking about LLM evals: how do we measure performance? And whilst there's this focus on the quality of the actual tool,
how do we also make sure that we are capturing the ripple, systemic effects on workflows, processes, and systems? That's what we're really going to be getting into today with Shay Soremekun, who is an assistant professor at the London School of Hygiene and Tropical Medicine. And we'll be going into her experience of implementing digital health tools in Mozambique,
and the learnings on how do we think about these other layers of evaluation, not just about performance of a tool, but also how it integrates, how it interacts with the system. And I think that's going to be really interesting. All of us who are trying to build in this space will face this challenge of like, okay, we can measure this like immediate performance of a tool, but actually for it to make a dent in the world, we need to think about evaluating the other layers around that technology. So this is the insight that...
Shay is going to give us. I really can't wait. So let's get into this conversation.
Shubhanan Upadhyay (02:27)
Shay Soremekun, so, so great to have you here on the Global Perspectives on Digital Health podcast. Welcome.
Dr Shay Soremekun (02:35)
Thank you. So nice to be on here as well with you.
Shubhanan Upadhyay (02:39)
Shay, tell us a little bit about yourself, your background, and your work, and then we can dive into some of it.
Dr Shay Soremekun (02:44)
Okay, I can. Yes, so my name is Shay Soremekun. I'm an epidemiologist and evaluation specialist at the London School of Hygiene and Tropical Medicine. I'm interested in strategies to improve the health and development of children and young people, particularly in low- and middle-income settings.
So I've designed and analysed data from clinical trials, pragmatic randomised controlled trials, and observational studies on a number of topics in the field, from the effectiveness of low-cost digital interventions in Africa to improve the care of sick children, to the impact of police stop-and-search practices on crime and mental health in children in the UK.
I'm also Deputy Director of the London School of Hygiene and Tropical Medicine Centre for Evaluation and particularly involved in evaluation guideline development streams as well.
Shubhanan Upadhyay (03:34)
I think there are so many things we could get into. What I'd really like to focus on today is some of the learnings from the implementation and monitoring-and-evaluation work you've done, particularly in low-resource settings.
But you also mentioned this other stream of work that you're doing, the Centre for Evaluation, and we can then zoom out to the bigger-picture considerations and the ways that the industry, or the ecosystem, needs to evolve and change with everything that's going on. So Shay, when we've talked before, you've talked about the implementation you've done in Mozambique.
This would be such a great thing to talk about in terms of the experience, the learnings, and the methodology that you've developed. What problem was this trying to solve?
Dr Shay Soremekun (04:25)
Thank you, that's a really nice introduction into it. I work a lot, well, not just myself, but myself and my colleagues, we work a lot with various organisations, including non-governmental organisations. And this was a really, really nice intervention in the digital space that we worked on with the Malaria Consortium. The Malaria Consortium is a worldwide charitable organisation which essentially raises and uses money to look at ways to prevent and control malaria in places where it's endemic.
So it does fantastic work. They approached us about a potential study, which could be evaluated by the London School, to look at the impact of a digital intervention to improve the care that children receive if they have suspected malaria, diarrhoea, or pneumonia. A couple of years ago we published a pair of papers on this particular intervention. It was a primary care digital package in Mozambique and in Uganda.
And the idea was to improve the correct management of children who had suspected diarrhoea, malaria, or pneumonia, which together are the three largest killers of children under five in sub-Saharan Africa. And I think what's really important is that these are treatable conditions, and many of the deaths that occur are completely preventable. But historically the health system, and particularly health worker capacity, has not been great, particularly at managing these
conditions and many families, particularly those in rural areas, don't have great geographical access to care.
So community health workers are
kind of seen as the last mile style of health worker to provide access to care for people who are in very rural or hard to reach areas in low and middle income settings. Now, community health worker programs were introduced with lots of fanfare probably about two or three decades ago, but what has happened over the last maybe 10 years is that they have been chronically underfunded. Community health workers have felt that they've been poorly supervised and poorly trained in the initial phases.
And that has led to what we know from the literature as low motivation of community health workers.
Feelings of disconnect from the wider health community, because often their primary care supervisors, who were based at facilities, were quite far away. And this also meant that the drugs and diagnostic tests they had as part of their kits weren't replaced promptly.
As evaluators looking to promote sustainable models of digital healthcare, our goal was to design a digital intervention that would address these concerns for community health workers, to support them, to co-create it with them, and therefore lead to improved treatment of children. We used what we call a program theory, which is an approach to design and evaluation. This essentially means that we propose that the way to improve the treatment of children, the prompt and appropriate treatment of children, would be based on a series of mechanisms of change.
that would be initiated by implementing the intervention, leading to intermediate outputs. So that would be in our case, we thought that this would lead to improved community health worker performance, motivation, retention, and then that would ultimately result in improved coverage of the proportion of children who received appropriate care when they were sick. So we really got into this in quite a lot of detail. We created what we call a logic model to show this whole process for how the intervention would work.
And we agreed a set of indicators right along the proposed pathway, not just the main outcome, which was going to be prompt and appropriate treatment. That whole process is called a process and impact evaluation. So I'll just explain the intervention very quickly. It was a smartphone program which provided clinical decision support to community health workers in the diagnosis and treatment of children who had suspected malaria, diarrhoea, or pneumonia.
It also
improved the reporting of community health workers to their supervisors in facilities. They got automated and personalised feedback using the data that they had submitted as part of their reports. And they were also able to call for free other community health workers and other facility staff members as well. We also gave them, because obviously this is a low resource setting, we gave them a solar charger because electricity is really unstable in the areas in which they worked.
and also multiple pins for the charger so it could fit different phones. We also gave them a smartphone as well. It's important to note that we gave the same package to facility staff so that they could actually communicate with their community health workers.
Shubhanan Upadhyay (08:50)
I just want to connect this to the digital health ecosystem. Right now, everyone, particularly people who are building LLMs and agents and so on, is very focused on the quality and benchmarking aspects. There's lots of papers and literature coming out around evals
of LLMs: how do you evaluate? And it's really concerned with performance, which is absolutely important, right? How correct it is, et cetera. And I think what's really important, and often the conversation I have in the work that I do, is that we need to make sure it's connected to the layers around it, the second- and third-order effects that it's having in terms of
changes in the world. And I think one of the things that digital health companies struggle with is how to do that, right? There's often a reliance on working within health systems or clinics to be able to know which levers there are. And even before you know which levers there are to turn, how do you even observe what's happening in the system? So what excites me about your work is
creating that space for observing those impacts in the world. So not just these broad clinical outcomes that we hope to one day achieve, but what are these intermediate outputs that are so important to identify. And I really like this model that helps you get from, okay, our intervention is doing a task correctly, performing its
statistical calculations well, so performance is good. But then what is the "so what" impact of that, right? What's happening at an operational level? What's happening at a motivational level? I really like how you call it process and impact. And you've talked about these intermediate levers that need to change for you to even hope to have
an impact
on actual clinical things that donors and investors and health systems care about as well. So maybe we could just dive into some of that: how do you create the space to observe this?
Dr Shay Soremekun (10:53)
Thank you for summarising that so brilliantly. And you're absolutely right. I think what we learned very early on is that if you just look at the performance of the actual thing, so, does this digital intervention improve performance in community health workers, you might see a positive result if you do it in a very closed system that's highly controlled, but it doesn't really tell you how it works in the field. In somewhere like Mozambique or Uganda, there are so many other considerations
to do with the context in which it's going to be implemented. So for us, creating this program theory and a logic model to understand how the intervention would work was incredibly important. The creation of the program theory was led by our partners, Zelee Hill and Daniel Straughan at UCL, which is part of the University of London. At the London School, we were responsible for the entire evaluation, but our main
area of interest was the impact on that hard final outcome. Did it improve the care that children received? I think what we learned along the process and also working with the Malaria Consortium and the governments of both countries is that to understand why it improved care, if it did, was as important, if not more important, than understanding that it did. Because that will help us to understand how we need to potentially modify or adapt the intervention either in the same place or in future places.
to be able to achieve the same success. So I can go on to tell you, as I said, we have two papers out on this, so it's not a secret: we saw positive improvements in the care that children received in both countries. We had conducted a randomised controlled trial, as I mentioned, which essentially means that we randomly assigned some large areas, where the health facilities in those areas received the intervention, and some areas where they didn't. And because they were randomly assigned, we can compare them,
and the only difference should be the fact that some areas got the intervention and some didn't. And therefore we can be quite certain about the causal pathway through which the intervention works. What we saw is that in areas which received this digital intervention, rates of appropriate treatment in children rose by something between 10 and 26%, which is really good for a pragmatic trial. So that was great. We could go back to the
ministers of health in both countries and tell them this works, implement it now. But I think by doing this process bit of the evaluation, looking at the pathways, understanding the context and essentially who was impacted really helped us to understand why it worked.
We had agreed a set of process evaluation indicators. During our final household survey, we were interviewing families who had children who were sick in the last month, and we asked them: where did you go? What treatment did you get? This is how we figured out who had received appropriate treatment and who they'd received it from. We expected to see that the community health workers in the areas which received the digital intervention would
be better performing, they would be happier, they would be more motivated, and we had instruments to measure all of these as well as also looking at appropriate treatment in children. But what we actually found was that the community health workers in both areas showed similar, fairly good levels of motivation, good levels of connectedness to the wider health system, even though sometimes they were far away, and really good levels of performance and similar levels of retention or attrition in terms of staying in their job.
But essentially what we saw was that facility-based health workers provided better care where they had access to the inSCALE package compared to those in areas which didn't. So a major area of focus, we found out, was not the community health worker delivered care, but care in facilities.
And we know from our qualitative research around this time that the health facility staff expressed a desire to have access to the smartphones and the app, as there could have been tensions if it was perceived that the community health workers had received this extra, kind of fancy support and they hadn't. So coming back to your original point, it just reminded us that we need to take a wider-lens, systems approach to addressing health problems in this way. Just supporting the final-mile workers, the community health workers,
is exemplary, but it's often not enough.
Shubhanan Upadhyay (15:10)
I think that's so important, because the way most evaluation procedures work, and the knock-on effects of that, which is the next cycle of funding based on the pass/fail, did-this-work-or-not question, lead to this perpetual problem where it worked in one context, and then we made a conclusion and a recommendation,
Dr Shay Soremekun (15:23)
Yeah, yeah.
Shubhanan Upadhyay (15:31)
which would have been the first thing you'd have done if you hadn't dug deeper here. And then suddenly the funding wouldn't have been targeted at exactly the thing that made the change, right? And I think this is really key, where the incentives of the way funding is set up mean that
if you weren't intentional about this, you would have just followed the common path: well, this is the recommendation, here are our conclusions, boom, okay, just fund this or invest in this part of it. And then people would have scratched their heads, right? Oh, it wasn't replicated. And it's because this insight, this learning that you had, just didn't surface.
Dr Shay Soremekun (16:04)
But yeah.
Boom.
Shubhanan Upadhyay (16:11)
You know, you'd have made very reasonable conclusions, right? I wouldn't even say jumped to conclusions: through pretty solid methodology, you'd have got to certain conclusions. And this really interests me,
Dr Shay Soremekun (16:16)
Yeah.
Shubhanan Upadhyay (16:25)
because the researcher, or the implementer, or even a digital health company, has this pressure to show, okay, show us something, right? Did it work or not? Without the curiosity of: what was more difficult? What was interesting?
So the trade-off for people in this space is that you have this pressure, because of limited resources, to prioritise your well-defined primary and secondary endpoints, do they work or not, versus creating the space to ask: okay, what else in the system is working or not?
and how that will help us to succeed on the bigger goals we're trying to achieve.
Dr Shay Soremekun (17:02)
You're absolutely correct. I mean, I think the reason that we found that particularly useful result wasn't an accident. We are committed, generally, whenever we do pragmatic trials in a kind of real-world situation, and I don't know any researcher in public health who wouldn't, to also doing some type of theory-based evaluation, which would involve process evaluation
or other forms of evaluation where you take into consideration the context. Realist evaluation is another good example. But because we had committed to doing that, we had already pre-agreed that we were not just going to look at the care that the community health workers provided. We were also going to design and validate instruments to look at their motivation.
and their performance, their feelings of connectedness to the wider system, and that was some great work which was done by Daniel Straughan. We were also going to look at lots and lots of other indicators, not just from community health workers but also from facilities. We wanted to look at how facilities were performing, both private facilities and public government facilities. We wanted to understand the context, whether there were particular network issues in the countries at the time, and there were, and that also impacted on our results.
And because we had pre-agreed that entire process, it made it easy for us to understand where the intervention was working and how. Now, sometimes you get it wrong. As I said, we started off with a program theory, and our program theory could have led us down the wrong path, because we were just looking at motivation and attrition and performance of community health workers. If we hadn't had this much more comprehensive plan to look beyond the community health workers themselves, we probably would never have picked up on that. We wouldn't have been able to disaggregate the data; we wouldn't have collected it. And the fact that, independently in both countries, you see exactly the same pattern of behaviour with completely different teams on the ground shows us that this is probably the way this intervention works.
Shubhanan Upadhyay (18:52)
Yeah.
And these papers are published, so we can share them for people who are interested. But as a TL;DR, if I paraphrase: you were very intentional about the different layers, the implementation science and behavioural aspects, and the process and workflow of that context. And you were intentionally going to create
visibility on this. I think another challenge people have is: okay, maybe I want to be intentional about this, but how do I measure what matters here? So how did you get to good hypotheses on what was important to measure? Because that's an area people struggle with a lot.
Dr Shay Soremekun (19:47)
That's a great question, and it won't be fully covered in the paper, but we do have a series of papers that were published prior to that. Again with our excellent partners at UCL, we designed essentially a whole formative research package where we were in both countries speaking to lots of potential stakeholders, including families of sick children, community health workers, health facility staff,
mobile phone network operators, local government ministers, and national government ministers, to understand, first of all, the situation in which this intervention was going to be implemented, but also what the needs were. We had, I think, a huge pile-sorting exercise where we looked at various interventions that could work, and we whittled it down to what they thought would work in their area. We also did a large literature review on several fronts to understand: okay, what do we know about digital interventions and why they work,
how they work, and in which contexts they work. All of these elements are published in separate papers, because it really is a big job. This is not something that you could just go in and do in six months. I think the preparation phase probably took two to three years. I don't think it always has to take that long, I have to stress. But it was the first time that we were working with this type of intervention, with these kinds of digital, basic AI components, so we wanted to do it properly. So essentially,
getting lots of information, and having a team who are well trained and have expertise in using formative research to develop a programme theory, helped us. We actually have a very nice diagram, both in the end paper and, I think, in a protocol paper published a few years back. You can look for my name, and also my co-lead, Karin Källander, who was leading the intervention implementation,
that shows our theory of change. It's basically a nice graphic that says: okay, this is what we're going to implement, we're going to give them smartphones and this app and all of these wonderful things, and this is how we think it's going to impact on our outcome. And then we sat down, I remember meetings both in Mozambique and in Uganda, days of workshops: what are we going to measure? We made lists and lists of intermediate indicators that were worth counting. Exactly, exactly.
Shubhanan Upadhyay (21:35)
Yep.
So, working with the people on the ground who knew. And I think that's the important thing about context, right? You might have assumptions about what will be useful to measure, but without the people on the ground, those things might not even be possible to measure. And you're going into the granularity of each step and each event, and then what's possible, what's feasible to measure here. That's really useful. Yeah. Carry on. Sorry, I cut you off there.
Dr Shay Soremekun (22:19)
No, no, no, I think I kind of covered it. That process was really helpful in both countries, and that's how we whittled down to our process evaluation protocol, and we just followed it.
Shubhanan Upadhyay (22:22)
Yeah.
It kind of makes me think of this saying, was it Einstein or someone? Again, it's always a challenge, right, finding the right things to measure: not everything that can be counted matters, and not everything that matters can be counted, right? Which is really the challenge of finding the right things to measure that really do give you a good proxy of what things are actually like on the ground.
Dr Shay Soremekun (22:55)
I haven't heard that phrase, I have to say, but I like it. No, it's great, because that kind of brings me to, I mean, to give you a kind of case study: I talked about the fact that it didn't really necessarily impact on community health workers. And I don't want to give the impression that therefore you don't need this intervention to be delivered to, or focused on, community health workers specifically. In the areas in which we worked,
Shubhanan Upadhyay (22:58)
Yeah.
Dr Shay Soremekun (23:21)
it was clear that they had a high level of motivation, but in other areas it could still be very useful. What we did find is that maybe some of the things that we could have measured, and didn't, still showed that it had an impact on their feelings of being worthwhile to their communities and doing good-quality work. I remember on one of my many visits, I think it was in Uganda, or was it Mozambique? I can't remember which country, going out to the field on a visit to meet the team and to do some more co-creation work.
It was just always rewarding to go to the villages where we were working. A small example: the inSCALE pack that we gave community health workers included a solar charger plus a range of pins so it could fit multiple phones. Because, as I mentioned, in the areas in which this intervention was being implemented, there was usually unstable
electricity, even where it was available. So they needed to be able to use our materials independently, even in times when resources were really low. This community health worker had started a side business. He had started charging his community members a very small nominal fee to charge their phones, using the pins on the solar charger he had gotten as part of the inSCALE trial. And that was such a positive thing. We liked that.
We liked the fact that he had been resourceful. We liked the fact that he was able to offer a service to the people in his community at a reasonable price. He showed us his account book, and it was absolutely fabulous. You know, he had taken this seriously, and he was so happy to show us: what you have done for me by giving me this additional help has really helped me. It's increased my standing in the community. It's helped me to be much more confident when I am treating and managing children.
I'm also able to start a side business and help my community in this way. And we loved that; we thought it was brilliant. I think we've written that up, and it might be available on the Malaria Consortium webpage. That was good fun. We also took some pictures.
Shubhanan Upadhyay (25:22)
And that's so interesting, because you could have had this automatic reaction of saying, hey, you're trying to take advantage of the fact that this is being given to you as a resource, and so on. And, I mean, clearly he did it in a very transparent way, and that was great. But it's also great that you didn't have a knee-jerk reaction of, you're misusing this, right? And actually saw it as a really positive
Dr Shay Soremekun (25:39)
Yep.
Shubhanan Upadhyay (25:49)
step in like, wow, this is part of the general societal impact of this.
Yeah, I mean, I would stress he wasn't charging people to get treated, or for anything to do with the intervention. I think in low-resource settings, if people can be resourceful, and you can help them to be resourceful in ways that you didn't expect, we saw that as a positive. And in fact, I think they probably encourage community health workers to do that now with the same intervention. They say: you've got both of them, so if you want to charge the phones of people in your community, that's a great thing. Go ahead and do it.
Shubhanan Upadhyay (25:58)
Yeah, yeah, yeah.
Dr Shay Soremekun (26:23)
But it was nice seeing that this guy had just gone ahead and done this, and had kept this wonderful book of accounts. I think that was a positive all round, in fact. That's the way we feel.
Shubhanan Upadhyay (26:35)
And just a reflection on this: it's great that you observed that, and that you also wrote it up. So often that human story, that impact, never gets shared, right? And most people, even a hardened ivory-tower academic, could read that and think, yeah, that's really cool. You know? I often talk to investors or donors or
companies in this space and say, you know, one part is your implementation report and the hard academic and clinical rigour of it. But actually what people really relate to is the kind of thing you've just said, that human story. I don't know, do you have any others? Any other aha moments, or things that made you change your mind or challenged an assumption you had?
Dr Shay Soremekun (27:07)
Mm-hmm. Mm-hmm.
I think maybe I can talk about a challenge that we had and a success that we had. I think our major challenge was probably contending with developing and delivering a digital intervention while relying on network coverage in sub-Saharan Africa. As I said, there's such potential for this space,
Shubhanan Upadhyay (27:30)
Yeah. Perfect.
Dr Shay Soremekun (27:52)
not just in the global north or in countries which have high income; in low-income settings there's still lots of potential. And we really tried to deliver something that was low cost and sensitive to the environment in which it was being delivered. But we just couldn't predict everything. We couldn't predict the level of network outages that was experienced, I think particularly in Mozambique, maybe less so in Uganda, if I remember correctly.
And also the difficulty in the adoption and use of the intervention among some of the older community health workers, who were well respected by their communities but maybe found the use of the smartphone and the intervention a little bit difficult. So we had to essentially be constantly taking information in throughout the period of the intervention, and then adapting in an iterative fashion, which is a really useful way of implementing things: you shouldn't just implement it and then measure it.
You implement, you measure, you change, and you keep doing that over and over again. So one of the first changes we made as a result was the ability to store report information locally on the phone, so they didn't have to be constantly connected to a network. They only needed to upload to the servers when they had network coverage; it didn't have to be every day. We didn't give them deadlines or timelines, which initially was what we thought would be useful.
Shubhanan Upadhyay (28:51)
Yep. Yep.
Dr Shay Soremekun (29:15)
they could upload whenever they wanted. And also the fact that you probably do need IT staff or some type of specialist person in the country, they can't be remote, to be able to fix problems with the phones as well. So I think those are the adaptations to our setting that were probably the most important.
Shubhanan Upadhyay (29:28)
Yep.
And if I expand on this learning, I think a really key thing you've just said is that when you're doing real-world evidence generation, and I'm thinking of digital health companies who are deploying and then measuring, it's really important to be proactive about what you're observing. In this discussion we've talked about creating the conditions to measure what matters and then observing it. And we've moved into the part of the conversation where, when you see things falling off, you have to use that information you've created, because you're now measuring what matters. So a takeaway for me is: be proactive about what's causing things to fall down, and then you can act and iterate, as you said.
Dr Shay Soremekun (30:18)
Absolutely. It was collecting interim data, taking advice from people on the ground, like, OK, we have a problem here. I think we also changed one of our partners who was helping us develop the app in Mozambique partway through, as a result of problems. But it's really understanding that you do have to make changes as a result of what you're hearing. You can't say, OK, it's not working, or it's not our fault it's not working, it's the system, we did our best.
You have to find solutions and you've got to pivot. And I think that's what we learnt we needed to do. And we still had lots more learnings, you know, we didn't get everything right. But I think writing it up was really important.
Shubhanan Upadhyay (30:52)
Yeah. Yeah.
Yeah, absolutely. And also sharing, you know, we talk a lot on this podcast about sharing the things that didn't work. So it's great that you're also writing that up.
Dr Shay Soremekun (31:06)
Most of the write-ups can be found on the Malaria Consortium web page. They have loads and loads of resources, documents and briefs, even layperson briefs, about the intervention. It's really well prepared. It's really nice to see.
Shubhanan Upadhyay (31:15)
Anything else on this before we move on? Anything else that you want to share?
Dr Shay Soremekun (31:20)
No, just to say that perhaps our biggest success, obviously after doing the evaluation and finding that we had a positive impact, was the way in which what we found was heard and understood and, I guess, absorbed by ministries of health and other key stakeholders. In Mozambique, essentially, they decided to run with it, with support from the Malaria Consortium. I also have to say that we were funded by, I think at the time, the Gates Foundation. They gave millions of pounds' worth of funding to develop this intervention
and to trial it as part of our study. But the biggest positive update was actually seeing it being rolled out across the whole of Mozambique currently, I think with continued support from the Malaria Consortium. So now it's out of our hands as evaluators, and we're seeing it in its phase two, which I think is now called upSCALE. It was initially called inSCALE, and now it's being scaled up, and that's just brilliant.
In terms of being part of a piece of research that has had measurable impact, this has probably been the one I've been part of where I can see the most measurable impact.
Shubhanan Upadhyay (32:22)
That's so awesome. The upSCALE part, is there somewhere people can look at that or track what's going on?
Dr Shay Soremekun (32:26)
Yes, absolutely. Absolutely. And I can
send some links to that as well.
Shubhanan Upadhyay (32:34)
Super, super. That
sounds awesome. So many great things in there. I would love to have the chance to spend more time on this, but I think it would be great to just zoom out on evaluations as a whole. You've propagated this work into the London School and this Centre for Evaluation, and from our discussions, I know the work you're doing there involves a lot of big-picture thinking.
And I guess the question that leads into that is: what's not working about the overall monitoring and evaluation space, and how are you trying to think about that in the Centre for Evaluation at the London School?
Dr Shay Soremekun (33:12)
So yeah, I mean, one of the ways I'd like to talk about is being involved in developing guidelines to do evaluations better. We do that, obviously, for many reasons: because there are new types of interventions that have been developed over time, and new ways of working. I think the main thing for us at the London School of Hygiene and Tropical Medicine, which has historical links to
the colonial past of the UK, is that we wanted to move away from a model of evaluation where you identify a problem in a setting that's not your own, usually a lower-income setting, and then essentially try to figure out a way to solve it and evaluate it using methods that have been developed in
high-income, westernised countries over centuries. And what we realised is that doing that has actually created a disconnect where you could do an intervention, maybe you find something, maybe you don't, but it rarely gets scaled up in a way that's effective, or it rarely gets adopted. What we realised is that we needed to make the way we do evaluations more equitable,
relevant to low- and middle-income country settings, and also decolonial in our approach. So the idea is to look at different ways of understanding the world. And it became quite philosophical and sociological, trying to understand what it is about the way we do evaluations that could be seen as westernised and extractive.
Shubhanan Upadhyay (34:54)
That would have been my question. Have you got an example?
Yeah.
Dr Shay Soremekun (35:00)
So I think one of the main things is the way in which we decide what matters and who's involved in those conversations. As I said, maybe for us it was improving, I don't know, the number of community health workers who gave children this drug at this time. And that may help one particular outcome, but it doesn't matter if that intervention never gets implemented.
So you have to be able to find ways of either identifying indicators that are important to people on the ground, using methods of doing evaluation that are important to people on the ground so that there is buy-in. And in that way, you are more likely to be able to see the results that you really want. Because I believe that a lot of people who do research, even if they are based in high-income settings, actually have good intentions.
An example would be that there are several frameworks for what we call indigenous evaluation methods. These frameworks usually involve using local knowledge to understand how to evaluate whether something has an important impact. And there are some really nice scholars in this area. I don't want to name one or another, but I can put up some links about more of this type of work.
Co-creation and co-production is a really big part of the approach: making sure that you are designing an evaluation with the people who are going to be most exposed to the intervention, the thing they're trying to improve in the way that you're trying to improve it. And often we don't think about including them, or we include them in a tokenistic way. For us, that meant making sure that we were in the field and
designing the evaluation with the people who were going to benefit from it. And even using the word beneficiaries can have some negative connotations. But the idea is that we wanted to do it with stakeholders, with families who could receive this intervention, so that it was an intervention that they wanted. And, as you talked about, we didn't only look at the statistical outcomes, you know, deciding that this intervention was successful because we saw a statistically significant improvement in appropriate treatment.
But that's not necessarily as important to the community health worker or the mother of a child who's under five. They want to know that their community has been brought together, that their children are going to be treated with respect when they go to a facility, that their child is going to get better. And sometimes having those more human-interest stories, doing things in a way that conveys the right meaning to people on the ground, and thinking about
the way in which you decide what needs to be intervened on and how it should be done, that should essentially be handed over to the people in the field who are going to be most affected by the things that you do. So it's a new endeavour for us at the Centre for Evaluation. And I have to say that we're learning as we go along. As you probably hear as I speak about it, I speak in a slightly hesitant way because
it involves lots of new concepts and it involves ceding control, and I think that's what we're really trying to get to. But it's a really important endeavour for us. We're about to publish, first of all, our scoping review. We've looked at lots of papers which have characterised indigenous evaluation methods, and we want to show the breadth of those methods, essentially whole ways of knowing the world
Shubhanan Upadhyay (38:24)
Yep.
Dr Shay Soremekun (38:37)
that we don't consider, because we see them as being produced by people who don't necessarily have agency or who aren't empowered in their low- and middle-income settings, publishing them, and incorporating them into the way we do evaluations.
Shubhanan Upadhyay (38:45)
Yep.
And I think there are two things I want to say on this. One is, you know, especially in the wake of USAID funding cuts, a lot of political rhetoric is based around how we create local ownership and local decision-making. And I think this definitely feeds into that.
Dr Shay Soremekun (39:15)
Yes.
Shubhanan Upadhyay (39:16)
But in some ways there might be detractors who say, well, scientific methodology, when done right, is there for the reason of being objective. There are lots of conversations on why that might not be the case, on how certain guidelines have been created without the inclusion of certain communities, particularly indigenous or other underserved communities,
so absolutely there's room for this. But people might also be worried that, I guess, some treatments or practices in indigenous communities are not evidence-based. So how do you make sure you take the benefits of the scientific approach and remove the negative aspects
around it, if I was trying to be a detractor here?
Dr Shay Soremekun (40:02)
I think, honestly, that it's really good to be provocative and to play devil's advocate, because a lot of people fully believe what you have just outlined. I think the main thing is to stress that we're not trying to deconstruct or dismantle all of the current methods, the methods that have been used in the past and have been successful in improving public health.
What we want to do is to recognise that the reason why we don't often see improvements in the way that we expect is because we have ignored the agency of key people who understand their own settings, their own context much better than sometimes we do. Well, always than we do. And there is nothing wrong with incorporating good quality methodologies that have been useful.
and that have been passed down through generations, amongst tribes, amongst people in settings different to ours, that have value and that can ensure good-quality interventions and good-quality programmes are implemented and scaled up. And what we have noticed is that co-production is probably one of the biggest
gems we have taken from indigenous methods, where things are usually done in community settings by the community. We tend to have much smaller support systems around each of us in the UK, whereas things work in large communities in low- and middle-income settings, and there is a lot of value in that. Taking that co-production approach, making sure that the right people are leading it and that the people leading on the ground are from the countries in which the interventions are implemented, and using that to ensure that your evaluations are equitable,
has such a significant impact, not just on the quality of your evaluation, but also on the chances of seeing success in the thing that you're evaluating, and lastly on the chances of it being adopted and taken up.
Shubhanan Upadhyay (42:03)
That's such a great takeaway. What I also link that to is, right at the beginning of the process, when you're in discovery, in this kind of co-creation process, if you've involved people in the right way, what you're also trying to work out is: what is the value, right? And there are different layers: there's value to patients and communities, there's value to the clinician end user, for example, and there's value to the system. And they all, to some degree, whatever context you're in, pull in different directions.
What the individual and the communities need might be different from the system decision-makers' perception of value, versus what the community health worker or the clinician would find valuable. And depending on what type of intervention you're building, you have to be able to answer questions on all three of those levels. The work you talk about, I think, contributes to having a very high-fidelity picture of what is valuable for people individually on the ground, and answering the question: make this make sense for me as an individual.
Dr Shay Soremekun (42:44)
Yes.
Yeah.
Exactly. Yes.
Shubhanan Upadhyay (43:01)
So it's valuable for me. So
Dr Shay Soremekun (43:02)
Yeah.
Shubhanan Upadhyay (43:02)
I'll actually engage with it and adopt it and take it up and trust it eventually. And this is the work that you have to put in to make that happen. So that's what I take away from that, from the perspective of a builder in the digital health space.
Dr Shay Soremekun (43:13)
You are absolutely right.
That's really... I'm glad you took that away. That summary was perfect, maybe even better than what I said. That's exactly what we're trying to achieve. Can I just name-check some people as I'm talking? As a person who comes from a very quantitative, scientific background, this has been an eye-opener for me, even though my history and my ancestry are in sub-Saharan Africa. I have enjoyed this process, but I want to mention...
Shubhanan Upadhyay (43:27)
Yes.
Dr Shay Soremekun (43:41)
Sali Hafez, Agata Pacho, Ruth Ponsford, Mitzy Gafos, Lucy Platt and Meghna Ranganathan, who are working with me on creating these decolonial and equitable evaluation guidelines that help us do better-quality work with people in both the global north and the global south, and make sure that the people who own the research are the ones who are going to be most affected by it in the settings in which they are.
So hopefully you'll see our outputs coming out in the next six months or so. We have a scoping review paper coming out, and a position piece potentially coming out somewhere else. We're also going to hold a series of talks at the London School for people who are based locally in London, who can come and hear from people who are doing excellent work in this space.
Shubhanan Upadhyay (44:28)
I think it's such an important piece of work and conversation that you're starting and structuring, and there are a lot of people who are advocating for this as well. I think the timing is right too. So I'm looking forward to sharing that work and seeing it come out.
Shay, we're coming to the time where we need to wrap up. I wanted to maybe do a quick fire round of questions that will help people who are building in this space. So I've got three, if that's okay.
What are your key recommendations on how you get to measuring what matters?
Dr Shay Soremekun (45:02)
So you measure what matters by co-creating your design for your evaluation with people on the ground. You have to include people who are going to be involved in the research, who are going to be conducting the research.
who are going to be exposed to the intervention or programme that you are evaluating as a public health practitioner, and the people who are going to have the agency to see that it's scaled up. Those people should be involved right from the start in every aspect of your evaluation design. And you should consider your evaluation as being not just about one hard impact, like improving health or reducing mortality, but also about the context in which it's being delivered, and try as much as you can to measure things in that context too.
Shubhanan Upadhyay (45:45)
Superb, that's such a great summary. Maybe this is linked somehow, but lots of digital health companies are leaning into evidence generation and choosing the right clinical outcomes that they can actually move the needle on. Do you have any insights from your work that can help them approach this?
Dr Shay Soremekun (46:06)
I will say, going back to the way in which we thought about what we needed to measure: formative research. Formative research can take many guises, but essentially what you want to do is, again, understand what's important to the people who are going to be affected by the thing you're measuring. You have to talk to those people. The idea is that once you have come together and produced what we call a programme theory, which is essentially
how the digital or AI intervention is going to achieve its impact, you create what we call a logic model, and that logic model will map out every single stage of how it will achieve that impact. What we would then recommend is that you agree on indicators along that whole process to understand what the best things to measure are.
Shubhanan Upadhyay (46:54)
Yes.
So good. Have you read anything recently, articles or books, that has changed your mind on certain things or been really inspiring in this space?
Dr Shay Soremekun (47:12)
I think the biggest thing that has struck me recently is, again, an opportunity for digital health to be a game changer in Africa. We're in an era of what I call disinformation and misinformation, because we're also in an era of increased social media usage, which means that messages around what's good for public health can get distorted or can be incorrect. And this doesn't just occur in America
or in the UK or in the countries we tend to hear about on the news; it occurs in villages and rural communities in faraway countries too. And for us in sub-Saharan Africa, that becomes a really major problem. What we have learned is that there are some really nice examples of early digital programmes which use reminders and text messages and WhatsApp groups and even digital media platforms to combat misinformation,
especially around, for instance, things like childhood vaccination and new drugs that are beneficial to communities. And they do it in a way that has, again, been co-created with the communities that are going to receive the messaging from these digital programmes. It was really nice to see. So I'm very encouraged by the ways in which digital health interventions can improve
the lives of people in sub-Saharan Africa.
Shubhanan Upadhyay (48:33)
There are lots of different threads of work. I mean, I know the WHO has the Fides network, and some of the big social media companies like TikTok and YouTube often have clinician and doctor influencers, and charters for trying to do this right as well. There are lots of negatives in that space too.
But that at least seems like a step in the right direction, right? And it links to information, or disinformation, as a determinant of health. Totally. So that's a really good one. Shay, I feel like I could talk to you for hours about this stuff, but thank you so much for sharing your insights into such important methodology around
Dr Shay Soremekun (49:03)
Yeah.
I know and you too, it's been wonderful.
Shubhanan Upadhyay (49:19)
process and impact evaluation, programme theory, the logic model that you've talked about, and then zooming out to the bigger things we need to consider: how we need to think more about creating value locally for people within their context, and involving them much more intentionally in the work we're trying to do. So thank you so much. It's been really, really great and insightful talking to you.
Dr Shay Soremekun (49:41)
No problem. Lovely to meet you, Shubs. Thanks again for your insights. It's been fantastic.