Healthtech Pigeon

Huw and Belle from SomX break down the best stories from this week’s newsletter with help from The Health Foundation’s Tim Horton.

[02:15] - AI outperforms clinical tests at predicting Alzheimer’s progress
[42:30] - Digital health needs women at the heart of it


🗞 Grab the newsletter at www.healthtechpigeon.com
✍️ Apply to be a podcast guest using this form

🖇 Connect with Huw on LinkedIn
🖇 Connect with Belle on LinkedIn
🖇 Connect with Tim on LinkedIn


📖 Learn more about the SomX team at https://somx.health

What is Healthtech Pigeon?

Like normal healthtech news but, you know, not boring.

The team at SomX bring you the audio version of the weekly Healthtech Pigeon newsletter - a panel show with journalists, VCs, podcasters, entrepreneurs, execs and those with a strong opinion to make things interesting. Regular hosts and guests are Dr James Somauroo, Jessica Smith, Belle Taylor, Huw Penson and Jess Farmery from the SomX team.

🐦 Grab the newsletter at www.healthtechpigeon.com
🚀 Learn how SomX can help your healthtech company at www.somx.health

Speaker 1:

Hello, everyone, and welcome to the Healthtech Pigeon podcast, where we take a look at the best health tech stories of the week, every single week. Today, I'm really pleased to be joined by Belle from SomX and Tim Horton from the Health Foundation. We're gonna do something a little bit different today. We're still gonna touch on some of the stories, but it's been a really exciting week in health care AI. And we've come off the back of, I think, the Health Foundation's first ever AI summit, with some fantastic speakers, some fantastic guests, and, even more interesting I guess, some fantastic talking points.

Speaker 1:

So, before we get, you know, too deep into everything, Belle, Tim, how have your weeks been?

Speaker 2:

Thanks, Huw. It's been a very busy week for us at the Health Foundation. As you said, we had a big summit yesterday looking at the potential of AI in health care, where we were joined by Karen Smith, the new government minister who's responsible for health technology and data, including AI. So we were busy preparing for that. And, yeah, we had a really good day yesterday, but I'm glad to get to the end of the week too and be with you here.

Speaker 1:

Fantastic. Yeah. And, obviously, the end of the week can't come soon enough, but it's been a good one, and I really enjoyed yesterday's event. Belle, how about you?

Speaker 3:

Nothing as exciting to report on my end. I've not gone to any of the nice events this week. But, yeah, I'm super excited to hear what you've got to say and, obviously, how that links to some of the news stories this week. I obviously love talking about AI, and there are some things here which tap into themes that I love as well, such as virtual care and women's health. So I can't wait to dig in.

Speaker 1:

Well, I mean, we're gonna get on to some really good stories. There's some really exciting stuff we've got coming up. But, Tim, I guess the kind of core for me this week is looking at the Health Foundation's work on AI. You've published a lot of reports over the last, I think it must be, 3 months. There are some fantastic things out there.

Speaker 1:

You know? What are the technologies that are actually saving time? What tech is actually working in the NHS? And, you know, the most recent one, which we featured and already discussed on Pigeon, though it's great to have one of the team behind it on, is a strategy for AI in health care. And I think, you know, that's what I'd love to discuss today.

Speaker 1:

So before we go deep into some of the issues at stake and some of the talking points from yesterday, tell us: why now? What's the kind of challenge that the Health Foundation is seeking to address, and what's the basis of the work?

Speaker 2:

Yeah. It's a great question. I mean, over the last few years at the Health Foundation, we've found ourselves working more and more on the area of health tech and data and the potential this offers for the NHS and health care systems all around the world. That potential is growing and growing. Think back to a year ago, when we had that explosion of interest, following ChatGPT I guess, in what the latest large language models can do.

Speaker 2:

And particularly with AI now, it's feeling like there's a real moment, if you like. In the UK, the NHS is under such pressure at the moment. It's got persistent workforce shortages, long waiting lists, money's tight, and demand for health care is increasing. It's sometimes hard to see a way out of that, and I think technology and AI are one of those things that potentially offer, not a quick fix, and not an alternative to recruiting enough staff and investing in the health service properly, but a way to support the health care workforce and try to deal with some of those pressures. So it feels like there's a real moment now where there's rising interest in health tech and particularly AI.

Speaker 2:

And what we've noticed as a foundation, you know, we fund a lot of innovation and improvement, is that when we look around, we're seeing really good work going on on AI in health care around the UK. We're seeing good initiatives going on within government. But they're all quite disparate and fragmented, and feel like they're not really, if you like, adding up to more than the sum of their parts. And something we were trying to do at this summit yesterday was call for a dedicated strategy on AI in health care. We think it's a moment where, if we can get a grip on this, if we can get agreement across the different players in the system on what the priorities are, and if we can get some of the agencies involved collaborating, with better consistency across them, there's a chance to go further forward and make further progress.

Speaker 2:

So that's what we were focusing on yesterday. We were really delighted. I mean, there was huge interest in it. We had a couple of hundred people there yesterday, and a couple of thousand joining us online. I guess that just shows the kind of energy there is around it, and, I guess, a community of people wanting it to work.

Speaker 2:

So that's really why we were trying to focus on the need to get a kind of overarching strategy, a kind of agreed focus, and more clarity and consistency for everyone working in this field.

Speaker 1:

Completely. And I completely take your point about the kind of fragmented things here and there. You know, we've seen those announcements, probably over the last 18 months now: £18 million here, £30 million there, for something sort of vague, you know, it'll be in this space, or this one might be for this number of trusts, or in this particular area. And there's been a lot of it, hasn't there?

Speaker 2:

Absolutely. And, look, I mean, it's all really good stuff. I think for us as a country in the UK, you know, we're not the US, where the National Institutes of Health can spend tens of billions each year. We've got limited resources. If we can agree some really promising areas, some key areas to focus on, if we can target everyone's limited bandwidth and resources better, there really is a chance to make progress on some key priorities.

Speaker 2:

But that does require the kind of players involved coming together and thinking: in addition to all of the local innovation that will always be going on, and without squeezing it out, where can government, industry, academia and civil society make the most difference, if we can agree some key areas of focus?

Speaker 1:

So for me, there's just one very slight thorn in this that I can't quite reconcile, and I think it was touched upon quite a bit at the summit, which is: an AI strategy is absolutely valuable, and there's so much that's up in the air, so many questions that are up in the air. But do we need a tech strategy first? Because it's really unclear how a lot of it, you know, scales and adapts.

Speaker 1:

There are technologies that are more, well, basic isn't the right word, but that use different approaches, different technologies, that could have a really, really impactful effect on the NHS and health care more widely, and that don't rely on AI. And we're struggling to bring them in as much as we are, you know, the fancy new generative AI tools.

Speaker 2:

Yeah. It's a really good question. And, of course, AI is a subset of health technology, where, as you said, there are real challenges. I mean, you could go one level up and say health technology itself is a type of innovation, and there are other kinds of innovations. And, actually, there's work to do for us in the UK to get the ecosystem right for innovation in health care, whether that's investment or how you scale things up.

Speaker 2:

So, definitely, challenges exist on all of those levels. But I guess what we were trying to say is that AI in health care presents some specific challenges, some really specific challenges, that we think need a dedicated focus. So, for example, issues of regulation and accountability. Something we hear a lot from the clinicians we work with is this anxiety about who's accountable if stuff goes wrong. Is it the person using a tool? Is it the person who designed it or manufactured it?

Speaker 2:

You know, we've seen that debate play out, haven't we, with driverless vehicles at the moment: who's responsible if something goes wrong? So regulation's one area. Evaluation's another one where AI presents some real challenges, because now it's not just: you've got a product, you test it.

Speaker 2:

Does it work or not, and then you're done? No. AI is usually based on algorithms that can adapt and therefore change over time. So how do you monitor that performance? A lot of people yesterday were focusing on a rather dry phrase: post-market surveillance.

Speaker 2:

But it's this idea that we're gonna need systems for continuing to watch the performance of these things. It's not just a case of evaluating them upfront. Or you've got, like, generative AI, where you could use it for different purposes. You could use something for a purpose it wasn't originally intended for. How do we deal with that?

Speaker 2:

So AI is posing some specific challenges as well, which we think will benefit from a dedicated focus. Of course, that's not to say there aren't challenges with technology in general, including getting the infrastructure, the IT and the data infrastructure that needs to underpin good AI, right as well. So, yeah, totally agree with you, Huw.

Speaker 1:

That's a great answer, because I guess that is the challenge we've got to work out: where are the differences between innovation, new technology and AI? First question back, then: there's a lot of discussion about listening to the market, rather than necessarily having the market lead the conversation. So how do we make that happen? Because that's a key part of bringing in, you know, what are the problems that we have. I had a nice quiet conversation in one of the refreshment breaks yesterday with a number of people working in the innovation field, and the sense was that there are a lot of initiatives to bring tech in, to bring AI in, along with this sort of strategy, but there's still perhaps a lack of leadership from the center on this.

Speaker 1:

So, you know, how do we bring that "what is the challenge" piece into a strategy?

Speaker 2:

That is just the fundamental question, isn't it? Because the risk, if we don't think about that as a question, is that you'll get industry developing a lot of tech, a lot of good tech, no doubt, but then, voila, that has to be the solution, and you're into a system of people trying to sell stuff into the NHS and persuade others that it has to be the solution. A better way of doing it is going to involve us understanding more about the problems that NHS providers, hospitals, GPs, NHS staff on the ground and, of course, you know, patients, carers and families are facing. And, to be honest, finding better ways of aggregating that intelligence and that demand, and signaling that demand to industry. And I think, you know, the center, if you like government or national agencies and organizations, does have a valuable role to play in doing that.

Speaker 2:

You know, ideally, we need good horizon scanning of what's coming out of industry, but we need good demand signaling back to industry of the kind of issues that the health service is facing and that it needs help with. So there's definitely a chance to get better at that. Over the last year, we've been doing a bit of work with some of the Royal Colleges in the UK, the professional representative bodies for different kinds of health care professionals, asking some of their members about the challenges they face and the types of tech that could make a difference for them. And, to be honest, that was a real eye-opener.

Speaker 2:

You know, if you open the papers, as you guys do regularly, or look on TV, you often see a lot of excitement about kind of niche clinical technology, you know, a new type of medicine, a new kind of treatment. What we heard a lot from the staff we've been working with this year is the importance for them of tech that helps them with quite mundane admin and operational stuff and frees up their time. They were saying these kinds of technologies, even things as simple as video conferencing, like we're doing now, or digital messaging tools for messaging each other, were making the biggest difference for them in saving time, rather than really cutting-edge, futuristic, niche clinical technologies. So, actually, reflecting on that engagement, what we heard a lot is that it's sometimes tech to help with very mundane, run-of-the-mill stuff that is gonna be moving the needle. Yet that's not where you see stuff getting invested. You know, of course, everyone wants to cure cancer.

Speaker 2:

Everyone wants to make progress on major diseases that we're all concerned about, and there's less attention to, you know, producing letters in the NHS, scheduling appointments, planning rosters, all of that. But that, you know, is what we heard was really moving the needle for staff.

Speaker 1:

The stuff that takes time. Right? It's the stuff that, you know, shouldn't take time, and there's a wonderful anecdote from yesterday's event of the 10 minutes for the consultation and the 30 minutes going around looking for a printer. But, I mean, it's a really good challenge, I guess, and I like the idea of going to talk to some of the bodies representing different groups within the NHS, because, I mean, obviously, we work with a lot of health tech companies selling into the NHS, selling in at different levels. And even then, the buyer and the user can have quite different priorities.

Speaker 1:

And addressing those, and, you know, determining what gets bought and by whom, and what problem you're actually trying to solve, is quite challenging. Right?

Speaker 2:

Absolutely. And in a complex service like the NHS, particularly where you've got very big providers like large hospitals, that adds another layer here, I think, because very often the person making the decision, probably working in a procurement office, is different from the people who will be using the technology. You know, it's not like your classic innovation theory, where you take an innovation to market and the user buys it. The decision to buy and the challenge of using are handled by different people. And that makes it even more important to get that understanding, and the kind of chemistry of the system, right to get this to work.

Speaker 2:

There's also the challenge, I guess, underpinning all of this, of getting the whole interface right. That's going to be really key, getting that interface right, so that, you know, the health service is getting the tech it needs and is confident about its effectiveness and safety and the post-market arrangements, but also so that the innovators in industry have confidence, as they're developing technology, in what's going to be asked of them: that it's clear, that it's not constantly changing, that there are easy routes into the system. That was another thing we heard at the summit yesterday: there's still work to do at that interface to get it right.

Speaker 1:

Is it a fundamental blocker, then, to some of these grand challenges, these big missions of using AI to solve cancer, if right now the kind of things that are saving people's time are Zoom or MS Teams? Or, yeah, we've obviously got a lot more time to save.

Speaker 2:

Sure. I mean, look, it's not a blocker, and all that work going on on AI and cancer, and in that medical space more generally, is, of course, critically important. I suppose, though, if we're thinking about what's gonna help the NHS get through the next few years, it's not gonna be something that's coming over a kind of sunlit horizon in 15 years' time. It will absolutely be helping staff with admin and freeing up their time in the here and now, often through better use of existing technologies as well as the new things that are coming down the pipeline.

Speaker 2:

So, yeah, it's not a blocker, but what's gonna save the NHS over the next few years isn't the most exciting new developments in genomics that I'm reading about in Time Magazine. It is, as you say, better communication, better help with admin, and things that can free up staff to care for patients and improve services.

Speaker 1:

Cool. I guess a lot of it is just avoiding people feeling like they're, you know, having innovation done to them. Right? It's, you know, how do you bring people along that journey? And it's the cohesion point that you raised earlier: how do buyer, user, admin staff and clinician, I guess, bring a cohesive market signal together that industry can then react to?

Speaker 2:

Yeah. I mean, it is the, I was gonna say $64 million question; $64 billion question, probably. Because, ultimately, the benefits of tech aren't just coming from the gadget or the kit, but from the successful use of it. And that ain't gonna happen unless, as you say, Huw, the people who are gonna use it, whether that's health and care workers or patients, are involved in the discussion about how it's designed and how it's used. And something we heard at the summit yesterday, from Rachel Power, who was there as chief executive of the Patients Association, was a very powerful case for involving the public and patients in the conversation about the way tech is changing health care, and in helping us get right the way we're applying tech to health care, to make it better.

Speaker 2:

I mean, just let me give you a small example. With that work we're doing with the Royal Colleges, we were looking at tech that might free up staff time. But we also asked them: what would you do with that freed-up time? Because a lot of the policy agenda is assuming, oh, great, we can free up an hour of time.

Speaker 2:

That will mean more consultations. Actually, what we heard from staff is: yes, we'd use it on patient care, but we'd also use it on training, on management, on admin, probably for reducing our overtime as well. So it shows you the complex context of freeing up time, and how we only get the kind of benefits we want as a system if everyone's engaged in talking not just about the technology, but about how it's gonna be used as part of a system where we derive those benefits. So having the space to do that well is the holy grail, in a way.

Speaker 2:

I totally agree.

Speaker 3:

Yeah. Something I was going to add is that we talk a lot about the idea of developing impactful solutions and then the challenges of selling them into health systems. And, Tim, you've hit upon so many really crucial and critical points here. One of which, I think, is that it's not just about getting buy-in from the clinicians, from the hospital management, from whoever is making those decisions. The best solution in the world could be shown, with evidence, to free up a ton of time and be really impactful across a health system.

Speaker 3:

But if that solution has a really big onboarding time, then that's gonna be a real blocker to it. So it's the case that sometimes a little bit of, you know, activation energy or whatever to get to that better solution is gonna be worth it. And then a doctor can see that and be like, okay, yeah, I'll spend half an hour just learning this, setting it up, and then I can get to it straight away and see that impact.

Speaker 3:

But, more often than not, especially if it requires change at a system level or a trust level or something wider, you're not gonna get people using the best solution in the world if there's too much of a barrier to entry. So that's why these conversations with clinicians and the users are so vital: how do they carry out their work? Does it slot into their current framework? Are they gonna have to change their other processes to use that technology? Because that's when things don't get used, and really impactful technology doesn't make the impact that we hoped it would.

Speaker 2:

That's absolutely right. I mean, at the Health Foundation, we've been supporting and funding innovation over many years. And what you've just described, Belle, is just a key lesson you see coming out of it, which is that, as we said, it's not just the tech. It's the change that's involved: redesigning your workflows, ensuring the culture's right to use it. And that requires a change management process.

Speaker 2:

It sounds mundane, but it's often very small amounts of resource, like clinical backfill to allow staff to step outside of their jobs and try something out first before putting it in place, or just a little bit of data collection, obviously training, whatever. Sometimes, for want of quite small resources on that, you see things going wrong which could have unlocked big gains. So, yes, as you said, there's a hurdle to clear to do that. It requires some investment to get this right, but it's often not tens of millions or tens of billions of pounds. It's quite small amounts of resource to do that process.

Speaker 2:

Right? But unless you do that, you see technology not delivering the benefits that are intended.

Speaker 3:

I guess it's a funny time in many ways, because we're at this precipice of AI technology. It's booming, and there is so much potential, and everyone can see it, and they know that the next generation of how we interact with technology is just there. We can feel it coming. In 10 years' time, the way we interact with technology is going to be so different, and AI is gonna play a crucial role.

Speaker 3:

What that will look and feel like, we're not quite sure yet, but it's there. You can feel it. And that's happening at the same time as, in the UK health system and health systems across the world, there's just this intense pressure, overcapacity, waiting lists and the like. Everyone wants the change we want to usher in, but they're just so busy.

Speaker 3:

And the priority always has to be: how do we give the best care to our patients? So it's a really interesting time to be in health care and AI and technology, because you can just see it waiting to happen. Yeah. The other thing I just wanted to quickly mention, because, Tim, you mentioned it right at the beginning of your talking: I made some notes, but you were having a great chat, so I didn't want to interject.

Speaker 3:

It's this idea of, like, what is powering the technology. Because I think that, for clinicians who are very evidence-based and have gone through a very long training process, they're used to being able to say X, Y, Z means this, to make decisions based on the data in front of them. Suddenly being asked to potentially rely on an AI tool which has that black box is a big ask.

Speaker 3:

And for AI that's developed in a very open way, that's less of a problem, where you can point to: these algorithms were trained on these datasets, this is what it does, this is how it processes it, this is how it learns. But for gen AI in particular, where you're often relying on these mammoth datasets which were developed elsewhere, and then you're using that for your own purposes, that's a bit of a funny one, because people don't really know. Like, I don't think anyone knows exactly what ChatGPT is trained on and things like that.

Speaker 3:

And then being able to use that in a clinical scenario raises so many questions, which regulators, and I'm sure lawyers as well, are gonna have a field day with over the next few years. And it'll be so interesting when we see the first cases go to court of, like you said earlier, who is responsible for these decisions? Is it the developers, who have developed technology with biases in it, technology that ultimately is designed to look and seem human? As we've already seen, their end goal is not to create true or even correct answers. It's just to make something feel human.

Speaker 3:

And then to use something like that for something which needs evidence and has a very, very real impact on people's lives: it's a big chasm to cross. I think it's gonna be fascinating.

Speaker 2:

Yeah. I mean, so many issues there. And I think health care is really showing the importance, the seriousness, of that issue about the representativeness of the datasets that these AI models are trained on: whether they're developed in one country and then used in other countries, whether they're trained and developed using a population that is representative of the target population. We're often seeing that play out with, for example, ethnic or racial inequalities and biases in the datasets on which these tools have been trained. Yesterday, our summit had a really interesting case.

Speaker 2:

I had a pediatric radiologist on my panel who said: all this stuff to do with AI image analysis, yes, it's brilliant, but it's all for adults. It's been developed for adults. Children aren't getting the benefit of that yet.

Speaker 2:

So, yeah. It's such an important issue: who we're developing AI for, the representativeness of the datasets these tools are trained on, and avoiding the biases and the very real errors and harm that could come with that if they're not used in the right way.

Speaker 1:

All of this gives me one really big pause for thought, which is, I mean, even as we look at it, you know, is AI the answer? I think someone raised a wonderful point yesterday: why use AI when a simple statistical model would save you a lot of faff? And, Belle, you raised the kind of precipice: we don't know what we're gonna be looking at in 10 years. And that, I guess, also brings in the skills conversation, which I thought was a really great discussion yesterday. And I think Susan was the pediatric radiologist who also brought up the question of how it changes not just the task, and what elements of the task a clinician is doing, but also the tasks that the clinician must do at any given time, not just from a tech perspective or a clinical perspective, but working with patients. You know, the radiology tech that now has to explain that the AI has found something and they need to book the patient in for another test: workflow-wise, purely workflow-wise, it's great, because it means the patient doesn't have to go home immediately.

Speaker 1:

You know, they can fit them in right then. They don't have to wait for something, because the AI has found something. They're now faced, however, with having to have that awkward conversation, that quite difficult conversation, that they would never have had to have beforehand. So I guess all of that is a roundabout way to say we're moving very quickly. And from a regulatory perspective, from a skills perspective, how do we know what we're gonna be looking at in 5 years?

Speaker 1:

There's the old adage that regulators are always 5 years behind anyway.

Speaker 2:

Yeah. And if that was the case anyway, with the speed at which AI is moving... I mean, we heard yesterday from Professor Zak Kohane from Harvard, and he was saying that 3 years ago, large language models couldn't pass a medical test. Now they can. You know?

Speaker 2:

And the speed at which things are improving is itself quickening. So keeping up is incredibly difficult. And, yeah, nowhere did you see that more than in that skills conversation. You know, further to what Belle was saying, staff are thinking: what are the skills I need to oversee the use of one of these tools responsibly? How do I question its outputs?

Speaker 2:

How do I explain why it's saying something, and how do I use it effectively? And then there's what you're alluding to, Huw: in freeing them up from some aspects of the job, it's allowing health and care workers to do what humans do best, which is interacting with patients, and very often those more patient-facing aspects of care. And for some roles, that's going to require more skills: not more tech skills, but more interpersonal skills, more skills at negotiating or helping people navigate systems, and so on and so forth.

Speaker 2:

So there are both the skills that staff will need to oversee and use tech properly, and also the new skills they'll need because tech's freeing them up from some historic aspects of their work to add more value in other areas. So it's just a huge challenge. And, just coming back to your speed point, you know, it's all very well changing curricula and student education, but things are changing every year, every 2 years. So this is going to be about continuing professional development for people already in the system. It's not a generational thing about how we're training today's students.

Speaker 2:

It's what everyone needs to know next year and the year after, and having the right kind of agile system in place to do that.

Speaker 1:

That'll be a very interesting one, again, for the professional bodies to look at from the CPD perspective, and how they bring that in. That's certainly a really interesting one. We've obviously chatted about a lot of challenges, but I don't think any of us would be on this call if we weren't, yeah, health techno-optimists. And I think that probably showed in the room yesterday when you shared that ongoing poll and survey of how optimistic we were about tech and AI's ability to, I guess, change our jobs for the better.

Speaker 1:

I'm not going to make you spoil any upcoming data that you might be sharing, but do you wanna trail the kind of things that you're working on in terms of looking at that side of things?

Speaker 2:

Yeah. I mean, we believe that getting tech right, and particularly getting AI right, is going to mean that new uses of technology command the support of the public and patients and staff. And that involves understanding what they think of technology and, as we were saying, involving them in that conversation about the design and use of technology. To help with that, the Health Foundation investigates public and staff attitudes to health tech and data. And this year, because it's a hot topic, we put in a module on AI, and we'll be publishing the results of that survey in a couple of weeks. This is a survey of 7,000 members of the UK public and 1,300 NHS staff.

Speaker 2:

But as you mentioned, Huw, one of the things we asked them was really a kind of barometer of people's sentiment. I don't wanna suggest that these questions have simple answers. But if you had to say whether AI would primarily improve your jobs and your work, or threaten your jobs and your work, which side of that are you on?

Speaker 2:

Well, I can't give you the spoiler on that. But when we asked our audience at the summit yesterday, there was strong optimism that AI would primarily be a driver of improving jobs and making them more rewarding. And, you know, I think that's an area where maybe health care is different from some industries. If you look at some aspects of manufacturing or transport, sectors where there are a lot of routine jobs, there are sometimes more pessimistic estimates of the impact of AI on jobs.

Speaker 2:

But, actually, the consensus, if you look across the kind of modeling and economic work that's been done, is that tech and AI will absolutely change work in health care, but won't primarily be leading to large-scale redundancies. They'll be changing the way we work and, if anything, shifting health and care workers more into tasks where humans add more value.

Speaker 1:

There's a real cliché here that I'm not gonna repeat, but I think we'll all have heard it in some form, about using AI and not being replaced by AI. Too trite to repeat on this podcast. But, no, it's been super interesting. Thank you, Tim, for chatting through this with us. For anyone who's been waiting for the stories, we're not gonna let you down.

Speaker 1:

So let's jump into our first story, which comes to us from HealthTechWorld: AI outperforms clinical tests at predicting Alzheimer's progression. This is news from the University of Cambridge, where researchers have developed an AI tool that claims to have outperformed traditional clinical methods in predicting the progression of Alzheimer's disease. Using cognitive tests and MRI scans, the AI can predict with over 80% accuracy whether a person with mild cognitive issues will develop Alzheimer's, and how quickly. If it could be taken from research into practice, this kind of technology could reduce the need for expensive, invasive tests and help patients get the right care earlier.

Speaker 1:

Definitely a win for, I guess, both patients and health care systems. I think we've all had a little look at this. My immediate thought is we've seen quite a lot of stories recently, in terms of studies, about how effective AI can be at prediction. This is, I think, based on several hundred bits of data, which is good. It's impressive.

Speaker 1:

How do we get technologies like this to scale? I mean, we've touched on part of it already on this call, but there are a lot of conversations and a lot of studies coming out about how it could help. How do we make sure that they do help?

Speaker 2:

Yeah. I found this a really exciting study and an exciting development. As you say, the use of AI and data analytics to help predict is one of the key functions, the key benefits, that these tools can offer. When it comes to scaling, though, the top thing in my mind was that this is a model built on data from 400 patients and then, yes, validated on several hundred more, which is great. But that's still a pretty small cohort of individuals.

Speaker 2:

And if we're thinking about scaling that, I guess a key question will be: what kind of size of dataset is going to be required to have confidence in using this on different and diverse populations? So a hugely exciting development, and great that it's worked. For me, thinking about the scaling challenge, testing it and validating it with larger cohorts and wider populations is gonna be really key.

Speaker 3:

It's interesting, isn't it? Because I think of that validation piece, and it brings to mind clinical trials and how we test and develop drugs. You start with a small proof of concept, you go to larger studies, and eventually you trial it on humans.

Speaker 3:

And I wonder if that's a place where drug development will eventually meet AI and this wider development of digital health technologies: a much clearer process of what's expected before you go to market, just so you can prove that clinical impact. But, yeah, the numbers are small. One of the things I thought was really nice and interesting about this study is that it was developed in a small group, like I said, 400 people, and then tested with some longitudinal data. The idea being that they can look at data points in time, use those to validate the algorithm, and then see: we're going to try and predict; does it work?

Speaker 3:

And they've shown that, yes, it outperforms the standard of care in predicting whether someone exhibiting signs of memory loss will progress on to dementia or Alzheimer's. But this also loops into the other discussion in a more roundabout way. It's not just about identifying the people who do have Alzheimer's and could benefit from early treatment, which is obviously great, because we know that people benefit from being on treatment at as early a stage as possible with Alzheimer's and dementia. It's also about identifying the people who are presenting to their doctors with memory loss or other symptoms and are found not to be at risk of progression to Alzheimer's or dementia. That's probably linked to other conditions, such as, I think they mention here, depression, which can lead to similar symptoms but obviously has a very, very different treatment regimen. So it's part of that bigger conversation as well: how does it help us make good decisions, which not only funnels the people who need it into the next stage, but also frees up capacity across the health care system?

Speaker 3:

So not only do people get the support that they need straight away, but the health care system isn't then having to take someone all the way through an Alzheimer's diagnostic pathway only to find at the end that they do not have it. It really frees up that space at the beginning, so that the clinical leads, the doctors, whoever is overseeing it, are free to treat the people who do need their support. That's why I thought this gives lots of hope across the system: it's allowing people to present, it's giving predictions that are hopefully really reliable, obviously validated by doctors and things like that, and it's really, really helping get people funneled to where they need to be sooner.

Speaker 1:

So I guess the barriers to taking this forward are gonna be twofold, aren't they? There's the access-to-data question, which taps into a lot of the work that you've been doing in the data infrastructure area, Tim. And then the funding piece: once you've been able to access this data, it's getting funding for this particular project to take it forward and trial it even further with larger datasets. When there are so many projects, how do we make those determinations? How do we get that data and bring in that funding?

Speaker 2:

It's a good question. There's only a certain amount of funding to go around, which is why, in terms of our public sector investment in research and innovation, it's really important that we're funding research on the basis of excellence, and that we have good systems for assessing and prioritizing research. But when I look at stories on health tech and see the amounts being invested by industry in developing new technologies as well, it makes you realize the huge amount of resources that are required to do that, and that private sector investment will be essential too.

Speaker 1:

Well, that brings us on nicely to our final story, which is from Open Access Government: digital health needs women at the heart of it. This looks at a number of health data and health innovation biases that have been perpetuated in medicine due to women's exclusion from health care, as well as some of the workforce issues and the progression of women in health tech and health innovation. Tim, you've had a read of this. What do you think?

Speaker 2:

Yeah. This was a story saying that digital health needs women at the heart of it, arguing that, historically, there have been biases in health that have perhaps neglected women's health issues. The menopause would be a good example. You know, actually, over the last 10 years in the UK there's been more of a push to improve menopause care, but I guess it's speaking to that position of historic neglect. And this article is saying we need to make sure digital health doesn't unfold in that way.

Speaker 2:

And as you said, Belle, a key part of that is ensuring that we are developing technology, and particularly AI models, with representative datasets, so we don't get that bias. This article argues it in the context of women, but there are many characteristics that we need to worry about in those terms. When I read it, it actually reminded me of a project we're funding at the Health Foundation, which is looking at retinal scans, particularly for diabetic retinopathy. Historically, those models have not worked so well for different ethnic groups.

Speaker 2:

And so it's a project we're funding to try to build models for analyzing those scans based on more representative datasets. It's a huge challenge, and I really enjoyed this piece. And just finally, the other thing this piece was arguing was the need for more women in the tech industry and the AI industry as a way of avoiding that bias. I think it's well known that those who work in tech development and AI development are not representative of society as a whole. A key way of ensuring that this stuff is developed in ways that are sensitive to the needs of different groups is if those involved in doing it themselves look like the populations they're intended to serve.

Speaker 3:

Yeah. This article opens by talking about how, historically, medicine has excluded women at various points, from clinical trials, from research, and has minimized women's pain, things like that. And that has been because, historically, medicine has been led by men and developed by men. They've been the ones putting money into the research trials. They've been the ones carrying them out.

Speaker 3:

They've been the ones then doing, you know, medical practice. So now we are at this really amazing new frontier, as it were, of AI and digital health. And it's a really great opportunity for us to draw a line in the sand and say, look, we have learned from that; let's not let it perpetuate through, you know, the new bit of health care. Because all the stuff we're doing now is gonna be the foundation of what digital health will be going forward, and making sure that algorithms don't, like you said earlier, exclude populations, and do represent the populations they're going to be serving, is one thing.

Speaker 3:

But it's also about making sure you have diverse teams that are able to call that out and just be aware of it. Because, let's be honest, historically, when lots of white men are in a room, they don't necessarily think about the other sorts of people who are going to be using that service, medicine, technology, whatever it may be, because that's not their lived experience. So it's really, really important that diversity is a core part of this. And I think we are so lucky in many ways that we have these learnings and are able, right now, to say, look.

Speaker 3:

We know that if we ignore this, there are gonna be issues in the way that people are treated in the future, so we need to make sure we hold ourselves responsible for that. Often that means, let's bring someone else into the room; let's make sure someone checks it. But also, if we look forward to how this might be approved through regulatory processes in the future, it's about ensuring there are checkpoints from external people as well, so that any algorithm being developed for medical purposes is proven to be validated across wide datasets and proven to be tested on them too, because it's not just a case of using diverse data.

Speaker 3:

It's also making sure that we're then testing on that data as well, because populations vary so massively, both at the local scale and larger. So I think it's an optimistic time in many ways. Let's make sure that we do the good thing. We have learned from centuries of, you know, not including women in trials, let alone women of color or women of other ethnic minorities. How do we make sure that that doesn't continue?

Speaker 2:

I think that's right. And, you know, the best guarantee of that is us being sensitive to it as an issue, and it's why I thought this was an important article. I suppose, as a footnote, I'd just add that while it's really essential we're considering the inequalities issues and how we're going to avoid them, I think it's also important to hold in mind the huge potential health technology and AI have for tackling inequalities. You know, so many of the uses of technology I'm seeing, whether it's remote care helping people with mobility problems, or auto-translation helping people with language barriers, or subtitles, large print, the list would go on. These uses of technology have the ability to transform and massively improve the experience of care for people who've historically faced huge barriers.

Speaker 2:

So just to finish in a kind of glass-half-full way, it's remembering the huge potential of technology to tackle those inequalities, as well as the risks we need to be sensitive to as it's developed.

Speaker 3:

I love that.

Speaker 1:

Agreed. Agreed. Well, we shall draw it to a close there. Thank you, Belle, and thank you, Tim. It's been a fascinating conversation today.

Speaker 1:

Really appreciated having the opportunity to run through it all with you. We will include links to all the stories and everything we've discussed from the Health Foundation in the newsletter going out this weekend. You can also find the best jobs, podcasts, and events in health tech in that newsletter. That's healthtechpigeon.com. We'll be back next week.

Speaker 1:

Thank you for listening.