The Healthtech Podcast covers the latest in health and technology through interviews with disruptive healthtech startups and leaders. Whether you're a patient, founder, investor or simply interested in healthtech, The Healthtech Podcast has it covered.
Listeners in over 150 countries.
New episode every week.
Hosted by Dr James Somauroo.
www.thehealthtechpodcast.com | www.somx.health | www.jamessomauroo.com
[upbeat music] Welcome to the HealthTech
Podcast. Here, we talk about everything
healthcare and technology. And I'm your
host, James Somauroo.
Hey, everybody, welcome. I have with me,
or back with me, I should say, Dr. Ross
Harper, the CEO and founder of Limbic. I'm
on the road at the moment, hence I don't
have my fancy usual podcast background.
And I've got my Yeti Nano that I entrust with my on-the-road activities. So, yeah, I hope the sound
and video quality's good enough for you
anyway. But recapping on part one with
Ross that we did a few weeks ago,
um, Limbic, obviously the clinical AI used
across, uh, the NHS, uh, since about
2020. They've supported close to half a
million patients, now used by around 45%
of UK services. In part one, we talked
about... If you wanna go back and listen
to it, we talked about what clinical AI
really means, regulated, evidence-based,
integrated into pathways, and not just
this consumer wellness stuff with a chat
window. Um, Limbic being the only mental health chatbot with Class IIa medical device status, which raises the bar on safety,
efficacy, and oversight. And what I
found super interesting that we talked
about last time was how Limbic does
diagnosis. Today, we're gonna be talking
about treatment. But to just remind you
about diagnosis. So, Ross explained why he
never lets an LLM make a clinical
decision. So at Limbic, they have a
dedicated predictive system, a clinical
reasoning layer, which Ross described
previously as similar to how you might
think of older-school machine learning.
Remember that term, everybody: machine learning from before LLMs. And that's what does the diagnosis support. The LLM is simply holding the
conversation. And that pairing is what has
led to the success, both from a
diagnostic perspective and from a company
perspective, of Limbic. Now, you might
remember last time, if you listened to it,
that there is also an equity piece here. They did a huge clinical study where people knew they were speaking to the AI and still rated it as empathic. And crucially, this is the
bit that really made me think about
things, minority groups were more likely
to self-refer when it was AI greeting them
at the front door rather than a human.
And that was one of the things that made me talk about the attention to detail that Limbic have over most others in this space. They make what seem like small but are actually very big choices: neutral voices, careful wording. You know, "You might benefit from help with things like that." All of these
things matter. Um, and clinicians
retaining final oversight, of course. And
so, Ross, welcome back. Um, delighted to
have you, mate, and really looking forward
to getting into this because, um,
[laughs] as you can tell, it's a bit of a,
it's a bit of a passion project of mine
to kind of get to a point of, um, us
actually using AI in the right ways with
that clinical oversight from human beings.
So, yeah, welcome. How are you?
I'm great. Thank you for having me, James.
And thank you for the, um, the
introduction. You, uh... I've got nothing
to add. You, [laughs] you said it better
-than I possibly could.
-[laughs]
-Um, and I appreciate that.
-Invite me into the next investor meeting,
-Ross. What are you, series B?
-Absolutely.
-Series C next? It's probably a big one.
-Absolutely.
-[laughs]
-Yeah. Yeah, it's great to be back.
Good, good. So let's talk about treatment
then. So
set the scene for me. If you want to, kind of round off anything that I said about the diagnostic side and link it to treatment for me. So
I guess first of all, obviously, Limbic
being known as, in part, diagnostic, and
we've obviously been through that. It does
diagnose you. How much treatment does it
do? What treatment do you guys do? What
are you comfortable with? And let's, I
-guess, start there.
-So it's diagnostic support on the front
end, and we spent a lot of time
in our last conversation talking about
that intake, that triage, that clinical
assessment. And we have this diagnostic
model that is, as you mentioned, a Class IIa medical device, which helps, um,
behavioral health clinics
assign
patients to the correct treatment pathway.
So now we're in. Now the patient is in.
They've used AI as that first step of treatment. And now they're
waiting. They're on a waiting list. One of
the benefits of AI is that it's available 24/7. Um, so it really does create a
world of abundance, and in healthcare,
that is important. So you've just entered,
and rather than waiting for a couple of
months for your first treatment session
with a clinician, Limbic is there
immediately. And it's able to now start
delivering validated tools from cognitive
behavioral therapy,
and it delivers these through generative
conversation. Okay? So it's these large
language models, but they are delivering
validated therapeutic exercises, active
listening, psychoeducation. You know,
everything from the literature. Um,
and this is important because CBT, one of the most widely used mental healthcare treatment modalities, is very skills-based. It's all about adherence to the treatment protocol, and
that's where generative AI and large
language models become critical. Because
beyond just being this sort of cool, sexy
technology, they also solve for one of the
biggest problems that has plagued
mental health apps up until now, which is
that no one uses them for very long, okay?
Hmm. Okay.
An independent study of digital mental health solutions, separate from Limbic, found that amongst the top-performing mental health apps in the app store, average two-week retention was 4%.
Right? That means only about one in 25 people is still using that app after two weeks. You
cannot have a therapeutic effect if no one
uses the tool. So large language models
and generative AI, they buy you
engagement. They make the solution
adaptive, naturalistic. They give it a memory so that people come back. And, you
know, you can see the effect that that is
having across all different industries.
ChatGPT gained hundreds of millions of
users in record time because the user
interface is so adaptive and engaging,
right? So you should think of generative
AI and large language models as solving
for one of the biggest problems in digital
mental health, which is user engagement.
That's why we need it. Um, and so if we
can deliver cognitive behavioral therapy
tools and techniques through these large
language models, you buy treatment
engagement, which is the way it has a
therapeutic effect, because it's a very
skills-based exercise. And that's what we
saw at Limbic and, and why we worked hard
to unlock large language models
by making them safe, clinically accurate
and explainable for use in a healthcare
setting. We didn't need to be convinced
that large language models would have a,
have a powerful effect on the patient and
clinical outcomes. What we needed to solve
for was, how do we make them safe and how
do we deploy these responsibly within a
care pathway? That's where all our,
-um, uh, research efforts went.
-And so just to be absolutely clear then,
it's the same thing that you talked about
on the diagnostic side. It's that the LLM
is holding the conversation, but it's the
reasoning layer that is actually making
-the decisions.
-Absolutely. And as we discussed last
time... So, so James, you're quite right
to pull this out, okay? In theory, what's happening at intake, triage, and clinical assessment is identical to what's happening in cognitive behavioral therapy. In
terms of the technical stack, it's the
same thing. But the goal of the AI agent
is different because it's at a different
point in the care pathway. That's the,
that's the only difference. Um, and so
yes, we used the Limbic layer, this
specialist clinical reasoning system,
which pulls from the evidence base, the
clinical literature, and uses specialist
machine learning models to evaluate all
conversations against different clinical
criteria. But now we are looking at CBT
tools and techniques rather than the
-initial clinical assessment.
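[To make that pairing concrete, here is a minimal, hypothetical Python sketch of the architecture as described: an LLM drafts each conversational turn, while a separate clinical reasoning layer evaluates the transcript against clinical criteria and keeps decisions like risk escalation and pathway assignment out of the LLM's hands. None of the names, thresholds or keyword rules below are Limbic's actual code; they are toy stand-ins for the specialist predictive models Ross mentions.]

```python
# A minimal, hypothetical sketch of the pairing described above: the LLM
# drafts the conversation, while a separate clinical reasoning layer makes
# the actual decisions. Names and rules are illustrative, not Limbic's code.
from dataclasses import dataclass

@dataclass
class Assessment:
    risk_flag: bool      # e.g. crisis language detected in the transcript
    pathway: str         # suggested treatment pathway for the clinician
    confidence: float    # calibrated score from the predictive model

def clinical_reasoning_layer(transcript: list[str]) -> Assessment:
    """Stand-in for a dedicated predictive system (pre-LLM-style ML) that
    evaluates the whole conversation against clinical criteria."""
    text = " ".join(transcript).lower()
    risk = any(kw in text for kw in ("end my life", "hurt myself"))  # toy rule
    return Assessment(risk_flag=risk, pathway="CBT-step-2", confidence=0.87)

def next_turn(transcript: list[str], llm_draft: str) -> str:
    """The LLM only holds the conversation; the reasoning layer gates it."""
    assessment = clinical_reasoning_layer(transcript)
    if assessment.risk_flag:
        return "ESCALATE_TO_CLINICIAN"  # the decision is never the LLM's
    return llm_draft                    # safe to send the drafted turn
```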
-So this, there's an obvious question that
this brings to my mind here now, which is
that... And if I may be frank
[laughs], people are probably asking
ChatGPT, just ChatGPT without your
reasoning layer, the same sorts of
questions from diagnosis, but also
probably from treatment. "Hey, Chat, can
you just give me a bit of CBT to help me
through this thing?" Or maybe not as gauche as that, but
-you know what I'm saying, right?
-Absolutely.
Essentially, people are doing this
diagnostic and treatment
activity with just ChatGPT. How do you
feel about this?
You're right. Um, and just to add some
color to what you're saying, James,
I believe, unless it's changed recently, that wellbeing, wellness and mental health conversation is the top use case for
-ChatGPT right now.
-It is. We've put it into the Healthtech Pigeon
-a few times.
-It is the- [laughs]
-It is. You're right.
-There you go. It is the top use case.
Okay? Last week, there were 800 million
-unique users on ChatGPT.
-My goodness.
-So, uh, last week.
-[laughs]
So if that many people are using it...
[laughs] I know, right?
-[laughs]
-It's in- it's absolutely, uh, unfathomable
-scale.
-[laughs]
Last week, 800 million people used
ChatGPT. And we also know that the primary
use case is
-mental health-related conversation.
-Mm [hums].
-Okay? So the horse has bolted.
-Oh yeah.
This is out there. The question then is no
longer, should people use
-chatbots-
-Hmm
... for mental health-related issues
'cause it's not should, it's happening.
The question then becomes, do we want to
bring this inside
the boundaries of healthcare or do we
leave it out in the unregulated wellness
space? And I think that's where we really
hit upon the key point, because healthcare
is not the same as wellness.
Healthcare is a regulated industry with standards that must be met, and,
-um, you know, an evidence base behind it.
-Mm [hums].
So that's the way I look at the situation.
I am fine for wellness to persist, and it
can be very helpful for a large number of
people, and that's fantastic. I'm really
pleased about that. And in many ways,
ChatGPT has democratized access to
information, um, with, with, uh, valuable
wellness information being a key aspect of
that. But we must not conflate wellness
with healthcare. We must recognize the
differences between these two things. And
when it comes to mental healthcare,
generic chatbots are woefully incapable
of delivering
rigorous, evidence-based intervention.
And we are already seeing with some
tragic news headlines that, you know, I'm
sure you've come across, James, we are
already seeing what happens when you allow
generic chatbots to overstep with
vulnerable users suffering from
diagnosable mental health conditions.
And that was actually going to be my next question. What is the problem? So overstepping clearly is one of them. How else does this problem show up? Obviously because, again, it is being used for therapeutic conversations. These are happening, to your point, and we are seeing that turn up in the news.
I think there's a job that we can do here... Because even people listening will probably be using it at, you know, edge cases and things like that, or in the run-up, or... There's uncertainty. There'll be clinicians listening that see patients, there'll be GPs listening whose patients are probably gonna use it in this way. What can you arm the listeners of this podcast with as evidence for why they shouldn't be doing this? How might this show up in a negative way, the more people use ChatGPT for those healthcare conversations as it breaches beyond wellness? What are the issues that people are gonna see? Is it wrong diagnoses? I mean, you must have used ChatGPT for this stuff. Does it actually try and give you a diagnosis, or does it direct you into the
-system?
-It depends on how you prompt it, quite
-frankly.
-Right.
Um, it's just fallible in many ways. I think it tends to be less about diagnosis, which it often tries to steer clear of because-
-Fine
-... it's been told not to.
-Sure, sure, sure.
-Within the literature, you may come across distorted thinking patterns, right? Cognitive distortions are a very well-known aspect of cognitive behavioral therapy, as are maladaptive behavioral patterns. And what is a distorted thought pattern and what is a maladaptive behavioral pattern differs depending on the context. ChatGPT is not well-versed in adjusting its language or behavior according to these sorts of clinical constructs. A concrete
example that often comes up: because it has sycophantic tendencies, it seeks to please, it seeks to engage, and it often tends to agree with what people are saying. And so you can very quickly create this spiral where ChatGPT may well encourage somebody with an eating disorder to go on a diet, which would be totally inappropriate given the wider clinical context of the situation. But that's where it tends to
misstep. We had a tragic case recently where, with somebody who had been preparing to end their life, ChatGPT was promoting the decision, supporting the conclusion that the user had made, and even suggesting places nearby where they could purchase a rope. And so it's these sorts
of, uh, scenarios where it becomes
patently clear that it is not equipped
to handle
vulnerable users suffering from acute
mental health issues. And
how could we expect it to be, James?
Because it's not a specialist system for
those types of users. It's meant to be
serving people from multiple different,
um, backgrounds and different
psychological, uh, mindsets. And so
if the same model that is, you know, rapping the recipe for a quiche in the style of Eminem is also needing to deliver these very sensitive and nuanced conversations, it's a bit strange that we would expect it to
-be, um-
-Mm
-... uh, appropriate.
-No, you're right.
And so that's why I think we need
specialist systems with a clinical
evidence base behind them whose behaviors
are constrained by the clinical
literature. And that's where you have these ongoing clinical models, like we have at Limbic, looking at cognitive distortions, looking at maladaptive behavioral patterns, cross-referencing everything the AI is saying with the clinical protocols.
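[As a rough illustration of that per-utterance cross-referencing, here is a hypothetical sketch in which every drafted reply is checked against clinical criteria before it reaches the patient, and the system hands off to a human when no safe reply is produced. The contraindication lists and function names are invented for illustration; they are not Limbic's protocol models.]

```python
# A hypothetical sketch of the per-utterance checking described above: every
# drafted reply is cross-referenced against clinical criteria before it is
# sent. The contraindication lists and names are invented for illustration.
from typing import Callable

CONTRAINDICATED = {
    "eating_disorder": ("go on a diet", "calorie deficit", "fasting"),
    "suicidal_ideation": ("you should", "places nearby", "method"),
}

def violates_protocol(draft: str, presenting_issue: str) -> bool:
    """Reject any reply that would reinforce a maladaptive pattern for this
    patient's presenting issue (e.g. endorsing dieting for an eating
    disorder)."""
    text = draft.lower()
    return any(term in text for term in CONTRAINDICATED.get(presenting_issue, ()))

def safe_reply(llm: Callable[[str], str], presenting_issue: str,
               prompt: str, retries: int = 3) -> str:
    for _ in range(retries):
        draft = llm(prompt)                  # LLM holds the conversation
        if not violates_protocol(draft, presenting_issue):
            return draft                     # passed the clinical gate
    return "HAND_OFF_TO_CLINICIAN"           # fail closed, not open
```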
Love it. Love it. And again, you're
showing me the, you're showing me the
detail again, which I think is so, so, so
important here. So let's, let's talk about
outcomes then. You talked about the
improvement in engagement. It's great that we've got LLMs because they're gonna increase engagement. I'm gonna be sort of bold in my question here. Is human plus AI better than just human, or are we aiming for parity?
Well, look, I, I think it's pretty obvious
at this point that AI plus clinician
-equals better care. I think that's-
-Okay
... an obvious statement. I think that is
supported not by conjecture and
speculation, but there is now a growing and consistent academic literature and clinical evidence
base that shows that things are better.
Um, exactly how and why does vary
depending on, on what you're looking at.
So at Limbic, as we discussed last time, James, access to mental healthcare is materially improved by
having an AI at the front door handling
intake and triage. We have seen that
across multiple sites and hundreds of
thousands of patients. We also see
diversity, equity and inclusion improve.
And the Holy Grail, we saw clinical
outcomes, patient recovery rates improve
due to better identification of the
primary presenting issue and timely
allocation to the correct treatment
pathway. Okay? So these things are just a
given. These are, these are high-
-Mm
-... impact peer-reviewed findings at this
point. We see something similar happening
in treatment. Um, and again, I can pull on
Limbic's data just because I know
[laughs] I, I-
-Pleasure
-... you know, it's top of mind for me. Um,
but within treatment, the clinician continues to have their weekly or
fortnightly sessions with a patient, but
because Limbic is being used in between
those sessions, we are seeing better
patient activation, better engagement
through the treatment protocol, more
likely to show up to their next session,
-and we are-
-Hmm
... critically seeing they recover faster
in fewer sessions, okay? So by
having this always-on, clinically rigorous
and evidence-based AI handling all
the time in between those individual
sessions, progress through a treatment
protocol is much, much faster, and as a
result, you tend to see outcomes
improve. And again, we've got peer-reviewed
findings that illustrate this point, I
think, quite nicely.
In that case then, Ross,
what is the opportunity here? Because
given where mental health service
provision is in the UK, worldwide, and we
can actually talk about both here, I
think. It's probably important that we
talk about both, that we've talked for so
long about the volume issue. There aren't
enough human beings to deliver one-to-one
talking therapies. It seems that the
conversation then turned a while ago to, "Well, perhaps tech can help us. Perhaps we can scale human delivery
through tech, through, um, telemedicine
and things like that." Now, obviously, AI
coming in, perhaps that can do a good job.
We're now, as you quite rightly point
out, we're seeing
a position where volume and quality can
both now go
up. Is the opportunity there now? I don't wanna say solve this problem, because in some ways, you know, there's always gonna be issues, but do you see the next two, five, seven years of us making a serious dent in what is frequently called one of the biggest problems that the NHS has in
delivering the amount of mental health
services that we need? And secondly, is
that the case globally, do you think, and
is that through companies like yours?
Well, look, I'm obviously biased, but I do
spend a lot of my time thinking about
this, and I believe so. The reason I'm so
excited about the next few years
is because for the first time, we do
really seem to have a credible path
-to scaling-
-Mm
-... high-quality care.
-Mm.
Um, and as you pointed out, the digital
revolution, you know, that 1.0, 2.0-
-Mm
-... whichever one it was.
-[laughs]
-Um,
teletherapy was great. It was massively
convenient and it helped you to connect
clinical supply to patient need, um, uh,
more efficiently, more effectively-
-Yeah
-... because now an individual didn't need
to get to a real brick-and-mortar clinic.
But what it didn't do is it didn't solve
the underlying supply constraint. There were, and there are, the same number of trained mental health professionals alive on the planet, and
teletherapy didn't change that. And we
already know that there is an insufficient number of clinicians to serve the massive
patient need due to disorder prevalence in
mental illness. You know, depression and
anxiety are the leading cause of
disability worldwide, right? If you're looking at quality-adjusted life years in that sense, I find it hard to find an area of healthcare that has a broader impact. And so with this intractable
workforce supply issue,
teletherapy didn't solve that,
but AI might.
I really believe it will. Because what we
are seeing is
these clinically rigorous, evidence-based
AI agents like Limbic
are now essentially creating
the final tier
in the staffing pyramid, where you had
psychiatrists, psychologists,
therapists, licensed clinical social
workers, the entire gamut of different
clinician roles. But AI now can be that
infinitely scalable workforce.
AI represents an autonomous member of the care team, which means that we now have a way to scale the clinical workforce. Whatever the clinician role, Limbic and AI agents represent the opportunity to have an infinitely scalable workforce of AI agents sitting beneath those trained clinicians. And that is a
way to 100X or 1,000X, um, the human
clinicians and their expertise in a way
that teletherapy just didn't. You know, a
video call didn't 1,000X clinical
expertise and clinical reach. AI agents
that do autonomous delivery overseen by
trained clinicians, that does have the
potential to 1,000X care. And if you do it
in a way where, as we're seeing in our,
um, peer-reviewed papers at Limbic, if you
do it in a way where you can, um, observe
improved access, reduced wait times,
improved experience, and critically,
improved patient outcomes, then you really
are scaling quality as well as
clinical reach, and that's the key thing.
So
I have very little doubt
that AI represents the only credible path
to scaling high-quality care. And, um, mental health
and mental healthcare is such
a great
specialist area in which to be the crucible for this
new way of delivering treatment. Because
unlike many other healthcare domains,
therapy, psychological therapies, they
require hours of clinician time because
the conversation is the treatment vehicle.
And so compared to maybe a GP visit
that's 10 minutes, a CBT session could be
60 minutes. And so every hour of clinician
time becomes the atomic unit of care that
cannot be scaled, and AI really gives us
a way where we can break this constraint
and bend the cost curve on delivering
-quality outcomes.
-Does the specialty need to change in light of this new technology? Is there an adaptation that has to work the other way round? So obviously, we're talking about treatment now. And obviously, this form of treatment, with Limbic in between the appointments, has so many positives coming from the increase in signal and then processing. It's more sticky, so we're getting more of everything, so everything leads to higher quality, better care. I see this with the scribes, 'cause with the scribes there has to be a lot of research on how this changes the consultation. Because we shouldn't just be doing the same thing but digitized, because that's not the optimum; that just assumes we got it right first time. There's obviously gonna be an adaptation here. Do you feel that the more you look at treatment and what Limbic's doing, there's gonna be some adaptation in the way that the specialty evolves, or specialties, plural, evolve?
I think there will. I don't, I don't think
there needs to be. We could,
hypothetically, stick with the current
treatment modalities and treatment
protocols
and just have AI amplify them. That, that
is a possible scenario. But
there is an opportunity to go further, and
that's what gets-
-Hmm
-... me very excited.
-Hmm.
-Because one thing AI is very good at is integrating multiple different data sets and experiences that no human could ever have in one lifetime, and combining that in a statistically meaningful way to see patterns and to evolve. That's what AI is just great at. That's, you know, one of the things it's designed for. And
so
what
I see as a huge opportunity over the next
five to 10 years
is AI agents like Limbic working as part
of a care team, working with patients, and
also providing clinical decision support
to clinicians. And using this position
to develop new modes of treatment, new
models of care delivery based on the
experiences that it is having at the point
of care and seeing who recovers faster
and who doesn't. And it can then abstract
that into brand new models that humans
have never conceived of before. So I
actually see the phasing as AI will
initially scale our current models of care
to reach everyone everywhere, which is
already a huge win for society, but then it will go one step further, and we'll begin devising new, highly effective
ways of delivering treatment.
You say that you've started thinking about this already. Uh, that's not
surprising given the level of scale that
you have. You- you don't necessarily have
to answer this question, but are you
starting to see those things across the
level of data that you have? Are you
seeing specific opportunities for some population-level impact here or intervention
-here?
-We are. And it's, um, it's an active area
of research from our team of 10 PhDs
in AI and psychiatry. It's
not a quick one to answer, unfortunately.
There's no single soundbite. We use things like reinforcement learning,
where your AI seeks to update its own
parameter weights according to some
downstream reward signal that it seeks to
optimize. So if you make that reward
signal something like patient recovery,
then it begins to adapt itself, but it
does so in an abstract way, which is
empirically effective, but doesn't
necessarily, um, uh, provide a clear
answer to what it's doing differently, if
-that makes sense.
-Okay.
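[For readers who want that reinforcement-learning framing in code, here is a toy, textbook REINFORCE sketch: a softmax policy over two hypothetical treatment exercises is nudged by a stand-in "recovery" reward, so probability mass drifts toward the higher-reward exercise, without the update ever explaining what it is doing differently, which is exactly the interpretability caveat Ross raises. Everything here is invented for illustration and is not Limbic's research setup.]

```python
# A toy, textbook REINFORCE sketch of the framing above: a softmax policy
# over two hypothetical treatment exercises is updated by a stand-in
# "recovery" reward, so probability mass drifts toward the better exercise.
# All numbers are invented; this is not Limbic's research setup.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)          # policy parameters over two toy actions
alpha = 0.1                  # learning rate

def policy(theta: np.ndarray) -> np.ndarray:
    """Softmax over the two hypothetical exercise choices."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def recovery_reward(action: int) -> float:
    """Stand-in for the real, slow clinical signal (e.g. symptom-score
    improvement); action 1 is secretly better on average."""
    return float(rng.normal(loc=(0.3, 0.7)[action], scale=0.1))

for _ in range(500):
    probs = policy(theta)
    a = int(rng.choice(2, p=probs))   # pick an exercise
    r = recovery_reward(a)            # observe the downstream reward
    grad = -probs
    grad[a] += 1.0                    # gradient of log pi(a) for a softmax
    theta += alpha * r * grad         # reward-weighted REINFORCE update

print(policy(theta))                  # mass has shifted toward action 1
```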
I think it's a very exciting time to be
alive. I think it's a very exciting time
to be in this field
because
I have no doubt that, um, healthcare
specialties like psychological therapies,
um, but other domains like primary care
and musculoskeletal care, sort of like physiotherapy,
almost all areas of medicine will see AI
develop
on the traditional models of care. And
they will develop in the direction of
improved patient outcomes if we set the
problem up as I described, where the
downstream reward signal is something like
patient outcomes.
Can we talk about economics, Ross? Put me in the position of someone that buys Limbic and what they end up feeling here, because [sighs] in what we've talked about a lot in this conversation over two parts, it feels like there are just so many areas of real stepwise improvements in efficiency.
Even simply the stickiness of patients seeing the programs through end to end, increasing because LLMs are used, must then lead to, obviously, increased adherence, better results, and therefore they bounce back less to the system, and all this sort of stuff. And that's just one small component of everything that we've talked about here, let alone the quality of the diagnosis, the quality of the actual treatment, and the fact that we're optimizing for all of that with human plus AI. So what does a service that
-brings Limbic into it notice?
-Well, when we provide our AI agents to provider services like NHS Talking Therapies,
what we're solving for them is, uh,
primarily workforce capacity issues. They
-don't have enough staff to be-
-Okay
... able to deliver the, the care that
they are contracted to deliver. Um, or if
they do, then there are downstream
problems around the waiting list and, and,
and areas that they're looking to
optimize for. So what Limbic does is we
come in and we say, "Hey, look, at intake,
at triage, at assessment, and even at
treatment delivery, we are going to do a
lot of the heavy lift. And as a result,
there will be more clinical hours in your
workforce
to allocate to other areas." That's the
primary, um, way that this would be
positioned as we offer this to provider
organizations. What's exciting though is
that this is really just the foundational
step. Once we are in and we are, we are
supporting in this way, then
Limbic, as clinical AI experts, as, you know, I think is-
-Mm-hmm
-... reasonable to describe ourselves as-
-Mm-hmm
-... paired with
clinical care delivery experts, who are
these NHS provider organizations, these
clinicians, these services we work with in
the United States,
we then have an opportunity to work
together on true pathway transformation.
Um, and that's where the models begin to
evolve. And we start to take the data that
we are seeing and we start looking for
the next low-hanging fruit so that we can
really first 10X, then 100X, and then
1,000X the clinical workforce without
-compromising on patient outcomes.
-Mm. Have you had to be quite adaptive with
your business model globally? Because
what works for the NHS certainly doesn't
-work for the US or vice versa.
-100 percent.
So how do you think about that with
licensing and cost per, well, whichever
unit we're gonna choose here, um, on
whichever side of the table? How do you,
how do you even go about figuring that
out? [laughs]
I mean, look, this is non-trivial and
I've, I've-
-[laughs]
-... um, long been of the opinion that it's
not just a technical innovation that is
going to define the winner. It is also
-business model innovation-
-Yes
... and truly understanding the, um,
financial incentives within the market
that you're going into. Because at the end
of the day, it is operating as a business
and we, we can't be blind to the flow of
pounds or dollars. So you're absolutely
right, the UK is different from the US.
One thing that is worth pointing out about
the UK is that what we call health care,
the US describes as
value-based care. Now, we, we tend not to
talk in these terms because it's just
health care. But what's very important and
very hopeful about the United Kingdom,
and particularly the NHS, is that more
often than not, the provider
of care
works with the payer
of care. And those incentives are aligned,
so the provider is incentivized to
deliver quality outcomes at the lowest
possible cost. And the payer also seeks to
allocate their spend to good outcomes at
low cost. Now, in the United States,
things are a bit different. We have a, a
system or a structure often referred to as
fee for service. Now, the provider of
care can be wholly independent from the
payer of care, the health insurance
company. And so you end up with this
scenario where the provider of care is
incentivized to
do services, because then they just send a
bill to the payer. But the payer would
like high-quality care at the lowest
possible cost, but due to the way the system is set up, it's no one's fault, the provider organization actually maybe wants to run a number of different services, and they're gonna get paid regardless. So there often isn't even a financial
reward for getting a recovery in fewer
sessions in the United States. So that,
that alignment that you see in the NHS,
which is different from many other health
care markets, actually makes the NHS a
phenomenal place-
-Yeah, interesting
-... to launch clinical AI agents-
-Interesting
-... because they are incentivized to
deliver a solution that can, can, um, act
in the way we've described. Whereas in the
United States, it's a little bit more
complicated. And we've managed to do it at
Limbic, we're now live serving patients
in 13 US states. So we are scaling out
there. But it's, uh, um, it's its own
financial modeling challenge.
So Ross, just starting to wrap up here. I
wanna make sure that we've captured
what's most important on this treatment
side. I know that when we talked about
diagnosis, we went into so much detail
around the Limbic layer. Perhaps you wanna
start there. What is most important to consider when thinking about treatment for an AI system in the mental health space? How do you do this differently?
What is most important? And for people
looking at this as a potential solution,
for systems looking at this as a potential
solution,
what makes the most difference for you as
you build Limbic here? What are your main
differentiators and what's most important
to you?
In a phrase, uh, "clinical rigor". We
always wanted Limbic to be a clinically
rigorous, evidence-based solution for
healthcare. It's very easy to put a thin user interface on top of ChatGPT and call it a healthcare solution, but it's very dangerous. And healthcare organizations, be it the insurer or the provider, are wise to the superficial solutions that belong more in the app store, in the consumer wellness app space.
So for us, we mentioned the Limbic layer.
It's this clinical reasoning system.
That's where all our research efforts go,
and it's about making sure that much like
we did with diagnostics, on the treatment
delivery, the AI is pulling from a
validated evidence base. It has specific
models designed to capture clinically
relevant patterns in the user interaction
and then surface this in a way that
actually augments care. But the tech stack
aside, that's how we do it, but the most
important thing is that at the end of it,
you have peer-reviewed clinical studies
demonstrating outcomes, you have
regulatory oversight and accreditations on
information governance, data protection,
quality assurance, and you've got
widespread proof
within a care setting. It is not about
taking a wellness solution with millions
of users and saying that that is proof
within a healthcare setting. These are
different industries. And so
as we move forward, both with the
diagnostic support and the treatment
support, it is all about making sure that
we have the largest clinical evidence
base, that we are the gold standard for
regulatory approvals, and that every
single thing we do is geared towards
integrating within care flows and
augmenting and amplifying existing care
provision so that this solution really
does embed within mental healthcare
delivery itself. And yes, the Limbic layer
is how we do all of that from a technical
perspective, but the proof comes from
-everything after that.
-Absolutely. Well, listen, Ross, it's been
an absolute pleasure. Um,
I am not conflicted, and therefore I can say this: I've spoken to a lot of people that do this, I've been on this quest, I've done keynotes on this stuff, and Limbic does this differently. I can only say that, to be perfectly honest. I've never heard this level of technical and clinical detail. And you do more than even what we've talked about on this podcast, from times I've spoken to you previously as well. But I think it's very,
very, very important that anyone that's
looking to build AI into healthcare
approaches it with the same level of rigor
and respect from the clinical side, from
the technical side in order to deliver
what you can deliver, and the proof is in
the pudding. That clinical study, the fact
that minorities are getting a better
experience and outcomes, this is all due
to the level of thought and detail
that's gone into the way that this has
been approached. And it really is
a call to action for people listening
here, that this is what is required. To
Ross's point, this is not about putting a
thin layer over anything and expecting
quick wins and quick results. This is
healthcare. This is different. And Ross, I
really appreciate the level you've gone
into, um, and I think this will help a lot
of people building this stuff. I think it
will help a lot of people critically
appraising this stuff to bring it into
their organizations. Um, and I think for
those of us that are patients as well,
this gives us a huge amount of hope
that these volume issues, these complexity
issues, and these quality issues are
being addressed with the human and the AI
models that you're building, so I really
appreciate it. [music] Hey, everyone.
Thanks for listening and making it all the
way to the end of this episode. Remember
to subscribe, rate us, and leave a review.
And you can head to the description of
this episode to follow me on all of my
social media so you don't miss out on any
of the latest health tech
content.