Some useful links:
HIPAA Compliance and ChatGPT
Moor, M., et al. (2023). Foundation models for generalist medical artificial intelligence. Nature.
Position Paper on Indigenous Protocol and Artificial Intelligence: guidelines for Indigenous-Centred AI design, created from diverse communities in Aotearoa, Australia, North America, and the Pacific.
Please send us questions and topic ideas to email@example.com
What is Team Up! Team-based primary and community care in action?
A podcast that brings together primary care providers, healthcare planners, patients, innovators and others to talk about the changes that are happening in primary care in British Columbia.
Morgan: Have you heard the hype around generative AI? Have you seen the news about ChatGPT writing reports and passing exams, or seen images made by apps like Midjourney? Are you wondering how AI might impact primary care now and into the future?
Yeah, me too.
I think artificial intelligence
is in a honeymoon slash freakout
stage right now in the media, so why
not do a podcast episode about it?
And this came as a request from, uh, one of you who wanted to know a bit more about what I thought about artificial intelligence in healthcare.
I think AI and machine learning
are all the rage right now,
although they're not brand new.
We have had this shift lately towards some
of these new, what are called foundational
models, and that's where the hype is.
These new models have created this sort of rapid development of generative AI tools that have suddenly been able to hit a bunch of different markets.
And some people have asked,
can an AI bot be a team member?
I don't think we'll see it ubiquitously
as an AI assistant that you ask
questions in real time, in the
middle of an encounter anytime soon.
Ethan: Oh, I think you would be surprised
Morgan: Well, I mean in the
middle of your work with a patient
during a visit as you speak.
Ethan: I know that's what you meant.
I think you will be surprised.
I'm coming faster than you think.
So maybe we will, but I think for longer we're gonna feel uncomfortable using it.
I don't think it's likely the first
place that we're gonna stick AI into
healthcare and feel comfortable about it,
and I'm pretty sure about that because, well, we're using AI right now and have been for years, and not just in the lab. Although we've done some work in our research lab, we've mostly been using more focused AI tools that are built to work on one or two tasks and do a good job there.
There are actually over 500 AI tools that are FDA-approved today, and these are impacting care. This new area of generative AI and these
foundational models that have a more
generalist approach are where people
are thinking we can really see AI
hit new places inside of healthcare.
I think that's where a lot of us are
feeling impacted by AI more visibly.
I think actually where we're gonna see this kind of generative AI in primary care is from our patients. The internet first hit primary care through our patients, and Dr. Google is something we've become used to. This is Dr. AI, if you will, only more personable.
Ethan: Actually, I think
you can call me Al.
Please call me Doctor Al.
Morgan: Why not Dr. AI?
Ethan: I think that Al is more accessible.
I have more personality.
That's why I like Doctor Al.
Don’t you agree?
Morgan: Alright, so chatbots with more personality, Dr. Al. Now, I tried to get ChatGPT-4 to diagnose me with lung cancer, and I asked it to act like Dr. AI or Dr. Al. It was very cautious in our chat, even after a few back-and-forths and me prompting with certain symptoms that I thought would get it concerned.
And then I pretended to freak out
as I was texting back and forth.
And I said, I need a biopsy.
I must have lung cancer.
I knew I should have quit smoking.
How long do I have to live?
And the chatbot, Dr. Al, replied:
Ethan: As an AI language model,
I cannot diagnose your condition
or provide medical prognoses.
However, I understand your
concerns, especially considering
your smoking history.
It is essential to remember that only a
healthcare professional can diagnose lung
cancer or any other medical condition.
If you suspect you may have lung
cancer or any other health issue, it's
crucial to consult with a healthcare
professional as soon as possible.
Morgan: So you can see the AI was being careful, and it's definitely been designed to stay out of diagnosing.
This came up a couple times.
I think the other place for AI enhancement
for patients is in the wearable space.
Wearables are collecting steps, heart rate, O2 sats, all sorts of information, and these data are already being used today by companies.
I think that the chat interactions are gonna start to get smarter as well. So not just logging in and having a conversational, ChatGPT-type interaction, but from the wearables. Like today, my watch said, "Keep it going, Morgan, you closed two rings yesterday."
I mean, I've heard that before, but what if the AI was there and took into account a bunch of other factors, like when I can exercise, what my motivation is during the day, and things like that?
Now, I don't think this
is too science fiction.
Like I think we'll see this incrementally for more and more people, way before we see this in primary care.
And when it comes to clinical answers and diagnoses, while ChatGPT can pass medical exams, patients do need to know that it can generate wrong answers. Just like you can't believe everything that you read on the internet, you can't believe everything ChatGPT or Dr. Al says. And what's tricky is that ChatGPT and Dr. Al sound intelligent. They'll have that expert tone and language patterns, even if the answer is totally wrong.
And that means that patients, or anybody who doesn't know the right answer, will be more easily confused by an AI tool, because it sounds believable.
So what about AI in the office?
Well, first let me just say, I think it's tempting to do something like pop some information into ChatGPT from a patient's chart, but hold off: there are too many privacy concerns to do anything like that. And with many of the newer AI apps that are cloud-based, you need to be very careful and cautious about sending any data anywhere.
I do think there are lots of areas where AI enhancements are gonna be really helpful.
The first and obvious
one is transcription.
It's happening today.
It's happening right now as I record this, and I think you're gonna see that you can do more accurate, quicker, cheaper transcription.
The ChatGPT folks, uh, have their Whisper product, and that can be installed locally; I've been playing around with that a little bit. Beyond transcription is summarization.
AI's really good at this
in its current form.
We see this in the non-healthcare space.
People are using it all the time to
create briefing notes based on a 10 page
document that they cut and paste in.
Again, don't do that with patient
charts, but I think we're gonna
see that coming out really soon.
I think AI is also gonna help
us with some of our note taking
and in particular those places where our data are transformed. So I'm thinking referrals, all the documents we need to fill out.
AI generated doctor's notes will be great.
And the better pre-filled out paperwork
will really streamline things.
Now, I mean, of course, the better answer is to reduce the unneeded paperwork in the first place, rather than building a better mousetrap, a whole AI system to help us fill out forms.
But we'll probably do a bit of both.
The next area is around
diagnostic decision support.
I think there's a huge space here.
Alert fatigue is one thing, but with an AI to support surfacing the right kinds of alerts, you're gonna see this more and more.
I mean, we're seeing it a lot in analysis of images in radiology, where AI tools really help to screen out normals and highlight potential areas to focus on.
We're also gonna see AI help us at the point of reflection in terms of decision support, like what things I should start to work on across the practice. And here, for teams, I think this kind of tool is really important, because then we can work across the team with our population of patients and do QI, recalls, and things like that. And AI is gonna help us get into that space a lot deeper.
And over time we're gonna
see more new types of tests.
I can certainly imagine being able to take a photograph and then using AI to detect: is that an unusual mole that I need to be worried about or not?
There's been some stuff around, uh, retinal imaging and being able to diagnose diabetes, with AI finding patterns that humans can't see.
So there are gonna be tools like that developed that are AI-enhanced. And I think this will scale outside of the office into more population-level disease surveillance and measurement.
The last area, and for many of us I think this is a place that we also work in, is education and training.
I think AI could come
in here in a big way.
The idea of that kind of conversational generative AI in problem-based learning: it will make the cases more realistic, using voice-activated conversations with voice responses, which we do today with our phones. But now you can embed that into a case, and you suddenly go into a simulation.
And this means that simulations, which are a really effective way of learning, can be more flexible.
They can be more immersive and quite
a bit more scalable as you don't
necessarily need all the people
around to create the simulation.
You can rely on some AI tools
and some of your team members.
So I don't think it's gonna be long
before we can run team-based simulations
where the AI can act as the patient and
potentially some other team members.
And adapt the cases based on the
conversations being had by the team.
So I think there's a
lot of potential for ai.
I just wanna flag, though, and you know we're always very positive in this podcast, there are a couple things that jump out for me about AI. Remember that now's the time that we should be advocating for these tools to be representative. They can improve equity, but this has to be intentional.
So beyond the digital divide, the way these foundational models are developed, like some of our larger studies in healthcare, means they'll have a bias to the population where the data came from. Some of the large language models that are out there now come from text on the internet, and they're biased towards the people who created that text and who created those images.
And it'll under-represent
or misrepresent marginalized
populations that aren't in that data.
And I think we have the risk that that can happen inside of healthcare as well, and it may be harder for us to see.
The AI model is very much a black box. It's hard to understand, so we have to advocate for AI models that are more representative and better include marginalized populations. Otherwise, they may be less effective for those who need them most.
Alright, so to wrap up.
I focused on some of the interesting areas that are coming out and may help in primary care, and some of the areas that could already be impacting the work that we do.
Obviously, there are some potential negatives, and I haven't gone into things like intentional misinformation and deepfakes, but those are obvious examples of where AI is having a negative impact.
As a primary care team, I think now is a great time to have a quick chat about how you want to discuss AI. In particular, the Dr. Al medical advice: how do we talk about that with our patients?
I think having that common approach
across the whole team is a good thing.
And like when Dr. Google started, Dr. AI is being used already. I think we need to invite that use into our conversations with our patients; we wanna know.
So we can talk about the pros and cons and help them understand that it's not necessarily diagnosing; even though it sounds like a medical expert, it might not be right. Even though it can pass the exam, and that's being reported in the news, doesn't mean it's gonna diagnose accurately without a clinician, the two working together.
So that's one thing to think about.
How do you wanna talk
about it with patients?
If you're keen to
experiment, that's great.
Just try to find some of
those locally housed tools.
Don't send up stuff to the cloud.
So that's it for today's episode of Team Up, uh, this time on AI. I hope this has been interesting, and very soon we'll be, uh, kicking off Season Five on roles.
Thanks for listening.