Lab Medicine Rounds

In this episode of “Lab Medicine Rounds,” Justin Kreuter, M.D., sits down with Bradley Erickson, M.D., Ph.D., Director of the Mayo Clinic Artificial Intelligence Lab and professor of radiology at Mayo Clinic, to talk about working with artificial intelligence and how to train on it.

0:00 Intro
00:45 Why are computer-aided diagnoses, artificial intelligence, important for our healthcare?
02:45 Is there one way or maybe one facet of AI that’s the next generation of a checklist?
03:46 What’s important for physicians to understand about working with artificial intelligence?
06:01 As they are getting developed, is there a standard way the community is talking about this confidence in probability? Is that going to be universal for the different tools that are being developed? Or is it each to their own the way they try to convey that?
08:10 Is the thought then that will help us get away from some of the biases in our clinical practices?
09:25 How are you thinking about how we train our trainees, our residents and fellows, how to use artificial intelligence well? What can you share with our audience?
11:20 If I wanted to get my residents and fellows exposed to AI now, is there a recommended textbook on it, or any online tools to get exposure to and appreciate these points you are highlighting for us?
14:09 Have you started to have program directors approach you and your laboratory to facilitate education around AI in their department/division?
15:48 What do you think the future of AI in medical practice looks like?
20:38 Outro

Resources:
Medical Image Deep Learning (MIDeL)

National Imaging Informatics Course - Radiology (NIIC-RAD)

What is Lab Medicine Rounds?

A Mayo Clinic podcast for laboratory professionals, physicians, and students, hosted by Justin Kreuter, M.D., assistant professor of laboratory medicine and pathology at Mayo Clinic, featuring educational topics and insightful takeaways to apply in your practice.

This is Lab Medicine
Rounds, a curated podcast

for physicians, laboratory
professionals and students.

I'm your host, Justin Kreuter,
the Bow Tie Bandit of Blood,

a transfusion medicine
pathologist at Mayo Clinic.

Today we're rounding with Dr.

Bradley Erickson, director

of the Mayo Clinic
Artificial Intelligence Lab

and professor of radiology

at Mayo Clinic to talk about working

with artificial intelligence
and also how to train on it.

Thanks for joining us today,

Dr. Erickson.

Thank you for inviting me
to talk with you today.

So why don't we kick stuff off and maybe define the importance: why are computer-aided diagnoses, artificial intelligence, important for healthcare?

There are a number of advantages to AI algorithms, as long as they're appropriately implemented.

In a lot of areas of medicine

we humans tend to be more qualitative

and that can be very
good, but in other venues

it's important to be more quantitative

and computers are particularly
good at those sorts

of tasks, and AI falls into that category.

For example, measuring the size of tumors

or other disease processes is
a quantitative task that we

humans usually don't
like to do all that much

but computers are very good

at doing it and they can
do it very efficiently.

And so this is one of
those cases where you

get a win-win situation where
the computer does it better

it does it faster, and it takes grunt work off the shoulders of the physician.

So I think

that there are some areas
where it's a natural win

for AI to help us do our task better.

The other thing is that for tasks like detection, and making sure that we don't miss certain things,

having AI there watching over
our shoulder can be valuable

particularly when it's 3:00 in the morning.

And I know I'm not at the top

of my game at that time or you
know, at the end of the day

after looking at hundreds of
thousands of images, again

we humans tend to fatigue
and AI tools don't.

And so having that kind

of extra set of eyes looking
at things saying, Hey

don't forget about this, or
what do you think about this?

Something to make sure that we give as good attention to the last case as we gave to the first case, I think, is another value of AI.

You know, as you were
talking there, in my head

I'm thinking about checklists, right?

And like in the operating room there are checklists, and, you know, in the aviation industry there are checklists. I guess, is there one way, or maybe one facet of AI, that's kind of like the next generation of a checklist?

Well, so the more typical way that we

implement checklists is what's
called structured reporting.

So when I interpret an examination

there will be a number of, you know

what about the such and such?

What about this, what about that?

And particularly then, if the computer prompts me and says, here are the legal answers, that also can be nice training data for training an AI.

And now there are already
companies starting to

show AI tools that can
generate structured reports.

And so you have that double advantage

of the computers looking at everything

and it lays it out in a
nice organized fashion.
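
As a rough illustration of the idea, here is a minimal sketch in Python; the report fields and the sets of "legal" answers are hypothetical, meant only to show how constrained answers can double as clean training labels.

    # Minimal sketch of a structured report with constrained ("legal") answers.
    # Field names and allowed values are hypothetical, for illustration only.
    REPORT_FIELDS = {
        "lung_nodule_present": {"yes", "no", "indeterminate"},
        "largest_nodule_size_mm": None,  # free numeric field
        "follow_up_recommended": {"yes", "no"},
    }

    def validate_report(report: dict) -> list:
        """Return a list of problems; an empty list means the report is valid."""
        problems = []
        for field, allowed in REPORT_FIELDS.items():
            if field not in report:
                problems.append(f"missing field: {field}")
            elif allowed is not None and report[field] not in allowed:
                problems.append(f"illegal answer for {field}: {report[field]!r}")
        return problems

    example = {"lung_nodule_present": "yes",
               "largest_nodule_size_mm": 8,
               "follow_up_recommended": "yes"}
    print(validate_report(example))  # [] means every answer is a legal label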

So you mentioned in your first answer, explaining why this is important; I kind of caught your highlight on "if it's appropriately implemented."

And so it kinda leads me to the question

of what's important for
physicians to understand

about working with
artificial intelligence?

So when I give talks about AI,
I try to emphasize the point

that despite its name,
AI is not intelligent.

The more correct term in the field is to

call it machine learning or deep learning.

And it's learning a pattern.

And so you could feed it whatever you want

and it would figure out the pattern.

And for humans, who are really good at memorizing patterns, which is what a lot of medicine is about, we think of that as being intelligent.

And so that's kind of the origin of the term.

But the computer is ultimately just doing pattern matching.

And the danger then is this.

Somebody I know actually
took an x-ray of a pickle

and fed that x-ray into a
cancer detection algorithm

and the algorithm said
there's cancer there.

The problem is that
there's no common sense

that we humans would
think of when the AI runs.

It is just saying this looks most like this, and that's a big problem.

And I think

that that then kind of
gets me to the next point

which is we need to think
about confidence levels of AI.

The current generation
that we have basically

says it's probably this, but it doesn't give much about the actual probability value.

It just tells you cancer or no cancer.

And the ability to have
it convey a calibrated

probability as well as a confidence value

I think is critical.

If you think about your
interactions with your physician

and you walk in and they say lung cancer.

How much confidence would that give you?

You know, sometimes it
is just about that clear.

But other times and
probably most times it's more

of a differential diagnosis.

And that's kind of where we need to get with AI: that we get that list of possibilities

with some sort of indication
of the confidence level and

those technologies
are being developed,

but we're not there today.
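
One common way such a calibrated probability could be produced is temperature scaling; the sketch below is a minimal illustration using numpy and synthetic scores, not any particular vendor's method.

    import numpy as np

    # Minimal calibration sketch (temperature scaling) on synthetic data;
    # a real tool would fit T on a held-out validation set.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(500, 2)) * 3.0   # over-confident raw scores
    labels = rng.integers(0, 2, size=500)      # hypothetical ground truth

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(logits, labels, T):
        p = softmax(logits / T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

    # Pick the temperature whose probabilities best match observed outcomes.
    temps = np.linspace(0.5, 5.0, 46)
    best_T = temps[np.argmin([nll(logits, labels, T) for T in temps])]
    print(f"calibration temperature: {best_T:.2f}")
    print("calibrated P(cancer), first case:", softmax(logits[:1] / best_T)[0, 1])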

And, as they're
getting developed, is there

kind of a standard way that
the community is thinking

about talking about this
confidence and probability?

Is that going to be kind of universal

for the different tools that
are kind of being developed?

Or is it each to their own, the way they kind of try to convey that?

So we're still early
enough on the development

that each is kind of doing
it their own way, you know

and until we have a clearer winner

I'm not sure that people are
gonna put too much effort

into standardizing that.

In some of the structured
reporting technologies

there are fields for putting
in a confidence value

but the precise way to interpret that is still not well defined. You know, that's actually

a big problem when you think
about our language today.

If I read out a chest x-ray

and I say that's probably pneumonia

does that mean I'm 99% sure, 90% sure, 56% sure?

Right? What does probability
mean in a quantitative sense?

And that's a big challenge then in terms of creating training data, right?

How do we train the algorithm

that this is what a 56% probability means

but also then how do you map a number back to language that we would understand as humans?

So that's a big challenge that we have today: language and humans are not quantitative the way that algorithms are, and thinking about what probabilities and confidence terms mean is a challenge.
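
As a sketch of that mapping problem, the probability bins and report phrases below are hypothetical, not an agreed standard.

    # Hypothetical mapping from a calibrated probability to report language;
    # the thresholds and phrases are illustrative only.
    PHRASES = [
        (0.90, "consistent with"),
        (0.70, "probable"),
        (0.40, "suspicious for"),
        (0.10, "cannot rule out"),
    ]

    def probability_to_phrase(p: float) -> str:
        for threshold, phrase in PHRASES:
            if p >= threshold:
                return phrase
        return "unlikely"

    for p in (0.95, 0.56, 0.15):
        print(f"P = {p:.2f} -> '{probability_to_phrase(p)} pneumonia'")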

Yeah, as I hear you say that, I'm thinking of pathology; you mentioned the radiology challenge when you say "this is probably." You know, in pathology there are certain aspects of our practice where we're talking about something being "suspicious for," "atypical," or, you know, "cannot rule out." And I guess I'm sort of reflecting now that we try to convey that probability, exactly like you said, in more of a qualitative rather than a quantitative way.

So is the thought then that that'll help us get away from some of the biases in our clinical practices?

Yeah, you know, so bias has
several different components.

You know, I have an electronics kind

of background and you know, we
always tend to think of bias

as a bad thing, but of course
for those of you who know

electronics bias is what
makes transistors work, right?

So bias if properly used,
can be a good thing.

How does that apply to AI?

Well, in terms of bias
and particularly, you know

underrepresented populations and so on

we know that some races

genders and so on have
different risk profiles.

And so to say I'm going to
be completely blind to race

or sex is probably not the right approach.

You just need to make sure

that you use that information
to provide the best care

for patients.

And so as we then start to again

produce these probability estimates

you know, that information
is hopefully going to

improve the confidence intervals, because we have that additional information about the sex and race of the individual.

So in your role as the director of our artificial intelligence lab, how do you go about, or how are you thinking about, how we train, you know, our trainees, our residents and fellows, to use artificial intelligence well?

I imagine that's
starting to kind of enter

into your life and what can
you share with our audience?

So I try to make the points that we've already discussed, about the fact that it's not intelligent, it's just doing pattern matching,

and that as long as you give
it an input, today's generation

of AI tools always produce an
output, even if it's nonsense.

And so, you know, I think it's critical

that our trainees need to
get at least some exposure

to AI technology to
understand how it works.

And of course, more importantly, how it fails.

And you know, I draw a lot

of parallels with
statistics that, you know

even back in the dark ages
when I went to medical school

we had to take statistics
and epidemiology.

And I think that that's
a valuable thing, right?

You have to understand
how to read the literature

but also when you're looking at, you know

a BMI that's, you know, at
this value, well how far

off of the population norm is
that and what does it mean?

And I think there needs to be

at least as much time spent on training

about AI tools in medical
school and residency and so on, so that they
understand again, the principles

of machine learning, how it works, how it fails, because it's probably going to have even more application in medical care than statistics and epidemiology.

Is there a good, well, this is a bit of an ignorant question in that I'm not sure. If I wanted to get my residents and fellows exposed to AI now, going into old-school thinking, is there kind of a recommended textbook on it? Or is there something in our current practice now that I could have somebody go deliberately practice with? Or is there some online tool, or a place that somebody can go, a digital playground, to get exposure and come to appreciate these points you're highlighting for us?

Yeah, so as you kind
of suspected, you know

textbooks are pretty much useless.

They get out of date so fast.

Things like ChatGPT, you know, didn't exist, at least in the knowledge of the population, three, four months ago, right?

So unfortunately textbooks
probably don't cut it.

So to address this problem

there's a guy named Jeremy
Howard who has built a number

of what are called Jupyter notebooks.

It's a way that you can execute code

but it gets the name Notebook because it's

like a scientific notebook
where you also see the output

and you can put in hypertext
markup like a webpage.

And so he actually wrote a textbook that is all code in these Jupyter notebooks.

So stealing his idea, my lab

and I have created a website focused

on medical image deep learning.

So if people are interested,
that's at midel.org.

And that's something that, you know

because it's web content

it's a lot easier to keep up to date.

We can add a new page when
some new technology comes along

if there is a bug, you know

unlike a textbook where
you have to publish errata,

we can, you know, update
the code pretty easily.

But I think the ability to
actually see the code run

and people say, gee, I wonder
what happens if I do this?

And they change a bit of the code

and they can see the
impact I think is extremely

valuable for, you know,
early to mid-level learning.
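
A notebook cell along these lines, a minimal PyTorch sketch that uses random tensors as stand-in images rather than real data, is the kind of thing a learner can tweak (say, the learning rate) and rerun to see the impact.

    import torch
    from torch import nn

    # Tiny classifier trained on random tensors standing in for images,
    # so the cell runs anywhere without patient data.
    torch.manual_seed(0)
    images = torch.randn(64, 1, 64, 64)    # fake "x-rays"
    labels = torch.randint(0, 2, (64,))    # fake normal/abnormal labels

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # try 1e-1 and rerun
    loss_fn = nn.CrossEntropyLoss()

    for step in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        if step % 5 == 0:
            print(f"step {step}: loss {loss.item():.3f}")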

There are courses and in
fact Mayo offers a master's

in AI for medical people

and that gives you a more
in depth learning experience

but obviously requires
a, a bigger commitment.

So there are a number
of options, but you know

I think web resources
probably is the way to go.

YouTube is fantastic; the challenge is that most of the YouTube content is not specific to medicine,

but in terms of learning
the general concepts of AI

YouTube is kind of my go-to.

It's wonderful to
hear all these resources.

I'm just kind of curious, have you started to have program directors approach you and your laboratory to facilitate education around AI in their department or division?

Yes, and I have, you know, gone

and done the typical visiting
professor thing to do that.

But in addition, I'm
part of an informatics

society that has created what's called NIIC-RAD, the National Imaging Informatics Course - Radiology.

And that covers a lot more than just AI.

It talks about, you know,
how do you move images around,

how do you do structured
reports and whatnot,

but we've added AI content to that.

And so that is a week long
webinar that is available

to all radiology programs.

It's actually now across the world, not just the US, because it's really not feasible for many of the smaller programs to have an expert on AI.

And so this is a way that
we can educate, you know

essentially radiology
programs around the world

on AI as it applies to radiology.

And there are discussions
with other societies

like pathology about
doing a similar thing.

Yeah, that's wonderful.

So I've heard other colleagues kind of talk about the future of medicine being handed over to the robots, and, you know, the role of the physician is really to still have maybe that healing touch or comfort.

But as I hear you talk, and really talk about how best to use AI, I would gather that's not the future vision that you see.

So what do you think the future of AI in medical practice looks like?

So I think that, for instance

AI doing more of the quantitative
tasks and doing some

of the grunt work that we
physicians don't like to

do is the sweet spot.

We focus too much on doing the sexy thing, where it can make a diagnosis that a human can't, and it's cool when that works,

but I think the payback

for that is relatively small
compared to the investment.

But I think, you know, those sorts of tools are coming. We and others have published on the ability to predict molecular markers from standard CTs and MRs, where there's no way a human can see what the AI is seeing.
I think that the routine quantitative measurement of things like body composition, the amount of visceral fat, subcutaneous fat, and muscle, is valuable to many clinicians today.

And having a human trace that
out is simply not practical.

But we've already deployed an algorithm

so actually every abdomen CT done

at Mayo has a body
composition available to it.

They don't routinely report it, but it's available if people want to see it.

I think that the generative technologies, kind of led by ChatGPT, are also going to change medicine. Now with ChatGPT, we all know about the hallucinations, where it will make up really plausible-sounding things that are complete garbage.

But there are variants that don't do that, where there's what's called the temperature, which is how much you weight the probability and how much you weight randomness.
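
As a rough sketch of what that temperature knob does, assuming only numpy and some made-up scores for candidate next words:

    import numpy as np

    # Illustrative only: made-up scores for candidate next words.
    rng = np.random.default_rng(0)
    words = ["pneumonia", "effusion", "nodule", "artifact"]
    logits = np.array([3.0, 2.0, 1.0, -2.0])

    def sample(logits, temperature):
        # Low temperature weights the most probable word; high adds randomness.
        p = np.exp(logits / temperature)
        p = p / p.sum()
        return rng.choice(words, p=p)

    for T in (0.2, 1.0, 2.0):
        print(f"T={T}:", [sample(logits, T) for _ in range(5)])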

But it can also say, and this is the document where I got this idea from.

And so I think for summarization, you know, going through the 30,000 pages of outside records, that's a great task for some of these generative technologies, where you say, give me a one-page summary

of all the hematologic disease

of this patient in their life.

I think it's very feasible in
the not too distant future.

And if we can do that
with text, there are also

some great generative
technologies for images.

And this is one
where my lab has done some

work where we can take for
instance a large collection

of hip x-rays or chest x-rays
and if we also know the sex

and the age and the race and the BMI

we can train a model
where you can then say

generate 10,000 x-rays of the pelvis with this many from this age range, this many from that age range, this many of this sex, this many of the other sex, you know, this many with a certain BMI, this many of a certain race.

And so we can completely
sample the population and none

of the x-rays will be
from any one individual

which then gets around privacy concerns.
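
A rough sketch of what such a demographic sampling plan could look like in code; the strata, the counts, and the generate() stub below are hypothetical placeholders for a trained conditional generative model, scaled down so the example runs quickly.

    import numpy as np

    rng = np.random.default_rng(0)

    def generate(age_range, sex, bmi_band):
        # Hypothetical stand-in for a trained conditional generative model;
        # here it just returns a random array shaped like a small x-ray.
        return rng.normal(size=(64, 64))

    # Illustrative sampling plan: how many synthetic pelvis x-rays per stratum.
    plan = [
        {"age_range": "20-40", "sex": "F", "bmi_band": "normal", "count": 10},
        {"age_range": "20-40", "sex": "M", "bmi_band": "high",   "count": 10},
        {"age_range": "60-80", "sex": "F", "bmi_band": "normal", "count": 10},
        {"age_range": "60-80", "sex": "M", "bmi_band": "high",   "count": 10},
    ]

    synthetic = [generate(s["age_range"], s["sex"], s["bmi_band"])
                 for s in plan for _ in range(s["count"])]
    print(len(synthetic), "synthetic images, none traceable to any one patient")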

So I think those sorts

of generative techniques
to improve the training

of other AI algorithms
is a really cool thing that we're starting to understand better.

Finally, I think there are components

of AI that will go back to the roots.

And so for those of you who are historians

the first real AI application
in medicine was called MYCIN

and was a set of about 500 rules

for making the diagnosis
of blood infections.

And you could ask it, well

what's the best antibiotic to use

for this type of infection?

And that ability to control things matters, because being susceptible to hallucinations is a real problem.

And so you can define a set of rules and say, when you see this, do this, and then do this, and then do this.
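
As a minimal sketch of that rule-based style, with made-up conditions and actions rather than MYCIN's actual rules:

    # Tiny "when you see this, do this" rule set; conditions and actions
    # are hypothetical, only to show how explicit rules stay controllable.
    RULES = [
        (lambda f: f["gram_stain"] == "negative" and f["site"] == "blood",
         "consider gram-negative bacteremia coverage"),
        (lambda f: f["culture_pending"],
         "flag for follow-up when culture results return"),
    ]

    def apply_rules(findings: dict) -> list:
        return [action for condition, action in RULES if condition(findings)]

    print(apply_rules({"gram_stain": "negative",
                       "site": "blood",
                       "culture_pending": True}))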

We have a lot of workflow challenges

in healthcare where handoffs are dropped

or certain steps aren't done

in time or they're not done according

to the care process that we all agree on.

And I think that that form of workflow, something called process automation, is used in the manufacturing of cars, it's used in the financial industry, but it's not used in healthcare for some reason.

And I think that that's actually a form

of AI that probably is
going to start to be applied

in the next five years or so.

Wow. I think all of our
audience right now really keyed

into your predictions for the future.

Because many of us are doing chart reviews

in preparation for frozen
sections the next day.

And also there's lots

of times where I have to
comb through a lot of data.

I hadn't even thought about that.

And then the medical educator in me is just thrilled at, you know, what might be possible, because essentially, you know, I know what I would love to have and expose my learners to, but it's the going and grabbing of those materials that takes so much time.

We've been rounding with Dr.

Erickson on the importance of working

with artificial intelligence
and how to teach it.

And thank you so much Dr.

Erickson for taking
the time with us today.

It's been my pleasure.

It's been great to talk with you all

and I hope that this was
valuable to your audience.

And to our listeners, thank
you for joining us today.

We invite you to share your thoughts

and suggestions via email.

Please direct any suggestions
to MCLeducation@mayo.edu.

If you have enjoyed this podcast

please follow or subscribe.

And until our next rounds together

we encourage you to continue to connect
lab medicine and the clinical practice
through insightful conversations.