A podcast on statistical science and clinical trials.
Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.
Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: All right.
Welcome everybody.
Back to In the Interim. I'm your
host, Scott Berry, and we are going
to dive into a number of things:
Bayesian adaptive trials, life in
the academic world a little bit today.
I have, uh, a guest with me today.
I have Dr.
Byron Gajewski,
a professor of biostatistics and data
science at the University of Kansas
Medical Center and the University of
Kansas Cancer Center, uh, in Kansas
City, and he's interested in Bayesian
modeling, Bayesian adaptive trials,
and he's a fellow of the American
Statistical Association and he's
also a very good friend of mine.
Byron, welcome to In the Interim.
Byron Gajewski: Scott, uh,
thanks for having me on this.
I'm a, I'm a big fan of the pod.
Uh, this, In the Interim, and,
um, The Rewatchables is another
podcast that I listen to
quite often, both of them.
I don't know if you
know about The Rewatchables.
It's, it's produced and done by
Bill Simmons, the sports guy.
Scott Berry: Ah,
Byron Gajewski: and he does,
yeah, oh you do, and he
does, uh, Rewatchable movies.
That's what he does.
He talks about it.
So I listen to both of
them, uh, as I, as I
walk or, or, or whatnot.
Um, exercise.
Scott Berry: So is a rewatchable
movie a good movie?
Uh, maybe really good movies you
don't want to rewatch necessarily.
Byron Gajewski: It's interesting
because, uh, but the concept is,
is you're sitting there and, and
we're cable, I'm a cable person.
Or in the old days, regular
TV and antennas and stuff.
And a movie just happens to be on.
And you're taking a nap and
you look up and you go, oh
gosh, that's a great movie.
Okay.
I really want to watch
where I'm at right now.
You don't have any streaming
or anything like that.
So it's a, it's a rewatchable.
It's not necessarily the
greatest film, but they
usually are good films.
And they're, they're
just very rewatchable.
yeah,
sure.
Scott Berry: hard to turn them
off and it's hard to get up from.
Yeah.
Yeah.
And, and you gotta kind of watch
it, despite the fact you've seen
the movie six or seven times.
Yeah.
Byron Gajewski: Absolutely.
Uh, you can't handle the truth.
I say that and you know,
immediately, or at least some
people will know immediately
what that rewatchable movie is.
If you couldn't.
Scott Berry: Yeah, this
just happened a couple
Byron Gajewski: Oh
Scott Berry: Tammy has a gift and
that is, she doesn't remember movies.
Byron Gajewski: yeah, that is, yeah.
Scott Berry: Tammy, and so she can
watch a movie and watch the same movie
six months later and be in complete
suspense as to what's gonna happen.
'cause she doesn't
Byron Gajewski: Yeah,
Scott Berry: happen.
Yes, yes.
It it, it's a
Byron Gajewski: that is
a great gift, actually.
Scott Berry: Yeah.
Yep.
Yep.
Byron Gajewski: Sure.
Scott Berry: Okay.
So Byron, uh, let's, let's, um, get into
some statistics, some adaptive trials.
Bayesian, of all, and I, I know
you watched the podcast, uh, with
Sean Cassidy, and he, he describes,
you know, tell me your story.
let, let's, let's let everybody
know what is the Byron Gajewski story?
Byron Gajewski: Oh, yeah, that's
a, that's a good question.
Um, you know, uh, maybe I could
go into some of my background
academically, maybe a little
bit about how that went, how
I was trained in statistics.
I studied math and civil engineering
at Marquette University, bachelor's
degree, master's degree in math.
Um, my dad was a civil engineer.
Scott Berry: Hmm.
Byron Gajewski: I loved math,
so I studied civil engineering.
Funny story on that, I was
happy to be in the College of
Engineering at Marquette, uh,
because I didn't have to go to
arts and sciences and do, I, I
thought back then, too much writing.
You know,
it was hard for me to be in
arts and sciences at the time.
It was, so, so anyway, I studied
engineering and math there.
And then I studied math in
master's, and I realized that
math was too theoretical for
me, for my taste to do forever,
and engineering was too applied.
Statistics was kind of a sweet spot.
I had an advisor in math, who,
who, I had the book, back then you
look at graduate programs in, in
statistics, and you had a big book
you pulled out of the library, and
you said, hey, what do you think?
And I was like, okay.
And he starts looking around.
He's a bio-math guy,
statistician in a math department.
And he said, Texas A&M.
He said, they're a good
program and they got money.
And it turns out they had a
pot of money to bring prospective
graduate students down there.
I went and visited and fell
in love with Texas A&M.
Loved it down there.
And that's where I studied for
my PhD in statistics, at A&M.
And it was funny because,
um, at A&M at the time,
it was a very frequentist
department, very frequentist.
Uh, but, um, and I studied.
I remember it being a
very rigorous program.
I studied everything from
statistical computing to
measure theoretic probability.
Scott Berry: Mm-hmm.
Byron Gajewski: Uh, the
exams were rigorous.
Um, I did take a Bayes course there.
I really liked it.
Um, you talked about, in a
previous pod, Jimmie Savage;
uh, he co-authored a paper that I
read, actually my professor assigned
it to me from Psychological Review,
um, a foundational paper in Bayesian,
and I, I, I, it was really cool.
You know what though,
Scott, I, I just,
Scott Berry: probably
Edwards, Lindman and Savage.
Byron Gajewski: yes, absolutely,
um, and I actually had a student
read that here, um, Uh, recently,
um, a foundational
paper in the field.
And, um, I, I took the course,
loved it, but I, I didn't
really use it right away.
I didn't use Bayesian
stuff right away.
But later in my career,
Scott Berry: so just going
back, who was the professor,
uh, in that course?
Byron Gajewski: It was, uh, you!
Scott Berry: Yeah.
Yes.
So
Byron Gajewski: Uh,
Scott Berry: good to let
everybody know that, uh, yes.
Uh, when I was teaching at Texas A&M,
Byron was a student there, a graduate
student there, and I taught a Bayes
course, which, that course was awesome.
The, the students we had in that
course, and a number of them are out
doing things in the world of adaptive trials.
Byron Gajewski: yeah.
Scott Berry: and, yep.
Fantastic.
Okay.
So, so you take the course,
it's interesting, but then
where, where does it take you?
Byron Gajewski: just to talk a
little bit about that, I mean,
my colleagues, as you mentioned,
that were in that, that course.
By the way, these were colleagues
that, that we played pickup
basketball with on Wednesday,
which was a wonderful thing.
And you remember playing
those pickup games?
It was a great way to get,
blow off steam about the rigors
of statistics program, but also
learn about the ropes from the
older, uh, older, uh, students
in the group and learn about
how to navigate the system, both
technically and socially, and whatnot.
But in that course, a lot
of my colleagues, there was
a number of colleagues who
used Bayesian methods later.
And, you know, everything from
airline industry, to sports,
to bowhead whale gestation
age estimation, cancer.
Um, and I was like,
Oh, that's cool.
I would like to do that, but
I never really did it until
I got my first job at St.
Cloud State University.
And in the summers I did
some, some Bayesian modeling.
I did some modeling where I had
this, I remember vividly there
was a problem where we had,
it was an audiology problem.
Scott Berry: Hmm.
Byron Gajewski: I laugh because the
experiment was on guinea pigs.
Okay.
Um, back to my point about
my struggles with writing.
Scott Berry: Yeah.
Byron Gajewski: Um, I, I
remember calling them pigs
when I talked about it.
And I couldn't pronounce
guinea for some reason.
I didn't, guinea didn't
make sense to me.
Um, and my wife makes fun of me.
My wife Mary, she, she says, pigs?
I thought you said they were pigs.
Anyway, they did experiments
on guinea pigs and it had
to do with hearing, audiology.
In audiology, you have
interesting interval censoring,
the way they do the testing.
They have interesting right censoring.
And, uh, the data
tends to be non-normal.
It's a shifted, it turns
out to be, we modeled it
with a shifted lognormal.
It has a parameter that
you estimate in it.
And, boy, I struggled doing that
from a frequentist standpoint.
But then I did Bayesian, and
it was just, it was clean.
And so then from there,
I just, I got hooked.
I got hooked on Bayesian.
I got hooked on that.
I got hooked on the interpretation.
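The clean Bayesian treatment of a shifted lognormal that Byron describes can be sketched with a simple grid posterior. Everything below is illustrative: the data are simulated, the flat priors and grid bounds are assumptions, and the real audiology analysis also handled interval and right censoring, which this sketch omits.

```python
import numpy as np

# Simulated audiology-style response data from a shifted (three-parameter)
# lognormal: x = shift + LogNormal(mu, sigma). Truth values are made up.
rng = np.random.default_rng(42)
true_shift = 5.0
x = true_shift + rng.lognormal(mean=1.0, sigma=0.5, size=300)

# Grid posterior with flat priors over (shift, mu, sigma) -- assumed priors.
shifts = np.linspace(0.0, x.min() - 1e-3, 80)
mus = np.linspace(0.0, 2.0, 60)
sigmas = np.linspace(0.1, 1.5, 60)

log_post = np.empty((len(shifts), len(mus), len(sigmas)))
for i, c in enumerate(shifts):
    y = np.log(x - c)                  # log scale, given candidate shift c
    n, ybar = len(y), y.mean()
    ss = ((y - ybar) ** 2).sum()
    # Normal log-likelihood of y, plus the lognormal Jacobian term -sum(y)
    log_post[i] = (-y.sum()
                   - n * np.log(sigmas)[None, :]
                   - (ss + n * (ybar - mus[:, None]) ** 2)
                   / (2.0 * sigmas[None, :] ** 2))

log_post -= log_post.max()             # stabilize before exponentiating
post = np.exp(log_post)
post /= post.sum()

shift_marginal = post.sum(axis=(1, 2))  # marginal posterior of the shift
shift_hat = float(shifts[shift_marginal.argmax()])
```

With enough data the marginal posterior concentrates near the true shift, exactly the parameter that is awkward to estimate by maximum likelihood, since the three-parameter lognormal likelihood misbehaves near the sample minimum.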
Scott Berry: Hmm.
Yep.
Okay.
So you're, you're St.
Cloud State.
Uh, by the way, St.
Cloud is a place near
and dear to my heart.
I grew up in Minnesota and I, I have a
cabin that I spent time in the summer,
uh, about 45 minutes from there.
So, uh, fantastic place.
More fantastic in the
summer than the winter.
Um,
Byron Gajewski: Yeah,
Scott Berry: where, where
do you go from there?
Byron Gajewski: so a
brief comment on that.
I was there because I liked St.
Cloud State.
They had a nice undergraduate
statistics department.
They had a program there.
It was neat.
I had students that went
to JSM that were statistics
majors in undergrad.
Um, but I was there partly
because we had a match thing.
My wife got a master's in social work
at, and I'll say this to
the listeners, I'll call it the
U, but Americans in the Southeast
might think that it's University of
Miami, but it's not in Minnesota,
it's University of Minnesota.
Scott Berry: mm-hmm.
Byron Gajewski: And you have to
say it like that, Minnesota.
Scott Berry: Yeah, that
was pretty good, Byron.
Yeah.
Yeah.
Byron Gajewski: And so anyway, after
two years of that, we had an
open market and I found I found
a job at University of Kansas
Medical Center here in Kansas City.
Oh, by the way, there's a
difference between Kansas City,
Kansas and Kansas City, Missouri.
Two cities, two different states.
And, uh, the Chiefs, in Missouri now,
are going to be moving from Kansas
City to Kansas City, by the way.
Scott Berry: Yeah.
Oh, so you are
Byron Gajewski: Yeah,
Scott Berry: Kansas, but.
You live in Missouri?
Byron Gajewski: I live, yeah, I
live in Kansas City, Missouri and
I work in Kansas City, Kansas.
Scott Berry: Yeah.
Yep.
Byron Gajewski: Um,
so, uh, we do that.
And so that's how I
ended up at University of
Kansas, um, Medical Center.
I just, an open job.
My brother lived out here.
I came and visited.
I loved it.
I loved the university.
I loved what was happening
here, and I still love it.
I still,
there's great things
going on here at KUMC.
Scott Berry: Now, were you
doing biostatistics at St.
Cloud?
Byron Gajewski: I was, so I, uh,
Scott Berry: I mean.
Byron Gajewski: I published off of
my dissertation when I was there
at first, and then I got these
consults. The reason I got into the
guinea pig experiments was because
my buddy from high school was a
resident and a professor, a clinical
professor at the University of Florida.
He, he knows me, so he got me
involved in that, and then I started
getting into biostatistics from there,
so it was really like, kind of
by opportunity that I became
a biostatistician, you know?
Scott Berry: Mm-hmm.
Okay.
Byron Gajewski: Okay.
Scott Berry: So you're off at, uh, the
University of Kansas Medical Center,
and I, I've been struck by the group
you have there and really the innovative
designs that are going on there.
Now, I want to jump into one, the HOBIT
trial, but let's not go there yet.
But
Byron Gajewski: Sure.
Scott Berry: tell me a bit about the
group you have at KUMC, the
innovative designs going on there.
I think that would be great.
Byron Gajewski: The group is, we
have about 30 faculty members here.
And we have several
clinical trialists.
Our department chair is a
clinical trialist, meaning he
does his research in clinical
trials, does clinical trials.
Uh, Matt Mayo. We have,
uh, Jo Wick, who is a clinical
trialist, Bayesian adaptive.
She trained at Baylor
University as a Bayesian
biostatistician.
Okay.
Um, Lynn Fadness is a, is a,
is, is in our working group, and
I'm probably forgetting, um, um,
we have, we have, uh, some, some
junior faculty, Kate Young, Lexi
Brown, Bayesian, uh, statistician,
some, some Bayesian, and we do a
little bit of both, some of us,
I do more Bayesian trials, in my
end, my portfolio is more of that.
And we have a working group of
faculty, uh, staff, and students,
and it's, uh, It's interesting.
I'll, I'll, I'll say the name
and then I'll give the acronym.
It's, it's Fixed and
Adaptive Clinical Trial
Simulator Working Group.
It's named after FACTS Software.
And, um, in it, we kind
of have a cool group.
We mix things.
We have this pre-specified rule.
If there's a, if there's a webinar,
a FACTS webinar during our working
group, we will postpone a talk
that we have internally to a
later date and we'll go
to the webinar.
And then we'll discuss it afterwards.
But students and faculty, they do
novel clinical trial designs in it,
motivated by actual questions.
And we've done this.
There's a couple things we've
done in that working group
that I think are kind of cool.
We've done a trial
as a working group.
Scott Berry: What does that mean?
What does that
Byron Gajewski: So what we
did was to simplify things,
to allow the students to experience
designing and conducting a trial.
We did a thing where I drive,
I commute to work every day.
I, myself, so it's an,
it's an n-of-1 trial.
I shouldn't call it
a clinical trial.
It's an n-of-1.
And I have three different
routes that I can take to work.
And I have a route
that I think works.
So I have two experimental routes.
And we did a response, we did a
response adaptive randomization trial.
They designed it.
They looked at different
operating characteristics.
They looked at a fixed trial.
They looked at adaptive.
They picked an adaptive trial.
They built the, uh,
case report forms.
They built the
electronic data capturing.
They did the randomization.
Uh, they did that in REDCap.
And, we implemented it.
I went to an app in the morning.
I went to my car,
and I was randomized.
And I did a, uh, A route.
And then I input the
data when I was done.
I didn't do it during the drive.
Scott Berry: Yeah.
Yeah.
Byron Gajewski: I, I started
a clock and I went and I
stopped the clock when I parked.
We had protocols.
And they discovered some of the
trickeries in blinding things.
Who needs to be blinded?
Can you blind too much?
Is that a risk?
That showed up.
Then they, they analyzed it per
protocol and then they wrote up a
paper and they've submitted the paper
and it's under review right now.
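One way Byron's commute experiment could be sketched: normal approximations to each route's posterior mean time, a Monte Carlo probability that each route is best, and next-day allocation proportional to those probabilities. The route names and drive times below are made up, and the students' actual design (operating characteristics, case report forms, REDCap randomization) was far more complete.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical drive times (minutes) logged so far on three routes.
times = {
    "usual": [21.0, 20.5, 22.1, 21.4, 20.8],
    "exp_A": [19.2, 18.7, 20.1, 19.5],
    "exp_B": [23.0, 22.4, 24.1],
}

def prob_best(times, n_draws=20_000):
    """Monte Carlo P(route has the lowest mean time), approximating each
    route's posterior mean by Normal(xbar, s / sqrt(n))."""
    draws = np.column_stack([
        rng.normal(np.mean(t), np.std(t, ddof=1) / np.sqrt(len(t)), n_draws)
        for t in times.values()
    ])
    best = draws.argmin(axis=1)            # lower drive time is better
    return {k: float((best == i).mean()) for i, k in enumerate(times)}

p = prob_best(times)
# Response-adaptive randomization: tomorrow's allocation probabilities
alloc = {k: v / sum(p.values()) for k, v in p.items()}
next_route = rng.choice(list(alloc), p=list(alloc.values()))
```

Each morning's randomization then favors the route that currently looks fastest while still occasionally sampling the others, which is the basic trade-off RAR makes.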
Scott Berry: Hmm.
Byron Gajewski: Uh,
Scott Berry: did the experimental
routes end up quote unquote better?
Byron Gajewski: yes.
There was one route that was...
Okay, so it ended up not
hitting the formal trigger,
so we weren't able to, quote,
FDA-label it as the better route.
We have like a posterior
probability of 0.94
or something like that
that we have a better route.
Scott Berry: Okay,
Byron Gajewski: Yeah.
Scott Berry: uh, I, I did a podcast
with Lindsay, and you and I were talking
about that ahead of time, where we talked
about how to analyze ordinal endpoints:
what's the way you analyze time?
Is it the mean? Is it linear,
that 20 minutes is twice as bad as 10
minutes, or do you have some sort of
loss function, a minimax rule or something?
Or is it just straight mean time?
Byron Gajewski: Mean time.
Yep.
Yep.
Normal stuff.
Normal...
Central Limit Theorem for
the parameter estimates.
Scott Berry: Okay.
Okay.
Nice, nice,
Byron Gajewski: And then if you
want to do, we talk about
that actually with the group.
We say if you want to do predictive
distributions, you know, predictive times,
it's not going to work very well.
We need a different model.
Scott Berry: yep.
Byron Gajewski: Yeah.
Scott Berry: Okay, so I, I, we want
to jump into HOBIT, but, um, some of
the other trials that are not on you
yourself as the, uh, as the, the patient,
um, but you, you've done a number of
Bayesian trials, the pain controls
trials, the start trials, other trials,
um, where you're also implementing them.
You're creating REDCap, you have a group
there that sort of specializes in the
operationalization of response, adaptive
randomization, uh, and all of that, which
has been a powerful part of these trials.
Byron Gajewski: Yeah, absolutely.
It's, it's, it's, it's been great.
I mean, you mentioned, uh, uh,
pain controls was interesting.
Interesting.
And it's, I tell this to
Rick Barohn, who's now at the
University of Missouri, and I
could talk the whole pod about
the rivalry between KU and Mizzou.
Um, maybe we don't want
to go there, but it's
an interesting rivalry.
Um, so he's there now, and I
saw him a couple weeks ago.
Actually, my son goes to Mizzou.
So, so I'm one of those KU
guys that, that can go to
Mizzou, wear Mizzou stuff,
go to KU, wear KU stuff.
I can go, you know.
As long as they're not playing
Texas A &M, that's right.
Uh, but I told him, I, he was
introducing me to some people out
there and I said, I said, yeah, I
work with Rick at, at KU and they
do a, they do a thing with their
head when they say you're at KU.
It's what people do.
But anyway, he, I, I said
it's my favorite, it's
one of my favorite trials.
Pain control.
Scott Berry: just so to, to give the
one minute, uh, elevator pitch of
what was the pain controls trial.
Byron Gajewski: I describe for
me my personal experience with
pain controls and pain controls.
is jump into the water
to learn to swim.
And what I mean by that is,
um, it's interesting because, I
don't know if you remember this,
but Rick, I mentioned, mentioned
Rick Barohn, who's a neurologist.
He went to a talk of yours,
and, uh, I think it was at a
translational, uh, clinical and
translational research conference.
And he liked what you had said
about adaptive designs, about
Bayesian adaptive designs, and
he said, Hey, can we do that?
Do you know anybody
that can do that?
And you said, well, Matt
and Byron can do it at KU.
And so then, because he and
I, Matt and I had done some
papers, some Bayesian papers
about design and clinical trials,
but we hadn't actually done any
Bayesian or Bayesian adaptive trial.
We hadn't actually formally
designed them or implemented
them at the time.
And so you got me involved.
You, you helped with that.
We worked together on it.
Melody worked on it from Berry.
I remember that.
We did a little bit of
FACTS, but we did some R.
And anyway, that's not
the one minute thing.
But the thing I want
to tell you is I jumped
right into it with that.
We're talking, uh, uh, group
sequential features, response
adaptive randomization,
identifying the maximum treatment.
There's four arms in the
trial, four treatments.
There's two endpoints in it.
And, uh, so it happened.
Scott Berry: this, I, I think this
Byron Gajewski: Yep.
Scott Berry: comparative effectiveness
of four different potentials for
controlling pain, um, in, in this.
Yep.
Okay.
Byron Gajewski: Yep.
Yep.
The disease was, or is,
uh, cryptogenic neuropathy.
Cryptogenic meaning they don't
know what caused the neuropathy.
It was coined by Rick Barohn.
He discovered, or he defined that
disease, uh, disease diagnosis.
So
Scott Berry: What was the end result of the
trial? Which I don't remember.
Byron Gajewski: there's four drugs.
I'll be kind of abstract about it.
Four drugs.
We were trying to
identify a single winner.
We had put in the protocol at the
eleventh hour, before finishing
the protocol, um, that there be
identification of loser arms.
And there ended up being two,
two loser arms, and so
two are recommended, uh,
for treatment
now in, in, in the realm.
Um, and by the way, even
the quote loser arms probably
are better than do nothing.
So, those are still good.
Scott Berry: You didn't have a do-nothing arm,
Byron Gajewski: We
Scott Berry: though.
Byron Gajewski: did not
have the do nothing arm.
No, no.
Scott Berry: Okay.
So let's, let's talk about HOBIT.
Um, the, the HOBIT trial, it's a,
a, a very cool trial, adaptive
trial and, uh, funded by the NIH.
It's running.
We are both blinded to the data, so
we don't, we don't know the data,
but tell me about the HOBIT trial.
Byron Gajewski: Yeah, so, this
trial that the disease is
severe traumatic brain injury.
So, think of a person gets
in an accident, and it could
be a car accident, a bike
accident, it could be a fall,
and their head gets hurt.
They get hurt, they're
in a coma, they're out.
And that's usually diagnosed using
a thing called the Glasgow Coma
Scale, which you look at their
eye response, their verbal response,
motor responses, and it defines that.
By the way, you kind of
have to check to see if
it's alcohol-induced.
Um, because if that's the case,
it's not a severe TBI.
It's not, it's not a long thing.
So there's no good
treatments for severe TBI.
Um, we need to do better.
And so, uh, the impetus of the,
the, the HOBIT trial is that hyperbaric
oxygen treatment is, is
looked at as being a potential
treatment for severe TBI.
It was studied preliminarily by Dr.
Gaylan Rockswold out of Hennepin
County Medical Center and the U.
He's affiliated with the U.
Scott Berry: Yep.
Byron Gajewski: Um, and
he has preliminary data.
He's been studying this for decades.
And it's become so dire that the
NIH has said, you know, that is
probably something we need to study.
And the peer reviewers
thought the same thing.
You know, let's further study that.
Um, the preliminary data suggests,
so a hyperbaric oxygen chamber, you
put a patient into it; it's
indicated for, um, wounds and
other medical conditions, and you
put a, put a patient into a dive.
They go into a, sometimes into a,
what's called a monoplace chamber.
It's a single chamber.
Um, and there's a multiplace chamber
where clinicians can go in with the
patient and dive with the patient.
So you're getting pressure.
So you're getting better
oxygen to the body.
And Gaylan's theory, and,
uh, based on data, also
empirically, empirically, uh, is
that it reverses the ischemia
in the brain, the damage in the
brain, and can help with, uh,
improving, improving their health.
So he has preliminary data on
that that suggests that there's
an absolute improvement of 12.7
percent in favorable outcome between
that and, and, and standard of care.
Preliminary data.
Scott Berry: Okay, but the trial is
Byron Gajewski: Yes,
Scott Berry: not just a two-arm trial of
hyperbaric oxygen against none.
Uh, it's a, it's a more complex
treatment, and so you have
multiple experimental arms.
Byron Gajewski: yes.
As I, as I mentioned, there's
a dive, but there's, so, so
atmospheric pressure is 1.0
at sea level.
And so 1.0
pressure can be increased to 1.5,
to 2, to 2.5,
and the patient goes into a dive.
So it's like, it's like a
diver going in the ocean, going
under the, under the water,
or in an airplane.
That pressure change happens
and it happens slowly so
you're not hurting the, the,
the patient or the clinician
who's in a monoplace chamber.
And so we're looking at, we'll
cast a wide net on this.
We're in kind of a phase
2 space, phase 2B if you will.
And, we have three different levels
of hyperbaric oxygen, and then
there's a thing called normobaric
hyperoxia, which is not
pressurized, it's just oxygen.
And we have that in addition to it,
so that turns into six arms, and
then we have normobaric by itself.
You can kind of think of
the arms as being eight
arms, kind of a four by two.
Scott Berry: Yep.
Yep.
Byron Gajewski: Four hyperbaric oxygen
levels and two normobaric hyperoxia levels.
And then the one with 1.0
ATA and no normobaric hyperoxia,
that's the control, that's
the standard of, of care,
which, you know, um, yeah.
Scott Berry: it wa it's, it's a
Byron Gajewski: Not
Scott Berry: So they go
through the procedure.
They're just not getting the,
the, the mechanism of it.
Byron Gajewski: really, they do
not go, they do not go into,
no, no, no, they're
just treated standard.
Scott Berry: Oh, so
Byron Gajewski: No.
Scott Berry: do a fake dive.
Okay.
Okay, so you have these multiple arms.
What's the, uh, and the endpoint
in the trial is what are
Byron Gajewski: a, yeah, it's a scale
called the Glasgow Outcome Scale-Extended.
That's at six months, and it,
it's a, it's an ordinal scale
that goes from one to eight.
And one is good, eight is
bad, one is, I'm sorry,
it's the other way around.
One is, one is bad, eight is good.
Um, so death is a one,
vegetative state is a two.
Kind of like the modified
Rankin scale, a lot of people
treat those as the same.
The one and two, the, the
death, the vegetative state.
Three through six is kind
of finer, different gradations
of moderate disability.
And then a seven to eight is good
recovery, gradation, good recovery.
So an eight is really good.
Um, and then we, it's a
slide, it's a sliding dichotomy.
So depending on the severity of the
patient in the beginning at baseline,
you have different definitions of
success and it's a binary cutoff.
This is a, this is a binary,
uh, outcome that we analyze
at the end of the day.
Um, uh, we,
that's
Scott Berry: So depending on what
your baseline status is, that defines
differently what a response is.
Byron Gajewski: right.
Scott Berry: That's the sliding dichotomy part of it.
Byron Gajewski: Mm -hmm,
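The sliding dichotomy can be expressed as a tiny lookup: the GOS-E score a participant must clear depends on baseline severity. The severity bands and cutoffs below are illustrative assumptions, not the HOBIT protocol's actual definitions.

```python
# Sliding dichotomy on the Glasgow Outcome Scale-Extended (1 = death,
# 8 = upper good recovery). Bands and cutoffs here are illustrative
# assumptions only.
GOSE_CUTOFF = {
    "most_severe": 4,    # favorable if GOS-E >= 4
    "intermediate": 5,   # favorable if GOS-E >= 5
    "least_severe": 6,   # favorable if GOS-E >= 6
}

def favorable_outcome(gose: int, baseline_severity: str) -> bool:
    """Binary primary endpoint: did the participant clear the bar
    implied by their baseline severity?"""
    if not 1 <= gose <= 8:
        raise ValueError("GOS-E is an ordinal scale from 1 to 8")
    return gose >= GOSE_CUTOFF[baseline_severity]
```

So under these assumed cutoffs a GOS-E of 4 counts as a success for the most severely injured but not for the least severe: the same raw score dichotomizes differently depending on where the patient started.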
Scott Berry: And now
the design is adaptive.
So you have these multiple
arms in the trial.
You have a control arm and
the endpoint's at six months.
But what, what are the
adaptive parts of the design?
Byron Gajewski: sure, sure.
There are, there are several
adaptive features to the design.
Um, we use response
adaptive randomization.
And we do it on
the primary endpoint.
We use an information formula,
which combines the probability.
So, so by the way, we're
trying to say the, the, the
probability that the active arm
is best among all active arms.
Mm-hmm.
Uh, and we, we put variance
and sample size of that,
uh, in that arm already.
And, and, and do that.
We do an information formula.
So it uses both the best arm
as well as the information in
that arm in a balanced way.
Um, and then we also have, like,
group sequential features to it.
So we, we do RAR every
so often, and it's, it's
precisely the enrollment times.
Actually, it's not
just every so often.
It's at very specific times or
very specific enrollment points.
And then we do, uh, we test
for whether we've achieved
success or futility at interims.
We have a maximum sample size
of 200, and 20 of the, of the
participants have been randomized to
the control throughout the trial.
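The information-weighted RAR idea Byron describes, probability-of-best tempered by the variance and sample size already in each arm, is often written as allocation weights proportional to sqrt(P(arm best) x Var(rate) / (n + 1)). The sketch below uses that textbook form with made-up interim counts and Beta posteriors; it is not necessarily the exact HOBIT algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

# Made-up interim data for three active arms: (favorable outcomes, enrolled).
arms = {"hbo_1.5": (8, 20), "hbo_2.0": (12, 22), "hbo_2.5": (9, 21)}

# Beta(1, 1) posteriors on each arm's favorable-outcome rate.
draws = np.column_stack([
    rng.beta(1 + s, 1 + n - s, 50_000) for s, n in arms.values()
])

# P(arm is best among the active arms), by Monte Carlo.
p_best = (draws.argmax(axis=1)[:, None] == np.arange(len(arms))).mean(axis=0)

# Information-weighted allocation: temper P(best) by the posterior
# variance and by how many patients the arm has already enrolled.
post_var = draws.var(axis=0)
n_enrolled = np.array([n for _, n in arms.values()])
weights = np.sqrt(p_best * post_var / (n_enrolled + 1))
alloc = weights / weights.sum()
```

The variance and enrollment terms keep the trial from piling every patient onto the current leader: an arm that looks good but is still uncertain, or still small, keeps drawing allocation.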
Scott Berry: So the, Hmm.
So the RAR changes the active arms
Byron Gajewski: Uh -huh.
Scott Berry: on pressure
and, uh, this, this NBH.
Uh,
Byron Gajewski: Yep.
Yep.
That's
Scott Berry: the trial could stop
for futility approximately every
20 patients or something like that.
You
Byron Gajewski: right.
Yep.
We do it every 20 patients.
It kicked in at 116.
You go 116, 136, 156, 176.
And then 200 is the maximum.
I'm talking about enrolled here.
So the 116th, uh, patient that was
enrolled to be a participant, we
started, we did an interim analysis.
And I say we, uh, Jonathan and
Renee did the interim analysis.
Scott Berry: Yeah, so this is, uh, the Medical University
Byron Gajewski: That's
Scott Berry: of South Carolina.
Byron Gajewski: right.
That's
Scott Berry: the coordinating center, they're
implementing the adaptive design.
Byron Gajewski: right.
Scott Berry: yep.
Yep.
Okay.
Yep.
Alright, so, uh, and, and do you know
roughly the, this trial's enrolling
you're, you're roughly at a sample size of
Byron Gajewski: Yeah, so
we're, we, we are at 159.
So the latest interim
we did was 156.
And I know, so we talked about
this a little bit, uh, I, I think
of, so I'm, I'm, as you mentioned,
I'm a blinded, I'm a blinded SME.
Scott Berry: You're a blinded advisor.
To?
To, to,
Byron Gajewski: Right, right.
And so I don't know
what's going on.
Except I, I, I use
the analogy of poker.
I'm playing poker here.
And obviously we're not
gambling with Hobbit.
We're not, that's not
what I'm trying to say.
I'm just trying to use an analogy.
Scott Berry: yeah.
Byron Gajewski: Um, where I'm
playing against somebody, and I
don't know what their hand is.
But I have some telltales.
I know how they're betting.
I might know that they're hitting
their ear or, or, or nose.
I don't know if I believe that.
But,
um.
So I have some tell, the telltale
I have is that 156, we have
not achieved success or futility.
I have some information
about the trial.
I know that it hasn't, it
hasn't hit those markers.
Scott Berry: And you don't know the
randomization to the arms and all of
this, but this is, I mean, in some ways
this is actually a fantastic trial.
This is a, uh, uh, this
is a syndrome where
nothing works: severe
concussion, severe traumatic brain
injury, many different drugs have failed.
Now, there's actually a brain adaptive
platform trial that's running in Canada.
They're investigating multiple things,
uh, but largely this is a really
refractory disease to treatment.
So the, the ability for this to show and
explore multiple ways to give hyperbaric
oxygen in one trial and demonstrate
some level of efficacy would be
complete game changing,
uh, with positivity.
Now we, we don't know that, you
know, this could be completely
ineffective, uh, we don't know the
data, but the design is a really,
really cool design, uh, in this setting.
So, uh, really looking forward to
seeing how the randomization comes out,
how the trial comes out. Fantastic.
Byron Gajewski: yeah.
So,
Scott Berry: Uh, and now,
when you are working with
the various academic
groups, is there good receptiveness
to Bayesian adaptive designs?
Do you feel like these are, uh, hard
discussions, easy discussions?
Byron Gajewski: um, easy.
It's really remarkable.
Um, I'm really surprised.
I really thought in the beginning it
would be a real struggle because I
was kind of brought up, you know,
in the 90s, we were still in this,
when I was at Texas A &M, we were
still in this time of great debate.
Frequentists and Bayesians used to
struggle, but the debate centered
around kind of mathematical,
philosophical struggles.
Um, you know, you talk about
philosophies, what is science, and
When you talk to collaborators who
are non -statistical, and you say,
Hey, how would you like to interpret
your results as the probability
of a treatment effect, given data?
Well, that sounds great.
Scott Berry: Yeah.
Byron Gajewski: Well, we'll
just go from there.
Scott Berry: Yeah.
Byron Gajewski: And, and
people really like it.
They like, also, that I really, and
I don't think this is necessarily
a Bayesian thing, but I think it's
easier to sell in the Bayesian.
It is, and I think you talked
about this on a pod, about reading
the JAMA article about labels,
and I alluded to it earlier.
You know, yes or no, this binary
world in science and medicine
is very unappealing to me.
I like to think of
science as a spectrum.
And I think when you can
say the probability of a
treatment effect is 0.96...
I think it was the TESLA
trial, in stroke.
I think the trigger was, I
want to say it was 0.975
or something for the,
and I think it hit 0.974
or something.
And then, New England Journal of
Medicine, I think said, well,
it doesn't, it didn't, they
didn't say it doesn't work.
They just said, you couldn't say
that it worked, that endovascular
therapy was better than, and it's
really kind of like, I, anyway.
I think people, I talk
to pediatricians, we
have a network here.
Uh, that, that Matt Mayo got a,
uh, data coordinating center for,
it's called DCOC, the Data
Coordinating and Operations Center,
and it's a rural pediatric network
that he's in charge of as a statistician.
And I talked about this concept,
and they were like, oh, oh, but I'm
talking about like, maybe I don't
want to say names necessarily, but,
Scientists in the room, we were
talking, and they just said, Yeah,
that makes a lot, this concept of
labeling, not labeling it yes or no,
but labeling the probability of, of
a hypothesis, they really liked it.
So I guess,
it's been easy.
It's been an easy sell, I would say.
And I don't mean to say a sell.
I just mean to say, I like to
work with investigators and have
them, let them make decisions.
I just present kind of ideas.
Scott Berry: Yeah.
Byron Gajewski: That's
Scott Berry: this trial that you're
running, the HOBIT trial that you're
running, the primary analysis is
going to be, what's the probability
that this arm is better than control?
Byron Gajewski: right.
Scott Berry: I believe, quote
unquote, success in the trial is
greater than an 85% probability.
It's not an FDA approval
threshold of 97.5 or 99.
And so the really, the interpretation
of this is a, is a quantitative scale.
What's the probability
that this is beneficial?
And it may be that 90% is
practice changing, maybe 50%.
You know, I mean, so it's the
idea that, running this trial, and
this is a hard trial to run.
It's been long, it's been slow.
The idea that there's one answer that
comes out, yes or no, seems absurd.
Byron Gajewski: Yeah.
Scott Berry: And the Bayesian
analysis is gonna help
quantify, given the information.
You do have a dose response
model across these arms.
So it's not like multiplicities of this.
But given the entirety of the data and
the modeling, what's the probability
that this is, uh, beneficial,
which is, which is really cool.
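[Editor's note: a minimal sketch of what "the probability that this arm is beneficial, given the data" means computationally. This uses a toy conjugate beta-binomial model with made-up two-arm counts and flat priors; none of the numbers, priors, or model structure here come from the actual trial being discussed, which uses a dose-response model across arms.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-arm binary-outcome data (made up for illustration)
n_t, x_t = 100, 62   # treatment arm: enrolled, responders
n_c, x_c = 100, 48   # control arm: enrolled, responders

# Flat Beta(1, 1) priors give conjugate Beta posteriors for each rate
draws = 200_000
p_t = rng.beta(1 + x_t, 1 + n_t - x_t, draws)
p_c = rng.beta(1 + x_c, 1 + n_c - x_c, draws)

# Posterior probability of a treatment effect, given the data:
# the fraction of posterior draws where the treatment rate beats control
prob_benefit = (p_t > p_c).mean()
print(f"P(treatment better than control | data) = {prob_benefit:.3f}")
```

The output is the quantity Scott describes: a number on a quantitative scale, to be read directly as the probability of benefit rather than forced through a yes/no threshold.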
Byron Gajewski: Oh, and by the way,
um, there's also an additional
success criterion for that.
Not only does the probability
have to be bigger than 0.85,
but there's an and: the
posterior predictive
probability of phase 3 success.
Scott Berry: Hmm.
Byron Gajewski: So the
future phase 3 would be a
thousand participants, 0.025
type 1 error, one-sided,
and 0.025 type 2 error,
one-sided. Chi-square on the
endpoint, one-to-one randomization.
If that probability of
success is bigger than 0.5,
then that, so it's an and statement.
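[Editor's note: the predictive-probability half of the "and" criterion can be sketched as follows. The phase 2 posteriors, made-up counts, sample sizes, and the two-proportion z approximation to the 1-df chi-square test are all illustrative assumptions, not the actual design being described.]

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
z_crit = NormalDist().inv_cdf(0.975)  # one-sided 0.025 type 1 error

# Hypothetical phase 2 Beta posteriors (from made-up counts 62/100 vs 48/100)
a_t, b_t = 1 + 62, 1 + 38   # treatment
a_c, b_c = 1 + 48, 1 + 52   # control

n_per_arm = 500             # 1,000 participants, one-to-one randomization
sims = 20_000
wins = 0
for _ in range(sims):
    # Draw "true" response rates from the phase 2 posterior...
    pt = rng.beta(a_t, b_t)
    pc = rng.beta(a_c, b_c)
    # ...simulate the future phase 3 trial...
    xt = rng.binomial(n_per_arm, pt)
    xc = rng.binomial(n_per_arm, pc)
    # ...and apply the frequentist analysis (pooled two-proportion z test,
    # equivalent to the 1-df chi-square for this comparison)
    p_hat = (xt + xc) / (2 * n_per_arm)
    se = np.sqrt(2 * p_hat * (1 - p_hat) / n_per_arm)
    if se > 0 and (xt - xc) / n_per_arm / se > z_crit:
        wins += 1

pred_success = wins / sims
print(f"Posterior predictive probability of phase 3 success = {pred_success:.3f}")
# The "and" condition Byron describes: pred_success must exceed 0.5
```

Averaging a frequentist phase 3 win over the phase 2 posterior is what makes this a posterior predictive probability rather than ordinary power at a single assumed effect.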
Scott Berry: Yeah, so this is a
Byron Gajewski: Right,
Scott Berry: proof of concept
phase two trial, and if it
predicts that phase three would
have a reasonable likelihood of
success, that's a positive of this trial.
Byron Gajewski: right, right.
Scott Berry: cool.
Very cool.
Alright, so, uh, lots of, uh,
fun Bayesian trials going on.
Any other exciting
things going on at KUMC?
Byron Gajewski: Um, no, I mean, I,
I think that since, since pain
controls, I really, I've been,
so
Scott Berry: So you've, I know you've
had a number of students that are
now working in this area of Bayesian
trials, um, uh, innovative trial design.
Do you find that students find that
an exciting area to go into?
Byron Gajewski: the answer,
short answer is absolutely.
Right.
You, you actually, it's,
it's kind of cool.
They, they're excited to do it.
They're, they're, you know, I
actually, speaking of Melanie, I
had this discussion with Melanie
years ago at the Joint Statistical Meetings.
Scott Berry: Quintana,
who's at Berry Consultants.
Yep.
Byron Gajewski: I had this
discussion with her about building
adaptive designs and how it's
like being a kid in a sandbox.
You're just building it, you
know, a sand castle
or Legos or something.
It's very rewarding and
fun, and I think the
students think the same way.
They think the same, and it
kind of is reassuring to know
that other people appreciate
it too, and I'm not just some
strange guy who loves
building adaptive designs.
Scott Berry: Yeah, yeah.
Uh, that I, I, I don't know if that's
a fear that we as statisticians
generally have that we find really,
really cool things and wonder
does anybody else think this?
Is that cool?
This is, this is super cool.
Yeah.
Byron Gajewski: Thanks
Scott Berry: Yeah.
Oh, that's awesome.
Uh, you're, you're turning out,
uh, new design students.
That's, that's fabulous.
The work going on at
KUMC is really exciting.
Look for the HOBIT trial.
Um, it's not over.
It's still running.
Uh, but Byron, thanks for joining
us, uh, here in the interim.
Byron Gajewski: Thanks for having me.
I really enjoyed the
discussion, Scott.
Scott Berry: And everybody,
thanks for joining us.
Thanks for tuning in.
Until next time, we will
be here in the interim.