In the Interim...

In this episode of "In the Interim…" guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research.

Key Highlights:
  • Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods
  • Ethical tension in trial conduct, referencing the Belmont Report and equipoise
  • Advances in computation and Markov Chain Monte Carlo (MCMC)
  • Regulatory frameworks for Bayesian adaptive trials, including FDA guidance
  • Implementation details from I-SPY 2 and REMAP-CAP platform trials
  • Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration
For more, visit us at https://www.berryconsultants.com/

Creators and Guests

Host
Scott Berry
President and a Senior Statistical Scientist at Berry Consultants, LLC
Guest
Don Berry
Berry Consultants Founder & Senior Statistical Scientist

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: All right.

Welcome everybody.

Back to, in the interim, uh, I'm your
host for the, maybe the first minute here,

Scott Berry, and welcome everybody back.

And I have, uh, an interesting
discussion for you today.

I'm gonna turn it over to a guest
host and I'll introduce him in a

minute and I'll introduce the podcast.

But the, the, the other guests that
have joined me today are Don Berry,

who you've seen on in the interim.

And as far as this concerns, Don
has 50 years plus of experience

of Bayesian clinical trials.

And Lindsay Berry, who is joining us,
who has been at Berry Consultants now

for seven years, do I have that right?

Lindsay Berry: Six years.

Scott Berry: Six years, uh, feels
like seven, uh, and has a great

deal of experience, uh, in this
time in Bayesian clinical trials.

And Nick Berry, who's also been
at Berry Consultants now for six

Nick Berry: Six and a half years.

Yeah.

Scott Berry: Six and a half, and has more
experience than Lindsay, as you can

hear, in, in Bayesian clinical trials.

So this episode is about Bayesian
clinical trials, and the, the theme of

this is that the sibling and grandson
and my son, Cooper Berry, who is an

undergraduate student at Pitzer College
taking a course in the history of

mathematics, wants to do a project on
the use of Bayesian statistics in
clinical trials and investigate that.

And he just so happens to have a wealth
of experience within his own family.

So he is going to conduct this podcast.

He's gonna be the host and explore
the, the impact and the future

of Bayesian in clinical trials.

So Cooper, welcome to in the
interim, and this episode is yours.

Cooper Berry: Thank you.

Thank you for having me on.

Uh, I'm very excited for this.

Um, and yeah, I, uh, I think we
can start right ahead with, um, uh,

a few questions for, uh, for Don.

So Don, you're, um, your
dissertation work, it was with a

very famous Bayesian statistician.

Um, I think you referred to him as one
of the founders of Bayesian Statistics.

Jimmy Savage.

Um, he was involved in a lot of different
areas, but maybe not clinical trials.

What would Jimmy Savage's
views of the Bayesian approach

today in clinical trials be?

Don Berry: So, uh, he, uh,
first this, this was at Yale.

He was my advisor.

I came to him with a problem, uh,
that, uh, I thought was interesting.

Uh, and it wasn't specific, although
we did discuss the possibility

and I kept thinking that this is
applicable to clinical trials,

this is what clinical trials are.

Um, and then I got ridiculed by the
people who actually run clinical trials.

For many years, we're talking
about the 1970s and eighties.

Uh, by the time we got to the nineties,
they had, they were used to me and

some were beginning to listen to me.

Um, but Jimmy Savage, what
would Jimmy Savage have said?

Well, he said a lot in reading my
dissertation, the five versions of it.

Um, and none of that had anything
specific to do with clinical trials.

Uh, it was, uh, decision making,
sequential decision making. Um, it's

very simple to say what the problem is.

Suppose you have two coins,
um, and you don't know what the

probabilities of heads are for either,
but you're given the opportunity to

toss the coins, say a hundred tosses.

Every time you get
heads, you get a dollar.

Every time you get tails, you get nothing.

So think in terms of clinical trials:
you have a treatment

you're investigating, and
you compare it to another treatment.

So you have these two coins, and if
you choose the treatment that's good,

the patient lives; if you choose the
treatment that's bad,

the patient does not live.

Um, and what your goal is, is to treat as
many patients as effectively as possible.

Get as many heads as possible,
make as much money as possible.

If you're Google, you want to use, uh,
ads that attract. When somebody clicks

on an ad, Google makes money. When you
don't click on the ad,

Google knows you're looking at
it and you didn't click on it,

so Google does not make any money.

Um, and they wanna make as much money as
possible, but they also want to, and this

is the critical issue, they also want to
learn about what is the better therapy.

Not only treat patients effectively,

but also learn about the
treatments so that we can treat

patients later effectively.

Now, this is just to set the context.

In 1979, uh, there was the Belmont Report in
the United States, from a collection

of people who met, uh, at Belmont, I guess.

Um, and they put together
the ethics of clinical trials,

and what they said in this

was very counter to what I just
said about bandit problems.

They said the purpose of, uh,
doing clinical trials is only

to learn, not to treat patients.

And they're very explicit about that.

And so part of my harangue then
and continuing is we can do both.

We can combine clinical
research and clinical practice.

Now, doing that is surprisingly difficult.

And the reason it's difficult
is there are two reasons to

pick a coin over the other coin.

One is the coin is looking better
in terms of its effectiveness, the, the, the

ability to generate a head.

Um, and the other is to learn so that
we have information in the long run

that's gonna help us treat patients.

In the long run, you know, the last
50 patients of the hundred, uh, coin tosses

are gonna benefit from the first 50.

And the first 50 are gonna help
in benefiting the latter 50.

But there's no 50, there's no, you
know, discrete aspect to the thing.

It's that every time
you're going to toss the coin, you're

confronted with just that decision.

But that decision has a ramification
on the rest of the story. And the reason

it's difficult is because I talk about
information, I talk about treating patients.

Those two things are not,
um, uh, separable.

You can't look at one and say,
oh, here's the information.

You can talk about a standard
deviation, for example.

Um, and you can talk, for
treating patients,

about the mean of the distribution
of the probability of heads.

Um, but you can't separate them out.

Um, and so my dissertation.

So, so, so picture your problem.

You are now, you've now treated
20 patients and it turns out that

Coin One was used 12 times and
Coin Two was used eight times.

Uh, and of the 12 times there
were, uh, nine successes.

And of the eight times there
were, uh, two successes.

Uh, what do you do?

Well,

Cooper Berry: when is it worth
flipping Coin Two, even

though it's the worse option,
to get more information on it,

Don Berry: say it again,

Cooper Berry: when is it worth, um,

flipping Coin Two, even though
it's had worse data, just

to get more information on it?

Don Berry: that's an excellent question.

The worth of Coin Two.

Uh, we don't know as much about
Coin Two as we do about Coin One.

Uh, and it would be good to get
information about, uh, Coin Two,

better to get information
about Coin Two than Coin One, even

though Coin One is looking better.

You know, I would want to use Coin One
if I could, but suppose I use Coin One

and then I never go back to Coin Two.

That would be bad,

because even though the, you know,
the point estimate of, uh, Coin

Two was less than Coin One's, that doesn't
mean that the actual probability

of heads on Coin Two was less.

And so if we never go back to
it, that's opportunity lost.

And so that's a negative,
uh, in this circumstance.

I I haven't done the, uh, calculations.

The calculations are
really, uh, difficult.

Um.

But, uh, I think that Coin one
is indeed appropriate because the

information here is not enough to,
uh, make it worthwhile to treat the

current patients potentially less

effectively.
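To make the numbers above concrete, here is a minimal sketch, assuming uniform Beta(1, 1) priors on each coin's probability of heads (an assumption; no prior is specified in the conversation). Conjugate Beta-Binomial updating gives the posterior for each coin, and Monte Carlo draws give the probability that Coin Two is actually the better coin despite its worse point estimate.

```python
# Two-coin example: Coin One 9 heads in 12 tosses, Coin Two 2 heads in 8.
# Priors are assumed Beta(1, 1); posteriors are Beta(10, 4) and Beta(3, 7).
import numpy as np

rng = np.random.default_rng(0)
n_draws = 1_000_000

p1 = rng.beta(1 + 9, 1 + 3, n_draws)   # Coin One: 9 successes, 3 failures
p2 = rng.beta(1 + 2, 1 + 6, n_draws)   # Coin Two: 2 successes, 6 failures

print("Posterior mean, Coin One:", round(p1.mean(), 3))   # exact value 10/14
print("Posterior mean, Coin Two:", round(p2.mean(), 3))   # exact value 3/10
print("P(Coin Two is actually better):", round((p2 > p1).mean(), 3))
```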

Scott Berry: but that, that, that
depends on your horizon, I assume.

So if it

was 20, if it was 21 patients,
the answer's trivial.

You do one.

If it's a million patients, you
might do something different.

Don Berry: that's where
I was going, Scott.

Uh,

Scott Berry: Okay.

Don Berry: and, uh, yes, exactly.

Um, and if, if it's a
thousand patients, um, I, I remember,

uh, Herman Chernoff was a very
famous statistician who, um, uh,

Scott Berry: Oh, so for, for
this podcast, was he a Bayesian?

Don Berry: He was not a Bayesian,

Scott Berry: Okay.

Don Berry: he said he, he said
to me that he was not a Bayesian.

Scott Berry: Okay.

Don Berry: Um, and, uh, uh, Jim
Berger is a, uh, famous Bayesian. Um, Jim

said that anybody who says he's a Bayesian
is a Bayesian, although he was telling me

about this one person he didn't think was
a Bayesian. So, my definition is anyone

is, is a Bayesian if they say they're a
Bayesian, um, except for this one person.

Scott Berry: Okay.

Don Berry: Uh, so Herman Chernoff
said that he was not a Bayesian,

but he couldn't do what he wanted
to do without being a Bayesian.

Because in the Bayesian setting,
you are able to ask yourself,

what is the probability that
I get a success on Coin One?

What is the probability that
I get a success on Coin two?

And then if I pick one of those and
go to tomorrow where I get to use

another coin, uh, I update that.

So every time I make an
observation, I can update it.

And Chernoff wanted to do that.

And he, in this one problem he
wanted to calculate, you know, so

how many times would I use Coin two?

Um, if, uh, I've seen two out
of eight, and then I'm going to keep

using it, uh, let's say the horizon is big,
and so it tells me I should use Coin Two.

How many more failures on Coin two should
I accept before I switch back to Coin one?

And that, you know, he was advertising
to me the calculations that he got.

Um, but it was precisely the
issue that Scott mentioned that

it's, it's the horizon anyway.
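A small simulation of the horizon point Scott raises and Chernoff's question gets at. The true coin probabilities, the Beta(1, 1) priors, and the use of a Thompson-sampling-style rule as the exploring strategy are all assumptions made for this sketch; it is not Chernoff's actual calculation.

```python
# Compare a purely greedy rule (always use the coin with the higher posterior
# mean) with a Thompson-sampling-style rule (use the coin whose posterior
# draw is higher) for a short and a long horizon. With a long horizon,
# occasionally exploring the worse-looking coin pays off.
import numpy as np

def average_success_rate(horizon, rule, true_p=(0.55, 0.70), n_rep=500, seed=1):
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_rep):
        succ, fail = np.zeros(2), np.zeros(2)
        total = 0
        for _ in range(horizon):
            if rule == "greedy":
                arm = int(np.argmax((1 + succ) / (2 + succ + fail)))
            else:  # "thompson"
                arm = int(np.argmax(rng.beta(1 + succ, 1 + fail)))
            heads = rng.random() < true_p[arm]
            succ[arm] += heads
            fail[arm] += 1 - heads
            total += heads
        rates.append(total / horizon)
    return np.mean(rates)

for horizon in (20, 1000):
    print(horizon, "patients:",
          "greedy", round(average_success_rate(horizon, "greedy"), 3),
          "| thompson", round(average_success_rate(horizon, "thompson"), 3))
```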

So this whole idea was anathema
to the clinical trial community.

Uh, I was aghast when I actually, uh,
decided that the only thing I could

do to influence them was to join them.

And so I did that.

Um, I, uh,

Scott Berry: well, let's, let's hang on.

Let's, let's, let's turn
it back to, to, to coop.

Don Berry: Okay, I, I got... so
what would, what would Savage have

thought about, uh, randomization

and these kinds of clinical trials?

Uh,

he would, he would've liked my approach.

He did like my approach, uh, uh,
he was mostly interested in, in

the theorems and, and results in
the mathematics of, of my approach.

And,

you know, the First,

the first draft was 10 pages and
he said, you know, this is great.

Maybe we can do more.

And the final draft, which I published
in the Annals of Mathematical

Statistics, uh, was 27 journal pages.

Uh, and if I tell you what the results
were, you would say that, well,

that sounds trivial, but the trivial
turns out to be very difficult.

Uh, and, and in terms of.

I'm, I'm, I'm gonna,

Scott Berry: so, so, so

hang on Don.

Hang on, Don.

We're, we're gonna run out.

Yeah.

Let's, let's turn back to Coop.

Cooper Berry: And so what
about a clinical trial makes

Bayesian... why was Bayesian looked
down upon in terms of clinical

trials, especially at this time?

Don Berry: so it depends on the person.

If you are a pharmaceutical company or
if you are the, uh, National Institutes

of Health and you're trying to do things
efficiently, that that's what they want.

They want to get answers quickly.

Even if the answer is, uh, uh, is,
you know, we, we dropped, uh, treatment

two because it wasn't effective.

You wanna have a compelling
answer to that question.

So it's all, uh, for the clinical trial
community, uh, efficacy.

And, uh, they, they actually, even today
they don't worry as much about treating

patients effectively in the clinical
trials, but this is coming, because, you

know, diseases are no longer the same
diseases that they were back then;

we have much more information
about what each disease is.

I mean, there used to be
one kind of breast cancer.

Now we've decided that there are at least
four and we've separated them out into

four, but in fact there are hundreds.

And so as we continue, we've got to be
more and more interested in treating the

patients effectively as well as learning.

So back

to my, back to my

Cooper Berry: Yeah.

Are there, are there ethical concerns
in giving a patient a treatment

with a prior probability that's
lower than a different treatment?

Don Berry: Yes, and the way the,
um, clinical trial community

addresses it is to close their eyes.

So they say, well, we've
got to do randomized trials

and we've got to, you know, give
e equal number of patients to

Therapy one and therapy two.

Um, and, uh, we need a trial big
enough for us to conclude that one

therapy is better than the other.

Uh, and we're going to, uh, have a
placebo and randomize to placebo.

Uh, and nobody's gonna know who
got the placebo and who didn't.

And so they close their eyes,

and I don't open my eyes, but I
open an algorithm's eyes, I build

an algorithm which is looking
and does what is appropriate,

say in the bandit, uh, approach.

You know, we're trying to treat as many
patients as effectively as possible.

There are absolutely ethical
concerns with doing that.

And, um, it is, you know, it is the big
deal really, uh, in getting, uh, this kind

of a, uh, of an approach in the clinical
practice, in the clinical trial practice.

Cooper Berry: Yeah.

Follow up question.

So.

Scott, um, leading into around the
year 2000 when you and Don, uh,

founded Berry Consultants, uh, where was
your head at with the I-SPY 2 trial?

Where was your mind on what Bayesian
statistics was for you then,

relating to clinical trials, and what
was it like after the I-SPY 2 trial?

Scott Berry: Yeah, the, the, so
I, I'll go to 2000 and sort of

the timeline of this a little bit.

So, I I was trained as a Bayesian.

I went to Carnegie Mellon University
where most of our courses were Bayesian.

I, I, you know, it's a little
bit like a, a, a language.

I, I think is a Bayesian I model.

As a Bayesian, I can translate it
to Frequentist and what, what, you

know, what does that mean relative so.

I, I was doing sports problems.

I was solving problems as a
Bayesian, and that's how I thought.

And then Don and I started
Berry Consultants in 2000 and we

started, I started, that was my
introduction to clinical trials.

And as you see, Don has had 20 plus years
of experience in them ahead of that.

So I jumped into a number of
projects with Don and it was how

I modeled it was how I thought.

And so it was a very natural
thing for me to go in there.

And I was a little naive,
not understanding that the

world was very frequentist.

Uh, it was dominated by doing
frequentist clinical trials.

And this was a very new thing.

I-SPY 2 was about 10 years after
that, and in the development it was entirely

a Bayesian trial.

But we started this
really with the idea of:

can we do Bayesian clinical trials?
Can we do Bayesian clinical trials?

And, and Don will tell you
this, I'll tell you this.

We started this with, this might not work.

You know, Berry Consultants
might last a year.

We don't know if, if Bayesian
clinical trials will make it.

And now here we are, 25 plus years
later with more than 30 scientists

and you've got four of them here
doing Bayesian clinical trials.

So to some level it, it worked
despite the uphill battle.

Cooper Berry: Mm-hmm.

And how was the

Don Berry: add, can I just add

Cooper Berry: yeah, absolutely.

Don Berry: one?

So in, in doing the trial, it's a
lot more complicated because,

You know, you have to
know what the data are.

You have to build an algorithm
that's based on what the data are

accumulating, and there's all kinds of
logistical issues associated with that.

So at the very juncture that you're
talking about, uh, uh, Cooper,

we had a new arm that
was coming in the trial.

This is a multi-arm trial.

It's not two arms.

It ended up being 23 arms.

Um, uh, a new arm was coming in
the trial and it had some unusual

aspect that I wanted to modify.

So I went to the people that I'd worked
with to build the, um... I was at MD

Anderson Cancer Center, I still am.

And, uh, I worked with people there
to build the trial and, um, it was an

extensive process of, uh, like two years.

Because there were all kinds of things,
you know, the, the mathematics of course,

but the, uh, the, the, the demonstrating
that, uh, we could actually do it.

Uh, we had some pilot studies at MD
Anderson, um, and, uh, and, uh, more

generally, and they had lost the code.

You know, this happens.

Um, and we, so I, I, I couldn't do
what I wanted to do, so I went to Scott

and we did this on a pro bono basis.

Uh, we never got paid for doing it.

Um, and in view of my, uh, uh,
potential conflicts at MD Anderson,

that was appropriate, uh, and I
said, Scott, this is what happened.

I said, can you rewrite it?

Of course he had the benefit of
seeing what everybody else had

done when he was writing it, even
though we didn't have the code.
though we didn't have the code.

And

So he rewrote it. How
long did it take, Scott?

Scott Berry: Uh, it was a long weekend.

Yep.

Yep.

Don Berry: And so it was. And we've
now come, we've now come to the

point where, uh, we can do
that, uh, more and more quickly.

So the building is not a small thing.

It's a, it's a complicated process,
and running is a complicated process.

But as you've heard, uh, we're doing
it, we're running these trials despite

the logistical issues that,
uh, arise along the way.

And with each trial, we learn new things.

Cooper Berry: Yep.

So Scott, when you were at
Carnegie Mellon, the, uh, the Markov

chain Monte Carlo methods, um, were
starting to appear, they were created.

Uh, how did that change
the debate around Bayesian?

Scott Berry: I, I think it
changed it dramatically actually.

Um, a very famous statistician, Brad
Efron, wrote a paper, "Why Isn't Everyone

a Bayesian?", recognizing the benefits of it.

And largely the paper was, well, you
guys can't actually calculate anything.

Uh, it's really too hard to do it.

It's, it has philosophical advantages.

It's nice, but it's really
hard to calculate things.

And it was true.

Um, many of the models we
use now in clinical trials

in the eighties, we couldn't.

We couldn't calculate those things.

I, I set up a model for Don in
one of his trials and it ran in

our basement for three months.

Um, this was in 1983
or something like that.

And we had to hope the electricity never
went out to calculate a very simple model.

Uh, and so this was
very, very challenging.

In the 1990s, the Markov chain
Monte Carlo techniques came out, which,

tied to the, the computing power,
made the calculation of all of the

Bayesian quantities very doable, in
line with doing frequentist things

and, man, in some ways catching up
and passing frequentist things.

So the whole debate about it
was kind of, well, yeah, it

looks good, but you can't do it.

Now all of a sudden we could do
it and we could do it really well.

And it changed the debate between
Bayesian and Frequentist for

sure, in every scientific field.

Clinical trials a little bit slower,
but it absolutely changed the debate.
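As a toy illustration of the kind of calculation MCMC made routine, here is a minimal random-walk Metropolis sampler for a response rate modeled on the log-odds scale with a normal prior, which has no closed-form posterior. The data, the prior, the step size, and the number of draws are all assumptions made for the sketch.

```python
import numpy as np

y, n = 14, 40                        # assumed data: 14 responses in 40 patients

def log_post(theta):
    # Binomial likelihood on the log-odds scale with a Normal(0, 2^2) prior.
    p = 1 / (1 + np.exp(-theta))
    return y * np.log(p) + (n - y) * np.log(1 - p) - 0.5 * (theta / 2.0) ** 2

rng = np.random.default_rng(0)
theta, draws = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(scale=0.5)          # random-walk proposal
    if np.log(rng.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                               # accept
    draws.append(theta)

p_draws = 1 / (1 + np.exp(-np.array(draws[2_000:])))   # drop burn-in
print("posterior mean response rate:", round(p_draws.mean(), 3))
print("P(response rate > 0.30):", round((p_draws > 0.30).mean(), 3))
```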

Cooper Berry: Uh, follow up question.

Why was it slower in clinical trials?

What specifically about clinical
trials made it less applied?

Scott Berry: Yeah, it's, uh, I think
you might get four answers if you

ask us all four, but it, it, it
largely, it was that clinical trials were

done a particular way, and to do
it differently... it's hard to change it.

SAS had been developed, and so
there were models and calculations

available in a frequentist way,
and this is the way it was done.

And anytime you want to change an
industry and do it different, it's,

there's, there's stasis there.

It, it, it's challenging.

So it, it, it, it was really just changing
the, the usual way it's done, and a

lot of people are set up in that way.

I think that was, that
was a huge challenge to it.

Don Berry: You

know, the, the, the big,

the elephant in this
room is the regulators.

And for many years, the companies
and others would say, what will the

FDA think, what will the FDA think?

And so, uh, during the past, uh, 20 or
so years, uh, the FDA changed

their attitudes and some personnel, and
they have become very much positive

about doing this.

And so even in, in so-called phase III
trials, the trials that are pivotal, that

are used for, for approving drugs, um,
so it, it, it, it, it's, it's changing.

It's, it's developing.

Um, back to you,

Cooper Berry: Yeah.

The, so it was 2010, is that correct,

when the FDA established guidelines
for Bayesian clinical trials?

Don Berry: it's a complicated question.

Uh, and no, um, it
depends what you mean by

the FDA.

The FDA has three distinct pieces.

Uh, one is the Center for Devices,
another is the Center for Biologics, and,

and the other is the Center for Drugs.

And so the Center for
Devices became very much interested

in the Bayesian approach.

And they in fact, approached me
in the 1990s to write a, uh, a,

a draft guidance for how to do
Bayesian things in medical devices.

And the eventual guidance, uh, bore little
resemblance to what I had written.

But, um, it was about, in, in 2010, yes.

But there is no, uh, approved, uh, Bayesian
guidance, I think, for drugs, but

there are guidances for what's called
complex innovative designs that
specifically include the Bayesian thing.

Scott Berry: The, the centers
for, uh, biologics and drugs are,

are supposed to have one out.

It was supposed to be in September;
with the government shutdown,

who knows when this comes
out, but a new guidance is due any day now.

Yep.

Don Berry: But in,

in terms of the acceptability in the
community, you know, when we build

these trials, an automaton can run them.

Um, and so what we do is run the automaton
many times and, and simulate to evaluate,

you know, how likely is it that, if
you have a drug that's not effective...

And this is what the tradition in
the FDA has been forever, including

since, uh, this, this Belmont Report,
uh, the false positive rate.

Suppose I have a drug that's
not effective, what is

the probability that you,
um, uh, that your
trial will answer that way?
Uh, and so the false positive
rate is not a Bayesian quantity.

Uh, it is rather the standard for, uh, the
frequentist in terms of, uh, uh, whether

we're doing things, uh, well or not.

But we can do that with bays.

And so what we do is we run these trials,
uh, we simulate, uh, the trial many, many

times, uh, millions of times sometimes,
uh, to get an accurate assessment

of what is the false positive rate.

And then that becomes part of the,
uh, protocol and the approval process.

So we, we do things that are frequentist
in order to get them to be used.
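A minimal sketch of the simulation exercise Don describes: take a simple Bayesian design, run it many times with an ineffective drug, and count how often the trial wrongly declares success. The design below (uniform priors, one interim analysis at 50 patients per arm, a final analysis at 100 per arm, success declared when the posterior probability that treatment beats control exceeds 0.975) is an assumption invented for illustration, not any specific trial's rule.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.30                          # null scenario: treatment rate = control rate
n_sims, n_interim, n_final = 10_000, 50, 100

def prob_trt_better(y_t, n_t, y_c, n_c, m=1_000):
    # Posterior P(treatment rate > control rate) under Beta(1, 1) priors.
    t = rng.beta(1 + y_t, 1 + n_t - y_t, m)
    c = rng.beta(1 + y_c, 1 + n_c - y_c, m)
    return (t > c).mean()

false_pos = 0
for _ in range(n_sims):
    y_t = rng.binomial(n_interim, p_true)
    y_c = rng.binomial(n_interim, p_true)
    if prob_trt_better(y_t, n_interim, y_c, n_interim) > 0.975:
        false_pos += 1                 # early success claim
        continue
    y_t += rng.binomial(n_final - n_interim, p_true)
    y_c += rng.binomial(n_final - n_interim, p_true)
    if prob_trt_better(y_t, n_final, y_c, n_final) > 0.975:
        false_pos += 1                 # success claim at the final analysis

print("estimated false positive rate:", false_pos / n_sims)
```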

Cooper Berry: Yep.

So Nick, I'm gonna turn the
page to you here a little bit.

When in your career would you consider
yourself to have become a
Bayesian statistician?

Was it at Iowa State?

Was it after Iowa State?

What was the frequentist-Bayesian
debate like when you were at Iowa State?

Nick Berry: Okay.

So the first question, when did
I become a Bayesian statistician?

I'll say when I was like
16 years old, probably.

Um, just growing up around Scott and
doing things like playing board games

and strategy, and I remember looking
over his shoulder and doing like little

problem solving things on the computer.

And, and I think it's like a, a method
of thinking that was sort of ingrained

in me, um, this, you know, sort of
sequential update idea of Bayesian.

And so I think I learned a lot
about, well, what's the probability

this will happen in the board game?

And can you make a decision
that maximizes some win, uh,

condition, given that probability?

And so it's sort of a.

Introduction to, to Bayesian
statistics through that.

And then I went to Iowa State
and maybe my Bayesian arc kind of

took a dip because Iowa State's
not a very Bayesian department.

So I, I started fast and then leveled out.

And, uh, I didn't write
a Bayesian dissertation.

I didn't have a Bayesian advisor.

Um, at Iowa State, you had to go outta
your way and take electives if you wanted

to learn about Bayesian statistics.

Um, not that it was a taboo subject,
just that it wasn't part of the

core curriculum for the most part.

Um,

and so I, I of course did that
and took Bayesian classes.

Um, but my learning kind of stalled
in the sense that Scott described

it being a language that he spoke
and it being just innate and, you

know, pervasive in the department
wasn't really an experience I had.

It was something that, uh, you talked
about as an alternative to the, the real

way to do it or something like that.

And so.

Um, sort of leveled out.

I did, and I've always been a
little focused on computation.

And so one thing I did think about
and, and talk about and, and, and

research when I was in school was

MCMC methods, uh, Markov
chain Monte Carlo.

Um, and I think that set me up really
nicely for my eventual career as a Bayesian

statistician.

And I still think of myself
as sort of a computational

heavy, uh, statistician, but,

uh, I can't.

So maybe the modeling aspect of it is
the part that sticks out to me the most.

Just the ability to estimate these really
general models using this, this kind

of, uh, all in one toolkit that is MCMC.

And, um, so that sort of has drawn
me to it. As someone that's always

on the computer, always programming,
having this MCMC toolkit just

makes everything so available,
whereas with a frequentist method:

is there a package that does this?

Uh, sort of, what is this?

Can I, can I get a confidence
interval for this combination of

parameters? Things like that, that
I was always thinking about, just

become inherently natural, and it's just
such a smooth way to, to do modeling.
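Nick's point about a confidence interval for a combination of parameters is easy to make concrete with posterior draws: any function of the parameters inherits a posterior, and hence a credible interval, from the joint draws. The data and Beta(1, 1) priors below are assumptions, and conjugate draws stand in for MCMC output, which would be used in exactly the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000
p_ctrl = rng.beta(1 + 12, 1 + 38, n_draws)   # assumed control data: 12/50 responses
p_trt  = rng.beta(1 + 21, 1 + 29, n_draws)   # assumed treatment data: 21/50 responses

# Derived quantities, each with a full posterior:
risk_diff = p_trt - p_ctrl
rel_risk  = p_trt / p_ctrl

for name, q in [("risk difference", risk_diff), ("relative risk", rel_risk)]:
    lo, hi = np.percentile(q, [2.5, 97.5])
    print(f"{name}: 95% credible interval ({lo:.2f}, {hi:.2f})")
print("P(treatment improves the response rate):", round((risk_diff > 0).mean(), 3))
```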

Uh, and so now I've gone up,
as I described, at, at Berry.

I, you know, have been immersed
in Bayesian methods for, for six

years now, six and a half years.

And, uh, I'm, I'm back on the way up.

Don Berry: So if you had two
words to describe the difference

between Bayes and a non Bayesian,
it would be decision making.

Um, and if you look at some
definitions of statistics, it says

decision making under uncertainty.

Uh, but that's not the
frequentist approach.

Uh, you can make decisions, of
course, in the frequentist approach,

but it's not specifically part of it.

When Savage wrote his fundamental, uh,
book, The Foundations of Statistics, um,

the one thing that came out, of course,
was the Bayesian approach, uh, but the other

thing that came out of his efforts was

he, he wanted to have,
um, a utility function.

You know, why are we doing this?

What, what is the purpose?

Um, and what do we conclude
based on what we've done?

So it leads to, uh, as part of
the, uh, foundation, putting

the two together, the decision
making and the, the data analysis.

Data analysis without decision
making, uh, it's, it's

not, it can't be Bayesian.

Bayesian has the two of them together, and
that involves modeling, very frequently,

Scott Berry: Oh,

Don Berry: strategies.

Scott Berry: so Cooper,

you're... are you muted?

Cooper Berry: Yeah, yeah, yeah.

Um.

So my follow up question is, where are we
at in the history of Bayesian right now?

When you look at what's going on now
with AI revolutionizing almost every

field, how does that affect Bayesian
and where are we going with Bayesian?

Is it on the way down?

Is it on the way up?

Is it just starting?

Uh, I'll start with Nick.

Um, and then we can go all
the way around, uh, with

Nick Berry: Yeah.

Okay.

So AI, um, so do you know
who Geoffrey Hinton is?

He's a, a, a computer scientist
that was at the University of

Toronto for a long time, now known
as, like, the godfather of AI.

He essentially invented, in a way,
neural networks by inventing

backpropagation, which is now used to,
like, get the, the weights and estimate

all the parameters that go into,
like, these billion-parameter models.

Um,

he's

a Bayesian as a computer scientist.

He's a, you know, he's talked about
Bayesian statistics and, um, thinks like a

Bayesian, um, has tweets about subjective
probabilities that the world, the, the

human race will be extinct because of AI
in the next 30 years and things like that.

So he thinks like a Bayesian.

He has students that
have done great things.

Radford Neal is a Bayesian, uh, wrote
his dissertation about Bayesian

estimation of neural networks.

Um, and so there's, there are Bayesians
doing amazing stuff in AI, fitting

these large language models.

They're at the forefront of AI, but
Bayesian methods are not used in

any of the state-of-the-art models.

Bayesian ideas probably are, but
they're not estimating posterior

distributions for these parameters.

It's impossible just because
of the magnitude of it.

Even now, we don't have
enough compute power to do it.

It's just not feasible.

So I think.

Bayesian has had a huge impact.

Bayesian statistics had a huge
impact on people doing deep learning.

Right now, it's not implemented
in the state-of-the-art models.

I don't know if, in 30 years...
Like, now we look back and say, we

couldn't estimate this model that's
now, you know, four lines in Stan.

But in 40 years are we gonna look
back and say, okay, we can now get

joint posteriors for these,
for these parameters in the model?
And we have uncertainty about
these things that we can combine.

So I think that's, I mean,

it seems impossible looking at it
now to, to do anything like MCMC

on those parameters, but maybe some
variational inference or some, um,

more direct way to sample from these
will, will lead to, to progress.

Cooper Berry: Um, yeah.

So Lindsay, you went to Duke,
uh, I understand that it's a

more Bayesian graduate school.

What was the frequentist versus
Bayesian debate like when you were in

graduate school and starting at Berry?

What was the, what was
the transition like?

Lindsay Berry: Um, yes.

So I guess I, I won't tell
you if I'm a Bayesian, but um,

the prior is I am Don's granddaughter
and I'm, I'm Scott's daughter.

So that's your prior, uh, you
can form your prior from that.

And then I did my undergrad at UT
Austin in math and I took classes

from James Scott, who was a Duke Stats
PhD and he taught intro statistics,

including Bayesian statistics.

And then I went to Duke
to get my statistics PhD.

And I was taught by, uh,
Mike West was my advisor.

Uh, Jim Berger is at Duke, um, David
Dunson, uh, famous Bayesian statisticians.

Um,

so

Scott Berry: anyway, um,

Lindsay Berry: Based on that data, you can
update your, your, uh, posterior and,

and guess whether you think I'm a
Bayesian or not, but, um, yeah, I, I

mean, I think where I land is that I
do, I agree with Scott when I, when

it comes to modeling, I think like a
Bayesian and, you know, my research was

in essentially sequential forecasting.

So you are getting updated information
every day about sales of a, an item in a

supermarket, and your goal is forecasting:

you know, how many items
will we sell tomorrow?

Uh, how many will we sell in two
days, in 14 days? And so

it's just natural to attack that problem
from a sequential learning framework of,

you know, how many did we sell yesterday?

Um, the next day, the day after
that continuously updating, um,

your posterior, um, to forecast.

Um, so it's just natural.
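A toy version of the day-by-day updating Lindsay describes, assuming daily item sales are Poisson with a Gamma prior on the rate; the counts and the Gamma(2, 0.5) prior are made up for illustration, and her actual research used much richer dynamic models. Each day's count updates yesterday's posterior, which then becomes the prior for tomorrow's forecast.

```python
# Conjugate Poisson-Gamma updating: Gamma(a, b) prior on the daily sales rate,
# posterior Gamma(a + sum(y), b + n) after n days of counts y.
a, b = 2.0, 0.5                      # Gamma(shape, rate) prior on the rate
sales = [7, 4, 9, 6, 11, 8]          # assumed daily sales counts

for day, y in enumerate(sales, start=1):
    a, b = a + y, b + 1              # update with today's count
    rate_mean = a / b                # posterior mean rate, which is also the mean
                                     # of the negative-binomial predictive for tomorrow
    print(f"after day {day}: posterior mean rate = {rate_mean:.2f}, "
          f"forecast for tomorrow ~ {rate_mean:.1f} items")
```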

Um, I, I guess I feel like

as Don said, you know, we have
to speak both languages though.

You know, we can't... we use frequentist
methods even here at Berry.

Um, and we speak the frequentist language
and we try to use Bayesian methods

where they're appropriate and we feel
like they're adding a lot of value.

Um, so I think that balance is
really important, um, uh, for me.

Um, and then I, I guess I can talk about.

My transition here at
Berry was pretty unique.

And, you know, we talked about some of the
barriers of Bayesian in clinical trials,

and one of those is just, it's not what's

traditionally done

you know, and when you're going
against that status quo, you hit,

um, you run into those challenges,
getting new ideas across.

Um, I joined Berry in the summer of
2019 and I had a, a nice, you know,

six month period of, of learning
how clinical trials are done.

And then at the end of 2019, uh, the
COVID pandemic started ramping up.

And I really think that was a
point where we were forced very

quickly to change how we do things.

And early in 2020, I was brought on
to a project called REMAP-CAP, and

this is an adaptive platform trial

that had existed already, um, studying
patients with community-acquired

pneumonia, and the platform was kind of
designed to be prepared for a future

pandemic because they thought, you
know, it was, it was very likely a

respiratory influenza pandemic or some
other pandemic could occur, and they

wanted to be ready to pivot, um, to
study patients with those diseases.

So I, I think COVID really forced us.

We had to be Bayesian,
we had to be adaptive.

You know, we, we were faced with
so much uncertainty at that time.

What was this pandemic gonna be?

You know, how far was it gonna spread?

How serious was it going to be?

How many patients could
we enroll in a disease?

There was so much uncertainty
during that design stage that we

couldn't do the traditional thing
of running one-to-one trials, um,

you know, fixed power calculations.

We had to be adaptive, we had
to do frequent interim analyses.

Um, so I, I think that was a very
unique experience and, uh, it, it's now

paved the way I think for, for people
to have more acceptance of Bayesian

methods of adaptive platform trials.

Cooper Berry: Mm-hmm.

Do you think in the next pandemic that
a, um, a Bayesian approach will be taken?

Lindsay Berry: Uh, yes, I think
it, it kind of has to be, I mean,

I think, you know, one of the

benefits is just.

I think you can't just focus on the
trial that you're running, you know,

things are happening outside the trial.

You're learning constantly.

There's external data emerging and um,
you know, being able to either incorporate

that into the analysis in a formal way
or, you know, to stop a treatment and

then still have an interpretable result.

I, I think, yeah, absolutely.

Cooper Berry: Got it.

Awesome.

Um, so now looking into
the future a little bit.

So Don, it's been about over 50 years and
you've seen Bayesian clinical trials grow.

Uh, you've seen Monte Carlo methods,
Mar, uh, Markov chain Monte Carlo methods,

um, and now looking towards AI, what is
Bayesian statistics to you going forward?

Are you happy with where things are going?

Are you pessimistic?

What, where's your head at with it?

Don Berry: So, uh, um,
depends what you mean by AI.

There, there are various definitions,
but the, the, the notion of

learning, uh, is, uh, fundamental.

You know, you teach something to teach
itself, to teach itself, um, what

to do in the context of the process.

Uh, and so it's, you know, what we do.

So in a sense, the algorithm
that I mentioned is AI.

I mean, it's learning.

It has many possibilities that it can do.

And as it's, it's built to learn.

And as it learns, um, it makes decisions.

And, uh, and what I've read
about, uh, artificial intelligence

is it's, it's, it's all in
decisions and making decisions.

And as I said before, making
decisions means Bayes.

I mean, you, you, you can't really
do it without Bayes. Herman Chernoff:

I'm not a Bayesian, but I use
Bayesian because I have to.

It's the only thing that
lets me do what I want to do.

And that is a summary of what
happens in artificial intelligence

and what happens in AI.

So I, I think the future is, is
Bayes. As to, you know, artificial

intelligence, to me it is scary.

Um, and, um.

I am, I'm, uh, uh, I'm not
convinced that Bayesian analyses

won't be used for, um, some of the
scary things that might happen.

So who knows what's going to happen?

You know, as Yogi Berra said, prediction
is difficult, especially about the future.

Cooper Berry: And so if we, if we picture

that Bayesian has a sort of
timeline, and we predict, sure,

we, let's start the timeline:
the first quarter of a, of a game

is you doing bandit problems.

What, where are we at right
now at this point in time?

What, what quarter of a football game?

What, what time period?
Are we at halftime yet?

Are we still in the first quarter?

Is it almost over?

Um, in terms of Bayesian methods,

Scott Berry: Who are you passing
the ball to with that one?

Cooper Berry: that's a tough one.

Anyone have any strong opinions?

Lindsay Berry: I have an answer
and I don't know, I'm just

kind of making stuff up here.

So I think right now we do a lot of
Bayesian inference and, um, you know, we

might run a Bayesian trial and at, at the
end of the trial we report a posterior

probability of benefit from a treatment.

I think what's missing is

the decision making piece Don was
just describing. You know, a lot

of times we just leave it there.

Here's our posterior probability.

What are we supposed
to do with that though?

Um, and, you know, how should
a clinician use this treatment?

Should a regulator approve a drug?

I think that second piece is
underrepresented in analysis of

clinical trials and, and design.

And so I, I think I would
say we're maybe at halftime.

You know, we are using some
of the tools of Bayesian

statistics, but not all of them.

And I think there's a lot of
promise from decision theory that

is yet to be, um, discovered.

Scott Berry: I, I, I'll be, I'll be
bold and say that I'm not even sure

the game has started yet because I
think there's gonna be an incredible

explosion of data, and we've seen
that a little bit in the 25 years.

When we, when we, when I started
doing this 25 years ago, there was

no data outside of a clinical trial.

And Frequentist is kind of good at
analyzing a single trial result.

It's, it's, it's what it does now.

We have these incredible sources
of data and, and in some cases we

have that now, and we're changing
the way we're doing trials by

incorporating that data, incorporating
that on a much grander scale, about

exactly what Lindsay and
Don said: decision making.

The only way to synthesize the
incredible amount of data that's

coming in clinical trials, but outside
of clinical trials will be Bayesian.

Uh, I, I think we're gonna see this
incredible growth and it'll be an

interesting thing, the history of
mathematics, the class you're in.

Looking back at this, at some
point I think we're gonna see this

huge explosion of it. But I, I
interrupted Nick; your prediction?

Nick Berry: Yeah, I won't
let your answer swing me.

I'll stick to what I was gonna say
originally, which is I think we're

at the two minute timeout at the
end of the first half it feels like.

We're not at halftime yet, and
we're actually in like, at least

in the clinical trials context, in
this sort of mad dash, like, uh,

these things are getting accepted.

They're being used in, you know, a
lot of trials, regulatory trials.

Uh, people that aren't at Berry
are designing Bayesian trials.

It's not just us.

And so I feel like we're in this
two minute drill to get to halftime.

We're, we're, we're leading
up to something, but moving at

a really fast pace right now.

And maybe we'll get to some stasis in
some short time where, okay, we plateaued.

There's no questions about
Bayesian clinical trials.

You do them.

Here's how they're done.

Here's the history.

Like here's a, a backlog of things you
can look at for other people doing them.

And uh, and then we will see how
the second half goes, which maybe

is Scott's thing about synthesizing
these massive amounts of data.

Um, and, and you know, things like that.

Cooper Berry: Do you think your job, Nick,
uh, specifically FACTS-related, is gonna

get easier or harder as we get more data?

Nick Berry: Oh, FACTS-related, as in
developing our software? I think it

probably gets, gets harder, um,
harder in the sense that we, we

try to create software that is
generally useful to a lot of people.

And as the problem space gets bigger and
bigger and bigger and bigger and people

have different types of external data and
they want to use it in different ways, the

capabilities that we have to provide to
users grow really, really, really fast.

So I think it probably, you know, it'd
be much easier to write a clinical trial

design software that just simulated
fixed frequentist clinical trials.

Um, and so I think as we get more
data, um, more models, more methods

for combining data, it probably gets
harder, um, but also more useful.

If we have something that allows people
to do it quickly, um, they don't have to

know how to program something in Stan.

They don't have to work on hierarchical
priors and stuff like that.

It can open the door for a lot
more people to use these methods.

So, um, yeah.

Cooper Berry: Don, how about you?

Where are we at in the,
uh, the Bayesian Bowl game?

Don Berry: Uh, I'm... two minutes. Well,

like all Bayesians, I have a
probability distribution.

And my probability distribution
has a mean of the two minute mark.

We're two minutes into the game.

Uh, but the distribution is,
is really, uh, uh... has a large

variance, large standard deviation.

Um, the critical thing to modify what,
uh, Nick was saying and Scott was

saying is, is something about the data.

What, what are the data?

The data are not simply adding more
things and more patients and the,

you know, but it's the biology.

Biology today in every disease
is a far cry from biology

50, even 20, years ago.

Um, and we, we learn more about
the diseases and more about the

individual aspects of the diseases.

You know, you have two patients
and they look the same.

Um, let's say they have breast cancer
and they're the same age and they

have breast cancer in the same
breast and, and, uh, the size of

the two breast cancers is the same.

Everything looks the same, and one dies and
the other lives, and dies of heart disease.

Um,

we talk in medicine
about precision medicine.

We're learning more and more about
the, the, the patient characteristics.

And soon... and it's this thing that I
mentioned about, uh, there are many,

uh, categories of any disease,
uh, and every disease is becoming

more and more, uh, categorized.

Uh, and it is that kind of data
that we confront when we do

trials and we ask about, um,
uh, the artificial intelligence.

We ask, you know, about how's the
therapy doing in this type of patient?

Or are there, are there, uh,
things that we should be measuring?

Uh, and if it looks like it's better
in that type of patient, maybe we

should use the therapy for that, uh,
patient, but not for another patient.

And so that's the kind
of thing that we did

in, in I-SPY 2, but it's,
it really is the future.

Yeah.

And, and all of these are
decisions and therefore bays.

So, um, I think the future is, is bright.

Um, we, I think all of us worry that
because it seems so bright, there's

gonna be a lot of people trying to do it.

And frankly, not everybody can do it.

Uh, and they need our help, but there
are certainly lots of people out there

who can help us just as well as we can.

Cooper Berry: That's great and I'll,
I'll ask one more closing question here.

So clearly Bayesian statistics has
somewhat of a, a grasp on our family.

Um, what is it about Bayesian
that makes it so appealing?

For statisticians, what is the one core
item of it that makes it so appealing?

Lindsay Berry: I, you said, what
makes it appealing to statisticians?

I think it's, it's just appealing to
humans because Bayesian statistics

is the way a human thinks and learns.

So, you know, you start out, you don't,
maybe you don't know much about something,

and then you gather some data and you,
you update your beliefs as you do
that, and that's a sequential process.

I think you, you know, you,
that's how you change your mind.

And, um, I just, I just think that's
such a fundamental concept that people

get it, you know, we work with a lot of
clinicians and that's how they think,

you know, they see one trial result and
another trial result, and they kind of,

as they go learn how to treat patients.

So I, I think it's that concept that's
so appealing about it to not just our

family, but I think a lot of people.

Cooper Berry: Hmm.

Is this appeal different than
the appeal of an adaptive trial

over a non-adaptive trial?

Scott Berry: So I think
it is different, but

it utilizes exactly what Lindsay said.

Once you adopt and recognize the
beauty of, I learn something, how does

it, how does it update my knowledge?

Now I want to keep updating my knowledge.

I wanna learn something.

And, and the idea of now doing sequential
experiments when we can do sequential

learning, it's, it's the natural thing.

The idea that we will wait
three years before we look at

the data is just completely the
opposite of the Bayesian view.

Nick.

Nick Berry: Yeah, I think
I, I agree with that.

I agree with what Lindsay said, obviously.

And then for me, it's, it's the toolkit.

It's the, the modeling, um, just,
I don't know, the ability to,

to estimate hierarchical models, draw
them as, you know, these trees,

then have your hyperpriors and estimate
these hierarchical models with shrinkage.

It's like, I, it's such an incredibly
powerful tool that you can't help

but be, like, drawn to the light,
you know, like a moth, right?

You just, you keep seeing it as
you, you're in statistics, it's

hierarchical models, and you, you can't
help but, like, be drawn to this Bayesian

estimation of parameters and the, you
know, borrowing and, uh, hyperpriors.

It just, it's such an
elegant, nice way to do it.

And now that we can estimate these
parameters so well, it's just that much

more powerful, um, with, with not a lot
of startup that has to go into this.

Like there's not a lot of
cost to doing it that way.
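A minimal sketch of the hierarchical modeling with shrinkage Nick describes: arm-level effect estimates with known standard errors, arm effects drawn from a common normal distribution, a flat prior on the overall mean and an inverse-gamma hyperprior on the between-arm variance, estimated with a small Gibbs sampler. The data and priors are assumptions invented for illustration; the output shows the raw estimates shrinking toward the overall mean, the "borrowing" he mentions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0])    # assumed observed arm effects
s = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0])  # assumed standard errors
J = len(y)

mu, tau2 = 0.0, 10.0
theta_draws = []
for it in range(5_000):
    # theta_j | rest: precision-weighted compromise between y_j and mu
    prec = 1 / s**2 + 1 / tau2
    theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1 / prec))
    # mu | rest: flat prior on the overall mean
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    # tau^2 | rest: conjugate update under an Inverse-Gamma(1, 1) hyperprior
    tau2 = 1 / rng.gamma(1 + J / 2, 1 / (1 + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= 1_000:                  # discard burn-in
        theta_draws.append(theta)

print("raw estimates:     ", y)
print("shrunken estimates:", np.round(np.mean(theta_draws, axis=0), 1))
```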

Don Berry: So, for me, you get a story.

Uh, I agree with, uh, Lindsay.

I agree with, uh, Scott.

Um, uh, the Bayesian
approach is, is, is natural.

Um, the,

there's a, a very famous statistician,
the most famous statistician in the world

ever, the father of modern statistics,

um, uh, Ronald Fisher. He was
a, a, uh, Sir Ronald Fisher.

Uh, he was a statistician at
Rothamsted, uh, agricultural

institute, and he developed, he,

he

defined p-values.

He, uh, invented randomization.

He invented factorial designs.

He's regarded by virtually
every statistician that I

know, uh, as almost a God.

So what would he think?

You asked, uh, Cooper about
what Savage would think.

Uh, what would he think about
randomization in clinical trials?

And he was actually asked that question.

He was, he worked in agriculture.

Um, and by the way, he hated
people who would, uh, run

experiments asking one question.

He said, you have to
ask lots of questions.

And we do that.

But, uh, the one question that he
was asked by a, uh, statistician,

um, because the statistician
asking the question was

interested in whether, um, the, the,
the god of randomization would think

that it's appropriate to randomize
patients, um, in clinical trials.

Uh, and Fisher had no experience
with, no knowledge of, uh, clinical

trials, but he said, I suppose
what you would do is you would, um,

calculate a p-value, um, thinking p-values
instead of Bayesian statistics. You would

calculate a p-value for your current data,
and then tomorrow you would calculate

another p-value for the current data,

and use some function of the p-value
to assign randomly, but biased,

randomly biased in favor of, of
the therapy that is doing better.

And that's the kind of thing that
we do, that we did in, in I-SPY 2.

And it's natural.

I mean, you don't want to ignore...
here's a person faced with this

question from another statistician
who, um, doesn't wanna ignore the

randomization, but wants to do it
right.

Nick asks the question, even in that
circumstance, about ethics. Yes, there are

ethics associated with it, but especially
if you could, um, you know, combine

the, uh, things going on in randomized
trials with Fisher's concern about

treating patients as
things are developing.

The, the AI, the algorithm is
telling us things, whether it's a

p-value or a posterior probability.

Somebody else, uh, uh, somebody
named Thompson, uh, did a, uh,

wrote a paper back in 1933, essentially
what Fisher said, but with a Bayesian

calculation instead of a p-value.

It probably doesn't make any difference
or much difference as to what you're

using, depending on what function
it is, but evaluating with, uh, with

simulation, and by the way, to Scott,
one of Scott's earlier points, um,

Fisher could never have simulated
his, uh, proposal, nor could Thompson.

Um, because it's, uh, complicated, uh,
many simulations that you have to do.

Many scenarios, and by the way, Fisher
would be asking other questions too.

So how do you incorporate that?

Um, and it's the age of the computer
and the ability to do things.

And, um, so it's, it, it's natural even
to this, this, uh, god of statistics.
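A sketch of the Thompson (1933) style allocation Don mentions: randomize the next patient to each arm with probability tied to the current posterior probability that the arm is better, rather than a fixed 50/50. The counts, the Beta(1, 1) priors, and the tempering exponent below are assumptions for illustration; trials such as I-SPY 2 use more elaborate versions of the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed current data: arm A 9/20 responses, arm B 15/22 responses.
draws_a = rng.beta(1 + 9, 1 + 11, 100_000)
draws_b = rng.beta(1 + 15, 1 + 7, 100_000)
p_b_better = (draws_b > draws_a).mean()

# Tempering (raising to a power below 1) keeps the allocation from running to
# extremes too quickly; c = 0.5 is a plausible-looking but assumed choice.
c = 0.5
w_a, w_b = (1 - p_b_better) ** c, p_b_better ** c
prob_next_patient_to_b = w_b / (w_a + w_b)

print("P(arm B is better | data):", round(p_b_better, 3))
print("randomization probability to arm B:", round(prob_next_patient_to_b, 3))
```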

Scott Berry: Okay, so

I am going to grab back control of the
host duty. Cooper, thank you for coming on.

Uh, uh, a wonderful conversation
and I want to thank everybody

for, for joining us, and I want to
thank everybody for spending time

in the interim until the next one.

Thank you.

Cooper Berry: Thank you.

Lindsay Berry: Thank you.

Scott Berry: Yep.

Bye.

Lindsay Berry: Thanks.