In the Interim...

Dr. Scott Berry hosts this episode of "In the Interim…", opening with statistical analysis of elite athletes before focusing on the misunderstood role of clinical trial simulation. He distinguishes simulation as a predictive tool from its use as an in-silico process that enables trial design exploration, iteration, and optimization. Clinical trial simulation provides a mechanism for iterative comparison of multiple designs, driven by ongoing team feedback and evolving trial objectives. Scott stresses that rigid simulation plans are “not productive,” since the most effective designs typically emerge when stakeholders view real trial examples and suggest new design options in real time. The ICECAP trial serves as a key illustration, where the final design was shaped by simulation-informed team input across multiple iterations, from three tested durations to ten with response adaptive randomization. Scott also discusses the creation of the FACTS software, highlighting its ability to test alternative designs rapidly, present side-by-side comparisons, and conduct counterfactual analyses—revealing what different trial configurations would have produced using the same simulated datasets.

Key Highlights
  • Simulation contrasted as a predictive tool versus an engine for iterative design evaluation.
  • Emphasizes design process as team-driven and iterative, not prescriptive.
  • Use of concrete example trials enhances communication across multidisciplinary teams.
  • FACTS software enables design flexibility, in silico iteration, and comparative scenario analysis.
  • ICECAP trial as an instance of simulation-informed design adaptation.
For more visit: https://www.berryconsultants.com/

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: Well, welcome everybody back to In the Interim.

I am Scott Berry, your host for today.

I am flying solo today.

I have a topic I thought
I'd fly solo with.

First I'm gonna jump in from the sports world, and then we're gonna talk about clinical trial simulation.

It's something that everybody views differently. I think there's a lot of misunderstanding of what clinical trial simulation is within the clinical trial design world. And I want to talk about that because I think there's sufficient misunderstanding, even among statisticians, about what it is as a tool, how it's used, and what that looks like. So I want to talk about that.

But first, as a statistician, as we observe things going on, I do want to talk about the sports world. Those that know me know I love sports. My initial interest in statistics was in sports. When I was in academia, I wrote about it for Chance Magazine. I wrote a quarterly article called "A Statistician Reads the Sports Pages," where I got to pick a different topic, write about it, and explore it from a statistical perspective. I loved it, and I wish I had the time to do it again now. But from the sports world, I want to bring up the oddity that is Scottie Scheffler.

So I enjoy golf. I play golf, and I follow golf. And I want you to appreciate how stunningly good he is as an outlier, a statistical oddity within it.

And I am particularly interested in those elite athletes that separate themselves from the curve. The Babe Ruths, the Wayne Gretzkys, the Tiger Woods, the Jack Nicklauses. By definition there are not a lot of these athletes, but they really separate themselves.

And what's so stunning to me is how good he is within a game where it's darn near impossible to be as good as he is.

So I want you to think about the Tour Championship, which is coming up this weekend as we're taping this. There are 30 golfers in it, perhaps the best 30 golfers in the world. You can argue that there's a few missing, but largely it's the best 30 golfers in the world.

Every one of these golfers is dedicated to this game.

They are all elite.

They were all the best in
college when they were there.

They all had phenomenal years.

They all work at their game.

They all train at this elite
level in terms of golf.

And there are 30 of them.

They play in the Tour Championship, and they all start from zero. And estimates of Scottie Scheffler's odds of winning this tournament are 31%, something like that. Among 30 golfers, the average here is roughly 3%, and among all of these elite golfers, he has something like a 31% quantitative estimate. I mean, you can argue about what this is, but the odds of him winning this are comparable to when Tiger Woods was dominating golf at this level.

It is stunning to me that somebody in that game can be that much better than everybody else. It's hard to actually fathom him being that good.

He has this unbelievable streak of finishing in the top 25; I think it's now 18 golf tournaments in a row where he's finished in the top 25. There are 140 professional golfers that are all elite at what they do. They're all in the right-hand tail of the normal distribution, and here he is, finishing in the top 25 18 straight times, with five wins.

He's really done this for three years. So if you appreciate statistical oddities and rare things, watch Scottie Scheffler. It's unbelievable. And he sort of fits into this curve of elite players.

So that was on my mind as a statistician. We can actually come back to the topic for today, simulation, and maybe something I did around Tiger Woods on that. Okay, I can't say that and not say something about it. When Tiger Woods was winning all those golf tournaments, he was a freakish athlete, out on the incredible right-hand tail of player performance. And there was this impression that he could, at times, will himself to win. So I simulated golf tournaments of Tiger Woods, where I estimated his ability relative to the other players and I simulated his career. And he won almost exactly the number of golf tournaments that Robo Tiger, the simulated Tiger, won.

I used only the estimates of all of the PGA Tour golfers. You can fit really nice normal distribution models to PGA Tour golfers for their 18-hole scores or their tournament scores. It's slightly right-skewed, but largely a normal fits very well for a PGA Tour golfer. And when I resimulated his career, he won, on average, almost exactly the same number of tournaments that the simulation showed he should win.
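A minimal sketch of that kind of career resimulation, where every number (field size, scoring means, standard deviation) is an invented stand-in rather than the fitted model described here; it just shows the mechanic of drawing scores from per-golfer normal distributions and counting wins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-round scoring means: one standout golfer plus a field.
# Illustrative numbers only; a real model would be fit to PGA Tour scores.
field_means = rng.normal(70.5, 0.5, size=143)  # 143 "ordinary" elite golfers
star_mean = 68.8                               # the standout's mean round
round_sd = 2.7                                 # round-to-round variability

def star_wins_tournament():
    """Four rounds each; lowest 72-hole total wins."""
    field_totals = rng.normal(field_means, round_sd, size=(4, 143)).sum(axis=0)
    star_total = rng.normal(star_mean, round_sd, size=4).sum()
    return star_total < field_totals.min()

n_events = 5_000
wins = sum(star_wins_tournament() for _ in range(n_events))
print(f"Standout wins {wins / n_events:.1%} of simulated tournaments")
```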

So what does that mean? Largely it means he was just better than everybody. He didn't will himself to win golf tournaments. He was just better than everybody else. Now, he had amazing shots and all of this along the way, and when you're better than everybody, it sort of feels like something more. But I'm not sure there was any intangible to it. And he didn't win any more than what just his raw ability said he should win.

And it would be interesting to do that with Scottie Scheffler as well.

So that's an interesting aspect of simulation, and a lot of people see simulation in that role. The NFL season's about to start here, and we could simulate the chance that different teams win the Super Bowl. This is done; you can go on ESPN and read about this.

You can read about Hurricane Erin and the prediction of where Hurricane Erin is gonna go. Right now it's off the East Coast, and the prediction is it's gonna go out into the Atlantic Ocean. But how close does it get? What kind of havoc? This is all simulation. And you can read the European model, you can read other models of it.

So the general public sees simulation as prediction; it's ingrained in people's thinking. And I talked about it for Tiger Woods: predicting how many wins he would have if we simulate independent, identically distributed observations. How much should he win, and does he win more or less than that? That's a prediction.

Hurricanes are prediction. We use models and simulation to predict them.

Getting back to In the Interim here, where we live in the world of clinical trials and medical decision making: PK/PD scientists use clinical trial simulation to predict what's gonna happen when you put a certain amount of drug, at a certain frequency, into a human being of a certain size and characteristics. It's prediction, and they talk about simulation. You can go on and see all the PK/PD tools, which simulate what's gonna happen in the outcome.

I use clinical trial simulation in a very different way. I use it for creating a clinical trial design, and I think it's very, very misunderstood. So when I talk about it, I'm gonna talk about how I use it.

So how do I use clinical trial simulation? When there's a particular design proposed for a clinical trial, say a phase two trial with four doses and 50 patients per dose, I can create software that simulates patients coming into the trial and being randomized to a dose, just like the protocol says. I can simulate an interim analysis happening with some number of patients in the trial. You can track this in real time. So it's creating software that can simulate the outcomes of clinical trials exactly as designed, and I can record what happens in the trial.

Now, when I simulate clinical trials, I play the role of Mother Nature, where I say: assume all of the treatments are the same as placebo. What would happen in this trial? Assume that one of the doses has a strong effect and the other doses are linearly increasing to that effect. So I play that role, and I plug into the simulation how I wanna simulate the outcomes of patients. Then I simulate the trial, and I can record incredibly detailed information about it. I can record the amount of drug used, and the time in the trial according to enrollment rates, projected enrollment rates; I can simulate random enrollment in the trial, a process of enrollment in the trial. And I can record which dose wins the trial, which has the best effect on the efficacy endpoint.
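A minimal sketch of that kind of engine, with everything invented for illustration (a binary endpoint, a placebo arm plus three doses at 50 patients each, and the "Mother Nature" scenario plugged in as true response rates):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_trial(true_rates, n_per_arm=50):
    """Simulate one trial as designed: randomize, observe, pick the best arm.

    true_rates is the 'Mother Nature' scenario: the assumed true response
    rate on each arm (here arm 0 is placebo).
    """
    responses = rng.binomial(n_per_arm, true_rates)  # outcomes on each arm
    observed = responses / n_per_arm
    best_dose = int(np.argmax(observed[1:])) + 1     # best-looking active arm
    return best_dose, observed

# Two scenarios to play Mother Nature with: a null and a linear effect.
null_scenario = np.array([0.20, 0.20, 0.20, 0.20])
linear_scenario = np.array([0.20, 0.30, 0.40, 0.50])

best, obs = simulate_trial(linear_scenario)
print(f"best-looking dose: {best}, observed rates: {np.round(obs, 2)}")
```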

I can use different analysis methods on the data to estimate the ED90 dose in the trial. I can change the randomization probabilities and simulate the same characteristics. I could simulate: what's the probability, based on that outcome, that you would go forward into a phase three trial? That you would meet your go/no-go criteria?

Now, you have to ask those questions. You have to ask the team: under what circumstances would you go to phase three? By the way, you have to ask them, is that a good decision? Suppose the truth of the drug were this: would you want to go to phase three?

So what's nice about this is you start asking them the real questions, in their language. When would you go to phase three? What would you have to see on the primary endpoint? And they might say, you know, I need to see that the secondary endpoint is doing the following.

You can simulate that. We can simulate a tolerability or a safety endpoint, and they say, you know, this is what I need to see to go to phase three. And then the predictive probability of phase three success, based on the efficacy estimate, needs to be above some number for a 400 patient phase three trial.

We get all kinds of different
answers, and that's okay.

There's no right answer to that, but we
can record incredibly detailed information

about the outcome of that simulated trial.

And then we can repeat that
simulation for many, many trials.
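Continuing the hypothetical sketch above, the "repeat it many, many times" step is just a loop that tabulates those recorded outcomes; the go/no-go rule here (observed best-dose advantage over placebo above 0.15) is a stand-in for whatever criteria the team actually articulates:

```python
def operating_characteristics(true_rates, n_sims=10_000, go_margin=0.15):
    """Repeat the simulated trial many times and tabulate what happened."""
    go_count = 0
    picks = np.zeros(len(true_rates))
    for _ in range(n_sims):
        best, obs = simulate_trial(true_rates)
        picks[best] += 1
        if obs[best] - obs[0] > go_margin:  # the team's stated go rule
            go_count += 1
    return go_count / n_sims, picks / n_sims

p_go, pick_rates = operating_characteristics(linear_scenario)
print(f"P(go to phase three) = {p_go:.2f}")
print(f"dose selection rates = {np.round(pick_rates, 2)}")
```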

Now, if you're a statistician, this seems... why? Why is Scott talking this way? This is so straightforward. I understand this. This is how statisticians think. We think about the sample space and the parameters, and we understand this trial is a random occurrence from a set of possible outcomes of trials, and simulation seems so natural to us.

Remember, almost everybody you're talking to thinks it's prediction. They're thinking it's something else. And a lot of people say, as we start to work on the design, oh, what's the chance we win this trial? That's not directly what we're addressing.

By the way, I'll come back to this, because you can use simulation tools to answer that question. But what we're doing is simulating the trial. We're recording outcomes of the trials based on different truths of the arms in the trials. The characteristics of the trial, like the enrollment rate, could matter to that if you're doing an adaptive design.

But what's very misunderstood, I think, among statisticians that don't live in this world, is that we don't pick a design, then simulate it and say, oh, we're done. You know, go off and simulate this and fill in the number in the protocol. That's not how we utilize this as a tool. We're able to simulate the trial under different designs, different randomization ratios, different numbers of arms, and we have a very rich set of outcomes. We use it to compare different designs.

Okay? Suppose this is the truth and I run design A or design B. I now have the ability to compare on very rich outcomes: what happens in the trial? Sometimes it's the probability that you meet statistical significance at the end of the trial; in a phase two trial, that's actually rarely something you're that interested in.

What's the probability I'm going to run phase three? What dose would I have selected based on that? How often do I pick the right dose in the trial, in that scenario? It's an incredibly rich set of scenarios.

I might be doing a basket trial in oncology of five different oncology types, and the question is, which one do I run in phase three? Do I go into head and neck cancer? Do I go into GI cancer? Can I take two or three of them? Can I take all five? And we can record, you know, if you'd run this design, here's the chance that you go forward in the following tumor types.

And we know what the right answer is. We can show them, based on this truth, what you would want to do if Mother Nature told you exactly the truth.

We can then simulate different trial designs. We're using this as a tool to compare designs, and I'm gonna call that in silico design. In this scenario, it's the ability to stress test the design and to simulate many different designs for comparing them.

So I've seen a number of papers where people write about clinical trial simulation, and I think they actually get it all wrong. They want to create a simulation plan before you start, and it makes no sense. Because what always happens when you start to compare these different designs, when you show what you get with the different starting designs and compare them, is that immediately somebody says, oh, I don't like this and I don't like that. And you introduce a new design that you didn't start with.
And that's really, really good because
you start to, the clinicians, the team,

the regulate, the regulatory people,
the design team you're working with.

Start to see the outcomes of the
trial, and they, they think, oh, I

know I want the design to do that.

I, I would never want it to
do that under that scenario.

And that's productive.

That's part of, in silico
design, you can't pre-plan that.

You don't even want to pre-plan that.

This ability to iterate the design and go through it is critically important to getting to a better design. And that's the goal of the clinical trial simulation process: getting the best design possible.

And so if somebody says, oh, we need to write up a clinical trial simulation plan, here's the seven designs we're gonna compare, here's the seed we're gonna use, and here's the scenarios: it's not productive. Because invariably, when you look at the results of those, immediately you say, oh, I want this component of that design. What happens if I change the randomization? Would response adaptive randomization (RAR) benefit this? Would this benefit it? What if we change when we do the interims? You want that, and that's what clinical trial simulation as a design tool is really about.

It's the process of getting to the final design. It's not a design dictated by somebody that says, I want to do this design with interims at this time, and I want to do this. Sure, you can calculate that, but that's not a design process. You're not vetting the design, you're just calculating. Now, to back up: clinical trial simulation is numerical integration. It's nothing other than numerical integration. It's Markov... sorry, it's Monte Carlo integration, which is numerical integration. Given the design, given these assumptions, what happens?

The beauty of it is that the "what happens" can be a really rich set of things that you normally can't characterize when you're making a power calculation.
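In its simplest form that's all a simulated power calculation is: Monte Carlo integration of an indicator function under an assumed truth. A sketch with invented numbers (two-arm trial, normal endpoint, one-sided 2.5% test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(delta, n_per_arm=200, sd=1.0, n_sims=20_000):
    """Monte Carlo integration: power is the expected value of the
    indicator 'trial significant', averaged over simulated trials."""
    ctrl = rng.normal(0.0, sd, size=(n_sims, n_per_arm))
    trt = rng.normal(delta, sd, size=(n_sims, n_per_arm))
    t, p = stats.ttest_ind(trt, ctrl, axis=1)
    wins = (p / 2 < 0.025) & (t > 0)  # one-sided 2.5% significance
    return wins.mean()

print(f"Estimated power: {power(delta=0.25):.2f}")
```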

So there are also statisticians that say, oh, you don't need to simulate. I don't think they're using it as a tool the same way we're using it, to vet different designs, to find optimal designs. That idea, I think, is misapplied. I'll come back to when that could potentially be a valuable thing to do, but it's very late in the process, when your design is done.

And we will talk about that
and how simulations are used.

It's a little bit of a different kind of beast.

So we're still in this in silico design process, and it's a very standard thing that we start this process with a set of candidate designs. Somebody might come to us and say: we're thinking about doing the following seamless phase two/three design. We don't know the timing of it, the decision criteria, how big phase two should be, how big phase three should be, but here's a possible design, and here's two other possible designs. So we have three skeleton designs.

Given that, we then simulate those skeleton designs, and this is the first thing you want to do in this case. And here is my warning to you. I've been simulating clinical trials for 25 years, and it's largely what we do at Berry: if you jump straight into operating characteristics, you're gonna lose a huge number of people. They're not gonna understand what the operating characteristics are for; they absolutely won't understand, and that's your fault. You need to bring them along to make sure they understand what you're doing, and it's really simple. Clinical trial simulation is really simple, but if you skip straight to operating characteristics, you're gonna lose them.

So what's the right first step? You have to show them example trials. All of the people you're working with understand trial results. They understand example trials. So show them the data at the interim. Show 'em raw data, show 'em spaghetti plots of patients, show 'em Kaplan-Meier plots. That's a snapshot of data, almost what a DSMB might see. And then show them: okay, here are the data. Here's the analysis we've talked about running on that data, a proportional odds model, a logistic regression, a Cox model. Here's the result of that at the interim; here's what it says about the relative efficacy. Here's what happens when we fit a dose response model to it. Here's the estimate that you get. Here's the raw data, and then here's the decision. In your seamless phase two/three trial, here's the dose you would pick according to the skeleton design. It shows them the data and all of that.

And then you show them the completion of the trial. You show them the final analysis, and you say: you know what, this trial, we're gonna record it as not meeting statistical significance at the end of the trial. We'll call that a loss. Here was the dose that it picked, here was the effect size at the end, and that one didn't win. And everybody will understand that single trial result.

By the way, another hint with that: don't tell them the truth that simulated that trial, because as soon as you tell them the truth, they're gonna want the trial to get to the right answer. You simulated a case where dose three was the best dose? Well, we just want to pick dose three. Well, in the real trial you're not gonna know that. So you want to show them this as though they don't know it.

They're sitting on a data safety monitoring board; this is what the algorithm learns. And if they say, well, what was the truth here? I say, well, I'm not gonna tell you that. I want you to see it for what the design is.

But then, when that design's over, you say: this trial results in statistical significance, here's the effect size. By the way, show 'em a positive trial first. If you show 'em a negative trial first, they sort of get a bad vibe from that. Show 'em a positive trial, then maybe show 'em an inconclusive trial, or a hard trial, then show 'em a clearly failed trial, and this is what happens.

Then, after you've shown them a couple, you say: okay, that first trial, here was the scenario that I simulated from, and now I'm gonna repeat that trial a thousand times. Here's the number of trials that met statistical significance at the end. Here's the proportion that picked dose three, like that example trial did; a 20% chance you picked dose four, a 10% chance of another. You can show it and they'll get it.

And you've walked them through it and
they'll understand the simulation.

It's really easy.

Clinical trial simulation is
easy if you present it right

and you walk them through it.

Now, as you start to iterate the design, you might be able to skip those steps, because they're hooked. They get it, and they know how to interpret the operating characteristics, which as statisticians we find very straightforward. But we think differently than everybody else, and you have to know that people don't think that way, so you have to sort of guide them through it.

When we do clinical trial simulations, we probably spend more time on the graphs and the presentation of them than on creating the simulations themselves. It's that important to get people to understand it, again for the goal of getting the best design at the end, so that they know what they're signing up for.

I have simulated trials where it's a fixed trial and there's a target product profile, so there's a delta such that if the drug is that good, we want the trial to win: an 80% powered trial, 400 patients, and they're okay with that. And I'll simulate that trial from the target product profile, from their TPP, that delta: that's a great drug, gonna be highly effective and benefit a lot of patients. And I simulate, and 20% of the trials fail, and they've said: wait, wait, wait a minute. If that drug's that good, 20% of the trials will fail? If you're a statistician, you say, well, that's 80% power for that effect. But sometimes they don't really understand what that means, at the level that a simulation shows them what it means.
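Reusing the hypothetical power() sketch from earlier: with a stand-in TPP effect chosen so the 400 patient trial is about 80% powered, the simulation makes that failure rate concrete:

```python
# Hypothetical TPP effect size; with 200 patients per arm this test is
# roughly 80% powered, so about one in five simulated trials still fails.
tpp_delta = 0.28
p_win = power(delta=tpp_delta, n_per_arm=200)
print(f"Win rate at the TPP: {p_win:.0%}; failure rate: {1 - p_win:.0%}")
```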

And it's not our job to laugh and say, you don't understand it. It's our fault. That's our science: to make sure they know what they're signing up for, and to make sure they understand that when you're running an 80% powered trial for that really good effect, 20% of trials are gonna fail.

Immediately they say: wait a minute, maybe we should look at a bigger sample size. Maybe I should be 95% powered. But then the sample size is so big. Well, let's do an adaptive sample size. It leads to them creating a better design.

That simulation will get them there, where if they just say to you, gimme an 80% powered sample size, you haven't helped them by saying it's 400. I'm not sure that they always understand what they're signing up for.

You can also then guide them through futility. You can walk them through futility: if we do a futility analysis at 200 patients, show me the data that we're gonna stop for. Within that, we can do this. Should we borrow historical data? What do the data look like? What do the effect sizes look like that win? A basket trial with multiple baskets, an enrichment design where we're gonna drop subsets: these are all complicated things that clinical trial simulation can make relatively easy.

So this is the in silico process. We show them the results of their first three skeleton designs, and rarely do they say, oh, design three is it, we're done. They rethink the design, the effects. I don't want it to do that. Does this assumption make sense? Can you change this?

And there's three new designs, and they're all closer to sort of where they want to be. And there's another iteration of new simulations. They want to add some scenarios: you know, we think this is more likely.

You generally have scenarios that are statistically interesting. Within that, you wanna show a range of effects, from zero, maybe even harm, up to big effects, bigger than the TPP, to show them a range; then show them interesting statistical scenarios. And then they'll have scenarios that they think are likely to be true, and they can simulate those. It leads to an iteration.

That leads to a new iteration, a new step of simulations, where they're getting closer and closer, and they're further and further understanding their design. This is the process, and it can't be scripted.

This isn't like an SAP, where you have to write out ahead of time everything you're gonna do. You don't want this to be scripted. You want to be able to go to new designs. You want to explore things you didn't think of, things the simulations brought out, interesting ideas from everybody on the team.

Every time I read a paper that says you should write down a simulation plan before you start the design process: I've never done that. I don't think it's a good thing, and I don't think it leads to better designs. Again, let's get to a better design.

Okay. Now, in terms of this iteration, eventually we get to a final design. And there we have a design we can write down. If you can simulate it, you can write it down: the analysis method, the interim plan, what success is, all of the characteristics of the design, you can write them down.

Now, that's the point where maybe
some simulation plan makes some sense.

Again, I think it's a
little odd at that point.

But here's where now we use
simulation to fully show the

characteristics of the design.

We've set up: what is the type one error across a range of null scenarios? What is the power across a range of scenarios, across doses, across subgroups, whatever it is? We are doing final simulations with a big number of simulated trials, and we are putting that in a report, and we then give it to regulators. We put it as an appendix to the SAP, an adaptive design report or a simulation report, whatever it becomes.

Now, that's the part that gets a little bit hard. Many times that report only shows the simulations for the final design. It can be confusing if, in that report, you show simulations for a design you're not running. It really can be confusing, and that's not the role of the simulations at that point. They're about operating characteristics at that point.

And if you're going to regulators, you can simulate the bias of the estimate from the analysis method at the end of the trial, for example, and all of these things to evaluate the design that's being submitted. Yes, you might be asked: okay, you're doing response adaptive randomization, can you demonstrate to us that that's a good thing to be doing? So you might have to have a section that shows what would happen if a different design were run without that, or without whatever piece of the trial is maybe innovative or different. In that case, you might show comparisons to justify the design you're doing, to convince somebody that it's a good design.

Now, what's hard about that is, I've been asked this question, or this comment has come up, when I've presented a design. The design has gone through this process, we come up with a final design, and I might give a 20 minute talk on what the design is and show simulations of it. And people have said, oh, you're just being complex to be complex. The first time somebody said that, I was upset; I took it as criticism and all that. But I get it. What's really hard is they haven't been through that design process, where everything you've done in the design you've done for a reason, and you've compared it to other designs.

You can't, in a 20 minute talk, get up and present 47 designs you're not running and why. You might be really interested in what's the value of RAR, what's the value of borrowing, and so you might need to present that to the audience. But that's hard, and that's a different sort of thing.

So clinical trial simulation's a huge tool there, but it's not in silico design. You have a single proposed design you're writing up in a report; you're providing simulations; you're showing the DSMB example trials: at interim one, two, and three, here's what you might see, here are the rules, here's how that works. It's a hugely valuable tool in many ways, but it really shifts from comparing designs to showing that single design's characteristics and behavior.

And I think a number of people write articles about clinical trial simulation that are all focused on that end game. But 90% of the time and effort and work, and really the skill of the design part, is this in silico design part, and it's very different. And I think it's misunderstood by people who don't live in this space of simulating to optimize a design.

Now, a couple of side aspects of this. One design I've talked about publicly multiple times, and it's kind of an interesting time because the trial is going to read out in several months (I'm sure we will do an In the Interim on it), is the ICECAP trial. It was a trial looking at hypothermia for the treatment of cardiac arrest.

An individual suffers cardiac arrest. They're revived; the heart is revived, resuscitated. They go into the emergency room. The thought is that putting somebody into a state of cold, hypothermia, prevents the negative cascade that happens in heat: the neurological damage from the lack of oxygen, which heat exaggerates. I'm a statistician, but that's my understanding of the mechanism of action of hypothermia post cardiac arrest.

We went into this and simulated a trial where it was three durations. The question was, how long do you keep them in that state? It's kind of like a dose response, but it's a duration response. Was it 12, 24, or 48 hours?

I showed them example trials, and in almost every one of them they wanted a different design. Boy, I wish we had patients at less than 12 hours. I wish we had patients at greater than 48. I wish we had them in between.

The design grew through that process with the design team to 10 durations: 6, 12, 18, 24, 30, 36, 42, all the way up to 72 hours. Ten different durations, with response adaptive randomization. And the duration response model became critical to driving where those patients go. It was a very restricted U-shape that couldn't wiggle; there's no way for it to be a wildly non-monotonic relationship. And they wanted the design to do that. That all came through by understanding it through simulation.
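To give the flavor of response adaptive randomization in the abstract, here is a generic Thompson-sampling-style sketch with a binary outcome and made-up arm rates; the actual ICECAP design, as described, drove allocation through a constrained duration-response model rather than independent arms:

```python
import numpy as np

rng = np.random.default_rng(11)

# Generic RAR sketch: allocate each new block of patients in proportion
# to the posterior probability that each arm is the best one.
true_rates = np.array([0.30, 0.42, 0.50, 0.44, 0.35])  # made-up arm truths
successes = np.zeros(5)
failures = np.zeros(5)

for block in range(20):                         # 20 blocks of 10 patients
    draws = rng.beta(1 + successes, 1 + failures, size=(1000, 5))
    p_best = np.bincount(draws.argmax(axis=1), minlength=5) / 1000
    arms = rng.choice(5, size=10, p=p_best)     # adaptive allocation
    for a in arms:
        outcome = rng.random() < true_rates[a]
        successes[a] += outcome
        failures[a] += 1 - outcome

print("patients allocated per arm:", (successes + failures).astype(int))
```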

That trial's about to read out. They've stopped enrolling, so that'll be an interesting trial to look at. But it was built entirely by simulation. That's the kind of thing, the richness of looking at the example trials, the analysis methods, the comparisons, that has that value. You couldn't have come up with that design by saying, hey, let's do the following with this model. It doesn't work that way.

Now, one of the issues is that if you want to go through this process, it can be slow to custom code it: you need to write code to do all the design variants, all these things to process the data, and all of that. And the design team has timelines; you can't say, okay, gimme six weeks and I'll come back with that. This moves as a periodic process, a week or two weeks to make progress. You don't need the final answer, but we need to make progress, which is why we created FACTS. FACTS is the Fixed and Adaptive Clinical Trial Simulator.

It allows you to do a wide range of design ideas, analysis ideas, types of data, and simulations within a user interface. You can have clinical trial simulation results within an hour. You can vary the design; you can do that. It was set up to do that.

Now, the product is kind of interesting, because there are other simulation products out there, but they're almost all driven so that you go into the product and say, oh, I want this design, simulate that design. They don't allow you to vary it in ways; they don't allow you to change it like this. FACTS doesn't have any kind of named designs; you know, "I want so-and-so's design number seven." It doesn't do that. Within it, you create all of the parts, which allows you to do this in silico process.

So it's kind of a very different design, and it's grown out of 25 years of doing that in silico process. So when people jump into the product, it can look funny at first. And if you don't do that in silico process, you might not find FACTS valuable at all.

Within that setting, if you want to do this process and compare designs and contrast them and look at alternatives and vet them, it's a way to do it very, very fast. Custom coding is great; we as statisticians love to custom code. But this is a way to do that, and we have a number of people who use it that way. So, as people ask about that: it's a tool for getting to the right design. That's really its role.

Now, it could also be used by regulators. If somebody submits a design, they could go in and simulate it and verify the operating characteristics and all of that within it. One of the really cool things in simulations I've found is counterfactual simulation. What do I mean by that?

You could simulate a phase three trial that always goes to 800 patients and see what would happen if you run a fixed 800 patient trial. Then you could add in an interim for early success and futility. You can go in at 400 patients and say, we would stop for futility here, or we would stop for success here. And you can ask: what would've happened if you didn't stop? We stopped for futility, and it would've failed in the end; that's a good thing, because we saved 400 patients and reached the same conclusion. Or: we stopped for futility, but if you'd let that trial go, it would've won. And you can see that. You can't see that any other way, but you can see it through clinical trial simulation.
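A sketch of that counterfactual bookkeeping, with every number invented (normal endpoint, simple z-test, an arbitrary futility cut on the interim z-score): simulate the complete 800 patient dataset, then replay a 400 patient interim against it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def counterfactual(delta, n_sims=20_000, futility_z=0.5):
    """Simulate complete 800-patient trials, then replay an interim look."""
    per_arm = 400                                  # 800 patients total
    ctrl = rng.normal(0.0, 1.0, size=(n_sims, per_arm))
    trt = rng.normal(delta, 1.0, size=(n_sims, per_arm))

    def z_stat(n):                                 # z on the first n per arm
        diff = trt[:, :n].mean(axis=1) - ctrl[:, :n].mean(axis=1)
        return diff / np.sqrt(2.0 / n)

    stop = z_stat(200) < futility_z                # interim at 400 patients
    win = z_stat(400) > stats.norm.ppf(0.975)      # full-data final analysis

    good_stops = (stop & ~win).mean()              # stopped, would have lost
    bad_stops = (stop & win).mean()                # stopped, would have won
    return good_stops, bad_stops

good, bad = counterfactual(delta=0.10)
print(f"stopped and would have failed anyway: {good:.1%}")
print(f"stopped but would have won: {bad:.1%}")
```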

Then you can vary the futility rule. You can see, if you add futility, how much you lower your power, how your sample size changes, the effect sizes, and all of this. So counterfactual simulation is something that's incredibly powerful in simulation.

When you're comparing two designs: what would've happened on the exact same trial with a different analysis method? If I assume a linear slope, or if I do an MMRM, you could take a single data set and compare the analysis methods: how many win over here but lose over here, or lose over here but win over here, for example.
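That same-dataset comparison can be sketched the same way: run two analyses over identical simulated data and cross-tabulate. Here two off-the-shelf tests stand in for the linear-slope-versus-MMRM comparison, which would need a longitudinal data model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Paired comparison on identical datasets: a t-test and a rank-based test
# standing in for two competing analysis methods.
wins_a, wins_b = [], []
for _ in range(2_000):
    ctrl = rng.normal(0.0, 1.0, size=100)
    trt = rng.normal(0.25, 1.0, size=100)
    wins_a.append(stats.ttest_ind(trt, ctrl).pvalue < 0.05)
    wins_b.append(stats.mannwhitneyu(trt, ctrl).pvalue < 0.05)

wins_a, wins_b = np.array(wins_a), np.array(wins_b)
print("method A wins where B loses:", int((wins_a & ~wins_b).sum()))
print("method B wins where A loses:", int((wins_b & ~wins_a).sum()))
```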

So these counterfactual simulations become a hugely valuable thing in trial simulation, and the approach is quite different. So this is my view of clinical trial simulation, the role of it in creating designs, the sort of space I live in. A lot of people ask about this, and I think there's a misunderstanding of it. So this is a whole different viewpoint on it, in the design.

So, I hope you enjoyed this: from Scottie Scheffler to Tiger Woods, to Robo Tiger, to in silico design, where we are here in the interim. Unless this has all been simulated, and we're not here in the interim. Hmm. Food for thought, until next time, where we're either simulated or not, in the interim.