In the Interim...

In this episode of "In the Interim…", host Dr. Scott Berry undertakes a detailed, methodical critique of ICH-E20 draft guidance language as applied to adaptive clinical trial design. Focusing on an innocuous but corruptible paragraph in Section 3.1, Scott scrutinizes the logic behind regulatory reluctance to appreciate multiple or complex adaptations in confirmatory trials. Drawing on extensive experience, he highlights how such restrictive interpretations do not reflect practical development realities, instead setting up “false choices” where alternative designs desired by regulators are infeasible. Through operational scenarios—including the SEPSIS-ACT trial, an enrichment design, and sample size re-estimation examples—Scott illustrates the empirical benefits of seamless and multi-adaptive trials for sponsors, patients, and regulators. Technical discussion addresses misconceptions about complexity and bias and stresses the value of presenting realistic alternatives when engaging with regulatory authorities. The episode ultimately encourages a more nuanced dialogue to advance efficient and scientifically robust clinical trials.

Key Highlights
  • Discussion of ICH-E20 section 3.1 guidance and its operational impact on adaptive designs.
  • Dissection of “false choice” dilemmas in regulatory interactions, referencing real adaptive trial submissions.
  • Case-based examples: SEPSIS-ACT, enrichment, and sample size adaptation trials.
  • Highlighting myths regarding bias and operational burden from multiple interim analyses.
  • Emphasis on practical strategies for more effective regulatory communication about adaptive trials and realistic alternatives.
For more, visit us at https://www.berryconsultants.com/

Creators and Guests

Host
Scott Berry
President and a Senior Statistical Scientist at Berry Consultants, LLC

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott: All right.

Welcome everybody.

Back to in the interim.

I'm Scott Berry.

I'm your host, and I am a
statistician, as, as many of you

know, uh, Bayesian statistician.

And here on, in the
interim, we explore things.

Around clinical trial science,
medical decision making, uh, data

analytics, uh, this is our space.

We, we have a new studio here.

We're we're, our technology has
changed a little bit, and for those

of you who consume this via video,
looks like a bit of a different place.

Um, I am, I am recording
from a different place here.

Uh, as, as you'll see throughout
multiple different places, I was,

I was traveling to this place as
I introduced the topic for today.

I was traveling here and I was, I was
connecting through an airport and I had,

uh, it was a bit of a long day and I,
I was looking to get something to eat.

And I went by a, uh, hamburger
place with, with quite a line, and

I'm deciding do I want to go there?

Do I wanna wait in the line?

And I decide, no, I don't want that choice.

Because I believe that at one of the places next door, with a really short line, there's a really good chicken bowl. By the way, I'm a bit of a fan of Cava, if you have a Cava.

So I think that's what I'm getting.

And so I say no to the burger place, and I go to get my chicken bowl.

And so I get this, and it turns out
it wasn't at all what I was expecting.

And I got something very
different than that.

And it turned out to be almost
kind of a chicken salad type thing.

And it was interesting because
then I got that and I'm, I'm

walking by the burger place.

You know, I really wish I would've gotten the burger in this setting.

It's the trials and tribulations of travel.

Well, it introduces a bit of my topic today. The title of today's discussion is Regulators, False Choices, and ICH E20.

And so I'm gonna try to tie this all together and explain what I mean by this.

So I'll start by talking about ICH E20. Kert and I did a discussion of this; you can go back to a previous episode on ICH E20. It's a draft guidance document that discusses adaptive trials.

Now, most of this is good. Most of ICH E20 is good in the sense that its very existence establishes that adaptive designs can be confirmatory trials.

And by the way, backing up, ICH E20 is very much a harmonization. What goes in there is largely harmonized, essentially agreed across many, many global regulators. So at some level, this is the floor of things. Now, different regions are gonna have different behaviors on things, but to some extent it's a floor.

So from that perspective, adaptive designs are confirmatory. Sample size adaptations are confirmatory. Enrichment trials are confirmatory. Seamless trials are confirmatory. Response-adaptive allocation is confirmatory. And it has a small section on Bayesian designs being confirmatory. All of these things are in there.

Now, as you can hear in that earlier episode, part of this is written from a pretty negative viewpoint. I'll say that a lot of it is very cautionary. It's not inviting in that perspective, but its existence is still a positive thing.

I want to jump into what I'll call the paragraph, because this paragraph has been used, almost weaponized.

Now I'm gonna go into this.

I'm gonna talk about regulators, and I do want to say this upfront: I think regulators do an amazing job, and their perspective is different. I've never been a regulator. I've been proposing designs to regulators for more than 25 years, and I think they do a tremendous job.

So part of this may border on what you consider criticism, and I'm gonna try to put myself in the mind of a regulator, to think about that perspective of what happens when you're on that side, evaluating designs.

But from that side and perspective, there's a particular paragraph in ICH E20 that I think has been weaponized, and I think the paragraph is wrong.

So I thought that this would make a good podcast, to be a bit controversial here.

So first, in ICH-E20 there's a section that talks about adequacy within the development program, Section 3.1.

This is not the paragraph yet, but it has a sentence that says: "A stepwise program with careful analysis and evaluation of completed exploratory trials helps inform the goals and design choices for subsequent confirmatory trials, and ultimately generate data necessary for regulatory decision making."

Somewhat of an innocuous sentence, but it lays out this notion that a stepwise program, to some extent, is a bit of the gold standard.

That sentence isn't a problem, but it sort of sets things up so that almost what we're doing now is right, and things that are different are pushing against that.

But that moves into, in the same section, and this is line 101 in the draft guidance, the paragraph. The start of it says: "The number and complexity of adaptations at the confirmatory stage should generally be limited."

So that's the first sentence,
and I'm gonna push back on that.

And then it continues: increasing either of them, so the number and the complexity, "as a replacement for a sequence of multiple trials can impair the ability to answer important clinical questions and limit the opportunity to carefully reflect on prior results to design a development program most effectively."

So that's the paragraph. It does continue on.

I think those two sentences are what we'll spend a good bit of this episode talking about. It says: "Before planning a confirmatory trial with multiple adaptations, sponsors should discuss whether additional exploratory trials are necessary to investigate the questions addressed by the proposed adaptations."

So this paragraph has been somewhat weaponized, and I know that sounds pejorative. It's this idea that the number of adaptations, the number of interims, and the complexity are bad, and that they're a replacement for a sequence of multiple trials. Now remember, going back, to some extent that sequence is set up as the right answer here.

So a couple things about this. By the way, this has happened: designs have been presented to regulators, they've come back and pushed back on the design, some form of no, and they've cited this draft paragraph as the reason. It's too complex. Do a separate trial and then a simple phase three confirmatory trial.

So within the guidance document on adaptive trials, the one that says all of this is confirmatory, there's a paragraph that really limits much of it, and that's something I'm gonna push back on. I think it's largely wrong.

So first, there's this general thought that more adaptations are somehow bad, or somehow not good. I think this is largely a myth. I don't believe it creates more bias.

Yes, it's operationally more complex, but marginally. To do seven interims compared to six is marginally very little. Two compared to one is marginally very little. That first one, setting up the process of how this is done, is an operational lift, but it's minor relative to the general operational burden of running a clinical trial.

I think it's largely somewhat of a myth that multiple adaptations in some way increase potential bias. To some extent, I think they almost decrease it compared to a single adaptation in a trial, a single futility or a single success analysis.

And much of these myths come from the traditional setting where the only adaptation was stopping for success, yes or no. Trials have now moved on greatly, as many of the designs in this guidance show.

So when you do one of these analyses and you continue on, you don't stop and you don't jump to futility, there's something learned about what's going on in the trial. Depending on when it happens, it's relatively minor, but people can perceive there to be potential biases when you do one. A single interim is the biggest potential there.

If you did a hundred in a trial, each one of these is essentially ignored; it's like they're being done daily. And we've done trials like that. I-SPY 2 used to do interims daily; it eventually went to weekly. And I think, by the way, trials are more likely to go that way. It essentially removes these types of concerns.

So I want to push back on that, but that's not even necessarily the largest part of it. It's that overall this sets up, and this is where I'm gonna get into this idea of false choices, the notion that adaptations are shortcuts. That when the alternative is doing multiple exploratory trials, somehow adaptations are always shortcuts.

And by the way, when you read ICH E20, that's an overriding thought in the guidance document. Much of it is set up in that way. And I think it's a false choice, and false choices set that up.

And so I want to walk through this a little bit, with examples, and try to explain what I mean by that.

So I'll start with an example trial. This trial was conducted several years ago; the trial's over. The name of the trial is SEPSIS-ACT, and it was a trial run by Ferring Pharmaceuticals.

It was a treatment for sepsis. They had a treatment, a form of a long-acting vasopressor, and they had three doses to explore. They wanted to explore the effective dose, and they wanted to explore it on clinical outcomes, not just biomarkers. So the question: is a long-acting vasopressor going to be beneficial on clinical outcomes for patients with sepsis?

So they have three doses and placebo, and the question is, what kind of trial do they run? They want to explore dose, and then they want to run a phase three trial.

Well, when we started looking at this, initially they thought, okay, maybe we do a phase two trial, separate sequential trials, and explore three doses and placebo to get some idea of the effect of dose on clinical outcomes like length of organ support and mortality. The endpoint in this trial ends up being a composite of mortality and, for those that survive, length of organ support: an ordinal endpoint.

For that endpoint, a hundred patients, or 200 patients, was insufficient to select dose. It's too small. As you start to get into 400, 500, 600 patient trials, you're almost starting to get into phase three territory. So a realistic phase two trial to understand clinical outcomes starts to get to 5, 6, 700 patients.

Well, now you're getting into mini phase three trials. So they explored this, and here's where they ended up. I'll describe the SEPSIS-ACT trial.

When a hundred patients are enrolled with equal randomization to the three doses and placebo, response-adaptive randomization starts, but only on the active doses. The randomization to control is fixed.

And by the way, it's fixed at a third, so 33%, and the remaining 67% is split across the doses. So it wasn't equal across all arms, but the doses are equal among themselves, and placebo is 33%.

The response-adaptive randomization runs over these three doses, and the design also had the possibility of adding a fourth dose if the dose response looked increasing.

So a neat little design: at a hundred patients, these RAR updates start, and they happen monthly. Every month RAR is reset. So now we're starting to do a good number of interims in stage one of the trial.

Once 200 patients are enrolled, the trial could go to stage two.

That would be selecting a single dose and moving to one-to-one randomization. The final sample size would be 1800 patients, including stage one, and that would form a phase three trial. Interims keep happening monthly, and the dose-ranging portion of the trial could run from 200 patients anywhere up to 800.

At 800, there's a forced decision. Now you have to go to stage two or you have to stop, and if it stops it's a large phase two trial, but largely you'd be stopping for no-go. I won't call it futility, but it's a no-go to phase three, and it becomes a large phase two trial.

If things are going very well and you've identified a dose, it could go to stage two earlier. And 1800 is the max, the final sample size; there's no adaptive sample size if this goes to phase three.

The trial ended up running more than 20 interim analyses during the course of it. And this is considered a complex design, with multiple interims.
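To make the monthly RAR updates concrete, here's a minimal sketch in Python. This is an illustration, not the SEPSIS-ACT algorithm: it assumes a binary endpoint with Beta-Bernoulli posteriors in place of the trial's ordinal composite, keeps control fixed at 33% as described, and splits the remaining 67% across the active doses in proportion to each dose's posterior probability of being the best arm. The interim counts below are hypothetical.

```python
import random

def rar_allocation(successes, failures, n_draws=2000, control_frac=1/3):
    """One monthly RAR update (illustrative). Arm 0 is control and stays
    fixed at control_frac; the remaining probability is spread over the
    active doses in proportion to each dose's posterior probability of
    being the best active arm, estimated by Beta-Bernoulli posterior draws."""
    k = len(successes)
    wins = [0] * k
    for _ in range(n_draws):
        draws = [random.betavariate(1 + successes[i], 1 + failures[i])
                 for i in range(1, k)]  # active doses only
        wins[1 + draws.index(max(draws))] += 1
    p_best = [w / n_draws for w in wins[1:]]
    return [control_frac] + [(1 - control_frac) * p for p in p_best]

random.seed(1)
# Hypothetical interim data: control plus three doses (successes, failures)
succ = [30, 28, 35, 40]
fail = [70, 72, 65, 60]
probs = rar_allocation(succ, fail)
print([round(p, 3) for p in probs])
```

With each monthly reset you would recompute these probabilities on the accumulated data, which is why the stage-one portion of such a trial racks up a good number of interims with very little marginal operational cost.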

So now I'll claim that if this design were submitted in the presence of that paragraph, it would be very easy for a regulator to look at that design and say: that's too complex, too many interims. Separate sequential trials are better, as set up in the guidance. Go run an 800 patient phase two trial. Then, when you've selected dose and made your decisions, go run an 1800 patient phase three trial.

So as a regulator, maybe that's what you're evaluating when presented this seamless two/three trial with a good number of interims and a fair amount of complexity.

And again, I'll back up and note that these interims aren't causing more bias. They're not somehow risking the trial. It's not a bad thing that these are done; I think in some ways they de-risk all of it.

But when saying, oh no, you can't do that trial, they think the alternative

is an 800 patient phase two, followed by an 1800 patient phase three.

The problem is, that's not the alternative. That's the false choice: this other design sits out there as the ideal, and you say, oh no, I don't want A, I'm gonna get this other thing. But you're not getting that.

So you say no to design A, and now you've got a paragraph that somewhat weaponizes a reason to say no to it, hoping you're going to get design B, 800 plus 1800. But you don't get B; you get design C.

What do you get in that circumstance? Well, you either get that they don't explore dose and go straight to phase three, or a small 200 patient trial that actually does a really poor job at dose finding, a go/no-go, followed by the 1800 patient phase three.

So the design you want, 2,600 patients in the separate trials, is never gonna be run. It's a false choice.

Now, coming back to this idea: within this trial, is this adaptive design a shortcut? Evaluated against 800 and then 1800, it is a shortcut in that setting. It's combining the 800 with the 1800. Now, we can control type I error. We can understand all of the characteristics of this. We can make it a confirmatory trial. But it is using 1800 patients rather than 2,600. It's efficient from that perspective.

So compared to 800 and 1800 you could claim it's a shortcut, but that's a design that's never going to be run. What would be run is 200 plus 1800, and the 1800 patient seamless trial with dose ranging up to 800 is a better design. It's a better design for regulators. It's a better design for patients. It's a better design for sponsors. Everything about it is better.

And that's the value of adaptive designs here. Efficiency isn't a shortcut. Efficiency is the ability to answer questions that aren't going to be answered separately.

Now, go back to episode 17, by the way, where I talk about a paper with Lindsey Berry, Joe Marion, and Kert Viele. If you have 1800 patients to use as a sponsor, and you have three doses to explore, a seamless two/three trial is going to dominate, in terms of getting the right dose and in terms of power, any other sequential use of those 1800 patients. You could break it up into 500 and 1300 and compare that to the seamless trial, and that paper, Berry et al., shows that the seamless design is the optimal way: it improves power, dose selection, everything about it is better. Better than separate trials.
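The intuition behind that dominance can be sketched with a small Monte Carlo. This is a hedged illustration, not the Berry et al. analysis: it assumes a binary endpoint with hypothetical response rates, a naive pooled z-test, and simple round numbers (a 600 patient selection stage, then 600 per arm). A real seamless confirmatory design would control type I error with a combination test or similar; the point here is only the power gained by reusing the stage-one patients instead of starting a fresh phase three after the same dose selection.

```python
import math
import random

def binom(n, p):
    # simple Bernoulli-sum binomial draw (stdlib only)
    return sum(random.random() < p for _ in range(n))

def ztest(x1, n1, x0, n0):
    # one-sided two-proportion z-statistic
    p1, p0 = x1 / n1, x0 / n0
    pbar = (x1 + x0) / (n1 + n0)
    se = math.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n0))
    return (p1 - p0) / se if se > 0 else 0.0

def simulate(n_sims=600, p_ctrl=0.30, p_doses=(0.32, 0.36, 0.40)):
    """Compare two uses of the same patients (hypothetical rates):
    'seamless' selects the best dose after 150/arm and pools those
    patients into the final analysis; 'separate' selects on the same
    stage-one data but runs a fresh 600-per-arm phase three."""
    z_crit = 1.96  # approximate one-sided 2.5% cut, ignoring multiplicity
    wins_seamless = wins_separate = 0
    for _ in range(n_sims):
        s1_ctrl = binom(150, p_ctrl)
        s1_dose = [binom(150, p) for p in p_doses]
        best = max(range(3), key=lambda i: s1_dose[i])
        p_sel = p_doses[best]
        # stage 2 / phase 3: 600 fresh patients per arm
        s2_ctrl, s2_sel = binom(600, p_ctrl), binom(600, p_sel)
        # seamless: pool stage 1 + stage 2 on the two retained arms
        if ztest(s1_dose[best] + s2_sel, 750, s1_ctrl + s2_ctrl, 750) > z_crit:
            wins_seamless += 1
        # separate: fresh phase three only, same selected dose
        f_ctrl, f_sel = binom(600, p_ctrl), binom(600, p_sel)
        if ztest(f_sel, 600, f_ctrl, 600) > z_crit:
            wins_separate += 1
    return wins_seamless / n_sims, wins_separate / n_sims

random.seed(7)
pow_seamless, pow_separate = simulate()
print(f"seamless: {pow_seamless:.2f}  separate: {pow_separate:.2f}")
```

Both arms of the comparison make the same dose selection on the same data; the seamless version simply analyzes more patients on the retained arms at the end, which is where the power advantage comes from.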

So that's the part of this that I think is a false choice. No, I don't want A, I want B. But you don't get B, you get C.

It happened to me traveling yesterday. I said no to the hamburger thinking I was getting something else. What I got was not that. I would've gotten the hamburger had I had the choice. The same thing happens from the regulator perspective here.

Now, what does this mean? Interestingly, that trial design was run. It was given an SPA, a Special Protocol Assessment, by the FDA. It was approved; they said yes.

There were various interactions, and I think the interactions really were about what would be done. And by the way, this did go to multiple regulators, and the question was asked: why don't you do a 200 patient phase two trial and then a phase three trial? We were able to present the different operating characteristics of this, and I think with the ability to see the range of potential possibilities, this is actually a pretty good design for all parties involved. And it's not a shortcut.

Adaptive designs are not shortcuts. Efficiency is not a shortcut. It depends on what you think the possible alternative designs are.

Okay.

Interestingly, in that trial where these interims were run, I was actually unblinded during the course of the trial. Had this trial been a 200 patient phase two, they would've gone to phase three; they would've run an 1800 patient phase three trial. The data at that time were quite positive.

But by the time 800 rolled around, it wasn't positive, and it stopped for no-go. You can look up the results, which have been reported: the trial was a no-go. It didn't go to phase three. It ended up using approximately 800 patients, and I think that result was quite good.

With a 200 patient phase two, they would've conducted a very large, expensive phase three trial, with CMC and the expansion that goes with it, which I think would've been the wrong choice. Now, that's one particular outcome of the trial, but we simulated all the possibilities, and we know that the probability of these things happening from a 200 patient trial is quite different from the 800. That was part of how this trial was built.

So the idea is that a lot of times a design is presented to regulators and they're evaluating that single design. And when you're evaluating that, it's always: well, compared to what? As somebody who for 25-plus years has been submitting designs to regulators, a lot of times we submit that single design.

And yes, there's a line I use quite frequently, I'm sure I've used it on another podcast: when somebody asks me, as a statistician, hey, how's your wife? The answer is, well, compared to what? Now, my wife, by the way, listens to all these podcasts, and she's a saint, as you can imagine, putting up with this.

But that's the same sort of thing. It's: compared to what? If you're in the mindset of, okay, I've been presented this design, and boy, I'd rather have them do separate trials, big trials, if that's my comparison, I might say no to A because I think I'm getting B. But B is not possible, it's not gonna be run, and you get C.

So part of this is: how do we help this process? If I were a regulator, I might very well do the same thing, so I don't want this to be a criticism of regulators. They sit in a different chair, and they're evaluating a single design, a single thing. How is it? Well, compared to what? And I might think, oh, you should do this other adaptive design sort of thing. So I push back on A, but I don't get a better adaptive design. Maybe I get a design I think is worse than that one.

So that's a challenge in this interaction. You usually throw out a single design and say, hey, how about this design?

A recent example of this was another seamless two/three trial, in a relatively rare disease by the way, exploring two doses: picking a dose, moving to one-to-one randomization, and going to a portion of the trial with adaptive sample size, three interims to select sample size once it moves to one-to-one. That's the trial design, and it was submitted to regulators.

Now, there was some pushback on that. And again, that paragraph could be used to say: aha, you're doing dose selection, you're doing several interims, that's too complex to be confirmatory. Go run a separate phase two and then run phase three. And this was actually the advice given by regulators, given by the FDA.

When this comes back and you're working with the sponsor, you say, okay, what are the possibilities?

And by the way, part of the reason for the pushback is that the endpoint in this trial is a relatively long-term endpoint, so we're making dose selection based on early predictive markers. Those markers are not the confirmatory endpoint at the end, which is the agreed longer-term function and survival endpoint. Because of enrollment and because of the scenario, there will be a little bit of long-term endpoint data at the selection, but largely it's an earlier endpoint. The FDA was concerned about the predictability of that, and they would rather have a separate phase two that goes all the way out to the long-term clinical outcome: make your decision, then run phase three. And that was the advice given.

So now the regulator gets this design A and thinks, oh, that other design would be better. And you can always create a superset design of an adaptive design that is better, but here that would be a larger, longer phase two plus a larger phase three. When going back to the sponsor and asking, okay, is that a possibility? It's impossible. They said the company would fold. That trial can't be run.

And that's where this falls apart. For any adaptive design, you can create a design that's a superset of it and say, oh, you should do that. But it's not a possible choice. You can't do that.

So saying no to A because I want B: it's a false choice. So if the regulator says no, you can't do design A, what gets run is a phase three trial without dose selection. And that's what the sponsor said: if we can't run the trial that selects a dose based on the earlier marker, we're just gonna pick a dose and run phase three. What you get back is not why you said

no to A. So all of a sudden, suppose the regulators understood that the choice was: design A, which at least selects dose based on an earlier marker and then moves into the seamless phase 2/3 confirmatory part, or a phase three where they just pick the dose with no data. I think they'd pick design A, and they'd live with picking dose on that earlier endpoint. But when evaluating it, in their mind they're thinking, oh, I think you should do design B. But that's not a choice.

Now, this is a bit of a challenge for us. How do we work with that? How do we present this in a way that's consumable and all of that? I'll come back to that a little bit.

What happened here, by the way, is a great story. In the in-person meeting, the sponsor shared that we can't run that design; we can't run separate trials. It would not be run. And the agency became a partner in this development: where do we go, what can we do under this setting? And largely, a slightly adapted form of the design, adapted in a good way, is what came out of that.

So it was a beautiful interaction, and again, I think the FDA did a tremendous job in being partners in creating better care for patients. It was great, but it shows this hard aspect of evaluating a single design in isolation.

Another example of this, and I'll do this relatively quickly, was an enrichment trial. It started off as a one-to-one randomized phase three trial, but there was a concern about the right patient population, and some risk that we might be enrolling patients who don't benefit. There's always this risk of enrolling people on the healthy side, or the less healthy side, where you can't really modify the effect in those patients: ceiling and floor effects.

So it starts off enrolling a rather large set, but could restrict enrollment, do enrichment, in the trial. And to do enrichment well, by the way, when you simulate these trials, you have to do it relatively early. If you wait too long to do enrichment, your trial's largely enrolled and it has little benefit. So you need to do it relatively early.

So the trial had three interims for potential enrichment, and then had three interims for sample size. So all of a sudden, this is a six-interim phase three trial.

The response literally referenced the paragraph that I gave you: this is too complicated. It doomed the trial by saying that, because of the complexity and the adaptations, the trial is exploratory. And that's death; it means it's not a confirmatory trial.

By the way, a different regulatory region said: yes, the design's good, we like it. Which is a challenge here. But I think the regulator that said no, this is exploratory, and used that paragraph as a weapon, thought they'd get a different design: run a trial to understand the patient population, and then run phase three. But that trial's never going to happen. It's unaffordable in time and cost; those large phase twos don't happen.

What would happen instead is they guess at the patient population, don't actually explore it, and run a phase three at risk. Or they ignore the region that said no to the trial. Neither of those is a better option than trial A, the one they actually got.

And so that's a huge aspect here, and such a challenge in these circumstances of being a regulator, because maybe you don't know what the alternatives are. If you reject the null, what's the alternative? If you reject a design, what am I gonna get? Maybe I would go back and actually really like the one that was already submitted.

And so, several parts of this. Again, take that design: call it a thousand patients in this enrichment phase three trial. What they'd rather have is a thousand patient exploratory trial to find the enrichment, or, since the enrichment can only happen up through 400 patients, run a separate 400 and then run a thousand patient phase three. But that's never gonna be run. That trial is, sorry, not more efficient, but more powerful than the seamless thousand patient trial. But that's not a choice.

And so this design does better than all the realistic alternatives, and the adaptations allow a better trial to be run for everybody.

Another very, very simple example, and a very common one. This isn't a real trial, but it's reflective of what happens quite a bit; the other ones were real examples.

A sponsor's interested in running a phase three trial: one-to-one randomized, double blind, all the nice things. And they say, okay, for the effect Delta that they think their drug has, they get 80 or 90 (it doesn't matter) percent power with 180 patients. Then you show them: a smaller effect than Delta, which is still clinically meaningful, might only have 50 or 60% power with your 180.

But they don't wanna run a 300 patient trial, because they think they're better than that and they think that's a waste of resources. So given the choice of a single fixed trial, they're gonna run the 180 patient trial. Now the risk is that there's a clinically meaningful drug that doesn't hit success with that.

So you show them: what if we do several adaptations for sample size? Suppose we do interims at 150, so if the drug's even more effective we get out sooner, and then at 200, 250, or 300, with appropriate futility rules and success looks. All of a sudden this design performs really, really well. If their Delta is true, they still get out with something that on average is 180 patients or better; they stop at 150 or 200 with high probability, and that's really good. And if the effect is clinically meaningful but less than that Delta, they still have a good chance of success at 200, 250, or 300.
So a really nice design.
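You can see the shape of this trade-off in a small simulation. This is a hedged sketch with made-up numbers, not a calibrated design: normal outcomes with unit variance, hypothetical effect sizes (an optimistic Delta of 0.45 and a smaller-but-meaningful 0.30), a constant z > 2.1 success boundary at every look standing in for a properly calibrated alpha-spending boundary, and futility if z < 0 at an interim.

```python
import math
import random

def run_adaptive(delta, looks=(75, 100, 125, 150), z_success=2.1):
    """One simulated two-arm trial with sample-size adaptation: looks at
    150/200/250/300 total patients (per-arm counts in `looks`), stopping
    early for success at z > z_success or for futility at z < 0. The
    constant boundary is illustrative, not an exact alpha-spending rule."""
    sum_t = sum_c = 0.0
    n_prev = 0
    for i, n in enumerate(looks):
        for _ in range(n - n_prev):
            sum_t += random.gauss(delta, 1.0)
            sum_c += random.gauss(0.0, 1.0)
        n_prev = n
        z = (sum_t / n - sum_c / n) / math.sqrt(2.0 / n)
        last = (i == len(looks) - 1)
        if z > z_success:
            return True, 2 * n   # success, total N used
        if not last and z < 0.0:
            return False, 2 * n  # futility no-go
    return False, 2 * looks[-1]

def power_and_n(delta, n_sims=2000):
    wins, total_n = 0, 0
    for _ in range(n_sims):
        win, n = run_adaptive(delta)
        wins += win
        total_n += n
    return wins / n_sims, total_n / n_sims

def fixed_power(delta, n_per_arm=90, n_sims=2000):
    # the 180 patient fixed design at a one-sided 2.5% cut
    wins = 0
    for _ in range(n_sims):
        mt = sum(random.gauss(delta, 1.0) for _ in range(n_per_arm)) / n_per_arm
        mc = sum(random.gauss(0.0, 1.0) for _ in range(n_per_arm)) / n_per_arm
        wins += (mt - mc) / math.sqrt(2.0 / n_per_arm) > 1.96
    return wins / n_sims

random.seed(11)
big, small = 0.45, 0.30  # hypothetical effect sizes
p_adapt_big, avg_n_big = power_and_n(big)
p_adapt_small, avg_n_small = power_and_n(small)
f_big, f_small = fixed_power(big), fixed_power(small)
print(f"Delta={big}: adaptive power {p_adapt_big:.2f}, mean N {avg_n_big:.0f}; fixed-180 power {f_big:.2f}")
print(f"Delta={small}: adaptive power {p_adapt_small:.2f}, mean N {avg_n_small:.0f}; fixed-180 power {f_small:.2f}")
```

The pattern to look for: under the optimistic Delta the adaptive design's average sample size stays near the 180 the sponsor wanted, while under the smaller effect it spends more patients and recovers much of the power the fixed 180 patient trial would lose.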

Now, if somebody says, that's too complex, too many interims, they might think that because they believe they're getting a 300 patient trial. I say no to the adaptive design because they'll run the 300 patient trial. But the 300 doesn't get run. The 180 patient trial gets run.

So that's the challenge of this. This is the challenge of a regulator, and it's not necessarily their fault, in thinking about this alternative space. If I were a regulator, I might have a different alternative set of designs that I think could be run. And so I say, oh, you should do the following, but those are potentially never gonna be run, and I get something back, as a reaction, that's worse than the original one.

And these are the challenges of the way this is done. So I think we need to communicate better, which is hard, by the way, because you can't say, if you don't take A, we're gonna do C, in a way that's almost blackmail-ish. You know, boy, if you don't say yes to our design, you're gonna get this. That's really not the way we work. Traditionally, when we submit a design like that, we work really hard to explain that single design, what you're gonna get as a regulator.

Now, we don't spend a lot of time comparing it to a bunch of other designs. That can actually be confusing to somebody receiving it: wait, what's the design again? But I think we need to do a better job of that, of conveying the breadth of the designs that could be run, not in an aggressive way, and showing that this is a better design, that a seamless two/three trial design is better than a lot of the alternatives that could be run in that scenario.

And so this is really our challenge as clinical trialists: to be able to do this in this weird interaction we have with regulators. Tied to it, as this draft guidance ICH E20 moves forward, I'm hoping a bit of that paragraph can be massaged into less of a sharp weapon, for moving forward in clinical trials.

All right.

I appreciate you joining me today in my new digs, and yes, I survived the travel and all of that. I appreciate you joining us.

Until next time, we'll
be here in the interim.