In the Interim...

In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele review the ICH E20 draft guidance on adaptive clinical trial designs, offering a technical yet accessible breakdown for trial sponsors, practitioners, and those interested in clinical development. Drawing on their practical experience in creating and presenting adaptive trial designs to regulators, they discuss the document’s strengths, areas of consensus, and where cautionary or restrictive language appears. Listeners are guided through the evolving regulatory landscape, distinctions between Bayesian and frequentist approaches, and what new harmonization efforts mean for planning adaptive confirmatory trials. The episode conveys hands-on examples, such as the Sepsis ACT seamless trial and the ROAR pan-tumor trial, illustrating technical points with real-world context. Key operational topics, including blinding, operational bias, adaptive design reports, and clinical trial simulations, are addressed. The discussion includes practical advice on navigating regulatory dialogue, limitations of ICH E20 in early-phase or nontraditional designs, and the necessity of clear justification for adaptive (complex) trial features.

Key Highlights
  • ICH E20 as a global regulatory framework for adaptive designs
  • Tone and caution in guidance may shape sponsor interpretation
  • Seamless, Bayesian, and enrichment designs can all be confirmatory
  • Operational guidance: reporting, simulation, interim analysis, and blinding requirements
  • Emphasis on justification and transparent communication with regulators
For more, visit: https://www.berryconsultants.com/

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: All right.

Welcome everybody.

Back to In the Interim. I'm Scott Berry, and I'm joined today by my partner in crime here, Kert Viele. Dr. Kert Viele, welcome back to In the Interim.

Kert Viele: Thanks, Scott.

Scott Berry: Boy, you're showing up enough here on In the Interim, you probably don't have a whole lot of alpha left.

Kert Viele: I don't, and I'm old too. I've done a lot of trials. Who knows? That's why we gotta hire new people: more alpha.

Scott Berry: Yeah. Okay. That's not our topic for today, but it gives people a sense of what our sense of humor is around here. In the interim, our topic today may sound dry, because it comes from guidance documents: specifically, the ICH E20 draft guidance on adaptive designs.

So my wife Tammy listens to all of these; she's the one that listens to all of these. And so we have people like Tammy, who's brilliant, but she doesn't live this every day. So what is ICH E20? The ICH is the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use.

Kert Viele: That did make it really exciting right there.

Scott Berry: Yeah. Yes it did. Yes it did. So the key word of this, I think, is harmonization. Its mission statement talks about bringing together regulatory authorities and the pharmaceutical industry to discuss scientific and technical aspects of drug registration. And this is really: can they come to a harmonization, an agreement, on guidelines for clinical trials and for interacting with regulators.

An important thing to think about here is that this is harmonization across regulatory agencies globally: EMA, PMDA, FDA, countries around the world. So for something to show up here, it's largely agreed upon globally. What would you add about ICH, and the importance of this?

Kert Viele: So I think the important part here is that this is used by potential sponsors, people who are designing trials. This is essentially a document which is saying: this is okay, this is mildly controversial, this may be more controversial. Nothing in here says you can or can't do anything, but it's certainly giving: here are the kinds of issues we worry about. It provides a structure for how to talk to regulators. It sets up the playing field and essentially says, these are the kinds of things that are always in bounds, and these are the things where you're gonna need more discussion. All of this is vital when people first walk in with, I know I have a trial, what should I do? This is the kind of document where it says, here are my options; this is the field I can play in.

Scott Berry: Okay. And it says harmonization is achieved through the development of ICH guidelines via a process of scientific consensus, with regulatory and industry experts working side by side.

So where is this in the process? They wanted to put out an ICH guidance document on adaptive designs, something you and I have been living in, this space of adaptive designs. Now the FDA has a guidance, other regulatory agencies may have guidance, and now this is an international harmonization of guidance. This is a draft, and they're looking for feedback from the industry on their draft guidance.

So today we are going to dive into what this draft guidance says, with the opinions of people who live the life of adaptive designs. This is what we do, and these are our viewpoints on this draft guidance. So that's the topic today. So, Kert, what's your overall impression when you read this?

And by the way, people can go out and get access to this. You can read this. You should: if you're listening to this podcast and you're not Tammy, you should go read this draft guidance.

Kert Viele: You should make
Tammy read the guidance.

That should be a requirement.

Scott Berry: Yeah. Let me know if you're successful on that. Yeah. Okay. So the first time you read it, what did you think?

Kert Viele: Alright, so first off, this is a really interesting place to be, 'cause where we are is the public comment period. So the natural thing to do is immediately dive into issues, but I think the big picture is the first place to start, and by and large, this is a great document. This is evidence of, a continuation of, the FDA having adaptive guidance, the EMA, all of these other places. The fact is that we've spent the last 20 years designing these trials, and this is now something that needs to be formalized. There's a wealth of experience: hundreds of adaptive trials out there, probably thousands.

I haven't done any kind of census on this.

But this is a document that basically says adaptive trials are part of the armamentarium, part of the things that you're meant to be coming to us with and that we need to discuss. So this is all great. I think the document does a wonderful job of laying out all the issues, all of the pros and cons of adaptive design. It talks about various kinds of designs: group sequential, enrichment, seamless designs. It talks about platforms to some degree, although there are other guidance documents on that. So this is a thorough, complete document, and I think everybody on the committee should be congratulated. Now, we're gonna talk about points where we disagree on occasion, but by and large, 95% of this is wonderful.

Scott Berry: So 10 years ago, ICH was silent on adaptive designs. It doesn't mean we weren't doing them, but to some extent they weren't part of the harmonization. So I agree, the existence of this thing is great. And it says, and we will come to each one of these: Bayesian can be confirmatory. Simulation is important. Seamless can be confirmatory; enrichment can be confirmatory. I mean, almost full stop, we can end the podcast there: it says all of those things.

Kert Viele: Well, it says even more than that, 'cause it says how to do it. I mean, it lays things out: if you do simulation, here's what needs to be in it; here are the issues you need to discuss with regulators. It limits the surprises and de-risks the process for sponsors to adopt these methods.

Scott Berry: Yep. Yeah. Okay. So should we go into the various pieces of this? Within it, what were the potential negatives? One of the negatives for me when reading it is a little bit of the tone.

And a really good example: back in 2010, the FDA came out with a draft guidance on adaptive designs, and it was felt to be challenging. They came out with the phrase that some designs are "less well understood," and that four-word phrase, for some people, was negative. Oh, it's less well understood; you can't do this; it's a struggle. It didn't bother me when I read it. Let's make it understood; that's part of our job, sort of thing. So are there similar things here in the tone, in your read, that could be construed to be similar to that "less well understood"?

Kert Viele: So on the plus side, I think there's less of that. Some of that lesson has been learned; maybe that lesson is now well understood, about how much a document influences practice indirectly. To me, the language issues are the phrases saying things are very hard or impossible: things that I know we've done and gotten accepted are phrased as, oh, this is gonna be incredibly difficult, and we know it's not. So I think it would be valuable to try to get rid of some of that and to talk more about the issues.

Scott Berry: Yep. Okay. Alright. And the other really important thing, beyond the existence of it: is there anything this document says you can't do when you're designing an adaptive trial?

Kert Viele: No. And keep in mind that there's nothing anywhere that does; I know more about the FDA's legal basis than other regulatory agencies', but there's nothing in the law that says you have to be frequentist versus Bayesian. There's nothing in the law that says 0.025. There are standards, and some paths are easier than others, but nowhere does anything say no to anything.

Scott Berry: Yep, which is a really important thing. I don't find this restrictive at all. I'm not going to be involved with the design of a trial and alter it because of what this says. Overall...

Kert Viele: Yeah, I think this is really about where you are on the adoption cycle for adaptive trials. If you're on the bleeding edge, this document's probably irrelevant to you; it doesn't say no, so you're gonna stay on the bleeding edge. This document is an invitation for people a little farther back: adaptive designs are now stable enough, acceptable enough, that another group of people can hop on board.

Scott Berry: Okay. So in the grand scheme, here's an intangible summary question for you about this particular document. The original FDA draft guidance document, 2010 I think it was, was generally perceived to carry a lot of caution. And generally, for the people on the edge of this, doing adaptive designs, it wasn't gonna stop 'em, but it didn't feel like an invitation. The current full adaptive design guidance by the FDA is almost saying you should be doing some of these things: some of these things are better, and they're gonna supply better ways for us to make decisions on your treatments. It was almost a kick in the butt, if you will. It was moving the swath of people to saying, boy, we better know what's going on here; we better do this kind of thing. So maybe that's the scale: from very encouraging, "you should be doing this, it's gonna change the industry," down to caution, where people read the document as, I don't think they want you to do this kind of thing. Where do you find this on that scale?

Kert Viele: This is probably a little closer to 2010, but with a few changes it moves closer to the other document. And this isn't substance; this is, as you've said, tone. So let's talk about, for example, the bias discussion. There are discussions of, oh, if you do this or that, you could run into biased estimates, and so on. All of that is fair to say; I certainly think unbiasedness is important, and quality of estimates is important. I'm not sure unbiasedness is the most important thing in every setting. But this is a situation where you could have a tone of, oh, all these horrible things might happen. Yet we respond to FDA and EMA questions on this; we quantify what the bias is; it's not that much and things are acceptable, or we need to make small changes. So it's not a roadblock, it's something to be aware of.

Scott Berry: Yep. Yep. Okay. So let's dive into the various pieces of this. In terms of adaptive designs, it goes through different kinds of adaptive trials. By the way, it does make a reference that this is largely for confirmatory trials, so we should be thinking late stage. There's one comment in there that I really do not like: it says that this guidance is also relevant for early trials, and I think there are a lot of places where that's completely misplaced, where this isn't really guidance and doesn't make sense. But think about this as phase three trials for registration. So, it talks about seamless trials. What do you think about its discussion of seamless trials?

Kert Viele: So you gotta read that two or three times, I think. The seamless trial discussion talks in some detail about trials that, say, have two doses, where you do a dose selection and narrow it down. It doesn't talk about what we would consider the more general seamless trial, where you're combining a phase two dose-finding and a phase three confirmatory trial and you've got lots of doses. And it has a couple of really negative statements in there; I think it says an adaptive trial shouldn't be a replacement for a proper dose-finding trial. This is one of the places I really would like to see changed. With a seamless trial, the fundamental advantage that we see, and I should probably give the broader picture: if you separate these trials, what companies will do is spend a little bit of time in phase two so they can move to phase three as quickly as possible. When they do that, you're more likely to get the wrong dose. When it's a seamless trial, you actually spend more time in phase two; you can do the math, and you should spend more time in phase two, because you get to use the data and you make better decisions. So I would say a seamless two/three trial, not always, we should have a discussion, but in many cases it's a more proper dose-ranging trial, and it gets the information that regulators want. It has more information about what is the right dose.

Scott Berry: Yeah, so there are snuck in there some phrases that just feel cautionary, like saying if you do a seamless two/three trial, you might have to do a "proper" dose-ranging trial, as though the seamless one isn't proper. If a few words like that go away, it's clear: you can do seamless two/three trials. And I agree, that's the part where, in some sense, it also says the goal of adaptive designs is to be more efficient, and it almost brings out this aspect that you're not gonna get better answers, you're just going to do it with fewer resources. And I think you gave a great example of where you actually get better answers.

I'll throw a trial out there where I think this exactly happened. The Sepsis ACT trial was a trial in sepsis; Ferring was the sponsor, and they published a paper on the design. It was a seamless two/three trial with four doses in phase two, going to a phase three where it picks a single dose. Because the data in phase two was going to be included in the final analysis, and that's what a seamless trial is: it moves from multiple doses to a single dose, but you can use the data from the first part in the confirmation. Because that data was used in the confirmation, they made the phase two part larger. And it's actually a trial that, because it was larger, stopped for futility and didn't go to phase three. Interesting: if you look at where that trial was at the interim, with the smaller trial they might otherwise have run, they might have jumped and run a very large phase three trial inappropriately, and failed that large phase three trial. They got better answers by being adaptive in that circumstance. There are efficiencies, but that's an example of a trial where I think you get better answers by being adaptive.
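To make the mechanics concrete, here is a minimal simulation sketch of a seamless two/three design with a binary endpoint: stage one randomizes across four doses and control, the best-looking dose moves forward, and the final analysis pools both stages for the selected dose. The sample sizes and response rates are hypothetical placeholders, not the Sepsis ACT design; the point is that simulation quantifies how much the dose selection inflates a naive pooled test, which is exactly what an adaptive design report has to address.

```python
import numpy as np

rng = np.random.default_rng(1)

def seamless_trial(p_ctrl, p_doses, n1=50, n2=150):
    # Stage 1: n1 patients per arm across control and each dose
    x_ctrl1 = rng.binomial(n1, p_ctrl)
    x_dose1 = rng.binomial(n1, p_doses)        # one draw per dose
    best = int(np.argmax(x_dose1))             # interim dose selection
    # Stage 2: selected dose versus control only, n2 more per arm
    x_ctrl2 = rng.binomial(n2, p_ctrl)
    x_dose2 = rng.binomial(n2, p_doses[best])
    # Final analysis pools both stages (the seamless part)
    n = n1 + n2
    p_c = (x_ctrl1 + x_ctrl2) / n
    p_d = (x_dose1[best] + x_dose2) / n
    p_bar = (p_c + p_d) / 2
    se = np.sqrt(2 * p_bar * (1 - p_bar) / n)
    return (p_d - p_c) / se if se > 0 else 0.0

# Under a global null (all rates 0.30), selecting the best of four doses
# inflates a naive one-sided 0.025-level pooled test; simulation puts a
# number on that inflation so the boundary can be adjusted.
z = np.array([seamless_trial(0.30, np.full(4, 0.30)) for _ in range(20000)])
print("naive type I error:", (z > 1.96).mean())
```

In practice, the efficacy boundary would be raised, or the test adjusted, until the simulated type I error comes back to the nominal level.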

Kert Viele: Well, in a lot of trials like this, we go: hey, if I do this in a standard way, I can only explore three doses. Those efficiencies, we don't always trade them for sample size. We trade them to answer a more complete question: explore five doses, whatever you need to do here.

Scott Berry: Yep. And if I have to do fixed randomization and I add a dose, it linearly increases the size of the trial. If I do adaptations, seamless two/three, response-adaptive randomization, I might explore more doses. And we have examples of that as well, where Eli Lilly did seven doses exploring Trulicity; the original design was three doses in a fixed trial, but because of response-adaptive randomization, they could explore seven in that circumstance. So, multiple examples of that. So I think it's a case where, first of all, they talk about seamless being confirmatory. That's great, but there are a few hidden adjectives we would love to see removed. Okay, what about Bayesian? Does it talk about Bayesian in the guidance?

Kert Viele: So, yes and no. I have strong opinions about the Bayesian part; I noticed you didn't lead with that. But anyway, the Bayesian trial is there; there's a section on it. There's a brief introduction to Bayes in general, defining it as basically updating a prior with data, and so on, and then it moves to borrowing of external trial data. Which, going back to the seamless trial, is kind of odd: if you say you can't do seamless, you're saying you can't borrow in that context. So you could borrow external data into your phase three trial, but you can't borrow from a phase two that was randomized in the same trial? It doesn't flow right. But on the Bayesian side, one thing that's important to me when you get into borrowing: Bayesian is not equal to borrowing. There are lots of ways to be a frequentist and do borrowing. We don't tend to do those, so I'm not expert in them, but they exist, and they certainly do nice things. Bayesians do borrowing, and there's a lot to Bayesian that is not borrowing, and that's given short shrift there. I think there ought to be discussions of hierarchical models for modeling subgroups, predictive probabilities, those kinds of things. Bayesian should not only be informative priors; there are lots of other applications.
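As a concrete non-borrowing example, here is a minimal sketch of a predictive probability calculation at an interim under a Beta-Binomial model; the prior, interim counts, and success threshold are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import betabinom

def predictive_prob_success(x, n, n_max, x_needed, a=1.0, b=1.0):
    """P(final successes >= x_needed | x of n so far), Beta(a, b) prior.

    Remaining patients' successes follow a Beta-Binomial under the
    current posterior, so no simulation is needed in this simple case.
    """
    n_rem = n_max - n
    post_a, post_b = a + x, b + (n - x)        # posterior after interim data
    k_needed = x_needed - x                    # successes still required
    if k_needed <= 0:
        return 1.0
    return 1.0 - betabinom.cdf(k_needed - 1, n_rem, post_a, post_b)

# Hypothetical interim: 12/30 responders, 60 patients planned, success
# declared if at least 25 total respond.
print(round(predictive_prob_success(12, 30, 60, 25), 3))
```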

Scott Berry: Yep. Yep. So generally, when you read that section, and by the way, that tying-together is done by others: a lot of times people contact us and say, we want to do a Bayesian trial, and what they mean is an adaptive trial; it's not necessarily driven by Bayesian statistical models, though many of them are, which is why those things got tied together. And many times, I suspect, when people come to the agencies about potentially borrowing, they're probably bringing a Bayesian model with them. So there are ways in which these things get tied together, but I agree completely: they're separate topics. Overall, what was your view of their discussion of Bayes?

Kert Viele: So, with the caveat that the Bayes part needs to be expanded, the borrowing part I liked; it has a reasonable discussion of what you need to worry about. I am not anti type one error, so I am okay with the discussion of that as well; I think it ought to be there. And there's a discussion of relevance: when we go to the agency with these trials, a lot of times we don't even argue the statistics. We argue whether the data we want to borrow from is relevant, and that's certainly in this document. So I think it says, here are the issues you ought to discuss, and these are the issues that we've discussed in the past. And borrowing is sometimes appropriate and sometimes not.

Scott Berry: In some ways I think it's relatively weak; there's not a lot there. That's not bad. Again, it's not restrictive, and it exists, and this is a harmonization. So in some ways, this says that regulatory agencies globally largely agree that Bayes can be confirmatory. I mean, its existence, that's a really important thing, and its being not overly prescriptive is actually pretty good. I thought it was opening. The idea of borrowing is a very reasonable thing, and by the way, forecasting forward, that's something that's gonna be done much more now. It's not always borrowing things we already know about historically: it's a trial of four baskets, where I'm prospectively going into four subsets of disease and I wanna borrow from one subset to another. I think this is pretty open to that idea as a very reasonable thing to do.

Kert Viele: So I read it that way as well, and I think here's where it's worth a little bit more language, because part of the language in there on, say, bias and estimation is a little bit at odds with Bayes. To me, working in adaptive trials for the last 15 years, or however long it has been (anyway, it's been 15 years), the greatest advantage to Bayes when you're doing adaptive trials is that the posterior distribution is the posterior distribution. You don't have to worry about correcting it. You don't have to worry about what the design is. You don't need these different corrections all over the place. The posterior is the posterior, the inferences are the inferences, and you are done. However, from a strict frequentist standpoint, those are biased in some respect, and I think there ought to be some discussion that we ought to be trying to get good estimates; especially in adaptive trials, there should be a focus on mean squared error, and not necessarily on bias as the only thing or the pre-eminent thing in the world.
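To put numbers on that bias-versus-MSE point, here is a minimal sketch with hypothetical design parameters: a two-look trial that can stop early for efficacy, comparing the naive estimate with a posterior-mean-style estimate shrunken under a skeptical normal prior. The Bayesian computation is the same whether or not the trial stopped early; the frequentist summaries printed at the end show the trade-off.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimates(theta, n1=50, n2=50, stop_z=2.5, sigma=1.0, n_sims=50000):
    naive, shrunk = [], []
    for _ in range(n_sims):
        x1 = rng.normal(theta, sigma / np.sqrt(n1))       # stage-1 mean
        if x1 / (sigma / np.sqrt(n1)) > stop_z:           # early efficacy stop
            est, n = x1, n1
        else:
            x2 = rng.normal(theta, sigma / np.sqrt(n2))   # stage-2 mean
            est = (n1 * x1 + n2 * x2) / (n1 + n2)
            n = n1 + n2
        naive.append(est)
        # Posterior mean under a skeptical N(0, 0.25^2) prior: the same
        # formula applies whether or not the trial stopped early.
        tau2, se2 = 0.25**2, sigma**2 / n
        shrunk.append(est * tau2 / (tau2 + se2))
    for name, e in (("naive", naive), ("shrunken", shrunk)):
        e = np.asarray(e)
        print(f"{name:8s} bias {e.mean() - theta:+.4f}  "
              f"rmse {np.sqrt(((e - theta) ** 2).mean()):.4f}")

estimates(theta=0.2)
```

In this configuration the naive estimate is selection-biased upward, while the shrunken estimate carries a bias of similar magnitude toward zero with a noticeably smaller mean squared error, which is the sense in which unbiasedness alone is the wrong target.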

Scott Berry: Yeah. Yep. Yep. Agree. Okay. You talked about enrichment trials being in there. Anything specifically about enrichment trials?

Kert Viele: I like what's there. Though I do think, and this is the same as with seamless, where it talked about two doses and not the full seamless: the enrichment section talks about essentially biomarker positive and negative, and that's the focus. We do a lot of trials in rare oncology, rare tumor types, where we may look at 10 different cancers and we wanna borrow among them. The ROAR trial, for example, was the first pan-tumor approval, for FDA at least, where you got a drug that was targeted to anything that had the right mechanism, the right target for the mechanism to act on. I do think the enrichment section should talk about those kinds of trials. And this gets into the estimation again: we know that if we look at multiple groups, the highest of the high is biased high, and hierarchical models correct that; that's an important part of this. Again, it's about getting good estimates, and unbiased isn't necessarily the best estimate.
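Here is a minimal sketch of that "highest of the high" effect in a basket trial, using a simple method-of-moments (empirical-Bayes) shrinkage toward the overall mean; the ten baskets, sample size, and common 30% response rate are hypothetical, not the ROAR trial:

```python
import numpy as np

rng = np.random.default_rng(3)
K, n = 10, 25                    # 10 baskets, 25 patients each
theta = np.full(K, 0.30)         # same true response rate everywhere

best_raw, best_shrunk = [], []
for _ in range(20000):
    p_hat = rng.binomial(n, theta) / n
    mean = p_hat.mean()
    se2 = mean * (1 - mean) / n                      # within-basket variance
    tau2 = max(p_hat.var(ddof=1) - se2, 0.0)         # between-basket variance
    w = tau2 / (tau2 + se2) if (tau2 + se2) > 0 else 0.0
    shrunk = mean + w * (p_hat - mean)               # shrink toward the mean
    best = int(np.argmax(p_hat))
    best_raw.append(p_hat[best])
    best_shrunk.append(shrunk[best])

print("true rate 0.30")
print("mean raw estimate in best-looking basket:", round(np.mean(best_raw), 3))
print("same basket after shrinkage:             ", round(np.mean(best_shrunk), 3))
```

With no true between-basket variation, the raw estimate in the best-looking basket sits well above the true rate, while the shrunken estimate for the same basket comes back substantially closer to it.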

Scott Berry: Yep. Yep. And it ties into this idea. So we do a lot of adaptive designs, and we have been told no by a regulator because we haven't yet convinced them on type one error. I mean, that is a red line to some extent, although we have had trials where we know type one error is inflated, and that's been okay too. So I'm not saying every single trial is that way, but that's a very strong thing. Maybe this is a bad thing to say, but we supply estimation summaries in almost every one of our simulation reports; we look at the behavior of the estimation, and I don't think we've ever been told no because of the behavior of the estimation.

Kert Viele: Well, I think in general we want the shrinkage to do reasonable things. We actually run into the opposite problem, where we run a basket trial, the drug performs incredibly well in one of the baskets, we shrink it down, and we can run into debates: oh, I don't want the estimate to be less; hey, I want to be able to put the big number on there. But we go: hey, this is the highest of the high. It's biased high. It should come down.

Scott Berry: Yep. And in some scenarios, we know that the estimate for the highest group is gonna be biased; I mean, that's the whole role of the shrinkage there. Now, the document also spends a good bit of time on two other topics, as the FDA's does as well: operational bias and the role of blinding. What do you think about its discussion of the implementation of adaptive designs, ensuring blinding, those aspects?

Kert Viele: I think those are great. If I have comments, they're all minor. It lays out issues, it talks about communication flows, and it emphasizes that you don't want data to get to the wrong places. It discusses what I would consider maybe esoteric topics for Tammy: if a sponsor needs access to certain information, there need to be blinded firewalls in place to make sure it doesn't negatively affect the study. There are discussions of interim analysis committees. I think it's very thorough. I like it a lot.

Scott Berry: Yeah. Yeah. I do too, and I largely agree with it. Making sure that's done really well is, I think, important for adaptive designs. It also discusses what we call an adaptive design report: a report that summarizes and describes in complete detail the adaptive design and the simulations conducted around the design. What did you think about its description of the need for this, and the components of this, in the ICH?

Kert Viele: So this is another one of the sections I think is done really well. It lays out tons of issues, talks about different scenarios, and has important points like plausible scenarios rather than all scenarios. There's the famous example we've talked about on another podcast, where we were asked to simulate what happens when people become immortal under certain conditions, which I think is largely a thing of a time passed; this document is in the present, so it does that well. It has a few throwaway statements in it, I'd say. I don't know if they're throwaway or not, but they essentially say this is impossible to do under certain conditions, and this gets back to "less well understood": I'm worried about somebody just pointing to that and going, oh, they wrote 10 pages on this, but they really only meant the one sentence that says this is impossible; that's the secret code that tells us not to read the other 10 pages or pay any attention to them. Sometimes this is hard. Sometimes the null space is hard to explore, and we need to be transparent about that. But in a number of cases, say a dichotomous response, you look at a grid of probabilities and you've done what you needed to do.
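For a dichotomous endpoint, exploring the null space on a grid can be as simple as the following sketch, which estimates the type I error of a hypothetical two-look design at several null response rates; all design parameters here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def z_stat(xt, xc, n):
    pt, pc = xt / n, xc / n
    pbar = (pt + pc) / 2
    se = np.sqrt(2 * pbar * (1 - pbar) / n)
    return (pt - pc) / se if se > 0 else 0.0

def type1(p_null, n1=60, n2=60, z1=2.8, z2=2.0, n_sims=20000):
    """Simulated rejection rate under H0 for a two-look design."""
    rejections = 0
    for _ in range(n_sims):
        xc1, xt1 = rng.binomial(n1, p_null, size=2)
        if z_stat(xt1, xc1, n1) > z1:                # early efficacy stop
            rejections += 1
            continue
        xc2, xt2 = rng.binomial(n2, p_null, size=2)
        rejections += z_stat(xt1 + xt2, xc1 + xc2, n1 + n2) > z2
    return rejections / n_sims

for p in (0.1, 0.2, 0.3, 0.4, 0.5):                  # the grid of nulls
    print(f"control = treatment = {p:.1f}: type I error = {type1(p):.4f}")
```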

Scott Berry: This is another place, if we go back: the ICH E20 draft guidance says simulation for demonstration of type one error is a thing you can do; it's scientifically reasonable, and it says so in the draft. Looking back over the years, that's been a controversial thing for years, and it was never in the FDA guidance; now it's in ICH E20. That's progress, and it's another place we don't want to ignore: this is a super positive thing that's in there.

Okay. There are a couple things that we kind of disagree with, and I'll call them little myths, things I'm not sure are grounded. "Don't do a lot of interims." There are people who have the viewpoint that you should only do one interim, and we get people who tell us, oh, you shouldn't do two interims, regulators don't like that. Now, it doesn't say you can't do six interims, but it gives this feeling, if you read that statement, that generally interims are bad. And I understand the notion; it's interesting, and Don has said this for a long time: if you do one interim, it might actually supply a certain amount of information about what's going on, whereas if you were to, at the other extreme, do interims every day, there's almost nothing that's ever learned by anybody, because it's just continually going on. So it's always a statement I've found odd, and I'm not sure it's actually real, but it's repeated by people. So there are a couple things like that that show up in the document as well.
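On the number-of-interims point, here is a minimal sketch of why more looks are not inherently a problem: for any number of equally spaced looks, a common Pocock-style efficacy boundary can be calibrated by simulation so the overall one-sided type I error stays at 0.025. The look schedule and stage sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def max_z_under_null(k, n_per_stage=50, n_sims=200000):
    """Max of the k accumulating z-statistics under the null."""
    increments = rng.normal(size=(n_sims, k)) * np.sqrt(n_per_stage)
    cum_sums = increments.cumsum(axis=1)
    info = np.sqrt(n_per_stage * np.arange(1, k + 1))
    return (cum_sums / info).max(axis=1)

for k in (1, 2, 6):
    c = np.quantile(max_z_under_null(k), 0.975)   # calibrate the boundary
    alpha = (max_z_under_null(k) > c).mean()      # check on fresh draws
    print(f"{k} look(s): boundary {c:.2f}, overall alpha {alpha:.4f}")
```

The boundary rises with the number of looks, but the overall error rate holds wherever it is calibrated; the real question is what each extra look buys, not whether it is allowed.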

Kert Viele: And one thing I think we ought to echo in this podcast, things that aren't myths, that are cautionary in this document: there's a discussion that there is some limit on how many adaptations and so on you can do. Now, we certainly aren't on the side of, hey, three interims as opposed to one is exceeding the limit; that's not what we would call the budget. But certainly the document raises a good point. If you're talking about a platform trial, the model's changing, there are different protocols at different sites, and lots of things can be very complex. We have, at some level, experimented with experimenting, and I think it's fair to say we've walked out at the end of a trial and said, hey, that was maybe too complex; it didn't do exactly what we wanted it to do. But this should be part of the discussion. I don't think anyone should take this as, you know, don't do two interims, or don't do two things, say a factorial design with early stopping. But "don't throw in everything in the toolbox at once" is fair.

Scott Berry: Yep. And I do think that's fair. That touches on another topic I'm interested in your viewpoint on. It does give the viewpoint that you should justify why you're doing, say, an adaptive design. Now, I think some people might take the viewpoint of, why doesn't somebody have to justify that they're doing a fixed trial? Why do I have to justify a scientifically credible thing? What's your take on that general sentiment in the document?

Kert Viele: So I'm in favor of that sentiment in the document. If we're talking about non-adaptive trials, they're easier to implement, and they're usually easier for people to write up and to understand. If you get into corrections for estimation from a frequentist standpoint, there are complexities that come with an adaptive trial. So if, at the end of the day, you are going to get exactly the same thing, patients treated essentially the same, the same chance they get a good drug, the same chance they mistakenly get a bad drug, and that includes time and the correct choice of dose (there's a lot more to it than power and type one error): if those are all the same, definitely go simple. You need to say, here is the advantage that justifies the cost of doing an adaptive design. Often that's in adaptive's favor, but we know, if accrual's fast, it's not the time to be adaptive, for example; there are exceptions to "adaptive wins."
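That kind of justification is usually a side-by-side comparison of operating characteristics. Here is a minimal sketch with hypothetical numbers, comparing a fixed two-arm design against the same design with a single binding futility look, on type I error, power, and expected sample size per arm:

```python
import numpy as np

rng = np.random.default_rng(6)

def operating_chars(delta, futility_z=None, n1=100, n2=100, crit=1.96,
                    n_sims=50000):
    """Power and expected per-arm sample size for a two-arm trial with an
    optional binding futility look after n1 patients per arm."""
    wins, total_n = 0, 0
    for _ in range(n_sims):
        z1 = rng.normal(delta * np.sqrt(n1 / 2), 1)        # stage-1 z
        if futility_z is not None and z1 < futility_z:
            total_n += n1                                  # stop for futility
            continue
        z2 = rng.normal(delta * np.sqrt(n2 / 2), 1)        # stage-2 increment
        z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
        wins += z_final > crit
        total_n += n1 + n2
    return wins / n_sims, total_n / n_sims

for name, fz in (("fixed design   ", None), ("futility at z<0", 0.0)):
    a, en0 = operating_chars(0.0, fz)          # null: standardized effect 0
    p, en1 = operating_chars(0.3, fz)          # alternative: effect 0.3 SD
    print(f"{name}: alpha {a:.3f}, power {p:.3f}, "
          f"E[N/arm] null {en0:.0f}, alt {en1:.0f}")
```

In this setup the futility look gives up a sliver of power and buys a much smaller expected sample size under the null; whether that trade is worth the operational cost is exactly the justification being asked for.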

Scott Berry: The other part of that, and I'm completely fine with that as well, though there's one challenge to it: we don't just decide on a design. We don't throw in adaptive things because, ooh, it would be cool to do that, or because we're showing off. We've gone through a sequence of comparisons with the design team. We've said, here's what you could do with this. We always start with a fixed design: here's a fixed design; now we start to add adaptations. We've done all those comparisons, and we've demonstrated the advantages that this design supplies. So I think it's a very reasonable thing. One of the challenges that I've always found with this is that we go through this iteration of the design where we do lots of comparisons to other designs. What if we do RAR with a different frequency? What if we don't do it? And it's hard to present that entire story of comparisons to regulators for the designs we're not running. So we walk in and we say, this is the design. I know it relies upon us to explain the value of the things we're doing, but it's a little bit harder at the end of the story to provide all of that story.

Kert Viele: And we've had confusions where regulators have come back and essentially brought up reasonable issues about designs we're not proposing, because those comparisons are often in there. It can be confusing. So it's a cautionary tale: lay those out carefully and make sure everyone knows what the default is. Now, I will say on that issue, my pet peeve is the part where it says Bayesian has to be justified, like adaptive does. I think there's a cost to being adaptive that needs to be justified in terms of the extra operational effort; Bayes versus frequentist is not the same thing. Those are just two different ways of doing inference. I always call this the Animal Farm argument of statistical paradigms: frequentist is equal, but frequentist is more equal than Bayes. So I object to that one.

Scott Berry: That's good. I should have asked you your pet peeve; you have a pet peeve each episode. Okay, good one. Any other topics I missed on the draft guidance?

Kert Viele: I think we've actually done pretty well going through it. As I said, I think we should come back to the positives here and just how much of this document we agree with. I think we agree with virtually everything they say. You should justify the following aspects: we agree. These are the pros and cons: we agree. It's a lot of the adjectives around it that we're reticent about, essentially people who are involved in regulatory consulting going, this is what this means; oh, this is the sentence that invalidates the rest of the document. And we've been through that often enough. We'd like it to focus on the issues.

Scott Berry: Yep. Yep. That the regulators don't think a seamless two/three trial is "proper," you know, weird things like that. So largely, I agree. I read this and there was a little bit of uh-oh, you know, what are we gonna get here? And I thought it was largely a very positive document. So, congratulations to everybody who worked on it. I know: consensus and harmonization, yikes. So congratulations to them. Is Berry planning to provide a response to this? They've invited responses. Are we planning to provide a response?

Kert Viele: Yeah, we're definitely going to provide our own response. We're also interested, and I know other groups are working on responses as well; we're happy to collaborate on those responses. So certainly we're gonna make sure that we express where we would like to see changes. And I think we've covered a lot of our big topics today.

Scott Berry: Yeah. Yeah. Brilliant. Alright, so thank you, Kert, for joining. Thanks for weighing in on this, here in the interim. And remember, we're always in the interim of science and life, so here we are. Thanks, Kert.

Kert Viele: Thank you.