In the Interim...

In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a typical hypothetical scenario for rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria—using Bayesian predictive probability and assurance—and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. Practical, empirical, and tailored to real funding barriers in clinical research.

Key Highlights
  • Statistical structure and efficiency of seamless Phase II/III trial designs
  • Administrative (financial) interim analysis setup as funding decision triggers, distinct from futility analyses
  • FDA operational bias guidance and requirements for trial blinding
  • Predictive probability and assurance as objective interim criteria
  • Sample data and simulation outputs to facilitate stakeholder alignment
For more, visit us at https://www.berryconsultants.com/

Creators and Guests

Host
Scott Berry
President and a Senior Statistical Scientist at Berry Consultants, LLC

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: Well, welcome back, everybody, to In the Interim. I am your host, Scott Berry.

Appreciate you joining me here.

Uh, as I have discussed on many of these, uh, my wife Tammy listens to all of these and gives me her comments about, uh, about the various podcasts, uh, also asking, you know, where am I looking in all of this?

So you can see I've got this, this sort of setup, and I don't peer into the, the camera here.

I'm looking at my large monitor
where I have my, my notes about the

podcast and my, my other computer
down here and the camera's over here.

So I, I know it's a little bit
odd that I'm always looking

in the opposite direction.

Maybe this is in part my, uh, my introvertedness of being a statistician; it's hard to peer into the camera.

There's nobody there in it.

And that, that, that's the
old, that's the old joke.

Uh, how do you tell an
introverted statistician from

an extroverted statistician?

The extrovert looks at
somebody else's shoes.

Alright, so I think it's fair game for me to criticize statisticians, and I may do that again, uh, because I am one.

In this, years ago, I was involved, we were involved, Berry Consultants were involved, a number of people, in a really wonderful effort in a project called ADAPT-IT.

ADAPT-IT was funded by the FDA and the NIH to investigate bringing innovative trial designs into trials that were part of NETT, the Neurological Emergencies Treatment Trials network.

And we went in and we, we, we took what
were gonna be five different trials.

It turned out to be six
trials we investigated.

And what if we looked to
innovate the trial design?

What was really great about the project is we had, uh, behavioral scientists studying that process of bringing innovative trial designs to these particular trials.

One of 'em was what ended up being ICECAP, a cardiac arrest trial. Subjects, uh, sorry, patients: a patient suffers cardiac arrest, they're brought into the emergency room.

All of these particular scenarios centered on the emergency room. And the question is, does hypothermia create better neurological outcomes?

There's a period of time where the blood is not flowing in the brain, lack of oxygen, and heat makes that worse. By putting a patient in a hypothermic state, does that create better outcomes?

That was just one of six where
there was a stroke trial, there

was a spinal cord injury trial.

There were a number of particular
trials, so there were behavioral

scientists that were studying bringing
innovative trial designs to that.

We explored adaptive sample size.

We explored response-adaptive randomization.

We explored futility interim analyses, multiple arms, modeling, all in the different scenarios.

The behavioral scientists, in studying this, part of their goal was: what are the barriers to bringing innovative trial designs?

Is it the funders?

Is it clinicians?

Is it regulators?

Is it operations?

Is it statisticians?

Uh, what, what are the various barriers?

Interestingly, the number one barrier that they found was statisticians.

Which is kind of an interesting place, that interesting role. And I think it was a different time; this was about 2005 to 2010-ish that we were investigating these trials.

It was a different time, but there were hurdles that you had to overcome in bringing these innovations, the what-ifs, in all of that.

Um, now, what we do at Berry Consultants: we do a number of what I'll call innovative trials.

And now, "innovative" is a self-gloss that it's better, and I always get concerned saying, oh, we design innovative trials, as if by some sense you mean better trials, smarter trials, we're better.

Um, but what I mean by innovative trials is things that have rarely or never been done before.

Now, you could argue whether they're worse or better, so I wanna leave that aside. But when we're doing things that are done relatively rarely or have never been done before within trial designs, I'll call those innovative trial designs.

Okay?

What turns out to be barriers?

Uh, you know, statisticians in some cases can be barriers. I don't think they're barriers.

Uh, I think you can present and create scenarios: what are the ways that we make it more robust and handle that?

What turns out to be a barrier to, to
me, I'll speak for myself, is the, the

scientists who say, I can't define it.

I'll know it when I see it.

Now, what do I mean by that?

The scenario is you're, you're
trying to look at adaptive designs

or you're trying to measure is
design A better than design B?

You have to be able to understand
what's the goal of the trial,

what are we trying to do?

Are we trying to find the
right dose, the right duration?

Are we trying to find the right patients?

And what is success?

If we're trying to compare A to B, we've gotta be able to say A does a better job at, uh, at being successful than B does, or vice versa, or at the amount of resources to get the right answer.

But you have to be able to define what is good, what's right, what trials you would take to phase three.

And when you want to simulate that and you want to compare that and you want to say is trial A better than trial B, if the scientists on the other side say, I can't define what's better, show me the trial result and I'll tell you whether I'll go to phase three, that doesn't lend itself to analytical comparison.

It doesn't lend itself to
building a better design because

I don't know what success is.

When are we there?

What, what, what is good and what is bad?

That becomes really the enemy
to doing innovative designs.

Now, uh, I'll talk as we go through here about ways to combat that. I think you can get good answers out of those individuals in that scenario.

So I wanna bring up a common
scenario where this can happen.

And this was, by the way, um, this was a topic suggested to me, as somebody said, oh, I'm sure you did a podcast on this, I'll go find it. And I hadn't.

But it is, it's a ubiquitous thing for us. This comes up all the time. So here I'll do a setup and then I'll get to the topic.

A sponsor is looking to design a phase two trial. The phase two trial has two doses and a placebo, so it's a three-arm trial, and it's a reasonably rare disease.

They're thinking of running a trial that is 30, 30, and 30, so 30 patients on placebo, 30 on the low dose, 30 on the high dose.

They want to run the phase two
trial and then they're gonna

run the phase three trial.

And this is a scenario where we know the phase three endpoint.

Uh, there are some scenarios where you're running phase two and it's really a wide-open space of what the endpoint might be. Here, the endpoint is somewhat dictated by the area, measuring clinical benefit.

So the, the endpoint is known and
then they're gonna run a phase three,

which is a one-to-one comparison of
the dose that they select to placebo.

And they're thinking that trial might be 90 versus 90; for the effect size that they need to have, they have a pretty good idea of that trial being 90 versus 90, powered at their effect.

So they're looking at a
development pathway that is

phase two, which is 30, 30, 30.

They, uh, get full follow up on patients.

They go through the analysis of that data.

They look at that data, they make
a decision on going to phase three.

They then submit the protocol.

They go through all of what I'll call the white space between two and three, and they then run a trial of 90 versus 90 in phase three.

So we, we said, well, what
about a seamless two three trial

where you enroll 30, 30, 30?

You select a dose from that, and the
trial seamlessly moves into phase

three, which is randomization of
placebo to the dose that's selected.

In phase three. And we can talk about the size of that, but let's, for a minute, just think about it: the separate trials were a 90-patient trial followed by 180 patients.

The seamless II/III trial
could be 30, 30, 30.

You pick a dose and you enroll 60 against
60 in the second part of the trial.

And the patients in the first
part and the second part are

part of the primary analysis.

So the dose that's selected in the
seamless trial will have 90 patients

against placebo, 90, and you'll
have 30 on the unselected dose.

Now what does that mean? In the seamless trial, you do have to adjust the alpha at the end if you're gonna include those patients, because you quote-unquote picked the better dose in phase II, you cherry-picked the better dose, and then you are including that data at the end of phase III. There are straightforward ways to adjust alpha and do that.

What's great about that, and what is kind of amazing, is that you have more power, even though it's still 90 patients, than you will have if you run a separate 90 vs 90 phase III. The power of the seamless II/III trial, which is 90 patients followed by 120 patients, 210 total compared to 270 total, is higher for demonstrating success at the end of that phase III. So huge benefits in that scenario for resources.
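As a rough illustration of how a comparison like this is typically evaluated, here is a minimal Monte Carlo sketch in Python. The endpoint (normal with SD 1), the effect sizes, and the Dunnett-style adjusted critical value are all illustrative assumptions, not the numbers from the engagement described here, and whether the seamless path comes out ahead depends on those choices and on the adjustment method used.

```python
# Monte Carlo sketch (not actual design code): compare a separate phase 2 then
# phase 3 path against an inferentially seamless phase 2/3 with dose selection.
# Endpoint, effect sizes, and the alpha adjustment are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

SD = 1.0                        # assumed known standard deviation of the endpoint
EFF_LOW, EFF_HIGH = 0.3, 0.5    # assumed true effects of low/high dose vs placebo
N_STAGE1 = 30                   # per arm in the phase 2 / stage 1 portion
N_STAGE2 = 60                   # added per arm in the seamless stage 2 (total 90 vs 90)
N_PH3 = 90                      # per arm in a separate phase 3
Z_CRIT = 1.96                   # conventional one-sided 2.5% critical value
Z_CRIT_ADJ = 2.21               # assumed Dunnett-style value for the best of two
                                # doses vs a common control; a real design would
                                # justify its adjustment (e.g., combination tests)
N_SIMS = 20_000

def z_stat(trt, ctl):
    """Two-sample z statistic with known SD."""
    return (trt.mean() - ctl.mean()) / (SD * np.sqrt(1 / len(trt) + 1 / len(ctl)))

wins_separate = wins_seamless = 0
for _ in range(N_SIMS):
    # Stage 1 / phase 2: placebo, low dose, high dose, 30 each
    pbo1 = rng.normal(0.0, SD, N_STAGE1)
    low1 = rng.normal(EFF_LOW, SD, N_STAGE1)
    high1 = rng.normal(EFF_HIGH, SD, N_STAGE1)
    pick_high = high1.mean() >= low1.mean()      # pick the dose on observed efficacy
    eff_sel = EFF_HIGH if pick_high else EFF_LOW
    sel1 = high1 if pick_high else low1

    # Separate path: a brand-new 90 vs 90 phase 3 on the selected dose
    pbo3 = rng.normal(0.0, SD, N_PH3)
    sel3 = rng.normal(eff_sel, SD, N_PH3)
    wins_separate += z_stat(sel3, pbo3) > Z_CRIT

    # Seamless path: stage 1 patients carry into the final 90 vs 90 analysis,
    # tested against the adjusted critical value
    pbo2 = rng.normal(0.0, SD, N_STAGE2)
    sel2 = rng.normal(eff_sel, SD, N_STAGE2)
    z_final = z_stat(np.concatenate([sel1, sel2]), np.concatenate([pbo1, pbo2]))
    wins_seamless += z_final > Z_CRIT_ADJ

print(f"Separate 2 then 3 (270 patients): power ~ {wins_separate / N_SIMS:.2f}")
print(f"Seamless 2/3      (210 patients): power ~ {wins_seamless / N_SIMS:.2f}")
```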

But the other part of it that's just incredible is the lack of white space between them: the time for that trial to read out, the time to then select the dose, start phase three, enroll another 180 patients.

So we simulated that. And by the way, you run into this case of how do we pick the dose between them, and I don't wanna spend a ton of time on this.

Maybe that's a whole separate podcast of picking that dose.

It can be purely on efficacy.

It can be a combination of efficacy and safety.

It can be efficacy with a DSMB weighing in on whether there's a safety or tolerability issue on it.

But for now, let's think about picking
the better dose based on efficacy.

And in this particular case, the primary endpoint at the end of phase three is really a combination of both. It's the clinical outcome of the patient, so we're gonna pick the one that has the better clinical outcome.

That's a case, by the way, where you could run into this enemy that I referred to of: I can't tell you how I'll pick the dose, you need to show me the data and then I'll pick it. But that's not the topic.

When we presented this and we showed that this trial of 210 patients has more power than 270 patients, adjusting for alpha, the incredible part of this is they estimated the time between phase two and phase three would be almost a year.

That's just the natural space that will happen if they run a separate phase two and a phase three; that space disappears in a seamless, uh, a seamless two/three trial in that setting.

We asked them how long that will take. We looked at enrollment. We simulated, uh, the trials using the numbers presented.

And again, one trial is 270 patients, and the average time to readout of the phase three portion of that was approximately 50 months. There was a range to that; we had a confidence interval, or a prediction interval, for the range of that.

And the median time for the inferentially seamless trial was 25 months. Just call it 45 months to 25 months, 20 months different.

Less resources, higher probability
of success, win, win win.
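As a rough sketch of how readout timelines like these can be estimated by simulation, the following uses made-up accrual, follow-up, and white-space assumptions rather than the actual enrollment numbers from the project.

```python
# Rough sketch of simulating time-to-readout for the two development paths.
# Accrual rate, follow-up, and white space below are placeholders, not the
# numbers from the actual engagement.
import numpy as np

rng = np.random.default_rng(2)

RATE = 6.0          # assumed enrollment rate, patients per month
FOLLOW_UP = 6.0     # assumed months of follow-up after the last patient
WHITE_SPACE = 12.0  # assumed gap between a separate phase 2 and phase 3

def months_to_enroll(n, rate):
    """Time to enroll n patients under a simple Poisson-process accrual model."""
    return rng.exponential(1.0 / rate, n).sum()

sep, seam = [], []
for _ in range(10_000):
    # Separate path: 90-patient phase 2, white space, then 180-patient phase 3
    sep.append(months_to_enroll(90, RATE) + FOLLOW_UP
               + WHITE_SPACE
               + months_to_enroll(180, RATE) + FOLLOW_UP)
    # Seamless path: 90 patients, dose selection at an interim (enrollment
    # assumed to continue), 120 more patients, then follow-up
    seam.append(months_to_enroll(90, RATE) + months_to_enroll(120, RATE) + FOLLOW_UP)

for name, t in [("separate", sep), ("seamless", seam)]:
    lo, med, hi = np.percentile(t, [2.5, 50, 97.5])
    print(f"{name:>9}: median {med:.0f} months, 95% prediction interval {lo:.0f} to {hi:.0f}")
```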

In that scenario,

the feedback was, no, we don't want to
do the inferentially seamless trial,

and it's a case where it's sort of, I
have this, wait a minute, what, what?

What do I not understand?

This is a very common response
actually, in a scenario like this.

Uh, another aspect of this that's so
important is the chain of communication.

If the communication is somebody said no, and that went through person A and person B, and then we hear the answer's no, it's hard to understand why, and it's hard to ask about it.

It's hard to potentially understand: is there a remedy to that no?

And many times over the years I've brought this up: we're trying to do innovative things, things that are not commonly done.

It's not infrequent that somebody says no. Operations says no, regulators say no, somebody says no. And understanding why, a lot of times there's a reasonably straightforward way to remedy that, to then get cheaper, faster, stronger.

The reason for this, which is a very common reason, is funding: the company is on the smaller side of the pharmaceutical industry.

A big pharma may be able to fund the phase
two and phase three, and they can set

up those criteria and they can run it.

What about a smaller biotech?

They currently don't have the
money to finish the 210 patients.

They can run the 90 patient trial.

They have the funding for that,
but they need to get funding

in order to enroll the rest.

So that's a very common response, a very real, uh, thing. I'm not gonna laugh at that in any particular way, but it's part of designing these trials, innovative trials: can we combat that?

So it's a very common scenario, by the way. It's the importance of being there when somebody says no, to say maybe we have a remedy for that.

So can we set up funding solutions, and triggers from the phase two that trigger funding?

And by the way, these funders could be private equity, the funders could be a larger pharmaceutical company, could be a range of individuals that might be interested in investing in the drug.

They would like to invest in a drug that's
going to be successful in this scenario.

So in the separate trials, what typically happens is the phase two data reads out and they unblind themselves to that data, and they can share that data with a potential funder, and the funder can play with it, and they go back to this "we know it when we see it." The funder then can decide, will I invest X dollars in this?

As we move forward for the
phase three trial, and then

I'm a partner with the biotech.

If this ends up being successful and the drug makes money, we're invested in that, and we bring in our experts and we're able to decipher from the phase two data whether we're gonna be successful in phase three, what the market share looks like, and is this a good investment for us?

And that's a very common scenario, and that's what happens in a typical phase two track.

Now what are we looking to do? We're looking to take that phase two trial and set up objective criteria that would trigger funding.

So you go to a funder and you say, okay, you're interested in funding this. Can we define what you would need to see in phase two to fund phase three?

If we can do that, we can do that interim analysis where we select the dose, and we can also let the funder know, did you hit your funding criteria?

Sometimes this is done separate from dose selection.

By the way, this can be done in a phase three trial, so this isn't unique to a seamless two/three trial. That was just the particular recent example that we're working on.

Sometimes this happens where the company's
ready to run phase three, but they

can't fund phase three in its entirety.

And a funder says, you know,
I'm interested in this drug,

but I need to see more.

I need more proof of concept of this before I'm willing to invest the big pot of money for phase three.

Sometimes you can start phase three and do an interim analysis of whether we hit that proof of concept, that proof for funding, where the funder says yes, if you could jump this hurdle, we would fund the rest of phase three, if we can objectively set that up.

Agreement can be made that an analysis takes place during the trial. I'm not gonna call it an interim analysis, and I'll come back to that, but an analysis takes place.

It's conducted by an independent statistical group, an independent CRO, and literally it's a checkbox: did we hit the funding criteria?

Maybe it's two or three checkboxes: did we jump the following hurdle?

And that is signed by the independent group and that is given to the sponsor, and we'll come back to who knows and who sees what, when.

But that checkbox could be presented to the funder, in agreement that they've agreed to now fund the rest of the trial based on the deal they have with the biotech.

And they know they hit proof of
concept in phase two, even though

they don't get to see the data.

Now, that's the big question of what you get to see during the course of it.

So, the data in a phase three trial with one of these analyses, or a seamless two/three trial: neither one is really different, because as far as regulators and the scientific community are concerned, those patients in the phase two part, or the earlier part of phase three, they're phase three patients.

It is considered bad form, and I'll come back to that, to unblind the data during the course of a phase three trial.

Sponsors remain blinded to what the effect size of the treatment is during the course of that. We don't want Jane Doe to be unblinded. We don't want the pharmaceutical company to know that Jane Doe or John Doe was treatment or placebo.

Is the data, you know, altered? Is there some aspect of knowing that unblinded data that changes the trial?

And this is this thing called operational bias.

So we'll come back to why, typically, you don't, but let's just assume that you can't unblind the data, go to a conference, present, ooh, look how good our data are in the middle of the phase three trial, will you help fund the rest?

Which is why a lot of times it's a separate trial, so that they can do that, or present their data.

In the middle of a phase three with one of these analyses, or a two/three, you can't unblind the data, you can't interrogate the data itself. You don't want to unblind yourself to any individual patient, but you can come back with a limited set of information of: is the data progressing well at this point, above a particular hurdle?

Futility analyses are done in trials a lot; this is different than a futility analysis.

What do we call this, by the way? We tend to call it either an administrative analysis or a financial analysis.

We'll write into the protocol
that an administrative or

financial analysis will be done.

We don't want to call it futility, because writing in futility means if you don't hit that, we're gonna stop enrolling patients, we're gonna change the trial.

But this analysis doesn't change the trial at all. It doesn't change the sample size, doesn't change how a patient is treated. All it does is potentially trigger funding.

It could be that it doesn't trigger the funding from that funder, but the company gets funding another way and the trial continues on.

So this is not something that affects patients or the scientific aspect of it. And so we don't want to call it futility. It's a misuse of futility if you do that for funding; call it administrative, call it financial, and have that done.

So let's come back a little bit to why we do that and who knows what, when.

If you read the FDA guidance document on adaptive designs where interim analyses are done, they have a great deal of concern about what they call operational bias.

Operational bias is a hard thing to define. Largely, it's that knowledge of the data alters the future of the trial or the interpretation of the data in some way.

Okay.

Imagine in the middle of a trial, if patients find out that the investigational treatment isn't working very well and many of them drop out of the trial, now you've got a huge missing data problem.

You've risked the whole interpretation of the trial and the ability to understand: is the treatment beneficial?

The opposite could happen if there's a really positive signal: that patients drop out and go elsewhere, that future patients enrolled look very different, that the investigator will no longer enroll this patient or that patient, or something changes.

Now, these are loosely defined, and by the way, it's hard to know if you ever have operational bias.

Uh, it's hard to defend that you didn't, if somebody says, oh, you knew this and it changed your trial. It's hard to prove that it wasn't there.

So we do a great deal to, to make sure
that there's no concerns about this

mysterious thing called operational bias.

So during the course of a phase three trial, we don't want the pharmaceutical company, the sponsor, to know the data. We don't want clinical sites to know the data, patients to know the data, and all that. It's important within the scientific credibility of the trial.

So in this circumstance, it's not uncommon that you get a piece of paper that says, was this criteria met?

And I'll come back to what
do those criteria look like?

What kind of things do we do for that?

But is it met?

Yes.

No.

Is criteria one met or
criteria two met or no?

So three check boxes.

So what does that mean? Yes, you can reverse-engineer from the checkbox, roughly, are the data above or below this. Largely, that's not considered to create the risk of operational bias.

You still want to keep that information relatively confidential. You don't want to publicly disclose, if you can prevent it, that our effect size was above 20% at the interim; does that cause this potential operational bias?

So you set up a relationship with a funder that says you're gonna get this checkbox. At the sponsor, maybe somebody in the C-suite gets this as well, a CFO, somebody who's not going to sites and talking to clinicians and investigators that are enrolling patients; you'd like to create that separation.

You'd like the people involved in running the trial to remain as blinded as possible during the course of it.

So it might be that the CFO, the CEO, sees that data, but otherwise it is firewalled from the rest of the company.

And then the funder can get this
information also agreeing not to

publicly disclose it, but they agree
to write a check if that's there.

Now, what kind of criteria do we tend to use here in a phase three trial? In that, uh, it's not uncommon that we use the predictive probability of trial success.

Now, what is that? That is a calculation that's integrating over the uncertainty of the treatment effect: what's the probability that statistical significance is gonna be met when we enroll out the full trial?

Sometimes this is called assurance, based on a particular prior distribution.

Uh, in that scenario, sometimes conditional power is used: given the effect size we've seen, now suppose that's the real effect size, what's the probability of success?

The Bayesian predictive probability of success doesn't assume a single effect. It assumes the uncertainty of that effect given the data we have. It's a very natural Bayesian thing to integrate over the uncertainty of the treatment effect and calculate the probability that we're gonna meet statistical significance at the end of the trial.
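A minimal sketch of that calculation, assuming a binary endpoint, a planned 90 vs 90 trial, flat Beta(1, 1) priors, and made-up interim counts; the final test here is a simple two-proportion z-test rather than whatever analysis a real protocol would specify.

```python
# Bayesian predictive probability of success at an interim (binary endpoint).
# All interim counts and design numbers below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

N_PER_ARM = 90            # planned final size per arm
n_obs = 45                # made-up interim data: 45 patients per arm observed
x_trt, x_ctl = 27, 18     # responders observed on treatment / control

def final_trial_wins(x_t, x_c, n=N_PER_ARM, alpha_one_sided=0.025):
    """Simple pooled two-proportion z-test on the completed trial."""
    p_t, p_c = x_t / n, x_c / n
    p_pool = (x_t + x_c) / (2 * n)
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
    if se == 0:
        return False
    return (p_t - p_c) / se > stats.norm.ppf(1 - alpha_one_sided)

n_sims, wins = 20_000, 0
for _ in range(n_sims):
    # Draw response rates from the Beta posteriors given the interim data
    p_t = rng.beta(1 + x_trt, 1 + n_obs - x_trt)
    p_c = rng.beta(1 + x_ctl, 1 + n_obs - x_ctl)
    # Impute the remaining patients from the posterior predictive distribution
    x_t_final = x_trt + rng.binomial(N_PER_ARM - n_obs, p_t)
    x_c_final = x_ctl + rng.binomial(N_PER_ARM - n_obs, p_c)
    wins += final_trial_wins(x_t_final, x_c_final)

print(f"Predictive probability of success ~ {wins / n_sims:.2f}")

# Conditional power, by contrast, would freeze the observed rates (27/45, 18/45)
# as the "true" rates and simulate only the remaining patients from them.
```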

What I've described, a lot of times, and you'll not be surprised that I'll go to the sports world here: uh, you can watch an NFL football game, you can watch a basketball game, uh, you can go to ESPN, and it will tell you the probability that one team's gonna win the game at any point in the game.

So at halftime, one team's ahead by 14 points. It might say the probability they win this game is 92%, or if it's a tie game, it might be 50/50 in that setting.

That's the predictive
probability of success.

That's exactly what we're doing in a
clinical trial, that we're calculating

that given what we understand now and
the strength of the two teams, the

strength of the two treatments, what's
the probability that we win the trial?

Now, a clinical trial is not a
game where the high score wins.

You've got to win by a point spread.

You've gotta be statistically
significantly better.

So if the two treatments are tied at this point, the chance of winning this trial is gonna be less than 50%. It depends on what the sample size is.

If you are seeing exactly the effect size you need to show at the end of the trial, that's where it's about 50/50 that you're gonna win the trial.

If the data gets a little
bit better, you're gonna win.

If it's a little bit worse, you're
gonna lose statistical significance.

That would be a 50%
probability of success.

That's pretty good data, by the way, depending on the disease. If this were an Alzheimer's trial, for example, or an acute stroke trial, and you've got a 50/50 shot of winning the trial, you might want to invest in that trial.

That's a really good gamble in that scenario, uh, because the outcome is such a positive outcome for you. The revenue to reap in that scenario, if you win the trial, is positive enough.

Now, that predictive probability of success sometimes gets confused for power. Power is not something that changes based on the data that you observed.

Power is something that says, if the treatment effect is delta, here's the chance we win the trial. The trial's gonna be 80 or 90% powered. Every trial is 80 or 90% powered; you just figure out what delta makes it powered.

So there are times where they say, oh, to invest, we want it to be 90%. That's a really, really high predictive probability.

Think of that sports game. This is exactly the scenario: you're sitting there, you could make a gamble on team A or team B, and at halftime, a team has a 90% chance to win.

If you're investing an amount A and you get double A at the end, of course you're gonna invest in that scenario. Now, the people, the group that you're investing with, understand the odds, and the payout of that is going to be adjusted by it at the time you make that.

Now, this is different; you get to make that investment on it. Now, if you say, sure, I'll invest at 90%, it's pretty unlikely, well, it may be unlikely that the company can hit that early on. But maybe they can hit 50% or 30%; maybe that's where I would jump in.

You can go back to thinking about,
suppose I had a separate phase two trial.

When would I invest?

What effect size would I need
to see before I'd be willing

to invest in phase three?

That can all be incorporated within this administrative, financial analysis. And again, the enemy of that is: I can't tell you when I would invest, you have to show me the data.

Now, in many scenarios, this is kind of a weird concept, this predictive probability of success in the trial, to the people who are so used to building a trial, where they're used to talking about power and type one error and all of that.

This is all of a sudden
a little bit different.

They may be familiar with conditional
power and they may be familiar

with futility analyses, uh, but this predictive probability of success is a little bit of a different sort of beast in the trial.

So sometimes they're concerned that funders might not understand that. The interesting thing is, funders understand that immediately. That's the world of funding: what do I invest, what do I make, and what's the likelihood that I make it?

And they can evaluate that investment incredibly straightforwardly. So this is a powerful thing to funders.

Now, why might a funder, uh, agree to this objective criteria? Well, let's go back to the beginning of this: if they say no, run a separate phase two and show me the data, and then I might invest, they're investing in something that's 45 months to the readout of potential success, and it's a bigger investment.

270 patients is the investment, and it's 45 months as opposed to 25 months and a smaller investment, where we get it sooner.

It's a win, win, win.

If we can agree to this potential funding, it's a win for the pharmaceutical company.

It's a win for the drug.

It's a win for investors.

It's a win for patients.

It's a win for everybody in this agreement, if that can be agreed upon.

Now it comes back to this.

I know it when I see it.

Can we define when we
would invest in that?

It might even be a function of, if
we hit this higher hurdle, we'll

invest, and here's the outcome.

If we hit the lower hurdle, maybe
they get a higher fraction of the

outcome because it's a bigger risk.

I, I don't live in that world, but you
can imagine fairly straightforwardly

this being a part of this agreement.

This is a very common situation. Many groups are in this; even big pharma needs to get big pharma to agree to it, and maybe large pharma doesn't want to say we're gonna invest in a 1,500-patient trial.

We haven't seen enough individually, uh, at the corporate level to say we're gonna invest in 1,500. Run a phase two trial; well, turn it into a phase three if you're ready for that, or a seamless two/three where you have a funding go/no-go internally in the company. That's gotta be much easier to do.

And again, it's a win, win, win for the pharmaceutical company in this scenario. If you can define it and create it, it's a very straightforward thing to do in the trial, an important part in the innovation part of it.

A little bit of a warning here on some of the terminology in this that you'll hear. Uh, companies talk about a TPP, a target product profile. They're going into it where they know, to be competitive in the market, they need to see a 20% improvement, a 10% improvement, a 50% improvement, whatever that is; that's what we're targeting for that product.

I have to tell you, lots of people talk about that and they all interpret it very differently.

Sometimes it's the delta I want to power my trial to win on, and so if the effect is a 30% difference, I want the trial to be 90% powered for that 30% difference. But you could win if you see 20, about two thirds of that effect; you can win the trial.

Some people call the TPP that 20, that when the trial's over, we need to be at that value or above to be competitive in the market.

Those are two very, very different things, and then they're two very, very different things for funding.

So if 30% is what you're powering the trial for, and at the interim you say we need to see 30% or above, it's disconnected with the trial, 'cause you can win at 20.

Halfway through the trial, if you see 30, what it's powered for, or you see 29, that's awesome. You are likely to win the trial. Again, your probability of success is probably hovering over 90% at that particular time point.
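A quick back-of-the-envelope check of that "about two thirds" point, assuming a normal test statistic, one-sided 2.5% alpha, and 90% power; the 30% effect is just the illustrative number used above.

```python
# Smallest observed effect that still reaches significance in a trial that is
# sized for 90% power at a 30% effect (normal approximation, one-sided 2.5%).
from scipy import stats

alpha, power = 0.025, 0.90
z_alpha = stats.norm.ppf(1 - alpha)   # ~1.96
z_beta = stats.norm.ppf(power)        # ~1.28

delta_powered = 0.30                  # effect the trial is powered for
# Sample size is chosen so that delta_powered / SE = z_alpha + z_beta, so the
# minimum observed effect that wins is the fraction z_alpha / (z_alpha + z_beta):
delta_win = delta_powered * z_alpha / (z_alpha + z_beta)
print(f"Minimum winning observed effect ~ {delta_win:.2f}")   # ~0.18, roughly 2/3 of 0.30
```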

Now, if somebody says, oh, but you're not meeting TPP, that's the part where the terminology gets in the way of good communication.

So one of the ways to combat this "I'll know it when I see it" is to show it to them: show them a dataset and say, if we have this data in phase II, would you fund us?

Because it's hard for, uh, lots of people, non-statisticians, to think about the quantitative aspects of what a predictive probability of success is. What does this mean? What does that mean?

But if I can see the data, if I can see the effect size, that's meaningful to me, clinically meaningful. So we show them a dataset and say, would you fund it if we showed you this dataset?

Sometimes they say, you know, I also need to see this. Okay, we'll show you that as well. Would you fund it? They say, you know, if we see this data, we would fund it.

Okay, how about this dataset? Would you fund this dataset?

And we show them example datasets, Kaplan-Meier plots of what the data might look like, uh, estimated hazard ratios, differences in proportions, mean changes.

We show them these data, and then you show them thousands. You show them scatter plots of the various effect sizes.

And then you say, by the way, this point right here that you would fund, the predictive probability of success is 82%, or it's 63%, or it's this. And now here's this other dataset, and you say, here's the predictive probability.

And they get to understand it; it's meaningful to them. I know it when I see it, through that quantity, because it's now familiar to me.
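A small sketch of the kind of display described here: a handful of hypothetical phase 2 readouts, each tagged with its predictive probability of winning the seamless trial. The normal endpoint, flat prior, stage sizes, and adjusted critical value are illustrative assumptions, the same ones used in the earlier sketch.

```python
# Tag a few hypothetical phase 2 readouts with their predictive probability of
# final success in the seamless trial.  All design numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)

SD, N1, N2, Z_CRIT = 1.0, 30, 60, 2.21   # per-arm stage sizes; assumed adjusted critical value

def ppos(observed_diff, n_sims=20_000):
    """Predictive probability of final success given a stage 1 observed mean
    difference, using a flat prior on the treatment effect and known SD."""
    se1 = SD * np.sqrt(2 / N1)
    delta = rng.normal(observed_diff, se1, n_sims)   # posterior of the true effect
    se2 = SD * np.sqrt(2 / N2)
    diff2 = rng.normal(delta, se2)                   # predictive stage 2 difference
    # Pool stage 1 and stage 2 into the final 90 vs 90 estimate and z statistic
    pooled = (N1 * observed_diff + N2 * diff2) / (N1 + N2)
    z_final = pooled / (SD * np.sqrt(2 / (N1 + N2)))
    return np.mean(z_final > Z_CRIT)

# Hypothetical phase 2 datasets a funder might be shown
for d in [0.10, 0.25, 0.40, 0.55]:
    print(f"observed phase 2 difference {d:.2f}: predictive probability ~ {ppos(d):.2f}")
```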

So while that initially can be thought of as, this is the enemy, that person is somebody that you can work with: simulating this, showing example datasets, tying it to a particular criterion. Sometimes it changes the criterion; let's just use the observed effect at that time.

Many times it can iterate. And a lot of times, by the way, you're making this decision in a seamless two/three trial where you don't have complete data on the phase two part. That's okay.

You might be building longitudinal modeling of this. You might over-enroll phase two a little bit to get enough completers, because those patients count in phase three. They're not wasted.

Maybe if you are enrolling one to one to one and you over-enroll the three arms, two of those arms would continue in phase three. So if you don't continue the trial and you stop, they've been over-enrolled. But if you continue, only a third of those patients have been over-enrolled; the other ones contribute to the end of the phase three analysis.

These are all the things that go into these types of trials and these types of comparisons, in order to tackle what is a ubiquitous problem: you have to be able to fund the trial.

All right, so I hope you've enjoyed the discussion of funding, jumping hurdles, combating the "I'll know it when I see it."

And thank you for joining me.

Until next time, we'll
be here in the interim.