In the Interim...

In the 60th episode of “In the Interim…”, Dr. Scott Berry, Dr. Nick Berry, and Dr. Joe Marion discuss how Berry Consultants uses AI in clinical trial design and software development. The conversation addresses current applications, limitations, implications for productivity, and the ongoing need for human expertise in clinical trial design. The team examines both promising use cases and the risks associated with security, compliance, and AI-generated statistical work.

Key Highlights
  • AI is used to develop user interfaces and code modules, notably expediting tasks like R Shiny app development and software prototyping.
  • Statistical coding for complex modeling and simulation—such as numerical integration and predictive probability calculations—remains unreliable when delegated to AI and still requires direct oversight and manual review.
  • Attention to security and confidentiality is central; Berry prohibits the use of client-sensitive or patient data within AI tools.
  • Generative AI assists with drafting and editing documents, but the output tends to be non-specific, generic, and sometimes imprecise, requiring expert editorial input before use.
  • While embracing AI to improve efficiency, the discussion is critical of current AI hype, especially around black-box modeling, and pushes back against the perception that current AI can replace domain-specific statistical design or strategic judgment.
For more, visit us at https://www.berryconsultants.com/

Creators and Guests

Host
Scott Berry
President and a Senior Statistical Scientist at Berry Consultants, LLC

What is In the Interim...?

A podcast on statistical science and clinical trials.

Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott: All right.

Welcome everybody.

Back to In the Interim.

This is actually our 60th episode of
In the Interim. I'm your host, Scott

Berry, and I'm joined by two other senior
statistical scientists at Berry, Dr.

Nicholas Berry.

And Dr.

Joe Marion.

Welcome back to In the Interim.

Nick Berry: Thanks.

Joe Marion: Uh, thanks Scott.

Happy to be here.

Scott: So, the 60th episode
of In the Interim. 60 interims.

Interestingly, uh, Joe was
just recently involved in the 60th

interim analysis of a clinical trial.

And we won't say what trial that was,
but you're doing double duty on the 60.

Joe Marion: Yeah, it's, uh, it's funny
how that number creeps up on you, right?

People always ask me like, how many
interims can you have in a clinical trial?

Or like, you know, sure.

Surely, you know, you
can't do too many of these.

Uh, sky's the limit, as
far as I can tell.

If I think, if I stayed at Berry for
another, uh, 30 years, you know, we'd be

at what, how many interims would that be?

It'd be a lot.

Scott: Be a lot.

Joe Marion: lot.

Yeah.

Nick Berry: We joke about how you, uh,

spend your alpha at Berry.

Sometimes when you do an interim,
you're spending your personal alpha.

So 60 interims is a lot
of alpha for you to spend.

Joe Marion: I am all out of alpha, man.

Well, I mean, if you believe
the alpha spending literature,

every time you get a failure,
you get to recover some alpha.

Right?

So maybe I've got plenty
of alpha, you just dunno.

Yeah, yeah.

Yeah.

Scott: Alright.

There's nothing better than
adaptive design humor, uh, for,

Joe Marion: That's

Scott: our episodes.

Yeah.

Joe Marion: People love it and

Nick Berry: Yeah.

Joe Marion: invite us to parties
because they want to hear it.

Nick Berry: Hmm.

Joe Marion: Yeah.

Scott: Okay.

So today's episode.

We've done 60 episodes of
this, and we're not running out of ideas,

just like we're not running out of alpha.

We have lots of ideas, and I'm asked
all the time when I go out: how

is AI affecting Berry Consultants?

I'm asked by statisticians.

I'm asked by clinicians.

I'm asked by people who don't really
know what we do, but know we have

a company, uh, in the tech space.

So today's episode is

AI at Berry.

How do we use it?

What, what do we know about it?

What do we think where the world is going?

And today, I have Nick and Joe
on, and they're part of our

committee that is exploring AI, and
it's an incredibly rapidly moving thing.

And we're keeping
track of this, so we're gonna

talk about today, AI at Berry.

I think it's important to
set up what Berry does.

So statisticians within the clinical
trial space do lots of different things.

Some of those things we don't do.

Some things we do, and I think
that's important in considering

the role of AI in those things.

So Berry Consultants at its heart
is a clinical trial science company.

We have clinical trial
scientists that work at Berry.

Most of them are statistically trained.

We do have clinicians, we have
mathematicians, computer scientists,

but they are clinical trial scientists.

We do really three major, major things,
core competencies, and the first

is trial design and trial strategy.

So the science of clinical trial
development and how the trial fits

into that, the science of it: most
of the time we're involved in designs

that would be considered innovative,

meaning not typical, not fixed trials.

So we do adaptive trials, seamless
trials, adaptive enrichment trials.

We incorporate external data within a
clinical trial, more complicated analyses in a trial.

Platform trials,
basket trials, all of these

fall under this umbrella of
innovative clinical trial design.

So we spend a lot of time in
the design of these trials.

The implementation of these innovative
trials, whether it's carrying out the

analyses, doing 60 interim analyses in
a trial, needs additional expertise,

and so we have a group that are experts
in implementing innovative trial designs.

So that's another core competency
here at Berry, in doing that.

Clinical trial simulation plays a critical
role in the design of innovative trials:

simulation-guided clinical trial design.

We do it on individual projects.

We might be writing custom
code for individual projects.

We also have software that
we license to pharmaceutical companies,

academics, government, and CROs

for clinical trial simulation.

And so we have a product that does that.

It's another core competency
at Berry, in what we work on

and what we do on a daily basis.

So thinking about what is the role
of AI in these various activities?

What we don't do is the
process stuff in trials.

We stay out of the
quote-unquote CRO space.
We don't create databases for trials.

We don't do data management in
trials, generation of the tables,

listings and figures, the
SAS programs that generate these.

Creation of CRFs.

We don't do that.

CROs are really good at that,
and that's not part of the strategy

part of clinical trial science.

SAPs play an interesting role, and maybe
an interesting role in the whole

question of AI in all of this.

We do contribute to
statistical analysis plans.

We don't create the huge tome of
all of the tables, listings and

figures, concomitant meds and all of
that, because the group doing

the TLFs should be involved in that.

But we play a really

critical contributing role to
the analysis, the objectives within the SAP.

So we work on SAPs, but we don't sort
of generate them as a core competency.

The last thing is we
are not experts in AI.

Uh, now, there's an interesting conversation:

should we be calling ourselves
experts in AI? But

we don't consider ourselves that.

So in this question of AI, we're not
selling clients that we're AI experts, but we're

utilizing AI, and how we're using it,

we're learning about that, and that's
what we're gonna talk about today.

So maybe I'll, maybe I'll
throw it to Nick now.

Nick, you do
design, you do implementation.

Uh, you spend the majority
of your time on the software team.

How do you use AI?

Nick Berry: Yeah, I mean, this is
maybe the most straightforward, um,

use case, where I think it's pretty
generally accepted that AI is very

good at coding, especially if you're

limited to things that are, you
know, all over the internet.

For me, it's incredibly good at
creating user interfaces.

Uh, it's better than I am at modifying
visual interfaces, connecting,

um, buttons to effects, and making
sure all the data is propagating

to the right places, and tracking

data storage models and things like
that. It's incredibly good at that.

Um, and I definitely use it,

uh, for that.

We have FACTS, which is now, I don't know, 17
years old. FACTS is the name of our software.

It was created obviously before AI.

The desktop app, the FACTS desktop app, has,

I mean, essentially no, uh, LLM-generated
code in it, right?

It was written predating it.

Yeah.

Um, we are working on new technologies,
uh, developing new fancy versions

of FACTS and other applications, and
it comes in handy in that. With that

said, like with the user interfaces
thing that I love using it for, I

am by now familiar with some
of the pitfalls of using it.

Um, that probably makes me better at
using it in general, but it's not

incredibly good at writing technical
statistical code, in my experience.

Um.

Some things it does really well.

Uh, you know, I think if you're simulating
a trial and you want it to generate

an accrual of patients for you, like,
it knows it's gonna do the cumulative

sum of exponentially distributed
arrival times, and it does that every time.

That's not what I'm talking about.

Um, yeah.
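For readers who haven't seen it, the accrual pattern Nick refers to is the standard Poisson-process simulation: patient enrollment times are the cumulative sum of exponential inter-arrival times. A minimal sketch; the rate and sample size here are illustrative, not from any Berry design:

```python
import numpy as np

rng = np.random.default_rng(7)

rate = 5.0   # assumed accrual rate: 5 patients per week
n = 200      # assumed number of patients to enroll

# Exponential inter-arrival times; their cumulative sum gives each
# patient's enrollment time, i.e. a homogeneous Poisson accrual process.
inter_arrival = rng.exponential(scale=1.0 / rate, size=n)
accrual_times = np.cumsum(inter_arrival)
```

This is the "it does that every time" case: a well-known recipe the model reliably reproduces.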

Predictive probabilities is a case
where I've seen it, you know, it thinks

it knows what it's doing, and, uh,
even says something really, you know,

reassuring about what it's doing,
and then, lo and behold, it's not

doing what you think it's doing.

And so some of the more, you know,
maybe nuanced statistical concepts,

I think it struggles at.
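As a concrete illustration of what a predictive probability involves, here is a sketch of one version of the calculation: the probability, given interim data, that a trial meets its final success rule. The prior, sample sizes, and success rule are all illustrative assumptions, not a Berry design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative interim data: 12 responses in the first 30 of 60 patients.
x, n_interim, n_final = 12, 30, 60
a0, b0 = 1.0, 1.0                 # assumed Beta(1, 1) prior on response rate
n_remaining = n_final - n_interim

# Draw the response rate from its interim posterior, Beta(a0+x, b0+n-x),
# then draw the remaining patients from the posterior predictive.
p = rng.beta(a0 + x, b0 + n_interim - x, size=100_000)
future_responses = rng.binomial(n_remaining, p)

# Illustrative final success rule: at least 25 responses out of 60.
pred_prob = np.mean(x + future_responses >= 25)
```

A common failure mode in generated code is plugging a point estimate of the rate into the binomial instead of integrating over the posterior, which understates the uncertainty; the code runs, looks reassuring, and is quietly wrong.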

But in general, I think it's really good.

Even, you know, in simulations,
writing simulation code, it will

give you a backbone, it'll give
you a nice flow through your app,

and help you write stuff.

And so I think we use it for a
lot of, like, non-technical coding.

I tried to get it to help there. We
are adding analytical calculations

for a lot of designs to FACTS.

And, you know, this is
a numerical integration.

It's, it's a,

it's really computer science heavy
code, not so much, like, statistical

heavy code, in a lot of ways.

And in that case, it tells you right

Joe Marion: Is it good?

Nick Berry: away, like, oh, you're using
this person's integration algorithm.

These are your weights, so on, so on.

And then it randomly slips in
something you don't want to be there.

And it's hard to catch that in a
lot of cases because, you know, those

algorithms are not, like, intuitive
to look at and say, oh yeah, okay,

I see exactly where it went wrong.

And so we had some mixed,
uh, mixed results in, like,

that heavy computation area.
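One mitigation for the "not intuitive to look at" problem is to sanity-check any generated quadrature code against integrals with known closed forms. A sketch using Gauss-Legendre nodes and weights; the test integral is an arbitrary choice:

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1]. The weights are exactly
# the kind of opaque constants where a slipped-in error is hard to spot
# by eye, so we verify against an integral with a closed-form answer.
nodes, weights = np.polynomial.legendre.leggauss(20)

# Integrate exp(x) over [0, 2] by mapping [-1, 1] onto [a, b].
a, b = 0.0, 2.0
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
approx = 0.5 * (b - a) * np.sum(weights * np.exp(x))
exact = np.exp(2.0) - 1.0  # closed form, for comparison
```

A corrupted weight or a wrong change-of-interval factor shows up immediately in the comparison, even when it is invisible in the code itself.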

Joe Marion: Is the cost of
debugging worth the cost of using it?

Like, what's
the trade-off in time?

And then, like, I don't know,
what's the trade-off in enjoyment?

Like, what would you rather be doing?

Yeah,

Nick Berry: For me,
it was, it was worth it.

Um,

yeah,

that's not like
a dopamine hit for me.

Right.

Like writing numerical
integration code didn't, actually,

Joe Marion: No,

Nick Berry: that doesn't keep
the lights on, like, mentally, for

Joe Marion: You're not coming over
to my house, that's for sure,

coding away on Friday nights.

Yeah.

Nick Berry: But interestingly,
the UI stuff, watching it make big

changes and, like, oh, I feel like
I have a really nice flow for this.

Like, I'm gonna help people
understand what they're inputting in

our app, and it's gonna be so nice.

That is a dopamine hit, and the
amount that it can do sort of in

one shot is kind of incredible.

And it's low stakes, right?

A lot of times, you
can just look at it and say, yeah,

it did the right thing.

And I can look at the
database and say, yeah, all the

stuff's where it needs to be.

Other things are higher stakes,
like my group sequential probabilities

being calculated correctly, and, is
the type one error that I'm reporting

actually a type one error, and things like that.
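The higher-stakes check Nick describes is something simulation handles directly, which is why AI-generated group sequential code still gets verified by hand. A sketch under an assumed design; the looks, boundary, and outcome model are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative design: normal outcome with known sd = 1, interim looks
# at n = 50, 100, 150; stop for success if the z-statistic crosses 2.34.
looks = [50, 100, 150]
z_crit = 2.34
n_sims = 20_000

# Simulate all trials under the null (true mean 0) and compute the
# z-statistic at each look from the running sum of observations.
data = rng.normal(0.0, 1.0, size=(n_sims, looks[-1]))
cumsums = np.cumsum(data, axis=1)
z = np.stack([cumsums[:, n - 1] / np.sqrt(n) for n in looks], axis=1)

# Overall type one error: the probability any look crosses the boundary.
type1 = np.mean((z > z_crit).any(axis=1))
```

The point of the exercise: the number the analysis code reports can be confirmed against a direct Monte Carlo estimate of the overall error, rather than trusted because the code "says something reassuring."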

Joe Marion: Okay.

Nick Berry: yeah,

Scott: So let's talk a little bit about
efficiency. You demonstrated

to me an example the other day. And, I
think, by the way, I can ask: what's

your favorite AI for code generation?

Nick Berry: I, I use.

Claude, um, I use

Scott: Okay.

Nick Berry: as of today.

4.7.

Um, is the model

Joe Marion: Extra thinking.

Right?

Or maximum thinking How?

How much thinking do you think?

Yeah.

Extra high thinking.

Yeah.

Yeah.

Nick Berry: places to be.

I can't use Max.

Yeah.

Scott: Okay, so you provided a
demonstration of a pretty simple clinical

trial design, but with simulations, made
in an R Shiny app, a really beautiful app

for exploring things and all of that.

And I know you kind of knew what you
wanted to do going into this, but largely

you could write this code in 10 minutes.

Uh, it was the demonstration of
this with Claude.

Nick Berry: right?

Joe Marion: Well, Claude writes
the code in 10 minutes.

Scott: yeah.

No, I, yeah, I meant using AI, where if
you sat down from scratch writing

this, it's, you know, it's 10 days.

Uh, and, you know, I don't
know the magnitude of that.

And now, yeah, I just
challenged Nick about how long he

could do it for, but, but.

Nick Berry: I might never need to write
reactivity code in Shiny ever again.

I mean, I'm not a Shiny developer, right?

I'm not even really a web developer,
even though I do it to some extent.

But I mean, I'm a statistical
developer, uh, in most regards.

so, Claude

Scott: was

Nick Berry: good at doing the
things that I don't really
know that well, right.

These are the fringe cases of my job
that I've had to learn. I've had to

go and, you know, take courses online
about how to do web development, and now

I'm passing off a lot of that skillset,
um, which is great as long as it's doing

a good job, which it seems to be doing.

Um, but the skillset that I'm sort
of an expert in, which is the more

statistical computation, writing
MCMC code, things like that.

I'm doing that still.

Right.

That's not something I'm passing
off to, to AI at this point.

Scott: So we we're putting
together a development plan.

So you, you mentioned fax as a
desktop product, for example.

We're making it a cloud-based product.

We're redesigning the GUI to this.

We're adding a, a, a number
of really nice things.

Had we done this five
years ago, we would've had

more developers, and it would've
taken significantly longer.

All of a sudden, with what AI
has done in the last year,

we need fewer GUI developers.

Within all of this, Nick has
the productivity of maybe three

developers of years ago, in terms of time.

This is, uh, this is game-changing,
an enormous ability for us to do

this much more efficiently.

Nick Berry: Well, in my mind
it's like the perfect use case.

I'm not gonna say, like,
oh, we 10x'd our development.

That's not,

Scott: Yeah.

Nick Berry: It's a mature app that
we've spent 15 years with

statistical programmers developing.

We've always had the engines
in the background, like,

the crown jewel, right?

This is the multiple imputation

Joe Marion: part, right.

Nick Berry: response modeling, all
the good stuff that drives FACTS

and makes it do stuff that no other
software can do. That is there.

It's essentially unchanged.

The only thing we have to do for this web
development is create a facade around it

that takes in inputs, converts them to
parameters, and passes them into the engines.

So I think it's like the
perfect use case, right?

The tests that we use
for current FACTS carry over.

They go to web FACTS, like,
you know, our testing.
We can run these on the cloud,
we can run them on the desktop

and make sure the results match.

Like, we have so many protections
while we develop this that it's

like the perfect use case, uh, for,
you know, this:

AI as a coding agent, at least in my mind.

And so we've gotten really
nice returns from it as like

Scott: Yeah.

Nick Berry: a UI scaffolder.

Joe Marion: Scott, let me
ask you a question.

Now Nick is doing the work of three Nicks.

I mean, we're making up the number
three, but let's just say it's three.

Scott: Yeah.

Yeah.

Joe Marion: So, I mean,

to me that says maybe you
want more Nicks, right?

Like, in some sense,
you're getting three times the

value for the same person.

Like, maybe you want even more. Like,
maybe it's not that

we're more efficient, but, like, I
mean, maybe you'd want nine

Nicks' worth of capacity now.

Right.

And

Scott: Yeah.

Joe Marion: I

Scott: Yep.

Joe Marion: At the margin, it's kind of
changing the value of what we can do.

Scott: Yeah.

From a business perspective,
this is really interesting.

Do I have the work to support
nine Nicks, you know, sort of

thing, within that, in the development?

And so we, as a company, we've
talked about this.

We've got a good amount of business.

Berry's doing very well.

We could expand, we could
add more people to Berry. It's an

interesting question of, in five
years, will all of the individuals

at Berry be three times themselves,
five times themselves, and all of a

sudden, you know, the work needed?

Do I need more people?

It's a really interesting thing
from a business perspective, and

software is different than design.

Joe Marion: Yeah, software is,
I'm thinking more on the software

side of the house as well.

This is like,

Scott: right,

Joe Marion: you know, you
could do things that

would've been really
expensive before, and now,

now maybe they make sense,
right? Like, now they're,

Scott: right,

Nick Berry: The skillset that makes me
good for the software team right now

is not the ability to develop software.

It's a little bit of that, but
mostly it's me being able to

see what I want, and have ideas about what I

want from, you know, we've
learned a lot of lessons

with FACTS desktop over the years.

We're trying to improve this
cloud app to resolve all the things we

wish we'd resolved, but being able to
know what I think people would expect,

what I want this new app to do.

And then precisely state, and
this is where my software skills

come in, tell the

AI agent exactly what I want it to do.

And a lot of times that involves
speaking in sort of software terms and

using things that, uh, aid me in being
precise about specifying what we want.

Um, and especially for, like, the front end,

and for things like that, it's
mostly just having a vision and

being extremely specific about
how you want to implement it.

Joe Marion: Yeah.

And having good taste.

Having a good vision,

Nick Berry: vision.

Scott: Yeah.

Nick Berry: A good

Joe Marion: yeah.

I think both of those
things matter, right.

Nick Berry: Yeah.

Yeah.

Joe Marion: It's an incredible
time to build things.

Scott: So this is gonna come back,

as in, you brought up the
really important issue that

you have this imagination and
ability to know where to go.

You're consuming lots of information
and making these really good decisions.

And then you can tell AI to do
something and you know what it is.

And AI's really, really good at taking
those instructions and creating something,

and you're iterating it and you're, you're
on top of it and you're controlling it.

Nick Berry: It's been fun because I
can ask it to do three things and I

can pick which one I like the best.

Whereas normally I, I don't have time
to do all three things and look at

Joe Marion: Yeah, you're
not gonna do three things,

Nick Berry: things.

I can look at

Scott: Yep.

Nick Berry: I can say, oh, this
one actually turned out the best.

Let's do that one.

And like that's, yeah, so

Scott: Yep,

Nick Berry: The work is actually just getting to look

Scott: yep,

Nick Berry: at the options
live in the app and going so,

Scott: yep.

Joe Marion: And I,

Scott: Joe, how.

Joe Marion: That's true for general use too, right?

Is like,

Nick Berry: sure.

Joe Marion: You can ask for multiple
things, give it feedback on what

you like. Like, you know, the labor
is in some sense cheap on its end,

Scott: Mm.

Joe Marion: uh, yeah.

You can iterate with it and interact with it.

Yeah.

Scott: Okay.

So that, maybe we should make
sure we touch on how to use it well.

But that sounds like that's
an important role of it.

So, Joe spends more of his
time in the design of clinical trials.

He does implementation, 60
interims in a particular trial.

Um, what, how do you use AI at Berry?

Joe Marion: Yeah, I mean, I
agree with what Nick said, of course, right?

Like, I think

coding is where these
things are at their very best.

So, you know, we're
writing simulation code.

Um, we do a lot of visualization.

It's good at visualization, but of
course, you gotta be very specific, right?

You have to know what to
tell it that you want.

It'll, it'll produce plots, but
they're often like, you know,

not quite of the right quality.

And so, you know, one thing you can do is,
like, you can have a style guide

for how you want plots to look, and that
kind of thing, and that helps it sort of

one-shot those tasks a little bit better.
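A style guide for plots can be as simple as a reusable rcParams dictionary applied before any plotting, whether it is handed to the model as context or applied directly in code. The values below are illustrative, not Berry's actual house style:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Illustrative "house style": one dict, applied before any plotting,
# so generated figures come back closer to the desired look first try.
HOUSE_STYLE = {
    "figure.figsize": (7.0, 4.0),
    "axes.spines.top": False,
    "axes.spines.right": False,
    "axes.grid": True,
    "grid.alpha": 0.3,
    "font.size": 11,
}
plt.rcParams.update(HOUSE_STYLE)

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [0.62, 0.71, 0.83], marker="o")  # placeholder data
ax.set_xlabel("Interim analysis")
ax.set_ylabel("Predictive probability of success")
```

Centralizing style this way also means a one-line fix when the look needs to change, instead of re-prompting for every figure.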

Um, things I'm really excited about.

Um, well, one application that
I think is really cool: there's

this common, like, step of
regulatory interactions,

where, when you're preparing for
a briefing with the FDA, I feel like

every company ever that's doing this
preparation, one of the things they

do is they think about what could the
FDA's response be to our questions?

What should we be prepared to answer?

How do we do that strategizing?

I, I sort of always, I don't know.

I mean, I felt it was necessary, right?

But I hated writing those questions down.

I kind of knew what they were.

It's tremendously interesting to get the
AI critique of, like, a sanitized document.

What is the regulatory
perspective gonna be?

And I think it's really valuable to us
at Berry because, you know,

based on the things it's trained on, it's
gonna give you a pretty traditional view.

And I think that can really help
you sort of understand what that is.

It's not the argument we're usually
articulating because of, you know, the

kind of work that we do, but uh, I love
that ability to get critique from it.

I think it's tremendously fascinating.

Nick Berry: In my experience, when
doing something like that, it's usually

like a numbers game, getting a lot
of different things from the

AI model, where you can pick and
choose the things that are actually

intellectually interesting.

And like you said, I kinda wanna
emphasize, we work at Berry Consultants.

We're obviously
not a homogeneous population of

people, but in general we're designing
adaptive trials, often with Bayesian

interim analyses, things like that.

And the literature, you know, everything
that's posted online, the vast majority of

it is traditional and frequentist and not
as innovative as a lot of things we do.

So the LLM

thinks like that, right?

So you're almost getting a, you
are getting a conservative

perspective when you ask it questions.

Joe Marion: Yeah, complementary
review, and then you can think

through how you'd answer them.

It's not very good at answering
its own questions, 'cause, you know,

we've got a particular house view
that's, like, really underrepresented.

But, you know, I think that

It's like a pretty exciting thing.

And yeah, like Nick said, you want a
bunch. Like, if you get 10 criticisms,

maybe three or four of
them might really be

interesting, and worth addressing.

But it's still good value.

Yeah.

Nick Berry: Have you told it,

uh, give me feedback like you
work at Berry Consultants?

Joe Marion: Yeah.

Not, well, actually, I think one of
the, I haven't tried this in a

while, but I used to tell it to, like,

you know, write like Kert Viele, right?

Or, like, give me the Kert Viele
arguments for this kind of thing.

And, like, it was not a very good Kert. I
wonder if it's a good Kert now. I

Nick Berry: yeah.

Joe Marion: that would
be interesting to me.

Nick Berry: Yeah.

Joe Marion: I kind of
think it probably isn't.

Um, it can do other public figures
reasonably well, but I don't think it

does any, like, statistician super well.

Nick Berry: Yeah,

Scott: Okay, so this I,

Nick Berry: Kert is on Twitter a lot, right?

He has

Scott: yeah.

Nick Berry: tweeted about different
things, and, you

know, he has a lot of public visibility
on some of these topics, because

Joe Marion: He has some chance
of being out there, right?

Yeah.

Yeah.

Um, other stuff that I
think AI is cool for:

um,

I produce way more
Shiny apps than before,

because the cost of
doing so is really low.

And what's great is that you can pass
them to clients, and then that

really gives them, like, a hands-on tool
that allows them to interact

with the decision they're making, to
tweak all the levers themselves, to

get that kind of firsthand experience.

You know, sometimes in, like, a rare
disease trial, they'll want to

know, you know, I have 50 endpoints,
which one should I use?

Right.

And the ability to hand a tool
over instead of like a ton of

plots is, is tremendously useful.

Absolutely improves the communication.

I'm hopeful that

this will be good for FACTS too, right?

You can imagine a plain-language
version of FACTS where

you describe in text what you want
the design to be, and then it knows

how to set up a FACTS file.

Like, you know, that's probably
a little bit of a ways away, but it's

not like a pie-in-the-sky dream anymore.

I think five years ago that would've
been a tremendously complex endeavor.

Yeah.

Nick Berry: Of course.

Every single person that we show this
cloud app to, their mind initially

goes, like, it makes so much sense.

This has to be something that we can do.

It's so cool.

The parameters that go into
FACTS are known, right?

It's not like it has
to write simulation code that

can do infinitely many things.

There's a set of parameters
that are predefined.

The storage mechanism for
those parameters is predefined.

We have documentation for that.

Presumably, you give that to an LLM, and
someone can describe the trial that

they want. Not "design me a good trial."
I don't, we can get to this later,
but it's not gonna do a good job.

It's not gonna do what you want.

It's not gonna do a good job in that case.

But: I want a trial with three interims.

I want Goldilocks-style stopping,
um, decisions at interims with

these predictive probabilities.

Um, and I want to figure out
what p-value threshold gives me

a good type one error.

Like, now it's got the parameters it
needs, and it can fill in, can say, oh,

when do you want your interims scheduled?

And things like that.

And then all it has to do is say:

interim analysis schedule, vector,
400, 500. And, you

know, we have to teach it

its guardrails, and then
I think it's, like, yeah.
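The "guardrails" idea amounts to constraining the model to fill in a predefined, validated parameter set rather than writing free-form code. A toy sketch; the class and field names here are hypothetical illustrations, not the actual FACTS file format:

```python
from dataclasses import dataclass, field

# Hypothetical design parameters an LLM would be asked to fill in from
# a plain-language description; the validate() step is the guardrail.
@dataclass
class DesignParams:
    max_n: int
    interim_schedule: list = field(default_factory=list)
    success_threshold: float = 0.975  # e.g. a predictive-probability bound

    def validate(self) -> None:
        assert self.interim_schedule == sorted(self.interim_schedule), \
            "interim looks must be increasing"
        assert all(0 < n <= self.max_n for n in self.interim_schedule), \
            "interim looks must fall within the maximum sample size"
        assert 0.5 < self.success_threshold < 1.0

# "I want interims at 400 and 500 patients" reduces to:
params = DesignParams(max_n=500, interim_schedule=[400, 500])
params.validate()
```

Because the model's output is a structured object that must pass validation, a hallucinated or out-of-range value fails loudly instead of silently producing a bad design.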

Joe Marion: But it'll not.

Nick Berry: There's work of
figuring out exactly, you know,

what prompt gets it to do the
right thing, and things like that.

But it's

Joe Marion: Yeah.

What's the right, like, iteration loop
that lets it learn that, right?

yeah.

Scott: So as of this point,
AI is being used, the LLMs, where

you're asking it to write code or you're
asking it to do a task, and it's making

us much more efficient, even better at
that task. You said, uh, typically you'd

walk in historically with a set of seven
slides that you presented to a client.

Now you're walking in with
an app that has, presumably,

hundreds of potential things
that they could look at.

So you're increasing
things, but largely you're using it

to be more efficient and maybe a bit
more broad about what you're doing.

What you just described is a whole
different thing, where we're training

something, a generative AI, where we're
now creating a tool that's using AI

for somebody to use FACTS,
which we haven't done yet,

but is an intoxicating thought.

Joe Marion: Yeah.

Really democratizing in some sense, right?

You still have to have good
taste to use that tool, right?

You still, you still would have
to know what to tell it, right?

It's not a self-governing
process, but I think it cuts out

so much of the learning curve,

so much of the uninteresting
learning curve, maybe.

Yeah.

Nick Berry: Yeah.

Scott: Okay, so let's
dive into some of that.

But let's talk about things that
are happening at Berry with AI.

I think

you can't do a search now without
AI being the default in all of this.

And that's pretty common.

I think everybody's doing that.

The writing. You talked about writing,
uh, the ability for it to read your

writing, to critique it, to update it.

Every one of these podcasts goes
into AI, and it creates social

media blurbs and a blog.

Joe Marion: Makes sense.

Scott: And a blog.

Now, I read every single one
of the social media things.

I edit it, I read the blogs.

I will go in and read some of these,
and there will be a paragraph that

I just don't understand, and it just
didn't sort of work, and all of that.

I don't mean that to be critical of AI.

I think that's, that's

Joe Marion: where the technology
is sometimes, right?

Like

Scott: Yep.

Uh, and maybe it was because I didn't
do a very good job on the podcast

of explaining it, and that's what AI
came out with for the meaning of it.

Uh, we did one on
the AWARD-5 trial. AWARD was the

acronym of the trial, and it kept
saying "the award-winning Trulicity

trial," you know, sort of thing.

Um, which was great.

But those things, if I were
to produce them myself,

would be maybe four hours to write the
blog and the social media stuff.

I get those and I edit them in an hour.

Uh,

Joe Marion: Okay.

Scott: is, this is productivity
and it, it does some neat stuff,

uh, to it that, that, that I think
makes it better than if I had to

self generate it entirely by myself.

It's a different sort of view on it.

We're exploring things
like, we write

hundreds of proposals a year.

We have a bank of proposals.

Um.

And I write many of these, and
these take, you know, a couple

hours to write, each one of these.

All of a sudden, AI does a
really nice job, uh, at that.

So these are natural things
that this does, uh,

within it, and the code.

Uh, other things currently at Berry
that we can say we currently do with AI?

Nick Berry: I think we
need to talk about writing.

Um.

Joe Marion: Mm-hmm.

Nick Berry: You kind of, this
could be considered one of our,

you said our three pillars are, like,
design, implementation, and software.

I think you could essentially add, you
know, publication and research to that as

something that we do a lot of at Berry.

Uh,

Scott: I'll push back on that a little
bit though, and what AI does on that,

'cause of a simple little thing where,

Nick Berry: I think too, but.

Scott: okay.

Joe Marion: Yeah, I think we are
gonna disagree, so this should be

Scott: Oh, that, that
makes for a good podcast.

Uh,

Nick Berry: Oh

Scott: an interesting blog when
it comes out and describes,

Nick Berry: I might disagree with
both of you, but I think AI sucks

at writing things from scratch.

I think it's obvious when
it writes something.

Maybe this is just me, but I feel like
I can tell, and obviously the

other side of this is: think
of all the times I've read something

and haven't said, oh, that's AI,

when it actually has been.

So obviously I don't
know my error rate, but

Joe Marion: Well, there, there's
some science behind that.

If you're a heavier user, you tend
to be a better identifier.

Nick Berry: okay.

Joe Marion: That's a true thing.

That's a studied fact, but yeah.

Nick Berry: I think it's vacuous and
imprecise when it writes, especially

when it's writing from scratch.

Even, you know, I go on LinkedIn,
where this podcast will be

advertised, and I'll see it.

Every single post on LinkedIn looks the same.

They're all the same.

It's like header, five bullet
points with emojis, footer.

And it's like people just over
and over and over write this.

So I think it's bad at
writing things from scratch.

It uses similar patterns over and over.

And when I notice that
pattern, I immediately

start discounting the ideas in the thing
that I'm reading. So maybe I'm

being a little too negative, but this
is kind of how I go, you know: I

will not put as much heed in what is written
if I think I detect AI being the main

author of the thing that I'm reading.

I think it's

Joe Marion: I bet a lot
of people feel that way.

I think, I think that's like a,
I think that's a correct view.

Yeah.

I wouldn't disagree that that's how
people perceive that kind of writing.

Right.

I mean, I think so.

One.

You know, there are lots of kinds
of writing that aren't, that

aren't necessarily about getting
lots of good character or an

interesting structure into it.

Right?

There's a lot of writing that's about

just like sort of clearly presenting
what you're describing, laying

out the facts, that kind of thing.

Um, I think, you know, the idea of voice
is less important in that kind of writing.

Um, but I don't, I don't think it's
like a standalone thing that, that you

should let write something by itself.

Right.


I think a really good use case of it
is like, you know, you always used

to get that writing advice, like just
write down your ideas as fast as you

can and get a first sloppy draft out
there and just like make that happen.

And I think this is a much more pain-free
way to, to get that first draft of

a paragraph or something like that.

It's definitely not the final one.

But, you know, getting words
on the page is a huge,

a huge, uh, unblocker
in a lot of situations.

Um, and then I like that once that's
been done, I can be very critical, right?

It puts me in the mode of a
reviewer when I'm reading that text.

You need to be, right, 'cause
you need to be critical of it.

But when you're seeing it,
then you can think, okay, I don't

have any buy-in to this text.

There's no sentences I love in here.

Like, let's, let's really dissect
this and pick it apart and make it

into something that's worth doing.

I will say, a lot of it's like,

it often, um, gets the
subtleties way wrong when it writes,

and so there's a kind of
sloppiness or imprecision in the

way that it's communicating.

And so I pay a lot of attention to that.
If you just kind of read through

it, it's like, oh yeah, those
words sound kind of nice together.

It kind of gives you a warm
fuzzy, but then when you think

real deep about it, like, oh,
maybe I don't like that at all.

So.

But I love, I love the ability
to do that first drafting.

Then I love the ability later on to
tweak little things, like: hey, I

know this sentence is wrong.

Here's what I don't like about it.

Gimme five different options.

And then you get a sentence
and then you tweak that again.

And I like that
iterative, you know, version of writing.

I'm sure it comes real natural to someone
who's written a very, very large amount.

Yeah.

Nick Berry: I

Scott: I think,

Nick Berry: the editing and
the fine-tuning

Scott: yeah.

Nick Berry: the ability to ask it
to edit your text and then review things

as it tries to make changes, where it's
not just sending you back a huge

paragraph with stuff changed. Like,
you know, it highlights where it made

a change and what's new and things.

Joe Marion: What's different

Nick Berry: You can walk through and

see what it's actually
changing without having to

Joe Marion: and it'll tell you why
sometimes too, which is nice, right?

Yeah.
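The change-review workflow Nick and Joe are describing is essentially a diff view over the text. The same idea can be sketched with Python's standard difflib (the sentences below are made up for illustration):

```python
# A word-level diff of an edited sentence, showing only what changed --
# the same idea as an AI editor highlighting its edits for review.
import difflib

original = "The trial uses a fixed design with 200 patients."
edited = "The trial uses an adaptive design with up to 200 patients."

diff = difflib.unified_diff(original.split(), edited.split(), lineterm="")
# Keep only the added/removed words, dropping the "---"/"+++" headers.
changes = [d for d in diff
           if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
print(changes)  # ['-a', '-fixed', '+an', '+adaptive', '+up', '+to']
```

A real reviewing interface renders this more nicely, but the point stands: "show me exactly what you changed" is cheap to support, which is why it works well as an editing workflow.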

Nick Berry: I think it's
definitely a good editor, uh,

in a lot of cases.

Scott: So I don't think we
have the huge disagreement

that we thought we would have.

Nick Berry: Okay.

Scott: I agree.

Agree entirely.

Yeah.

Um.

Joe Marion: What?

are

Scott: Yeah.

Joe Marion: for?

Yeah.

Scott: Yeah, that's right.

Right.

That, that's right.

Now, but the interesting thing
in all of this is not to become,

uh, you know, critical of AI.

And somebody sent me a thing
that they largely created: they took

25 references and said, write me a
paper based on these 25 references.

And I hated it.

Uh, there's nothing new.

It wasn't written very well.

I didn't, you know, it was,
it was, but this is gonna get

better and better and better.

The idea that AI will never
do this, I mean, imagine how good

it's gotten in the last few years.

So this is the current
status of it, but I agree.

I don't like it,
but it's an amazing

editor tool.

It's amazing thing that if you are
actively working with it and you're

writing it, getting suggestions,
this is an incredible tool.

Nick Berry: Yeah, a couple years
ago, I mean a couple years

ago, like the very onset of ChatGPT,
when it became public, someone sent

an email to Don Berry, our

founder, who's been on the podcast a bunch
of times, saying, if you give me $5,000,

I'll write a book, a biography, about you.

And Don asked for a snippet,
said, okay, send me a snippet,

or blah, blah, blah, blah.

And it wrote Don's biography, and
they sent it to him, and

he read it apparently and
said it got a lot of things right.

So he replied and said, if you
can make it do this, this, this,

this and this, I'll pay you.

And I think those became impossible for
AI to do, um, especially at that time.

But, so we never paid $5,000, but you know

Scott: Yeah.

Yeah.

Yeah.

Nick Berry: yeah.

So.

Scott: It's interesting.

For example, could I dump my 60 episodes
of In the Interim into AI and say,

write me a book on In the Interim, or,

Nick Berry: at that point, but yeah.

Scott: yeah, Or, or, you
know, what would Scott say?

Joe Marion: out there for sure.

Scott: What would Scott
say to the following?

Does it know who I am now?

Obviously it, it's gonna do really
well at knowing, you know, what would

Trump say, what would Obama say?

What would others say?

'cause there's such a
wealth of stuff out there.

Uh, but that's interesting.

But let's, let's move to something
that, that concerns me as a company.

So.

When we're working with
a client, we really need to

worry about confidentiality.

So what does that have to do with ai?

Uh, if we use AI, we need to make sure
that the confidential information that

it's using to write something doesn't
then go outside and become part of

its wide world that it's using

to write other stuff, where all of a
sudden it's using that for somebody else.

So security's a huge deal.

Um, what, what, what do
we know about security?

Nick Berry: Do you want this,

Scott: come on.

Nick Berry: So

Scott: Okay.

Nick Berry: the

Joe Marion: I'll, start.

Yeah.

Scott: You've looked into this.

Nick Berry: Yeah.

Joe Marion: So I mean, we
know some things, right?

So there are things that
they say, that, you know,

for example, our data is encrypted
in transit and at rest. That means

that there's not really a point in
the process when it's human readable.

So you have to be able to
decrypt it to read the data.

Um, they talk about,

uh, they say, you know, the
business versions of these paid

tools say that they
won't train on your data.

We learned yesterday that that has
at least some known caveats around it.

Uh, in particular, if you give them
feedback, did you like or not like

a response, well, then that actually
gives them permission to look at that.

So, you know, that's kind
of concerning, you know.

It's a little bit harder to parse the
implications of that statement, right?

Like, it
won't train on your data.

I mean, what, what does that mean exactly?

And how would a lawyer understand that?

It, it's harder.

I don't know.

I think in some sense I do think
about this in some ways, at least.

Um, and there are absolutely exceptions
to, there are absolutely reasons why

this is not a good analogy, um, which we
can go into, but I think in some ways.

It's a little bit analogous to
email, right, in that you have your

confidential information, uh, it's
stored in someone else's servers.

It's not local.

It's encrypted.

Um, but you know, you're kind of
relying on their guarantees that

they're treating that data in a
way that's fair and reasonable.

I don't know.

Nick, what do you, what do you think?

Nick Berry: Yeah, I mean, I think I use
that as a mental model in the same way.

And, uh, there are things that
we don't email at Berry.

Certainly, you know, uh, patient-level
data doesn't go through email.

Interim analysis reports for trials
that are still blinded to the public

don't go through email, and
we have different, like, SFTP

protocols that we use for those.

There's, I mean, we are
not AI policy lawyers.

There are AI policy lawyers looking
into these things, but that's not us.

Um, I think, because of that, we

have taken a sort of conservative
approach on a lot of things.

And even though we have this, like,
business plan that says we're not

gonna train on your data, that
doesn't give us free rein to upload

everything and to put everything in there.

So we are tasked with, and we don't
have an SOP yet, uh, we don't have this,

like, put together, but Joe and I and the
committee that we're on are tasked with

trying to come up with, uh, strategies for
this, and, um, you know, there are obvious

red lines: don't do this.

And there are gray areas, and
we don't have all the answers

to this, but, uh, conservatism

Joe Marion: Yeah.

and I think people generally
don't have the answers.

Right.

And, but yeah, I agree with you.

We're we're trying to take a
conservative, kind of measured approach.

Nick Berry: So,

Joe Marion: Yeah,

Nick Berry: keep the data blind.

Uh, documents that you upload, um, you
know, you can give the structure of a

data frame without needing to give the
data set to AI, and things like that.
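The practice Nick describes, sharing the structure of a data frame while the values stay local, can be sketched like this (the field names and records are hypothetical, and this is an illustration, not a Berry SOP):

```python
# Derive a schema description that is safe to paste into an AI prompt;
# the confidential values themselves never leave the machine.

def schema_only(records):
    """Return {column: type name} for a list of row dicts -- no values."""
    schema = {}
    for row in records:
        for col, val in row.items():
            schema.setdefault(col, type(val).__name__)
    return schema

# Hypothetical confidential rows (illustrative only).
confidential = [
    {"subject_id": "A-001", "arm": "treatment", "response": 4.2},
    {"subject_id": "A-002", "arm": "control", "response": 3.7},
]

print(schema_only(confidential))
# {'subject_id': 'str', 'arm': 'str', 'response': 'float'}
```

The model can then write code against the schema, and the code is run locally on the real data.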

And it's hard because
we're not all, like, Berry is not

a homogenous group of AI users.

Every person feels differently, thinks
differently, uses it differently,

and at the end of the day, we
don't get to watch people

interact with AI.

And so we kind of have to have some
trust in the people at Berry, and what we

tried to do at our meeting is that
we have to instill some

sort of working knowledge about AI in
the entire company, because everyone's

gonna be using it, and we want everyone
that's using it to be at least informed

in some regards about how they
should and shouldn't interact with it.

And so.

Um, it's hard, but we laid
down some, like, obvious do-not-dos,

and I feel like that helps a lot,
uh, to clean up some of

Scott: Yep.

Nick Berry: But

Scott: Yep.

Okay.

Nick Berry: areas.

Scott: Oh, okay.

Now, another one that bothers me is
AI that's generating a statistical model.

And interestingly, if you ask AI to
create you a piece of code, you have

the code, and that's the product.

And you can, you can do that.

Uh, Unlearn, for example, talks about
generating digital twins, and

they don't tell you what it does.

It's AI.

It's a black box.

And by the way, it's also a hype thing.

It can't be wrong because
it's AI sort of thing.

So I bristle at this a little bit
because behind this, if AI helped

you create a model, there's a model
and we could talk about the model

and the data that goes into it.

So I really worry this
becomes both a hype thing.

A black box thing and takes
away from the value of us as

statisticians, as model builders.

So I don't think at Berry we've
ever used AI to create a model.

Nick Berry: To fit a model to
get results from a model, like,

Scott: Yeah, I'm not even sure what
that means nowadays, but it is certainly.

Nick Berry: The density of the
treatment effect parameter.

Scott: Or predict what this patient
is gonna be like in six months.

Don't tell me how you did it, but,
you know, predict it kind of thing.

Joe Marion: Because I've definitely
done: here's the model I want, right,

in Stan.

I mean that, that's no issue,

Scott: yeah, yeah.

Joe Marion: not what you're talking about.

Right?

Yeah.

Scott: no, no.

Uh,

Nick Berry: would

Scott: not that so,

Nick Berry: model, fit your Stan
model, make you a density plot,

and do all that for you now,
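As a stand-in for the routine step being described here, fitting a model and getting the posterior density of a treatment effect parameter, a minimal conjugate sketch in plain Python (not Stan, and the counts are hypothetical):

```python
# Beta-Binomial posterior for a response rate: with a Beta(1, 1) prior
# and x responders out of n patients, the posterior is Beta(1 + x, 1 + n - x).
import math

def beta_pdf(p, a, b):
    """Density of a Beta(a, b) distribution at p, for 0 < p < 1."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

x, n = 12, 40                    # hypothetical responders / patients
a, b = 1 + x, 1 + (n - x)        # posterior Beta parameters

posterior_mean = a / (a + b)     # posterior mean of the response rate
grid = [i / 100 for i in range(1, 100)]
density = [beta_pdf(p, a, b) for p in grid]   # values for a density plot
print(round(posterior_mean, 4))  # 0.3095
```

The mechanics are routine, which is exactly why they are easy to delegate; the modeling judgment behind them is the part the conversation keeps separate.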

Scott: so I, I really worry about that.

Joe Marion: to

Scott: Yeah.

Joe Marion: separate experiment.

Yeah.

Scott: Though, if I ever wanted
to sell Berry, I think if I changed its

name to Berry AI, much like, what is this
shoe company, Allbirds or something?

Who did this?

Joe Marion: uh, birds.

Scott: Yep.

Yeah.

Joe Marion: AI is,

Scott: Somehow, with this hype, all of a
sudden Berry's worth five times what

it is, you know, sort of as a thing.

So I bristle at that.

We talked a little bit
about generative AI, and

we haven't jumped into that.

I think, let's sort of
finish this a little bit with the

question of will AI take our jobs?

Within this sector?

It's a fair question.

I think for certain people in the
statistical community, it's

a worry.

And it's, it's interesting.

The people whose jobs are
threatened are more than the people

who think their jobs are threatened.

Oh, AI can't do what I do.

Uh, you know, eventually AI
is gonna do what we all do.

But you know, how long is it, do
you feel like AI threatens our jobs?

Joe Marion: I mean, I, I hope
it threatens part of my job.

I, I hope that pretty soon
we're using AI just like fully

for validation of code, right?

I think we would do a lot more validating,
which I think would be valuable for us.

And I would personally be
doing less code validation.

And if it took that part of my
job, I wouldn't shed any tears.

Right.
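The validation Joe is hoping to hand off is, at its core, independent double-programming: a second implementation has to reproduce the first before the code is trusted. A toy sketch of that pattern (illustrative, not Berry's actual validation process; the data are made up):

```python
# Validate a hand-written statistic against an independent implementation.
import statistics

def variance_manual(xs):
    """Two-pass sample variance, written independently of the stdlib."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [4.2, 3.7, 5.1, 4.8, 3.9]          # made-up numbers
primary = variance_manual(data)            # code under validation
independent = statistics.variance(data)    # the independent "validator"

assert abs(primary - independent) < 1e-12  # validation passes
print(primary)
```

If an AI writes the second implementation from the specification alone, the human's job shifts from re-deriving the result to checking that the two agree.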

Um, is it gonna take my job overall?

I don't know.

I think it's quite hard to
say where things are going.

The robots are building
themselves right now.

Right.

And it's hard to know, like, what
the end state of that is.

Um, yeah.

Nick, what do you think?

Um.

Nick Berry: Yeah, I mean, I do
more software development than

both of you, so that puts
my job immediately at more risk,

I think, um,

Joe Marion: Yeah,

Nick Berry: You,

Anthropic, you're
talking about the hype of AI.

Anthropic just announced
that their new model, Mythos,

can't be released to the public

because it's too dangerous, because
it found zero-day security issues

in every major operating system.

It's probably hype, like, right?

If you want to advertise your new
model called Mythos, you say it's

too dangerous to be released, and

Joe Marion: Oh, absolutely.

Just like FACTS Cloud.

Too dangerous.

Nick Berry: We've gotta think of a better
name with Greek origins than FACTS Cloud.

We did talk about, uh, I can't
remember, something, some cloud

name, uh, Cirrus or something like
that, as our FACTS Cloud or something.

That probably works,

Joe Marion: What's the word
for storm cloud in Greek?

That's what we want.

Uh.

Scott: Yeah,

Nick Berry: yeah.

Scott: it's, yeah.

Nick Berry: So my job might, at least the

pure programming

part of my job, is probably more at
risk, but like I talked about earlier,

Scott: Yeah.

Nick Berry: the part of my job
that I'm retaining completely is

the decision making part and the
planning part and the vision part.

And it's hard for me to see
with the, you know, generation

of models that being at risk.

Uh, what might happen eventually is that my
decision making turns into more of a

clicking: yeah, that's good;

yeah, that's good.

And it's iteratively making a bunch of
stuff, or, you know, it makes 10 decisions

and I choose the good ones and it
builds itself, or something like that.

So, I mean, obviously we're
conjecturing about something

that we can't possibly know
and aren't that informed about, but

I don't know.

I feel like the decision making that
we have is safe and the, the planning

and telling the story in the right
way, which is something we do so well

and so purposefully, is safe.

But you know, a lot of the, you know, the
technical things, eventually it's gonna

be able to do the technical things that
I said it's not good at doing right now.

Joe Marion: The, the parts of this
job that are about communication and

especially relationship building,
I don't think those will go away.

Scott: Okay.

Joe Marion: Like, they're never
gonna send Claude to go talk to the

FDA, they're gonna send Scott Berry.

Right.

Like, you know, that's, the AI is
just not gonna have the same weight.

Right.

Same credibility, like, yeah.

Scott: There's the whole strategy part;
even people are using it to design trials.

You could go into one of
these and say, design me a trial.

But it's gonna
do what you tell it to do.

Not asking the questions, is this
the right thing in your development?

What are your uncertainties?

Asking the right questions to
understand why are you doing the trial?

This whole part, before you get into
give me the calculations of the

following, is the huge part of this.

And, uh, AI can't do that yet.

Maybe eventually it's, it's doing
those parts, but it's not there yet.

Okay.

Um, you know, I harken
that to, um, I went in for

dizziness a couple days ago, and I saw
a real clinician, and she asked me a

couple questions and she knew immediately
what it was: uh, positional vertigo.

And not only that, she said,
do this Epley maneuver.

And it was a physical thing of
moving your head into position

for three minutes and it was gone.

Nick Berry: Okay.

Scott: It was a cure.

So, you know, maybe AI's gonna get
there, and it could ask me the right

questions and do the right thing, but.

Joe Marion: That's crazy.

Scott: Uh, I did, and
your mom spent more of the time

asking what was wrong with me.

I think she, she asks
Claude that quite a bit.

What is wrong with my husband?

Joe Marion: with my husband?

Scott: Yes.

Yes.

Joe Marion: Oh man.

Scott: All right.

We will end it there.

We appreciate you all joining us
for our 60th episode, and until next

time, we'll be here in the interim.