MedTech Speed to Data

Andy Rogers talked with Mike Acosta, EVP/Head of Compliance at Coagusense, and later recapped some of the lessons learned with Senior Electrical Engineer Jake Cowperthwaite. Andy and Jake have an informative discussion about how to define performance requirements when you’re aiming for FDA approval.

Show Notes

Coagusense developed the first point-of-care prothrombin time/International Normalized Ratio (PT/INR) monitoring system for cardiac patients to help them maintain warfarin dosage within a therapeutic range. In the latest version of their device, they actually removed connectivity features to accommodate the needs of their older, less-tech-savvy self-testers. Therefore, they had to go back to the FDA with more bench data for re-approval.

Need to know:

  • Make sure the performance requirement is objectively verifiable by a measurable test result, functional demonstration of performance, simulation analysis, and/or visual inspection.

  • Plan for the number of prototypes you’ll need to create confidence in your statistics. Large companies may have their own internal mechanism to create a plan. Startups can consult with the FDA. 

  • Understand the regulatory considerations for seamless FDA approval. It’s a good idea to meet with the FDA prior to submission to outline your requirements. The earlier you have regulatory buy-in, the better.

The nitty gritty:

Rule Number One is to make sure that the performance of the device is objectively verifiable. For example, simply stating “the device shall be easy to use” is vague and subjective and won’t cut it with the FDA. 

Write your requirement in a way that can be verified through: testing and measuring results, a functional demonstration of performance, analysis via calculations or simulations, or visual inspection. A well-written requirement is specific, with clear acceptance criteria. For example, if your product were a pump, a performance requirement could be: “the aspiration pump shall have X flow rate within plus or minus Y percent.” If it meets that requirement, you’re ready to move on. Don’t over-spec.
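To make the idea concrete, the hypothetical pump requirement above reduces to an objective pass/fail check against explicit acceptance bounds. This is an illustrative sketch only; the nominal flow rate and tolerance below are made-up values, not figures from the episode:

```python
# Hypothetical acceptance criteria for "the aspiration pump shall have
# X flow rate within plus or minus Y percent" -- illustrative values only.
NOMINAL_FLOW_ML_MIN = 50.0   # X: nominal flow rate (mL/min), assumed
TOLERANCE_PCT = 5.0          # Y: allowed deviation (percent), assumed

def flow_rate_passes(measured: float,
                     nominal: float = NOMINAL_FLOW_ML_MIN,
                     tol_pct: float = TOLERANCE_PCT) -> bool:
    """Return True if a measured flow rate falls within nominal +/- tol_pct%."""
    bound = nominal * tol_pct / 100.0
    return abs(measured - nominal) <= bound

# Each bench sample then yields an objective, recordable pass/fail result.
measurements = [49.1, 50.8, 52.7]  # illustrative bench readings (mL/min)
results = [flow_rate_passes(m) for m in measurements]
```

The point is that “within X plus or minus Y” leaves no room for interpretation: anyone running the test reaches the same verdict, which is exactly what “objectively verifiable” demands.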

Three things the FDA is looking for:
  1. Above all, safety and effectiveness
  2. Accuracy
  3. Is your device novel technology or based on a predicate device?

At the test bench, start with a good understanding of how many prototypes you’ll need to have statistical confidence in your results. Sample size will depend on the data needed; an on/off switch won’t require a large sample size, but testing with different operators – as with in-home devices – will need a substantial data set.
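One common way to turn “how many prototypes?” into a number is the success-run (zero-failure attribute test) relationship, n = ln(1 − C) / ln(R): the sample size needed so that n passes with no failures demonstrates reliability R at confidence C. This is a general statistical sketch, not a method prescribed in the episode, and the confidence/reliability targets below are illustrative:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure sample size: passing all n units demonstrates at least
    `reliability` with `confidence`, via n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# e.g., demonstrating 95% reliability at 95% confidence requires 59 units,
# all of which must pass the test.
n = success_run_sample_size(0.95, 0.95)
```

As the episode notes, the targets themselves should be risk-based: a higher-risk requirement warrants higher confidence and reliability, and therefore a larger sample.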

In some cases, it’s possible to short-cut the process early in development by testing multiple variables at once. This will yield a lot of data, which can then be analyzed. You’ll find some variables meaningful and others not, but understanding these variables and their sensitivity early in product development has great value and can save money in producing fewer prototypes.
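The multi-variable approach described above is essentially a designed experiment: run a factorial set of conditions once, then compare the average response at each variable’s low versus high setting to estimate main effects. A minimal sketch, where the factor names and the stand-in measurement function are invented for illustration:

```python
from itertools import product

# Hypothetical screening factors, each tested at a coded low (-1)
# and high (+1) setting.
factors = ["temperature", "operator", "lot"]

def run_bench_test(settings):
    """Stand-in for a real bench measurement; here temperature dominates."""
    temp, oper, lot = settings
    return 100.0 + 4.0 * temp + 0.2 * oper + 0.1 * lot

# Full factorial: every combination of low/high settings.
runs = list(product([-1, +1], repeat=len(factors)))
responses = {r: run_bench_test(r) for r in runs}

def main_effect(i):
    """Mean response at a factor's high setting minus its low setting."""
    high = [responses[r] for r in runs if r[i] == +1]
    low = [responses[r] for r in runs if r[i] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
# Large effects flag the sensitive variables worth studying further;
# flat ones can be de-prioritized, saving prototypes and test time.
```

One massive study like this replaces several single-variable studies, which is where the prototype and schedule savings come from.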



USEFUL LINKS
https://coag-sense.com/about-us/

https://www.greenlight.guru/

What is MedTech Speed to Data?

Speed-to-data determines go-to-market success for medical devices. You need to inform critical decisions with user data, technical demonstration data, and clinical data. We interview med tech leaders about the critical data-driven decisions they make during their product development projects.

Hey, everybody, welcome to MedTech Speed to Data.

I'm your host, Andy Rogers from Key Tech.

Thanks for joining once again.

Episode 18 here.

We're going to be talking

about defining performance requirements.

We have Jake Cowperthwaite here on the line.

Jake, welcome back to the show once again.

Hey, Andy, it's nice to be back on the podcast.

Great. Yeah.

So we're going to talk also about the episode

we just recorded, episode 17,

Mike Acosta from CoaguSense,

where he described his PT/INR

at-home diagnostic test

for patients

who are taking warfarin.

He talked about his next generation platform,

the Gen3 platform that they're developing.

It was pretty interesting with that product

how they're actually making it simpler

than their Gen2 product.

And by eliminating

most of the connectivity features

that they had in their Gen2

for their Gen3,

listening to their customer,

the elderly patients

who really just want a simple device to use.

The other highlight from that episode,

I will point out,

and kind of the impetus for this episode,

is some of the challenges

they're having with getting approval

from FDA related to

verifying their performance requirements.

This is, you know, making sure the product is

equivalent to their on-market predicate product,

their own product.

They've had to go back

and forth a few times with FDA

presenting more bench data.

So we thought it'd be a good time

to step back and talk with you, Jake.

You've managed many projects

where we've taken products through regulatory

approval through one way or the other,

either an internal Key Tech product or,

you know, externally with clients.

And it always boils down to what is the product?

What is the product doing?

What are the performance requirements?

And so today, Jake, we want to walk through

what we concluded

to be kind of the primary considerations

when defining these performance requirements.

As you're developing

your product requirements document

and the considerations we have, there's three.

Step one, make sure the performance requirement

is objectively verifiable,

straightforward, clean cut.

The second,

you know, what statistical data do

you need to collect to prove

that the requirement is met?

You know, how many prototypes will you need?

What does the data actually look like?

And then the third consideration

when you're defining

your performance requirements

is related to regulatory approval.

What considerations do you need to make

as you're

defining these requirements

so that you'll get seamless regulatory approval?

So those are the three considerations.

Number one, objectively verifiable.

Number two, what statistical data.

How many prototypes will

you need to test and prove

that the product actually meets

those requirements?

And the third,

you know what regulatory considerations

are there as you're drafting

this very critical requirement,

your performance requirements.

So let's get going, Jake.

So walk us through,

what does it mean

to have

a requirement, be objectively verifiable?

You know, this is the first consideration.

So when you think about requirements

that are objectively verifiable,

there's four different types

of verification that we use at Key Tech.

There's testing where you run a test

and take measurements.

There's demonstration

where you have the device perform

a function and, you know, show that it can occur

as expected.

There's analysis

where you,

maybe do calculations or simulations

and you analyze whether the requirement is met.

And then there's inspection

where you're just doing like a visual inspection

to make sure that something

exists as it should.

So if you can't use one of those four methods

to prove a requirement,

that's probably not objectively verifiable

and you need to rethink how it's written.

Does that make sense, Andy?

It does. It does. And I would say,

in my experience,

aren't most performance requirements

verified through testing?

Yeah.

Most performance

requirements are going to require testing.

Just Jake, for our audience,

what's an example of a well-written requirement

that's objectively verifiable versus one

that's poorly written?

I guess I'll start with the poorly written one.

We see it all the time.

It's a requirement

that says the device shall be easy to use.

That may work well in a higher level requirement

set like a user needs.

But in product requirements,

you want to be more specific than that

because you really can't verify

that something is easy to use.

You can validate it.

You can do user studies

and obtain feedback from users.

But you really can't,

you know, run a benchtop test

and decide, okay, this is easy to use.

So a well-written requirement

is going to be very specific

and it's going to have clear

acceptance criteria.

So to just pick an easy one.

You know, Key Tech,

we deal with pumps a lot and flow rates

and things like that. So a requirement might be,

you know, an aspiration pump shall have,

you know, X flow rate

within, you know, plus or minus Y percent.

It's very clear.

So you have, you know, a flow rate

and you have acceptance bounds.

So you run a test,

you prove it within that range. Great.

You move on.

But again, if you don't have

let's say you don't have

performance bands, let's say

the flow rate shall be X.

You may run the test.

It may not be, you know, exactly as stated.

You won't know what sort of limits

are allowed to consider it passing.

So you need to include that as well

with your requirement.

So just real quick on that point

with limits,

is it a fair statement to say

that the limits of a requirement

are usually driven by the risk

of going outside of that limit?

So you mentioned, you know,

volumetric accuracy, for example.

There's really

three different values to think about.

And I guess this is another trap

that sometimes folks fall into with requirements.

So going back to the aspiration case,

let's say that for physiological purposes,

you need to be 5% accurate.

So as long as you're within plus or minus 5%,

you're going to get the

same physiological effect.

But maybe for safety purposes,

plus or minus 10% is fine.

So that kind of gives you,

you know, wider bounds.

So what we see sometimes is

a client might find a vendor

who's advertising a pump

that, let's say, is plus or minus 1% accurate.

So it's even better than they need.

And the client might think, great,

I'm going to use this pump.

I'm going to set my requirement at plus or minus 1%.

So that's even better than they needed

to perform for physiological reasons,

you know, not to mention safety.

But so what they've done is

they've set a narrower requirement.

They're using this pump.

And, you know,

maybe the vendor spec

is based on specific testing,

you know, on a batch at a certain temperature.

And, you know, maybe it can meet

that performance criteria.

But then you get later on in the project

and you learn, you know,

for that application, it's not great.

And all of a sudden

you've kind of trapped yourself

because you've over-spec’d your requirements.

When really you need to be testing to 5%.

You're trying to beat something

that's tighter than that.

So that's something we see a lot.

Yeah, that’d be an example of market

requirements, maybe

pushing down to product requirements

that are unnecessary.

Exactly. Yep.

Gotcha. Okay, cool. Great.

So now going forward,

we'll make our requirements

objectively verifiable,

particularly on the performance side. Right.

Which is a good segue

to the next consideration, which is

before you lock in your performance requirements,

make sure you understand

how many prototypes you need

and what sort of statistical confidence

you need in meeting

and demonstrating that performance requirement.

So can you talk a little bit

about your experience there?

So performance testing is really time consuming

and you're kind of making your luck upfront.

So I guess what we just talked about

was how you write a requirement

and if you write it

in a nice, clean manner,

it might make it easier to test

than it otherwise would be.

But that’s the testing part.

The other part is the sample size.

So, you know,

you might have a really easy test to run,

but if you have to have a high sample size

it could be extremely time consuming.

And what we see in terms of sample size

is it's usually risk based.

So what sample size do you need to know within,

you know, some confidence

that you're going to meet

some reliability bounds or acceptance

criteria.

And that usually drives everything

if it's a simple test

that's kind of black and white, you know,

that the pump is going to turn on or it's not.

Then, you know, one sample might be fine,

but if it's a,

really sensitive test

where you're trying to determine accuracy

across, you know, different

consumables, maybe different lots,

then you have to set up a much larger study

and use a much higher sample size.

So there was one project

I'm aware of here at Key Tech,

where we actually hired a consultant.

In lieu of

tens or potentially hundreds of prototypes,

we developed this sophisticated, weighted

design of experiments plan

and got the confidence that we needed

with fewer prototypes.

When would that make sense Jake?

So what we've done is

relatively early in development.

So maybe a late alpha or a beta device.

And what that

same experiment will do

is help you understand sensitivities.

So at the time, I think there were

seven critical

variables.

And we want to understand,

you know, the impact of each.

So we hired a consultant, he came to Key Tech.

I think we met for a couple of days

and he helped us design this experiment where

if you take, I don't know,

five or ten devices

and test them in different environments,

with the variables set differently,

you can accumulate this huge pile of data

and process it and tease out

what the various sensitivities are.

And the great thing about that

is you're getting it all at once.

And so instead of setting up,

seven independent studies

to look at each variable specifically,

we could just run this, you know,

kind of one massive study

and obtain everything at once.

And that was really helpful

because at the end of it,

we were able to understand, you know, okay,

you know, this variable barely changed.

So, you know, we ran it at the

extreme of the range for that variable

maybe in a cold environment,

maybe in a warm environment,

maybe with a different person

operating the instrument.

And we could see, okay, you know, it

basically remained flat,

whereas some of the other variables

we could see, you know, vary significantly.

And that told us that those variables

that had more variation needed to be studied

more extensively during the project.

Gotcha, and there's real value to be had there.

I mean, you're not building

multiple prototypes

that for some of these complex platforms

can be 50, 75K each.

So I can see the value there

and also just the general value of understanding

which variables are of interest

to then beat on with the small number of prototypes

you've already built.

So it made sense in that in that case.

Absolutely.

And I guess the other factor is

we could automate everything.

So, you know, with automated testing,

not only have you reduced the number of tests

you need to run,

you're also more efficient

because you can set it up over, you know, a week

and then come back and look at the data.

So it was extremely efficient.

This is also another,

I think, example of where marketing might drive

product requirements.

But I guess in your experience, where have you

seen these confidence levels come from?

Like you need 99% confidence,

you're going to hit this requirement.

Where does that come from?

Some of our global clients

will have their own internal procedures

and it will be risk based.

So, you know, minor, moderate, major.

And if it's a major level of concern

or major risk,

then they're going to require high confidence,

you know, high confidence that they're meeting,

you know, their specified requirement.

If it's, let's say, lower,

minor concern,

maybe you don't need that high confidence

because if it falls a little bit outside the range,

it’s not going to have any clinical impact

or, you know, affect the patient in any way.

And then the other end of the spectrum,

I guess, would be the startups

and smaller companies

who don't have those internal policies.

And for that, they're going to be,

you know, hopefully looking to

what other other companies have done.

They're going to be talking to the FDA

to try to get some input

and establish their sample sizes.

Yeah, Jake, you just gave away the third

consideration, which is making sure you have FDA

or general regulatory

buy-in on your performance requirements

before you go submit your design history file.

So talk a little bit more

about getting this, you know, FDA

and regulatory buy-in

on these critical performance requirements.

So the first thing you should do is

internally use your own judgment

of what you think the sample size should be.

And that's a combination of

engineering expertise.

So understanding sensitivities

as well as clinical expertise.

So assessing the risk.

So you should come up with a plan internally.

You know, what sample size do you plan to have?

What testing do you intend to perform?

And then you should get in touch with the FDA.

And there's a thing called pre-sub meetings

also called Q-sub meetings.

That’s where you would present your plan to the FDA

and have them provide feedback.

You know, do they agree with your sample size

and your test approach

or would they like to see you

do something different?

And in that meeting, I'm

assuming you want to have data in hand, right?

The preliminary prototype

data showing that, okay, look,

we're already meeting the requirement.

This is our plan as we go into formal V&V.

You'd like to have some preliminary data.

Yeah, especially if you can show.

Well,

you know, going back to the same experiment,

if you can show studies you've done

that have established

where the device is sensitive

and where it's not,

I think that's, you know, really

helpful in justifying your

sample size decisions to FDA.

Going into a pre-sub meeting

Jake with the FDA,

what are they looking for at a high level

with the product

you're trying to get approved?

So the goal is safety and effectiveness.

And in a PRD,

you're going to have all sorts of requirements.

You're going to have some related to safety.

You know,

to pick an example, complying with IEC 60601

for electrical safety, that would be an example

safety requirement.

You might have effectiveness requirements.

Well, you will have effectiveness requirements

and those might be based on a predicate device.

So if you're going to go

the 510(k) route,

you're going to look at a similar product.

What sort of performance does it have?

You know, what are the accuracy criteria,

what's the resolution, those sort of things?

You're going to want your requirements

to be as good or better. So those are critical.

But you're also going to have other requirements

that may come from marketing.

They may help sell the product.

They don't really fall under safety and effectiveness,

but they're going to differentiate the product

and make it more successful on the market.

So when you go into the meeting

with the FDA,

the focus really will be on safety and efficacy.

And you want to make sure

you have a good plan for

the sample size to test those

as well as the procedure to test those.

And then the other ones the more

marketing related requirements

that don't really fall

into those two categories.

They're going to have less emphasis.

But, you know,

because they're part of your design endpoint,

you still need to meet them.

It just may not be as rigorous a testing.

Yeah, I mean, I've never actually been

in one of those pre-sub meetings,

but the only

feedback I've heard is go into

those meetings prepared,

communicating your plan, and not asking

kind of high level abstract questions

because they'll see right through that

I suppose.

Yeah, I think you can waste a lot of time

if you're not presenting a plan to begin with.

It's not a brainstorming session,

you know, where FDA is helping you

figure out how to test your product.

It's more

you've done your homework

or you're proposing a plan

and you're getting feedback on that.

All right, Jake,

thanks for your words of wisdom here today.

You know, related

to defining performance requirements

and one of the key requirements

in a broader product requirements document,

which is linked below

in the episode notes,

where we have a whole

flowchart for defining requirements

as well as a template that you can download

for a product requirements document

but related to performance requirements,

which are the most critical.

Jake, you outlined the three main considerations.

The first, absolutely.

The performance requirements

need to be objectively verifiable.

Secondly, you need to have a plan

for the number of prototypes to build

so that you can get the appropriate

statistical confidence

that the product will meet

those performance requirements in the market.

And lastly,

arrange for a pre-sub meeting with FDA

where you outline your requirements

and they're going to focus primarily on

the performance requirements

related to safety

and efficacy of the product.

So that's it here from Key Tech.

Again, check out the link below to download

the Product Requirements Document Template

or look at the flowchart

for how you actually define

various requirements.

Until next time.

Thanks everybody for tuning in. Appreciate it.

Thanks Andy, take care.