Trials with Maya Z

In this episode, Maya meets Dr. Leeza Osipenko, founder and CEO of Consilium Scientific, a non-profit organization dedicated to improving the integrity of clinical research.

Maya and Leeza discuss the following topics:

(2:18) The role of NICE in developing clinical standards
(6:23) Why Leeza started Consilium Scientific
(9:35) How to avoid suboptimal clinical trials
(19:02) The mechanisms that improve clinical research integrity
(25:30) The ultimate goal of Consilium Scientific
(32:47) The biggest challenge in clinical trials at the moment


Tune in today for a thrilling journey into the world of clinical research – expand your knowledge! 

Maya Zlatanova, CEO of TrialHub
Leeza Osipenko, CEO of Consilium Scientific 

See More:
TrialHub
Consilium Scientific


Creators & Guests

Host
Maya Zlatanova
CEO of TrialHub

What is Trials with Maya Z?

90% of clinical trials fail, and 85% get delayed. Let's dive deep into the world of planning and running clinical trials with some of the most experienced and passionate people from the industry and find out what makes trials more successful or more challenging. Welcome to the Trials with Maya Z podcast!

Maya Zlatanova, CEO of TrialHub: https://www.linkedin.com/in/mayazlatanova/

Hello, everyone. Welcome again to Trials with Maya Z. This is Maya. Perhaps you can hear that I am still recovering from COVID, so apologies if my voice doesn't sound so good, but I will do my best because I have a very exciting guest today.

And we will be speaking about something that, let's say, we don't talk about every day, though we probably should, and she will tell us why. So, without further ado, let me introduce you to Leeza Osipenko, the CEO and founder of Consilium Scientific, a not-for-profit organization working to improve the integrity of clinical research.

Leeza, tell us a little bit more about your background.

Hi, Maya. Thank you so much for this invitation. I'm very excited to be on your podcast. So, I have a background in academia, consulting, and the public sector. And I wouldn't call myself an entrepreneur, because I started a nonprofit organization, so it's a very different concept.

It counts, it counts.

So I used to wear a lot of different hats, but I would say my key defining role has been in the public sector as Director of Scientific Advice at NICE, and that's where my infatuation with clinical trials began.

That's where I learned the most. As part of this role, together with my team and many experts and colleagues from different organizations, clinical experts and economists, we reviewed more than 250 clinical programs and learned the good and the bad of clinical research. And I think this particular experience led me to establish Consilium.

Okay, just for the audience that's outside of the UK, just explain what NICE is.

Yes, NICE is a public sector organization in the United Kingdom, responsible for England and Wales, because Scotland and Northern Ireland have their own jurisdictions and their own decision-making bodies. It's the National Institute for Health and Care Excellence, which has a lot of different functions, but the key function it's most famous for globally is deciding whether to reimburse new pharmaceutical products, new devices, and new diagnostics, and that's what gets NICE onto the front page of the newspapers or onto the BBC. But it should be remembered that NICE also plays a very important role in clinical guideline development, in clinical standards, and in the general communication of evidence available on different technologies. So it's much more than just the health technology assessment function.

Yeah. So basically, that's the institution from which everyone in the clinical research industry, especially sponsors, dreams of getting approval and then reimbursement, and it's similar with other local institutions. So, Leeza, you mentioned that you worked on the assessment of more than 250 clinical programs.

And you said that you found some interesting things. Enlighten us: what was so interesting, and what was the main learning when you were assessing these programs?

Yeah. So, companies come for scientific advice, or they should come for scientific advice, when they're planning their pivotal trial, because this is the point where we can provide input in the most productive way. And it's not just NICE; scientific advice is given by many different organizations and HTA agencies across Europe, Canada, and Australia.

It's given by EUnetHTA, which is a group of HTA agencies in Europe providing joint advice. It's given by the European Medicines Agency. It's given by the MHRA, which is the UK regulator. So it's not limited to NICE, but the whole idea of engagement is to figure out prospectively what needs to be done to meet the needs of the payers and regulators, rather than retrospectively saying, 'Hey, this is the trial we have done. Does it meet your criteria?'

Unfortunately, situations like that did happen, and they did not make for a very productive meeting. But sometimes they're logical, because the company comes with relevant questions, figuring out what else can be done in terms of what kind of data can be collected after the pivotal trial has been locked or completed, and whether further data can be collected through either phase four trials or real-world evidence.

Of course, it's natural that the regulators and HTAs have divergent requirements sometimes. Well, not just sometimes; they have different objectives. So the company is focusing on regulatory objectives and matching what needs to be done for the FDA, which is a key, key market for every product launch, with very few exceptions. And then meeting all the needs of the HTA agencies on top of that is, needless to say, very challenging.

So, we have definitely seen very good programs, very clear proposals, a very genuine desire from the company to do their best for patients and for the system, to get the product on the market. And of course, we've seen many programs which could have been much, much better in terms of clinical reality, in terms of choice of comparators, choice of populations.

And the job was to really explain to the companies that gaming the system is not the way forward, and that the idea is to really collaborate and try to figure out how to design such a trial and how to collect relevant data, to benefit themselves and to benefit patients and the clinical community.

Yeah. So maybe you can explain what integrity in clinical research means, and what led you to start Consilium Scientific and start fighting for integrity in clinical research.

Yes, I had an absolutely golden experience at NICE, and it helped me learn what goes wrong with clinical trials. A lot of the time these things might be system-driven. They might be driven by the different requirements of stakeholders. They might be driven by the sponsor of the trial. They might be driven simply by the changing requirements of fast-paced developments in a particular clinical field. So there are so, so many things. It is such a complex environment. And unfortunately, there is no standard protocol where you can look things up and say, okay, tick all the boxes, and now I have a perfect trial. It's an art, just like medicine. So we do have guidelines.

We have protocols for what should be done, but a lot of decisions need to be made to fit the circumstances, the process of making those decisions is complex, and there are obviously a lot of gray areas. Academics have done quite a bit of research to show that many trials are quite suboptimal, and they lead to decision-making while the evidence coming out of these trials is simply not ready.

Regulators and payers know this, and they've developed different techniques to try to deal with it. Sometimes these techniques are successful, sometimes they're less successful, and many products get on the market with very suboptimal clinical evidence. And when we have further information, unless it's alarmingly clear information that the product doesn't work, these products stay on the market.

That's an issue, and a lot of the things that go wrong with clinical research can be avoided. It's also important to emphasize that my expertise is only within the design of clinical trials, rather than their technical and practical execution, where I think there's a plethora of different issues I'm simply not aware of: actual patient recruitment, site monitoring, and many other things that people dealing with the day-to-day running of clinical trials face. That's a whole other realm of what can go wrong with clinical trials.

Of course.

Yes, clinical trials are very complicated. I can probably compare them to an organism, where there are so many different parties that have to work together. They're like parts of the body, and if one of these parts does not work properly, then the rest will break down as well.

So, I understand. Sometimes, like doctors, we understand one area really, really well, and for the rest we only have some idea. That's why it really requires collaboration, which unfortunately is also pretty hard. But you said something that really triggered my curiosity.

You said that there are suboptimal clinical trials producing suboptimal data, and that this can be avoided. If we know how to avoid this, why does it keep happening?

That's a very good question. A lot of the time we find out that something is suboptimal when it's too late, when the study has been reported and academics or clinical academics publish a critical review. There's a fantastic community on Twitter critiquing clinical trials, especially clinical trials in oncology.

A lot of the time, we find out post factum, at the stage when the trial is under consideration with the FDA or the FDA has already given approval, and then clinicians have fierce discussions about how wrong this trial was: that this comparator is completely inappropriate because it is not what we use in this particular setting, and that these patients are not representative of who we would treat.

And that this crossover was completely inappropriate. So all these things come up as a reaction to what has happened. As part of my career in scientific advice, I was in a role where I could try to preempt these problems and advise companies running trials to think about this proactively, beforehand. That being said, of course, in my career I dealt 99 percent of the time with commercial trials.

What goes on with academic trials is a complete black box. And it would be a very wrong assumption to think that they are better, or that they are fine in their own way. So, unfortunately, there isn't even a system that provides this kind of scientific advice process for academic trials. Yes, of course, there are grant-giving bodies, which have their own processes for evaluating protocols, for

judging the integrity of these trials; some of them are good, and some of them might not be that good. So I'm not there to judge, but we do know that there are many academic trials which also could have been better, and there are even fewer tools out there, less transparency, to preemptively ensure that they tick the boxes. And it's also extremely important to remember that it's always a moving target.

Always a moving target. A trial that is recruiting in 2023 and will be reporting, let's say, in 2025 will be reporting in a completely different environment, where the comparator has changed, where clinical practice has changed. So these are very difficult decisions, and the question that needs to be asked is: at the moment of recruitment of the first patient, have you done your best and have you reflected reality or not? These judgments are also very difficult to make. Another point to bring along is that it's always much easier to sit in the critic's seat than to prevent the problem in the first place.

Actually, that's exactly what I was going to ask you. Is there anything we can do in order to predict that this can happen? And I was thinking, because you mentioned how close a trial should be to reality, that for years we've been discussing the role of real-world data.

So can this actually help us, and do institutions like NICE or other institutions help companies with frameworks for how they can predict which design is closer to reality, in what way, and so on and so forth?

Yes, of course. I have to give credit to the payers, to the regulators, to the decision makers, because, A, they're under massive pressure to stay with the times and update their processes and their decision-making frameworks, and B, they work very closely with industry, trying to make sure they are aligned, even if it may not seem this way.

So NICE, as an example, has been changing or updating its methodology, and it does have special guidance on the use of real-world evidence, as do many other organizations. There's a huge emphasis on it now because of massive pressure to speed up clinical trials, to make clinical trials more efficient.

And there's definitely a legitimate call for this because, on the one hand, there's a lot of bureaucracy which can be avoided. On the other hand, these measures need to be taken very carefully so that the right corners are being cut, not the wrong ones, and unfortunately, that's not how life works. So the ability of real-world evidence to fill in the gaps is there, but once again, the question is: for what? Yes, sometimes it's very, very helpful.

For example, say we had a single-arm trial for some kind of product, it looked quite promising, and it was put on the market. Then we try to figure out the efficacy of this product from real-world evidence, without comparative evidence. This is really, really difficult, unless we have a miracle product.

If we have another imatinib coming to the market, I'm sure real-world data will provide quite convincing answers, saying, yes, it does work. Yes, patients do live longer. Yes, it's practically a cure. But how often do we see imatinibs entering clinical trials?

Yeah, that's actually exactly one of the issues a previous guest of mine mentioned: the fact that innovation is not moving at the same speed as before. I don't know; I would rather be on the more positive side, but I also understand that businesses are trying to optimize their costs and spend on clinical research. So how much are we actually going after the big changes, the big innovations, and how much are we just trying to get a product that can command a good price and probably works?

But again, is it 10 times better? 100 times better than before? Maybe not. So that's also one of the challenges that we've been discussing.

And I wonder, when you say real-world data, real-world evidence: I've noticed that whenever I speak about this, different people understand it differently.

A lot of people actually see real-world data as electronic health records, or claims data in the States. So, can you tell me your understanding of real-world data? What would you consider real-world data and real-world evidence?

Maybe let's consider it at NICE, for example.

Yeah, the definition is actually quite wide. Yes, electronic health records can definitely play into this, and claims data can definitely play into this. Basically, it is any data that does not come out of a controlled comparative trial: data that is collected in the real world, whether through prospective collection of information or by looking back at the records.

This is real-world data. So the idea is not in the nomenclature, not in the word you choose to describe it. The idea is to label things clearly and use them for the right purpose, to answer relevant questions. That's why, for example, NICE has a very good document describing step by step how real-world data collection plans need to be put together, how they can be made consistent, and how the reporting should be done, to ensure that all of this builds into a transparent, coherent story rather than pulling information from wherever it exists, trying to fit things together, plugging it into the model, and seeing what it says.

Unfortunately, such steps are also necessary because there is not always much you can do, or people might not have the budget for prospective real-world data collection. Sometimes historical data that has been collected in the real world might indeed be very useful. So the idea is proper documentation and proper selection of these sources: not cherry-picking particular points that fit your agenda, but clearly reporting why particular data was used and how all the data at your disposal was considered. And no, unfortunately, there is no single source that says this is real-world data and this is not. So basically, yes, data on a product or an intervention collected in a real-world setting at whatever time is real-world data.

Interesting. So, I'll bring you back to integrity, because you mentioned that Consilium Scientific is fighting for the integrity of clinical research. Tell me, what are the mechanisms for fighting for better integrity, for improving the integrity of clinical research?

Yes, there are many different approaches. You've had a discussion with Till Bruckner, who is the head of TranspariMED. To me, he is the epitome of fighting for integrity in clinical research, taking a very deep dive into clinical trial reporting, and he has done fundamental work to actually improve clinical trial reporting across the board.

That has been a massive, massive achievement, which has support from student organizations across the world. Companies are very much on board; I have to say industry is doing much better than academics at reporting. So that has been a massive change. But there are many, many other aspects.

So, for example, at Consilium, we have done some work on clinical trial registries. I find that world to be an utter mess. There's no better word for it.

Because, unfortunately, the WHO did not manage to take a sufficient lead, or maybe it should not be the WHO, I don't know, but nobody managed to take a sufficient lead to say, yes, we are the global entity tracking clinical trial registration. To me that makes a lot of sense, because a significant number of clinical trials, very important, pivotal clinical trials, are international.

So there should be a unique source. Unfortunately, a given clinical trial can be registered in many different places. This leads to duplication of records, and many records in the registries are not of good quality: they are self-filled by the sponsor or by the academic group, and they might not be updated.

For example, it's all public information: you can look up a particular study, I can look up a particular study. You might pull it up and say, that's pretty clear, that's very informative, that tells me a lot of useful things. But when you start looking at the totality of evidence, for example if you decide to pull studies on a particular biomarker or on a particular disease, you start seeing how much missing data we have, and it's really mind-boggling. I'll give you an example: for blinding information, you will see that almost 40 percent of trials are missing this information in the registry.

If there's no blinding, they should say 'none', which is completely fine, but instead this field is simply left empty. About 40 percent of trials are also missing information on the phase of the clinical trial. And for 13 percent of trials, believe it or not, the intervention is not described. So the quality of the data is very lacking. That's another example. And then there's what we discussed in terms of trial design and integrity, but that's a qualitative parameter which is not documented as such. I mean, the design is reflected in the registry, but you need special expertise to understand whether a trial is of good quality or not. That's a whole other problem: how do you design a particular trial that does answer the clinical question? A lot of trials are designed to tick a few regulatory boxes, and sometimes we're lucky as a society that the trial does answer the question, and a lot of times we're not.

So, there are a lot of different components where integrity must be improved.

But aren't the regulatory bodies the ones in charge of making sure that the clinical trial answers the critical questions?

Yes and no, because the remit of the FDA and the EMA is to look at the safety and efficacy of products. They definitely have very clear standardization and guidelines, and their job is not to decide which product is better. Their job is to decide whether it's safe enough to get to patients, whether it meets the requirement for the defined endpoint, and whether the trial is designed to answer the null hypothesis in the research question.

But in the end, we put products on the market to help patients. That's where HTA bodies come in, and that's usually the gap that can be left open, where the trial ticks the boxes for the regulator but might not tick the boxes for the payer. There's also another thing, because I think it's important to distinguish this from the scientific advice discussion, where ahead of time you discuss with the regulator:

Am I right to do this? Are we okay to do this? Is this the right consideration? Unfortunately, what often happens is the regulator gets a trial that has already been completed. There is a very clear statement of unmet need; they say, for example, there is no treatment in this line of cancer, or there is nothing for these patients.

There's very significant pressure from patient groups, which is understandable, to get at least something rather than nothing. The regulator also feels that pressure. They open the box, this is what they see, and then they need to make the decision and ask: where can we compromise?

Yes, this trial could have been better designed. Yes, this trial could have collected this, this trial could have done that, but they have to make the decision. And that's why not every product gets a yes when things are completely wrong. But it doesn't mean that if the product got a yes from the regulator, everyone on the decision-making panel was happy with how that trial was designed.

So, yeah, interesting. If you had to summarize, what's your main goal with Consilium Scientific, or at least for next year, for example? If you don't have a long-term goal, is there anything specific you would like to achieve?

I'd rather give you the ultimate goal, because who knows how next year pans out. So, while we are working on different projects to improve the integrity of clinical research, the main ambition is to really create a community, to create a platform for people, for organizations, for academics, for lone scholars, for anyone working in this direction; to give them a grounding, to give them funding, to give them opportunities to succeed with their work at the policy level. Because I am aware of a lot of organizations, a lot of individuals, who do this outside of their main jobs, looking at bad trials, looking at underreported trials, working with the registries, trying to make this world a better place, but sometimes the work is disjointed and, most of the time, these people or organizations don't have sufficient resources.

The idea is to join forces to bring policy change, because that is the ultimate goal: to say, okay, take the issue of endpoint switching, what can we do to put together a framework, to put together a policy, to put together such circumstances that this practice is not widespread, that this practice is allowed only for very specific reasons?

And sometimes endpoint switching is reasonable. However, the extent to which it has been found between the protocol and the study reported in the journal does not seem to make much sense. So that's just one example, but there are so many little bits and pieces that many people are working on, and there is not sufficient support in the system to give this very important work a sufficient profile and sufficient backing, to get the quality reviewed, to get the methods sorted out, because in the end we will all benefit from it.

So there is always this dilemma: should this be a governmental effort, a societal effort, or a not-for-profit organization's effort? Who should lead it? But that's probably an even bigger topic, and probably there is no single right or wrong answer here, but it's definitely worth thinking about.

What about your main challenge, Leeza? Well, not yours particularly, but, you know, Till Bruckner, you, incredible people who have founded incredible organizations fighting for clinical trial integrity and transparency, but overall, basically, for society. So, how can we support organizations like yours?

I would say there are two key challenges. The first, most obvious one, is funding. The second one is the realization that this is indeed a problem, because what happens is that society is built around quick gratification and quick wins, and what we're working on is actually a very difficult topic for pretty much every stakeholder. Patients, for example, might be shocked that clinical research is not the best it can be. They don't understand it

most of the time, but they don't want to hear it either. It is so, so disappointing, especially when their relatives are in clinical trials, when they themselves are in clinical trials, when they are about to benefit from a particular product that came out of clinical trials. I mean, there's a good example right now.

There's a big hype around Alzheimer's drugs.

I was thinking about the same thing.

It's understandable how families of these patients want them to have at least some hope of getting that drug. And there is a lot to be said about whether these drugs should be on the market and whether they are actually helping patients or not.

There are a lot of these situations with oncology drugs, especially in the U.S., where people go bankrupt, people sell their houses to get a particular treatment at 100K per year, to extend survival by six weeks or so. So a lot of the time, patients don't understand it. And interestingly, many other stakeholders don't want to hear it either, because this is not money-making news.

The money-making news is: we have a new blockbuster, we have another Ozempic, that's where the future is. No one wants to hear that this trial was not good enough, that this drug is not as good as they thought it might be. It's not good news for shareholders. It's not good news for the company. It's not good news for regulators, especially if, let's say, they approved the drug and some smart academic is ripping apart the clinical trial behind it.

So it's a very difficult territory, where being liked is very difficult. But our objective is not to be liked. Our objective is to do our best in the background. It's not about making headlines. It's not about getting your name on the wing of a hospital. It's about making sure there's someone working in the background. In the end, people don't even need to know how the system works.

People need to trust that somebody in the background has done everything that needs to be done, and that they didn't enroll into a trial that won't make a difference. And it's very important to know that a negative trial is a very important trial, and it will make a difference: if it's a well-designed trial, we will know that other patients should not be getting this drug.

And it's not about the success of products getting to the market. It's about getting things done right.

Yeah, you're right. Sometimes, no, not sometimes, always, we learn more from our failures than from our wins. So even when a clinical trial does not provide positive results, that can give us ideas about what not to do, which sometimes can be even more powerful than the other way around.

Incredible. I understand the complexity of the whole topic, and I'm pretty sure we could spend a lot of time going back and forth about what's right and what isn't. I can also understand it from the patient's point of view; I remember what happened with my family when my sister was going through a nightmare. Sometimes you just hope that this works, and I'm not speaking about clinical trials here.

With drugs too, you just hope, and you also want to see the positive news and not focus on the opposite. But at the end of the day, science is about being true. So kudos to you and your organization for fighting to bring more trust into clinical research, and I'll be following you.

I have one last question, Leeza. It's something I ask everyone I interview. From your perspective, what makes or breaks clinical trials?

So I'll be brief. Thank you for this question. I think what makes clinical trials is integrity and transparency and what breaks clinical trials are academic egos and shareholder expectations.

Wow, that was very to the point. I like it very much. Thank you so much, Leeza, for your time, for being transparent, and for doing what you're doing, and I hope that we can find a way to support more organizations like yours. Thank you.

Thank you so much, Maya. I so appreciate your time, and thank you for the opportunity to speak.