“Research Ethics Reimagined” is a podcast created by Public Responsibility in Medicine and Research (PRIM&R), hosted by Ivy R. Tillman, PRIM&R's executive director. Here, we talk with scientists, researchers, bioethicists, and some of the leading minds exploring new frontiers of science. This season, we are going to examine research ethics in the 21st century -- and learn why it matters to you.
Today we are pleased to have with us Dr. Vardit Ravitsky, the President and CEO of the Hastings Center for Bioethics. The Hastings Center is an independent, nonpartisan bioethics research center, which is among the most influential bioethics and health policy institutes in the world. Vardit is a part-time senior lecturer on global health and social medicine at Harvard Medical School, and past full professor at the Bioethics Program, School of Public Health, University of Montreal. She's the past president of the International Association of Bioethics and a fellow of the Canadian Academy of Health Sciences.
Ivy R. Tillman, EdD:Vardit has published more than 200 articles and commentaries and has delivered more than 300 talks. She is a regular contributor to the media on bioethical issues. Her research focuses on the ethics of genomics and reproduction, as well as the use of AI in health. Vardit is a principal investigator on two Bridge to AI research projects, funded by the National Institutes of Health, that expand the use of AI in biomedical and behavioral research. She also serves on the steering committee of the National Academy of Medicine's initiative to develop an artificial intelligence code of conduct.
Ivy R. Tillman, EdD:Thank you for being here with us today, Vardit.
Vardit Ravitsky, PhD:My pleasure. I'm honored to speak with you.
Ivy R. Tillman, EdD:First, I wanted to thank you once again for being the keynote speaker for our annual conference, PRIMR25, this November in Baltimore. We are very much looking forward to hearing from you at our annual conference. So without giving away too much, can you share with our audience what some of the highlights of your remarks will be?
Vardit Ravitsky, PhD:Of course, AI is top of mind for everybody. Whether you're in biomedical research, whether you deliver care, whether you're running a health care system or an insurance company, it touches on everybody's professional lives, and let's be honest, on our personal lives as well. So we thought of making that the focus of the keynote. I'm hoping to map some of the actual uses that are currently hitting the ground, as well as what's around the corner, and then focus on the trust that we need to establish and promote for those tools to actually be effective and beneficial.
Vardit Ravitsky, PhD:So if you're a researcher and you're wondering how to use AI in your research, how to go to your IRB with that new method, and how to educate while making sure that you're doing everything ethically and appropriately, you'll find something for you. And if you are embedded in providing care, you'll find some remarks regarding what is top of mind for clinicians today, and especially how to promote the trust of patients in these systems, as well as the trust of clinicians. So really the intersection of AI, healthcare, and trust.
Ivy R. Tillman, EdD:Wonderful. I'm so excited to hear from you at the conference and excited for those who've already registered and those who plan to register to hear your keynote. So thank you once again. And before we get too far into our conversation, I would like to explore your career path. I always love to know the story of how you arrived at where you are and how you got involved in this field.
Vardit Ravitsky, PhD:Oh, that's always a fun story to tell, right? Because origins explain a lot about where we are today. I grew up in a family of philosophers. Everybody in my family was in the humanities, philosophy, and education. So as a little girl, I thought that that was the only profession available.
Vardit Ravitsky, PhD:And it's actually quite funny because when I went to study philosophy in university, people said, what are you going to do with that? And my response was, wait, there's something else? But the funny thing is that when I started doing philosophy as an undergrad, I realized I was most attracted to philosophy of science on one hand and ethics on the other hand. And I thought, so how do I combine these interests? And then as a young woman, something personal happened to me.
Vardit Ravitsky, PhD:A friend, a much older friend who was going through IVF, asked me if I would donate an egg. And that threw me into a deep reflection on what reproductive technologies mean today, all those new ways back in the '90s that we were starting to have babies. And I had to ask myself, wait, what does it mean to be genetically related versus socially related? What is the meaning of now having surrogate mothers and egg and sperm donors and babies created by three and four and five people? So I went to the literature to try to help myself understand what I was feeling and read quite a bit.
Vardit Ravitsky, PhD:And then I realized that what I was reading was bioethics. And I was hooked. I said to myself, okay, this is the field that combines everything that I care about. Ethics, science and technology, and issues of human identity. What does it mean to be a human being when these technologies around us change fundamentally who we are and how we relate to others?
Vardit Ravitsky, PhD:So early on, it was reproduction. It changes how we have families. Then it became genetics: genetic identity is so central to our lives, but it also challenges us. Then end of life: I worked quite a bit on cultural perspectives on end of life, because technologies also change how we die and when we're considered to be dead. And now AI feeds right into that, because it forces us to question what it means to be human when, you know, you chat with some of these, basically, algorithms, and you are feeling that you're talking to another human and you know that you're not.
Vardit Ravitsky, PhD:It really forces us to question yet again who we are and how we relate to others, but in a totally fresh way. So I feel like my whole career was about identity and the intersection of ethics and science, but technology keeps throwing new challenges at us, which keeps the work always fresh and interesting.
Ivy R. Tillman, EdD:Fascinating. And when you talked about the center of your work being about identity and the role that identity plays in science and trust and technology, I think it's central to many of the conversations that we're having today, not only within the field, but within the general public. So, yeah, a fascinating story, an origin story as you described it. And so I'd like to begin to discuss a little bit about the work that you're doing at the Hastings Center for Bioethics. You're involved in so many different interesting projects.
Ivy R. Tillman, EdD:You talked about AI. We know of a few others. But from your perspective, what are some of the highlights so far for 2025?
Vardit Ravitsky, PhD:The main highlight for us in 2025 is that we launched a new strategic plan for the coming five years. The technology is moving so fast that thinking on a five-year horizon is really challenging. But this is a strategic plan that kind of casts broadly what our priorities are, what themes we want to address. And we're really going to double down on issues of justice and equity, because as we all know, the world is not making it easy today for researchers, for clinicians, for patients to cope with these issues. We still live in a country that has terrible health disparities.
Vardit Ravitsky, PhD:And the political climate is such that exploring these is considered now brave. Have to be a moral leader to ask questions that previously were really obvious about social justice, about access to care, about our rights as patients. So, we're really leaning into those questions. It has always been a focus on our work, and we're sticking to our values and to our mission. We're also leaning strongly into issues of trust, because the challenges are only becoming bigger with AI, with political pressures.
Vardit Ravitsky, PhD:We always knew that we disagree on values. Now we disagree on facts. Or we don't even define facts in the same way. So the challenges to trust have become daunting. And a part of our strategic priority is to really tackle, from every direction possible, what it means to be trustworthy in this environment and how we can help people, again, patients, clinicians, the entire system, build trust and promote it.
Vardit Ravitsky, PhD:So that's at the level of the strategic plan. We have some really exciting projects that involve AI. One of them you mentioned in your introduction, Bridge to AI. This is an NIH wide initiative where the NIH was really visionary in understanding that biomedical research is gonna be revolutionized by AI tools, and that the basis for that is good data. If the data is biased, not representative of the patients that we want to help, if you don't engage with all the stakeholders in the process, if you don't ensure that your workforce is diverse, the outcomes will be very suboptimal.
Vardit Ravitsky, PhD:And so this initiative is actually about building flagship data sets that are trustworthy and ethical. It sounds simple. Oh my gosh, it's so complex, because we have four different projects. Each one collects a different type of data. And for each type of data, the challenges of trustworthiness and being ethical are different.
Vardit Ravitsky, PhD:So the Hastings Center takes a big part in that project. We also have an innovative project called Hastings on the Hill that is meant to take the bioethical insights regarding AI and health and translate them to policymakers, bring them to the Hill. Now, we all know that whether you're a staffer or a congressperson, you're busy and everybody's screaming for your attention. So how do we create a tool that is interesting, captivating, accessible, that really brings the issues and the values into your way of thinking when the time comes to maybe regulate? We created what we call a patient journey. This is a narrative.
Vardit Ravitsky, PhD:It's about a Mrs. Jones who goes through a health crisis, and at every step of the way, AI is somehow involved in her care, whether she knows it or not. And we use the narrative, you know, the power of storytelling, as a way to surface all those potential issues and benefits, so that as you read through the story, you are engaged, but you start asking yourself, wait, if that was me, what would I care about? Would I wanna know?
Vardit Ravitsky, PhD:Would I care about my privacy? Would I want to have the option to opt out? Or if this were my mom, how would I feel? It's supposed to be engaging and yet very educational, so that if you're a regulator, if you're a policymaker, you at least consider, you pause to consider what the ethical issues are that you should address. Other than that, we do a lot of engagement activities.
Vardit Ravitsky, PhD:So for example, we had a partnership with Cedars Sinai, the big healthcare system in Los Angeles, and we organized together a conference on, surprise surprise, trust and accountability regarding AI and health. And you had a room with clinicians and lawyers and patients, and you could just sense in the room, I don't want to say fear, but deep concerns. For doctors, are they going to lose their jobs? For those who run healthcare systems, how to make wise choices about what tools to purchase and how to implement them. And from patients and chaplains and families:
Vardit Ravitsky, PhD:What does that mean for me? Am I going to get better care? Is this a threat to me? So it was just a wonderful engagement activity, to have such a conference with a healthcare system, bringing the ethics to the ground.
Ivy R. Tillman, EdD:Fascinating. So do you have plans to do more of those types of collaborative engagements? Because it sounds like all the stakeholders were there. When I say stakeholders, those interested parties, those who are in the development, the actual conduct, but also those who benefit from said technologies. Do you plan on doing more of those types of activities?
Vardit Ravitsky, PhD:Yes. So Cedars Sinai was so excited about the success of this event that we're gonna do another one in 2026. But beyond that, we're partnering widely to bring these concerns to the public in a way that, as you just said, engages multiple voices and multiple stakeholders. For example, we have a partnership with the Museum of Science in Boston on a series called The Big Question.
Vardit Ravitsky, PhD:It's a conversation that we have with experts, again, accessible for the general public, about various big questions in science, technology, ethics. And the one that was just released this week is called, What does it mean to be human? And we talk precisely about how AI forces us to unpack this age old philosophical question in new ways. We have partnerships with global bioethics centers. We're gonna have a conference in Paris in the spring about AI as an existential threat and opportunity, bringing leading voices in ethics and philosophy from all over the world to discuss this with the public.
Vardit Ravitsky, PhD:So we're shooting in all directions, in the sense of employing various engagement tools, from academic conferences all the way to podcasts and online publications, to have inclusive conversations so that all voices are included, because, as we said in the beginning, this touches everybody.
Ivy R. Tillman, EdD:It absolutely does. I want to go back to the strategic plan that you discussed, particularly around the double down on the issues of justice and equity, which of course, you know, are very closely aligned with PRIM&R's goals, but also with me personally. Right? So I'm throwing my personal perspective in here for a minute. I am often asked, and would love to get your perspective, particularly right now in the times that we're in.
Ivy R. Tillman, EdD:How do you double down and how do you continue this really important work that's essential to everything else that you're doing in justice and equity?
Vardit Ravitsky, PhD:I've been having conversations with other presidents and leaders of organizations that do ethics, especially in the early days of the new administration, when we were all feeling quite dizzy from the pace of change around us. And one interesting thought that emerged, that I feel was shared by all the leaders in the field, was: no knee-jerk reactions. Don't be reactive. There were days that I went in to give a talk, and by the time I came out, the world had changed five times.
Vardit Ravitsky, PhD:And you could spend all your energy just reacting, issuing statements. And we said, no, let's sit for a second. You know, we have the luxury of being patient, considered, and reflective. And let's react in ways that do not respond to this crazy pace that is meant to throw us into disarray and to disorient us.
Vardit Ravitsky, PhD:So doubling down, first of all, means taking a deep breath and sticking to your values. You don't start changing language and shifting your values and reframing your mission because you're constantly trying to respond to the flavor of the day. So sticking to values, sticking to mission is one thing, and it has become very costly in some cases, right? You're under threat of funding being taken away, under threat of not being able to support your organization. So the sense of moral courage emerges again.
Vardit Ravitsky, PhD:Just sticking with the programs becomes an expression of bravery. So that's one answer. But the other is, I think doubling down means, of course, continuing to do your research and your engagement activities, but doing it in a thoughtful, wise way.
Ivy R. Tillman, EdD:Sure.
Vardit Ravitsky, PhD:I think what the current atmosphere shows us is that we are in our echo chambers often, and some of us forget to listen to what's happening outside. And some of us are struggling, truly, honestly struggling, to understand those other voices. They're so foreign to us.
Vardit Ravitsky, PhD:So I think doubling down is not just staying in the echo chamber and continuing on the exact same path. Sometimes it means broadening our perspective, listening more, you know, being more inclusive in what voices you listen to.
Ivy R. Tillman, EdD:Sure.
Vardit Ravitsky, PhD:And it doesn't mean letting go of notions of science and evidence and what deserves trust.
Ivy R. Tillman, EdD:Correct.
Vardit Ravitsky, PhD:But being truly inclusive means learning to be more sensitive to the world around you and learning how to engage those voices that are really challenging for you and really difficult to incorporate. So it's doubling down also on the listening, on what you mean when you say "I'm inclusive." Not comfortable inclusive.
Ivy R. Tillman, EdD:Great point. Great point.
Vardit Ravitsky, PhD:And doubling down on unpacking those notions that are so controversial today. You throw the d word into a room today, diversity, and you're causing an explosion. Right?
Ivy R. Tillman, EdD:Right. For no reason, in my opinion.
Vardit Ravitsky, PhD:But let's unpack what we mean. Different people use this term in different ways. Some for political reasons, some for great reasons. Let's unpack. Let's go deeper.
Vardit Ravitsky, PhD:So doubling down is not just staying on track. It's also unpacking further and further what we mean, adding clarity and sensitivity to the language that we use and to the debates that we're having.
Ivy R. Tillman, EdD:Amazing. Amazing. Thank you for sharing those insights. So I want us to switch gears. We're going to talk about artificial intelligence and the work that you're doing there, with the rapid implementation of AI in medical care and biomedical research and the issues that you're focusing on, as well as, as I mentioned, your role on the steering committee at the National Academy of Medicine on the AI healthcare code of conduct.
Ivy R. Tillman, EdD:It's really interesting and fascinating, the work that's being done there. Can you just describe it a bit to our audience and elaborate on what this code of conduct will do? How does it intersect perhaps with research and and ethical concerns around AI and research?
Vardit Ravitsky, PhD:It's a great opportunity to showcase the work that we've been doing. So first of all, the work has been completed, and the code of conduct has been published. It's available online. We had a launch webinar, attended by hundreds of people, describing the work. A few things make this code of conduct stand out, because there are many documents, internationally even, trying to provide guidelines and guardrails for the implementation of AI in health.
Vardit Ravitsky, PhD:First of all, this is really a 30,000-foot view in the sense that it's across the healthcare system. It works for patients who have concerns, and for CEOs who run healthcare systems, and for insurers and for regulators. It's really a very high-level view of what principles and what we call commitments should guide this implementation. But while being so high level, it also becomes granular enough to be useful by applying those high-level principles and commitments to various stakeholder groups. And that's where it becomes really interesting to me.
Vardit Ravitsky, PhD:So to give you a flavor of the commitments, what we call the code commitments that should guide AI in health: we're talking about things like advance humanity. Always make sure that humans and their interests are at the heart of what you're doing. Ensure equity. Engage impacted individuals. Improve workforce well-being.
Vardit Ravitsky, PhD:You know, I mentioned that some clinicians feel threatened by AI. And I even hear young people questioning whether they want to go into medicine or biomedical research at all, because they're concerned about whether they'll have careers at the end of their training. Another core commitment is to continually monitor the performance of the tools to innovate and learn. So these are very high level.
Vardit Ravitsky, PhD:To make them operational, we apply them per stakeholder perspective. So we have sections of the code of conduct that speak to AI developers, to researchers, to healthcare systems and payers, patients, families and communities, federal agencies, quality and safety experts, ethics and equity experts. So we didn't leave it at the very general level of "this is what you should think about." We brought it down to: if you belong to this stakeholder group, and the ones I named cover everybody in society, this is what's particularly relevant to you. Here are some cases and examples of how you can apply these commitments.
Vardit Ravitsky, PhD:So I feel that we created something that on one hand is very comprehensive and high level, but also very operationalized and granular. And that's, to me, a winning approach.
Ivy R. Tillman, EdD:It is.
Vardit Ravitsky, PhD:Another, maybe on a more personal note, what blew my mind in the work of this steering committee that guided the development of the code is how diverse and inclusive we were. You saw in one room around one table, CEOs of healthcare systems. Kaiser was there, UnitedHealth was there, Mayo was there. You saw patient representatives. You saw industry, Microsoft and Google, at the table.
Vardit Ravitsky, PhD:And you saw a variety of experts, in ethics (myself), law, and other fields. These were leading voices in each of those fields, but what struck me was the diversity around the table, and how people found a way to speak across the disciplinary divides and across their various interests as well. And a sense of shared purpose: that we're all there to help, at the end of the day, patients, and to improve the delivery of care and to improve how biomedical research is conducted. There was a real sense of solidarity and shared values that I think was fundamental to building the trust needed to produce such a document.
Vardit Ravitsky, PhD:And now we're in the phase of disseminating it, and you're helping me do that. So this is great.
Ivy R. Tillman, EdD:Wonderful. I love that we can support that. And it's a model that can be used in other areas as well. It sounds like this collaborative model was really unique and needed. I'm gonna move to misinformation and disinformation, particularly going back to when you discussed some of the priorities of the Hastings Center's strategic plan. You talked about issues of trust, AI, political pressures, disagreeing on facts, and where trust sits, and really trustworthiness. So, you know, we don't have to get into great detail about how public trust in science is at a critical crossroads. It's something that, of course, we here at PRIM&R are focused on.
Ivy R. Tillman, EdD:You've spoken in the past about your experiences on social media combating misinformation and disinformation. What are some of the strategies that you've used to disrupt misinformation about research and science?
Vardit Ravitsky, PhD:You know, at the Hastings Center, we always say good ethics starts with good facts. And of course, during COVID, I was in the media trying to promote public health interventions and help protect the public, especially in the midst of the crisis when it was literally a matter of life and death. So I think the number one strategy when you try to tackle misinformation, especially on very controversial topics, such as during COVID or now with the vaccine debate generally, is to stick to the facts. We always say good ethics starts with good facts, and at the Hastings Center, we include evidence in all of our analyses and all of our publications.
Vardit Ravitsky, PhD:The problem with that is, of course, that we're now living in a reality where what counts as evidence, how to distinguish facts from value and opinion, has itself become controversial and polarizing. So, the way I see the challenge, and this is kind of the fine line I'm trying to walk when I do media or debates or when I publish or design research, is that on one hand, you want to have conversations with the other side, right? All the experts say, you don't just throw information at people and change their minds. It doesn't work. So you have to listen.
Vardit Ravitsky, PhD:You have to acknowledge where that other voice is coming from. Unpack the fears and the origin story of why people are struggling with expertise, with authority, with science. But at the same time, you can't give in to what is outside of what we consider to be scientific evidence. You can't give a stage to misinformation in the process of listening. And even though you want to be nonpartisan, you know, the Hastings Centre is famously a nonpartisan research institute, Yes.
Vardit Ravitsky, PhD:It doesn't mean that all sources of information and all perspectives are equal. So walking this fine line, when you're trying to actually have impact and help people make good choices about their health, about their families, or help people in their careers decide how to run their research, how to deliver care. You want to start from a place of listening and acknowledgement, but you also want to stay grounded in what you know Absolutely. Science knows at a given And that depends on the topic that you're discussing and how hard you have to push back against, again, the flavor of the day misinformation. AI is gonna make all of this much harder because it's gonna become difficult to distinguish sources of information.
Vardit Ravitsky, PhD:And the misinformation is becoming more and more convincing in how it's presented. Yes. But that is to be the fine line. It's very easy to just go on TV and say, we, the experts, know that, and just give the facts and then remind people that they should value justice and that they should That's care about easy. What's hard is to speak in a way that does not antagonize, that does not make people feel that they were never heard, and that you're not using your authority that they completely challenge to just have this top down approach of we're gonna force you to do certain things because we know better.
Vardit Ravitsky, PhD:So you have to find a different tone for the conversation. Right.
Ivy R. Tillman, EdD:Wow. Once again, wonderful perspective and advice on tackling misinformation. So you mentioned the concerns that you hear from younger professionals, individuals considering biomedical research, but also bioethics. What kind of advice would you give to someone, say someone in university, who's considering a career path similar to yours? Right?
Ivy R. Tillman, EdD:What would you give them as far as advice regarding a career in bioethics or a career in biomedical research? Because, you know, at Premier, we're very much concerned about the pipeline that has been disrupted of young professionals and research ethics, bioethics and biomedical research. So what type of advice would you share with someone right now?
Vardit Ravitsky, PhD:You know, Ivy, this is really a funny moment for me, because at the Hastings Center, we run a series called Bioethics Chats, where I chat with various fellows of the Hastings Center, leading voices in the field. I always conclude my conversation with them by asking the exact same question that you just asked me. So I'm finding myself on the other side. I'm on the spot now. I always put them on the spot.
Vardit Ravitsky, PhD:I'll answer you with great honesty.
Ivy R. Tillman, EdD:Sure.
Vardit Ravitsky, PhD:There are days that young scholars, PhD students, postdocs, or even young academics get in touch and ask for career advice and mentorship. And they really ask me, What are my chances?
Ivy R. Tillman, EdD:Right.
Vardit Ravitsky, PhD:And I feel that it's so unfair for me, with my, you know, established position in this field, to say, oh, go with your heart and stick to your values and don't give up and it will be okay, because I don't know if it will be okay. And with funding being taken away left, right, and center, sometimes I feel, who am I to tell young people that they should follow the path that they started on, and give them a sense of security that if you just have grit and resilience, eventually you'll find your way?
Vardit Ravitsky, PhD:The world is changing so much that I don't know about my way. How can I convey that sense of, you know, stick with it? And at the same time, I still am a believer that if you have great passion and you know what you want, something will pan out. And by something, I mean it's not always going to be the exact career that you imagined. Which is why my actual advice, setting aside my self-reflection about how one dares to give advice in this day and age, would be: diversify your skill set and also your expectations. If you're an academic, you may end up in government or in industry or in a nongovernmental organization. So open your mind and remember that the topics, the values that you care about, you could address those and contribute in multiple ways.
Vardit Ravitsky, PhD:Yes. Maybe your whole life you saw yourself as a university professor, but you're gonna end up being a policy advisor. Sure. And skill sets. You know, I don't think we have the luxury of being ivory tower intellectuals anymore.
Vardit Ravitsky, PhD:I think we should all be public intellectuals. Start writing for the media early on. Put yourself out there. It's risky, yes. But become known.
Vardit Ravitsky, PhD:Do the op-ed, do the podcast. Be excited about speaking to the public more than terrified of it, even though it can be costly, as I've experienced myself. So yeah, diversify your skill set, diversify how you think about what science is and what bioethics is. And keep an open mind as you advance in your career; seize opportunities even if they don't align precisely with how you saw yourself.
Vardit Ravitsky, PhD:Keep your identity flexible.
Ivy R. Tillman, EdD:Absolutely. Wonderful, wonderful advice for those entering the profession, but also for those who are currently in the profession too, right? We see it changing the field, ever evolving. So thank you. Thank you for that.
Ivy R. Tillman, EdD:And so as we head to wrapping up, my last question relates to the future. We've talked about the state of where we are now and your thoughts there. When you think about the future, it used to be five years out, but let's just talk about a year from now, right? Because we know things will change so quickly. What are you most optimistic about for the future?
Vardit Ravitsky, PhD:I'll answer specifically in the context of AI.
Ivy R. Tillman, EdD:Sure.
Vardit Ravitsky, PhD:I see such huge benefits, and some of them are low-hanging fruit. You know, AI scribes that help doctors synthesize their notes. The tools that are now increasingly approved for clinical use to help us with diagnostics.
Ivy R. Tillman, EdD:Sure.
Vardit Ravitsky, PhD:I think about the potential benefits in actually helping provide better care, whether it's reducing human error in diagnostics and treatment recommendations, or just better monitoring of complex cases. You know, the rate of human error in medicine and in research is very substantial. And I think one of the mistakes we make is to compare AI to nothing, or compare AI to an optimally perfect world. But when we start integrating AI into biomedical research design and into the delivery of care, we have to remember that we should compare it to what we have now, which is very imperfect. And so it's good to talk about the concerns and the lack of regulation and the lack of accountability and all of our fears.
Vardit Ravitsky, PhD:That's great. We need to tackle that. We need to address that. But let's take a moment with the incredible benefits that are here and behind the corner.
Ivy R. Tillman, EdD:Yes.
Vardit Ravitsky, PhD:In actually making care safer, in making the work of clinicians easier and better so that they can spend more time with their patients, in what it means to realign your skill set, not to lose your job, in what it means to accelerate research, not make researchers obsolete, but rather give them powerful tools that can accelerate their work. I love spending time in the yes, not in the naysaying. And in remembering that these are incredible tools that for decades we thought would be the holy grail of medicine and research. To some degree they are, and we're finally seeing it. So let's end on this positive note of focusing, as clinicians, as patients, as bioethicists, as researchers, on those incredible benefits that we're beginning to see now, and, when we look into the future, ensuring that they're delivered in a way that doesn't generate more harm.
Ivy R. Tillman, EdD:Wonderful. Thank you. Thank you for spending time with me. Thank you for the conversation and just imparting your wisdom and insights. They are so needed at this time, and so we're so very appreciative of you joining us today for our podcast.
Vardit Ravitsky, PhD:Thank you for this great opportunity, and I'm really looking forward to the keynote.
Ivy R. Tillman, EdD:Absolutely. We are too.