Conversations in Pulmonary, Critical Care and Sleep Medicine by the American Thoracic Society
[00:00:00] non: You are listening to the ATS Breathe Easy podcast, brought to you by the American Thoracic Society.
[00:00:18] Eddie: Hello and welcome. You are listening to the ATS Breathe Easy podcast with me, your host, Dr. Eddie Qian. I'm also the host of the ICU Ed and Todd-Cast podcast. Each Tuesday, the ATS will welcome guests who will share the latest news in pulmonary, critical care, and sleep medicine. Whether you're a patient, patient advocate, or healthcare professional, the ATS Breathe Easy podcast is for you.
Joining me today is Dr. Richard Schwartzstein, chief of the Division of Pulmonary and Critical Care Medicine at Beth Israel Deaconess Medical Center and Professor of Medical Education at Harvard Medical School, who's gonna be discussing AI in clinical practice and medical education with us. Welcome, Dr. Schwartzstein.
[00:00:55] Richard: Thanks very much. Great to be here.
[00:00:57] Eddie: Listeners of this podcast might think I'm a little bit of a one-trick pony, with almost all of my prior episodes on this feed being about AI: AI and scientific writing, AI and clinical research. However, I'm particularly excited about this episode 'cause it really merges two of my main interests: AI, and then clinical practice, taking care of patients, and also medical education.
I think the medical field, and I think this is fair to say, please correct me if I'm wrong, has been pretty slow to adopt new technologies. For example, my pager is within my reach right here, and I deal with fax machines in my day-to-day life. But AI has been such a global phenomenon that even us Luddites in medicine, and likewise in medical education, have caught on.
But do you have any general comments on AI, and AI in medical education, before we jump into it?
[00:01:52] Richard: It's an interesting point about the technology. I think we do take up medical technology fairly rapidly in terms of procedures and things we incorporate. But around electronic medical records and some of these other types of things, perhaps we're a bit slower. In our defense, I would say that as a profession, this is about the care of patients, and we don't wanna make mistakes.
And so I use the term "embracing it" while being healthily skeptical, if you will, at the same time, while we're trying to figure out the best way to do this, with our patients always in mind. That's my caveat for adoption.
[00:02:35] Eddie: Yeah, that absolutely makes a ton of sense. But it is something that this AI, GPT, LLM craze that's taking hold of the world has caught on in medicine as well. Like I said, I'm so excited to have you on the podcast today as really a true pioneer in this field. You have a paper published in CHEST last year titled "Artificial Intelligence and Medical Education: A Long Way to Go," where you looked at a GPT model and how it performed with a first-year medical student prompt. Now, I'll first say that I thought that was a fairly involved prompt, but the conclusion was, well, to use your own words, that it has a long way to go. I was curious what your motivations were in doing that project, what your experiences were, what expectations you had, and whether they were met.
[00:03:30] Richard: I often start out by saying that this is really an evolution, I would argue, as much as a revolution, in the sense that we've been using computers and the internet to answer a lot of the questions we have for a number of years now. If you asked, what drug do I give for community-acquired pneumonia? You could find that on the old internet pretty quickly. So factual questions and factual answers have not been the key for a number of years now. Artificial intelligence does take it another step beyond that, but it is an extension of that notion of finding answers to questions that come up fairly commonly on a regular basis.
So I'm looking at it more in an evolutionary way, I suppose. Now, having said that, what's really been remarkable, even since we published that article, is that these further iterations of AI are much better, and the change is not occurring on an annual or semiannual basis. It's almost weekly at times.
Things that I put into one version, a month later it can answer or do in a different way that is more accurate, for stuff that I really know about. So the pace of change is much different than just Google searches in the old internet framework. And that's what's become a little bit scary at some level, because it is evolving quite rapidly.
Nonetheless, I think there is still cause for caution about where we are and how we use it.
[00:05:13] Eddie: I was even gonna bring that up: when you published, you were using ChatGPT version 3.5, and I'm pretty sure version 5 came out very recently. So that's two full numeric iterations ago. Our publications really can't keep up with that.
And when we were talking before, you had sent me a prompt and response from a clinical scenario that you had recently. The point you were trying to make was that the GPT model got the ultimate answer wrong, and I agree that the quote "practical next steps" it suggested left a little bit to be desired. But I think it did a pretty good job laying out at least the differential diagnosis and what kinds of things to be thinking about.
[00:05:55] Richard: Yeah, I think its strength is differential diagnosis, frankly. A lot of the papers that have been published have looked at things like New England Journal of Medicine cases.
And as you and I have also chatted a little bit about, the issue of what you give it, what the prompts are, becomes a real key. In those New England Journal of Medicine cases, it's a perfect history, a perfect physical exam, all the laboratory data, and it does a wonderful job coming up with the differential diagnosis, and often the correct answer.
But we're not always so perfect in what we provide it, and that's what I worry about for a lot of the clinical applications right now.
[00:06:38] Eddie: Yeah. Let's transition to talking about medical education and how we can teach our trainees, who are growing up in this new era where this is becoming second nature to them in their everyday life.
How can we be talking to our trainees, both medical students and even our GME trainees, residents and fellows, about how to use this and what kinds of things we need to be aware of?
[00:07:12] Richard: I think there's fairly broad consensus, at least among people that I speak to around the country, that there is still a need for what we call foundational skills. The term "de-skilling" is one of these fears that people talk about commonly: that you'll forget how to calculate an alveolar-arterial oxygen difference, or you'll forget how to think physiologically about hypotension or hypoxemia or whatever, because you don't do it anymore.
You just look for this final answer. And I think there is a real concern about that. And there are times whether. When, because of speed or the complexity of a case or whatever, you're really gonna need that or your ability to identify an error that's going on. With AI is a problem. and we talk [00:08:00] about some cases we might make as educators.
I still am finding that where I'll ask it to do something, create a case, and it makes a mistake in the data that it gives me for that case. So those foundational skills, particularly pertinent for pulmonary critical care, sleep doctors, 'cause we're really embedded in physiology in terms of our day-to-day work.
You still have to know that, and I think that's still an area of relative weakness for AI. If we say, what kinds of pneumonia might this be in a patient from a particular part of Asia or Africa or something like that, well, it's wonderful at that, right? That's more factual-level content that we don't need to carry in our heads anymore, and it does a really wonderful job there.
But really looking in the moment at a particular complicated case, at why they are now hypotensive or hyperkalemic or whatever, I think those are things that are not as well done by AI, or at least potentially more prone to errors.
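[Editor's note: For readers who want a concrete picture of the foundational skill Dr. Schwartzstein mentions, here is a minimal sketch of the alveolar-arterial oxygen difference, using the standard alveolar gas equation. The patient values are hypothetical, not from any case discussed in this episode.]

```python
# Minimal sketch: alveolar-arterial (A-a) oxygen difference at sea level.
# Standard alveolar gas equation; all example values are hypothetical.

def alveolar_po2(fio2, paco2, patm=760.0, ph2o=47.0, rq=0.8):
    """Alveolar PO2 (mmHg): PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / RQ."""
    return fio2 * (patm - ph2o) - paco2 / rq

def a_a_difference(pao2_measured, fio2, paco2):
    """A-a difference = calculated alveolar PO2 minus measured arterial PO2."""
    return alveolar_po2(fio2, paco2) - pao2_measured

# Hypothetical patient on room air (FiO2 0.21): PaO2 60 mmHg, PaCO2 30 mmHg.
print(a_a_difference(pao2_measured=60.0, fio2=0.21, paco2=30.0))
# ~52 mmHg: markedly widened, pointing to a gas-exchange problem
# rather than pure hypoventilation.
```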
[00:09:02] Eddie: So yeah, that's really interesting. How are you educating students and trainees who have a desire to use this in their clinical practice, or when they're on the wards, about how to prompt and how to make sure that we're maintaining our skills?
[00:09:24] Richard: So the information that you give it is going to be your history and physical exam, and maybe, to some degree, even more complex stuff around ventilators if we're talking about an intensive care unit, for example. How much of that are you gonna put in? Then start with, how do you think about the problem? Because the data that are relevant for you in thinking about that problem are probably gonna be really important for AI as well to solve the problem.
And so you still have to know those foundational concepts very well. We had talked about a case where the carbon dioxide level in a ventilated patient had suddenly changed by 20, and the resident frankly didn't know enough about how to think about that case to put appropriate prompts into AI to get the answer.
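[Editor's note: A minimal sketch of the physiologic framing that kind of case requires. At steady state, PaCO2 varies with CO2 production over alveolar ventilation; the ventilator settings and dead-space numbers below are hypothetical illustrations, not the actual case.]

```python
# Minimal sketch: why a sudden PaCO2 change on the ventilator calls for
# physiologic reasoning before (or alongside) prompting an AI. At steady
# state, PaCO2 = k * VCO2 / VA, where VA = RR * (VT - VD).
# All values below are hypothetical.

def alveolar_ventilation(rr, vt_ml, vd_ml):
    """Alveolar ventilation (L/min) from rate, tidal volume, and dead space."""
    return rr * (vt_ml - vd_ml) / 1000.0

def predicted_paco2(paco2_old, va_old, va_new):
    """Predicted PaCO2 if only alveolar ventilation changed (VCO2 constant)."""
    return paco2_old * va_old / va_new

va_before = alveolar_ventilation(rr=14, vt_ml=450, vd_ml=150)  # 4.2 L/min
va_after = alveolar_ventilation(rr=14, vt_ml=450, vd_ml=250)   # dead space rose
print(predicted_paco2(paco2_old=40.0, va_old=va_before, va_new=va_after))
# -> 60.0: with unchanged settings, a 20-point rise points at increased dead
# space or CO2 production, exactly the framing a useful AI prompt would need.
```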
So you still have to have that fundamental understanding of physiology, pathophysiology, how that interacts with ventilators and so forth, in order to really get the best out of AI. The way I look at it now, frankly, is: think through the problem, then use AI as a check on something I may have missed. And I have an example that I use in my teaching now about AI, which was a real malpractice case.
One of the things we do here at Harvard: all the Harvard hospitals have a joint insurance company that they use for malpractice, and I got a grant from our insurance carrier where we could pick out some cases of malpractice related to diagnostic error to use as teaching tools for our students.
So we had one case, a lower abdominal pain case in a young woman, a teenager, that turned out to be ovarian torsion. When you put in the traditional history and the little bit of exam that was actually done in the case, this was a real case, and you asked, "What's wrong with this patient?", AI did not mention ovarian torsion. When I then asked it what else this could be besides what it originally gave me, it then listed ovarian torsion. So even those little issues about how you ask the question, let alone what data you're giving it, can give you different answers right now.
So this notion of prompt engineering, as some people call it, has got a lot of nuances to it. My view right now is to still have people think through these cases themselves, think through the problems, and then use AI to say: What else could this be? Have I made an error in my thinking? That sort of thing. The people who have developed these programs encourage you to interact with it like it's a colleague, almost talking to it in an iterative fashion. But you have to be a little careful about that: not getting too colloquial, remembering what your real goals are, and framing questions in an appropriate way to make sure you're not missing something.
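[Editor's note: A minimal sketch of the "think first, then use AI as a check" pattern described above. `ask_model` is a hypothetical stand-in for whatever chat interface you use, and the case details beyond those mentioned in the episode are invented for illustration.]

```python
# Minimal sketch of the "commit to your own reasoning, then check" pattern.
# ask_model() is a hypothetical placeholder; a real chat interface would
# also carry conversation history between calls.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in; replace with your institution-approved model."""
    return "(model response would appear here)"

# Case details beyond the episode's description are invented.
case_summary = (
    "19-year-old woman with acute lower abdominal pain. "
    "Afebrile, tender lower abdomen, negative pregnancy test."
)

# Step 1: commit to your own differential BEFORE consulting the model.
my_differential = ["ovarian torsion", "appendicitis", "ectopic pregnancy"]

# Step 2: ask once, then explicitly widen the frame; a single pass may omit
# the right answer, as in the torsion case described above.
first_pass = ask_model(f"{case_summary}\nWhat is the differential diagnosis?")
second_pass = ask_model("What else could this be besides what you listed?")

# Step 3: reconcile the model's answers against your own list; you, not the
# tool, remain accountable for the final decision.
missed = [dx for dx in my_differential
          if dx.lower() not in (first_pass + second_pass).lower()]
print("Diagnoses I considered that the model never raised:", missed)
```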
[00:12:44] Eddie: Yeah, it's really interesting. Even compared to our field, I'm not that old, but I remember learning in grade school how to Google, how to use a search engine. I see a lot of parallels here: now I'm very facile using search engines, but I have to relearn how to prompt things, which is different here.
And the other thing I'll say is that it is really reassuring to hear you say that you should be thinking about the case and using AI to double-check and make sure you're rounding out your differentials. Because every week that I'm on service, I try something new and see if it's gonna work for me, or what's gonna stick.
One of those weeks recently, I ran cases that I was thinking hard about through a GPT model. I really felt like I was trying to lead it to my answer, because I would give it bits of information, and then it would go in a completely different direction, and I could tell how it was getting there with the information that I gave it.
And so I kept feeding in more information that I thought was pertinent, just trying to lead it to what my eventual answer was. So it is very interesting, but reassuring for me, to hear you say that.
[00:14:05] Richard: And there are obviously different companies putting out different products. I have found variations between products, in terms of one making a mistake and another not making that mistake. I don't wanna endorse one or another right now, but just to say that it probably changes by the week, even among these products, because they are advancing so quickly.
[00:14:26] Eddie: Yeah, that's interesting. I've not considered that in the past. If I search UpToDate, that article is the same no matter who or where or what I searched from; if I find the article, then it's the same article. But just like different search engines can give you different results, I hadn't quite put two and two together that different GPT models can give you different results. And the potentially other scary thing is, as you were saying, that the models and versions are changing so quickly that even the same model, separated in time, may not give you the same answer for the same prompt. Which is kind of scary to think about.
[00:15:04] Richard: It is. And that's why, when people say, "Is AI going to be able to do X, Y, or Z?", I don't give definitive answers anymore. All I say is, maybe it can't do it right now, but I don't want to say exactly what will happen in the future, 'cause it is evolving quickly.
[00:15:18] Eddie: Yeah. Have you noticed any patterns in students and GME trainees using AI and LLMs? We've been talking about some of these already, but are there things you've seen commonly that worked well or didn't work well?
[00:15:39] Richard: I think the major problem I'm seeing: well, many of them are using it as a check, which I think is appropriate, but some are going to it very quickly, even before doing the hard work of, how am I thinking about it?
I tend to avoid the differential diagnosis question coming up too soon, and I think this is a generic problem across all of medicine right now, which historically has been a problem: a student or a resident does a presentation, a one-liner as we often say, right? And then the attending may say, what's your differential?
And so that's very much a pattern-recognition kind of process, and God knows AI is much better at pattern recognition than we are. If instead we say, "Tell me how you think about those electrolytes. Tell me how you think about the acid-base problem," that's the kind of question that I'll ask on rounds all the time.
That's a very different process, and AI is not quite as good yet at that. And it's not a question we typically prompt it with: how are you thinking about this? I haven't really tried that very much, but to me, that's what I want my trainees and my students to be doing. Tell me how you think about it.
I don't care about the diagnosis right away. And we talk about this notion of inductive reasoning, which is to go from foundational concepts and build a picture of what's happening with your patient. It's a stepwise process with intermediate hypotheses. When I say intermediate hypotheses, what I mean is: they're hypotensive not 'cause they have heart failure; they're hypotensive because they have a cardiac output problem, which could be preload or afterload or contractility, right? Those are not diagnoses. Those are physiological mechanisms, and I do believe that it gives us broader ways of thinking about problems when we do that, and we're less susceptible to cognitive biases.
There's a lot of talk about cognitive bias, or bias in general, for AI because of the internet. There's a lot of bias on the internet: racial bias, gender bias, all kinds of things that are out there, and you have to be careful about that. But then there are our own cognitive biases: anchoring, confirmation bias, availability bias. And those may affect the prompts that we put in.
We're already thinking about the diagnosis, and we put in prompts that are gonna lead AI to give us that same diagnosis we may already be thinking about. So you have to be concerned still, I believe, with those sorts of cognitive biases, let alone the ones that are embedded in the internet.
[00:18:20] Eddie: Yeah, that is really interesting to think about. And this is why we have a guest who is a master educator: we're talking about AI in medical education, and he is saying, well, we need to use the lessons that we learned from AI to come back to the bedside and talk about how we're framing our questions on rounds and otherwise. This is just a mark of greatness right here.
You've mentioned a couple of times that one of the key things you're telling your peers and trainees is to use AI as a double check. Are there other general or specific things, or situations, that you use to tell your students and trainees about how to use AI in practice?
[00:19:14] Richard: Well, I think there's always the issue of what I call intellectual humility. If it's outside your wheelhouse, if you will, acknowledge that and use it the same way that we would call a consultant. So it's a first-level consultant; I don't think this is gonna replace all of that. But it's also about forcing them back a little bit to the bedside, really thinking about their history and exam in more detail before they go to AI as well. I haven't tried asking AI, "Are there other questions I should be asking the patient?" I haven't actually tried that yet; I don't know what it will do with that.
The other concept that I talk about a lot, and explore myself with AI, is: what is the value added of the physician in a world in which AI keeps getting better? A lot of us are working with advanced practice providers now, particularly in the ICUs, but also in ambulatory arenas. If you take somebody who's a nurse practitioner, who hasn't had the depth of understanding and training that we have in pathophysiology and physiology, but they're using the internet and they're using AI, what is the value we bring? If we're not doing that deeper dive into the underlying physiology and pathophysiology, pharmacology, et cetera, then maybe we're gonna be replaced sooner rather than later.
Primary care is probably an area where that's coming: there are so few people going into primary care right now, and there's a huge deficit nationally for the United States, so I think that's probably where the not-so-distant future is gonna be going for primary care. But is that same thing gonna happen with critical care down the road? It seems less likely to me, but again, this technology is advancing quite rapidly.
And again, what do we bring to the patient, to the process? "Working the problem" is a kind of common phrase that I use. It's borrowed from a very old movie, Apollo 13, which you may not have ever seen, about one of the moonshots, where there's an explosion on the spacecraft and the engineers are running around very intently trying to think about what to do.
And the head of mission control says, "I don't want guessing. I want you to work the problem." Now, I don't think any of us guess at answers, but when we look at things that come to us very quickly, that pop into our head, that's pattern recognition. That's kind of what I think the head of mission control was talking about.
Working the problem means: how do I think about hypotension, hypoxemia, abdominal pain, shortness of breath? How do I think about how those things come about? That's what I think we still bring as physicians to the bedside, as well as our humanistic skills and our ability to communicate difficult concepts to a non-physician, to our patient, or things that we can do because we understand the deeper aspects of medicine.
[00:22:29] Eddie: Yeah, that is really interesting. To put some of the concepts you're describing in different phrasing: for medical students, trainees, and even for myself as a young faculty member, we should be thinking about, emphasizing, and working on the skills that set us apart, right? So that includes being at the bedside; that includes a good, detailed physical exam; and thinking about our physiology. I think those are really important lessons, at least for myself, and I'm sure for a lot of our listeners as well.
What about you? You have roles and a national reputation as a fantastic educator, and you mentioned before using generative AI for developing cases. What kinds of roles do you see for AI on the education side: building cases and scenarios, and curriculum building? How have you used it in your practice?
[00:23:33] Richard: I've been experimenting with it, really trying it out and figuring out what works and what the dangers are and that sort of thing. And what I've found is that it is an iterative process. One of my colleagues who's very much into AI right now talks about using it almost as a person, as if you had a teaching assistant with you of some kind: an iterative process going back and forth with the teaching assistant. So say I wanna build a case with a particular problem. I'll give you an example, and I think we may have chatted a little bit about this beforehand. I was building a case for my core clinical students around the physiology of pregnancy, and particularly respiratory physiology.
So it was first trimester, and the patient is hyperventilating, because that's what early pregnancy does because of the hormonal changes that occur, and you get a respiratory alkalosis and a blood gas that would go along with that. And I said to the AI, as I'm building the first part of the case: give me this case, a healthy young woman, age 28, and so on. You have to give it the details that you want in there. Build this case for me.
It does that, and I say, well, that's fine. And it gives me a blood gas representing a respiratory alkalosis. Then I say, well, now the patient comes in increasingly short of breath two months later, and I want this case to be a case of a pulmonary embolism, with the blood gas changed accordingly. And it gives me a blood gas that is not an acute-on-chronic respiratory alkalosis, which is what I was looking for.
It just gives me a respiratory alkalosis. So I hadn't been specific enough, perhaps; I assumed AI would understand that it would be an acute-on-chronic alkalosis, and I had to tell it, no, this isn't the blood gas that I'm looking for. Interestingly, it did not admit it had made a mistake. It just said, "Oh, yes, well, that's a complicated phenomenon," and it sort of changed the blood gas a bit. Although I finally had to make the numbers work the way I really wanted, it still didn't create what I was looking for on its own. So this was, again, an iterative process of back and forth with the AI to get it.
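[Editor's note: For readers following the blood-gas reasoning, here is a minimal sketch of the expected bicarbonate compensation, using standard teaching rules of thumb. The specific numbers are hypothetical, not the values from Dr. Schwartzstein's case.]

```python
# Minimal sketch: expected metabolic compensation in respiratory alkalosis,
# using common rules of thumb (acute: HCO3- falls ~2 mEq/L per 10 mmHg drop
# in PaCO2; chronic: ~4 mEq/L per 10). All example values are hypothetical.

def expected_hco3(paco2, baseline_paco2=40.0, baseline_hco3=24.0, chronic=False):
    """Expected HCO3- (mEq/L) after compensation for a drop in PaCO2."""
    drop_per_10 = 4.0 if chronic else 2.0
    return baseline_hco3 - drop_per_10 * (baseline_paco2 - paco2) / 10.0

# Chronic respiratory alkalosis of pregnancy: PaCO2 ~30 mmHg.
hco3_chronic = expected_hco3(30.0, chronic=True)  # ~20 mEq/L

# Acute-on-chronic: a PE drops PaCO2 further, say to 24 mmHg. The additional
# acute drop is compensated at the ACUTE rate, from the chronic baseline.
hco3_final = expected_hco3(24.0, baseline_paco2=30.0,
                           baseline_hco3=hco3_chronic, chronic=False)
print(hco3_chronic, hco3_final)  # -> 20.0, 18.8
# A plausible target blood gas: alkalemic pH, PaCO2 24, HCO3- ~19, which is
# not the simple one-step respiratory alkalosis the model kept producing.
```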
It still saved me time, frankly, compared with working from scratch: giving some family history and other things that actually are laborious, and other lab studies I needed for a pregnant woman coming into the second trimester, things that would've taken more time to elaborate. And it would create medications that might be useful in pregnancy and so forth, without my having to go back and check on all of that. So it definitely would save me time, but it is not something you can take off the rack, so to speak, ask it to give you the case, and just assume that everything it's giving you is correct.
And of course, if you're looking for citations for papers that you want to use, you really wanna have it give you the citations, and I check on those as well, because it has been known to "hallucinate" some of these things, as the common term goes: make things up, or give things that are not entirely accurate.
So it's still important to check on that, but it can be a time saver. And there's this notion that comes up as well in some of the conversations we've had. We ran a conference at the end of April with teams from eight medical schools across the country on the use of AI in medical education.
And the issue came up, in clinical realms, of whether we think of AI as a team member or as a tool. We had 45 faculty at this conference from these eight teams, and somebody said, well, there is no accountability for AI if you think about it clinically. So is this a team member, like your respiratory therapist and your nurse and your nutritionist and your pharmacist, whoever else you have on your team in the ICU, or is this really just a tool that you use?
And I thought about this notion that it's not accountable. Every other member of your team is accountable for the advice that they give you; you as the physician are usually the final person accountable for what decisions are made. But the AI is not really accountable.
It's just a tool, and we have to be on the alert for that, because it can talk to you like a person, and if you're not careful, you start to think of it as a colleague. It's not at that level, and it's not accountable yet. You can't sue Gemini or ChatGPT if they give you the wrong advice. Well, you can try. But it's still up to you to decide whether that advice is appropriate or not appropriate.
[00:28:45] Eddie: Yeah, that is interesting. I think this is a similar conclusion to the one made in the scientific writing community, where these GPT models can't be listed as authors on the author byline for a lot of the journals. So everything is changing, as we've been talking about, but that seems pretty consistent.
We've talked about a lot of the generative AI uses in cases and differential diagnosis, and now we've talked a little bit about building cases using it. Are there other opportunities you see in this space that maybe we haven't fully explored?
[00:29:25] Richard: Yeah, there was talk at the conference that we ran, and I'm fortunate enough to run this education institute here at Beth Israel Deaconess Medical Center, about using it for ambient listening: for example, listening to a student or an intern take a history from a patient. How do they question the patient, and how do they give advice to the patient?
Could you do ambient listening and then actually give feedback to a student or a trainee? That was an interesting thought for us, because if we're in the room with them, it changes the dynamic. Ambient listening would allow us to really get a view of what's going on in that interaction.
Are they being empathic in the way they explain a problem? Are they taking complex concepts and really making them understandable for the patient? So I thought that was an interesting thing; it's not really being done yet, as far as I know. And then there's also the question that's raised about privacy issues: will the student or the intern feel like, oh my God, everybody's always listening to me?
Is there a fear that starts to come about for the learners? We haven't really developed this yet, but I think it's an interesting opportunity. And as you may know, there are papers written about having AI answer questions from patients, and whether it's actually more empathic in the way it answers than physicians are.
There have been some publications about that. Can we use AI to teach empathic communication? It seems odd to me, but maybe there's a role for that as well: things that we haven't really thought through yet, in terms of where a lot of people are in thinking about AI and clinical medicine right now. But that's an area that is being explored.
[00:31:22] Eddie: My understanding of those papers, and things may have changed since I last looked at them, was that the AI was felt to be more empathic, but one of the key differences was that it used a lot more words, because AI doesn't have the typing speed limitation that I think many clinicians do. That is really interesting.
I don't need to tell you this, but I think one of the things that comes up pretty often is that a lot of direct observation in medical education, especially with some of these standardized patients, runs on volunteers. We have to ask clinicians at different levels to volunteer their time to do this.
And direct observation feedback really is one of the most important pieces, I think, for a young trainee or a student. So it is interesting to think about whether you can use AI not to replace, but to augment, some of those experiences and opportunities for us.
[00:32:21] Richard: Right. And we haven't really talked a lot about the physical exam, which I think is a skill that is atrophying to some degree in our medical schools, and even in our residency programs.
Will there be a time when we can train AI to observe, with a camera, an intern doing a physical exam? Again, to do this sort of observation, much as we were just describing with the auditory piece for communication skills. But that is an ongoing problem, because again, these are the prompts that we give AI. If you don't do a good physical exam, you're going to get errors back from AI, because you didn't tell it what was really relevant in terms of the examination process.
[00:33:04] Eddie: Yeah, that's interesting too. Here's something that I'll go ahead and pitch, and you can tell me if this is crazy. What about having AI as a standardized patient?
[00:33:14] Richard: I know there's work beginning on that; I haven't personally done much with it yet. That's great, perhaps, from the history-taking standpoint. Whether it can be reproduced in terms of the physical exam, I'm not sure that we're there yet.
The clinical reasoning part might work quite well in addition. So I think that's coming down the road. Again, this is an area where I never say never, because the technology really is evolving quite quickly. But again, it's error-prone the same way we are. And so this notion of double-checking back and forth, where we need to check AI but also use AI to check on our thought processes, may be the real future of where we're heading.
[00:33:58] Eddie: That's really interesting. Any closing thoughts on generative AI and education, and, I guess, clinical care, which we started talking about?
[00:34:06] Richard: Well, again, I like to use the term "embracing it." I am in the older generation, yet I am not saying this is gonna undo medicine or that it's the worst thing ever to come along. But I think there is still a lot of value in what we do as doctors. And then I'll maybe end with one thing, which is a huge issue for us throughout medicine right now: the issue of what I like to call wellbeing, as opposed to wellness.
There's a big focus on wellness, which a psychologist, Martin Seligman from the University of Pennsylvania, distinguishes from wellbeing. Wellness is like happiness: you can be well one minute and not well the next minute if something happens. But wellbeing is about meaning and purpose in your life, and what you do, and accomplishment.
And if [00:35:00] we don't continue to look at medicine in terms of the human interactions, what we do bring different than a machine and what we can think about and work through. In terms of our understanding of physiology and pathophysiology and the human element. The social determinants of health, the way the patient views their options.
We think about end of life care. We think about all the things that we do as physicians that we have to develop, and it's interwoven with our understanding, in my view of the science of human biology. It's not just, oh yes, that's the soft part of medicine or whatever. No, I think these are all integrated and it's, I'm still not convinced that that's gonna be replaceable.
Ultimately by even a really well developed computer program. And then it is that human interaction with the science and the patient. To me, that is ultimately what will sustain us and be an antidote to burnout and all those [00:36:00] negative issues that are going on right now for doctors around the country.
[00:36:04] Eddie: Yeah, I think that's a really great thought to close on. We had a long discussion about AI, but ending by talking about our individual wellness, how AI can help us and our wellbeing, and bringing it back to the things that make our roles valuable, I think, is really great.
I will thank everybody for listening, and thank you as well, Dr. Schwartzstein, for joining us for today's ATS Breathe Easy episode. Please subscribe and share this episode with your colleagues. I know this is something that sneaks up on me every year, so I'll remind everybody that the ATS abstract submission season is here. The deadline is November 4th. Don't miss out, and we will see you next time. Thank y'all. Thank you very much.
[00:36:51] non: Thank you for joining us today. To learn more, visit our website at thoracic.org. Find more ATS Breathe Easy podcasts on Transistor, YouTube, Apple Podcasts, and Spotify. Don't forget to like, comment, and subscribe so you never miss a show.