AI-First Podcast

AI is changing healthcare, and Stanford Medicine is leading the charge.

In this episode, Box Chief Customer Officer Jon Herstein sits down with Dr. Mike Pfeffer, Chief Information and Digital Officer, and Dr. Todd Ferris, Deputy CIO at Stanford School of Medicine, to explore how Stanford is deploying AI at scale, from ambient scribing in patient visits to automating workflows in clinical trials.

Mike and Todd share lessons learned from integrating AI into healthcare environments, the cultural shifts that made it possible, and how they ensure trust and safety in every AI implementation. They explain how Stanford’s “FURM” framework helps prioritize high-value, mission-aligned AI projects that deliver measurable outcomes.

Whether you work in health IT, operations, or strategy, you’ll take away key insights on how to move from experimentation to enterprise-scale AI with confidence.

Key moments:
(00:00) Meet the leaders behind Stanford Medicine’s AI strategy
(04:01) Automation vs. augmentation in clinical care and research
(08:26) Rolling out ambient scribes across the organization
(13:31) Faculty-driven demand for AI innovation
(18:53) Digitizing analog workflows in clinical trials
(24:45) How Stanford sets AI priorities with its “FURM” framework
(30:57) Balancing innovation with trust, safety, and governance
(43:00) Predictive care and precision medicine powered by AI
(52:45) The future of personalized healthcare at Stanford

What is AI-First Podcast?

AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.

This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.

If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.

Mike Pfeffer, M.D., FACP (00:00:00):
Technology is not under the ownership of the CIO per se, right? It's becoming more democratized than ever before. And so we created a secure platform for everyone at Stanford Medicine to be able to use the models in a safe way. And we have over 15 models now accessible. And so rather than us dictating what the models could be used for, we said go have fun, learn, experiment. And we've seen that be very successful, to the point where, by understanding how people are using this, let's call it sandbox, we develop products from it.

Jon Herstein (00:00:36):
This is the AI-First podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about reimagining work with the power of content and intelligence, and putting AI at the core of enterprise transformation. Hello everyone, and welcome to the AI-First podcast, where we talk about all things AI in businesses and enterprises that are taking advantage of the capabilities of artificial intelligence in pragmatic ways. And I'm very, very delighted today. We have something new on the podcast, which is that we're joined by two guests, not one. Today we have a longtime customer of Box in the Stanford School of Medicine and Stanford Healthcare, and I'm joined by Todd Ferris, who's the Deputy CIO at the Stanford School of Medicine, and Mike Pfeffer, who's the Chief Information and Digital Officer for Stanford Healthcare and the Stanford School of Medicine. Welcome, guys.

Mike Pfeffer, M.D., FACP (00:01:34):
Yeah, thanks for having us. Excited to be here.

Jon Herstein (00:01:36):
Absolutely. I'm very, very excited to have this conversation. I have been doing a lot of reading about the implications of AI in healthcare and we've got a thousand questions. We're not going to get through all of them, but we're going to cover as much as we can. But let me start first by giving you both an opportunity to introduce yourselves and your roles and how you define the scope of your roles at Stanford. Maybe Mike, we'll start with you.

Mike Pfeffer, M.D., FACP (00:01:59):
Thanks, Jon. So this is an incredibly exciting time in healthcare and in health sciences research and education, so I'm just thrilled to have this conversation. My role is really to lead the technology department for the health organization, Stanford Healthcare and the School of Medicine at Stanford. We call ourselves TDS, or Technology and Digital Solutions, and we're an amazing group of people dedicated to serving the missions of research, education, and patient care. It's really an incredible place to be, and to be part of a team that's so dedicated to that mission and delivering amazing things in healthcare. I'm also a hospitalist physician, board certified in internal medicine, and I still round in the hospital and see patients with incredible residents and medical students here. So that's a really fun and exciting part of my job, and it also keeps me connected to the front lines.

Jon Herstein (00:02:58):
Yeah, I was going to say that probably keeps you very, very close to the day-to-day things that physicians, clinicians, and patients are dealing with.

Mike Pfeffer, M.D., FACP (00:03:04):
Yes, absolutely. It's why I went into medicine and I'm really lucky that I get to do both.

Jon Herstein (00:03:09):
Alright, Todd, over to you.

Todd Ferris, M.D. (00:03:10):
Yeah, thanks again for having us. I'm also a physician, family medicine, and I actually came to Stanford to train in clinical informatics and never left. I've been at Stanford ever since, going on over 20 years, and I've held a wide variety of roles across privacy and security. My fascination has always been how we enable our research mission and make sure we do it in a way that protects our patients' privacy and the security of their data. So that's been our huge focus: how do we leverage technology to make that happen. This is just such an exciting time. As Mike said, for so long we said the computer should be able to do this. Well, now we have breakthroughs where it can do this for us. It's just a really exciting time.

Jon Herstein (00:03:55):
Well, we're going to dive into some of the practical implications of that and where this is actually showing up for both of you. One thing I want to touch on is, as I've been reading about this area, the implications for healthcare are on the operational side, how you run hospitals, how you run all of the things that we do for patients, but there are also implications on the clinical side, and I think we're going to be touching on aspects of both. I'm just wondering if you could, at a very high level, lay out how you think about those two domains, and is there a third or fourth domain that we should also be thinking about when we think about what AI can do for healthcare?

Mike Pfeffer, M.D., FACP (00:04:29):
I'll start, focusing more on the clinical and operations side, and Todd can talk about the research and education side, but I think AI has the potential to infuse itself into all of these buckets. What we often talk about here is thinking about it in two ways: one, automation, and one, augmentation. I'll break those down, but they apply across all of those domains. So for us, in each of those domains there are opportunities to solve problems using the technology, and the question is, do we automate or augment? Automation to us is taking a task that humans can do today and making it much easier, or taking it away completely, using the technology. So an example is AI ambient scribes. You go in and see your physician here, and they'll ask you if you want to participate, and then it will listen to the conversation and then produce a note.

(00:05:24):
Well, doctors can write notes too. So we're just automating that process and giving time back to that physician-patient relationship. That's an example of automation, and there are lots of opportunities around that across all the mission areas. The second is augmentation, and that's really doing things that the human can't do by themselves or the computer can't really do by themselves; it's bringing that all together. So this is predicting different events in healthcare that, if we knew about them earlier, we could help patients. It's really precision medicine types of things, where we can use large sets of data to come up with better answers and better decisions for the clinicians. So those are the two big buckets. Obviously automation is easier to do and lower risk overall, and augmentation takes a lot of work. And then Todd can talk about some of the stuff that we're thinking about on the research and education side.
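
For a concrete picture of the automation bucket, here is a minimal, hypothetical sketch of an ambient-scribe pipeline. The transcribe and draft_note steps are stand-ins for a real speech-to-text service and a clinically tuned LLM, and nothing here reflects Stanford's actual implementation; the key design point matches what Mike describes, in that the model only drafts and the physician reviews and signs.

```python
# Hypothetical sketch of the "automation" bucket: an ambient-scribe pipeline.
# transcribe() and draft_note() are stand-ins; a real system would call a
# speech-to-text service and a clinically tuned LLM, then route the draft
# to the physician for review before it is signed.

def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text pass over the recorded visit audio."""
    return (
        "Doctor: What brings you in today? "
        "Patient: I've had a cough for two weeks, worse at night."
    )

def draft_note(transcript: str) -> dict:
    """Stand-in for an LLM call that structures the conversation into a note.

    A real implementation would prompt a model to produce a SOAP-format
    draft; here we just slot the transcript into the subjective section.
    """
    return {
        "subjective": transcript,  # patient-reported history
        "objective": "",           # exam findings, filled in by the clinician
        "assessment": "",          # the clinician confirms or edits
        "plan": "",                # never auto-finalized
        "status": "DRAFT - pending physician review",
    }

if __name__ == "__main__":
    note = draft_note(transcribe("visit_audio.wav"))
    print(note["status"])  # the physician signs before it reaches the portal
```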

Todd Ferris, M.D. (00:06:22):
So, taking a similar tack, one area where we're thinking around augmentation is on the education side. We've launched a project with a faculty member who's spent a lot of her career helping students with their thinking habits: when they approach a patient, how do you be systematic about it and make sure that you are approaching it in a thoughtful way. That's a skill they learn as they go through their education, and we've developed a tool to help augment our faculty members. It can actually listen in on a conversation between a faculty member and a student and then essentially say, did you prompt them? Did you miss an area? Was the student thinking through this carefully? And help that teaching aspect. So it's, again, augmenting them, helping them maybe where they miss something, not replacing them. And then on the other side, areas where we're thinking about automation: we have a research project, recently published, where we were looking at folks with alcohol or substance abuse behaviors who are really struggling a bit and could use an intervention. And so we actually developed a chatbot that did an intervention. People opted into this; they had this automated service that they could chat with whenever they were thinking about risky behaviors, and we published on it. It was actually quite successful. A lot of interesting areas where we're leveraging AI.

Jon Herstein (00:07:55):
Those are some really great examples, and I think we'll dig into some of these in a bit more detail as we go through. Certainly one of the things I want to touch on is some of the obstacles or challenges you have with deploying those kinds of solutions. But maybe before we get there, Mike, I want to go back to you. You gave a couple of examples of where AI can sort of be integrated into the workflow, if you will, between the clinician, the doctor, and the patient. How do you do that without disrupting that, or doing it in a way that feels maybe off to the patient?

Mike Pfeffer, M.D., FACP (00:08:22):
Yeah, it's a great question. I mean, these things take a lot of work to implement; it's not just turning something on. And we know that in healthcare. I've been in health IT and informatics for over 15 years now, and I can tell you that it's all about the workflow, the implementation, and the training; the technology is the easy part. So it really boils down to understanding how the technology is going to work, how it integrates into the workflow, and then building the implementation and training to make sure it's successful. With the AI ambient scribes, for example, we were actually the first to pilot the integration of that tool into our electronic health record, test it, and see whether the workflow would work. Prior to that integration, the workflow wasn't good and people didn't really want to use it. After that integration, it became clear that this was going to be successful, and then we built from the pilot what the right training and implementation plan was going to be, and actually decided to do it over a year.

(00:09:20):
So it wasn't just something we turned on for everybody right away. We wanted everybody to have the appropriate training and the appropriate support, and to learn from each of the waves of the go-lives that we did. So we recently completed the entire implementation of it across all of our specialties. And I think because we spent a lot of time thinking about that implementation and training part, it went very successfully. I think that's just really key. As people try to do technology in healthcare, there's often a focus on the product, and that's only one small piece of the puzzle.

Jon Herstein (00:09:57):
Absolutely. And we're definitely going to talk more about the change management aspect of this and how you get people to work differently. I am also really curious about how patients are reacting to it. So when you ask permission, do you wind up getting resistance? Are patients saying no often, and is part of the training to actually teach the providers how to overcome those objections and concerns?

Mike Pfeffer, M.D., FACP (00:10:18):
Great points. I mean, for the most part patients love it, because it takes the clinician's eyes from the computer screen to the patient, and the notes come back quicker. We have open notes, which means once the physician signs the note, it appears in the patient portal so the patient can read it. So you have better notes, you have faster notes, and you have a better connection between the physician and the patient. Now, it isn't perfect, and we absolutely respect patients' wishes to use the product or not. So that's always asked, and we've covered in the training how to have that conversation. It's becoming so ubiquitous in healthcare; it's probably one of the fastest adoptions of a health IT product I've ever seen. And so I think patients are getting exposed to it very frequently, are seeing the benefits of it, and are agreeing to participate. We rarely see a no, because it really does bring back that physician-patient relationship that has been difficult with the computer, right?

Jon Herstein (00:11:28):
Yeah, I can definitely see the huge benefit of the doctor looking you in the eye as opposed to looking down at their tablet and typing away or writing with a stylus; you feel like you're getting more of that personal attention. Huge benefit. And to your point, they're starting to see these technologies show up maybe in their dentist's office also, so it's going to feel more natural and normal.

Mike Pfeffer, M.D., FACP (00:11:46):
It's funny you say that. I went to the dentist yesterday. I think AI scribes in the dentist's office might be a little hard with instruments in your mouth, so maybe you have to train the model to interpret stuff like that. But yeah, absolutely. Some of the new technologies: X-rays, for example, they don't do film X-rays anymore, it's all digital. It's really incredible. So I agree with you. This stuff is really making its way through all aspects of the health sciences.

Jon Herstein (00:12:14):
And Todd, let me go back to you for a minute. Going back to researchers and educators, and a couple of examples of cool wins: how have they responded to the pilots that you're running? Did you have initial skepticism, maybe in particular from folks who are more on the research and education side, just around AI? And how did you turn that initial skepticism into more excitement and enthusiasm about it?

Todd Ferris, M.D. (00:12:38):
We've had almost no skepticism. In fact, they're beating our doors down; we can't meet their demands. And that's just Stanford researchers, they're always out there pushing. Through Mike's initiative, we actually made AI models available very early, in a secure environment on our own systems, and they just started gobbling that up. They were building things, and they were immediately saying, can I get API access? Can I do this? Can I do that? And we're like, slow down, let us build things out. It's been quite the opposite.

Jon Herstein (00:13:14):
They are just gung-ho. Is that potentially a symptom of the fact that you all are at Stanford, in the middle of Silicon Valley? When you talk to peers elsewhere in the country and in the world, do you hear the same thing, or is this unique?

Todd Ferris, M.D. (00:13:26):
Certainly the peers that I've talked to are also experiencing a lot of interest.

Mike Pfeffer, M.D., FACP (00:13:31):
Just the adoption of ChatGPT when it came out was enormous. I think it's not specific to Stanford, though there's tremendous excitement about it at Stanford. I think this is really taking the health sciences by storm, really all the mission areas. This is just really exciting.

Jon Herstein (00:13:51):
So one of the things I want to shift to is talking a little bit about leadership and culture, and in particular leadership in the technology space. It's made even more timely because I think there have been some title changes over there for you both. But how is AI changing your roles, the role of the CIO, the role of the CTO? What's different now from what it was before?

Mike Pfeffer, M.D., FACP (00:14:10):
Well, I've always thought the CIO or CDO role is a very strategic role. So it's not about what data center we should be in, which is very important, don't get me wrong,

Jon Herstein (00:14:22):
Right?

Mike Pfeffer, M.D., FACP (00:14:23):
Necessary, and that's part of what we need to do, but it's bigger than that. It's what strategies the business needs to accomplish and how technology is going to enable that. And now technology can do things that strategically maybe we couldn't do in the past and now we can. So it's really elevated the strategic role beyond what I think it's ever been in the past, especially in healthcare. And so I think that's been a big change. Two other things. One is that technology is not under the ownership of the CIO per se. It's becoming more democratized than ever before. And so, as Todd mentioned, we created a secure platform for everyone in Stanford Medicine to be able to use the models in a safe way, and we have over 15 models now accessible. And rather than us dictating what the models could be used for, we said go have fun, learn, experiment.

(00:15:19):
And we've seen that be very successful, to the point where, by understanding how people are using this, let's call it sandbox, we develop products from it. We said, well, if thousands of people are asking to do this, let's just create something that does that automatically. So it's been a huge learning environment for us. It's understanding that technology is not just under the purview of the CIO, but really something the whole organization can take advantage of. And this is my philosophy, but AI is not a team or a person. Everybody in technology needs to understand AI. It is a tool that everybody needs to understand, embrace, and use, whether it's the network team automatically configuring routers, which our team did, which was amazing, to using AI in our platform, all the way out the chain to clinical decision support. So it's not a team. And I think we embraced that very early in our organization, and that has really paid dividends for us.

Jon Herstein (00:16:24):
Very, very interesting. And I think your point about it being technology for everybody is exactly what we're seeing inside of Box, and also what we're seeing with a lot of our customers: there's a lot of grassroots activity happening. I think partially it's because there was so much excitement when ChatGPT first came out, just at a consumer level, of, oh, this is interesting, I can do cool things with it. And you start thinking about, well, how can I apply those capabilities at work? What's very interesting now is you've got the grassroots stuff happening, but also top-down initiatives, and where do those things meet? How much do you encourage versus limit the grassroots work that's going on, because that's where a lot of the innovation comes from. So it's a very interesting time, I think, for us to be figuring this all out. And I do wonder, maybe Todd for you, how does this all affect what the digital transformation roadmap looks like at Stanford? Every organization's on a journey at some level to digitize, and I think you're probably pretty far along, but how do the capabilities of AI that are available now, and all of these organic efforts, start to affect what that roadmap looks like?

Todd Ferris, M.D. (00:17:26):
Yeah, that's a great question and you would be shocked that there are still some processes that are somewhat archaic. When we think about clinical trials,

(00:17:36):
There are still things that are printed out, mainly from a regulatory standpoint. The FDA still requires certain things, or you have to have certain systems built, and that's challenging. We still do have binders for clinical trials, in secure locations, and we've been ramping that up: how do we digitize those things and give people those platforms? And increasingly, you're absolutely right, we're asking how we turn up that knob and increase the speed of that, especially as these tools come on board that could really automate or augment those processes, allowing us to enroll more patients into those clinical trials and provide that where we might be resource limited. We can now bring more of that to more people.

Mike Pfeffer, M.D., FACP (00:18:29):
Todd, you reminded me of one of my favorite models that we implemented. Healthcare is the entity keeping fax machines alive. So we have a model that will read incoming faxes for referrals, determine whether they're urgent or not, and then separate them into different queues. Talk about digitization in healthcare. I mean, there's so much low-hanging fruit, and sometimes it's going to be automating things that, well, shouldn't be there in the first place, but still, the journey of digitization in healthcare is incomplete.
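
As a rough illustration of the triage Mike describes, here is a hypothetical sketch with keyword stand-ins where the production system would use OCR plus a trained classifier; the terms, functions, and queue names are all invented for illustration, with humans still working both queues.

```python
# Hypothetical sketch of fax-referral triage: read an incoming fax, score
# its urgency, and route it to a work queue. ocr() and is_urgent() are
# keyword stand-ins; a production system would use real OCR and a trained
# classifier behind the same routing step.

URGENT_TERMS = {"chest pain", "stat", "suspected malignancy", "acute"}

def ocr(fax_image: bytes) -> str:
    """Stand-in for an OCR pass over the scanned fax."""
    return "Referral: patient with acute chest pain, please expedite."

def is_urgent(text: str) -> bool:
    """Toy urgency check; the real model would output a calibrated score."""
    lowered = text.lower()
    return any(term in lowered for term in URGENT_TERMS)

def route(fax_image: bytes, queues: dict) -> str:
    """Classify the fax and append it to the matching work queue."""
    text = ocr(fax_image)
    queue = "urgent" if is_urgent(text) else "routine"
    queues[queue].append(text)
    return queue

if __name__ == "__main__":
    queues = {"urgent": [], "routine": []}
    print(route(b"<fax bytes>", queues))  # -> "urgent"
```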

Jon Herstein (00:19:05):
Right. And that's a great example, because AI is not going to make the fax machine go away, since you've got this whole network of people that are still dependent on them, but it can make the process better, knowing that there is analog data in the middle of it. That's fascinating. I mean, the other thing that occurs to me is, when you talk about things like clinical trials, I think that's a mixture of structured and a lot of unstructured data.

Todd Ferris, M.D. (00:19:28):
Oh yes.

Jon Herstein (00:19:28):
Yeah.

Todd Ferris, M.D. (00:19:29):
I mean, it tends to push us towards structured. That's really what they want to have, but that's data that sometimes isn't collected in routine care. So there's a lot of abstraction, people reading the notes and pulling out nuggets of information, as well as sending over copies of what we call source data, what was actually the source. So sometimes there are all sorts of things, whether they're images or scanned-in documents, to really back up the trial, making sure that we have that chain of custody laid out.

Jon Herstein (00:20:03):
So essentially, in your world, you have, believe it or not, still analog content or data. You've got digital but unstructured data, but also a lot of structured data, and the way you think about AI for each of those may be slightly different.

Todd Ferris, M.D. (00:20:15):
Absolutely.

Jon Herstein (00:20:16):
Okay. So, and maybe this is not a trick question, but if you think about the broader digital strategy at Stanford today, would you say your posture today is AI-first, or is it AI-friendly, or is there another term that you would use? How do you think about the importance of AI in what you do going forward?

Mike Pfeffer, M.D., FACP (00:20:35):
Well, certainly within our IT organization, TDS, it's AI-first. In fact, we set a goal that we would hit 30% of all our projects being AI. And that may not sound like a lot, but in healthcare we do a lot of things as projects, like opening up clinics, or upgrades on different platforms; we have over a thousand applications. So it's not that we could get to 90%, because there's stuff we need to do to, so to speak, keep the lights on, but it really pushed us to get to at least 30%, and we've beaten that goal, which is really amazing. So definitely AI-first in the organization. But Stanford Medicine has always been very much about digital. We have an integrated strategic plan across all of our entities, that's the School of Medicine, Stanford Healthcare, and Stanford Medicine Children's Health, where digitally driven is one of our pillars. So it is wired into our DNA, and it's no surprise that it is an AI-driven organization, but not for the sake of AI. When you go back to our mission areas, research, education, patient care, and the problems that we want to solve to make all of those areas better, we're asking the question: can AI do that? Can AI help solve that problem? And it's really that approach I think the organization has taken.

Jon Herstein (00:22:02):
Have you ever had a situation where someone was leading with AI and it was actually not the right way to think about the problem? Oh yeah. Maybe I won't ask for an example.

Todd Ferris, M.D. (00:22:12):
No, no, I mean, it's fine. I had one this week. I had suggested that we might solve a particular case with an AI solution, and then, chatting with the team, the team's like, I think we'd be better off in the first phase leveraging existing technology, learning more about the solution, and then thinking about how we bring that into our AI platform. So we are communicating about it, we're talking about the pros and the cons, where does it fit in that overall lifecycle?

Mike Pfeffer, M.D., FACP (00:22:44):
Yeah, that's a great example, Todd. There are so many products, vendors in healthcare, so many startups, which is really fun. I mean, I love the excitement and enthusiasm that everybody has to try to make health better. Which, I'll segue: the most important thing that we look for in partnerships with our tech companies, vendors, startups, whatever, is a connection to the mission. If it's not about patients, if it's not about research, if it's not about educating our future clinicians, then we are not going to be interested, right? It's got to be about that, and then the product and everything else comes later. So that's really, really important. But there are so many solutions that you can get lost very easily if you don't know what problem you're trying to solve.

(00:23:33):
And so we all make the mistake; we'll be like, that's really cool, let's do it. And then it's like, well, what are we actually solving with that? And is it valuable enough to solve? It may be a problem that needs to be solved, and that may be a good solution, but we only have limited resources, so do we want to invest in that? And that's where the value equation comes in: what is the value to the organization of solving that problem, and of solving it with that AI solution? We actually have a framework here that was developed by one of our faculty members, Nigam Shah, our Chief Data Scientist, and his lab, and that's the FURM assessment, which is fair, useful, reliable models. And we use that to actually determine whether a solution we are going to put in that has AI is valuable to the organization, by looking at all the mission areas, by looking at the utility of the model, et cetera. So that's very important to us, because we don't just want to put everything in and hope it all works; it has to work, and it has to provide value.

Jon Herstein (00:24:31):
Just out of curiosity, that FURM framework, is that something that's publicly available, or is that a proprietary thing to Stanford today? Open source. Okay. Well, I would encourage folks in the audience to go check it out, because it may be a great way to think about this. And we are going to come back to this question of value, because it's a very important one to me in my role at Box, but I think for all of our customers, the question always is, well, what's the value of this thing? And I think one of the challenges, you guys can correct me if I'm wrong or confirm this, but one of the challenges that pretty much every CIO, CDO, and CTO we talk to is dealing with is the fact that every one of their vendors is adding AI to their products, and maybe they're just slapping it on, maybe they're deeply embedding it, but everyone's adding AI.

(00:25:13):
And so the challenge for you as a consumer of those technologies is: which of these things actually have value? Which of these things actually provide benefit, in your case, to your patients, to your researchers? So how do you all think about even just vetting and curating and deciding which of those AI solutions you're going to pursue, and which ones you're going to say, ah, there's not enough there for us to really be compelled by it, and we'll hold off on that? Do you have a framework for that? Is it FURM, or is it something else?

Todd Ferris, M.D. (00:25:41):
It's a great question. One of the strategies that we employed, and thanks to Mike for pushing on this so hard, is really forcing, in a way, all of our TDS team members to level up on AI. So we actually published a course and required our team members to all take it, so they understand the pros and cons. That way, when they're evaluating and working with vendors, they can understand the pros and the cons and start to weigh them. That's been huge, because otherwise IT is famous for seeing a shiny thing and getting directed toward that. This allows them to have a framework to assess whether something is really valuable, so they don't get distracted and then suddenly distract the whole team toward it. And we've built into our project management that we identify things that have AI right away, so they go into a separate dashboard and we can keep an eye on them and understand where things stand. I believe we're actually reviewing each of those before they even go forward, to make sure that, yes, we want to make that investment, which is really important. As you said, everybody is pushing this into their product. We could suddenly consume ourselves with upgrading this and that, an extra license fee here, there, everywhere, and are we really solving real, important problems?

Jon Herstein (00:27:01):
Absolutely. And I want to go maybe a little bit further and wade into what I think is going to be a pretty deep area, which is that we can't talk about healthcare without talking about trust and safety. We've talked about whether these products and capabilities add value, but there's also the question of, do we believe we're continuing to maintain the trust and safety that we're expected to in our roles as we're employing these digital innovations, and specifically AI? So I'm wondering, maybe we'll start with a high-level view, maybe Todd from you: how do you think about maintaining trust and safety, given not just the fact that these things are new, and I mean specifically AI capabilities, but also how quickly they're all moving? How do you build that into the culture and make sure that you're never losing sight of trust and safety?

Todd Ferris, M.D. (00:27:45):
It's a tricky one, but it is something that I think our FURM framework and our AI governance are really trying to tackle. Number one, making sure we do that assessment before we deploy, and that way we also understand what we're impacting and how we measure it, so we can continue to measure those models and make sure they continue to perform the way we expect. Models are taking inputs, and a health system is a living, breathing organism: lab tests change, the way we record information changes. What if we missed reconnecting the model, and suddenly the model is not getting the same inputs that it was before, and it may now not fire correctly? So we need to always be monitoring that and making sure that it is behaving the way we expect it to behave, and that's part of the team's process. We have our AIM project, where we track and monitor our AI, making sure it is behaving the way we expect.
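
As a rough sketch of that kind of input monitoring, the toy check below compares live feature statistics against a baseline captured at validation time and raises alerts on drift or missing feeds; the features, baseline values, and threshold are all illustrative assumptions, not Stanford's actual monitoring stack.

```python
# Hypothetical sketch of model-input monitoring: a deployed model's inputs
# can silently change (a lab is renamed, a feed drops), so compare live
# feature statistics against a validation-time baseline and alert when
# they diverge. All values here are illustrative.

from statistics import mean

BASELINE = {"sodium_mmol_l": 140.0, "heart_rate_bpm": 75.0}
TOLERANCE = 0.15  # alert if a feature mean drifts more than 15% from baseline

def drift_report(recent_rows: list) -> list:
    """Return human-readable alerts for missing or drifting input features."""
    alerts = []
    for feature, expected in BASELINE.items():
        values = [r[feature] for r in recent_rows if r.get(feature) is not None]
        if not values:
            alerts.append(f"{feature}: no values received (feed broken?)")
            continue
        observed = mean(values)
        if abs(observed - expected) / expected > TOLERANCE:
            alerts.append(f"{feature}: mean {observed:.1f} vs baseline {expected:.1f}")
    return alerts

if __name__ == "__main__":
    live = [{"sodium_mmol_l": 139.0, "heart_rate_bpm": None}] * 20
    for alert in drift_report(live):
        print("ALERT:", alert)  # the missing heart-rate feed surfaces here
```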

Mike Pfeffer, M.D., FACP (00:28:43):
Yeah, just to add to that, we have a governance group on the health system side that reviews all AI that we put into production. The overarching umbrella we call the responsible AI lifecycle, but we can't just turn on AI without understanding what it's going to do. We do an ethics assessment, and then that goes to a committee for approval. Obviously we monitor these things to make sure they're performing well, so it's taken very, very seriously, and we pilot things, test them, and publish on them. There's a lot of work that goes into these things, which is why we don't just turn things on left and right, and we don't have thousands of these things going, because each one takes a lot of time to go through the process, to vet it, and to understand it. And that's the right thing to do because of the kind of business we're in. For sure.

Jon Herstein (00:29:33):
And you said that's before anything goes into production. Is there also a process to even take something in and pilot it? Are there different phases that you take these things through, from a governance perspective?

Mike Pfeffer, M.D., FACP (00:29:44):
Yes. It depends on what it is. So everything's given a risk level, and then it really depends on what part of the business it impacts, what the workflow is. Workflow is so key; I'll keep emphasizing how hard it is to implement all of these pieces. We determine what kind of pilot we need. Do we even need a proof of concept before the pilot, where we technically see if it's going to work? That's all built into every project we do. The assessment and the lifecycle are our frameworks, but what passes through them really isn't cookie cutter. It can't be; it has to be individualized.

Jon Herstein (00:30:23):
Right. It's very interesting. I mean, I think every one of our customers is dealing with this question of how to do governance around AI, and they've all come up with, I think, roughly the same structure. But I would imagine for you all it's just a whole other level of compliance that you've got to deal with when you're talking about patient data, research data, the privacy of that, the security of that. How do you assure that? I mean, other than the governance process itself, are there other specific things that you do, rules or principles that you say everyone has to adhere to? From the standpoint of pragmatic advice for those who are listening and dealing with these challenges themselves, besides the governance committee and the governance process, what other things should people be thinking about putting in place? Maybe Todd, start with you.

Todd Ferris, M.D. (00:31:03):
Sure. Well, it is a good question, and I think as an academic medical center, we want to support things through that whole lifecycle. So while we have very finished products coming in, they're polished, they're ready to deploy, we also have these sparks, these ideas, and we want to create safe places for our researchers, our faculty members, to innovate, and then create a pipeline where we can test those. We're working on increasing the support for that. It's not quite turn-the-crank, but we're working on that this year, to get to a place where they can deploy things in a safe environment but still get real clinical data and clinical expertise coming in, and then show value, be able to publish on value, and then we can evaluate, using our FURM assessment, to determine whether it really has impact, and ultimately deploy it. So creating that pipeline. And that's what I encourage everybody to think about: how do you have that funnel for all those ideas? Not all of them will be good ideas, but we really need to bring the creative spirit in, and we may find those nuggets. We're, again, not just looking out to the vendor community, but also looking inward at our faculty members, who are taking care of patients and seeing real-world issues.

Jon Herstein (00:32:19):
I love that idea of the funnel, where you're going to have a lot of stuff coming in at the top, but you've got to have a process for whittling that down, and what comes out the bottom's got to be the highest-value, most secure.

Mike Pfeffer, M.D., FACP (00:32:28):
It's a great point. I want to add, and we talked about this before, AI is everyone's job in IT, and everyone needs to be upskilled, because when you talk about security, for example, or privacy, you need an amazing infrastructure team, you need an amazing architecture team, you need an amazing cybersecurity team. You need all those pieces to come together with an understanding of AI, and we're very lucky to have that. The teams work incredibly well together, and all of those pieces are important to make sure that the models are secure and HIPAA compliant, for example, when we put them into production, and when we think through how we build our sandboxes and how we deliver the minimum necessary data to make the models function. So all of those pieces need to come together. It's not just build your AI team over here and be like, ah, we're good to go. It's actually that everybody's got to be involved and be part of how you build these things. So that's something we take very seriously here, obviously, but it couldn't be more collaborative. The traditional framework of IT, where you have your different verticals doing different things, isn't going to work in the future. There has to be this level of interdependency between all your teams to really be able to deliver this kind of stuff in a safe, secure, and meaningful way.

Jon Herstein (00:33:46):
It's a really great point. It does remind me a lot of the early days of the internet, where companies had internet teams, and if you look at how companies are organized today, you don't really have a separate internet team; it's just part of how you think about doing business now. That's great advice for folks listening, just in terms of how to organize themselves around this. Maybe in the early days you still have some emerging-technology folks who are looking at things in a slightly different way, but you very quickly, I think, get to this model of how you integrate it into all the things that you do, so you're thinking about security, privacy, robustness, all of that, holistically. It does make me wonder a bit about one particular aspect of AI, which is the fact that it's probabilistic. When you think about things like the importance of accuracy and repeatability in particular, that's incredibly important in healthcare. So how do you all think about what you do when AI gets it wrong? What sorts of safeguards do you put in place around that? And again, is that built into the framework, where you say every time we use an AI tool, we need to be thinking about these things? Or are there certain use cases where it's actually fine that it's probabilistic, and other cases where it's not? Maybe Mike, how do you think about that? How do you mitigate that risk?

Mike Pfeffer, M.D., FACP (00:34:52):
That is the big question in all of this, right? Because the whole idea behind generative AI is that it generates things, text, pictures, whatever, and it's not going to give you the same response every single time. So how do you handle all of this? It really takes, again, a lot of evaluation and piloting of the tools, being able to assess how well they're performing, but also an understanding that it's not going to be perfect, and then training the people using the tools to understand what the capabilities are. I think that's really important. But there's so much opportunity to do better in healthcare that if we aim for perfect, then we're not going to get better, and we need to get better. So that's been our philosophy: we want to make things better, and then we continue to learn and make things even better, and even better.

(00:35:45):
So that's how we think about it globally, but there's a lot of work going on around how you actually measure this. There's a framework that was developed here called MedHELM, which assesses how generative models actually perform on healthcare tasks, as an example, so you can begin to see how things perform, and that's something we use. We get lots of feedback, real-time feedback from using these models, so we can see how they're performing. And so I liken it to a learning health system: it's really using these tools, learning, growing, and making them better. But they perform really well, and they do things that would take a human a long time, like looking through a whole medical chart. Maybe a human doesn't look through the whole medical chart because it's so time consuming. So if you can get a really, really good summary of that information, and we've seen examples of this, it can change care in a very positive way, because in a minute you can summarize what would take hours and really be able to move forward on that. So it's a great question. I think we're going to continue to learn in this space, but we build that into our framework and our monitoring capabilities as we move forward.
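
To show the general shape of benchmark-style evaluation, here is a toy harness in that spirit; it is not the actual MedHELM framework, and the tasks, model, and exact-match scorer are invented stand-ins (real benchmarks use task-specific metrics and far larger task suites).

```python
# A generic sketch of benchmark-style evaluation, in the spirit of what
# Mike describes with MedHELM: run a model over healthcare tasks with
# known references and score each one. Everything here is a toy stand-in.

TASKS = [
    {"prompt": "Summarize: pt w/ CHF, EF 30%, on lisinopril.",
     "reference": "heart failure summary"},
    {"prompt": "List red flags for headache.",
     "reference": "red flag list"},
]

def toy_model(prompt: str) -> str:
    """Stand-in for the generative model under evaluation."""
    return "heart failure summary" if "CHF" in prompt else "generic answer"

def score(output: str, reference: str) -> float:
    """Toy exact-match scorer; real benchmarks use task-specific metrics."""
    return 1.0 if output.strip() == reference else 0.0

def evaluate(model, tasks) -> float:
    """Mean score across all tasks: a single number to track over time."""
    results = [score(model(t["prompt"]), t["reference"]) for t in tasks]
    return sum(results) / len(results)

if __name__ == "__main__":
    print(f"mean task score: {evaluate(toy_model, TASKS):.2f}")  # -> 0.50
```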

Jon Herstein (00:36:53):
One of the tactics that I've seen come up more and more in discussions with customers is leveraging different models and different agents, and we haven't really talked about agents yet, to perform different tasks in a workflow. So maybe you've got one model that's doing that initial analysis, but a second that's actually doing the validation, or the QA assessment of that. Are those techniques that you've begun to employ?

Todd Ferris, M.D. (00:37:14):
Certainly on the research side we are playing with that. And my comment on the previous question would be that medicine is inherently a probabilistic endeavor. All the lab tests that come back have some rate of error in them. Clinicians, by their training, deal in probabilities, and so they're very accustomed to tools coming back without certainty. It's not like running an oil refinery, where it's very mechanical and there are very set things. We are going by a lot of different sources of data. We all know there is some error in those lab tests and in the various imaging studies we send people for, and so we're accounting for that, and I think that's why these models actually work quite well in this environment. We need to understand how well they work, by doing things like MedHELM, but we really need to make sure our clinicians continue to view them as probabilistic. They have some error in them, just like a lab test or some other tool, and we just need to be cautious about hallucinations. I think we are very cognizant of that; we don't want our models making things up. But there's just great opportunity in leveraging that and bringing that skill to the forefront.

Jon Herstein (00:38:28):
Thank you for saying that, because that's a fascinating point I hadn't really considered before: the nature of what you do is probabilistic, to your point, and people are used to working that way. I guess there's a question over time of, is AI getting better at those probabilities than humans have been? Absolutely, right.

Todd Ferris, M.D. (00:38:46):
The more information we give it, the better, and this is the point of MedHELM, to really be able to measure that. As we continue to improve these, as we train the models on more and more data, their precision becomes better, and so we can continue to monitor that and make sure that it's reasonable. We don't want it giving something that's 50-50; that's not useful. There's a certain point where we're like, this is useful; there's a little bit of noise in here, but we can account for that.

Jon Herstein (00:39:12):
It's fascinating, just a different way of thinking about it because on the business side, if you're talking about things like financials, it's predictable and repeatable and you know exactly what the answer is and you can test that, right?

Todd Ferris, M.D. (00:39:24):
Yeah. If you're talking about a general ledger, it wouldn't be acceptable to sort of say, well, it looks like about this amount of money. But in the case of a patient, we're taking in lots of data as a clinician and working with that, and then looking at probabilities: what's the likelihood of this illness? Everything's pointing towards this illness, we should be headed that way, we should treat it like that. And then we watch for the response: does the patient respond to that? These tools can help us.

Mike Pfeffer, M.D., FACP (00:39:52):
There are often incomplete data sets too, so you don't have all the data all the time, and that's just something clinicians are used to. And so again, we want to make it better than what we currently have today.

Jon Herstein (00:40:06):
And I could see one of the uses being actually doing diagnosis, but I would imagine there's also a huge use for what's the right next test to even run. What's the recommendation for next steps, even before you get to what's the actual underlying condition? Is that the case? Is that happening?

Todd Ferris, M.D. (00:40:22):
There are certainly initiatives and research areas around that. Now, this is not new. Going back to the eighties, there were DXplain and other such tools, where you could put in a list of symptoms and the computer system would pop out a list of diagnoses in probability order, but I think the tools are getting more integrated into the workflow.

Todd Ferris, M.D. (00:40:45):
It was a lot to ask a clinician to go over to a separate computer system, type in all of those things, and then have the system spit it out. But now it can be in the workflow. Again, though, we need to make sure that our clinicians understand that this is based on the information the system has. You have the patient in front of you; as a clinician, you really get to understand, does this person look sick? You interact with that person and you have a real good sense of what they're doing, and so they need to integrate that along with this data.

Jon Herstein (00:41:13):
Never lose sight of that, right? There's still a human involved here that you're caring for. It's just so fascinating. I do wonder, and I don't know if you can give specific examples or not, but have you, or any of the folks that are starting to use these technologies, started to run into any ethical dilemmas? What is the nature of those, and how do you work through them? Maybe Todd, I'll start with you.

Todd Ferris, M.D. (00:41:35):
Well, I was going to say Mike might have better examples, being on the wards; he's seen a lot of this. And I mean, we have some dilemmas even today, like what we call note bloat, that's the term, where people are copying and pasting notes, and I think there are probably some issues that we might see with this. I don't know, Mike, have you been seeing anything in the early days on the wards?

Mike Pfeffer, M.D., FACP (00:41:55):
So I'll go back to technology and workflow, right? How you deploy these things and the workflows that you design certainly play into these challenges. So here's an example. I mean, we haven't deployed this, but this is just an example. If you're going to create a model that predicts which patients are most likely not to show up for their appointment, sure, you could create a model that's completely unbiased and predicts pretty well who may not show up for their appointment. Then the question is, well, what do you do with that? You could double-book, so you could add more patients onto the schedule, but if the patient shows up, this puts everybody at a disadvantage: the patient who was predicted to no-show, the patient who's there that was double-booked, and the physician who now has to deal with all of that. Or you call the patient who was predicted to no-show and you ask them, how can we help you get to the appointment? Right? Two totally different workflows, very different ethical implications.

(00:42:55):
So each one has its own potential challenges. That's why we do ethics assessments for a lot of what we do that impacts patients. The other thing is, it's a little bit of a cultural change, in the sense that we will start to be doing things that cross specialty lines. We could learn things off an image that maybe one physician ordered for a particular reason, but find something that another physician is going to need to take care of, which we didn't do before, or predict things, diagnoses, that one specialty doesn't typically handle. And who's responsible for that? So there's a little bit of blurring of the lines that is quite complicated, and we have to work through that; those are examples of the change management around all of this. Then there's regulatory, insurance, and all of these other things that factor into it too.

(00:43:51):
So let's just say it's a really exciting time, exciting and challenging because these are hard problems, but it's really fun and a privilege to get to work through these things. And now we have another tool in our tool belt that we didn't have before, one that's incredibly powerful. It's not like we didn't know these problems existed. It's not like we haven't wanted to make differential diagnoses better using more data. We didn't always have the tools to do that, and now we have the tools, and we have to learn. But the more you get into clinical decision making, that's where, again, the risk is higher. You start getting into software as a medical device, you start getting into really having to understand how these systems work, and then educating clinicians to understand probabilities related to what the models are going to produce. So that gets much more complicated, which is why I think you're seeing right now a lot of focus on automation in healthcare, because that's a lot of the low-hanging fruit.
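
As a hypothetical sketch of the no-show example, note that the model itself only emits a probability; the ethics live in the workflow attached to it. The features, weights, and threshold below are made up, and a real model would be trained and bias-audited before any workflow was wired to it.

```python
# Hypothetical sketch of the no-show example: a scoring model plus the
# outreach-style workflow (call and offer help, rather than double-book).
# The hand-set logistic weights stand in for a trained, audited model.

import math

WEIGHTS = {"prior_no_shows": 0.9, "days_since_booked": 0.02, "has_transport": -1.2}
BIAS = -1.5

def no_show_probability(appt: dict) -> float:
    """Toy logistic score over a few illustrative appointment features."""
    z = BIAS + sum(WEIGHTS[k] * appt[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def outreach_workflow(appt: dict, threshold: float = 0.6) -> str:
    """The workflow choice Mike favors: reach out and ask how to help."""
    p = no_show_probability(appt)
    if p >= threshold:
        return f"call patient (p={p:.2f}): ask how we can help them get here"
    return f"no action (p={p:.2f})"

if __name__ == "__main__":
    appt = {"prior_no_shows": 3, "days_since_booked": 40, "has_transport": 0}
    print(outreach_workflow(appt))  # high score triggers a helpful call
```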

Jon Herstein (00:44:52):
Right. And I see an analogy in some of the other industries that we serve, for example, media and entertainment. AI is very problematic from a creative perspective, so there's a lot of concern and a tendency to sort of stay away from it there. But if you think about the whole bunch of other business processes that every media and entertainment company needs to run, AI can be very helpful. So it's sort of deciding where it is appropriate and where it's not appropriate, and the guardrails may be different. You did mention, Mike, some regulatory stuff. I don't want to go too far down that rabbit hole, but can you touch on that a little bit? What are some of the things to be thinking about from a regulatory perspective?

Mike Pfeffer, M.D., FACP (00:45:27):
That's a very dynamic and open space right now. There hasn't been that much in the way of regulation of AI, just AI in general. We take it very seriously, in the sense that we know everything that's in our systems with AI, how we're monitoring it, when it got approved, and all of these things. So at least we are very careful about that part of it. I think everybody's trying to figure this out, so it's a learning space. And much like everything in medicine, when a drug gets approved, there might be post-deployment findings, a new side effect or a new whatever, and then you have a new warning or they change an indication; this is just common in healthcare in general. So I think the regulations for AI are not explicit but are probably going to follow in the same footsteps: we are going to learn and continue to monitor and go from there.

Jon Herstein (00:46:27):
Thank you. Todd, I want to turn to you and pivot a little bit to looking forward. We touched on a few things here, but what gets you the most excited about what's possible? And do you have any ideas about what could be possible that we simply can't do today, let's say three to five years out?

Todd Ferris, M.D. (00:46:45):
There are some interesting areas that we're working on. I think all the craze has really been about large language models, specifically for generative text and images and other things. But we're also thinking a lot about how you leverage transformer models, but not with what you would think of as traditional tokens, like a word. What if a token is really more like a medical event? Because then you can start to predict the next event. And so we're looking at that, right now with a big focus on the cancer side. We have two very large initiatives, through ARPA-H and then the Weill Cancer Hub West, where we're working on how we think differently about cancer care, and whether we can predict outcomes from a very complex multimodal dataset. So not just simply what's in the EHR, but also what's in the histology, which is really the gold standard when you think about cancer treatment.

(00:47:47):
Actually taking that biopsy of the cancer tissue and then looking at it under a microscope, staining it, and understanding what is going on at a cellular level is really what's going to drive this, and then the genomic sampling of that. So we actually take those cancer cells, we sequence them and understand what is going on, and we feed that into a model to then predict all the potential outcomes. We're starting down that path, and I think in three years we can really be just way out there. That information is so large, way more than a human can integrate, and I think that's where the computer comes into play.
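
To illustrate the idea of treating medical events as tokens, here is a toy sketch in which a simple bigram count model stands in for the transformer; the event codes and timelines are invented, and the point is the token vocabulary (diagnoses, labs, treatments), not the architecture.

```python
# Sketch of the idea Todd describes: treat medical events, not words, as
# the tokens of a sequence model and predict the next event. A bigram
# count model stands in here for the transformer.

from collections import Counter, defaultdict

# Toy patient timelines as event-code sequences (illustrative codes only).
TIMELINES = [
    ["dx:breast_ca", "biopsy", "path:her2_pos", "rx:trastuzumab", "scan:stable"],
    ["dx:breast_ca", "biopsy", "path:her2_neg", "rx:chemo", "scan:progression"],
    ["dx:breast_ca", "biopsy", "path:her2_pos", "rx:trastuzumab", "scan:stable"],
]

def train_bigram(timelines):
    """Count which event tends to follow which across all timelines."""
    counts = defaultdict(Counter)
    for events in timelines:
        for prev, nxt in zip(events, events[1:]):
            counts[prev][nxt] += 1
    return counts

def next_event_distribution(counts, event):
    """Normalize the follow-on counts for one event into probabilities."""
    total = sum(counts[event].values())
    return {e: c / total for e, c in counts[event].items()}

if __name__ == "__main__":
    model = train_bigram(TIMELINES)
    print(next_event_distribution(model, "path:her2_pos"))
    # -> {'rx:trastuzumab': 1.0}: given the histology token, predict treatment
```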

Jon Herstein (00:48:23):
That's a fascinating example. So it's the same underlying transformer technology, but applied to a very different problem set.

Todd Ferris, M.D. (00:48:29):
Foundational models that are developed on top of novel data sets.

Jon Herstein (00:48:34):
And I would imagine Stanford is at the forefront of some of that research.

Todd Ferris, M.D. (00:48:37):
Yeah, we're trying to be.

Jon Herstein (00:48:40):
Okay. So Mike, maybe to go back to the operational side for a minute: paint us a picture of a day in the life of a hospital augmented by AI. You don't have to tell us every detail, but what will feel different, maybe, to a patient?

Mike Pfeffer, M.D., FACP (00:48:54):
Well, there are a lot of foundational things we are still working on in healthcare, like scheduling and prioritizing and things like that, that as we get better and better at, we'll be able to automate. I think you're going to see more vision-based AI technologies in hospital rooms and in clinics that will do more than just listen to conversations; they'll understand how you're feeling, what your fall risk is, et cetera. So there'll be much more data generated around you as a person, which will then give clinicians the opportunity to make better personalized decisions. I mean, all of this, in my mind, is really moving towards personalized health. Clinical trials for drugs are averages: basically, one group on average does better than the other group, and it's statistically significant, so that's great, but not everybody in that group did better. Some did worse.

(00:49:54):
Not everybody is going to benefit in the same way. So the only way to get to "this is the right medicine for you": in the category of, say, high blood pressure medicines, you can choose from a bunch. We'll hopefully be able to get to the point where we can say this is the right medicine for you, and it will have the best outcome. That, I think, is where all of this hopefully will go. And so the experience hopefully will be easier. I think there'll be more interaction through patient portals, there'll be agents helping you schedule things, it'll be more seamless, but ultimately it's about that personalized, precision medicine that will really change the experience.

Jon Herstein (00:50:37):
It's so fascinating, and I think there's so much potential here, and I'm glad you and your teams are working on this. This stuff continues to evolve incredibly quickly. How do you ensure that Stanford, both culturally and operationally, stays agile and is consuming all of these innovations as they occur in the right way? Again, back to trust and safety, but what's the plan to make sure that you stay on top of all this stuff?

Mike Pfeffer, M.D., FACP (00:51:01):
Yeah, I mean, honestly, I don't think we can. I think there's so much coming at us all the time. I think we just have to stick to our mission and our values

(00:51:12):
and use our frameworks and identify the problems that we want to solve, because it's going to be better for our patients or better for our researchers, and stick to that. But it's so hard. I mean, there's so much coming our way. So staying true to that, I think, is how we stay on top of it. Recognizing that it's a team, it's not a person, it's not a small group, it's really everybody. And it's not just IT, it's everybody in the organization. It's that kind of philosophy that will help us. But I think it's very hard to stay up to date with everything, and everything's changing pretty quickly, which is exciting on one hand. But when you think about deploying things into production at scale, version control, making sure these things are performing as well as they need to perform, you can't always go down the shiny-object route and move too quickly. I think you really need to balance all of that. Todd, what would you add?

Todd Ferris, M.D. (00:52:08):
I think Mike covered most of it. The only thing I would add is what I mentioned earlier: really making sure we keep those sandboxes, those tool sets, available for our own team members as well as our community, so that they have that space to work and innovate. Because it is coming too fast and furious for any one person or one team to own. We really have to view it as our whole organization is out there looking at that next innovation and figuring it out, but able to do it in a safe, controlled way, so that it feeds right back into that funnel, and then we can figure out what's the next thing to deploy.

Jon Herstein (00:52:53):
And using the frameworks that you talked about to evaluate things in the funnel and bring them down. And I love, Mike, what you said earlier too, about always keeping the mission in mind. I think as these things come at you, having that lens of what the mission is and how this supports the mission. Whatever the mission is, in your case it's very clear: patient care and education and research. For a business it might be something completely different, but look at it through that lens is what you're saying.

Mike Pfeffer, M.D., FACP (00:53:19):
Yeah, we trained you well, Jon. You got the mission.

Jon Herstein (00:53:19):
Perfect. I got it. Yes. Perfect.

Mike Pfeffer, M.D., FACP (00:53:21):
Thank you.

Jon Herstein (00:53:22):
I want to close with a couple of things. In my role in customer success at Box, I think a lot about three things. One is delivering value, which we talked about earlier, and I want to touch back on that. The second is around culture and change, and the third is around the experience that we provide to the folks who consume the things we're providing. So three big areas, but I'll just ask you a couple of quick questions on these. So Mike, what do you see as the critical path to value realization, making sure that the things you're bringing in and evaluating and ultimately putting out there for folks are actually providing value?

Mike Pfeffer, M.D., FACP (00:53:55):
Yeah, I think it's all about the framework, and it doesn't matter necessarily which framework you use, but you need a framework to evaluate these things and then follow up on whether they're performing as they need to. And if they're not, turn 'em off, because they will live forever and they could create problems later. So it's really sticking to that framework, doing a really good assessment of the value it should provide, and then measuring it later. You've got to measure it later.

Jon Herstein (00:54:22):
Well, I think your point about follow-up is incredibly important, because at least on the commercial side, what you see a lot is that you do an initial business case to make the decision to go purchase something. But does anyone ever check back six months, 12 months, 18 months later to see, well, did it actually do the thing we thought it was going to do? Exactly. So evaluate upfront and then keep checking back to make sure that value's being delivered. Todd, let me ask you about culture and change. We've touched on this a little bit, and it's obviously a huge topic on its own, but as you're deploying these kinds of innovative technologies and deciding what's in and what's not, what guidance do you have for folks out there about how to think about the change aspect of that? I can't remember, Mike, if it was you who said it's not about the technology, it's about the change and the people. So Todd, how do you think about that, and what are the most important criteria for influencing it?

Todd Ferris, M.D. (00:55:09):
Yeah, culture is such a big thing. I think we're very lucky at Stanford that we have a culture of innovation. It's very much baked in; we're a very entrepreneurial bunch here, so that culture just permeates across the organization. The other piece that I think we're also blessed with is that folks do understand that there are limitations. We've been working on privacy and security for a long time, since I started, I won't say how long ago, but it was a while ago, and it's now baked in. It really is baked into the culture that people recognize that yes, I can be innovative, but I have to do it in a way that protects this data. It's so important. You've got to have those foundations, and when you have those foundations, then people can really build from there.

Jon Herstein (00:56:03):
Excellent. And the last thing I wanted to touch on is the experience that you intend to deliver. Mike, early on in the conversation, you talked about the importance of these solutions being integrated into the workflow, so that you're not over here doing some AI thing and then you've got to go back to the way you normally do work. Maybe for both of you, how do you think about making sure that the experience you provide ensures the success of the solution you're rolling out?

Mike Pfeffer, M.D., FACP (00:56:29):
Yeah. Well, it's a multidisciplinary team. It's clinicians, it's technologists, it's UI experts, it's nurses; everybody has to weigh in on how we design the workflow. Then we test it, and then we fix it, because the first time isn't always the right way. And that is just incredibly key. The people who are going to use it, the people who are going to manage it, and the people who are designing it all have to be in the same room, really thinking about that experience. That's so important that it has to be completed before we make a decision to move forward and put something into production. Because again, and we've talked about this a bunch, the workflow often determines what the model is going to do. So it all needs to be incorporated there, but it's really about bringing everybody together to understand how this is going to work.

Jon Herstein (00:57:27):
Todd, last word, anything to add on that?

Todd Ferris, M.D. (00:57:27):
I think Mike really summed it up. It is all about the workflow. If you don't get the workflow right, it doesn't matter how great your models are, it's not going to work. And it's iterative. You can plan the best you can, but until you actually test it and pilot it in that workflow, you just don't know.

Jon Herstein (00:57:48):
Well, I have really, really enjoyed this conversation. Maybe I'll give you each a very quick minute to share one piece of advice for folks who are not as far along as you all are in how you're thinking about this. For peers, CIOs, CEOs, CTOs, whatever leadership role they're in, what's one thing people should do, or not do, to really improve their chances of being successful with AI? Mike, I'll start with you.

Mike Pfeffer, M.D., FACP (00:58:14):
Starting with me? Okay. One thing I would say is to make sure you have a really good team. It always comes down to the people that you get to work with, and I am so fortunate to get to work with amazing people, both within TDS, our technology organization, and throughout Stanford. It's just about the people. If the team works well and the relationships are good, then you're going to be successful. It's like with anything; AI is just another tool. So it really still boils down to those basic principles: a really good team, a team that is high performing.

Jon Herstein (00:58:53):
Great. And probably once you build that team, hang onto it as long as you can and continue to grow and develop it.

Jon Herstein (00:58:59):
Yes. Todd?

Todd Ferris, M.D. (00:58:59):
Okay, so no offense, Jon, but don't believe vendors. They're not going to just come in and magically solve your problem. You've got to have the team, you've got to put the work in. You can't simply pay somebody and have it magically work. Too often I see vendors come in and say, we're going to solve all of this, and nope, you've got to put in the work. You've got to think through the workflow, the process, all of that. And to Mike's point, you've got to have a strong team.

Jon Herstein (00:59:32):
Yeah. Well, the good news is, I'm not offended by that. I think there's an element of trust but verify. Certainly as you build long-term relationships with vendors, you get to know them a bit, but there are a lot of new entrants in this space. So being really clear about what your requirements are, what your criteria are, and, going back to the frameworks, using your pre-established frameworks to do those evaluations takes some of the subjectivity out of the process. There's always going to be some, but it takes a lot of it out. We are very happy to be one of your vendors, and I would hope, actually, a partner more so than a vendor.

Todd Ferris, M.D. (01:00:03):
Yes.

Jon Herstein (01:00:04):
But this has been a great conversation, the first one I've done with two guests at the same time, and it was incredible. I appreciate both of your perspectives and insights, and I really appreciate your time. So thank you. Thanks, Jon. Yeah, thanks for having us. This is great. Thanks for tuning into the AI-First podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI-first era.