Health:Further


Show Notes

In this episode, Vic and Marcus welcome back Tarun Kapoor to discuss the evolution and real-world implementation of AI in healthcare. They explore the lessons learned from Woebot, a digital therapeutic tool, and its regulatory challenges. Tarun shares insights on the future of ambient listening and vision in clinical settings, the rising need for AI infrastructure and oversight, and the shifting standard of care where not using AI could soon be seen as negligent. They also discuss workforce transformation as digital agents emerge, and Tarun reflects on his next steps after 17 years at Virtua Health.

Tarun Kapoor LinkedIn

HF Site


What is Health:Further?

Every week, healthcare VCs and Jumpstart Health Investors co-founders Vic Gatto and Marcus Whitney review and unpack the happenings in US Healthcare, finance, technology and policy. With a firm belief that our healthcare system is doomed without entrepreneurship, they work through the mud to find the jewels, highlight headwinds and tailwinds, and bring on the smartest guests to fill in the gaps.

If you enjoy this content, please take a moment to rate and review it.

Your feedback will greatly impact our ability to reach more people.

Thank you.

Good sir. TK, welcome back to the show.

Hi, gentlemen.

I'm surprised you keep having me back, but yes, it's great to be here.

Uh, TK, last time we saw you was at the, uh, Jumpstart Health Summit.

And just really quickly, we want to thank you for, uh, blessing us with your attendance.

I think you, um, certainly had one of the highest rated sessions.

Your session with Nick Holland on AI was, uh, celebrated by all the attendees.

So thank you for being there with us and giving us that unique perspective that, you know, is the reason why we keep bringing you back on the show.

I, again, I, I appreciate the opportunity.

I learned a ton myself and, gosh, there's so much that's happened even since the Jumpstart session and the last time we talked on the podcast.

So I'm looking forward to catching up.

Yeah, yeah.

Yeah.

Vic, let's jump right in on that note. We've got an agenda.

We want to, yeah, sort of respect your time and also the listeners', so Vic, go ahead.

Yeah.

There were several things we collaborated on before that we wanted to cover.

And so, TK, you were one of the early adopters of AI through the Woebot tool.

And I know there's been, you know, some good, some not so good.

Tell us about what you've learned from that experience and sort of what, what comes out of it.

Maybe lessons for other systems or, or takeaways.

Yeah, so for folks who may be a little newer to Health:Further, the way I got introduced to all of you is that I got a call from Bruce Brandis saying, hey, these guys are talking about you on Health:Further; they say you're using artificial intelligence in this way. And I listened to the podcast.

I was like, yeah, we're using it, but not exactly that way. So I actually reached out to you guys by email, and you were gracious enough to say, well, hey, that's a great explanation, come explain it to us live on air.

And then I think we had our first great conversation, and we've learned a ton.

So, just as a quick recap: digital therapeutics are probably not new to this audience, but the idea was, for patients who are struggling to get in to see a mental health therapist, or who are put off by the stigma of going to see one, could we use a more intelligent chatbot?

And what we did not use with Woebot was a large language model. We used a knowledge graph with machine learning, so it was essentially what we'd call a closed rules engine. It was pretty sophisticated in its ability to listen, but it was very fixed in its output. It couldn't really hallucinate in that way.
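[To make the closed-rules distinction concrete, here is a minimal, purely illustrative sketch: the input side can be flexible about interpreting what the user says, but every reply is drawn from a fixed, pre-approved library, so the system cannot generate novel clinical content. The intents, keywords, and responses below are hypothetical and are not Woebot's actual implementation.]

```python
# Illustrative sketch of a "closed rules engine" chatbot: interpretation of the
# user's text can be flexible, but every possible reply comes from a fixed,
# pre-approved library, so the system cannot generate novel (or hallucinated)
# clinical content. All intents and responses here are hypothetical.

APPROVED_RESPONSES = {
    "low_mood": "It sounds like things feel heavy today. Want to try a short thought-reframing exercise?",
    "anxiety": "Let's slow down together. Try breathing in for 4 counts and out for 6.",
    "crisis": "I'm not able to help with this safely. Please call or text 988 to reach the Suicide & Crisis Lifeline.",
    "unknown": "I want to make sure I understand. Can you tell me a bit more about how you're feeling?",
}

CRISIS_TERMS = {"hurt myself", "suicide", "end my life"}


def classify_intent(message: str) -> str:
    """Stand-in for the ML/knowledge-graph classifier: maps free text to a fixed intent."""
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return "crisis"  # safety rules always take priority
    if any(word in text for word in ("sad", "down", "hopeless")):
        return "low_mood"
    if any(word in text for word in ("anxious", "worried", "panic")):
        return "anxiety"
    return "unknown"


def reply(message: str) -> str:
    # Output is a lookup, never free-form generation.
    return APPROVED_RESPONSES[classify_intent(message)]


print(reply("I've been feeling really anxious about work"))
```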

Right.

Since that time, other large language models have come in, and I think hallucinations have tightened up pretty dramatically across the board.

But nevertheless, we stuck with Woebot.

But our goal was to see, can we actually prove that this thing works, and can we simultaneously figure out a way of getting it into the marketplace? Because you've got to get it reimbursed.

So, the results. I think I shared some initial results with you last time: for the first 30 or so patients that we studied, three to four points of improvement in their PHQ scores, which is a depression indicator, and three to four points on their GAD scores, which is an anxiety indicator.

Both of those scores are right there with medication therapy.

We did a second set of data, and the numbers validated, even better: scores were four-plus on each of them. And for the statistical nerds out there, the p value was like a point with seven zeros in front of it and then a one, which meant that the chances of this being chance would be:

Imagine a football stadium with everyone in the stadium flipping a coin simultaneously, and all of them being heads.

That was how statistically significant the results were.

Right?

So this thing worked. But people who were following along may have seen, wait a minute, there were announcements in the last month that Woebot is going to be closing the app. What ended up happening here, and this actually started with the previous administration as well, is that the curveball we didn't see coming was that it was getting very, very hard for the FDA to sign off on things, for whatever reason.

Mm-hmm.

It started with the Biden administration, and then of course this administration has made some pretty significant cuts to FDA staffing.

Woebot made a decision that they're not going to do the regulatory model. They're going to pivot toward, I wouldn't call it a direct-to-consumer model, but one that's not necessarily working through the health systems.

Mm-hmm.

But the issue comes down to this: once you start down the regulatory path, you can't just pivot your platform over. And so the decision has been made not to continue going forward with that specific product.

And, you know, people ask, do you think it was a failure?

And the answer is, I don't think it was a failure.

I think we proved the concept can work.

It's effective, clinically safe.

Well, first and foremost, it was safe.

That was the number one thing.

It was safe.

Number two is effective.

But number three is where we fell short.

We couldn't get it to scale.

But number one and number two are pretty exciting.

We actually had our first payer ready to go, and now we're pivoting away from this tool to other ones.

But we have two other digital therapeutics that are teed up and both of them have payers ready to go.

So, you know.

Two steps forward, one step back type of thing.

But hey, listen, innovation is not a straight line.

Sometimes it's a little bit of a circular path.

Yeah.

So maybe talk to me about that. I mean, I understand the business decision for a particular company, Woebot, to choose a certain trajectory. But from a health system, health policy, public health point of view, it strikes me that it is safer to have the escalation path that comes with a health system providing this in partnership with a technology provider, which would require it to go through a regulatory path. Maybe it's a slower process, but it seems like the results could be strong enough to get through that process.

Do you have some confidence that there will be solutions that health systems can use?

It makes me nervous to think that there will be technology tools just out there direct to consumer, while the actual providers who know how to treat a patient, if he or she needs to be escalated, would not be using these tools. I understand the delays and challenges in regulatory pathways, but that seems like the wrong outcome from a public health point of view.

Vic, I think it's a really fair question.

I wish I had a really clear-cut answer on this one, and I'm not sure I do. I think the whole idea originally behind getting the FDA approval was to bring that reassurance that the diligence was being done, that it was being double-checked, triple-checked. What I think the signal coming from the administration is now, at least currently, is: we want people to go fast, and we're going to trust that you're going to double-check it yourself.

And I think a lot of health systems are gonna have to think through that.

Do you have the infrastructure in place to double check these things yourself?

Hmm.

Who's monitoring Drift?

Who is monitoring hallucinations?

Who's monitoring for bias?

And I think it's going to open up a number of conversations that maybe smaller health systems are going to struggle with, and maybe you're going to see more of the innovation in this space, the willingness to take risks on it, consolidating into the larger systems, because it's just going to require a significant amount of infrastructure to be able to monitor these things. I think that's what this tells us. Hey, I hope I'm wrong on it, but that's my current reading of the tea leaves.

Yeah.

That might be right. I'm pretty frustrated with that answer, though. Not that it's wrong, but if the US government, in the form of the FDA, doesn't want to take this on, how can a health system, inherently much smaller and less resourced than the FDA, be expected to? That might be the answer they're pushing on the market, but it doesn't seem like the right answer to me.

Maybe that's why I'm not in politics, but at, at, at the same time.

Right.

The FDA even recently has shown remarkable, remarkable speed on certain things.

I think you heard about the CRISPR case of the child who was, I'd say, essentially cured of a very deadly genetic condition. And the FDA turned that approval around in under a week.

But I think that was a situation where, if they didn't do it, the child was going to essentially be disabled permanently. It was a compassionate care situation, and this is a different story. So that's where I think the FDA was willing to be a little bit more risk-tolerant. In situations like this, I don't know if that's the case.

And, and by the way, it's not just with a digital therapeutic.

I mean, I'm seeing demonstrations of large language models for clinicians to use that will read their EMR in real time and say: you know what, I noticed you have this patient on a statin for cholesterol. I just looked at their previous labs and I calculated their 10-year ASCVD score. I recommend you go up to 40 milligrams. And that advice is correct. The question health systems are now going to have to weigh in on is: is that an FDA-regulated device or not?
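[As a purely hypothetical sketch of the kind of background check such a copilot might run: the risk estimate, threshold, and dose suggestion below are illustrative placeholders, not a validated clinical algorithm or any vendor's actual product.]

```python
# Hypothetical sketch of a copilot-style flag: compare an already-computed
# 10-year ASCVD risk estimate against a threshold and suggest the clinician
# review statin intensity. The risk value, threshold, and dose suggestion are
# illustrative placeholders, not validated clinical logic.
from dataclasses import dataclass


@dataclass
class Patient:
    name: str
    ascvd_10yr_risk: float      # e.g. 0.12 == 12%, assumed pre-computed from labs/vitals
    statin: str
    statin_dose_mg: int


def statin_review_flag(p: Patient, risk_threshold: float = 0.075) -> str | None:
    """Return a suggestion for clinician review, or None if nothing to flag."""
    if p.ascvd_10yr_risk >= risk_threshold and p.statin_dose_mg < 40:
        return (f"{p.name}: estimated 10-year ASCVD risk is "
                f"{p.ascvd_10yr_risk:.0%}; currently on {p.statin} "
                f"{p.statin_dose_mg} mg. Consider whether intensification "
                f"(e.g. 40 mg) is appropriate. Clinician to confirm.")
    return None


flag = statin_review_flag(Patient("J. Doe", 0.12, "atorvastatin", 20))
if flag:
    print(flag)
```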

Right.

And I think the answer we're going to be getting back is: no, we're not going to do this, you have to take this on yourself.

And the response right now is, well, there's still a human in the loop.

Right?

Right, and we've talked about this before.

Yes.

Where, you know, at some point you can't have the human always checking everything. And the analogy I give to folks is: you have a car with those blind-spot signals that go off. Now, in theory, you're still responsible, right?

You're a human in the loop.

But at some point you learn to trust that light.

If it's on, there's a car in your blind spot.

If it's off, there isn't a car.

If I keep having to look over my shoulder every single time, what's the purpose of the light?

Right?

Right.

And so that is the part we're going to have to figure out. This is where I would love the FDA, or someone who can work agilely, to work on this, and maybe this is something Marty Makary has some thoughts about and would be willing to make these investments, but I think there are other things they're trying to figure out right now.

Yeah, but well, go ahead.

I mean, my question is, if they are saying we're pushing it back to health systems, I also hear we're pushing it back to the market.

So I guess my, my flash question, and we, we've got a couple of other points and I'm tracking time here.

I wanna make sure we, we move through these.

But my flash question here is: might this be an opportunity that the innovation economy can reasonably solve, this observation and infrastructure challenge, right?

You know, in a less technology-enabled world, revenue cycle was solved for by the R1s of the world, right?

You know what I mean?

And so I'm just wondering, does this create a nice surface area for innovators and venture capitalists to say: hospitals are going to need support in managing their AI? And that is, you know, a managing, monitoring, triaging, queuing, auditing capability.

It's not necessarily compliance to a third party.

It's compliance to your own standards now.

Right.

Is that something that could reasonably emerge out of the innovation economy? And I'm really asking you: do you believe, as someone who has the experience of running digital transformation inside of health systems, that that's viable? Or do you think, no, we need the government to step in here?

I, I think it's viable, Marcus, because the current alternative is we don't have one.

And so I almost think of this as to some extent, like the Joint Commission.

The Joint Commission is not a federally run organization.

That's interesting.

The Joint Commission has rules, and if you don't have Joint Commission accreditation you're not going to do very well out there in the marketplace, but it's not run by the feds.

And so, is there something that could spin up here in the private space, where a health system can say: we're using XYZ company to validate what we do, we have someone looking at our model ops and our DevOps in these types of environments?

And is it perfect?

No.

But at least I have something, and I think that could carry weight in this space.

That's interesting.

I do think so. The other point I'll offer is, as I've talked to vendors in this development space, I have said to them: you can't expect to push all of the clinical risk off onto the health systems. You have to come up with some type of sharing mechanism on the risk. I'm not expecting you to take all of it, but at least can we come up with some type of reasonable shared model, a mutual indemnification or something?

Mm-hmm.

I think the health systems are getting savvy enough that they're going to be asking those questions now, and I would tell vendors to be prepared to answer them.

Okay, interesting.

One of the benefits of us interviewing you is that you're out on the edge, so giving us feedback on what you're seeing out there is really helpful.

Moving on to the second topic: one of the earliest uses was ambient listening, basically replacing scribing and note-taking. You're starting to see ambient vision out there, so maybe define how you think about these terms and then tell us what you're seeing in ambient vision.

Yeah.

So, with regard to ambient listening, I think the current use case that everyone's so hyper-focused on is documentation, right?

And it makes a ton of sense.

There's a lot of friction.

There's a lot of administrative waste there. That said, ambient listening itself is really only amenable to instances where everyone is articulating the conversation, where you're maybe in an ambulatory setting and you're speaking, the patient's speaking, and so there isn't any unspoken information per se that isn't covered. But in the procedural suite, a lot of what is happening is not spoken.

It's visual.

So imagine you're doing a procedure and you're watching the, the procedure.

You don't want to be articulating the entire procedure out loud.

Mm-hmm.

Right?

In order to capture the documentation, it's just not practical. If you're operating and you're cutting something, you're not going to say, I am now cutting so-and-so.

Right.

I am now doing this. No one's going to use it that way. So that situation got me thinking about the hybridization of these worlds, where ambient vision plus ambient listening together start to create a more encompassing capture of signal.

But the article I'm actually just about to release on LinkedIn, and now that I'm talking with you I have to make sure I get it out, is about what I call the three-box problem.

For ambient, whether it be listening or vision.

And that is, we're so hyper-focused on documentation.

Are we actually missing the real opportunity?

Uh, so let me, let me explain.

I'm going to tell a story from when I was in residency training, when I used to hand-write a two-sided progress note. I know, for some of the listeners: yes, we actually hand-wrote our progress notes back in the day.

I wrote a two-sided progress note and I had, uh, an attending come over, read one side of it, shocked to find out there was another side.

Flipped the page over, read the other side.

And he said to me, you know, Tarun, long progress notes are a sign of narcissism.

And then he walks away and so the next day I go write a one page progress note and he comes back.

He's like, oh, good.

You learned.

He was joking around with me about the narcissism piece, but then he drew on a piece of paper, three boxes.

Box One was this large box.

Box two was a middle-sized box, and box three was this much smaller box.

He says in number one box, the large box is what is actually happening with the patient.

Box number two, the middle-sized box is what you are aware is happening with the patient.

And box number three, that small one is what you're documenting about the patient.

Between those boxes, there's a finite amount of energy.

He says, you are spending all your time and energy increasing box number three in size. What you should be doing is spending your energy increasing box number two in size.

I think that's where there might be an opportunity to recalibrate how we're thinking about ambient.

Because if all we're doing is thinking about ambient as documentation, we're missing the opportunity to make the middle box bigger.

And so, for example, it's one thing to listen today, but we know that the emergence of vocal biomarkers is really taking off.

Right.

The ability to pick up Parkinson's in voice patterns, Alzheimer's in speech patterns. We know there are cardiac biomarkers too: if I'm fluid-overloaded in my lungs, I can't speak as many words per breath.

There's no way a human can keep track of all of those conversations in their head over the years, but the machine can.

And so where I think there's a real opportunity to start thinking about ambient is longitudinally capturing all of this signal, of which a subset goes to documentation, but another subset might be able to say: hey doc, you know what, I think you need to watch out for this, something's changed in the speech pattern. And this opens up a whole other set of caveats.

And so I think that's where we need to be having the conversation.

And that's why I also think vision starts to play into this.

I know this has enormous consequences with privacy, preexisting conditions, all of that.

But that's something I would be encouraging the industry to be thinking about.
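[As a toy illustration of the longitudinal idea: track one simple speech feature per visit and flag a sharp drop from the patient's own baseline. The feature here (words per utterance, standing in for words per breath), the baseline window, and the threshold are invented for the example; real vocal-biomarker models are far more sophisticated.]

```python
# Toy illustration of longitudinal ambient signal: track a simple speech
# feature per visit and flag a large drop from the patient's own baseline.
# The feature and thresholds are invented for illustration only.
from statistics import mean, stdev


def words_per_utterance(transcript_utterances: list[str]) -> float:
    counts = [len(u.split()) for u in transcript_utterances if u.strip()]
    return mean(counts) if counts else 0.0


def flag_drift(history: list[float], today: float,
               min_visits: int = 4, z_cutoff: float = 2.0) -> str | None:
    """Compare today's value to the patient's own baseline; return a note if it dropped sharply."""
    if len(history) < min_visits:
        return None                      # not enough baseline yet
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return None
    z = (today - baseline) / spread
    if z <= -z_cutoff:
        return (f"Speech feature dropped to {today:.1f} words/utterance "
                f"(baseline {baseline:.1f}); consider follow-up.")
    return None


visit_history = [14.2, 13.8, 14.5, 13.9, 14.1]          # prior visits
today_value = words_per_utterance(["I get", "short of breath", "walking to the mailbox"])
print(flag_drift(visit_history, today_value))
```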

It's interesting. So certainly listening and vision are maybe the first two, but there are many other data sources. For instance, all the labs at different paces, the sleep readings, the watch on my wrist. Right now during a procedure there's lots of data being collected that various humans are watching, but typically one human can't watch all of it.

And is that what you're saying, that we should expand?

I mean, vision would be the next one, but you could have many different inputs into the, um, intelligent copilot for the doctor.

Absolutely concur.

And that's where I'm getting at.

What I'm also trying to get at is that the conversation has to start now, because you have to start preparing the workflows. What really got me riled up on this was I heard someone presenting on one of their ambient listening copilots, and they said: 50% of our doctors use the ambient listening copilot and 50% just use their templates, and we're fine with either one as long as the doctors are happy and efficient.

And I said, wait a minute.

And I started thinking about it.

I was like, no, actually that's not the right way to do this.

The right way to do this is we capture all of the signal ambiently.

And then if the doctor says, I don't like the output of that for my documentation, that's your choice.

But you still wanna capture the signal because there's so much about the signal that we can establish as a baseline today that will detect biomarkers that we didn't even know existed yet.

And so, because if you only start doing this two years from now, look at all the signal loss that you have.

And I think these are conversations health systems and vendors should at least be having about the advantage of taking in all the signal.

Yeah.

No question.

I mean, my cynicism makes me say that part of the reason ambient listening has taken off is that it's very aligned, right?

It saves time for the doctor and it gets the reimbursement increased typically.

Um, whereas these other uses are less clear.

They're, they're definitely gonna lead to better, better diagnoses, better patient outcomes, but they might create, in some cases, more work for the doctors.

And I think they should be utilized, but it's less clear to me when they'll be adopted.

Concur.

And I think, on the flip end, you have to start thinking beyond just documentation. If you're an ambient documentation vendor, I can guarantee you're probably not commanding the same per-clinician-per-month revenue that you would have three years ago. So to stave off the commoditization impact, you're going to have to either come up with lower prices or come up with other use cases, and I think that favors those who are actually able to capture adequate signal and feed their data sets accordingly.

And you know what the beauty of this is?

You're not even actually having to beg the health system.

Oh, can I have access to your Epic data?

Your Clarity database, right?

Yeah.

You're creating your own. That's amazing.

That is the other piece that I would be pushing, like you have direct access to the signal yourself.

And you don't have to get into that.

So that's something I would really strongly consider.

You know, for the folks in the innovation space, there's a lot you can do.

And, and of course you can do even more if you have all the discrete data too, but don't discount having access to the unstructured data.

Yeah.

Excellent.

Okay.

The next one I'm going to ask you and sort of tee up for you. So, TK, who's going to manage all the AI agents running around the health system as they show up there?

The reason I think we talked about it in our prep is that this was raised earlier.

Um.

Because listen, we're now officially in an agentic world, right? I mean, I guess we've been in it before, but Google I/O, I think, has really put this front and center, and I have not signed up for my $250-a-month shopping service yet. But given that I just screwed up Mother's Day, I got my wife something but I didn't get my mom something, I think I need to figure that out.

Um.

Um, and, and by the way, I missed on the anniversary, so you, that's probably a better one to pick.

Um, I, I think there's plenty of opportunities for me to reconsider that.

And listen, everyone's got agents now, and so the question has come up.

Great.

Who's managing your agents?

And it's a question someone posed to me. I started chewing on it and I don't have an answer, but let me raise it with Marcus and Vic to see what they think about it as well. Is the Chief Digital Officer, the Chief Information Officer, the Chief Information and Digital Officer of the future actually the Chief Human Resource Officer for agents?

Yes, I think the chief human resources,

Chief like resource officer.

Correct.

Including human and digital.

We've been talking about this on the show. That seems like the way to align interests, where you need humans for some roles, but they're not best at everything.

Digital agents could be better at some things.

Yeah.

I mean, Moderna was the one we were reading about in the Wall Street Journal, where their chief people officer is merging with the chief technology officer, because you're just going to have resources.

Now, some of them are human and some of them are digital.

Yeah.

But they're all resources, and they all have to work together and be orchestrated, and you have some responsibility to them all. That's the part I think we're not quite fully ready to deal with, but you're going to have some responsibility to the digital resources as well.

I mean, I'm playing with this right now, TK, trying to use an agent for some of the workload that's done in a venture capital firm. And the only thing limiting us from unveiling it and using it is the interfaces: getting the agent to interface with our tools, on Slack, on email, in Asana, the ways that we get work done. Right now I need to, or someone has to, build an interface, but the capabilities are already ready. The interface is not technically hard; it just is the hardest piece for me, because you have to interface with, you know, Slack, build against a particular API, you have to work within their systems.
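[A minimal sketch of the kind of interface work being described: wrapping one Slack action as a callable tool an agent runtime could invoke. It assumes the slack_sdk Python package and a bot token in SLACK_BOT_TOKEN; the tool-spec dictionary at the end is illustrative and not tied to any particular agent framework.]

```python
# Minimal sketch of wrapping a Slack action as a callable "tool" an agent could
# invoke. Assumes the slack_sdk package and a bot token in SLACK_BOT_TOKEN;
# the tool-spec dict is illustrative, not any specific framework's API.
import os

from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])


def post_to_slack(channel: str, text: str) -> str:
    """Post a message to a Slack channel and report the result back to the agent."""
    try:
        resp = client.chat_postMessage(channel=channel, text=text)
        return f"posted to {channel} at ts={resp['ts']}"
    except SlackApiError as err:
        return f"slack error: {err.response['error']}"


# A generic tool description the agent runtime could expose to the model.
POST_TO_SLACK_TOOL = {
    "name": "post_to_slack",
    "description": "Send a short status update to a Slack channel.",
    "parameters": {"channel": "string, e.g. #deal-flow", "text": "string"},
    "callable": post_to_slack,
}
```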

Yeah.

And I think, certainly at the forefront of all of this, the smaller companies, the vendor companies, are way more open to doing these types of things. On the health system side, typical HR was about dealing with labor negotiations.

Yeah.

And the CIO is, you know, consumed with the cybersecurity stack at this point and all of those other things.

Invariably, the question is: CIO, chief digital officer, whatever you want to call it, are either of those roles, or the incumbents in those roles, really geared toward doing those types of things, or do they even want to do them?

And then the question becomes, well, if you don't have anyone to give it to, then you don't do it.

And if you don't do it, someone's gotta do it.

Is that becoming an opportunity, again, for private industry to come in? I could foresee that, while I don't think health systems are going to want to fully outsource their human resources folks, they'll be way more comfortable outsourcing the management of their agents.

Um, and, and so I think there could be some opportunity there.

And I think, you know, what's so interesting, and why I always listen to you guys, is that AI is new. Well, the work we're doing with AI is new; the adoption is creating new industries and sub-industries inside of itself. And you know some of the stats, right? Like, half of the jobs that will exist 10 years from now haven't even been created yet.

I think we're starting to see some of these things, right?

It's got me thinking: wow, I need to go find someone to do that; or, I can't find anyone to do that, so maybe I need to create that.

Yeah.

So I think I'd just encourage the vendors in the space, the entrepreneurs: listen to all of this signal.

There's opportunity all over the place here.

Yeah, that's right.

And there are places in the market that have not been very attractive from a gross margin, net margin, scalability point of view. I'm going to take home health.

Home Health has been a human resource staffing challenge, right?

It is huge turnover.

The, the payment for your workers is quite low.

The job is hard.

And I think AI opens up new opportunities where that entire body of work could be very different with a combined set of resources. I mean, it will be a while before a robot is giving my mom a bath, but a lot of the aspects could be very streamlined if you had an AI or a health coach in the home with the patient 24/7.

And then the, the human comes in for particular skills, but their job is much easier.

All the documentation, all the, all the messaging.

And so there's the opportunity to do a home health business with a different way to deliver the care.

I think there's just entirely new business opportunities out there, to your point, around using AI in different ways.

And I always say, taking my health administrator hat off and wearing my physician hat: people ask me all the time, with everything that's happening, would you go into medicine again? And my answer is a hundred percent, I would go into medicine again, for everything it's taught me and everything I've learned.

And I, I think the best thing about going to medical school for me and and residency training was it taught me to become an expert learner, which I think a lot of entrepreneurs and and VCs are expert learners as well.

But what I would caution people who are going into the healthcare field is: don't dabble in the middle. And what I mean by that is, that's where AI is going to create the greatest degree of disruption, and it's not just in healthcare delivery; it can be in law, it can be a bunch of different areas. Currently, at least, AI doesn't do hyper-technical well.

Hmm.

Uh, even though Elon says it's gonna be better than the best neurosurgeon in, in, in a few years.

I, I think we're still gonna be wanting human surgeons cracking our chest open.

Yep.

And replacing organs and stuff like that.

Yep.

So the hyper-technical will still exist.

The hyper humanistic will still exist.

End-of-life care, hospice care, I think even oncology care: while the algorithms will be outstanding at finding the right clinical trial for you, you still want to get the diagnosis from a clinician and talk through what the impact is going to be. I think the folks in the middle, though, are in for some pretty rude awakenings, where you're saying, I'm exclusively doing diagnostics, I don't have much humanistic interaction with patients, and I'm not a hyper-technical person either.

An example I'll give: I just did a presentation for our board using ChatGPT. My son's knee was making a popping sound, and being a physician, I ignored him. Dad... I said, no, it's fine, go away. So I put his symptoms into ChatGPT, and ChatGPT diagnosed him with patellar subluxation syndrome.

My wife's like, no, no, I'm not trusting our only child to GPT. I said, what about the paid version? No, no, not even that. So she took him to primary care, to the pediatrician, and the pediatrician said, I'm not really sure what this is.

Let's go see sports medicine.

And I was like, that's why I bring you to primary care in this situation. And so we went to sports medicine. And sports medicine,

Guess what?

Said patellar subluxation syndrome.

And that's where, you know, I would expect my PCP of the future, my pediatrician of the future, to throw an ultrasound probe on you.

Like, let's, let's see if this thing is loose or not.

Right.

And, and so I think people are going to want, need to challenge themselves.

I think urgent care is going to have to really think about this as well. We already know that there are virtual urgent cares and text-based urgent cares, and they're pretty decent.

Uh, not for everything, but, but for a lot of things.

So I think it's about making sure you have adequate skill sets: hyper-specialization where appropriate, procedural specialization where appropriate.

And, and then hyper humanistic skills.

I think that's where there's gonna be plenty of room for, for people to, to work in the meantime.

The other thing, by the way: at the end, it asked me if I wanted any recommendations of where to go.

And it asked me for my zip code.

Oh boy.

And I said yep, and it gave me the names of two people. One with Virtua Health System.

Good answer.

And one, an independent doctor who by the way, is very reputable.

And those two would've been on my choice, my top list.

I also tried to get it to send me to the ER, and it would not send me to the ER. It said, I don't think you need to go to the ER.

Wow.

And that is pretty impressive.

And I know we've talked before about the large language models and what the progression has been. Gentlemen, that was legit medical advice. I'm telling you right now, OpenAI was giving legit medical advice. And so the last thing I asked it to do was, will you make the appointments for me? That it said it would not do, but I'm sure that'll be the next option this fall for $250 a month.

Sure.

So, sure. That's back to where we need an agent to be able to use a tool, like the phone or the internet, to book it.

Wow.

Yeah.

So, kind of stitching this back together to the beginning with Woebot: you made the intentional choice not to use a generative LLM and to go with a more machine-learning, controlled setting. That was 18 months ago, a little while ago. Talk to me about the progress. You just talked about it with your son, but clinical LLMs, there are several designed for medicine now. I know Google has one, there are several versions, and a couple of independents. How do you see that progression, and is it time to start using the generative models, even though they're not deterministic and you can get lots of different results? The hallucinations are becoming less frequent, and there's the benefit of that freedom and full range of decision making, I guess that's what you'd call it.

Well, how do you think about that choice?

I think the first place, and I know there are some health systems who are truly bleeding edge on this, more so than cutting edge, is that they're using clinical LLMs inside of their practices.

You know, remember we have a retirement crisis.

Well, we know we had the silver tidal wave, right? We're only halfway through it.

Don't forget that also affects our clinicians.

And so anything we can do to extend the career of our clinicians today, uh, is, is worth everything.

We, we cannot recruit our way out of this problem fast enough.

It takes seven years from the day a clinician decides to go to med school before they can even come out on day one to practice primary care, let alone if they want to subspecialize. So I think that use is well under way.

I think the question that I started asking myself is at what point does it become negligent not to use it?

And I think we're starting to see it in some cases. For example, AI-assisted polyp detection for colonoscopy, in my mind, is now at a place where that is standard of care.

If you're getting a colonoscopy and your clinician is not using an AI tool to double check to make sure that they don't miss a polyp, I would not go to that doctor hands down.

And I think you'll probably see some case law. You'll see a case at some point in this upcoming year where someone files a claim that says: doctor, you knew this technology was available, you knew it's not exorbitantly priced, how come you didn't use it on this patient? And I think once a couple of those cases hit, people will start double-checking other things too.

So I actually think we're going to get to a place in the next 12 to 18 months where there will be cases that come up of: you knew there was a clinical copilot available, how come you didn't use it? It could have double-checked these things. And once a couple of those cases hit, I think the attitude starts to change very, very quickly.

Yeah.

That's where my mind goes: I wouldn't fly on a jet if the pilot was not using autopilot at times, not the entire flight, but having it available.

And I think eventually in the next 6, 12, 18 months, that attitude in the public will be the same in medicine.

Like you have this copilot available.

The doctor doesn't have to always use it, but he or she should always have it as a resource to draw on.

I, I would go a step further and say, and I think the health systems, I think it actually could potentially be riskier if you had it and you didn't use it.

Yeah.

It's one thing to say, hey, we don't even have it; okay, well, you didn't have it. But if you have it and it's not on, wait a minute: you had it and you didn't turn it on. So I think both have to be in play.

A: the tool was available on the market and you didn't go for it. And heaven forbid you actually got the tool and then chose not to use it.

That's hubris in, in some of these cases.

Yeah, and I think that is going to be an eye-opener for a lot of folks.

It's going to be a lot of change to absorb in a healthcare system that's not so forward-leaning or comfortable with change.

I do want to highlight that I think your perspectives on this, TK, have been consistent since our first conversation with you, and you really haven't been wrong. It just keeps lining up with the things that you're saying. And I think the general public is getting there, but would still be surprised at the perspective of an experienced, licensed medical professional saying that using AI in the clinical setting really should be best-practice standard of care.

Right.

Like I, I think, I think even, I think there's even still a gap in, in the general public's understanding.

You are hearing more and more stories, especially on social media, of people diagnosing themselves correctly with ChatGPT, as you said.

So I think those types of stories are out there in the zeitgeist, but they don't translate to an understanding that the healthcare industry, your community health system, should actually be deploying these tools for the betterment of everybody, right?

Mm-hmm.

Better diagnosis, better ability to, to, to be preventative, to catch things early.

You know, all, all, all of that for, for all the reasons that you've said so.

I think it will be interesting, as we continue this dialogue with you, to track that gap between general public awareness and where you've been leading us for well over a year now. I would say I'm fully bought in to what you're saying, but maybe this is the first conversation where I'm actually going to go home and have different conversations with my own PCP. I've got a concierge PCP, and the good news is I feel I can reach out to him proactively and say: hey, tell me the ways in which you're using AI in your practice. I expect that of you at this point; we're there. And then just see what the feedback is.

So I, I just wanted to kind of point that out.

Yeah.

Yeah.

I, I agree.

I mean, I think part of the reason we like to have you on is you are maybe six months ahead of where the rest of healthcare, even the early adopters, are going to be, but you've been exactly right. As in, it was time to really test with a known commodity, Woebot, so we had a very clear example and could get the p value of whatever you said, point oh-oh-oh-one.

That was where we needed to be 18 months ago. And now the clinical LLMs, which are generative, are worth testing and really putting head to head against other tools. I think that's what you're saying.

Yes. I think we are no longer in a place where you can say, I'm just going to wait longer. If you are not actively thinking through how to incorporate these tools as a clinician, I actually think you're doing a disservice to your patients. I'll give an example.

I do my ongoing licensing exams, and I'm required to do the questions. There was a question about a patient who was sixty-some years old with a pack-year smoking history of X number of years: what should you do? And one of the tests I should have ordered was a one-time ultrasound of the abdomen, because there is a criterion, between ages 65 and 75 with a certain pack-year smoking history, that you should do this test once. How in the world are you supposed to keep up with this stuff as a clinician? That's a one-time test, and I could have easily missed it. The machine was not going to miss it.

So I think, at least for your screening purposes, all the healthcare screening stuff: let the machine run with it, folks, and then let the humanistic element of your clinician double-check and reinforce to get you to actually follow through.
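[A small illustration of the kind of deterministic screening rule a machine can carry so the clinician doesn't have to hold it in memory, mirroring the one-time abdominal ultrasound criterion described above. The age band and smoking criterion are paraphrased for illustration and are not a restatement of the official guideline text.]

```python
# Small illustration: a deterministic screening rule the machine can carry so
# the clinician doesn't have to. Mirrors the one-time abdominal-ultrasound
# criterion described above; the parameters are illustrative, not a
# restatement of the official guideline.
from dataclasses import dataclass


@dataclass
class PatientRecord:
    age: int
    ever_smoked: bool
    had_aaa_ultrasound: bool          # has the one-time screen already been documented?


def one_time_abdominal_ultrasound_due(p: PatientRecord) -> bool:
    """Flag a one-time screening ultrasound for patients in the described window."""
    return 65 <= p.age <= 75 and p.ever_smoked and not p.had_aaa_ultrasound


if one_time_abdominal_ultrasound_due(PatientRecord(age=68, ever_smoked=True, had_aaa_ultrasound=False)):
    print("Reminder: one-time screening abdominal ultrasound has not been documented.")
```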

I think that is the power of this. And again, I think I'm preaching to the choir on this one. Let's start embracing the tools, actively, not blindly. It is the right thing to do for our patients.

To end the show: you have an announcement. You're going to be moving on from Virtua, and I want to hear about the next step. I'm excited for you.

But tell us the details.

What, what are your plans?

What are you thinking about next?

Yeah, I don't know if I told you all the story about this, but when I was training in internal medicine, I stayed on an extra year as a chief resident at the George Washington Hospital.

And the rule of thumb is if you were a chief resident, you got to pick your fellowship.

And so I picked gastroenterology. I was going to do GI, and then I came to the realization I didn't like abdominal pain. Deciding not to become a GI doctor was probably the right career decision if you're not into abdominal pain, because, my goodness, I loved those scopes.

And that just got me into these four-year cycles of: what can I learn? Go deep on something, learn it for four years.

Um, and so I actually joined a startup.

We exited that and that was at another transition point.

A colleague at Virtua called me.

He said, what are you doing?

I said, I'm doing yoga.

I can't touch my toes.

I'm probably not meant to do this.

Mm-hmm.

And he said, well, come help me here at Virtua. And I thought I was going to be at Virtua for like one or two years. I just finished year 17. But I've had the opportunity to do like four different jobs here at Virtua, and what's been so crucial for me, and I'm so grateful to Virtua for this, is that everything I've learned at this health system has let me be an intrapreneur.

But the four-year cycle has kicked in again, and I'm already a year over, because four times four should be 16, so I'm probably a year past when I should be. I need to challenge myself to get uncomfortable again. I look at all the entrepreneurs who are out there and how uncomfortable they make themselves; it's time for me to make myself uncomfortable again.

Where can I challenge myself to learn something new, different such that I can help the entire ecosystem get better?

And I don't have a definitive plan.

Uh, I'm gonna do some advisory work, some fractional work, but it's been a awesome run here at a health system that has given me the ability to do some amazing things.

So I will continue to keep you all posted, um, and check in with the new ideas along the way.

But it's a special time to try something new.

So let's, let's do it now.

Well, listen, if you are a listener or viewer of Health:Further, you just heard it: TK is leaving Virtua after an incredible 17-year run, hasn't exactly figured out what's next, but is open to all sorts of possibilities.

I know our listenership, uh, is full of those types of opportunities.

So, TK, can they just reach out to you on LinkedIn if they've got a bright idea and want to run it by you?

A hundred percent, as long as they've got thick skin.

So, well, I'm excited for you.

It goes both ways. And gentlemen, I just want to say thank you.

The amount of work you put into the podcast, the weekly podcast, is just remarkable.

I mean, I have trouble just keeping up with all the volume that you cover, let alone for you to produce it all.

So thank you for the work that you do to keep your listenership up to date.

Um, what a remarkable time we're in.

Yeah, I'm excited to hear what you do. It is an exciting time, lots to do, lots to work on. So I'll be excited to see where you land. And wherever you land,

Thank you much.

You've got to come back and tell us about it.

Yeah, yeah, exactly.

You're not off the hook just because you're not at Virtua. Just so we're clear.

Uh, you know, we, we'll, we'll, we'll see you in no less than six months, my friend.

Yeah.

Alright.

Very good.

Thank you all.

Alright, have a good one, buddy.