Margin of Thought with Priten

In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.
Key Takeaways:
  • AI tools can accelerate diagnostic reasoning—but training still requires struggle. Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.
  • Transparency about surveillance matters. From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.
  • Institutions are starting to take AI governance seriously. Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.
  • Career concerns about AI replacement are real. For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. It's not paranoia—it's a practical factor in career planning.
  • Discovery often happens peer-to-peer. Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.

John “Jack” Kincaid is a trainee in the Harvard/MIT MD-PhD Program at Harvard Medical School interested in the intersection of diet and disease. Jack received B.A. (Nutritional Biochemistry and Metabolism) and M.S. (Nutrition) degrees from Case Western Reserve University in 2021, where he helped investigate the impact of obesity and obesogenic diet on cancer development in the laboratory of Nathan Berger at Case Comprehensive Cancer Center. Concomitantly, Jack worked with a variety of food access and health literacy groups including CWRU Food Recovery Network and Cooking Matters STL. After leaving CWRU, Jack relocated to the UK to train as a postgraduate in the group of Sir Stephen O’Rahilly at the University of Cambridge Institute of Metabolic Science, studying the neuroendocrine regulation of human appetitive behavior and body weight. As a physician scientist, Jack hopes to leverage basic science and clinical medicine to help address the growing burden of diet-associated illnesses as well as develop safe, effective treatments for metabolic disease.

Creators and Guests

Host
Priten Soundar-Shah
ED of PedagogyFutures / Founder of Academy 4 Social Civics / CTO at ThinkerAnalytix
Guest
Jack Kincaid
Harvard Medical Student

What is Margin of Thought with Priten?

Margin of Thought is a podcast about the questions we don’t always make time for but should.

Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.

Each season centers on a key tension in modern life that affects how we raise and educate our children.

Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.

[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. We spend a lot of time asking whether students should use AI, but what happens when that student is a doctor in training and the stakes are someone's health? My guest today is Jack Kincaid, a third year medical student at Harvard Medical School, currently in the thick of clinical clerkships at Mass Gen Hospital. He came up through a generation where Canvas quietly tracked everything, survived the overnight pivot to fully digital learning during COVID, and is now navigating a clinical world where large language models can suggest a diagnosis before a senior resident can. We're going to talk about Harvard's AI Sandbox, what it means to develop as a physician when powerful tools are always one tab away, and why he's genuinely worried about what AI might do to the careers of the doctors who come after him.

[00:01:08] Let's begin.

Jack: I am a third year medical student at Harvard Medical School doing my clerkships right now. I've moved from classroom learning at Harvard Medical School to clinical application as part of various services and teams at Massachusetts General Hospital.

Priten: Before we dive into your other current roles, can you think back to the very first time you as a student ever interacted with, quote unquote, an ed tech tool, whatever that means to you?

Jack: I'd say the use of Canvas as an educational platform. That started in college, where it was mainly used as an assignment database and grade book. It wasn't universally used, though.

[00:02:02] In fact, a number of my classes at university at that time weren't documented on Canvas. Then COVID hit, which necessitated a pretty large transition to digital platforms. All classes became integrated with Canvas at that time at my university.

Priten: At that time, do you remember having a particular reaction to it? Were you feeling neutral about it? Were you excited about it? Was it just an annoying new thing you got to learn?

Jack: I think anytime a new technological tool is introduced, I have a lot of hesitance about whether the issues have been ironed out. In the case of Canvas, I remember particularly when COVID started, all of my quizzes and tests started being administered through Canvas. There were a lot of questions I had with peers about whether answers would be documented properly, or what the functionality was for monitoring our computer activity while taking a quiz.

[00:03:06] What if I accidentally clicked off the tab? Would I be reprimanded for that? Is my teacher able to perceive every small motion? Not necessarily during test taking, but is my teacher able to see that I'm taking 30 hours on a homework assignment versus five minutes and just speeding through? I wasn't really able to ascertain the capability of the platform, so I had a lot of unease about it.

Priten: That's an interesting perspective from the student side—the black box effect. You know there's some sort of monitoring going on, you're not exactly sure what is and isn't being tracked. Did you ever notice an instructor who provided you enough information that made you feel at ease? Were they ever transparent about what they could and couldn't see?

Jack: No, they never were transparent. But what was eye-opening was in my senior year of college.

[00:04:03] I became a TA for a graduate level nutrition course where I was an administrator of the course page. You really can see everything. There are log books where you can see every individual motion on that site as one of the course administrators. There were certain instances where I could see students violating course guidelines on Canvas without them realizing I was able to. That was eye-opening.

Priten: Having seen both sides, do you think we should be more transparent with students about what is being monitored and controlled? Or do you think there's some advantage to a little bit of secrecy about what exactly can and cannot be seen?

Jack: I'm a personal fan of transparency in all things. I think it's really appropriate. When there is a lack of transparency, I can definitely see—and I've noticed this at all levels of my training: in college, in my postgraduate work at Cambridge, and now in medical school—when there is ambiguity and students are aware of it, it does almost keep us in line. For example, HMS has a mandatory attendance policy, and when students are aware that sign-in will be via QR code that gets passed around, actual attendance will drop. But the technological artifact will show a hundred percent because everybody has a way to sign in.

[00:05:08]

Priten: What other uses of technology have you seen during your medical education? Medical education is interesting because it's high stakes, and I'd love to hear what role technology has played or has started playing, especially in the last few years.

Jack: Yeah, I was reflecting on this while filling out the interview survey you sent around. Something I've just been introduced to in the past few months that has made a really huge impact on my practice of medicine, but also something I've been simultaneously hesitant about, is the platform Open Evidence.

[00:06:09] It's a large language model being used by clinicians. It's able to spit out, from my experience, very reliable recommendations with respect to diagnosis and management of patients—of course, within the scope of not violating HIPAA. You can give scenarios and it will provide reliable suggestions by distilling evidence available online, as well as very helpfully provide useful citations that support those suggestions. I found that really helpful. It's incredibly powerful as a tool. My main hesitation at the trainee level is how this will impact my critical thinking development as training goes on. It could be very easy for me to reflex to entering scenarios I encounter directly into Open Evidence and have it pump out an answer.

[00:07:17] This is not a paradigm that's new by any means given the availability of LLMs.

Priten: What are those worries, especially for training years, of introducing a platform like that? Because if you're in the clinical setting between patients and you're quickly using it to catch up on the latest research, there might be utility there. I'm curious to hear more about how you view it during the training process.

Jack: I think particularly when I'm entering new services as a medical student—I start clerkships every one to three months with a completely new team, new organ system, new set of patients—it's really helpful for subject matter I'm completely unfamiliar with or for particularly complex patients with potentially rare diagnoses I don't understand. At Massachusetts General Hospital, many patients are medically quite complex. You reach a point in your diagnostic workup that is beyond what you see in school.

[00:08:11] I feel like I know the first one to five initial steps and, using reasoning, I can piece things together. But at some point, if I'm on neurology and there are many patients where you send a million labs, perform a million different imaging and other diagnostic procedures, and all of them come back negative—perhaps this is some sort of seronegative autoimmune encephalitis, for example, that has a robust presentation but is a very quiet disease that evades a lot of diagnostic testing. At some point you run out of options and knowledge. As a trainee, how do I know? I don't know the probably thousands of autoimmune markers I could send off. So in that scenario it's really helpful.

[00:09:01] But in the converse, particularly as an early stage trainee, I think it's really important to strengthen as much as possible my fluid intelligence, flexibility, and critical thinking. Relying on tools like these can be a dangerous thing because the clinical encounter is so limited and so person-to-person. You really should hopefully be using as little brain power as possible initially and using as many heuristics as you can so that you can dedicate as much time as possible to building a bond and connection with patients in those 15 to 30 minutes you have in the room.

Priten: Tell me about what conversations are happening with you all. Are these considerations being brought up directly by supervisors and faculty? Are you all having these conversations peer to peer about what role technology ought to play in your training?

Jack: I wouldn't say so. My time in the hospital now is pretty hectic. From the moment I walk in at 6:30 in the morning to when I leave at 5 to 7:00 PM at night, I am focused on the patients on the floor. My interaction with the platform is purely for its use and it's pretty seldom.

[00:10:06]

Priten: Aside from research usage, when you think about earlier in your education, were tools introduced during your coursework that helped in particular, or that you felt were more of a distraction than helpful?

Jack: Yeah, I'm a bit anti-ChatGPT. I think it can be—again, all of these are very scenario dependent, even between LLMs. I think there are individual use cases where you can maximize the platform's potential based on its profile and capability. In the case of ChatGPT, I think it's so pervasive that it's hard not to feel incentivized as a trainee to use it. Even in the medical school context, when I was in classroom learning, you would have probably three to five hours of work outside the classroom to do each night. It would be very hard not to hear from peers who had elected to use an LLM to help them work through assignments and do it in a very small fraction of the time. There's something to say about the influence of the perspectives around you.

[00:11:02]

Priten: In terms of official policies and guidance from instructors and faculty, was any of that updated in time?

Jack: Yeah, thoroughly. We have very concrete recommendations. I can speak most about HMS. I think these platforms were less widely adopted during my time as an undergrad. As a postgrad at Cambridge, I really didn't hear much about them or think about them. But now we are consistently, for every course, recommended not to use ChatGPT or an LLM to complete coursework. There are strict instructions never to input patient information or case information, even in the case of a simulated patient, into these LLMs. HMS itself has created an AI Sandbox, which contains I think five to seven LLMs. It's a completely private and secure way for trainees at HMS to use LLMs without the data being transmitted elsewhere.

[00:13:08] From that perspective, it's reassuring at least as a student to know that my institution is thinking so heavily about them, and particularly where there is concern for patient privacy, to be taking it so seriously. I really do enjoy that.

Priten: Was that made available as an option if you wanted to use an LLM, or were you told these are some instances in which this might be helpful to use? That's pretty remarkable that they've set up that infrastructure.

Jack: Yeah, AI Sandbox. I can pull it up. It's a pilot program—a secure tool that will enable users to choose any of the latest and most fully featured large language models. It features a level of data security that allows for copyrighted curriculum material and personal student details to be safely entered without such information being made available to the AI companies. However, it is not HIPAA compliant, so no patient information should be entered. It's just a way to keep everything internal.

[00:14:00]

Priten: You mentioned that you also play a mentorship role for college students. Tell me a little bit about those conversations. Have you had the opportunity to talk to them about their AI usage? Have any of them come to you with concerns or gotten in trouble?

Jack: Not within students under my direct purview, but within my residential community, I've been made aware of situations of AI use resulting in plagiarism cases with the Academic Integrity Board at Harvard. They are pretty rare, which is refreshing. I think students are very hesitant and aware of the risks and potential consequences of plagiarism in general, but also within the context of AI use. On a more positive note, there are a lot of really helpful use cases—like automated note taking services and ways of distilling large amounts of data.

[00:15:05] I work with a ton of pre-medical students and while we haven't necessarily implemented the technology yet, I think this year we're really interested in using these tools to record our meetings and better capture all relevant information that could be impactful for students' applications to medical school—making sure we're capturing and doing justice to all the hard work they've done to assemble those applications.

Priten: Is there anything about the tools, both in the education context or even in the medicine context, that's exciting you? Something you're hoping will be productive towards your training or future career?

Jack: My main concern is developing as an individual and not sacrificing the development of critical thinking through using AI in the training context. What's really exciting is the ability to amalgamate large bits of information. Within the clinical context, that's so powerful. For example, it's very common for patients to almost word vomit—that sounds super negative but it's completely natural. I definitely do it as well when I see a doctor; I just share all the information I possibly can. Some is probably helpful, some probably not. To have tools that can parse through that information and collect it—hopefully in a safe way for implementation—would be really useful.

[00:16:03] I'm sure you've heard of instances where AI has been used as a diagnostic tool to diagnose rare genetic conditions. We're all human. It's very hard to have complete universal knowledge of every condition in the book, particularly rare syndromes that are a weird constellation of symptoms that don't necessarily fit together into one system. Having AI be able to appraise whether or not—I don't know, if I had cataracts in my right leg, was orange, and my left pinky had a wart on it, and it turns out oh my God, I have gene XYB mutation. That's insane and really impactful for patient lives and outcomes, and from a financial perspective would mitigate having to use those shotgun diagnostic approaches I alluded to earlier, which are costing tens of thousands of dollars per patient.

[00:17:03]

Priten: Would you want more formal training in using those tools for things like that? Or do you think most of this will be intuitive—things that you and your peers would grasp more organically?

Jack: Well, based on the way the question's phrased, it's hard not to desire the second option where it's intuitive and easy to use. I think it comes down to a cost benefit analysis. I feel like I've become quite utilitarian in that way—my time is so limited that if these LLMs do necessitate training but are incredibly impactful clinical decision making tools, then yeah, I would absolutely be fine with undergoing that training.

[00:18:03]

Priten: Anything you're really afraid about or scared about when you think about the next five years, in the context of medical training in particular?

Jack: Yeah, as someone deciding on what medical specialty to pursue, what residency to apply to, and what space to make a career in the long term—particularly in radiation oncology, which is an imaging heavy field similar to radiology—I think there is concern about AI replacement and scope creep.

[00:19:03] I don't know if I actually used that correctly; I think I just threw out a buzzword that maybe sounds cool. But it's definitely a concern and consideration every time I talk to a higher level trainee or attending in that space. That's always a question I ask when trying to evaluate whether the space is right for me to apply to. Having my career replaced by a robot would absolutely suck.

Priten: Yeah, reasonably so.

Jack: That is a major concern. I think that's a concern shared by almost every one of my colleagues at Harvard Med.

Priten: Is there anything else that you would love to share that you think is related?

Jack: Honestly, talking about Open Evidence was my main goal. I think that's a really fascinating new area. I don't know if I found out about it later than others, but the way I found out about it was in a simulation. Every week the pediatrics clerkship does these team sims where we work with a dummy—not a dumb person, a literal dummy that simulates a patient. We're in a team of six people, all my classmates. We're actually encouraged to use our devices just as you're allowed to do in actual clinical practice. I looked over and one of my friends was using Open Evidence. It had spit out a really helpful recommendation for the management of a patient with suspected croup, which is a pediatric viral infection. I was like, oh, what is that? I want to use it. Then I started using it in the sims and those sims became way easier.

[00:20:05] I'm excited to see where that tool will go.

Priten: Very cool. Yeah, I had not heard about it. I just forwarded it to my wife and my mom, who is also a doctor. I'm curious to hear if they've already played around with it, but I'm just quickly browsing through this. Sometimes these specialized tools are just a wrapper on the main LLM—they just add a little stethoscope and claim it's for medicine.

[00:21:06]

Jack: Yes.

Priten: It looks like they've done a lot more than that and clearly have partnerships with JAMA.

Jack: I think it's really wonderful that every time you put a prompt into the platform, there's a little statement from the New England Journal or JAMA. I think the platform might also be exclusive to healthcare workers and trainees. I had to verify, I believe.

Priten: It does look like that's the case. I'll have to use one of their accounts to play around with it a little bit more because this looks really cool. Awesome. Well, thank you so much. Take care.

Jack: Take care.

Priten: Bye-bye. I really appreciate Jack for speaking so candidly about something that most medical trainees are navigating without a roadmap. What struck me most was his insistence that the clinical encounter is fundamentally relational. That the goal of all of this pattern recognition and diagnostic reasoning is to free up enough cognitive space to actually be present with the patient in a 15 minute window. That framing doesn't make AI the enemy of medicine, but it does make thoughtfulness about how and when we reach for these tools an ethical imperative, not just a pedagogical one.

[00:22:06] Keep listening as we continue exploring the ethics of education technology. And don't forget to pre-order my upcoming book, Ethical Ed Tech at ethicaledtech.org. Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.