Margin of Thought is a podcast about the questions we don’t always make time for but should.
Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.
Each season centers on a key tension in modern life that affects how we raise and educate our children.
Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.
Episode 20 - Anna Zendell: Is AI Literacy the New Professional Credential?
===
[00:00:00]
Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming.
What happens when the classroom is preparing students for careers where AI literacy isn't optional? Today's guest is Anna Zendell, a social worker turned educator who now oversees healthcare management, human services, and wellness programs at Bay Path University. She's currently building an AI-enhanced curriculum from the ground up as a live pilot that launched this fall.
We're gonna talk about what it really takes to retool a curriculum for AI, why adult learners in healthcare face a uniquely high-stakes version of this challenge, and how one educator is trying to teach students to think critically alongside a technology that doesn't always get it right. This is about what responsible AI [00:01:00] integration looks like when the stakes are a patient's well-being.
Let's begin.
Anna: It's actually a winding journey. I never had any aspirations to be a teacher initially. I became a social worker. I started out more in the clinical realm, and then I moved to the macro realm. I ended up settling on community social work because as I talked with people, I quickly found barriers they were encountering that they had very limited influence over, especially people from marginalized backgrounds. I gravitated toward community social work and eventually ended up working with a major research university, the University at Albany, doing community-based research and interventions.
I fell in love with that absolutely and utterly. I went for my doctorate at the University at Albany in social welfare. [00:02:00] Then I had the opportunity to do a teaching assistantship. It was online and asynchronous. I was really nervous about it. But as I embarked on teaching, I found myself falling in love with the research process, with the challenge of meeting students where they were. It's much like social work—I ask students what they want from their learning and what their goals are: career goals, personal goals.
It was a dementia course. I also asked questions around lived experience, and so I fell in love with meeting their needs and their learning goals. I did more and more teaching with the University at Albany, and then I got a full-time job. I still didn't have any aspirations to do more than adjuncting, but I was offered a full-time position with another institution that was online, [00:03:00] serving online adult learners: Excelsior University. I was there for 12 years developing programs, teaching, and getting to know students and faculty, and I quickly found that I love administration. My passion is teaching and being with the students, and that guides me in all of my administration work.
As time went on, I was offered this opportunity at Bay Path University where I am now to work with a healthcare management program. I was really intrigued. I went there working with adult learners with a wide array of faculty from around the country. I also took on advising there and absolutely loved it. I've been with Bay Path for nearly two and a half years now, [00:04:00] and I oversee all of our healthcare management administration programs, our human services programs, and our wellness and health promotion programs.
That was my journey, and I wouldn't trade it.
Priten: Yeah, that sounds amazing. I find that folks with non-traditional paths into teaching are very interesting. You get to bring a different perspective into the education space, which is great. Before we talk about your current experience and conversations with your students, I'd love to hear about your earliest memory of an ed tech tool as a student yourself. Do you remember an instance of having a very strong reaction, either positive or negative, to the use of technology by a teacher?
Anna: Statistics was my first big foray into technology. It was not easy. It was "here, you buy your software, you figure it out, and good luck to you." It was a doctoral [00:05:00] program, so I understand there's an element where you do need to figure it out for yourself.
It was really a challenge. I ended up finding somebody in my cohort who helped me, who taught me how to do it. That was great. By then I had already started to teach, and so I had it in mind that I would always make sure that no matter what level a student was at, even at a doctoral level, they had the tools they needed and someone to help them learn.
Priten: That seems like an important part of this. Even as we see how AI is being rolled out at universities across the country, a lot of it is "here's access to it." Students are still left to figure out how it fits into their learning process individually. What did the pandemic look like for you? Were you still teaching at Excelsior?
Anna: Yes, I was. Because everybody was online in certain ways, that didn't shift. What shifted was the students' ability to attend to their classes, to be there consistently in their minds and at the keyboard. I've been in healthcare for most of my career, and it hit my students very hard because a lot of them were on the front lines, or they were mid-level managers right there on the front lines. It was extremely challenging. That was when Zoom was really coming into its prime. There were a lot of hiccups with Zoom, and I was trying to have more Zoom meetings with my students to support them better. They were talking and sharing the distress they had, the losses that were accruing—people dying on their watches, but also the losses of time with their [00:07:00] families, especially children and vulnerable elders. That piece changed dramatically. The affective needs, the emotional needs, were acute. I gave them grace on due dates and allowed them to redo things if they needed to. I actually stripped out some assignments and scaffolded them differently, knowing these were extraordinary times. I wanted them to learn what they needed to learn without stressing them with the way things were pre-pandemic. I shifted a fair amount of the curriculum in the courses I taught and the courses I oversaw, so they could have the time they needed while still learning what they needed to know to be professionals—just a different way to get there.
Priten: I'm curious because the online education piece wasn't new, and you're talking about how differently your students [00:08:00] showed up because of the other factors changing all of our lives, especially your students working in healthcare.
What role did the technology play? Did it play a different role than before the pandemic? You talked about those one-on-one meetings with your students. Did you notice that you were thankful for the technology, or did you wish it could do something else?
Anna: That's a great question. One thing I noticed was that my students were much more eager to use voice technology. I would give them a choice between paper or presentation, and they were more often choosing presentation. I think there was a craving for connection. By and large, we were all very grateful for the technology, hiccups and all. Bandwidth was a big issue for a lot of students. Zoom was a nightmare initially because it got so quirky. Because we were [00:09:00] already online, I was fortunate that a lot of the students I had in the class I knew already to some extent. There was already a bond there. I would say we were pretty grateful for it.
Priten: I'm sure a lot of us share that sentiment. When we think about the last few years in education, that was big crisis number one, and big crisis number two seems to be AI. I'm curious how that's shown up in your context.
Anna: This is probably the first technology I've experienced in teaching and overseeing programs where there is such a conflict between student perception and understanding, faculty perception and understanding, and administration's perception and understanding. We're working through it as an administration. Like a lot of universities, we've been a little slow and methodical until recently. Some students are using it pretty regularly, and there are tells. Some faculty are putting up notices, adding rubric rows—"use of AI will be an automatic zero." [00:10:00]
I do see the tone changing a little bit over this past year. I think it's become so endemic in our society that everybody's using it, including faculty. We have faculty who are really intrigued and interested in seeing what it can do, and then other faculty who are afraid. They really want students to learn, and they're afraid that because of all the stressors adult learners face, they'll take the more efficient road. Some of them are doing it.
Over the last three months we started working on a pilot with an AI vendor, and that's been really fascinating. I've been charged to take a program and convert it completely into an AI-enhanced one. I've been curious about AI for a few years, playing with the free versions, learning how to write prompts, and coming to understand my concerns about it—the hallucinations, [00:11:00] the rabbit holes that, from a curricular perspective, you wouldn't want AI to take students down, especially earlier students who might have been out of college for 15 or 20 years and are relearning how to do academic thinking. They may have been using AI quite successfully in their lives to plan vacations or plan menus. I planned some really awesome menus with ChatGPT. It's very useful to me in many ways. Working on this project has given me an opportunity to test out prompting skills and adapt the curriculum. One of the things I've learned is that curriculum as it is doesn't work really well with AI.
You have to recraft it and retool it. When we write learning outcomes, for example, we're all trained to use Bloom's Taxonomy, which aligns pretty well with career readiness, but it's too nebulous for AI to use. We have to break down each outcome into component baby steps—like you're guiding the AI in how to mentor the student.
[00:13:00] Essentially, what these courses will look like is we're going to use the AI tool. Right now it's a closed system, though we may open it. We're feeding the AI materials that it will then use to teach our students, then feeding it prompts and context to guide how it will mentor our students, then creating interactive activities based on that. So students engage on something like, what are the differences between Medicaid and Medicare, and then how do federal funds influence state Medicaid funds? We've been working a lot on that. I found a wonderful informatics student who has an interest in AI in healthcare. She's phenomenal—one of our best decisions. She's been going through the course and modules as a student as we build them. I'm doing the curricular part, and we have an instructional designer doing the design piece. Then our student comes behind and takes it as a student, and we talk through what the prompts are, where they're guiding her, [00:14:00] how we need to shift this, and how our students would receive it. I go through it as well from the faculty side to see where I see potential deviations, pitfalls, or dark holes it could take students down.
In higher ed, this is a refined tool, but it still feels early in its infancy. We're teaching it almost like you teach a child to talk. We're teaching it how to engage with students based on its skillset. That's a lot of what we're working on.
We have the choice of students submitting their work for AI grading, and we opted against that. We're taking a very incremental approach. After the first course, I might open up a low-stakes one, and then baby step our way into it. Trust is an issue for me with AI. As faculty and as a human, I need to come to trust it with my students.
Priten: [00:15:00] What are your students' reactions to this? This sounds like it's still early stages of the pilot, but are students expressing enthusiasm for something like this being integrated?
Anna: Interestingly, we haven't started yet. It starts in September. September 2nd. The students who are enrolling in our program do know. I've talked to some of my current students as well, and they run the same spectrum as a lot of us. Some are really uneasy with it, especially if they think about AI doing the grading. They really value the faculty being there and hearing the faculty's experience. The first thing I had to do was reassure them that the faculty will still be there. It's almost going to be one-third with the AI and two-thirds with the faculty and with their peers.
Priten: [00:16:00] That makes sense. That concern is valid, but that balance feels like it makes the most of the AI tools that are out there while still valuing those human relationships. You've talked about how you're all finding how to productively use AI tools within the students' learning journeys in a program with a clear career track afterward. Do your students wonder about what AI skills they'll need themselves to excel in their career paths? Obviously it's a very different conversation than when you were talking to middle schoolers and what role AI will play in their careers.
It's probably one that can play a different role with students who are adult learners in a professional education program. There's a clear connection to their next step. I'm curious if AI literacy and skillsets come up and how.
Anna: [00:17:00] Definitely, yes. Our students are asking for it. I felt badly that we're a little behind with that in a lot of the programs. A lot of it's that we've been trying to wrap our heads around everything happening with AI and figure out where to start.
That's part of why we partnered with a vendor who has a specialty in education. We felt this person could guide us on where to start. Our students see it as an imperative, something they need to know. Our employers do too. Every quarter or so I look at healthcare leadership roles posted on Indeed, Glassdoor, and LinkedIn, and AI literacy and AI familiarity rank pretty high among the skills. Along with that is not just passively using AI, but being able to use it as a tool, vet the AI, and do the critical thinking along with it. [00:18:00] I see an opportunity I'm trying to work into this project to teach them that.
Priten: I think there are obviously two ways to integrate AI within how we interact with our students. There's using it to reach pedagogical goals we had prior to the increase in AI, and then there's new skills we might want our students to learn so they can make the most of the technology later on. It sounds like you're finding a way to do both. It makes sense that your students are asking for that second component a lot. When you look at these job descriptions, are they making explicit mention of AI as a skillset or criteria?
Anna: Especially in informatics and clinical informatics, data analytics with AI interface, those sorts of things. I've seen it too in finance, healthcare finance. AI is coming into its own in the complicated world of healthcare [00:19:00] finance.
Priten: When we think about the use of AI in different industries, healthcare is definitely one of the most heavily regulated and bureaucratic, with patient-protection laws layered on top. Navigating that alone seems like a major question for your students—how to know when and how to use this technology in ways that adhere to those regulations.
Anna: Yeah, and to an extent, we're really waiting to see how the federal government rolls out their piece. As we're talking, there's been somewhat of a plan presented, and I'm sure that's going to be fine-tuned over time. We need to see what that will look like.
We also need to see what other rules and regulations will look like as so many are changing under our current administration. That's something we talk about a lot in most of our courses. We have this new technology and you can't ask it questions the same way. You can't go to ChatGPT. There are dedicated healthcare interfaces with AI. [00:20:00]
That's the first thing a lot of people surprisingly don't recognize. They go to what they know—ChatGPT or Gemini or something like that—but that's just not acceptable. Organizations are literally writing the rules as they go, much like we had to with social media.
Priten: When you think about the next five years of education, especially in your context, is there anything that scares you about where the technology might go?
Anna: I appreciate technology a lot and what it can do for us. My fear with certain technologies is that we become so dependent on it that we lose our capacity to critically think, to question the tools, to double check and verify. That really does scare me. I see the allure of the technologies and how they are built to keep our attention for extended periods of time [00:21:00] and draw us into more use. It's how we use them and how we teach people to use them, especially in an environment where it's global. It's not something that can be confined to US regulations very easily or confined to one organization. It's ubiquitous.
And with every technology we have the opportunity for hacking. You go back to privacy and safety and security. The more interfaces we have—and think about healthcare and the Internet of Medical Things, the IoMT—the more openings there are for hacking, stealing data, and worse. We need to get a handle on that [00:22:00] as a society, and in healthcare as a sector and industry, and come up with a cogent plan.
Priten: It seems like there's a lot of reacting that every industry has to do in figuring out what the boundaries are, what the regulations are. But I think you're right about the amorphous nature—it's not an entity, not one company working on one product—it's so diffuse that figuring out how to regulate and enforce those regulations will be very interesting. Obviously you're working on the positive aspects and benefits technology can provide. I wanted to ask what you're excited about in terms of where technology can take us.
Anna: I think about technology and AI for hospitals at home and critical care at home and for people with dementia. People being able to be in their homes and spend less time in acute care settings or institutional settings like nursing homes. There's so much with this technology, and if we use it right within five to 10 years, we could have a [00:23:00] society where more and more people are able to live in their homes and communities with the help of technology, humans, neighbors, and community.
Priten: There was a project I was reading about using generative AI to combat some of the loneliness crisis among the elderly. They talked about at-home nurses and how much of the role they fulfill is providing companionship. The project explored how generative AI could help navigate or at least provide a medium for that. I don't know exactly what the research will say or whether it's an effective way to combat it, but I'm interested to see how folks are thinking about the potential to solve some of the problems we all know we've been working on but are very hard to scale. Hopefully we can scale some of those. Is there anything else you'd want to share about your interaction with these technologies or any anecdotes you wish you'd brought up?
Anna: I don't think so. Do you feel like you have enough from me, from my perspective?
Priten: I do. I think the professional connection is an angle we haven't had much of yet [00:24:00] because we've been speaking so much to K-12 educators. I had a med student on last week, and that was really helpful in starting to think through how different it is to weigh what impact the technology will have in 10 years, and what students at that younger age level need to know now, against managing the reality that we still need to assess these students—make sure they're learning and not taking shortcuts—while they also need to know how to use the technology for their jobs in a year. I appreciated the perspective of you all piloting a program that's both figuring out how you'll teach better with it, incorporating it in ways that make learning more effective for students, and also helping them start to build intuition for when it's appropriate, when it's not, and what its limitations are. I'd love to hear more about how the pilot actually goes once it's underway. Maybe I'll shoot you an email sometime in the winter [00:25:00] and see if there are any initial findings.
Anna: Definitely, yeah. I think you also tapped into the piece about lifelong learning. We can't teach students everything they need to know about technology in our program. We also have a third prong to this: teaching them how to find the information they need going forward and how to keep growing. The technologies, the AI, are going to keep growing and getting bigger and more ubiquitous throughout all aspects of our life and careers. Teaching them how to do that is probably one of the most valuable services we can do in higher ed and in K-12 too.
Priten: Even when I think about the work we do with teachers, a lot of it ends up being—it's not useful for me to sit with you for an hour and teach you how to use ChatGPT. It really is about how we understand this technology so you can learn how to use whatever tool comes at you next year or two years from now. The pace is so fast that it comes down to the ability to acquire new skillsets and knowledge more than any preexisting skillset. It's daunting. I work in this space, I do this all day, and I get up [00:26:00] wondering, when did they announce this? What can it do now? There's just so much to consume constantly and keep up with, and that's my job 24/7. Whereas if you're juggling other things and keeping up with latest research in other industries and also have to keep up with AI stuff—it's very daunting.
Anna: It's a lot, but it's also fun.
Priten: It is. It is very exciting. It's hard sometimes to remember that when you're inundated with how concerned other folks are or seeing people struggling with it.
We know that AI is not going anywhere, and detection is a lost cause at this point. So it's about figuring out how we can make the most of it while helping our students build healthy habits with it rather than just trying to police it.
Thank you to Anna for doing the slow, careful work of actually building a curriculum. Anna reminded us that integrating AI is fundamentally a trust problem—trust between faculty and the tool, between students and the institution, and between all of us and a technology that is moving faster than any one curriculum can keep up with.
Her instinct to meet students where they [00:27:00] are is exactly the kind of human-centered thinking that should guide this work. Keep listening as we continue exploring the ethics of education technology, and pre-order my upcoming book on how to build that trust at ethicaledtech.org.
Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.