Margin of Thought is a podcast about the questions we don’t always make time for but should.
Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students.
Each season centers on a key tension in modern life that affects how we raise and educate our children.
Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.
[00:00:05] Priten: Welcome to Margin of Thought, where we make space for the questions that matter. I'm your host, Priten, and together we'll explore questions that help us preserve what matters while navigating what's coming. Today I am speaking with Varun Gupta, who teaches Accounting and Economics at Wharton County Junior College in the Houston area. Varun is honest about his own journey with AI. He's using it extensively for lesson planning and assignment creation, but he is deeply concerned about what happens when students take the helicopter to the top of Mount Everest instead of making the climb themselves. We'll explore the tension between efficiency and learning, the challenge of maintaining academic integrity when you can't play gotcha with your students, and what it means to be a teacher who uses the very tools you're asking students to resist. Let's dive in.
Varun: I've been teaching at Wharton County Junior College here in the Sugar Land area since 2007.
[00:01:05] It started as part-time adjunct. Then I became full-time in 2013. I come from a family of educators. My late mother and father were both educators, but I never really planned to get into it myself. It just happened backwards. Since I don't have a PhD, I'm stuck at the junior college level, which is fine. I really enjoy it. We're a teaching institution, so all I have to do is teach. Once I got into it, I really found that I liked it. My professional journey as a teacher has been interesting because, at least at our institution—I can't speak for others—they don't expect much of us, and they don't give us many tools. There's a statement: to whom much is given, much is expected. Ours is kind of the opposite. They don't give us anything, so they don't expect anything. Just teach your classes, don't get us in trouble, and repeat.
[00:02:04] They really judge us on how good our paperwork is as opposed to what we do in the classroom. I could have done that for the rest of my teaching career, but over the past few years I've had this mindset that there's got to be a better way to teach. I don't know how to teach. I don't know anything about pedagogy and teaching technologies. So I went on my own journey to learn the science of teaching, what they call metacognition. I attended a lot of training seminars, and I've done some presenting myself, because I feel the best way to learn something is to teach it. That's what led me down the rabbit hole of AI. It's the new shiny object. I had just gone to a conference in May, early June.
It's something really big in the community college space. It's called NISOD. It's a University of Texas thing. Every single session, every breakout session was AI, AI, AI: how to harness it, how to leverage it. I've done presentations on that too. What I'm afraid of is I think it is a shiny object. It is a great tool. There's just a lot of uncertainty about what to do with it, how to do it right, whether it's actually useful. I don't know the answers. I use the analogy of the calculator and spell check. If you asked me a mental math problem with any complexity right now, I would panic. It would take me a while, and I could probably do it, but I don't use that part of my memory anymore. I have a calculator to my right, just out of sight here, and I have my phone, which has a calculator.
[00:04:00] I don't use those parts of my brain. I use GPS to go to the airport, even though I've been there hundreds of times. If I don't have my GPS, I'm not using those parts of my brain. This is what I worry about as a teacher and as a person: AI is giving me and the students the ability to get answers and content without any effort. It's taking a helicopter to the top of Mount Everest. I think there's something to be said for grinding and failing. That's my journey, and I'm trying to figure out how to use it responsibly and ethically.
Priten: The metaphor of taking the helicopter to the top of Everest is one I might steal from you.
Varun: Take it. I stole it from somebody.
Priten: So let's start with: as a teacher in your own workflow, what have you found useful, both in terms of prep work and things like that? You talked about trying to figure out how to teach, the science of teaching.
[00:05:05] Has AI already played a role in helping you get closer to understanding those things? Does it play a role in how you teach because of your understanding of those concepts, or do they feel disconnected right now?
Varun: With AI, what I've done in terms of prep work: if I need to take a problem I've been using for years and tweak it a little bit to give it a more local flavor or to make it comprehensive, I use it to create multiple versions of case studies to help them understand the economics context. It's helped me with lesson planning in terms of step by step by step. Since I'm all about metacognition, it helps me. Since I don't fully understand what metacognition is and all the different tools, it gives me a sequence. Start with a lecture.
[00:06:00] Have them read, have them recall. Here's a quiz, here's a way to interleave it. I let it do that heavy lifting, then I tweak it. I also use it to answer those troublesome emails from the kids. I can say, help me write this in a firm tone, just to save time. And I use it for quiz questions.
Priten: When you do use it, do you tell your students you're using it?
Varun: When they have an assignment, I say, listen, it's out there. I know you guys are using it. I don't need an AI detector to tell me that you're using AI to generate your answers in your term papers. I use it from time to time to create assignments. When I give them the assignment, I'll say, hey, by the way, if this was AI generated and you find an error, let me know. I don't announce it in the emails.
Priten: Tell me a little bit about how students are receiving that. When you tell them that this assignment was AI generated but you're expecting them to produce original work, are they able to grasp why it's different for you to use it than for them?
[00:07:04] Varun: That's a great question. I've thought about it, but not deeply. It's a bit of hypocrisy. It's almost like you telling your kids, okay, don't drink when you go to college because bad things happen. Did you drink when you were in college? Yeah, but—
Priten: Right.
Varun: When I've told them I'm using it, I'll say, hey, listen, I created this assignment in ChatGPT or whatever. If you find errors, let me know. When I ask them to write their papers and I discourage the use of AI, I say, listen, you can do that and get the answers, but you won't understand the thought process behind it. It's the journey, not the destination. That's what I try to impress on them because they're using it. I know they're using it. The stuff I have them do that's potentially AI-assisted is low stakes, and you can't really do anything because our college doesn't have a robust policy.
[00:08:11] They've left it up to each individual instructor. With AI, it's like, listen, I know you're using it. I can't prove it. I can't do anything about it. Sometimes it's obvious. They'll include the prompt in their answer. They're not even smart enough to remove the prompt. And sometimes they'll do it that way, and you can maybe work together on it. Is this any good? Is this content any good? Because they won't know if the content's any good. If I enter a prompt on econ, I'll understand if the answer's any good. For example, I was in California last week. I asked ChatGPT to plan out a public transit route from where I was to where I wanted to go. It printed out a nice itinerary. When I got to the ferry terminal and I'm waiting for the 9:10 AM ferry because that's what ChatGPT said, I'm looking at the ferry board.
[00:09:07] Ferry board says, no, there's no 9:10 ferry.
Priten: I think at face value there might be hypocrisy, but I think there's also a difference between what you're expecting from your students and why the journey is important for them. The journey might not be as important for you in creating the assignment as it is in them completing it. We do hear from some teachers who are able to have that conversation with students. Often it comes up when there's pushback from the students. When the teacher says, oh, I used this to create some initial feedback on writing you've given me, that starts to create pushback. It's like, why are you allowed to use it to grade if I'm not allowed to use it to write the essay? Those conversations are interesting just in terms of how students are thinking about their work and what they think about the role of their productive energy during classes or for schooling in general.
Varun: I feel like I've been on the journey for most of this stuff. I know how to do most of it. If I keep overusing it, I'll forget how to do it. I'm trying hard not to make it my default fallback.
[00:10:08] I try to grind through it on my own and then have it tweak it. But that's a slippery slope, and I'm slipping more into it—which is tragic.
Priten: You talked about GPS and the calculator. I think I share concerns about what that's shown already regarding what happens to our cognitive skills when we offload even a component of them. Do you have concerns specific to the subjects you teach about what that means for your field? If students pursuing careers in economics or accounting become over reliant on the technology, what's your worry for their wellbeing, their career trajectory?
Varun: That's a good question. I think there's a couple of layers. One is: the risk is that they become bots. They respond to feedback and put it in ChatGPT, get an answer. They don't know if the answer is any good or not, and they just send it up the chain.
[00:11:03] One concern is if you send garbage up the chain, that's going to look bad on you. Another concern is if everybody's using ChatGPT, then every layer of the chain won't know if it's garbage. We had one of our departments do a collective exam. Everybody submitted 10 questions, and only one person checked the questions and answers. One set of questions had most of the multiple choice responses wrong. Only because that one teacher checked did it get caught. Otherwise, it probably would've ended up in the lab manual, because whoever generated the questions used ChatGPT and didn't check. So at what point does somebody say, oh, this is garbage?
[00:12:01] Discipline-specific: if these kids choose econ and they don't know the underlying grind, the journey, the fundamentals, then they're just answering questions. There's no texture, no layering, and if the garbage keeps getting pushed up—
Priten: I think the idea that there are no checkpoints to make sure what the AI is generating is accurate, or not garbage, is a concern across different industries. You mentioned yearning for the good old days of teaching. I'd love to hear more about what that means. The use of technology in your classroom—you're obviously embracing it, figuring out ways you can use AI. Are you happy with that? Do you like the direction education is moving? Or would you want something different?
Varun: I think the way I learned all those years ago, I started college in 1982. We didn't have any options.
[00:13:02] You go to class face-to-face. No Zoom, no asynchronous option. Go to class face-to-face synchronously and listen to the teacher. He or she could talk about anything they wanted. We had a textbook, maybe a study guide, maybe some notes from a classmate, and we had to figure it out. I don't know that that was a very good model, especially with teachers who had credentials from all over the world, like my father, where nobody could understand much of what they said. You figured it out, and that was part of the journey. If that's the good old days, then fast forward to now where they have so much content—even before AI—they could go to YouTube, Khan Academy, so many places offering easy 24/7 content. It's like Facebook or Instagram. They can get it anytime they want. No appointment necessary. If the good old days are how I was, and the bad old days are where they are now with no urgency to come and be in a group, I would like something in the middle. I know for a fact that sage on the stage—me talking and you writing—isn't working. It's not working because they have options. They're easily distracted. There's a great book called The Anxious Generation by Jonathan Haidt and another one, The Sirens' Call by Chris Hayes. So many things compete for their attention, and in the classroom setting, the idea that they're going to sit there ramrod straight and listen to me just because I'm the teacher—that ain't happening. So I have to compete for their attention. How can I do that? I can't be more entertaining. Maybe I can make my activities robust enough with the help of AI that they want to engage with the activities and not think about their phones.
Priten: Yeah.
[00:15:00] I'm curious to hear more about how students show up in your classroom. You talk about their attention spans, which is well documented at this point across all education levels—students' attention spans have gone down dramatically since technology grew in their personal lives, especially over the last decade with short-form video and how they consume media outside the classroom. Do students show up differently to your classes now than they did even four or five years ago? Think about the first week with students and what you notice about them: pre-ChatGPT versus now. Are you noticing any difference, or are students arriving relatively similarly in terms of background capabilities?
Varun: That's a good question. I would answer it with a small modification. I haven't noticed anything because this AI—November 2022 is when it really came on huge.
[00:16:03] That's not even three years yet. I would say pre-COVID, post-COVID, there's definitely a difference. I wouldn't tie it necessarily to ChatGPT, but pre-COVID, post-COVID, what I've noticed with pre-COVID kids is if they missed class, they seemed more concerned about, hey, what did I miss? And trying to get the content. After COVID, they'll miss days at a time. They don't think twice. I think they got into the habit of not showing up during COVID with homeschooling, and then they could get the information anytime they wanted. We took it easy on them during COVID, so a lot of kids just passed. I did notice that huge difference. I haven't noticed anything specifically about their preparation coming to class that I thought was related to AI. On a practical basis, when I have term papers due, they all still turn them in at the last minute.
[00:17:00] But they don't seem as stressed about the term paper. Before ChatGPT, they had a lot more questions about the term paper during the semester: hey, what's the topic? Is this good? What format do you want? Is this good? They would bring me samples throughout the semester. Now very few students ask me any questions about the term paper because I think they know in their minds: listen, I'm not going to put a lot of effort into this. I'm just going to put it into ChatGPT, tweak it a little bit, then put it in a spinner or something that makes it not GPT. That's about the only thing I've noticed that I can point to: they don't seem as concerned about the term paper as they were before ChatGPT.
Priten: That's fascinating to me—noticing stress levels go down. Normally that's a good thing, but here it seems to be for the wrong reasons.
[00:18:01] What does that mean for how you approach the term paper? Do you think you're going to continue assigning it the same way you did pre-ChatGPT, or do you foresee it playing a different role going forward?
Varun: I have lowered the stakes because of how we're set up. We have to follow an administrative master syllabus that Wharton County Junior College has submitted to the state of Texas to meet their guidelines. We're state funded, so every ECON 101 and ECON 102 class has these grading components: 30% tests, 20% final. The state of Texas mandates that we assess communication skills, written or oral. We do that with the term paper in face-to-face classes. I don't anticipate that changing, other than I try to make the topics more personalized and more tied to what we did in class.
[00:19:04] In online classes, we don't do the term paper as heavily. We do more robust discussion between students, and we can assess their communication that way. So just trying to find some AI-resistant assignments. I don't think there's anything that's AI-proof.
Priten: Whatever seems AI-proof right now definitely won't remain that way for very long. Online classes in particular are interesting because a lot of in-person classes have shifted to figuring out how they can bring assessments into the class or maybe have students write the paper in front of the instructor. Your online classes—are they fully online?
Varun: Yes. I have face-to-face, which has an online component through the learning management system, but the instruction is all face-to-face and synchronous. The online classes are fully online and asynchronous.
[00:20:01] Priten: What does assessment look like in your asynchronous classes?
Varun: It's the same sort of setup. The difference is instead of a term paper, they do group discussions where they answer the prompt, reply to classmates, use netiquette, those kinds of things. They have a heavier homework component, which the LMS grades, and a heavier quiz component, which the LMS grades. They do three exams and a final, with questions from a pool that the system grades automatically. They have to take that with a proctor monitoring the camera, and we go back and watch them. That hasn't changed too much other than they're heavy on discussion. The face-to-face difference is their tests are in-person. Everybody gets the same test.
Priten: Has the quality of those posts changed as AI became more popular among students?
[00:21:02] Varun: Yeah, you see a lot fewer discussion replies and prompts and starter threads. Their grammar is spot on. You wouldn't know this is a 16, 17, 18-year-old kid used to texting with little i's, emojis, and abbreviations. The quality has definitely gone up, but they're very superficial, very wordy, and they repeat themselves over and over because there's a minimum word count. If you tell them you need 200 words, there's only so many words you need to make the point.
Priten: I would assume the dramatic difference in the polished nature of the writing is partially related to AI usage, especially in an asynchronous context. I'm curious to hear when you think about assessing your students, maybe even in a few years, if you notice that trend continue—the discussion posts looking very different than they did pre-ChatGPT.
[00:22:01] Did that change how you view the discussion forum? The background to this question is we're speaking to a lot of college students who are making it clear that the discussion forum is one of the first places they'll use ChatGPT. Usually there's a quick turnaround time. It's low stakes. They can put in what their classmates said and the reading and say, okay, can you make a response to this? Initially, the goal for the discussion forum was to provide a way for students to interact. Would you say it's no longer playing that role? And what role do you think it will play going forward?
Varun: Before I answer the specifics on the discussion part, I love what you said about space of learning. I'm wondering if education—the way it's set up with 16 weeks, semester, whatever, and then at the end you get a grade. Is the goal knowing, or is the goal learning? I think all of us as teachers would like you to learn critical thinking. You and I were students, presumably with some advanced degrees, and you were in school for quite a while.
[00:23:03] There were times when I thought, you know what? I just need to get this off my to-do list. I don't care about Texas government. Let me just move on. There's always going to be that challenge in education with the semester system. As long as we have to assess people and provide grades, there's going to be that tension between knowing and learning. There's a fascinating TED Talk from years ago about this: at what point in our lives will we be where we don't need to know anything? We just need to know how to know. I just need to know what to ask Google. I think that's where the kids are. I like what you said about their perspective—it's low stakes, it's the easy entry. So what happens with the discussion, trying to answer your question now, is they probably use the same intro post for all of them since we all do online in the same way and use the same techniques. There's a little bit of reusing previous content, putting it into ChatGPT.
[00:24:03] Then I can only speak for me. The idea that I'm going to read hundreds of discussion posts and thousands and thousands of words in a term paper—that's just not a reality. Do I have the time? Do I have the interest to read every single word? So you find ways to get this off your to-do list. But just like you said, discussions are low stakes. For me as a teacher, I haven't done it yet because I haven't figured out how. If I was going to let AI grade my stuff—not the tests and multiple choice objective stuff that's done by the learning management system, but if I were going to dip my foot into letting AI grade my stuff, that would be my entry point. Here's the rubric or guidelines for the discussion. This is what the student gave me. You grade it based on this rubric. That would be my entry level. You might call it an entry-level drug. And then where does it go from there? Once you get used to that, the bot's going to run your class. That's obviously an extreme.
[00:25:02] Priten: I want to end by thinking about what you think your role will be in your students' lives in the next three years, five years, ten years. I'm curious about how you see the technology changing. You talked about potentially using bots to do some of the grading, a bot-run classroom. If you had to justify to an admin person or student the role you play versus what an AI-run classroom would play, have you thought about what that argument might look like?
Varun: No, but I will say, if I split that question up in the way I understand it: what do I see my role as in the next few years? I've got three or four more years before I'm vested. I hope to teach another nine years.
[00:26:01] That'll put me at 70. My role as I see it in the next few years is: listen, this is out there. How can you make sure you don't get replaced by AI? How can you leverage it? That's the journey I'm on now. But you have to understand the grind. You have to do all the heavy lifting to get there. Don't take the shortcuts. Try to convince them not to take the shortcuts. It's like taking steroids to gain muscle—short term. Then you need to grind every day. That's the satisfaction. Then there's the conversation—this is going to really be true for face-to-face classes because in an online asynchronous class, I'm sitting here answering emails. That's all I do. It's no different than a bot. They put in, hey, Mr. Gupta, I missed the deadline. Can you extend it? And I'm like, okay, hey, thanks for your email. Sorry. No extensions. It's not different than a bot. In a face-to-face class is where I think the value I play is as a resource. Hey, listen, I have life experience.
[00:27:08] I struggled as an undergraduate. 40, 50% of my students are from our community. It was a real shock to me that not all of us are high achieving and smart. They struggle like many of us do. I said, hey, if I could be a resource, let's talk. Let me give you some ideas about a career path. I think that human empathy, that human touch—figuratively speaking, because otherwise we sit behind screens all day. I try to get them to have a human connection: showing up every day on time, professional work product, helping them do their best, and not wanting those shortcuts. It's seductive to take that steroid to get that short-term gain. Short-term gain, long-term pain. Short-term pain, long-term gain.
Priten: Yeah. That seems accurate. Thank you. Varun's willingness to acknowledge his own increasing reliance on AI, even as he worries about cognitive offloading, reflects the complexity we're all navigating.
[00:28:07] This is the core issue we face: grappling with these same tensions between technological possibility and pedagogical purpose. For more complex case studies that unpack these dilemmas, pre-order my book Ethical EdTech at ethicaledtech.org. Thanks for listening to Margin of Thought. If this episode gave you something to think about, subscribe, rate, and review us. Also, share it with someone who might be asking similar questions. You can find the show notes, transcripts, and my newsletter at priten.org. Until next time, keep making space for the questions that matter.