Welcome to The 1909, the podcast that takes an in-depth look at The State News’ biggest stories of the week, while bringing in new perspectives from the reporters who wrote them.
Wednesday, April 2, and this is The 1909, The State News' weekly podcast featuring our reporters talking about the news. I'm your host, Alex Walters. This week, Steven Arch's looming retirement looks a bit brighter. Why is the English professor excited? Because the twilight of his career has thrust him into an AI arms race that's completely changed the way he's expected to teach.
Alex:Our guest today is Emilio Perez de Barguen, a reporter here at The State News who's talked to a bunch of professors about this thing: being someone who's taught for a long time while these AI tools, generative AI, are coming out and complicating the way they conduct their classrooms. He's here to tell us what that means, what they miss, what they're doing, and whether they think the university is capable of handling this giant change. So, yeah, thanks for coming on the show.
Emilio:Absolutely. Thanks for having me, Alex.
Alex:So, first, tell me a little bit about that. You talked to all these professors for your story. Talk me through when they kind of reminisce about the good old days. What is the classroom, kind of academic relationship that they miss from before AI? What is it that they have, you know, nostalgia for?
Emilio:I think one thing that came up a lot with the professors I spoke to is that they miss an era when they reviewed their students' work and could holistically believe that, no matter if the paper was good or bad, it was their work. And the feedback they gave could sort of be transformative and help them along this learning process. And so what a lot of them are seeing now, and are really upset about, is the fact that not only are you worried about, you know, teaching your students as you always have, but now some students might not even be engaged at all, and you're just grading, you know, AI-generated work.
Alex:Yeah. And so is it that they have this extra role now as, like, a detective, trying to figure out not just how to help a student, but also, like, you know, whether the student actually did the work?
Emilio:Exactly. Professors have the same responsibilities as they've always had, but now even more so. And so with the introduction of all this AI technology, they're spending more time than ever just trying to differentiate AI work from human-written work. And it's an additional workload; they, frankly, would rather be spending their time on something else.
Alex:So it's not just, I guess, you know, in the intro I talked about it as, like, a change in teaching, but it sounds like the professors you talked to framed it less as a change and more as this whole new thing they actually have to do on top of what they were already doing.
Emilio:Exactly. I think with every iteration of the technology, it becomes a bit harder for professors to really differentiate these two types of work. Mhmm. And it just means more and more time is spent. And you know, they have certain tools to help them out, but a lot of it is also just, I mean, looking at words and trying to see what's human, what's robot.
Alex:Beyond just, like, you know, looking and trying to see if it's a robot, I mean, did anyone get any more specific with you? Like, you put a paper in front of one of these English professors and, you know, what do they do to try and figure out if it's AI? How do you make that distinction as a professor?
Emilio:Yeah. So there's two schools of philosophy, really, about it. There are some professors out there who claim that, you know, they can tell the difference between something written by a human and something written by AI. Steven Arch, who you alluded to at the beginning, disagrees with that notion entirely. He says that a lot of the professors who claim that sort of thing are deluding themselves, in a sense.
Alex:They believe that they can just look and feel something in their gut, like it's a subjective thing?
Emilio:There's this idea that, as another professor I spoke to said, you know, there are certain tells, right? So an AI-written work will use this large vocabulary, these incredibly large words, but not really get at anything at all. And that professor also brought up the fact that, you know, students, regardless of the quality of their paper, usually just have something interesting to say. And in her perception, AI usually just doesn't have much to say at all. Mhmm.
Alex:But what's Steven Arch's thinking? He's disputing the idea that you can tell like that?
Emilio:He ran a little bit of an experiment, actually, on this sort of subject. When AI was first coming out, you know, he presented several colleagues of his with five random samples of students' work. Some of them were written by those students, human-written, and others were generated entirely by AI. And what he's seen when he runs this experiment is that professors just can't reliably tell the difference.
Alex:So he's actually put this to his colleagues and sort of proved that, whatever they say about being able to tell, this technology might be more sophisticated than that. Right. What about... Is there an effort to match the technology with technology on the professors' side? You know, I remember in high school, I had to turn all my papers in to turnitin.com, which was one of these things that kind of scanned them. But that was more for, like, traditional plagiarism.
Alex:Is there like a technology that can evaluate if something is AI?
Emilio:There is some technology out there. I think the reality of it right now is that not a lot of people trust the reliability of that technology. You know, there are several tools out there. I think Turnitin, CruiseAll, Packback. These are all software tools that, you know, claim to be able to provide a sort of score that can kind of give professors an idea of how much of a paper is written by AI.
Emilio:Mhmm. But outside studies kind of disagree with what these companies are saying, and, you know, they're saying it's not reliable. You can't really tell when something is written by AI.
Alex:I see. So what are the professors doing then outside of, like, trying to detect if something is AI? Are there ways that they're trying to discourage students from using it in the first place or ways that are altering curriculum?
Emilio:There are some ways. You know, what a lot of professors have taken to doing is redesigning a lot of assignments. And this is particularly in the humanities, where a lot of students' assignments are writing-focused. So it'll take the form of, for example, if you're working on a long-term research project or an essay of sorts, instead of just turning in the one large assignment at the end, you know, there will be several checkpoints. Because forcing students to provide examples of their work in progress, at least in theory, reduces the chance that you can just turn in one large piece of AI writing and say, this is the finished product.
Alex:I see, since, like, you know, I could ask ChatGPT to write a research paper on whatever, but if every week I have to show, like, you know, these are some sources I read, this is, like, some outlining I'm doing, they think that might deter just sort of, like, the use of AI to write a paper. Right. What about, you know, did you talk to any professors who are embracing the technology, working it into the curriculum? I mean, is there any argument that, like, if this stuff is inevitable, we're supposed to just figure out
Emilio:how to use it? There are some. I spoke to the director of integrative Studies of the Arts and Humanities, or IH, at MSU, and he's mentioned that several of the professors that he knows have incorporated AI, whether it's allowing them to prompt the AI in very specific ways to generate responses based on hyper specific assignments. And you do see more of an abrasive AI, especially in STEM majors. I know it's a frequent occurrence in a lot of computer science courses where a professor will ask students to generate large swaths of code and then to correct that code.
Alex:So use AI and then kind of edit the AI's work. Right. I see. What about the professors you talked to who are just sort of opposed to it altogether? I guess, what is the argument for trying to reform college with more of this, like, detective work and, like, rooting it out so that students can't use this technology that's very present in the real world?
Alex:I mean, do they think we're losing by using it?
Emilio:I think there's two ways to think about it from these professors' perspectives. One is that their jobs remain the same, and it's teaching kids to think, teaching students to really, you know, learn to reason and to argue. And I think what a lot of them are concerned about is that if you allow a technology like artificial intelligence to create and write your arguments for you, you know, you're never forced to really grapple with the intricacies of forming your own argument and finding evidence for it and synthesizing all that in a clear and concise way. I remember one professor who likened it to using a calculator but not actually having any idea what the symbols on the buttons mean. You get the end result, but you don't really understand the logic behind it.
Alex:Yeah. What about the university? I mean, you've talked to professors who within their classrooms, in their purview as an instructor, are making changes and reacting to things. Is there any sort of like policy or directive from like the colleges or from like the central administration about what faculty are supposed to do?
Emilio:So this is an area where it really is the Wild West, in that MSU has no official guideline saying, this is how you should manage AI in the classroom. It provides a little bit of guidance. It has a webpage dedicated to potential resources and to informing professors on, if they choose to go down a certain path of incorporating AI, how to do so best. But there really is no single rule on whether professors should or should not use AI.
Alex:How do the professors you talk to feel about that? I mean, many faculty, you know, on certain issues feel strongly about academic freedom and whatnot, but I could also imagine an argument that, like, the university could do more.
Emilio:Most professors actually said that they enjoy the guidance. I think they recognize the fact that there's such a great variety in the types of disciplines you study at a university that AI really is a case-by-case thing. You'd never know in advance when you should or shouldn't be using it. It depends on the professor and the class and the material. And I think that, you know, they appreciate that MSU provides this guidance, and it's out there.
Emilio:But I know one professor who said that it's just another thing you have to go looking for in order to inform, you know, your instruction. So if anything, they just wish that there wasn't this expectation that, you know, here's this new technology, it's now your responsibility to figure out how to manage it.
Alex:Well, I mean, it's certainly, you know, a big added challenge for them, on top of what they're doing right now, dealing with this AI stuff, the same way that students are all figuring out, you know, what it means for them. Right. Especially people in fields like ours.
Emilio:Mhmm.
Alex:Well, thanks for coming on the show and writing this great story, which I assume was not created using generative AI. That's all we have for now. We're gonna be back next week with fresh reporting from the great minds here at The State News, not the computers. Until then, the story we discussed and plenty more are available at statenews.com. Thanks, Emilio, for coming on the show.
Alex:Thanks to our podcast coordinator, Taylor, for everything you hear. And most of all, thank you for listening. For The 1909, I'm Alex Walters.