The Transform your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio. Join Dr. Rob McDole and Dr. Jared Pyles as they seek to inspire higher education faculty to adopt innovative teaching and learning practices.
Hello, Transform Your Teaching listeners. For the past few months, we have been running a survey and selecting a winner each month to receive a Transform Your Teaching mug. Our December winner is Sarah Wallace. Congratulations, Sarah. Look out for opportunities in the future to win or purchase some Transform Your Teaching merch.
Rob:And as always, thanks for listening.
David Slade:Our students are gonna go out and have to use these tools, be faced with them in their professional lives, in their community lives. We maybe owe it to them to show them how to use some of these tools and to use them as a critical thinker, how to bring that curiosity to it.
Narrator:This is the Transform Your Teaching Podcast. The Transform Your Teaching Podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio.
Ryan:Welcome to the Transform Your Teaching podcast. My name is Ryan Liming. In today's episode, Dr. Rob McDole speaks with Dr. David Slade about ethics and AI. Unfortunately, Jared was sick this episode, but please enjoy this conversation.
Rob:Dr. Slade, thank you for joining us. Give us a little bit of an intro as to how you got into higher ed and some of the things that you're dealing with right now in terms of AI and ethics.
David Slade:Thanks, Rob. I appreciate the invitation. It's good to have a conversation. I think all of this goes much better through some discourse and dialogue, as opposed to all of us staying in our corners trying to figure it out.
David Slade:I came into higher education as a faculty member. My background is in Spanish. I study Latin American literature, and I taught for a few years at Centre College up in Kentucky, had a great experience there, came to Berry in 2007 on faculty, and have been here ever since. I've spent about half my time fully on the faculty, very much engaged with our Center for Teaching Excellence, with international education, with community-engaged pedagogies. And I've spent about half of my time in administrative roles as well, including associate provost and now my third year as provost at Berry.
David Slade:And so, you know, last year, there were so many questions that came up at multiple faculty meetings, and academic integrity violation concerns, and all sorts of questions, that it just became really clear that not engaging the larger question of AI was not going to be an option. We really had to engage it. And so we looked for various ways to do that. Our primary plan this year, as part of the AAC&U Institute on AI learning and pedagogy, is to really structure conversation, not just for this year but to set us up for years to come: how to not just settle all the issues, because I don't think that's really realistic, but how to set ourselves up to understand what's at stake, to engage with one another in an ongoing manner around the issues, and to try to do it well. And, you know, we certainly claim no expertise in being a voice of ethical clarity when it comes to AI, but I'm really proud of the work that the team here is doing in understanding what's at stake more and more.
Rob:Alright. Well, before we get to the work that your team and your faculty have been doing, I was wondering if maybe you could start us off with this: have you come to a definition of ethical usage of generative AI?
David Slade:No, not broadly speaking. What we have is a policy, developed by our information technology committee along with representatives from across the college who serve on that committee, that has to do with how college data and protected data are used and what tools can be used to do that. And that is a very technical answer to a broader question. But I do think that the proper management of data and content is part of what is at stake with ethical use. So we do not have a broad-based statement that the college has declared about ethical use, but rather have tried to address up front, for faculty and staff alike, questions like: can we use FERPA-protected data?
David Slade:How do we go about using AI ourselves in a way where we can have at least a baseline confidence that the kind of information we're working with is being safeguarded?
Rob:Yeah. Well, I think I recall a conversation that you and I and Jared had as we were talking about bringing you on the podcast, and you had some ideas about ethical usage. I was curious, would you mind sharing some of those things that you have uncovered, or found, or think might be areas that we need to explore a little bit more in terms of defining those ethical boundaries?
David Slade:Sure. And for us, I think what we've spent a lot of our effort on is being sure that we understand, and the phrase we've been using is, what's at stake ethically. I think defining exact practice is hard to do without understanding what's at stake. And so I think we could organize it into a few areas. One is just proper management of data and content.
David Slade:If I have an article that I don't actually own the rights to, is it okay for me to just upload it to an AI machine out there and let it do its thing, when I may be giving away content that's not even mine to give? So, asking broad questions about the data and content that we share out to different tools. Another area is kind of an obvious one: attribution. And, you know, it's funny, we're still at a point where, as I engage students around the use of AI, I still have to do some convincing that I'm not trying to catch them in a trap of cheating.
David Slade:It's like, no, no, no. That's not what I'm talking about. But using AI to represent something as written by me when I in fact have not written it, that's part of what's at stake. In some contexts, if I'm drafting a memo to explain an event that's happening, maybe that's okay. If I'm representing something that is policy for the provost's office, and I don't fully own it but represent it as my thoughts, that's another area where something's at stake.
David Slade:Another area that we talked about, and I think this is part of what we talked about a little earlier, Rob, was to be careful of how we use AI to make important decisions. One of our team members is a computer scientist. His name is Xiaoming Yeh, and he is an expert; this is what his research is in. He studies and builds AI applications.
David Slade:In his work, he has challenged us to think about what the field calls XAI, or explainable AI, and he has really challenged us as a group to consider this as one of the primary ethical considerations. The idea is this: if we don't understand how an AI tool functions, and almost all of the large models that are available to us as users function in this black-box way, where there's not really anybody who understands all of the limitations and biases that might be at play, then we should be very wary of using those tools to make consequential decisions about students' acceptance into programs, about financial aid, about decisions that have real bearing on people's lives. And this is beyond education; this is a larger concern as well. We need to take great care not to use tools whose limitations we don't fully understand to make these really big, consequential decisions.
David Slade:And then I think the last category is just thinking about equitable access. As we think about maybe using this in the classroom in different ways, we need to be aware that not all students have equal access to the same level of sophisticated tools. Some cost more, and students who are able to pay for the better tools are sometimes able to use them in different ways. So, just being mindful of that. Those are four broad categories, and there are others.
Rob:I can definitely resonate with what you're talking about in terms of being able to trace. I think that's the one that hit me the most: can we trace how it came up with these answers? And it seems to me that validation of truth comes into play when you start talking about ethics. Right? Because it's based on value, and it's also based on what you can discern.
Rob:And mathematically, we have ideas about how it can come up with things. Everybody knows what those models are because they built them. The problem is that the parameters in these large language models are beyond our capability to trace. The amount of compute power necessary just to give us the answer is phenomenal. And then to chase that back around to verify how it calculated those responses is equally difficult, if not beyond our abilities, because there are too many parameters to chase.
Rob:I was curious about that one specifically. The gentleman on your faculty who is creating these AI bots and AI applications, has he said anything to you all as a faculty about quantum computing coming in and what that's going to do to AI? I know I'm throwing in something we haven't discussed. But
David Slade:Yeah. Yeah.
Rob:It's something that I know is on the horizon, probably within the next year or two. We're gonna see, potentially, fully functional quantum computers.
David Slade:Sure. We have not talked about that yet. I think what we have talked about, and this gets into some questions that we may wanna talk about a little more later too, is a recognition that we want to take action, but we also want to be very careful not to set up rigid approaches when this is not settled technology. We keep wondering, you know, what's ahead?
David Slade:And we don't mean this in a positive or a negative way necessarily, but given the capacity that some of these tools have, we have the sense that this is really just the tip of the iceberg. So if we set up very strict models about what's allowed and what's not allowed right now, it will not take very long for some of those models to go out of date if they're too tied to the actual technology. I know even in some of this, we've had a side project just to revise our academic honesty and academic integrity policy as a whole, unrelated to AI concerns. And, you know, there's a certain line of thought that it's a little problematic to think of academic integrity as something separate from just integrity. So when we think about ethical practice, and how to move forward with our own personal and institutional integrity intact, you really have to stop and look at big-picture questions. You're not gonna nail down ethical clarity about a piece of technology if there's not some clarity from a larger perspective, if that makes sense.
Rob:Totally.
David Slade:So we have not gone in to say where this is all going, you know, in terms of exactly what this is gonna look like. I think we are well aware that we're just laying a foundation, if anything. We're trying to build some proficiency in engaging with this in an ongoing manner, because this is not something that our working group is gonna solve this year so that we can just refer back to it for the next five years.
Rob:But you have opened up a lot of avenues for inquiry, and I get the sense that it's that way for so many others in higher education, yourself, myself. It seems like every time you start asking questions, you come up with an answer and it just gives you more questions, and you find yourself kind of lost, I think, to some degree, in a sea of: well, if we make this decision and this policy, and tell students that they need to do X, have we really thought through the larger picture of ethics, like you were saying? How does our worldview really inform our ethics? Because heretofore, I don't think that's been a hot topic that's come across, you know, The Chronicle of Higher Education. We talk about mission and vision, but I don't think that's the same. Right?
Rob:Right. And so it goes to the heart of who we are as people individually, but it also speaks to the heart of an institution, of the groups that gather together, all these individuals who share a common worldview of some sort. And there are a lot of assumptions that are made. Right? And I think one of those that you were kind of hinting at, and I've kind of thought about as well, is that one of the things we've gone to as a best practice, for some, would be to say: well, just like in a paper, you need to cite.
Rob:Right? So you need to cite. But there's a little bit of a gotcha in there, it seems to me, because we don't cite everything that we learn. Right?
Rob:We don't cite where we got our information with everything we write. At least I know I didn't; I don't know about yourself. And I've done this as a teacher.
Rob:Right? I have somebody come in. They've written a paper. I go over it with them, and I say, hey. You're missing this.
Rob:Right? And I say, you need to put this in your paper. Well, the student goes with the notes that I gave them, and even some of the words that I gave them, and they put that in their paper, and I see it show up, but they're not referencing me. I don't mark them off for that.
Rob:Right? So there's that kind of, like, grayish area, if you will, when it comes to sight citing, like how much is enough? When should you sight? And these are some of, like, the questions that come up. And I'm just curious, where are you guys landing?
David Slade:Well, you know, part of what we're trying to do is not make complete assumptions about how our students are using this, and how even we're using this. Part of where we've tried to start, for example, instead of just jumping in and writing a bunch of student learning outcomes for what we need to be teaching about AI, or rules about how you can or can't use it, is a series of focus groups with our students, as well as a survey of our students, to learn more about how they're actually using it. Because I think it's really easy to assume that what's at stake is around a certain issue. I met with the co-chairs of our writing across the curriculum committee, and their perspective was, hey, we would really love a seat around the table for this because clearly this is all about writing.
David Slade:Yeah. And then when we got into the focus groups with our students, what we found was that there's a consensus among many, many students that this is an unreliable tool to use for writing: if I use this to write something for me and turn it in as if it's mine, I am shortchanging myself, and it's not gonna do as good of a job as I can. So the way our students seem to be using this is more around organizing their work, around being kind of a study partner, almost in a dialogic way where it's a conversation partner, rather than, you know, a kind of scribe that's doing the work for them.
David Slade:So I think that, again, we're taking a little bit of a slow approach to be sure we're not building our efforts around assumptions that really aren't at play. And so, I guess to answer your question, we're trying not to jump to prescriptive statements about how to use this and how not to use it, on the one hand because those could become out of date really fast, and second because, you know, we're trying to actually understand where our folks are coming from. Because this is such an endless sea of topics that this could take us down, we really want to focus on the central questions that are at play. I mean, I think a lot of it is really trying to strike a balance, because I think that for some of us, and I'll just say a broad us, there's a bias toward inaction.
David Slade:It's like, you know what? I wish this hadn't been the case. I think we'll just keep defaulting; we'll let somebody else figure this out. And I just think that's a false choice. There are others that are, I think, so enthusiastic about what these tools can do that there's a bias so strongly toward action that one can lose one's critical eye a bit.
David Slade:And so I think part of this is: how do we strike a balance where we're not taking an uncritical approach that simply accepts, alright, AI has changed the world, there's nothing we can do about that, so we might as well capitulate to it and just charge forward without reflection? We don't want that.
Rob:Give over to the overlords.
David Slade:Yeah. That's right. Just go ahead and, you know, step into the matrix fully. But on the other side, we don't wanna say, well, this is utterly morally decrepit, and we're not gonna touch any of it. So I think part of it is: how do we bring our critical thought?
David Slade:How do we bring what's at the core of our education to this discussion? Be thoughtful users, be cautious users, but also not fearful of giving it a shot. And again, understand what is at stake in the different domains, all that is implied when we write, when we create knowledge, and when we communicate with one another. AI has a bearing on all of that.
Rob:Yeah. It seems like you were referencing an idea when you talk about balance as a critical curiosity
David Slade:Mhmm.
Rob:Towards this technology. And with that in mind, I'd like to kinda shift to a different gear and specifically focus in on faculty who may be in one of those camps that you just discussed. They may be in the camp of, I'm never gonna touch it, or they might be, I'm all in, maybe too far. But the one thing that I heard from you, which kinda resonated, was the idea of asking questions.
Rob:And I think asking questions of our students about how they're using it in their particular area, so it could be English, but it could also be the sciences. It could be sociology. It could be psychology. Whatever. Okay?
Rob:So as we're thinking about that, and we're wanting to understand and probe these issues with our students, are there any good questions, or types of questions, that you found to be helpful in pulling down the barriers, so that maybe the students don't feel like you're chasing them to catch them cheating? Anything that has resonated with them, where you were surprised at the answers?
David Slade:I think we're still figuring this out, but I think part of it is just that you have to engage the students in an ongoing way about it. I don't think that we have to so centrally locate this that AI becomes the epicenter of all that we're doing; I think that would be a mistake. But I do think that we cannot just put it in the syllabus, and we cannot just have a conversation about it the first day of class. We have to engage with it in an ongoing way.
David Slade:And I think that there are many, and this is part of what our group has talked about, there are many pedagogical contexts where it is just not appropriate to use it. I mean, we all know the disciplinary concerns of our fields. In language learning, some of the struggle that you go through, especially at the early stages, is part of what forms your proficiency. Without the struggle, you're not ever gonna have the real proficiency that you're there to have, the reason you're even taking these classes to begin with. I mean, who cares if you can memorize verb forms?
David Slade:And who cares if you can memorize grammar rules? I mean, that's great. But at the end of the day, it's about communicating with people, connecting with other human beings, understanding the human experience more broadly. And so it is completely legitimate to say AI has no place in this kind of a course. Now, there may be a different kind of course later, even in that same department, where, because our students are gonna go out and have to use these tools, be faced with them in their professional lives and their community lives, we maybe owe it to them to show them how to use some of these tools and to use them as a critical thinker, how to bring that curiosity to it.
David Slade:See, I think there's a way of looking at AI where it utterly short-circuits critical thought. There's another way to look at some of these tools where they work best when you fully engage yourself as a critical thinker. And I think we just have to find ongoing ways to engage our students in the appropriate mode for, you know, what different kinds of courses and educational experiences justify.
Rob:I think that's an excellent bit of advice, and I appreciate your willingness to have a conversation with us. I'm sure if Jared were here, he'd have other questions as well. We may have to bring you on again and see where we're at, because I know there's more stuff coming. Right?
David Slade:Yeah. And we'll have more to report later in the year. Part of what we're trying to do is to set up ways for faculty and staff alike to learn in an ongoing way, to learn together. We've even talked about creating some internal courses with modules that different people make, really addressed to our local community here. So we've got lots of great ideas.
David Slade:We're right in the middle of it; there's very little that's settled for us. But, you know, I'm very grateful for my colleagues who have been working well on this, and I'm sure we'll be in a different place in March or April.
Rob:Well, I look forward to having that conversation with you, and I wish you all well in your work. Thank you for taking the time to spend some of it with me and with our audience.
David Slade:Absolutely. You're welcome. Thank you for the invitation.
Rob:So that's gonna do it for us here at the Transform Your Teaching podcast. Thanks for listening. Please like or subscribe on your favorite podcast platform, and join us on LinkedIn. And if you wanna see what we're writing about, feel free to move on over to cedarville.edu/focusblog and check out our blog. Thanks for listening.