Transform Your Teaching

We’re starting the new year with a miniseries on the ethical usage of generative AI in higher education. In this first episode, Dr. Rob McDole and Dr. Jared Pyles discuss a definition of ethical usage with Dr. David Mulder from Dordt University.

Transform Your Teaching wants to hear from you! Please take our survey to let us know how we can make the podcast even better.

View a transcript of this week's episode.

Resources
Chat with us!

What is Transform Your Teaching?

The Transform Your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio. Join Dr. Rob McDole and Dr. Jared Pyles as they seek to inspire higher education faculty to adopt innovative teaching and learning practices.

Dave Mulder:

If students are gonna try to use AI in ways that shortcut or short circuit their learning, well, then we have to ask a bigger question because what is the purpose of education? I think education is more about formation than information.

Narrator:

This is the Transform Your Teaching podcast. The Transform Your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio.

Ryan:

Hello, and welcome to the Transform Your Teaching Podcast. My name is Ryan Lyme, and I'm here with our hosts, Dr. Jared Pyles and Dr. Rob McDole. In this episode, they start a series on the ethical usage of AI.

Ryan:

They're joined by a wonderful guest in Dr. Dave Mulder. Now here's Dr. Pyles.

Jared:

Thanks, Ryan. Yeah, I'm really happy that our friend Dave Mulder decided to come back and join us on this episode to help us define generative AI. He is a professor of education at Dordt University and a cohost of his own podcast, Hallway Conversations. Thanks so much for joining us, Dr. Mulder.

Dave Mulder:

My pleasure to be here, friends.

Rob:

Yeah. We enjoyed our last time together, and it's been a little while.

Jared:

It has. But thanks for coming back on. We do really appreciate it.

Rob:

So it was really...

Dave Mulder:

Truly a pleasure.

Rob:

It was really interesting. When Jared came back from AECT and his doctoral, you know, victory tour, he was like, we've got to talk to Dave. I'm like, yeah? What do we need to talk to Dave about? He's like, ethics and AI.

Rob:

I'm like, okay. You're speaking my love language. You said the word ethics, which gets me closer to philosophy, and, you know, obviously I do enjoy AI. But it has been a common conversation that we've had internally and that we've had with others.

Rob:

And so we're happy to bring you on today and, you know, just kick this off. So what were you two talking about? Jared, Dr. Mulder, what really, you know, got your attention when you were at AECT in terms of this particular subject of ethics and AI?

Jared:

So I've mentioned this to you, Rob, and to you, Dave: it's like we're in the emergent technology phase of this generative AI, and no one knows yet how to really handle it well. There are no best practices; no one's established that. Everyone's really just feeling out the area to see where we are. And the term that kept coming up, and Dave can talk about this too, because all we heard was conversations about how to use it and, you know, emerging studies on some preliminary work that people are doing on student usage of it. The term that kept coming up without being defined was ethical usage.

Jared:

And I think it was session five where I heard that, and I was like, wait a minute. No one's defining that.

Rob:

Ethical.

Jared:

Mhmm. No one knows what... so then Dave and I, before we went out and got lunch in the great city of Kansas City and had some barbecue, sat down. I'm like, it would be an interesting study to ask these AECT members for a definition of ethical usage of generative AI. And the next question then being, okay, where did you come up with that definition? What are the presuppositions that are in your head about this definition?

Jared:

Because, you know, and maybe I'm stealing all the air for this question, but when we got down to it, we both talked, Dave and I did, and we're like, I don't think everyone has the same definition that they think they do. I think there's an overall idea of what they think ethical usage means. When you get down to it, I think there are more disagreements than agreements in the room.

Rob:

So it sounds like you keep using that word, but I don't think it means what you think it means.

Dave Mulder:

Right. Exactly that. Oh, I was just fascinated at AECT. I've been a member of this organization for about ten years now, and it's been interesting for me. I think it gives kind of a finger on the pulse of where EdTech is, because this is a bunch of researchers, you know, instructors too, but it's primarily a research organization.

Dave Mulder:

Right? And to see, I bet half. I'm not exaggerating. I bet half of the sessions in the program had AI or machine learning or large language models, like, in the title for the session. This is in the zeitgeist

Dave Mulder:

for the ed tech field right now. And exactly to your point, Jared, I feel like this was a common refrain. Everyone's talking about ethical practice, ethical implementation, ethical usage, and what are we actually talking about? Because, you know, I think probably there's this generally assumed, oh, you know, be a good person. Like, have good intent.

Dave Mulder:

And, like, well, yes. I agree with that. Yes. We should have good intent. I try to assume good intent.

Dave Mulder:

But what does that look like in practice? There, I have more questions than answers, I think. I'd love to think out loud with you guys about this.

Rob:

Let's do that, because I think, you know, Jared, you put that forward. What is a definition of... let's just start with defining ethics. What is your understanding, gentlemen, of ethics?

Dave Mulder:

Pass. Okay. Okay. Should I take a first swing at things here?

Rob:

Please. Go ahead.

Dave Mulder:

I think this is the key question, though, that for Christian educators in particular, we're gonna have to come to some sense of what we are actually talking about. What does this actually mean? And no matter your faith commitments, I think the big concern that I hear from a lot of educators around AI is: what is this doing to our view of people? What is this technology doing to our view of people? And I think that is a really good question that we should ask, and a way to get us started thinking about what might be an ethical approach to AI.

Dave Mulder:

So I'll just say it for myself. Like, I believe human beings are created in the image of God, and I think that's gotta be a starting point for a conversation about this. Now, it's easy for me to say that because I am a Christian, and, of course, I believe the Bible is true and is God's word. And so when I read in Genesis that human beings are created in God's image, okay, well, what does that mean then?

Dave Mulder:

Well, I think that means that human beings are created to create. In the same way that God is the creator, and we're created in his image, that means we're created to create. And I look at this through that lens. Like, God has given us these creative capacities so that we can do stuff with the creation he's made. And so there's even kind of this implied idea: because God's the creator and he's made all this stuff, we can use this stuff to create new things.

Dave Mulder:

So I think of it like AI is part of creation in that sense, because it's created by human beings who are created in God's image. Right? There's kind of a ripple effect coming out of this. So when we start talking about ethical usage or ethical implementations of AI, I think it's gotta be in some way recognizing that humans are created in a way that they're going to create stuff. And then how we use the stuff we create has to recognize what humans are good at.

Dave Mulder:

And I think this is the piece where I start to wonder a little bit. Like, are we trying to use the machines in ways that actually deny or minimize the goodness of how God created humans to be? I don't know. What do y'all think about that?

Jared:

That's interesting, because it's like a hindrance on personal creativity, and it's getting in the way of what we are meant to be as creative beings. So it's kind of: are we replacing our own creativity with something that could be creative, quote unquote, for us?

Rob:

Well, it gets down to what is the purpose. So I don't disagree with you about what you said, how we're created. You know, if I were to go to ethics, because I teach business ethics, and I've loved philosophy for most of my life, so, you know, wanting to know the why behind things. In some of these philosophical conversations, ethics is born out of our values.

Rob:

Ethics are the things that we do. They are the behaviors that we do. And that's what we're talking about when we talk about ethical usage of generative AI, or AI in general. Right? What behaviors are okay and what behaviors are not.

Rob:

That's usually a moral question that's derived from our axiological views, or our values, the things that we value. And what you stated were definitions of anthropology, you know, cosmology, and theology, along with some epistemology, because you believe

Dave Mulder:

This, brother.

Rob:

And so, you know, those all lead to a value. I think the thing that's missing in that conversation, though, is what you're getting at: what is the purpose, and are we tripping over something in terms of God's purpose for us here on this planet? The first question that pops into my head after you said what you said, Dave, is: why did God create us in his image? What's the purpose? Because if I can figure out the purpose, then I can understand whether or not my usage of AI is tripping over something that it shouldn't be.

Rob:

Right?

Dave Mulder:

So I love that question, Rob. And if I could lean into this: for the answer to that question, why did God create us in his image, I always lean back on the Westminster Shorter Catechism. And the first question and answer in the Westminster Shorter Catechism is, what is the chief end of mankind?

Dave Mulder:

Like, what are people for? And the answer the catechism gives to that is to glorify God and enjoy him forever. And to me, that is such a great answer to that question. Like, why are we created in God's image? Well, so we can glorify him and so we can enjoy being in right relationship with him.

Dave Mulder:

Right? And so that, I think, drives me toward this, then, to say, okay. Can AI be part of that? Part of me bringing glory to God and enjoying him? I think so.

Dave Mulder:

I think so. But how? That still... there's always a question behind the question, isn't there?

Rob:

Yeah. Then you have the follow-on question from that: well, okay, how do we glorify God? And I think we see some of those things in the Old Testament, and we see it in the New Testament.

Rob:

The very first thing is God creates, and you're not wrong. He creates, but the way he creates is unique. And the way we see him identified, and the way we see Jesus identified in the New Testament. I'll just use the book of Genesis, and let's just say chapter one: in the beginning God created the heavens and the earth. We have that narrative statement, and then the first thing we see is God speaks. So there has to be something about the word.

Rob:

Right? And then we see that in the New Testament: the word is what creates, God's word, and it's more than just letters. Right? It's ultimate meaning; I would say it's the building blocks of the universe, God's word. The reason why I say that is because we go on in John, and John says that Christ is the word and that everything was created through him, and in him everything holds together.

Rob:

So that speaks to me about this idea of cosmology and how this world that we live in, this physical world, actually holds itself together: through a person. And from that, I then start asking myself the question, why has mankind created AI? What's its purpose?

Dave Mulder:

Yes. I don't know. This is a big question, though.

Jared:

It's a very good question.

Rob:

I think... okay. If you go back to Genesis again, Genesis chapter three, it's the same reason why Satan got himself in trouble. And I think it's the same reason why we're finding ourselves dealing with this: because in many ways, we as humans want to be God apart from God, and we want to control the words. There's power in what we're creating here.

Jared:

So then, when we come to having a discussion about defining ethical usage of generative AI in higher education, you know, I agree with Dave in that he's talking about glorifying God and enjoying Him forever. So to me, that's the defining line of a Christian perspective: is my usage of generative AI glorifying God?

Jared:

Or is it not glorifying God? To me, if you boil it down, if you go to the foundation, that is my definition of ethical usage of generative AI. But if you're having this conversation with others who don't have that worldview, that perspective, that to me is where it gets super interesting. Well, yeah. Because at one point, I wanted to just stand up, not stand up, but respectfully raise my hand and go, how do you define that?

Dave Mulder:

Right.

Jared:

Because I don't know if...

Rob:

They can't.

Jared:

I don't know. They can't.

Rob:

So, like, all that's out there right now... if you look at the ethical literature, there are two major approaches to ethics. One is a utilitarian approach, and we could go into some of the subsets and things like that. And there's actually literature on these things. And then the other is deontological. So let me explain.

Rob:

One is concerned with the outcomes: utilitarianism is about the greatest effect, or positive effect, or who knows what defines great effect, for the greatest number of people. So if you have to kill a few so that a greater majority can survive, then ethically, that's okay according to utilitarianism.

Dave Mulder:

Mhmm.

Rob:

Deontological is more concerned about the process. You do the right things, but it's not really concerned about the outcome. Does that make sense?

Jared:

Okay.

Rob:

So those are the two main ethical standards that get used in business, as far as I know. And I'm pretty sure from the literature that it's the same in any of the other areas where you might see ethics pop up. And I would say it's probably the same here as well: when they say, well, does AI benefit a greater number of people than it hurts, then, you know, that's ethical.

Rob:

That would be one way. And I know I'm oversimplifying this. But...

Dave Mulder:

No. But I think you're onto something, though, as you bring this up, Rob, because I think that the way we answer that question says something about deeply held, deeply formed beliefs that we have, like, just our whole way of viewing the world around us. Right? So one of the courses that I get to teach is a capstone course in our core program, which is really a course for undergrads. It's really a course in discipleship.

Dave Mulder:

Like, as you graduate and you go on from Dordt University, how are you going to be in this world? And so we've got a bunch of different options they can choose among. The one I happen to teach is called Technology, Identity, and Community. And so the whole lens that we're looking at the world through is, hey.

Dave Mulder:

As you think about discipleship, technology is doing stuff to you, to you as an individual and to you as a member of society. Let's talk about that. So I actually introduce them to, you know, a classic of moral philosophy, the trolley problem. Maybe you're familiar with this.

Dave Mulder:

The trolley problem, the basic idea is: you have a trolley that's going down the track, and there are five people standing on the track, and it is going to hit and kill those five people. You could throw a switch that's gonna divert the trolley onto a siding, and then it will hit and kill one person. What do you do? Right? If you do nothing, five people are gonna die.

Dave Mulder:

If you act, you're gonna cause the death of someone. You are going to cause that death. Right? And so then there's this ethical dilemma. Well, that's a pretty easy one. Most of the students are like, well, you know, utilitarianism comes into play, and they say, well, you know, save four lives by throwing the switch.

Dave Mulder:

Right? But then we make it a little more complicated and say: instead, you can stop the trolley, because there's a bridge, and there's a person on the bridge. And you could push the person off the bridge, and they'll fall in front of the trolley to stop the trolley. It'll for sure kill that one person, but it won't kill the five. And they're like, oh, well, no.

Dave Mulder:

I can't push someone off the bridge. Well, what's the difference? Right? And so utilitarianism falls apart right there. And then we make a leap from there to talk about, well, how about self-driving cars?

Dave Mulder:

Because if you were gonna program an AI to manage a self-driving car, you have to make these kinds of ethical judgments. And suddenly, we're having this really rich and robust conversation, and they're starting to say, like, I don't think God wants us to make self-driving cars. Right? And, okay, let's talk about that then. Right?

Dave Mulder:

But it's an interesting way to get them to actually think beyond a surface level of saying, well, clearly, this is the way things are. Right? These are issues that we actually have to grapple with, I think, as Christians in education, when we're dealing with the effects of technology.

Rob:

No doubt. So one of the things that I remember from a pre-show conversation with David Slade from Berry College, who we're gonna have on the podcast after you, Dr. Mulder, is he said: once you can't validate and you can't trace those things, we immediately move from the area of the observable and definable to the area of faith. I have to have faith that what this AI is telling me is true, because I can't validate it. And as soon as it becomes an issue of faith, I'm no longer in Kansas anymore. You get what I'm saying?

Jared:

Yeah. Yeah. Yeah.

Rob:

And as a result, this ceases to be an academic conversation, and we've moved into the philosophical and religious, because now I'm asking myself the question: can I trust AI? Can I put my faith in AI? And I know what the answer to that question is. Right? Absolutely not.

Rob:

No. And most people right now, if you ask them, they would all say the same thing. No. It hallucinates. Right?

Rob:

You can't put your trust in this thing. But that's still... that's one aspect. Does that mean I should just throw the whole thing out? I don't think so. I don't think so.

Rob:

But I think I should at least recognize the fact that if I can't trace it and I can't validate it through normal observation, you know, scientific observation, then I'm in a different sphere. I don't know. What do you think?

Dave Mulder:

Well, I think there is a sense that we need to know enough to know how things work. Maybe the analogy I have for this is: I am not a car guy. Right? I am not a car guy at all. I drive a 2001 Ford Windstar.

Dave Mulder:

That's got 230,000 miles on it. Right? The thing continues to run. It's the cheapest car I'm gonna own at this point.

Dave Mulder:

I don't really care how it works. I'm glad it works. Right? But that is a very pragmatic statement that I make, and I think a lot of people are treating generative AI a lot like I treat my Windstar. Like, as long as it's running, I'm just gonna keep driving it.

Rob:

Are you praying that the Lord continues to hold it together?

Dave Mulder:

At this point, yes. I'm hoping I can keep it going for a couple years till I get my kids through college. They're both good to see. Right? Yeah.

Dave Mulder:

But no. I mean, I joke, but seriously, I think there is a sense in which we need to know enough about how things work. Like, I can check the oil on my Windstar. I have a few things that I can take care of there, but I don't know all the ins and outs of how it actually runs. Right?

Dave Mulder:

And a car is a far simpler device than these generative AI tools.

Rob:

But you could. That's my point. You could. And what we have is, like, some of the data scientists and, you know, computer scientists who've been looking at, for instance, GPT-4. Microsoft had a bunch of their scientists look at these things.

Rob:

And they had a paper that basically said: I'm sorry, we have tried to follow the numbers, essentially, to figure out how it's coming up with this, and we can't do it.

Dave Mulder:

Yeah. That's unsettling for sure.

Rob:

Yeah. That's kinda where it leaves me. I understand what you're saying, and that makes total sense. But this is a case where it's like, well, I would hope at least you've got somebody that knows how to work on your car. Right?

Dave Mulder:

Right.

Rob:

Right?

Dave Mulder:

That's fair. That's fair. Yep.

Rob:

And here, we've got people creating things, and they don't even know how it's working. That causes me a good deal of...

Jared:

Or they don't know if it's actually true and valid.

Rob:

Right. Right. But we're fine-tuning things. You know, there are companies that are using this and using it to great effect. I mean, so somebody's gotta know something.

Jared:

Yeah. And then we already see the effects of that as well. Not only is it being created, but now you see the AI-generated images and videos that are being passed off as truth because people aren't evaluating them. So not only is the creation of it being used incorrectly, or in a way that's not true, but now it's also being propagated and shared to a point where it's just like, well, I don't know. I have to reevaluate everything at this point.

Jared:

So...

Dave Mulder:

Well, this is one of my big questions about AI. Like, do we think of the output from generative AI as being created or being constructed? And I ask that question purposefully, because I think the answer that you give to that question says something about how you actually view what's going on inside that black box of the AI. Because my take would be: it's constructing.

Rob:

Yes. Something. I would agree with you.

Dave Mulder:

I don't think it's actually created. And this comes back to my original point, the very first thing I was saying about being created to create. I think human beings are uniquely different from the machines because we have the capacity for actual creativity. And I'm not convinced that generative AI, no matter how powerful the computers running it are, can actually create things. I think these systems construct things.

Rob:

Because there are so many parameters in this, we would need a supercomputer just to do the validation on the computation. And that's essentially what they've done. It leaves more questions than it does answers.

Dave Mulder:

Yeah. Absolutely.

Rob:

And I think, at least practically speaking and ethically speaking, you cannot leave your brain at the door. It emphasizes education and being a professional in your area, and maybe others, so that you know what to expect and what's valid and what isn't, and then being able to, you know, discern that when ChatGPT gives you a response, evaluate it, and then choose what you're gonna keep and what you're gonna throw away.

Jared:

All I know is this series is gonna make my head hurt every time we record. Because right now, I have a massive headache.

Rob:

Well, maybe, maybe...

Jared:

Thanks a lot, you two.

Dave Mulder:

Happy to help, Jared.

Rob:

Whose idea was this? Yours.

Jared:

It was mine.

Rob:

It was your idea. You should've kept me in the cave.

Jared:

As Ryan said, it's only three episodes, so I think I'll be okay.

Rob:

Kept me in the cave. So one of the things we also wanna talk about, Dave, is the long-term effects of using generative AI on humans. I mean, we've kind of seen a little bit of this with the usage of social media and the Internet. Right?

Rob:

And the more people use it, the more anxious they seem to become, regardless of whether they identify as believers or not. And I think there's some degree to which the more you do something, it's actually a behavior that you're programming. And we know this; I mean, if we go into brain science, we could easily show this. Right? The more you do something, the more you're driving growth in your brain for whatever this thing is that you're doing. Right?

Rob:

And I wonder what the outcome will be, even more so. Will it be anxiety, or will it be something even worse?

Dave Mulder:

Well, and I think that's a very, very important question that you're asking, Rob. This reminds me a whole lot of conversations I was having back around 2003 about, you know, this newfangled thing called the World Wide Web. What are we gonna do about this World Wide Web in education? And seriously, I remember having conversations with colleagues at the school I was teaching at, and I was a middle school science teacher at the time. And I'm looking at this saying, like, what are you even talking about? Why would we not use the World Wide Web? Right? So I sympathize with teachers who might go, what are you talking about?

Dave Mulder:

Why wouldn't we use AI in education? And now, a little older, a little wiser, I understand what some of my colleagues were afraid of with this. So I would argue the World Wide Web has been a net good for education, but it hasn't been without its bumps and bruises along the way. Right? We've had to adapt to a world where the facts are free online.

Dave Mulder:

Right? And that changes things. And I think the same kind of thing could happen with AI. I can't quite predict; I can't see past the horizon here to try to imagine this.

Dave Mulder:

Right? But I think that it's here to stay. I don't imagine that AI is gonna go away. So we need to address it, and we are gonna have to adapt things in education.

Dave Mulder:

My big question, though, is: what is this gonna do to our students in the meantime? And so that's why I echo exactly what you're saying, Rob. And my best answer right now is I wanna make sure that human beings are still doing the very human work of learning. And if students are gonna try to use AI in ways that shortcut or short-circuit their learning, that somehow they aren't actually gonna learn, well, then we have to ask a bigger question, because what is the purpose of education after all? Right?

Dave Mulder:

And, again, I would tend to say I think education is more about formation than information. Right? Because the information's available online, but that's not the same as becoming educated. Becoming educated means you're formed in such a way that you have wisdom and you have skill, and you can use that information to do real human work. And I think this is the concern that I have, then, when I think about generative AI.

Dave Mulder:

I'm using it for myself in my own work. I am gonna try to work with my students, these future teachers that I'm working with, to learn how to use it in their own work. But I also wanna be really cautious and really mindful about what kinds of habits I'm settling myself into. Right? And I guess I'm thinking about this in light of a book I recently read by Andy Crouch.

Dave Mulder:

I highly recommend it. It's called The Life We're Looking For. And he just raises the question in there. If you're not familiar with Andy Crouch, he's a great Christian thinker, and The Tech-Wise Family was one of his big books recently too. So he's been thinking about technology and its influence on people.

Dave Mulder:

But in The Life We're Looking For, this is really what he's asking: what kind of life do you actually think Jesus is calling you into? And it's not necessarily what you think it is. The things you love are not necessarily the things you think you love, and he's just helping us think better about that. Like, I don't wanna live a life where I'm just lying on my couch scrolling on my phone.

Dave Mulder:

That's not the life I'm looking for. I don't wanna live a life where I outsource my thinking and my creative capacity to some machine that can construct some facsimile of it. I wanna do the real work of being a real human being in the world. But I think it's gonna take a lot of hard work for us as Christian educators to convince our students that this is the life they're looking for too, that they wanna be deeply formed to become more like Jesus day by day, and to see how education plays a role in that.

Rob:

Yeah. It's good. What's interesting: as I was thinking about that while you were talking, the thing that came to mind was what Jesus had to say in response to the question about the greatest commandments: to love the Lord your God with all your heart, soul, and mind, and love your neighbor as yourself. And the thing I can't get away from is that those things have to be done in a relationship, a relationship of trust.

Dave Mulder:

Amen.

Rob:

And I think the ultimate education happens through those relationships with people, much like you and I, even though we're having this conversation mediated by technology. Right? We still have a relationship with Dave.

Rob:

Right? It's a lot harder to have a real relationship with something that doesn't exist. And I think that's really the challenge: how to incorporate it, much like we're using this technology to have these kinds of conversations. If technology is an amplifier, it amplifies equally good and equally bad input.

Rob:

Yeah. So I think that gets to it. It's like, how do we lovingly use generative AI? And I look forward to more conversations.

Jared:

Thank you, Dr. Mulder, for coming on and joining us and helping us, not answer any questions, but ask more questions. So I appreciate that.

Dave Mulder:

It was an absolute pleasure to be with you, gents. Thanks so much for having me on.

Rob:

Thank you.

Jared:

That's gonna do it for us on this episode of the Transform Your Teaching podcast. Be sure to like and follow us on your favorite podcast platform. Connect with us on LinkedIn; you'll see a link in our show notes and description on how to do that. Be sure to send us an email at CTLpodcast@Cedarville.edu.

Jared:

And as always, check out our blog at cedarville.edu/focusblog. Thanks for listening.