Transform Your Teaching

Does community have an impact on ethical AI usage? What problems arise when individuals or organizations have differing ethical approaches? Join Dr. Rob McDole and Dr. Jared Pyles as they chat with Dr. Ross Perkins (Associate Professor of Educational Technology at Boise State University) and Dr. Dave Mulder (Professor and Department Chair in the Education Department at Dordt University).

Transform Your Teaching wants to hear from you! Please take our survey to let us know how we can make the podcast even better.

Resources
Chat with us!

What is Transform Your Teaching?

The Transform Your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio. Join Dr. Rob McDole and Dr. Jared Pyles as they seek to inspire higher education faculty to adopt innovative teaching and learning practices.

Ross Perkins:

One of the goals of higher education is to produce knowledge that is verifiable and trustworthy. And if it can't be, then no tool that we use is gonna be worth it.

Narrator:

This is the Transform Your Teaching podcast. The Transform Your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio.

Ryan:

Welcome to the Transform Your Teaching podcast. My name is Ryan Liming. In today's episode, Dr. Rob McDole and Dr. Jared Pyles conclude their discussion on ethics and generative AI.

Ryan:

They chat with Dr. Dave Mulder of Dordt University, as well as Dr. Ross Perkins from Boise State University. Thanks for joining us.

Jared:

Well, I'm very happy to bring on two of my favorite people in the whole world. We've had on Dr. David Mulder before.

Rob:

Wait a second. What? Those are your two favorite people in the whole world?

Jared:

I said two of. I didn't say my two favorite. Okay. Two of. Thanks for interrupting my intro there.

Rob:

That was good.

Jared:

I'm very happy to bring on these two of my favorite people: Dr. Dave Mulder, who we've had on before, from Dordt University, and Dr. Ross Perkins from Boise State University. Dr. Perkins and I go way back, so far back, to six years ago when I started my EdS degree at Boise. He was my first advisor, and he told me to never use "further" as a transition word in my sentences, and I have kept that to this day.

Jared:

So welcome back, Dr. Mulder, and welcome, Dr. Perkins. Thanks so much for joining us.

Dave Mulder:

Great to be with you, gents.

Rob:

It's great to have you. So, Dr. Perkins, I'm just curious. I decided to have ChatGPT do a background check on you. Do you wanna hear what it says?

Ross Perkins:

Oh, I'm sure there's a number of Ross Perkinses that exist out there.

Rob:

I'm just curious if this is accurate. I want you to listen and then tell me. Dr. Ross A. Perkins is an associate professor in the Department of Educational Technology at Boise State University. He earned his PhD in instructional technology from Virginia Tech in 2003. His research interests encompass instructional design, diffusion of innovation, STEM education, and the integration of mobile technologies in education.

Rob:

Dr. Perkins has a notable focus on the contextual impacts of instructional design, particularly in international settings. He serves as the coordinator for the educational technology online EdD program at Boise State University, contributing significantly to curriculum design and program management.

Ross Perkins:

Most of that is true.

Rob:

So what was not true?

Ross Perkins:

Well, there are actually just some things, some organizational changes that have happened. My research interests have varied a little bit from that, but I'll just say that the department from which Jared and Dave both graduated is now called Educational Leadership, Research, and Technology. And EdTech is now a program area within it, along with OPWL, which is Organizational Performance and Workplace Learning. Those colleagues just joined us this past semester. And, also, I am now the doc coordinator for two doctoral programs, not just the EdD in EdTech, but also the EdD in CLI.

Rob:

Wow. Cool. Well, that was interesting. I thought it would be, you know, in light of our conversation about ethics and AI. You know, it's funny; it started off with, let's Google ourselves.

Rob:

You know, Google your name. You remember doing that? Oh, yeah. Oh, yeah. It could be a scary proposition.

Rob:

But, yeah, I did that just thinking, I wonder what it's gonna come up with.

Jared:

So I should tell you, Dr. Perkins, this whole idea of ethical talk and using AI came from us being at AECT there in Kansas City back in October. I don't know if you recall, or if you were in any of the sessions, but it seemed like every single one of my sessions that centered around generative AI was talking about ethical usage. I was sitting there going, I don't know if everyone in here has the same definition of ethics and ethical usage. So we wanted to have you on just to get your perspective, because I know you like to think broadly and deeply. You and I have had several conversations about some deep topics, and I wanted to get you on to discuss this. So if you were to give someone an elevator definition of ethical usage of generative AI, what would you say?

Ross Perkins:

You know, I'd probably start with: how many floors are we going? Like, are we going four floors, or are we going... is this the Peachtree Plaza Tower?

Rob:

Yeah, you're going high. Easily. You're at least at floor 35.

Jared:

A couple stops along the way too, to pick up people.

Ross Perkins:

You know, I think I'd probably start with talking about community and the importance of community with respect to ethics and the ethical use of artificial intelligence, and what the relationship is between the use of any tool and how it impacts community, the community that we're surrounded by. In higher education, and maybe education generally, but I'll speak to higher ed because I've been in it forever, we're talking about the use of a tool that has various impacts on different sectors or segments within the university. Doctoral education, for example, is one segment; research labs are another segment.

Ross Perkins:

Another part is PR and marketing, and you could talk about institutional advancement. So there are many different ways that I think ethics plays out, because you couldn't make one broad statement about ethics, you know, for the whole university. So I would say that when you start with community, you look at those. What are those needs? Where are the needs of those communities?

Ross Perkins:

What are the goals and desires of those communities? And if I just speak to the research side of things and the doctoral education, the graduate education piece of things, and this is more of my wheelhouse, I would say that our community wants to have people within it who are trustworthy, who can be seen as people who are producing work that is their own, but also that there's an expectation that they would be using tools that would help inform them in ways that bring together the latest information and the best knowledge that's out there, in a way that they might not be able to do by themselves. I would even posit that not using AI is unethical, in the sense that in our field in educational technology, and this is just edtech and distance learning, there are something like, I don't know, maybe 200 different journals that exist.

Ross Perkins:

Those journals are publishing all the time. And the back-of-the-napkin math, which is not great math, that I did a few years ago suggested there are probably 2,500 articles or more per year being published in our field. Well, how can you possibly account for all that information as a single person? You can't. Do you need to?

Ross Perkins:

Maybe you don't need to. But with various AI tools now, we can get probably a better sense of what's in that literature and be able to report it more accurately than just living in our own silo and saying, okay, I explored 30 journals, whereas this technology could do that. So that's just one example of where AI could be leveraged in a way that makes our reporting more ethical, because it's more true to the data that are actually out there. On the other side of things, and by the way, I don't know if there's a popcorn machine in the elevator, but probably now is the time to hit that up. But I would say

Rob:

Don't worry. A little kid just pushed all the buttons going up to the floor.

Ross Perkins:

Yeah. Perfect.

Rob:

We gotta stop at every floor.

Ross Perkins:

So, you know, going back to ethics and AI, specifically within higher ed research and graduate education, I would say that when you're looking at community, you wanna have an ethic that responds to the needs and expectations of scholars who want to put forward data-driven information, information that can be verified. And sometimes that's possible with AI, and sometimes AI makes that actually more difficult.

Rob:

It seems like you're coming from a social constructivist perspective when you're approaching the idea of ethics, especially for AI. Is that correct?

Ross Perkins:

I haven't put that much thought into it, but yes. Probably so. Yeah.

Rob:

Kind of Vygotsky-esque.

Jared:

Oh, yeah. I remember Vygotsky. You remember Vygotsky? I do. He was Russian.

Ross Perkins:

Yeah. Yeah. Zone of proximal development. That's what we're all about. ZPD.

Ross Perkins:

Yeah. Yeah.

Dave Mulder:

Ross, I really like your take on that, though, thinking about community as an aspect of ethical usage. Right? Because I think there is this sense where, forgive me, in academia we can kind of get into a game where it's every man for himself, looking out for me, looking out for what's best for me. And I think it's real easy for that to turn.

Dave Mulder:

There's a dark side to that. Right? And if we're not aware of how we are interacting with other researchers, with our students... you know? There's always another human being on the other end of that communication. I remember learning something about that in one of the courses I took with you, Ross, way back in the day too.

Dave Mulder:

Right? That sense of remember the human. And, you know, the analogy I've been using lately when I talk about technology integration with my students is, you know, this is low tech, but remember two cans with a piece of string in between them, and you can talk to somebody? You gotta remember there's someone pulling back on the other end of that string too. And I think with a lot of digital tools, we can kind of forget that.

Dave Mulder:

And so maybe that's the way of thinking about this. So, okay, now who's on the other end of the string if we're using AI? Well, the way that we're using it is gonna be in community. It's gonna be in the context of interacting with other people, and there is an ethic that comes with that.

Dave Mulder:

So I just really appreciate what you were saying to get us thinking about who we are impacting with our use of AI. I think there's an ethical component to that.

Jared:

I'm curious about that as well. Like, do you think there's a sense of, I can't find the right word, I want to say cheapening, but I don't want to do that. But it seems that way, where it's not cheating per se, but it's more of a, you know, I didn't put as much work into this because I incorporated generative AI into what I was doing. As far as the brainstorming or any aspect of the project or the assignment that I'm working on as a student in higher ed, at the graduate level?

Jared:

Is there a sense of, I didn't truly earn the A or the B that I got, because I incorporated generative AI into the process? Is that an ethical dilemma? Is that something that we should be weighing? We've had that experience when we've talked with students, where some have said, I don't use it because it takes away from my own voice. And others have other reasons, but another reason that's come up is, I feel like it cheapens the process.

Jared:

It cheapens the value of the grade that I'm getting.

Rob:

I mean, you could have a communal weigh-in on that, and I think that's one of the challenges of the approach. I understand what Dr. Perkins is saying. I think the challenge comes when you have communities that might be in the same strata, but those communities approach it differently. And if you're sharing data in any way, like, for instance, articles, right? Let's just have a little experiment here. Let's say you have a Boise State and a Cedarville University, or a Boise State and a Dordt.

Rob:

Right? And you have communal understandings. In other words, if you tie your ethics to the community, and what the community as a whole has said, we believe this is what we should do, then behavior is going to result from that. Right, as long as everybody's agreed and said, yes, this is what I'm going to do.

Rob:

So you could get into a situation where you have them trying to partner or exchange ideas or exchange research, and one of them saying, well, that's not ethical. Well, why? Well, because, you know, in our community we don't accept that, whereas in their community they do. So there's that part. But I think, you know, that also goes to the point of your own personal conscience, and not offending that, which I think would probably come out, at least I would hope it would.

Rob:

And I'd let Dr. Perkins speak to this, you know, since he brought it up: how is that developed within community when you have so many people who may have differing viewpoints, or differences of opinion in terms of what their own conscience would allow and what it would not? So I would be curious to hear our guests' response on that. But I think the answer to your question is, yeah, it could. It just depends on the community, and it depends on the basis from which you're deriving your ethics.

Rob:

Gotcha. And I open that up for these gentlemen to respond to.

Ross Perkins:

Oh, no. Dave was itching to say something. So go ahead.

Dave Mulder:

Oh, I'm just thinking about this in terms of, Jared, your original question. Rob, I really like your way of framing that. Right? Like, putting two communities in conversation with each other, and how there might be different ways of thinking between different communities. Right?

Dave Mulder:

If we're thinking about ethics as a community-driven kind of thing. But, Jared, I remember when we were in Kansas City, and the original conversation you and I had, this was very much the wondering that we came to. Like, if you've got this group of researchers sitting together in a room and everyone's talking about ethical implementations of generative AI, what are we actually talking about? Is my definition of what I think is ethical the same as what Jared's is, which is the same as what Ross's is? And I think we have to develop that in community, then, so that we are actually speaking together, that we are actually on the same page, that we're tracking with each other.

Dave Mulder:

And so, oh, yeah, Ross, you and I are thinking about this the same way. And I don't know. Does that resonate for you too?

Rob:

I do wonder if it's gonna develop much like, you know, styles of writing. You know, MLA versus APA. Are you gonna have groups that pop up and say, this is what we think this should look like? And anybody who adheres to this signs on to it, you know, here's the manual, like the APA seventh. Yeah.

Rob:

But I kinda wonder if we're gonna end up with something like that. It's interesting: if you go to Cornell, Cornell has an ethical AI for teaching and learning site. I don't know if you all are familiar with their website or with some of the work that they've been doing. It's interesting.

Rob:

I'm just looking at a bullet list of things that they cover. I'll read them real quickly. Transparency and oversight, that's a standard for them. Political impact. That one's really vague. I mean, impact is vague as well.

Rob:

Environmental impact. Again, somewhat vague. Diversity, nondiscrimination, and fairness. Privacy, data governance, technical robustness, and safety. Those are their major areas of AI literacy, the areas where we need to pay attention for ethics.

Rob:

Mhmm. And I just found myself with more questions than answers after going and looking, and that seems to be the conversation. I had a conversation with Dr. Schlade, you know, in the previous episode. Yep. And that was what it seemed like we kept coming up with, because I kept asking him, you know, what are you all doing?

Rob:

What are you seeing? What are the kinds of things that... are there any, you know, best practices? Well, we're kinda still working on that. I mean, he was honest about it, and that's great, and that's fine. But I am curious, from this group, any one of you all: are there best practices that you would say are coming to the top, like cream rising to the top?

Dave Mulder:

I think this is one of the challenging things with emerging technologies. Right? It's really hard to know what best practice is until we have more research being done. So I don't know. It feels like a non-answer to a really important question you're raising.

Rob:

That's an answer.

Dave Mulder:

But yeah. Right? And I think that the nature of emerging tech means that we're going to continue to see some evolution in the ways people are using really powerful tools. And I think that's the part that gives me pause and concern. I'm also gonna keep on using AI myself.

Dave Mulder:

I wanna keep understanding this. But at the same time, yeah, like, how do you navigate that brave new world we find ourselves in? We're gonna be stumbling our way through. Right? I guess the analogy, and I think I maybe said this last time I was on the show too, is that it reminds me a lot of conversations I was having in the early two thousands about this newfangled technology, the World Wide Web. And, like, we adapted as educators.

Dave Mulder:

Right? Like, now that this exists in the world, we're not gonna go back from it, but it does mean we're gonna have to evaluate and, you know, try to come to some sense of best practice. There's gonna be research, and there's gonna be playing around, and trying to come to some shared understanding of what promising practices actually look like in real life.

Jared:

Yeah. It's an exciting thing to me. It just feels like, and I've mentioned this in the episode with you, David, before, we're at this emergent technology and we're still trying to navigate what exactly is going on with it in higher education, and education in general, much like when the calculator or, believe it or not, the overhead projector was brought in. Like, how was this going to change education? What are the best practices for it?

Jared:

Or should we even be using this new technology? We need to fill out the space and figure out where exactly we are. Ross, I kinda didn't give you a chance to answer, but are there any best practices, or what is Boise State doing at this point to handle usage of generative AI?

Ross Perkins:

Yeah. So that's evolved somewhat, and I don't know if I know everything that we've been doing, because I know it started last year, a year and a half ago now, when they had a task force on AI. And now there's actually a, you know, a subsection of our site dedicated to AI and education. There is a working group of, I think, faculty and staff.

Ross Perkins:

The state of Idaho, of course, I don't live there anymore, but I know that the state of Idaho has taken up quite an interest in how AI is being deployed in its various agencies, because obviously the university and the seven other state institutions are all using it in different ways. And there's a survey that went out as they tried to draw a circle around how AI was being used. So in terms of how the university is approaching it, I think it's not entirely exploratory still. I think that there are some definite opinions about how it should or should not be used, with an eye on the ethical implications that it will have, for example, for research subjects, for the recipients, the audience that's receiving the message, for the expectations of the research community, etcetera, and also within teaching and learning. The last I knew, there were basically three statements that the task force put together that faculty could adopt and put into their syllabi.

Ross Perkins:

One is a yes, use AI as much as you want kind of approach, but do so with the following things in mind. Another, at the other end of the spectrum, is thou shalt not use AI at all, which, I know, how do you police that? I don't know. But there's that expectation of allowing for a situation where professors can then draw a very clear line: if I can show that this student has used AI in their work, then we're going to shut that down immediately.

Ross Perkins:

And then there's a statement that's more middle ground, like, yes, AI is acceptable in these situations. And that's kind of the approach that I take, quite honestly. There are some assignments where I don't care if you use AI. It's probably actually better if you do. But on the other side of things, and I've seen this already even in graduate classes, there are discussion board interactions where the text is, to me, obviously AI. Now, do I go and check it?

Ross Perkins:

No. I don't do a deep dive on it, except to remind students: hey, listen, this doesn't need to be perfectly framed anyway. This is just your gut reaction to these, you know, conversations.

Ross Perkins:

It's like this conversation right here. So generally, to answer your question, Jared, about Boise State's approach to AI: I don't know if it's more mature than what other institutions are doing, but I think they've taken a seriously hard look at it, trying to understand what the different applications of it are and what that means to various communities across the university, keeping in mind that one of the goals of higher education is to produce knowledge that is verifiable and trustworthy. And if it can't be, then no tool that we use is going to be worth it.

Rob:

I like that. That sounds very similar. I don't know, Jared, if you remember an earlier conversation with Dr. Schlade; it was one of the things that he said. It needs to be able to be verified.

Jared:

Mhmm.

Rob:

And I think that's one of the challenges of what we have with ChatGPT and other LLMs: we can't trace it back. We can't follow the decision tree. I think that is one of the major setbacks, one of the things that I struggle with. Now, what you can do on the flip side is due diligence, and they're providing more of that, trying to provide you with the sources where, you know, some of these answers came from. You can go and look, and then you can evaluate based on that.

Rob:

And I have found ChatGPT from OpenAI, the most recent model, to be fairly accurate.

Jared:

Mhmm.

Rob:

More often than not, the resources they're providing actually do, you know, back up what was written or what was generated. So at least from that perspective, we can at least chase it that way.

Ross Perkins:

You know, it's interesting that these questions about ethics always follow the creation of a tool that we probably should have been asking these questions about beforehand.

Rob:

Right.

Ross Perkins:

Somebody makes something, and you're like, oh, okay, what are the ethical implications? Whereas if we'd really been asking the hard questions first, maybe we'd say, no, I don't know if this is the best idea. Not that it's gonna stop anybody. Right?

Ross Perkins:

So, with respect to what you're just saying, I've heard somebody describe various LLMs, you know, things like ChatGPT or Gemini, as being golden retrievers. It's like they're just really happy to bring you information.

Rob:

Yep. Yep.

Ross Perkins:

They just wanna answer your question. And if you throw a ball and they bring back a stick, they're happy. They're like, listen. I got something from over there, and now it's back to you. Like, no.

Ross Perkins:

Get the ball. Like, no. This stick is awesome. Look how awesome this stick is. So I think that's, you know, one thing that we have to be aware of, just in terms of teaching critical thinking at any level. Obviously, if you said, hey,

Ross Perkins:

Listen. I got this information from Wikipedia. Like, oh, okay. That's a start.

Ross Perkins:

Not that Wikipedia is all bad, but let's not put 100% of our money into Wikipedia. Let's look at what other people are saying. And I think the same is true with ChatGPT: people have to look beyond just the answers they're getting. But that's true of anything. That's true of any resource, any book, whatever. You're gonna look outside of just that thing or just that person telling you. It's like, what is the actual thing behind it?

Ross Perkins:

You know, for example, here on my side, I've already seen some graduate students putting into AI a fairly innocuous prompt, you know, a question that asks ChatGPT to give them some resources about a particular topic. Now, obviously, you could do your own work. You could go to dissertations and articles and look at their lists of references and find things there. But ChatGPT, when it returns responses, will not infrequently give you references in APA format. And the author is a real author, and the title may be, and probably is, a real title.

Ross Perkins:

And the journal is a real journal, and the volume number and issue number are real. But none of those things are necessarily true together. Yeah. And that's where the problem is, and that's where you have to ask, okay, is this thing a real article?

Ross Perkins:

And you find out, as I did this past spring, that upwards of 60% of those things can be fake. So
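One practical takeaway from Dr. Perkins's story: an AI-suggested reference can be checked against a bibliographic database before anyone cites it. Below is a minimal sketch of that kind of check, assuming Python with the requests library and the public Crossref REST API; the function name, the sample citation, and the loose title-matching logic are illustrative assumptions, not a tool the guests mention.

```python
import requests

def looks_real(title: str, author: str) -> bool:
    """Ask the public Crossref API whether a cited title/author pair
    matches any indexed record. A match means the work exists, not
    that the volume, issue, or journal in the citation are correct."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{title} {author}", "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = (item.get("title") or [""])[0].lower()
        # Loose containment match; a real workflow would compare DOIs too.
        if title.lower() in candidate or candidate in title.lower():
            return True
    return False

# Hypothetical AI-suggested citation to vet before using it:
if looks_real("Diffusion of innovations in distance education", "Perkins"):
    print("A matching record exists; still verify volume and issue by hand.")
else:
    print("No match found; the reference may be fabricated.")
```

Even when a title matches, the failure mode Dr. Perkins describes is real pieces assembled into a citation that does not exist, so author, journal, volume, and issue still have to be confirmed together.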

Jared:

Yeah. That's exciting.

Rob:

Well...

Ross Perkins:

That was good times. Yeah.

Rob:

I mean, that's not a problem as long as you're not using it to back up your own writing. Yeah. Oh. So I think that's just... and there have been quite a few articles out there, I think, that have made it onto the interwebs, where people have found that somebody obviously used ChatGPT to write it, and it's like, oh my word.

Ross Perkins:

So, you know, another case of this, and I don't mean to just blow through all these stories, but I have another grad student that I've been working with, and she's written a really nice article. In fact, she's investigating prompt engineering. She wrote a paper, and I said, I really think you should submit this to a journal. And she has, and the feedback that she got from the journal was that they only accept... I don't remember what the number is. Like, their tolerance for AI in the text is like 30%.

Ross Perkins:

They said they use Turnitin and some other tool to detect AI. Now, I know she didn't use LLMs per se, but she did use Grammarly. And I use Grammarly; I was an English teacher, and I use Grammarly quite a bit. Yep.

Ross Perkins:

And it really helps me be more succinct. It helps me catch little errors that I didn't see, you know, a comma here or whatever else. She used Grammarly. And evidently, Grammarly is coming up with enough, and she's not an English major. You know, she's a really bright person, but she's not a natural writer per se.

Ross Perkins:

So she was accepting suggestions not only for grammar, but also for clarification, for wordiness. But there was evidently enough of what she was using to trigger an oh, this is AI language flag, or something like that. But, I mean, I think she was using the technology entirely appropriately. Yet it's being sent back to her as something that was AI generated, which is very interesting.

Dave Mulder:

There are gonna be new frontiers that we have to navigate. Right? This comes back to that question of what is promising practice, I think. I mean, to me, using a writing assistant is actually a really great strategy to use. But how do you navigate them?

Dave Mulder:

You're asking a great question, Ross.

Rob:

This is a very interesting conversation, and I think we're gonna have to continue it.

Jared:

Yeah. We're gonna have to come back at some point.

Rob:

We're gonna have to do, like, a season two.

Ryan:

Yeah. Revisited. Thanks for joining us for this episode of Transform Your Teaching. Don't forget to like and subscribe on your favorite podcast platform. Feel free to shoot us an email at ctlpodcast@cedarville.edu and check out our blog, cedarville.edu/focusblog.

Ryan:

Thanks for listening.