Demystifying Instructional Design

Show Notes

Like the podcast? Leave a message or ask a question here:

In this episode I interview Autumm Caines and Lance Eaton about ChatGPT and how they see it affecting higher ed and instructional design from a variety of perspectives. This episode is insightful and also humorous at times.

Autumm Caines:
Autumm Caines is a liminal space. Part technologist, part artist, part manager, part synthesizer, she aspires to be mostly educator. You will find Autumm at the place where different disciplines and fields intersect, always on the threshold, and trying to learn something new. Autumm currently works full-time as an Instructional Designer at the University of Michigan – Dearborn and part-time as Instructional Faculty at College Unbound where she teaches courses in Digital Citizenship as well as Web and Digital Portfolio.

Lance Eaton:
Lance Eaton is the Director of Digital Pedagogy at College Unbound, a part-time instructor at North Shore Community College and Southern New Hampshire University, and a PhD student at the University of Massachusetts, Boston with a dissertation that focuses on how scholars engage in academic piracy. He has given talks, written about, and presented at conferences on artificial intelligence generative tools in education, academic piracy, open access, OER, open pedagogy, hybrid flexible learning, and digital service-learning. His musings, reflections, and ramblings can be found on his blog: http://www.ByAnyOtherNerd.com as well as on Twitter: @leaton01

Support the show

Please consider making a donation to my Patreon account to help support this podcast financially: patreon.com/rjhogue

★ Support this podcast on Patreon ★

What is Demystifying Instructional Design?

Instructional designers specialize in the creation of learning solutions. In this podcast, Rebecca J Hogue interviews instructional designers from a variety of different fields, with the goal of gaining a better understanding of what instructional designers do. The role of instructional designers varies across contexts including formal education, the corporate sector, healthcare, non-profit, military, government, and more. Further, the types of learning solutions created by instructional designers vary, including learning programs, semester-length courses, short workshops, eLearning, and job aids. Join Rebecca as she demystifies instructional design.

Welcome to Demystifying Instructional Design, a podcast where I interview various instructional designers to figure out what instructional designers do.

I'm Rebecca Hogue, your podcast host.

If you enjoyed this podcast, please consider subscribing or leaving a comment on the Show Notes blog post and consider helping to support the podcast with a donation to my Patreon account.

Welcome Autumm and Lance to Demystifying Instructional Design.

This is a bit of a different episode of a podcast.

We're going to talk today a little bit about ChatGPT, the AI phenomenon that is lighting up higher ed anyway, if not other spaces as well.

And so the first thing I'm going to ask is if you could do a quick introduction and give us a little bit of context.

And I'll start with my context is I teach instructional designers.

So I'm teaching at the master's level and my students are instructional designers.

And so for me, I'm looking at it largely as is this tool useful for my students?

That is the context that I have for it.

And I'll pass it over to Autumm and then Lance.

Hi.

I'm Autumm Caines.

I am an instructional designer in a faculty development office at the University of Michigan, Dearborn.

I always think it's important to add what office you're in when you're talking about instructional design, because as Rebecca's audience will know, the context in which you work as an instructional designer can have a huge impact on the type of work that you do.

So coming from the perspective of a faculty development office, it's more than just instructional technology and it's more than just instructional design.

It's also faculty development.

We can get into that a little bit more as we go forward.

But just to set up the context, I'm also instructional faculty at College Unbound where I know Lance from, and I actually know Lance from this podcast as well.

That's one of the reasons that I reached out to Lance: I heard him on this podcast, and then we ended up becoming colleagues at College Unbound. But I teach two classes at College Unbound.

One is Digital Citizenship and the other one is Web and Digital Portfolio.

So that's just a little bit about my context in terms of where I'm coming at things from, in terms of ChatGPT.

I have been looking into large language models since probably 2021, with the upset over the firing of Timnit Gebru at Google, from when she was the head of Ethics and was working on the LaMDA large language model, and I started paying attention to some of the advances that were happening in that technology around that.

So I do have a tendency to come at it from a little bit more of a critical and ethical perspective.

But I don't want to go for too long.

I want to turn things over to Lance and let him introduce himself.

Sure.

Hi, I'm Lance Eaton.

I am Director of Digital Pedagogy at College Unbound.

I would say what department I'm from, but we don't really have departments, because we're still a new enough college, and have that new college smell, where there are lots of different hats and we try to work through action teams as opposed to traditional departments.

But if I did, it would probably be Academic Affairs, which is I think where I mostly sit.

And my role is that mixture of working with faculty, doing faculty development and kind of helping to support them in the development of teaching and learning courses and using different tools.

And sometimes that tool is a pen and pad and sometimes it's an LMS and sometimes it's artificial intelligence.

For me, I've done a lot of interesting thinking around, or just looking at, AI for a couple of years now.

I just started to develop a more critical view of technology over the 2010s.

It started to pop up on my radar. And then from the start of the pandemic until about a year and a half ago, which I guess was halfway through the pandemic, or whatever phase we're in now,

I was working at the Berkman Klein Center for Internet and Society and was both helping to run programs and programming around Internet and society, and a lot of that focused on AI.

And so I got to see a lot of different people in the industry coming with those critical lenses.

And so that stuck in my head a lot, especially as ChatGPT became like the pet rock of 2022-2023.

And people really started to pay attention to AI generative tools in a way that they certainly hadn't previously.

Thank you both very much for your introductions and a little bit of context there.

I'd love to ask you a little bit about what guidance you're giving students regarding the use of ChatGPT. And if you could tell me what that stands for, that would be really helpful, because I think the audience would find that useful.

It's a Generative Pre-trained Transformer, ChatGPT. And I do think that Lance is the perfect person to answer this.

I'm going to answer really briefly and just say that I was a little taken aback when this was first opened up, and I really wasn't sure what to do as a teacher. As an instructional designer, I had some ideas, but as a teacher I felt a little lost.

And working at College Unbound, Lance is the person that I go to when I have questions, right? And I knew people were going to be coming to me.

So I went to him and the school just responded in an amazing way that Lance is going to tell you more about now.

Thank you.

Yeah.

So, again, being a fairly new school, we don't always have things to fall back on.

And also a lot of our practice and thinking is student centered.

And so I was playing with it, thinking about it as well, in terms of instructional design and students and whatnot.

And I get an email from Autumm saying, I think I have a student that has used it.

And so that generated a discussion between me and her. And even though it was at the end of the semester, with so much else going on, it gave us an opportunity to really think about, in this moment, what is most useful to do.

And this is where I think both our collaborative nature and the way that CU structures itself meant that going after the student wasn't the right approach.

There was a part of our reaction that was just like a hot dang! Like, go student for being that quick and figuring it out.

We had that moment and we celebrated that for a moment.

I want to give this person like some kudos for creativity.

We had the moment of just having that frustration and a mixture of just not...

Sometimes it's a little ego driven, but like not being happy that the student did it.

And then we just also were like, What's behind this?

And I think that's where we really got our momentum and what has gotten our momentum for our school as a whole is really understanding the students and their uses of this.

I go back to, I want to say it's folks like James Lang: when people are doing this type of thing, which is framed in all sorts of deficit language of they're cheating, they're stealing, they're what have you, they're often indicating things aren't working for them.

They're often indicating this is more of a sign for help, a sign for a lack of trust, lots of different things.

And so very quickly, me and Autumm realized like, why don't we try to find out?

And so our first goal was to craft an email.

And I sent it out to students saying, Hey, this tool exists.

And I think some students may have used it and we are interested in learning more about it in this non-punitive way.

We just want to understand what led you to it.

What might we be able to learn about why you found your way to this tool?

Nobody responded to that.

We hoped... nobody did.

That was okay.

My partner recommended, well what if you did an anonymous survey?

And so we put that out to our students at the very end of the semester.

We probably got, I think, four or five students. And this was at the end of the semester and during break, before everything really exploded, before you were seeing references to it in podcasts and in mainstream stuff. We had three or four students who were saying, yeah, we started to use this and we're using it in these ways.

And so we thought that was interesting.

And that kind of jolted me to think about how the semester for us started January 9th.

This is before most schools start their spring semester.

And so we needed something in place.

And so we developed a policy that we felt was like... recognizing we don't really understand the fullest implications of these tools.

We don't want to just do a blanket ban and be like, Nope, you can't use it under any condition.

And we wanted to create safe conditions, so that if students use it, they can identify that they used it.

So that can also invite questions and understanding and things like that.

I think the potential for it is as vast as some of the challenges and the concerns around it.

Right.

So there's a lot of knee jerk reactions.

There's a lot of valid reactions about the ways this interferes with how we demonstrate learning by students.

But I think there's lots of possibilities for us to leverage it.

If we can find versions of it that aren't steeped in all sorts of exploitative practices.

But I'll pass to Autumm for her take.

Yeah.

So when I saw some responses that I thought potentially could be synthetic text, my first thought wasn't like cheating.

My first thought, like Lance said, was curiosity.

It was like, wow, they figured this out. At the same time, I did worry.

It's so new.

It's such a new technology.

This is December of 2022.

It dropped on November 30th of 2022. The technology has been around for a while now.

We can go all the way back to the sixties if we're just talking about chatbots. ELIZA, Weizenbaum's ELIZA, was in the sixties.

But in terms of the transformer technology, the idea of using neural networks and large language models to create text so smooth and so clean that it sounds so convincing, right?

That's been around for probably about two years now, but you always had to pay for it.

It's always been behind some kind of paywall or part of some type of product.

I mean just to open it up to the entire world right at the beginning of finals for higher education.

That's not insignificant.

Whenever I talk about this, I always think it's really important to put the context around it.

Yes, it's a huge jump in terms of technology, right?

In terms of the tech that is going on, the interface, the way that you use it, the smoothness, all of that.

But a big part of the hype, a big part of everything that's going on around this has to do with the fact that it's free and the timing in which it was released.

Those are just two really big parts of it.

And especially in that moment, I was hearing all of these news articles and things coming out about people being really punitive with their students.

I read an article about a student who failed an entire class when it was discovered that they had used this tool.

And it's my understanding also, there's really not a way to prove without any doubt whatsoever that a student used this tool.

You can run the text through some of the detectors, but those are flawed.

They're flawed in my testing of this.

And I don't think anybody even tries to pretend that they can return a 100% positive or 100% negative.

There's tons of false negatives and false positives.

So I really... I guess I say all of this to say I'm not surprised that nobody responded when we sent them that email and just said, hey, we're just curious, did you use this? Have you heard of this tool?

Because like I said, there were tons of articles and news releases out there with people saying that they were punishing students for using this tech.

And I also think it's a little bit crazy to say that students wouldn't use it. If I were a student, I'd be curious.

I'd want to try it out at least, and I don't know if I would actually submit that work, but I'd be tempted to, especially if it was the end of the semester.

And I was really busy and I had a lot of pressure.

I don't know of any academic integrity statement from any university that mentions AI generated anything.

I don't know of any classroom policy that mentions any of this kind of stuff.

So I think it really does challenge us to think about what we mean by cheating, and to critically evaluate and retake stock of what we mean, of what's valuable in an education.

I felt really lucky that I was working for College Unbound during this time so that I could think about these kind of things with an amazing partner like Lance and with a school that is student centered and student focused.

That's where I'm at with it.

With students right now, I guess I'll throw in at the end.

My class policy right now is actually a little bit broader than the college's policy. I actually say it's fine if you want to use it.

Just tell me that you used it and describe how you used it.

I just think it's way too early right now to be punishing students for it.

You mentioned something about a one credit course, and I would love to hear more about that.

Yeah, absolutely.

To just build off of Autumm's point about her policy: we created a policy that we put out as, this is our temporary policy, and individual folks are welcome to adjust it as makes sense for their classrooms.

And I think that was, again, like we want to be both student centered.

We want to empower faculty to make the right decisions on behalf of their students.

That was another piece of this. As I went into the winter break, in conversations me and Autumm had, I just had this brain blast.

That's a Jimmy Neutron reference, for folks that are interested.

I just had this brain blast of, like, the quintessential way I could help figure out this challenge at College Unbound would be to do it in a way that was student centered.

And so, like, literally, I had this idea, I got out my phone, I texted the provost, and I was just like, what about this for an idea?

What if we do a one credit class that is filled with students who are going to play with, learn about and really think about ChatGPT and other AI generative tools.

And through that class we can create a recommended set of policies for institutional usage.

Instantly got back a thumbs up, let's do this. Which also meant that, oh, I have to figure out this course.

So that was my winter break.

And then I realized there was another iteration, or rather, again, my partner in conversation came up with this really great insight: what if you could also connect it to writing courses?

And so we're doing this one credit course in session one. We have 17-week semesters, and we do eight-week courses in session one, eight-week courses in session two, and then sometimes sixteen-week courses.

So in Session one, I'm doing a one credit course where we're going to develop a rough draft of policies around usage for faculty and students. In session two, I'm going to try to connect with students that are taking our writing course and have them sign up for this class, and it'll really be an opportunity to kick the tires on the policy.

So they'll be taking a writing class, and they'll be using this policy to inform how they're going to use the AI generative tools.

And that will be a bit of like really trying to figure out like, what are the holes in it, what are the ways that it really works.

And in conversation with the faculty teaching those courses as well, so that by the end of the semester, students will have had a central role in developing it, testing it, and putting forward the recommendations for it.

That's where we are with it. We're about three and a half weeks into the one credit course, there are about eight students in it, and it's been this rich conversation around them getting to use it and really starting to see the answers it comes back with.

And then also them delving into other content, other things that are helping inform their opinions.

Week by week, things change, because in the second week we got the Time magazine report about how, in order to do content moderation of all of the horrible stuff on the Internet that they crawled in order to make this, OpenAI was paying Kenyan workers $2 an hour for content moderation, which is just another way of saying they paid Kenyan workers $2 an hour to be traumatized by the worst of the Internet.

So every week there's these new things that help to flesh out our thinking about it and conversations that we have.

They hear things like this and they're like, this needs to be on like the UN's agenda.

This is not correct.

They get, like, really invested and start to challenge their considerations.

One of the earliest points that was just great was I had students read some of Autumm's work, raising some of those questions around what it means to sign up and get an account with OpenAI, where it asks for your name, your email, and your cell number.

And we got into a discussion around like digital redlining.

And our students are predominantly students of color.

And so, when I created the course and started to create the assignments, where the goal was for them to use or engage with these tools, I recognized they would have to create accounts or they would have to have access.

And so I've offered my credentials for them to log in and to use.

And as a result of that conversation, at this point I've probably had half the students ask to use those credentials, so that they don't have to give up their own personal information to an entity that has 10 billion dollars invested by Microsoft and is gathering up all sorts of data on its users.

So yeah, that is where we are now: we're moving into the point in the course where, besides playing with it, we're really thinking about what we would recommend for usage.

That's the next discussion we're starting to have.

Getting back to your question, Rebecca, in terms of how we're using it with students, I have been pretty vocal and I've written a couple of blog posts that you can link in your show notes, really being critical of the idea of using it with students.

I'm really hoping that those faculty who do teach in a discipline where it makes sense to use it, take a pause, take a beat, and think critically about how they're going to ask students to use it.

And I suggested some techniques that they could employ to make it so that they weren't forcing their students to use it, so that they weren't forcing their students to sign up for accounts, at the bare minimum.

But I guess I just want to say that I do recognize that it's discipline specific. And of course, I just love the idea of a course that is specifically designed to gather student voice and get student input about university policy, about college policy.

I can totally see it. That's a situation where, yes, the students should be informed to the point where they actually have experience with the tool so that they can give informed input.

But I love the fact, Lance, that you created like a shared account.

So that way nobody's putting their personal information in at the account level, but also it muddles things up and creates noise in terms of the questions that they're asking, right?

Because it's not just the creation of that account, but also the inputs that you're putting into it.

And so by having everybody share one account, I think that does a greater good in terms of protecting students.

That's your influence at hand.

Makes my day.

I thank you for that.

My next question is a little bit about what do we do for instructors?

Right.

What advice are we giving instructors?

How can they?

How should they?

How should they not?

What do we tell instructors about this new tool?

I personally don't think there's anything wrong with waiting before you use it.

So I guess: what do we tell instructors?

It depends on what needs the instructors are coming to us with, right?

So if they're coming to me and they're saying I'm worried about cheating, that's a different conversation than I'm intrigued and I want to use it.

Right?

So if it's I'm intrigued and I want to use it, my first response might be, Do you really need to use it?

Do you really need to use it right now?

What are you doing with it?

What are you teaching?

Is it directly related to what you're teaching?

But is there a way that you could use it and demo it for the students rather than making the students have accounts?

If you do want the students to have an account, could you have a shared account or could you talk to the students about their understanding of privacy and digital literacy?

Like I would say, if you're teaching like a digital literacy, digital citizenship course, where are your students at?

Is this like a level two, three, four kind of course? If this is intro, they might not have a good understanding.

Most people don't have a good understanding of digital privacy.

I just think that before you dive into using these tools, you should have a good foundation of data sharing and data collection and you should have examined some cases of where things have gone wrong, data breaches and things like that.

You should have an idea of what kind of things could go wrong.

If they're coming to me and they're asking me about cheating because they're worried about cheating, I usually try to do some damage control around helping them to move away from punitive approaches because I don't think they really do any good.

At the end of the day, I think they just degrade our students' trust in us and our students' trust in higher education.

And I guess I try to talk to the instructor and remind them how much is really built on that trust, how much of education comes from that place, and help them to realize that they're sacrificing so much more.

If they take a punitive approach, they're sacrificing so much more than they would be if they took a more open approach, trying to understand where the students are coming from and trying to figure out how this really aligns with their outcomes and the things that they're trying to do in their course.

So I usually bring it back to some type of helping them to articulate some type of policy for their syllabus, because it all comes back to expectations, right?

It all comes back to them really thinking about, in their heart, what's going on with them, their expectations for the course, the affordances that tools like ChatGPT and DALL-E and all of these other generative tools can offer students, and helping them to articulate to students why it's important and what they learn in the process.

In terms of ChatGPT, it's not a matter of we want to create essays.

That's not the point.

If that's the point, we're doing it wrong.

Right?

The point is thinking through and being nuanced in your thinking.

So nuanced in your thinking that you're putting it down on paper or on a screen and you're scrutinizing and evaluating every word, every paragraph, every sentence to make sure that things fit together and flow together.

It's that process of writing.

It's not the product of the essay. And helping faculty to have that conversation with their students, and to articulate that meaning to their students in a way that helps the student to understand, I think, is so much more powerful than wagging your finger and saying, if you do this, we're going to punish you.

I'm going to turn it over to Lance.

Let him talk about it a little bit.

100%.

Everything Autumm just said.

I think the only area I would add is that there's some great value in them using it to enhance some of their own work.

And I think that's great with the caveats of the potential concerns around privacy that we've already mentioned.

I think if they are going to do that, I think it is important for them to also be citing and identifying where their course, where their thinking is influenced.

And I come from this having worked with faculty who have done the things that they don't like their students to do.

And so just really demonstrating transparency again in how they're using it with their students.

The other thing that comes to mind in using it, and this is something, again, from conversations with Autumm, is really emphasizing, no matter what they think of it right now, and I'm going to take the quote from you, Autumm:

This is the, quote unquote, dumbest that AI is going to be.

So I see a lot of dismissal and I see a lot of it's fine, like it's fine.

I'll catch them anyways.

And there's a whole other discourse around that approach or around those concerns.

But I think people are undermining it, or not recognizing that now, with this research preview, it is getting better because we're all training it to be better.

And I think within that is really highlighting what I'm starting to see.

And I want to say it's Anna Mills and a couple of other folks, Maha Bali I think has done this as well, where they're sharing their dialogs, and you're seeing, through the questions and the further development and iterations of their questions, the actual dialog, and you get some really interesting, cool things. And I think that's a thing that is powerful and interesting and valuable for faculty and for their students to be thinking about.

I'm influenced by Warren Berger, who's written a couple of different books about questions, and I think this is one of those opportunities for us to really think about the power of questioning and what you need to ask good questions.

And so there's something within this that I think is a possibility, no matter the discipline, to really think about how we ask good questions, how we ask meaningful questions, and how we refine questions as a means of seeking knowledge.

But in order to do that, we also have to demonstrate some understanding in order to ask those deeper questions.

And I think there's a really rich opportunity there to explore within all of this as well, from both faculty and student side.

Yeah, that was part of what I was thinking with instructional design and with my students: even in order to use this effectively as a tool, you need to know what questions to ask.

And it's the same thing when you're doing analysis in instructional design.

When you start, you need to know what the important questions are, right?