25 Years of Ed Tech

Don't call it a comeback... A.I. has been around higher ed for a while.

Show Notes

In this episode of Between the Chapters, Laura chats with Chris Gilliard about artificial intelligence (A.I.) in educational technology, from Chapter 23 of Martin’s book. If you don’t follow the prolific Twitter account of @hypervisible -- you should. He’s been railing against the broad, sweeping claims ed tech vendors make about A.I. and the outcomes of these software/systems in higher ed for a while. How does ed tech codify teaching, learning, and administration needs at our universities and colleges with A.I.? If things seem magical or improbable, we need to ask more critical questions and build a better understanding of how these “black boxes” work for our campus stakeholders. We talk about how robots may or may not be coming for our jobs, and what we need to understand about the technologies implemented for our work.
Questions and thoughts for the community:
Are you seeking a robot application for your job? What do you think about A.I. in higher ed? Let us know -- send a message or tweet. Podcast episode art: X-Ray Specs by @visualthinkery is licensed under CC-BY-SA. Remix by Franny French.

What is 25 Years of Ed Tech?

25 Years of Ed Tech is a serialized audio version of the book 25 Years of Ed Tech, written by Martin Weller of the Open University and published by AU Press. The audio version of the book is a collaborative project with a global community of volunteers contributing their voices to narrate a chapter of the book. Bonus episodes are a series of conversations called "Between the Chapters" to chat about these topics and more!

"In this lively and approachable volume based on his popular blog series, Martin Weller demonstrates a rich history of innovation and effective implementation of ed tech across higher education. From Bulletin Board Systems to blockchain, Weller follows the trajectory of education by focusing each chapter on a technology, theory, or concept that has influenced each year since 1994. Calling for both caution and enthusiasm, Weller advocates for a critical and research-based approach to new technologies, particularly in light of disinformation, the impact of social media on politics, and data surveillance trends. A concise and necessary retrospective, this book will be valuable to educators, ed tech practitioners, and higher education administrators, as well as students."

Credits:
Text in quotes from the book website published by Athabasca University Press CC-BY-NC-ND
BG music Abstract Corporate by Gribsound released under a CC-BY license. Track was edited for time.
Artwork X-Ray Specs by @visualthinkery is licensed under CC-BY-SA.
Audio book chapters produced by Clint Lalonde.
Between the Chapters bonus podcast episodes produced by Laura Pasquini.

0:03
Between the Chapters, a weekly podcast discussion focusing on a chapter of the book 25 Years of Ed Tech, written by Martin Weller. Here's your host, Laura Pasquini.

0:15
Welcome to Chapter 23. It's 2016 and we're talking about the return of artificial intelligence. I'm joined by a friend of mine, Chris Gilliard. Welcome to the pod.

0:27
Thank you very much. Glad to be here.

0:29
Chris and I are longtime friends, and I thought he would be the perfect person to talk about AI and ed tech and all the things we might find creepy about that whole topic. I was surprised by this chapter in the book. Martin, just a side note: I read "the return of artificial intelligence" and thought, let's get into this, because it's right. It's not a new thing, but it seems creepier now.

0:57
The title felt like, you know, LL Cool J, right? Don't call it a comeback. Been here for years.

1:10
We just never really talked about the AI elephant in the room; intelligent tutoring systems were a thing back in the '80s. Reading this chapter was a throwback to how much we've expected to watch people. This goes back to prisons and the panopticon, even. Really, how we've structured learning in general is all about how we're being watched.

1:34
Yeah, and I mean, the only thing I think is, there are so many things that get called AI that are not. And that kind of misdirection, right, even the AI bro kind of thing, I think is a part that can't be removed from the larger discussion.

1:54
All right, let's break that down, because I don't know who's listening to this conversation. For our listeners, what are some things called AI that aren't?

2:04
Oh, gosh. So I just got asked a very similar question yesterday by Sean Michael Morris. And again, I should say: not a computer scientist, not a data scientist. But I think sometimes things as simple as, basically, spreadsheets and flowcharts and things like that, when they become automated, get called AI. It seems like a joke, it...

2:40
Seems like magic. So anything magical on a spreadsheet is AI. Okay.

2:44
Right, right. Well, that's the thing, right? That's what the term AI has become. You know, sometimes people say AI when they mean machine learning. But sometimes it's a stand-in term just for a computer process that we don't want to explain to you, or a computer process that we, even within the company, actually don't understand either, but we want this term to signify certain kinds of things about how our systems work.

3:19
That's a great callout. So, okay, I'm gonna go to the Wikipedia definition and read it out: artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality; the distinction between the former and the latter is often revealed by the acronym chosen, AGI or strong AI. By the way, I don't think all humans have natural intelligence. Same with dogs: if you're training a new puppy in the pandemic, I question some of that. So that's a really vague definition, and I guess that's why people conflate AI with machine learning, or with something smart that we automate on a dashboard or spreadsheet, but it's not necessarily that. How do you think of AI?

4:09
Well, I'm a little bit reluctant to answer this, because I think it's easier to say what it's not. I mean, I think there's a really popular misconception, and this runs from top to bottom, you know, and I think even computer scientists are guilty of perpetuating this myth, that there's some system out there, or some set of systems, that actually works like a human brain. Right? Which, you know, there isn't. But even the other day I was reading something from some high-ranking government official in the Defense Department, and he was parroting this thing, like, all these systems, they learn just like a human mind, and pretty soon we're gonna have systems that do this and do that. And like 99% of what he was saying was inaccurate, and not at all what systems are currently capable of doing. But I think that that's the... oh, there's a great article by Jathan Sadowski called Potemkin AI. What he talks about is, essentially, that the myth of what AI can do acts as a relevant and important stand-in despite the fact that it can't do those things. The narrative holds that space, in the hopes that at some point in the future it will be able to, a space people can dump all their beliefs and ridiculous claims into. And by the way, lots of AI is just human beings on some remote site, whether that's Mechanical Turk, or, you know, commercial content moderators, or what have you. It's basically human beings at a remote site, doing the work behind the curtain.

6:25
So it's not even real,

6:27
like a machine's not doing it? No.

6:30
So it's not the easiest definition I can give to listeners. I pulled up IBM's; I like that one. They say it can include an expert system, which may have someone working behind the curtain like the Wizard of Oz, like you just said. It's supposed to be a problem-solving application that makes decisions based on a complex set of rules, like if-then, but that's still programmed by someone, something, somewhere. And the example they gave was something equivalent to the Pixar character WALL-E: the computer that develops the intelligence, free will, and emotions of a human being. We're never going to get to that part. Do I want to be the blobs, though? Okay, so AI is the bigger picture, and within it there's another circle embedded called machine learning, and another circle of deep learning. We are not computer scientists; we can riff on what little we know about it. But that's the basics that we're going to get into. So if you have comments and want to talk more about this, send me an audio clip; I'm happy to add it to the podcast somewhere in these bonus episodes. But everything you said about the government person talking about it, I think higher ed had the same hopes: oh, AI will solve all the little administrative things we don't want to do, like grade assignments or help tutor learners. And that's just not true.
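To make that concrete, here's a minimal sketch of the kind of rule-based expert system that if-then description points at. Everything here, the function, the rules, the thresholds, is invented for illustration; the point is that the "intelligence" is just conditional logic someone programmed in advance.

```python
# Minimal sketch of a rule-based "expert system" for tutoring triage.
# Every rule and threshold is hypothetical -- the "intelligence" is just
# if-then logic that a person wrote down ahead of time.

def recommend_intervention(quiz_score: float, logins_per_week: int) -> str:
    """Return a canned recommendation from hard-coded if-then rules."""
    if quiz_score < 50 and logins_per_week < 2:
        return "Refer to the tutoring center"
    if quiz_score < 50:
        return "Suggest review materials"
    if logins_per_week < 2:
        return "Send a re-engagement email"
    return "No action needed"

print(recommend_intervention(quiz_score=42, logins_per_week=1))
# -> "Refer to the tutoring center" -- deterministic, no learning involved
```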

7:56
Yeah, I mean, well, there are systems, right, that will, for instance, grade essays. And, you know, as a prolific tweeter, one of the things is, within the last year there was an article in, I think, The Verge, where students figured out the pattern by which whatever technical system it was, was grading their essays. They figured out the pattern, whatever it was: certain length of words, certain length of sentences, certain kinds of grammar. And they were using that to pass their essays without really composing functioning, meaningful essays. So teachers were outraged and claimed the students were cheating.
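As a sketch of why that gaming worked, here's a toy grader that scores essays purely on surface features. The features and weights are invented for illustration, not taken from any real product; the point is that anything optimizing for surface patterns can be reverse-engineered and fed exactly what it rewards.

```python
# Toy essay "grader" that only checks surface features -- a hypothetical
# stand-in for the kind of system the students learned to game.

def grade_essay(text: str) -> int:
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    score = 0
    if len(words) >= 300:            # rewards length, not substance
        score += 40
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    if avg_len >= 5:                 # rewards "big" words
        score += 30
    if len(sentences) >= 15:         # rewards sentence count
        score += 30
    return score                     # 0-100, meaning never checked

# 300 long words with periods sprinkled in earns a perfect score --
# exactly the kind of non-essay the students discovered would pass.
gibberish = "considerable establishment. " * 150
print(grade_essay(gibberish))  # -> 100
```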

8:55
Hey, they just beat it and became smarter and grew their own neural networks. I think they just became little AIs themselves.

9:04
And so I tweeted and said, stop calling this cheating. And of course a lot of computer scientists and data scientists and other folks in tech follow me on Twitter, and lots of them clapped back and said, stop calling this AI. So yeah, I mean, there it is. And again, it's part of a larger problem with how we understand ed tech and how we adopt systems, which is that I'm sure the people who sold that system said, oh, it's this advanced neural network, and it can grade papers better than a human being, and faster, and free teachers up to do the things that really matter in the classroom. I'm sure they said that, right? You know, I just made that up off the top of my head. But you're...

10:00
probably right.

10:05
That's guaranteed. If we go back and look at the promotional materials from some of these companies, it's pretty closely aligned with what I just said. Well, that's a larger narrative. Yeah, exactly, of ed tech companies and tech boosters making all kinds of claims about what their systems can do, and often administrators, and in some cases instructors, buying those claims without fully investigating them. So in a lot of ways, not much different from what the DoD does, unfortunately.

10:52
Well, something you're saying reminds me that we are never part of it. And when I use the term "we": when you're in higher education, whether you're an educator, a researcher, an instructor, or an administrator, you're not typically part of the dev and design team that's creating any system or tool. It's a rarity that you actually have embedded teams that come at something from different angles to support it. You could help some of these creations by addressing the issues early on, instead of having that salesperson give you the script you described, "this will solve your problems," and I believe that's what happens. Why not work with a technology company or an ed tech developer in the early stages and say, this is the problem, can we work to build this thing? That doesn't often happen; you just get given these things. Here's something like Turnitin: it will automatically read and evaluate your paper and let you know if there's plagiarism. These tools are kind of trying to make your teaching life easier. But is it easier? It doesn't always make sense. Yeah.

12:07
I mean, this is a problem with tech writ large, right? Why we find ourselves where we are as a society, in a lot of ways, is because the people who make these systems in no way understand the systems or institutions that they claim to be building these things for, and also don't threat model, or imagine all the ways that these systems will be used. So they either don't imagine or don't care. We could dive deeper into that if you want. Yeah.

12:51
Yeah, right. I think it's just that they don't have the same perspective or lens on how it could be used to do harm.

13:01
That you do, yeah. And because we're not often in the room, this is what we get. I mean, the textbook case, I think, from the pandemic is Zoom. As people migrated heavily to Zoom, you know, when the pandemic was ramping up in March, that's when we started to see Zoom bombing and other kinds of targeted harassment and things like that. And the CEO of Zoom came out and said, oh, we never imagined Zoom would be used in this way. And it's like, wait, how is this possible? I mean, I know how it's possible, right? Because I can imagine what the team of developers at Zoom looks like. They don't look like me.

13:58
Whatever do you mean, Chris?

14:03
Because, you know, enterprise software for business applications, certainly that's a different set of requirements than for an eighth grade class, or for a college class, or for a grad student defending their dissertation. But the idea that you would create this system and never ask the question: oh, how will people use this to target and harass marginalized and vulnerable people? Right? And again, Zoom is a great example because everyone was forced onto it during the pandemic, but this is also true of pretty much every other technology in ed tech that we could name, whether that's proctoring systems, some kind of plagiarism "detection," air quotes, system, or a learning management system. With many of these things, the ideas about who's going to use them, how those people might be vulnerable, what ideologies are embedded in the systems, what teaching and learning mean, what's private, kind of all the things that we always talk about, are often not conceptualized until harm is done. And then often those fixes aren't even made after that.

15:47
Yeah, for anyone who's listened to Chris's and my talks on probably other podcasts and at other conferences, we've run our soapbox on surveillance. I think it's the agency, the agency and choice in what people can use. And that includes learners: we don't give our students choice, let alone our instructors, or the adjunct instructional staff that's been tooled up in this digital world now that we're remote or HyFlex. This is happening in K-12 and in higher ed. We're seeing an onboarding of people to say, you're the in-person teacher, you're the virtual one, rotate and use the tools that someone else said we should use, because that's who sold it to us. It's funny, I only use Zoom to podcast now, to talk to my friends, because I work at a tech company. This is a secret, or not a secret, for people: I work at a tech company that won't approve Zoom. So that should tell you something. Information security won't approve some of these systems, and we need to think about that. What does that mean, if you're forcing people into these spaces, or making them show up in different pockets? We don't give people choices about where they put data in the learning management system, and we've ranted about this in previous episodes of Between the Chapters. But I think you're right in saying there's been a forcing of tools, and this happens to be where AI sometimes lives, among other aspects of AI; we're just in it now for teaching and learning. And who makes those decisions? Do they actually teach or learn? Probably not.

17:29
Yeah, and I mean, there are so many. A more recent example is in The Markup, right, Julia Angwin's journalism outfit. There's a journalist named Todd Feathers who wrote a great piece about schools using software that assigns students risk scores based on, you know, their risk of dropping out, or how likely they are to persist at the institution and in a particular program, things like that. I'm not going to do it justice by describing it; go read it, we'll put it in the show notes. Absolutely. But what it turns out is that neither the professors nor the administrators knew how the system worked. One of the things the system was doing was taking race into account and using it as one of the metrics for determining this risk score. So, lots of problems with that. But the people who were using it, and in my estimation using it against students, didn't even know how it worked, right? The people who bought it didn't know how it worked, or at least if their accounts to the journalists are to be believed. So they procured this system and didn't know how it worked until someone else dug into it. You know, and again, layered on top of that are the claims about what AI can do.
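To illustrate the mechanics being described, here's a hypothetical sketch of a risk score as a weighted sum of inputs. The feature names and weights are invented, not from the actual product in the story; the point is that a buyer who never inspects the feature list has no way to know a variable like race is being weighed at all.

```python
# Hypothetical "risk score": a weighted sum over whatever features the
# vendor chose. All names and weights here are invented for illustration.

WEIGHTS = {
    "gpa": -0.5,              # higher GPA lowers "risk"
    "credits_attempted": -0.1,
    "missed_logins": 0.3,
    "demographic_flag": 0.8,  # a protected attribute, quietly used as a predictor
}

def risk_score(student: dict) -> float:
    """Opaque to the buyer: multiply and sum whatever is in WEIGHTS."""
    return sum(WEIGHTS[k] * student.get(k, 0.0) for k in WEIGHTS)

# Two students identical on academics differ only on the demographic flag,
# yet come out with different "risk" scores.
a = {"gpa": 3.2, "credits_attempted": 12, "missed_logins": 1, "demographic_flag": 0}
b = {"gpa": 3.2, "credits_attempted": 12, "missed_logins": 1, "demographic_flag": 1}
print(round(risk_score(a), 2), round(risk_score(b), 2))  # -> -2.5 -1.7
```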

19:12
Yeah, so this ties into the conversation I had with Dragan and Anne-Marie around learning analytics, the idea of predictive modeling. This is an algorithm case, right? It's claiming predictors of student success, and it really means that people using these systems and tools don't understand what's being evaluated along the way. In the case you're talking about, the system would flag people if they don't show up to class or they're not participating. Maybe they're just reading outside the class like they should be, or reflecting. A really good point from that episode was that not all learning can be seen, and it shouldn't all be seen. If we're letting people process and digest and think critically somewhere else, off screen, they shouldn't always have to be on a screen and be seen.

20:09
Yeah, I mean, there are some widespread and very harmful myths about what learning is, what it looks like, what attention is, and whether or not you can tell if someone is performing those things by looking at them. You cannot. You know, I learned this lesson a long time ago at my institution, because there was a student who, every time we had something due, was on his cell. I was like, oh my gosh, so at some point I walked over to him and said, hey, you know, what are you doing? And he was composing his paper on his phone, even though we were in a computer lab, because he had gained such proficiency with his device that he was better off, in his terms, and those are terms that I should respect, composing on his phone and then typing it on the computer in class. But to just look at that person, it would have been easy to assume he was doing something other than composing his paper. And, you know, a lot of these systems are based on mistaken notions like that, right? They are beliefs like that, encoded. Yeah. Like...

21:36
Activity. Like, if someone's online. I only taught online, and I think of so many times we're like, well, they're just not interested or invested or motivated. Well, no, maybe they're working 45 to 50 hours a week, plus taking three classes, and have two children, and also are dealing with living in a car. There could be so many other things we don't know beyond that screen, right? And so it is kind of hard that we're creating these spaces that require a click log or checklist or check-in. It's really, like you said, performative, and not right.

22:13
Yeah. And, I mean, very much mistakenly defining what learning is, right. I think one of the most important things I've learned in my journey, not only as a student but as a teacher, is that it's often going to look very different for different people. Right? I mean, many outside observers don't understand how I get work done. But, you know, for instance, as I mentioned before: prolific tweeter? Well, as it turns out, it's a pretty good mechanism for trying out ideas and composing things. But to the outside observer it might just seem like, oh, wow, this dude has a lot of time to waste on Twitter. I mean, both could be true.

23:15
No, but I think you're right. I've been processing things out loud with folks; some might say I have a podcast problem. But it also means we're digging into issues. I sit back and edit this later, I reflect on it, or I look for things, or I read a bit more, and it really internalizes some of these ideas I'm talking about. So why does learning have to be me showing up to a Zoom room, when it could be me going back to listen to something we've talked out? And you've given me like four articles I already have to read right now and put in the show notes, which is great. But that's really part of that process.

23:53
Yeah. And, you know, I think this is my problem with anyone, but particularly with technologies, that seeks to standardize things that are highly individualized, encode them, and further punish people who don't adhere to what the systems have established as a norm. And so this is why I spend a lot of time railing against AI and ML, or so-called AI and ML, in ed tech.

24:41
Yeah, it's like baby AI, little-a AI, because we don't really even have it; if we actually had AI, it would be such a different world. But it's funny, this narrow view. Martin talks about it in his chapter: he says it's a very distinct type of AI, not the general AI most people have in mind from the headlines, and not actually what ed tech is using. So it's kind of bananas to me to think about "the return of AI." Did it actually even exist? It was these little things: let's have a chatbot to help someone that needs student services, or an orientation, or the library. And I was like, librarians have been doing that for years, and they didn't need to be bots. Or, let's have a grading system that auto-grades when they hit this many words on a reflection post. Sure, if that's all you want, they just put the words in and it says done, complete. But that's not really AI. It's just automating things that people are making learners do to get a point or a carrot, I guess.

25:49
So... oh, gosh, the AI chatbot. Oh, my God.

25:54
You have one of those, obviously, right?

25:56
Well, that story got so much traction. I actually wrote an article about this in the Chronicle, and I'm amazed they printed it, because I spoke out very harshly about an individual at another institution who ran an experiment on his students where he constructed a chatbot and had students interacting with it all semester. But he didn't tell them it was a chatbot. He told them it was a grad assistant.

26:29
It's interesting. Tell me more. Yeah.

26:32
So, I mean, students tend to ask repeated questions, right? If you teach a lot of students, a lot of them are going to ask you questions that you feel like you answered on the syllabus. So this guy's tactic was, he made a chatbot based off of, you know, IBM Watson. He named it Jill, and he said he had a grad assistant named Jill. Anytime a student had a basic question, he would refer the student to Jill Watson. And at the end of the semester, the big reveal: he told them that they had been interacting with a chatbot. And this was widely reported in many venues that cover ed tech, right? Like, oh, AI chatbot fools students into thinking it's a real grad student, or whatever. From the way I'm describing it, you can tell I have lots of problems with this.
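For a sense of how unmagical the underlying mechanism can be, here's a minimal sketch of a syllabus-FAQ bot that picks the canned answer with the most keyword overlap. This illustrates the general technique only; it is not the actual Jill Watson implementation, which was built on IBM Watson.

```python
# Minimal FAQ-bot sketch: match a student question to canned syllabus
# answers by keyword overlap. Purely illustrative -- not Jill Watson.

FAQ = {
    frozenset({"due", "date", "assignment"}): "Assignment 1 is due Friday at 5pm.",
    frozenset({"office", "hours"}): "Office hours are Tuesdays, 2-4pm.",
    frozenset({"late", "policy"}): "Late work loses 10% per day.",
}

def reply(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    # Pick the canned answer whose keyword set overlaps the question most.
    best = max(FAQ, key=lambda keys: len(keys & words))
    if not best & words:
        return "Please check the syllabus or come to office hours."
    return FAQ[best]

print(reply("When is the assignment due?"))
# -> "Assignment 1 is due Friday at 5pm."
```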

27:51
I thought you loved it and were going to implement it now. I don't know what you're talking about.

27:57
Right. Primarily that he, you know, experimented on students without their consent. That's, like, a main problem, but there are others. But yeah, the chatbot as exemplar of AI has gotten so much mileage in the last several years, right? Even now, lots of institutions, I mean, I shouldn't say lots, I don't know the exact number, but I feel like I see stories about this fairly often: universities are investing in these different kinds of chatbots to assist students, whether that be first-year students or students who are struggling in some manner, things like that. This is often pitched as a solution, like, AI chatbots. I'm very skeptical of this as a solution to the problems that students are experiencing.

29:06
Yeah, and I think it's tied into some of the data piece with learning analytics that chapter 21 goes over, because you're making decisions on an algorithm, on some predictive factor that may not really be predictive. It could be student success, it could be retention, it could be whether they're in class or not. But these are all assumptions made in vacuums. If the people who come to school to learn were just people who learn, that would be fine, but they have all these other factors in their lives. And that includes the people who work at the university: I think about the professional staff, the academic staff, teaching faculty, and those on the periphery who interact with the university as adjuncts. There are all these assumptions, like, well, these are the things that show good performance, whether you're a student or learner or even an instructor. We have these decisions kind of already baked into some of the ed tech systems, everything from the learning management system to maybe the people portal, the CRMs, to whatever else you're using to flag "engagement," whatever that means. I'm using air quotes. An interaction gauge, is that engagement, quote, air quotes?

30:30
I don't know, right? And I mean, again, we already know some of the things that work, right? Transportation, childcare, shelter, food. So investment in those things by institutions, it's my assertion, would go much further than a chatbot. But, you know, as people say sometimes, I don't think a lot of institutions are ready for that conversation.

31:01
No, and it's what you said. I think the chapter ends with: we will need humans to do interventions, it's not going to be automated. And I remember talking to a group of, I think it was academic advisors, when I went to Australia and gave a talk, and they're like, the future won't be automated, because it's not that simple an if-then. As an advisor, I'm not going to be doing a triage like: if this doesn't happen, then let's go this route and pathway, and then an if-then statement takes you here, now you need career counseling, away you go. It's not that simple, because those are going to be natural conversations we always need to have. This was 2015, so before the year this chapter covers, and they were talking about big data, predictive and learning analytics, and how that impacts both the financial side of advising students and the retention of students. And I was like, maybe they need to drop out because someone passed away in their family and they're the breadwinner. I don't know; the if-then statement is not going to work for your model of whatever you're trying to do to up the success of your learners and keep them in your college or university.

32:15
Right, right. Yeah.

32:17
Do you think 2016 was the height? Because I feel like this happened before. Obviously, the chapter talks about earlier AI examples, but is this where some of this came out? Did you experience this before 2016?

32:33
I don't remember. I mean, maybe it was the height, right? Part of the reason for that is that it might have been the last year that tech got a free pass. That's not to say, and I'm certainly not going to say, that many people haven't been highly critical of tech and platforms for long before that. But in a lot of popular accounts, it might have been the last year that tech got a free pass, because of the 2016 presidential election, right? Because that was the thing that woke a lot of people up to the real-world effects of some of these platforms, like Cambridge Analytica and Facebook, and, you know, the ways that people are able to amplify and magnify their hateful messages on Twitter and things like that. Again, it's not that people didn't know this before, but it could be argued that 2016 was when it became super clear to more people than it had been previously.

34:04
I think you're right, we came to a head on platforms and questioned whether or not we should be in these spaces. A lot of those platforms, the big public social media ones, were changing their policies; I think it was their privacy policy, the data policy, that changed. And then we also saw a lot of pushback on where we want to be, and it was also a couple of years after a couple of shootings, whether it was Trayvon or others. There was a lot of push in 2016. I remember early that year there was a community on Facebook, when I was on it loosely, of student affairs professionals with 30,000 people, and they had like a blackout. It was a push for racial voice and equity for the month of March. 2016 was a volatile year; that was also the election in the US that got taken over by ol' 45. There was so much going on that year that it was kind of a reckoning. And you're right, maybe that was the tipping point of what people were going to put up with, or when people realized what's really going on online.

35:19
Yeah. And I think we're certainly still seeing a lot of these narratives about AI, but, very fortunately, met with a lot more skepticism. I mean, there's the thing I've been banging the drum on for a while, right, remote proctoring, and all these claims by many of these companies that they have AI that can detect cheating. But yeah, lots of skepticism, lots of student pushback.

35:59
We're not going to solve AI, but it is something that's wrapped into other things that people are thinking of. And I know that at the end of this book, in that last chapter, we'll get into talking about dystopia and ed tech. This is part of that; it adds to the view of how these tools are used. It's not just that it's a tool; there's so much more baked into it, and I think Martin alludes to that in this chapter. Were there any questions or things you're thinking of that you want to pose to the community, or to Martin, to think about when it comes to AI?

36:34
I guess I would go back to, not Martin, I mean, I think the chapter does a great job of representing this and accounting for it. But one of the things I would just encourage people to think about, or be wary of, is that idea of what AI can and can't do. When we're told that a system can do a thing that sounds improbable or impossible, all signs point to the fact that these companies are probably lying, right? So if they say they have a system that can guarantee, with some measure of certainty, which students will persist, or what grade students will get in a particular class, or how they're feeling or whether they're paying attention based on tracking their face: what is essentially being said is that they can tell the future and they can read your mind. Right? It cannot do that. So, be wary of claims about things that sound fantastic. Things that are maybe not possible don't suddenly become possible because we don't understand the technology. And I think that's a persistent move: the term AI is used to occlude the facts and make those claims seem plausible when they're really not.

38:27
Yeah, I think Martin's callout, that generalization and performing these tasks we think are improbable should be questioned. And I think about bringing other people to the table. I was reminded today of a tweet Patrice put out asking, is there a survey our students could use to evaluate a tool? I was like, that's great. We should have more people around that stakeholder table to say, is this something we want? Students and learners should be part of that, and maybe as instructors you have opportunities to ask questions before something's just given to you. And I don't know if anyone will ever return to conferences; I will not miss the vendor hall. But ask vendors questions that unpack some of these fantastical ideas of improving your connection and automating your grading. Ask them how this happens, and what the impacts are. I think we could do better at taking off those rose-colored glasses and turning away from any shiny pamphlet or whatever setup they have at a conference. I hope that doesn't happen again; I hope we move away from that. But I do think we can ask better questions about what the impact is, and what kind of agency and choice people have. AI won't be an obvious thing to you in a tool; it might be something baked into the platform, in what information it gives as output or what data it pulls in. So those are some of the questions; maybe ask more about that.

40:09
Yeah, yeah, totally agree.

40:12
I'm scared about this tracking of information; as you said, it'll know my emotions and how I'm going to do on my next test. Dr. Gilliard, I'm not showing up to your course again. All right, let's end on a positive note. I do think humans are going to be important in teaching and learning, and I like that callout Selwyn makes, those six reasons why AI will never take over from human teachers: the social connection, the cognitive work, the talking out loud, the improvisation and making do. Do you think the robots are coming for your job?

40:49
I mean, they are. Absolutely. But, you know, I think there's a fundamental misunderstanding, right? I teach writing, and I think there's a fundamental misunderstanding about what it means to write, one that I'm continually trying to explain to people, and in doing so I gain a greater understanding myself, which is how writing is social. We have this great misconception that people sit alone in a room by themselves and write things, and then they come out and show it to the world. And that can happen, but that is often not how the best and most important writing comes to exist in the world. It exists through collaboration, and feedback, and revision. It's not likely that a computer is going to be able to do that well anytime soon. Unless you're writing for a computer, and we already have coding for that. So I don't really know that computers need to read essays.

41:59
Yeah, I don't think they're going to take that away. Lots of what we do is more iterative: ideation and, like you said, revision. Your first draft is not going to be the last draft of anything, even beyond writing. So don't worry, the robots aren't coming yet. But if they do, let's ask them some questions and understand what they're doing. So that's my thought. Chris, thank you so much for having this conversation. I don't know if we solved AI, or anything. And, you know, if anyone's got us, tell me what you think it really is; if we're totally wrong, I'm happy to take your input, so let us know.

42:35
Well, thanks for having me. Yeah, really appreciate it.
42:37
Bye, robot Chris.

42:45
You've been listening to Between the Chapters with your host, Laura Pasquini. For more information or to subscribe to Between the Chapters and 25 Years of Ed Tech, visit 25years.opened.ca