Pop and Play

Pop Offs are short bonus episodes where we bring you more timely shows between main seasons of Pop and Play. This week: everyone wants to talk about AI, in education just like everywhere else! Is AI the solution to everything? Is it the downfall of everything? Or maybe, just maybe, could it be something in between? Nathan pops off about the way some AI tools are currently being used for teaching and suggests alternative ways we could think about AI in classrooms and other learning environments.

Please take our listener survey! We could really use your insight and opinions, and we want to hear your ideas for Pop Off topics and future guests! 

For transcripts of this episode, to learn about our guests, and more, visit our website.

Our music is selections from Leafeaters by Podington Bear, licensed under CC BY-NC 3.0.
Pop and Play is produced by the Digital Futures Institute at Teachers College, Columbia University. 

This episode was edited by Adrienne Vitullo and Billy Collins. Website design and support by Abu Abdelbagi. Social media by Meier Clark, Blake Danzig, and Adrienne Vitullo. Produced by Haeny Yoon, Nathan Holbert, Lalitha Vasudevan, Billy Collins and Joe Riina-Ferrie at the Digital Futures Institute, Teachers College, Columbia University.

What is Pop and Play?

A podcast from Teachers College, Columbia University about play and pop culture. Professors Haeny Yoon and Nathan Holbert take play seriously. They talk with educators, parents and kids about how they play in their work and their lives, and why play matters.

The views expressed in this podcast are solely those of the speaker to whom they are attributed. They do not necessarily reflect the views of the faculty, administration, staff or Trustees either of Teachers College or of Columbia University.

Haeny Yoon:
Welcome to Pop Off, a new segment from Pop and Play, where we take a few minutes to chat about education, play, and pop culture as it's happening in the public conversation. If you didn't tune in, we already had one Pop Off, which was semi-okay. I'm your host, Haeny Yoon, and with me, as always, is the incomparable Nathan Holbert.

Nathan Holbert:
Incomparable, that's me, Nathan Holbert, and we're excited to try these Pop Offs. There's a lot of things that kind of come through the news or come through the popular conversation that we have thoughts on, and I'm sure many of you have thoughts on, and we wanted to have this as an opportunity every few weeks to talk about it, right?

Haeny Yoon:
Yeah, we basically decided to take our text threads live for you all to listen to as well. Okay, so Pop Off.

Nathan Holbert:
So I wanted to talk a little bit about AI this week.

Haeny Yoon:
Ooh, controversial.

Nathan Holbert:
Get ready. Get ready.

Haeny Yoon:
Coming in hot.

Nathan Holbert:
Specifically, though, I want to talk about AI as it's being used and aimed toward education. All right, so Haeny, have you used any of these AI chatbot tools?

Haeny Yoon:
Yes, I've used ChatGPT. I've used Claude AI. I'm going to listen to your Pop Off right now because I have been guilty of using it for some purposes.

Nathan Holbert:
Oh, don't feel guilty for using it. This is not meant to guilt anyone who has used or finds some value in AI, but I do have some strong opinions about how this technology is being used for education. Now, before I get deep, I want to briefly touch on how these things work, and then we'll talk a little bit about how they're being used in education.
Most of these chatbots are what's called large language models. Sometimes you'll see this abbreviated LLMs.

Haeny Yoon:
LLMs.

Nathan Holbert:
Yeah, yeah, so large language models, basically, the way it works is you feed a bunch of data from the internet into an algorithm, which just makes a probability map of which words are likely associated with other words. So for example, in my mind, the word pop is highly associated ...

Haeny Yoon:
With Pop and Play.

Nathan Holbert:
With the word play, exactly. So if I hear pop, I think play, right? And that's how a large language model works.

Haeny Yoon:
And then you think Nathan and Haeny.

Nathan Holbert:
Then I think Nathan and Haeny. I think hilarious. I think fun. I think immersive learning. This is how the model gets trained, so it gets trained in different ways, and certain words become more or less associated. So because of that, essentially, when you type words into ChatGPT, you think it's understanding you, but what it's really doing is just trying to decide which words go with what. And then it gives you a response, again, using probability to decide which words you're expecting to see next. And it feels like magic. I mean, you can ask it something. It seems like the system understands you, and then it seems like it's giving you this thoughtful response. There are some really cool aspects to this.
But what's happening now is this technology is being deployed for education, and the way it's actually working is essentially, we're taking ChatGPT, we're taking the Google AI summaries, and we're just saying, "Okay, that's now a tutor." And so literally, for many of these, you'll sit down, and it'll say, "What do you want to learn today?" Then it builds a lesson plan for you, and then it's sort of teaching. Its tutoring is essentially alternating between giving you a wall of text to read and then asking you quiz questions. "Read this. Let me quiz you. Read this. Let me quiz you." And as any educator knows, that is the best way to design a learning environment.
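[Editor's note: for readers who want to see the "probability map" idea Nathan describes above in code, here is a minimal, purely illustrative sketch. It is nothing like a production LLM, which trains a neural network over enormous datasets of tokens; the tiny training string is invented for the example. It just shows how word-to-word frequencies can drive next-word prediction.]

```python
import random
from collections import defaultdict, Counter

# Toy training text, invented purely for illustration.
training_text = "pop and play pop goes the weasel pop and play is fun"

# Build the "probability map": for each word, count the words that follow it.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Sample a likely next word, weighted by how often it followed `word`."""
    counts = next_word_counts[word]
    if not counts:
        return None
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

print(next_word_counts["pop"])  # Counter({'and': 2, 'goes': 1})
print(predict_next("pop"))      # usually 'and' -- after "pop", the model expects "and play"
```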

Haeny Yoon:
Oh, absolutely, absolutely. Quiz them to death.

Nathan Holbert:
Quiz them to death. How do you teach? You just tell. Teaching is telling, and you know if they learned it because they answered the quiz right or wrong.

Haeny Yoon:
Yeah, so did you try this?

Nathan Holbert:
Yeah, yeah, yeah, so I spent some time working with one of these prototype educational AI tutors, and well, let me give you an example. So I was trying to embody my eighth grader, first of all. I was trying to think, "What would an eighth grader do?" And so the idea is a kid's just sitting down here with their computer, having this conversation, so how would they interact?
And so at one point, I told it to tell me about mitosis. This is a topic that I remembered my son had learned in science class that I don't really know much about. And so as you would expect, it gives me walls of text to read, and it's all definitions. Here's the definition of this. Then it's this stage, and here's the definition of that stage. And then finally, after I read for a while, it wants to offer a quiz to me.
And I'm going to read to you. I took a screenshot of what it did.

Haeny Yoon:
Okay, this will be fun.

Nathan Holbert:
And here's what we got here. So the system says, "Now, how about we try a quick quiz to see if these definitions are sticking?"
Me, embodying my eighth grader, I don't like quizzes. Why do I need to take a quiz?
The system says, "That's okay. Quizzes aren't the only way to check your understanding. How about we try a different approach? I can give you some scenarios, and you can tell me which of these key terms we just discussed is involved. Sound good?"

Haeny Yoon:
And what did you say? Did you say yes?

Nathan Holbert:
Hold on. Before we dive in, I wanted to point out two things. One is these systems are so chipper and positive. It is creepy, how happy they are to be teaching you. And the second is that it says it's not going to give me a quiz, but instead of a quiz, it's going to give me a quiz.

Haeny Yoon:
Based on scenarios.

Nathan Holbert:
Same thing, right? Scenarios. So the bored eighth grader disappears.

Haeny Yoon:
An adult Nathan comes.

Nathan Holbert:
And the grumpy 43-year-old educational researcher emerges at this moment, and I say, "Hang on a second. How is that different than a quiz?"
And the system says, "That's a great question. The main difference between a quiz and a scenario-based assessment, I'm suggesting, lies in how they test your understanding. Quizzes: These are more straightforward and often involve direct questions about definitions or concepts. They are excellent for assessing your ability to recall information. Scenario-based approach: This method presents you with real-life situations or examples where you have to identify the relevant key terms. It helps you apply your knowledge and see how these concepts work in action."

Haeny Yoon:
I mean, okay, we're academics, so we could be like, "What do you mean? That sounds like the same exact thing."

Nathan Holbert:
Well, that's exactly what I responded. So I responded, "So wait, in one, I have to recall information, and in the other, I have to identify key terms. That seems like the same thing."
And what was lovely is the system's response to me was, "You raise a good point," and then it just quizzed me.

Haeny Yoon:
And then it's scraping your data and trying to use it for something else. It sounds like basically, they should have asked you, "Do you want to play a game show?"

Nathan Holbert:
That would be fun.

Haeny Yoon:
That's how they definitely define quiz, right?

Nathan Holbert:
Yeah, yeah.

Haeny Yoon:
"Or do you want to me to give you some boring scenarios that you could try to solve, so I can get smarter at this?"

Nathan Holbert:
But even the "solving of the scenarios" is just, "Tell me which definition I've just described."

Haeny Yoon:
Yeah, yeah, right, right, so it's a boring quiz show. "Do you want the fun game show, or do you want the boring one?"

Nathan Holbert:
I mean, it's insane. Recall and learning are not the same thing. This is an important idea that not everyone understands.

Haeny Yoon:
Excellent point, excellent point.

Nathan Holbert:
But recall is being able to just say the same thing that you heard someone else say. It's being able to regurgitate information. It's being able to quote or recite something that you've memorized. That's not the same thing as learning.
Learning is about understanding. It's about making connections between some piece of information or some new idea or practice and your prior experiences and other ideas and practices that you've been exposed to. It's being able to do something with information, and this system really is entirely built around recall and memorization. It's not at all built around learning. But there's this veneer of teaching. There's this veneer of assessment in these systems.
And in reality, it's just a bullshit machine. It doesn't understand what I'm typing. It doesn't understand what assessment is. It doesn't understand what mitosis is. It's just spitting information out at me, quizzing me, and then assuming that whatever I answer is evidence of what I know.

Haeny Yoon:
Yeah, you know what that's making me think of? Right now, in some schools, I think this kind of automated teaching is also part of the landscape. Maybe it's tangentially related, but there are actually videos that you could play of other people teaching in place of the actual teacher in the classroom. And that is so scary to me. Because, like you're saying, it's okay in some ways. I feel like ChatGPT, or any kind of AI tool, has helped me sometimes to understand something a little bit deeper. It's like doing a Google search. It's just a fast way to do a Google search.
But I think the issue is that if I want to know more about it, it's the interaction, and it's the motivation, and the tools of inquiry and meaning-making that get me there. And that's the same thing in classrooms. I could play a video that does a read-aloud, but it's not the same as me, as a human, knowing the kids in the classroom, seeing where their inquiries are, trying to follow a thread that they want to follow, and then creating more opportunities for them to engage deeply.

Nathan Holbert:
Building up the context and helping them to make connections.

Haeny Yoon:
Yeah, that's how you actually learn deeply.

Nathan Holbert:
Absolutely. So this is what's frustrating: there are ways in which these tools could potentially be useful. So one of the things that we like to do in these Pop Offs is to try to not just grump.

Haeny Yoon:
What?

Nathan Holbert:
Primarily what we're doing here. But we also want to try to offer some different perspectives, different directions, and I think it's important to recognize that the technology underneath the AI, the machine learning algorithms underneath it, those have been in use for more than a decade. And they've been really, really powerful for domain-specific tasks. If you have a specific task, a narrow task, these technologies can be really, really useful for helping you organize information, helping you notice patterns that are hard to notice. And those two things are really useful to do in a classroom.
I always feel like this is a surprise to people when I say it, but we actually know how to build good learning environments. We know what good teaching looks like. We know what good learning looks like. It's really hard to do, though, in the complex, messy, dynamic world in which these things happen. And so if we start thinking about these AI tools, rather than them being general-purpose, replace the teacher, replace the student, let's think of them as tools and assistants for educators and for students, right?

Haeny Yoon:
Yes.

Nathan Holbert:
So maybe we have a maker space where kids are building cool complex artifacts or devices. You could imagine an AI tool that helps organize the data that comes in from that: the pictures, the video, the audio. And then a teacher could use that to see, "Well, who's working on circuits? Who's struggling with circuits? Who's actually doing really well with it?" And now I could not only know who I should pay attention to and how I can navigate that space, but I could also think about, "Maybe we can bring these two kids together. They could support one another." And so it can be a tool that the teacher can use to really notice things that are hard to notice in those spaces.
We can imagine tools that are AI tools for creativity for kids. So maybe it's an assistant that allows a kid who's engaged in a design practice to get some immediate feedback or immediate advice.
So that's it. I'm done Popping Off. I think there's potential for these tools and technologies, but we really need to understand what learning and thinking and teaching actually are, and build it as a tool, as an assistant, not as the whole ball game.

Haeny Yoon:
Yeah, yeah, yeah, so what you're suggesting is that we should not introduce this AI tool as just another thing to enter into the market, but as an actual assistant and tool that can actually facilitate the learning process.

Nathan Holbert:
Yeah, and there's all sorts of other ideas that we could come up with. But it starts with moving away from thinking of it as a replacement for the worst versions of education, which we all already know are empirically bad, and toward thinking of it, as you said, as a tool, a thought partner, an assistant, supporting the domain-specific kinds of tasks and experiences that teachers or students need.

Haeny Yoon:
Yeah, thanks so much for saying that. I feel like that was really helpful.

Nathan Holbert:
Good.

Haeny Yoon:
Helpful to me, too.

Nathan Holbert:
Hooray, we've Popped Off.

Haeny Yoon:
We've Popped Off. Thank you. Before we leave, we want to ask you to take our quick and short survey to tell us what you're looking for in our next season of Pop and Play and what you've liked about our past seasons.

Nathan Holbert:
Yeah, if you have not subscribed, you should definitely smash that subscribe button.

Haeny Yoon:
Smash it.

Nathan Holbert:
And spread the word. Tell a friend. If your friend hasn't listened to Pop and Play, send them an episode that you like and see what they think. Share the word.

Haeny Yoon:
Yeah, and then finally, follow us on Instagram @popandplaypod. See you next time.

Nathan Holbert:
Bye.