Make It Mindful: Insights for Global Learning

Justin Reich (MIT) on “local science,” AI hype cycles, and why schools need to do less.

Justin Reich returns to the podcast with an “applied historian” lens: not dismissing generative AI as just another hype cycle, but insisting we treat early classroom uses as experiments—because history says our first instincts about new tech in schools are often wrong.

We talk about what Reich learned while making the excellent podcast The Homework Machine (hundreds of teacher conversations, dozens of student interviews), why “policy” isn’t enough without social movements, and what educators can do right now while the research base lags behind practice. The throughline: experiment with humility, collect local evidence, share what you’re learning—and beware the trap of “efficiency” that just increases the amount of work schools try to do.

A late pivot goes straight at the emotional core: if Justin had the power to “turn off” AI forever, would he? His answer is less about tools and more about what developing humans most need—time with their own thoughts, and time with each other.

Key moments (approx.)
00:00 — Back on the show + Seth’s “homework” assignment: The Homework Machine
02:18 — “It is different… they’re all different”: tech revolutions and the education pattern that repeats
06:47 — Tech won’t solve inequality; social movements change norms, politics, and resource distribution
09:05 — The web literacy cautionary tale: 25 years of teaching the wrong methods
11:19 — “Local science”: teach as experimentation, then look hard for evidence it helped
15:11 — When there’s no historical control: talk to students, use “Looking at Student Work” protocols
18:49 — Why “big science” takes so long—and why expert practice has to exist before we can teach it
20:45 — The “copilot” problem: even elite engineers don’t yet know how to train novices well
32:46 — What’s likely to happen: business incentives degrade “consumer” tools schools rely on
35:06 — “Subtraction in Action”: schools are maxed out; improvement often requires doing less
38:57 — Listener question: if he could turn off AI, would he?
40:33 — The case for schools as a refuge from attention-harvesting tech: boredom, thought, and people

Themes you’ll hear recur
Reich draws a sharp line between healthy teacher experimentation and premature system-wide adoption. He argues schools can run experiments, but they should label them as experiments, gather some evidence (even simple comparisons), and share results—because otherwise we risk repeating the web-literacy story: good-faith instruction that felt right, wasn’t obviously failing day-to-day, and later turned out to be counterproductive.

He also pushes against the fantasy that AI will “solve” structural problems (inequality, overburdened systems, disengagement) without political and social work. And he returns to a point that’s easy to miss in the AI noise: when systems get “more efficient,” they often don’t get simpler—they just try to do more.

Closing thought
If you’re waiting for definitive answers about “best practice,” this episode is a reality check: we’re early, the expert playbooks are still being invented, and schools can’t afford to improvise at scale. But you can run local experiments with honesty, protect what already works, and prioritize the rare thing schools can uniquely give students now: space away from the machines—space for thinking, writing, and relationship.

Support for Make It Mindful is brought to you by Banyan Global Learning, creating live, human-centered global learning experiences that help students use language in real contexts—through virtual field trips and international collaborations.

Creators and Guests

Seth Fleischauer
Host

What is Make It Mindful: Insights for Global Learning?

Make It Mindful: Insights for Global Learning is a podcast for globally minded educators who want deep, long-form conversations about how teaching and learning are changing — and what to do about it.

Hosted by former classroom teacher and Banyan Global Learning founder Seth Fleischauer, the show explores how people, cultures, technologies, cognitive processes, and school systems shape what happens in classrooms around the world. Each long-form episode looks closely at the conditions that help students and educators thrive — from executive functioning and identity development to virtual learning, multilingual education, global competence, and the rise of AI.

Seth talks with teachers, researchers, psychologists, and school leaders who look closely at how students understand themselves, build relationships, and develop the capacities that underlie deep learning — skills like perspective-taking, communication, and global competence that are essential for navigating an interconnected world. These conversations surface the kinds of cross-cultural experiences and hard-to-measure abilities that shape real achievement. Together, they consider how to integrate new technologies in ways that strengthen—not replace—the human center of learning.

The result is a set of ideas, stories, and practical strategies educators can apply to help students succeed in a complex and fast-changing world.

Seth (00:00.683)
Okay, testing, testing.

Justin Reich (00:01.916)
Any better, any better, any better? Success.

Seth (00:04.558)
Sweet. Okay. Thank you. Thank you. All right. Here we go. Justin, welcome back to the podcast. It's funny: not too many guests have given me homework, but you did. And part of that was The Homework Machine podcast, which I'm so happy you gave me as an assignment, because I am now a huge fan.

Justin Reich (00:12.446)
Thanks for having me, Seth.

Seth (00:29.614)
There are like two pieces of media that I have recommended to basically everybody. One of them is Project Hail Mary by Andy Weir, a book that I absolutely love and treasure. And the other one... Yeah, and it's for everybody too. It's so tender, beautiful, nerdy. I love it. The other one is The Homework Machine. Basically anybody who will listen, I recommend this podcast to. I've had a lot of conversations myself here on this podcast, including with you.

Justin Reich (00:38.602)
It is truly outstanding.

Seth (00:58.706)
And I see the value in these one-to-one conversations. It's why we're having one now. But your project was different. You went out and you talked to hundreds of teachers, dozens of students, and it shifted the conversation from "Hey, what could this be?" to "This is what's actually going on." And I really appreciated that, when you were on here last, you had a very sober view of all this.

That has not changed at all. And even at the top here, when we were chatting a little bit, you were calling yourself an applied historian: someone whose job, when these tech hype cycles come around, is to tell everybody what happened last time and bring a certain measure of, okay, this is what didn't work, let's see if we can do it better this time. And obviously we're in that tech hype cycle right now.

My question is: is it possible that this one is different? Like, this is a Swiss army knife with an infinite number of knives in it. Obviously the internet was kind of similar to that. And you're nodding your head. Are we simply in this tech hype cycle, or could this be something different that needs a different approach?

Justin Reich (02:18.3)
It is different. They're all different. It is very easy to dismiss the magic of prior technologies. I don't know if you've encountered this argument; in my head I call it "the web was meh." People are like, "AI is really the big deal." You'll be like, "Didn't we just do the web?" "The web wasn't that big a deal. AI is really the big deal." And I was like, man.

We just put a handheld supercomputer in the pocket of every person 13 years old or older, connected them to the entire corpus of the world's information, every person they know, billions of people in a networked world. And actually, it may not have helped learning that much. And it might actually not have been great for literacy and some other things like that. Really, and this is the one that's different? So now...

Seth (03:00.398)
Might have broken us as a people, really.

Justin Reich (03:13.084)
Now the handheld supercomputer can write text, and, you know, that's the thing which is gonna be transformative. I mean, I also think people shouldn't underestimate when film strips came out and things like that: the idea that you could see events unfold that were not directly in front of you was totally transformative to human society, and people in those moments thought it was gonna be totally transformative to education, and it didn't help that much. And so,

A thing that I try not to be is ideological. I don't want to come across as saying every technology is the same, they are all incrementally helpful, nothing will ever be different. It's totally possible that it could be different, and we should be attentive to evidence that it's different. But to the extent that evidence points us in the direction of, actually, this is operating a lot like previous generations of technology have, then,

you know, we should be responsive to that kind of evidence as well. I mean, either way, to some extent it doesn't matter to me, other than that I'm rooting for anything that helps kids learn. If LLMs help kids learn, if rocks help kids learn, anything that helps kids learn, I'm rooting for. But I do think there are lots of things we can learn from the introduction of prior technologies that can help us make fewer mistakes and do more good things than we did the last time around, or the last many times around.

Seth (04:43.2)
And if I can, I'm going to try to paraphrase what appeared to be your theory of this as it emerged from The Homework Machine, where your colleague was going out and doing a lot of the interviews. Then he would put together a story, and there's a part of the episode where you and he would unpack it from your perspective as an educational researcher. And that research piece is a huge component of this, right? So

it seems like you recognize AI as an arrival technology that definitely has a lot of potential in a lot of different ways. You recognize that some of the very best ideas on how to use AI are coming from individual motivated educators working through classroom-level experiments, one teacher at a time. And you encourage those practitioners to do that if they have the bandwidth, if they have the ability,

that they should be doing stuff like that. And at one point you even reference a guide that you have to making their work a little more research-oriented, where they can actually collect and publish some data that might help the rest of us make sense of the work that they're doing.

At the same time, you argue that those shining examples don't justify widespread mandates or adoption until we have a greater body of evidence around what really works, what the risks are, what doesn't work. And, more generally, that the issues we're seeing in education are ones that should be solved by policy, which should be informed by research.

Not by new tools being placed into teachers' hands with the expectation that they solve these more systemic problems one teacher at a time. Would you say that's a fair...

Justin Reich (06:47.38)
There's a lot of good stuff in there. Yeah. The last thing you said about policy. One claim would be something like we really should not depend on technology to solve big social problems. Social movements solve big social problems. So if people are really concerned about educational inequalities, about the yawning gaps in opportunities that are available to kids from more affluent and less affluent neighborhoods and schools, technology will not fix that.

Historically, technology has accelerated those divides rather than closed them. The real tool that we have for making a more egalitarian society is social movements, which change our politics and our resource distribution. I mean, movements don't just change policies, they change norms, they change expectations, they change the way we relate to each other. When I say policy, social movements are bigger than that.

It's not gonna be solved by state bureaucrats or legislatures alone, although they're important; it's gonna be solved by we the people making these changes. I would put a fine point, but I think maybe an important one, on how you characterized classroom practice advancing, which is: I do think, anytime teachers are enthusiastic

about new tools that they think can improve what they're doing, there's all kinds of room in education systems for experimentation. And I think that's something that's healthy and necessary, that can energize teachers and things like that. But even as I celebrate that, it doesn't mean that we're necessarily going to have a lot of good ideas come out of those experiments. The number of good ideas that come out of the experiments is sort of unknown.

A concern that I have is that historically, a lot of our early ideas about how to use new technologies have been wrong. Not just a little; sometimes they're benignly wrong. Like, people were like, smart boards, that'll help students learn. Smart boards have zero effect on student learning. I don't think they hurt anyone, unless one fell over and hit someone on the head or something like that. But, you know, every dollar invested in a smart board

Seth (08:55.183)
(laughs)

Justin Reich (09:05.29)
was a dollar that could have been spent on something that actually improves learning. Every professional development hour that a teacher sat through is an hour they could have spent doing something that could have improved learning. So that was kind of a waste. But there are other examples, not just of waste, but of actually miseducating children. The best example of this is the history of web literacy. So Google was founded in 1995. The first peer-reviewed paper

that has a pretty robust description of how educators ought to teach young people to evaluate information on the web was published in 2019. So that's about 25 years. In between that time, the instruction that we (and I include myself as a high school history teacher during that period) provided to young people was wrong and made them worse at evaluating websites.

We had these methods that were basically encouraging students to look really closely at websites, looking for markers of credibility, oftentimes using things like the CRAAP test or other checklist-based approaches. And then people like Sam Wineburg, Sarah McGrew, and Mike Caulfield did some pretty robust research, and they're like, no, this does not work. If you watch really smart people use these methods, they very slowly come to incorrect conclusions.

And then they came up with these ideas of lateral reading (it's probably important to talk eventually about where these ideas came from), and they could prove: look, experts who use these different approaches very quickly come to correct assessments of websites. So let's stop teaching people the ineffective approaches, and let's start teaching people the effective approaches.

I think it's useful to think about that 25-year period where teachers like me were using things like the CRAAP test or other ineffective tools. It was not obvious to me in my classroom, when I was teaching those things, that they were ineffective. I didn't really have an evaluation infrastructure. I certainly didn't have a research infrastructure as a second-year teacher in 2004, 2005 to be able to figure out that what I was doing was not working.

Justin Reich (11:19.082)
And so we need experiments, but we need experiments that are conducted with a great deal of humility, with the idea that a lot of our early ideas might be wrong. One of the best ways we can find the ideas which are not right, well, two things we can do: one is we can tell everyone that what we're doing is an experiment and we're not quite sure about it yet. Like, I think as we teach young people to use AI, we need to say things like,

I'm going to try this method for teaching you how to use AI to help you revise your thesis or to help you organize your plan or whatever else it is we're doing. I'm not sure it works. You might actually get new advice five years from now, seven years from now, that what I'm teaching you is not a good idea. But let's go ahead and try it. Let's see whether or not we think this is helping. And then let's work together to really evaluate: are we getting better? Do we really have some pretty good evidence that we're getting better outcomes, better thinking, better learning

with us using this thing than without? So some of it is humility and talking to people. Some of it is doing these experiments and then really trying to have some evaluation in these experiments. I mean, the simplest way to frame this: lots of teachers have been giving the same assignments for a long time. You've given, you know, your cell division lab report. When you let students use AI on the cell division lab report, you instruct them in what you think is some good, productive way of doing that. Then look at a pile of them from beforehand, look at a pile of them from afterwards,

and see: can I really see evidence of improved learning? If so, what was it I was doing that I think I can do more of or do better? And if I don't, then that should give me some caution. I mean, it very quickly gets more complicated than that, because if you really want to do this well, you also have to test whether or not the proficiency persists when you take the AI away. Is the AI just creating an illusion of competence, or is it actually developing competence? But as a perfectly reasonable starting point,

Seth (13:11.478)
Mm-hmm.

Justin Reich (13:18.78)
A great thing to do is to conduct some experiments, tell everyone involved that these are just hypotheses that we're testing, and then try to collect a little bit of evidence to see whether or not the hypotheses are working and invest more time in the things that seem in your local area to be working better. And then share what you're learning with your colleagues.

Seth (13:37.007)
Yeah. And hearing that from an educational researcher makes a ton of sense, right? This is the process that drives this body of knowledge forward. I do think it's clever to run experiments on assignments that you have a historical control for, so you can actually create or simulate a control within the experiment. But a lot of the things that were highlighted on your podcast are kind of

brand-new ideas, new approaches. You were talking about a lot of the different ways that teachers are addressing cheating, right? That's the word that is used more than anything when you talk to teachers about AI in schools. One of the things teachers are doing in order to address cheating is AI-proofing their assignments, and that's also where a lot of the innovation is coming from. And maybe not even AI-proofing, but like, hey,

like you did in your college course: instead of having one assignment where I'm having you analyze an EdTech tool from an instructivist versus a constructivist approach, why don't you have AI do that for six different tools and then analyze them based on which outputs you think were the best? In those situations where there isn't that historical control, is there

anything else teachers can do to make this more like a research project, where the data might actually be used outside of their particular context?

Justin Reich (15:11.124)
Well, start with the optimism that you do have something historical to look at. I mean, the example you just gave, like, I do actually have, you know, 150 essays that I had students write in previous years on that topic. And then a new one, which is only somewhat different. It had some different outputs, but it really had very similar learning goals. So I could look at what they were writing and doing. A second thing, which is a great place to start, is you just talk to your students.

Which is what I do with my students. My philosophy is: hey, we're trying this new thing, I don't know if it's gonna work. How did you all feel? What felt good? What felt bad? What do you feel like you learned? What do you feel like you didn't learn as well? I let students in my class revise anything basically until the end of the semester. Now, I don't know if I really let them revise it multiple times, but I guess if they did, I would probably let them. So I had one student who kind of waited until the end of the semester to revise the

first two papers of the semester. He'd been allowed to use a bunch of AI with both of them, but paper one he actually used a bunch of AI with, and in our conversations he was like, I probably ought not do that. And in, like, I don't know, all caps or some big bold font in an email that he sent me, he was like, oh my gosh, revising the essay that I wrote with AI was much, much harder than revising the essay that I wrote myself.

Like, I knew that stuff much better; I knew what I was writing much better. And that's just me listening. And that definitely sinks into my head: oh, that's not a good sign for AI-assisted writing. I really believe, and I think there's a lot of good evidence, that revision is essential to developing as a writer. And if there's a tool which is making revision harder, that's not a promising set of circumstances to be in.

Seth (16:31.566)
Yeah.

Seth (16:56.654)
Yeah.

Justin Reich (17:01.034)
But I can also listen to another student, a second-language learner who hadn't done a lot of writing, who was like, I would never have taken this course if you hadn't let me use AI in it. Basically, being allowed to use AI is what let me submit the essays in this class. Otherwise I just would have taken other technical classes, but I was really interested in the content. So I got to listen to that too.

And, you know, we can talk with our colleagues. There are protocols called Looking at Student Work. There's one by an organization called Atlas that I really like, but there are lots of other protocols out there. Some of them I referenced in this book I wrote called Iterate: The Secret to Innovation in Schools. These are structured protocols for having a community of educators look at a piece of student work. You usually do things like: you don't say that much about the assignment, you read the student work, the teacher who assigned it isn't allowed to say much about it,

and you have to assume the best intentions of the student, that they're really trying to learn. Then you talk together as a group about what's happening and not happening intellectually. And so without any points of historical comparison, you can still start asking: what is the intellectual work that's happening, and is it the kind of work that we want to see?

Yeah, so those are some other strategies: talking to students, talking to colleagues, looking at work without any comparisons, just looking at work in relation to the benchmarks for the learning goals you have for your students. Am I seeing the evidence of thinking that I want to see there? I've been sort of calling this local science. I've been calling it local science because big science is gonna take a while.

Seth (18:35.373)
Yeah.

Seth (18:45.582)
(laughs)

Justin Reich (18:49.757)
One of the ways we figured out how web literacy worked was these folks, Joel Breakstone, Sarah McGrew, Sam Wineburg: they did a bunch of assessments of students' ability to evaluate the web. And there's a funny story where they give a bunch of seventh graders the assessments, and they all fail. And they're like, ah, we made a terrible assessment; you're supposed to get a bell curve, not all the students failing. Let's make it easier. They make an easier version. The students still all fail. And they're like, oh, we have a problem.

Then they tested Stanford freshmen and tenured historians, and the Stanford freshmen also fail, and the tenured historians actually don't do a very good job themselves. And they're like, we gotta find some experts, man. Who they found were Wikipedia editors and fact checkers at newspapers and magazines, who they gave all the same tasks to, and they do the tasks universally right, really fast, using a totally different set of methods. And they're like, okay, what we need to do then is look at these expert methods.

Now, you can't teach a seventh grader to do everything a full-time fact checker does. But we can abstract that expertise and say, okay, this is what we can teach a fourth grader, this is what we can teach a seventh grader, this is what we can teach an 11th grader, and there are harder parts of it we're not gonna get to. That, I think, is actually a big part of what big science needs to be doing: saying, what does expert practice in the professions and disciplines look like, and how do we abstract that expert practice into things that we teach? Now, until the disciplines and professions

invent that expert practice, there's nothing to teach. So, for instance, a couple of months ago I had a chance to go to DeepMind in London and meet with Google folks and DeepMind folks at an AI and education event. But a thing I was doing there is I cornered every engineer I could find, individually, in small groups, whatever, and I asked them the question: do you know how to teach a junior engineer how to code with a copilot? There is no one in that organization

Seth (20:19.714)
Yeah.

Seth (20:43.064)
Hmm.

Justin Reich (20:45.172)
who said anything other than no. We do not know how to do that. Now, part of the reason why that question is consequential is that there's some evidence that when you code with a copilot, your outputs can be worse, that it can make you take longer, that you can solve local problems in a way that causes a mess for bigger systems. There's also a concern, especially among novice folks, that when you use AI, you don't do as much learning as you would otherwise.

that the machine does too much thinking for you and it hinders your learning. So this is a pretty consequential question, and Google has billions of dollars at stake in coming up with the answer to it, but they do not know. We've all had access to these tools for three years; they have had access for longer than that, and they do not know what to do. What is a middle school computer science teacher supposed to do? If the

most well-paid, most sophisticated computer science engineers in the world cannot tell you how to teach with a copilot, what is a middle school computer science teacher supposed to tell their student? I mean, the only option they have is to make stuff up. Now, there's a certain degree of experimentation that might be interesting, might be fun, but there's a much better place that we will be a decade from now, like, I don't know, eight years, fifteen years, twenty years:

We'll go to Google People Operations and say, how do you teach a junior engineer how to code with a copilot? And they'll say, you do this and this and this, and you don't do this and this and this. And then computer science education researchers will crawl out of universities all across the country, and they will grab that material and go, okay, we can teach this part to a fourth grader, this part to a seventh grader, and this part to an 11th grader. And then the middle school computer science teacher will have something to teach that will not be fabricated. It will be

grounded in the disciplinary expertise of their discipline and their profession. But we are in a hard, uncertain phase for the period ahead. What are the choices that computer science teachers have? They can say, well, let's just not use that AI stuff. Many teachers will make that choice. And if you listen to what I'm saying, I hope your listeners will think, well, that's potentially a principled choice. If you have a choice between making stuff up

Seth (23:05.39)
Well, when you put it that way.

Justin Reich (23:05.726)
that has no basis in disciplinary expertise and doing some stuff that you know works, you're like, well, why don't we stick with the stuff that works? Another approach you might take is, well, let's make sure we maintain quite a bit of our historical practice around coding, and then let's have these spaces for play. Let's say we're not really teaching you how to use a copilot. I was talking to my colleague Jesse Dukes about this distinction. Two things that teachers do are teach and say, hey, look at this.

Seth (23:34.408)
(laughs)

Justin Reich (23:35.21)
And teaching and hey, look at this are kind of two different practices. You know, I was a history teacher, so current events is our hey, look at this. I'm not really teaching you stuff when we're doing current events the same way I am when I'm teaching you the Industrial Revolution, where I know something about it, there are a bunch of primary sources about it, there are secondary sources that analyze what's going on. But then amazing things happen in the world and we just talk about them and explore them together. And there's probably room in a middle school computer science class for that kind of thing. You know, I overheard somebody say that

generative AI is pretty good at generating unit tests. Let's see what we come up with when we generate a bunch of unit tests. But, by the way, all this might not work; you might get much better advice 10 years from now when you're in your computer science degree. In addition to coding, let me just say the last thing: the two other big places we need this are writing and research. We need to find groups of expert writers who improve their practice using LLMs, and then we need to abstract that into teachable materials. And we need to find people who gather and synthesize information.

Seth (24:14.252)
Yeah, I mean... Yeah, please.

Justin Reich (24:32.137)
Those are three big things that happen in schools: writing, research, and code. Until the experts have that knowledge, it's really hard to figure out what to tell teachers to do with it.

Seth (24:43.372)
Yeah. You started this talking about the 25 years between the beginning of the web and the first web literacy study. And initially I was like, wow, why didn't anybody study it before then? But your answer to that is because there were no experts to consult, right? Like, okay.

Justin Reich (24:57.128)
Well, unfortunately, there are studies. There's actually a whole body of research that was generated around stuff like the CRAAP test, and it's wrong. Science does not generate things that are always right; it is self-correcting over a long enough period of time. Mike Caulfield was just pointing me to this 1998 study which looked at checklist methods and was like, hey guys, these things don't work.

Seth (25:05.783)
Okay.

Justin Reich (25:22.09)
Like, people come up with spurious conclusions when they use this method. But it was just one little paper, one little thing. So, like, research is happening around all of these things. But I don't know, there's a certain magic that happens in fields when there's a body of research that goes: here's some theory that makes sense. Hey, we take this theory and we test it in actual classrooms.

And look, the things that we want to happen happen. Then we test them six months later and 12 months later. And the students actually... yeah, we didn't just do it in one school. We did it in 100 schools. Then we did it in 1,000 schools. And I was on a call with a superintendent the other day, and she was like, it just can't take 10 years to do that. And I was like, oh, I'm sorry, ma'am. There's not a thing that makes it go.

Seth (25:50.476)
Yeah, let's scale that up. Yeah.

Seth (26:07.534)
(laughs)

Justin Reich (26:11.592)
I mean, there are things that make it go faster. We need to somehow figure out how we convince the federal government and philanthropies and the European Union to invest more in these kinds of things. And I do believe we're going to figure a bunch of this stuff out. Like, you and I are going to be a lot grayer when that happens. So the question is, what do we do in that interim, if it takes a while for science, for disciplinary practice, to come along?

Seth (26:16.718)
Sure.

Seth (26:38.636)
Yeah. And you outlined a lot of those things, right? It's: be humble, recognize that there aren't any of these experts out there. You yourself are not an expert. You are running an experiment. Let's be truthful about the fact that that's what we're doing. You were also talking about "hey, look at this" and these spaces for play, and to frame these experiments as a piece of play, I think, is a perfectly rational thing to do. It's something you said the last time you were on the podcast: hey, this stuff's super weird, let's play with it.

And the other thing you talked about, the structured protocols for looking at student work, it struck me that that is essentially replicating a peer-reviewed kind of approach to educational research. And then that communication piece, talking to your students, finding out what they really think about it, mirrors one of the approaches that teachers are using to

address the cheating problem, which is to simply have close relationships with their students, knowing a student well enough to understand, when a piece of work comes in, whether or not it's actually their output. Not trying to shame them into admitting that they did something wrong, but really trying to understand: why did you do this the way that you did it? And how was that experience for you? And I think that's all great advice.

I do want to pivot back to something you said at the beginning here when you were talking about social movements being the solution to these systemic problems more so than technology. And I'm struck by the idea that social movements

in conjunction with technology create a whole lot of change. I'm thinking about the printing press and the ability to organize because of the printing press, or the same thing with Twitter and the Arab Spring. And there's a movement in education right now to individualize teaching, right? To make things more personal to students, to find things that are more compelling to their lived experience.

Seth (28:43.554)
to give them exactly the support that they need. All of these things have been a part of teaching for a long time; we call it differentiation. But it's super, super difficult for a teacher to pull off effective differentiation, especially given the post-COVID gaps in ability and achievement that we're seeing in any given classroom. I'm wondering if this can satisfy the

idea that it isn't just a technological solution, but it is coupled with a social movement to try to individualize education using technology that is now available. Does that resonate with you at all?

Justin Reich (29:24.586)
Yeah, so new technologies don't disrupt education systems. They're domesticated by education systems. Education systems go, oh, this thing, it's actually good for this kind of stuff that we were trying to do. And there are certain kinds of translation activities or other kinds of individual practice generation. I mean, I am confident, there is no doubt,

we are going to find useful things to do with LLMs that improve student learning. That could be totally washed out by all of the ways in which sort of mobile device culture erodes literacy broadly. But within schools, I'm sure the teachers are gonna come up with ways that these things really help. And to the extent that there are communities of people that come together and say something like,

man, I really want to try... my classes are filled with people with really different academic backgrounds, really different levels of preparation. I want to work with other folks to figure out how to solve that problem. That's great, to be working in collaboration on something that's worthwhile. And if technology helps students learn, I'm here for it.

Seth (30:45.95)
There's another piece. As I was listening to The Homework Machine, I kept finding myself in this place of tension, and it wasn't resolved until your last piece of audio in episode seven. The tension was: it feels like what you are communicating is the way things should be.

Right? Like, you are seeing this historical perspective. You're identifying the patterns, the correlations, encouraging people to look at things that way, adopt the lessons from the past, apply them to what's going on right now. And so what you're trying to tell people to do is to be patient and be cautious. Right? Like, these experts will emerge,

best practices will emerge. We're not there yet. If you want to tinker, if you want to experiment and you have the freedom to do that, go for it. But let's not pretend that this is the second coming; let's not put too much importance on it. And let's not expect that teachers are the vehicles of change here, where the change should be coming from these more systemic places. And then the very last piece of audio in episode seven, you were like,

basically: so this is what I think should happen, but here's what I think is going to happen, which is that capitalism will continue to dictate most of the choices that are being made and teachers are going to be saddled with all of this anyway, if I can paraphrase. I'm wondering if there's any sort of shift in your thinking when it comes to talking about what should be

versus recommending what we should do given what likely will be.

Justin Reich (32:46.398)
Let's see. I'm very much not a revolutionary. We'll keep using capitalism. I really hope we keep using democracy. I think generally speaking, they're pretty good for all their faults. We won't...

schools will face challenges as these technologies shift because venture capitalists have invested billions or trillions of dollars in these things and right now they are free. But at some point those people want their money back. And the way they get their money back is either raising prices exorbitantly or putting ads and sponsored content and erotica, pornography, other kinds of things in these systems.

so that they become extremely compelling engagement machines rather than tools that purportedly serve us. That's a bad thing which I'm pretty sure is going to happen, and schools need to be preparing: what are you going to do when the consumer-grade technology you're using in your school gets much worse? It's not that the underlying technology will stop getting better, but the consumer interfaces we have, the user experiences, will get worse.

You know, I mean, I think...

Sometimes I don't worry too much about the gap between what I really wish schools could be and what they are, because that gap is eternal. You know, we're very fortunate in the United States that our cadre of teachers is a pretty incredible group of people. There's 3.4 million of them; half of them are below average. They're not saints, they're just human beings, but they're pretty good.

Seth (34:22.158)
You

Justin Reich (34:39.498)
If you watch their work, if you go to schools, you're like, oh, these are pretty good human beings. We're pretty lucky with those 3.5 million people. Yeah, not all of them, they're not perfect, but they care a lot about the kids. They're not in it for the money. And we will continue to make progress. There'll be steps backwards, there'll be mistakes, there'll be schools who go, oh, that was not the right way of doing this, we need to go back and refresh some of these things.

Seth (34:43.796)
may have their priorities in the right place, I think.

Sure. But they care about the kids. They're not in it for the money.

Justin Reich (35:06.89)
I will tell you, though, I've been reminded recently of what I was actually doing in the fall of 2022, before ChatGPT. I had a project I was working on called Subtraction in Action, where one of the main things I wanted to help schools think about was becoming simpler. What I was observing in schools in the fall of 2022 was that people were exhausted and maxed out, and what they were doing was not working. That is a terrible place to be. I think a lot of educators still feel that way somewhat.

When a system is maxed out and not working, you can't improve it by adding stuff to it. It is often our intuition to keep doing more, but what you actually have to do is do less. Educational Leadership just published an issue called The Power of Less in Schools, and I was glad to be reminded of that work.

So what have I been doing for the last three years? I've been running around to schools saying, this AI thing, you actually have to do something about it. Even if the thing you want to do is mostly protect core historical practices, there's still a bunch of stuff you have to do about cheating and other things like that to manage. So here I am, running around to schools telling you, you have to do one more thing, which is terrible, which is not what I wanted to be doing. The only reason I'm doing it is because I went and talked to teachers and students across the country and they were like, yeah, if we don't do something about this together,

that's the worst-case scenario. Like, this has to be addressed, and so we have to address it together. But I really do want to try to get back to some of those ideas: part of the reason why school feels like it doesn't work that well is because it's trying to do too many things for too many people. And we need to look across schools and say, what are we doing right now that's not the most important thing, that we could just take off the plates of teachers and students? How could we make these institutions simpler? It's part of what has

brought me around to cell phone bans. Cell phone bans are the only national initiative in which school after school after school, state after state after state, has said: we will do one less thing.

Seth (37:01.159)
god.

Seth (37:09.996)
Yeah. And they're fantastic, if they're enforced.

Justin Reich (37:14.984)
Yeah, yeah. I mean, there are good examples; there are probably less good examples. There's a group of people out there saying, man, but isn't it the job of schools to teach students how to responsibly use their cell phone? And I'm like, wow, that might be a good thing, and maybe some people should do that. But maybe parents should do it, or maybe people just figure it out. An initiative that gets schools to do something less is really important. I actually think one possible mistake that we're making around AI is

conflating efficiency with simplification. Efficiency and doing less are not the same thing. When we make tasks more efficient, systems tend to just make more of that task exist.

Seth (37:53.41)
Yeah, it's just increasing capacity, right? It's increasing efficiency for the sake of capacity.

Justin Reich (37:59.53)
as opposed to, let's make this simpler, let's do fewer things. I'm trying to think more about that and trying to get back to that. That is an ideal I would like to get back to. I don't worry too much that things are hard in schools and the ideal is far away, because the job is not to be perfect tomorrow. The job is to wake up, look around, and say, all right, what is the thing that I can do today

that makes some incremental improvement in what we're all doing together. And lots and lots of teachers are good at that.

Seth (38:39.502)
I had a guest on the podcast from Michigan Virtual a little bit ago. Her name's Carly Delo. She's doing some great work there with teachers and students, and she expressed to me that she's a fan of The Homework Machine as well; we talked about it on the podcast. So I wrote to her ahead of time and asked, do you have a question for Justin? This is her question: if you could turn off AI, and you'd be the only one left who knew it ever existed, would you do it? Why or why not?

Justin Reich (38:57.021)
Excellent.

Justin Reich (39:07.214)
Oh, that's a... yeah. I mean, knowing what I know today, I probably would turn it off. Just in the sense that I think we as human beings underestimate how immensely important connections with other human beings are, and connections with other human beings are difficult. Jesse Dukes, the co-producer of The Homework Machine, was telling me a story about how he remembers the first time that he

Seth (39:31.479)
Yeah, yeah.

Justin Reich (39:36.042)
pumped gas with a credit card at the machine and didn't have to go inside and pay anyone. And he was like, this is pretty convenient, but actually it might be bad. It seems weird, but it's actually pretty important for human civilization to go in and talk to the storekeeper. And I think we're going to find that AI chatbots, character bots, things like that, are real, real bad for young developing humans, that they're real, real bad for our relationships

Seth (39:39.63)
You

Justin Reich (40:05.386)
with each other. I think, if I'm being truthful, if I had that imaginary power, I'd probably use it. But it doesn't matter, because I don't. If generative AI is bad for schools and bad for learning, we should absolutely fight it and we should absolutely keep it out of schools. If cell phones are bad for learning, bad for kids, and bad for development, we should absolutely fight them and keep them out of schools.

Seth (40:13.868)
You

Justin Reich (40:33.438)
But I'm not convinced that they are yet. I have a bunch of biases about things, but I really want to be driven by evidence. And there's plenty of stuff you can get me optimistic about. It's really hard for teachers to provide a lot of formative feedback on writing. If we can build some machines that give people formative feedback on writing, that give them a few extra cycles to help their development, that could be pretty good. There are a lot of languages that people speak at home, and

teachers don't have access to all those languages. If there are machines that can translate stuff and make more of our materials available to more students, I think that could be pretty good. And eventually, we are going to know how to code with copilots, and we'll know how to teach people to be good at coding with copilots. And learning how to code, how to manipulate computers, is pretty cool. So I'm enthusiastic about a bunch of those things. But I do bet a lot of teachers will find themselves in the situation of being like,

man, you know, 20 years ago, that was kind of my job. My students didn't have a lot of access to technology, and one of the things I could do that was special in schools was create these spaces where they could engage with technology for the first time, with some adult supervision, and play around with stuff. I bet many more teachers are going to feel like, you know what, my students are awash in technology, which is increasingly sophisticated in its design to grab their attention and take it to things that are not good for them and not good for society. And one of the things we can do at schools is create spaces

Seth (41:50.446)
Hmm.

Justin Reich (42:01.13)
in which we give students the gift of time with their own thoughts and time with each other and time with people. And that wouldn't have been a gift in the 19th century; it was just what we did. But it may be something special and important that schools provide in the years ahead.

Seth (42:05.518)
Hmm.

Seth (42:16.718)
Yeah, as a parent, I'm constantly trying to encourage my daughter to just sit there and be bored. Well, thank you so much for being here. There are a couple of things I'm going to link in the show notes, including the MIT Teaching Systems Lab link to The Homework Machine. We talked before we started recording about Teacher Moments, this platform that you have, a system for digital clinical simulations, and the National Tutoring Observatory; I'll link that in there as well. If people are curious, is there anything else you would like to point my listeners to?

Justin Reich (42:46.51)
You know, on the Teaching Systems Lab website, we have this guidebook, Perspectives for the Perplexed, which is one way we have of sharing some of the experiments that people do. And I think it has some of this ethos of, hey, we don't know what we're doing, but here's a way we can share together what we're learning. So that might be helpful for folks. But yeah, that's the homework right now. Good job. You pass. You get an A.

Seth (42:54.104)
You

Seth (43:14.382)
Thanks, Professor.

Justin Reich (43:15.638)
No, but thanks a lot, Seth, for advocating for and sharing The Homework Machine. I think Jesse and the reporters and a bunch of other folks did a really great job getting some important stories out there. So we appreciate people boosting the signal and so forth.

Seth (43:30.882)
Yeah, absolutely. Great work. Thanks so much for being here again. And remember, to all our listeners: if you want to bring positive change to education, we must first make it mindful. See you next time.