Make It Mindful is a podcast for globally-minded educators seeking thoughtful conversations about how education can adapt to an ever-changing world. Hosted by Seth Fleischauer, a former classroom teacher and founder of an international learning company specializing in digital, linguistic, and cultural competencies, the podcast dives into the "why" and "how" behind transformative ideas in education. Each episode features educational changemakers whose insights lead to practical solutions and lasting impact.
Seth Fleischauer (00:00.75)
Hello everyone and welcome to Make It Mindful, the podcast where we explore how to keep schools relevant by looking through the lens of mindfulness and asking the question, what's really worth paying attention to here? My name is Seth Fleischauer. My cohost Lauren Pinto is on a break, but together we delve into the world of education by interviewing changemakers and focusing on practical, transformative solutions for teaching. And this week, my guest is Lindy Hockenberry. Lindy, welcome.
Lindy (00:27.467)
Hi Seth, thanks for having me.
Seth Fleischauer (00:29.518)
Yeah, nice to see you again. Lindy, could you start by please introducing yourself to our audience?
Lindy (00:36.075)
Yes, I'm a lifelong educator. I live in Bozeman, Montana, so I'm in the southwest part of the state. I am a product of Montana public schools and I taught in small rural Montana public schools. My classroom was a computer lab and, long story short, the short version I always tell people is that that's what
led me into working in technology integration and educational technology. Because, luckily, I taught business education, and business education in the US can be either very businessy, like accounting or marketing, or very technology focused, or a little bit of a mix of both. And mine was very technology heavy, hence why my classroom was a computer lab. So I always had what's now referred to as a one-to-one environment. I'd never heard that term
at the time, one-to-one environment, because every one of my kids had a computer, and mind you, they were big old desktop computers with huge towers and the really, really deep old-school monitors that took up a whole bunch of space in my classroom. But it was always natural for me to use technology as a learning tool because I had that. And that led me into a path of working as a technology integration specialist, and I did curriculum development. We did a lot of professional development on that curriculum.
Lindy (01:52.523)
That kind of led me into professional development. And now I have a tech consulting business where I help schools, educators, teachers unpack technology. I do lots of trainings, share, write books, everything I can do to help figure out the effects of technology on education, what happens in the classroom, and how we can use it as a learning tool.
Seth Fleischauer (02:19.822)
So how can we use it as a learning tool?
Lindy (02:21.867)
The hundred million dollar question.
Seth Fleischauer (02:24.878)
Yeah, I mean, and I guess making that a little bit more specific is that, you know, we talk a lot on this podcast about adopting technology in a way that's mindful, not just using technology for the sake of using it, but really using it for a purpose. I'm wondering if there's a way that you think about that in your work. When you are
talking to teachers, when you're talking to companies, about how to make sure that technology is not something that's just checking a box, but something that has a real impact, an impact in the direction that we intend. What's your framework for thinking about that? How do you advise people?
Lindy (03:06.411)
Yeah, exactly what you said. Don't just use technology to use technology. I use an analogy, and I actually, somewhere in this closet behind me, have a physical toolbox. On top it says Lindy's Teacher Toolbox, and inside of it I have sticky notes and index cards and markers and highlighters. And then I have little slips of paper with different technology tools like Book Creator and Canva and Padlet and Flip, right?
It's this visual analogy that as a teacher, you have learning outcomes that you're trying to reach, based upon the standards that you need to teach, right? And you need to figure out, how do I get from A to B, B being the learning outcome? How do you get your students to that learning outcome? You have to pick from that teacher toolbox to choose which instructional tools you're going to use to get from A to B. Sometimes technology,
like Book Creator, and I'm just throwing out examples of tools that I like, not so much random tools. Maybe you do want to use Book Creator, but then maybe for your next learning outcome it doesn't make sense to use technology. Maybe the technology is even sometimes a hindrance toward meeting that learning outcome. And sometimes it might depend on the student, right? I am a typer. I have not-great handwriting,
for multiple reasons. I've never had great handwriting, and now I have carpal tunnel as an adult, and it's just even worse. So for me, handwriting notes is really miserable. I would much, much rather type notes, and I'm a really fast typer. I can type like 90 words a minute. So I'm a super fast typer, but I've talked to a lot of other people that are like, if I had to type notes, no way. I have to handwrite notes, and that's also how it helps solidify things in my brain. So that's a perfect example of
using technology or not using technology based upon the specific way a student learns. But students don't always know that, right? They don't always know how they learn best. So that's part of the adventure, helping them figure it out, like I had to figure out that I'm a much better typer than writer when I take notes. And I have tried to sketchnote and it is very ugly.
Lindy (05:29.003)
Nobody wants to see that business, right? So like that might be another way. And you might have a student that likes to sketch note on paper. You might have a student that likes to sketch note on a tablet with a digital thing. It's just whatever tool is right for the job is kind of my motto.
Seth Fleischauer (05:45.358)
Hmm. So I'm hearing that it's kind of part science, part art, right? Where you're selecting a tool, the science part is that you've got an objective that you're trying to reach, and you can kind of match the tool to that learning objective. And even in assessing what a student's known qualities are, what their known preferences are for learning, what their known abilities are, you can kind of match those up with a given tool. But then there's what the student
doesn't quite know about themselves yet, something they might be surprised to learn resonates or not. And that's more of the art to it, where you're just kind of going with the flow of what you're feeling at the time and selecting a tool that works. Do you think about this in terms of science and art? Is it purely science? Is it just a matter of
having enough information to be able to make the decision, or is there some jazz when it comes to selecting the correct EdTech tool for the job?
Lindy (06:51.723)
Great question. I think exactly what you said. I think it's Marzano that has the book, The Art and Science of Teaching, and it's absolutely correct that we have the science part. We can look to studies, we can look to good pedagogy for what to do. But we as teachers are making literally a million decisions every minute based upon
particular students in our class and what's going on in that moment. This isn't going the way that I thought it was gonna go, I need to switch gears and go some other way. And to me, that's the art of teaching. And the thing with technology, and we are very much hitting the wall with this right now with, I had to say it at some point, AI, right? The thing with artificial intelligence is that
there is no research. This technology is so new that we don't have research that tells us what good pedagogy is when it comes to utilizing artificial intelligence. We have studies and research that tell us what's maybe good with other types of technology, but AI is a very different type of technology than we're used to, right? We're moving from rules-based computing to intent-based computing. Very different. And we don't have studies that reflect that. So,
right now I'm working with a lot of schools and educators and I'm telling them, you guys are the brave ones, right? Somebody has to do the trial and error and do the art. We will see more studies come along, we will get more of the science. But that doesn't mean we can ignore this technology that's changing everything in our world until the science catches up to us. Does that make sense?
Seth Fleischauer (08:36.654)
Especially not educational science, which takes an entire generation, if not an entire lifetime, to really get results on the impact of education.
Lindy (08:38.923)
Yes.
Lindy (08:48.747)
Yes. And we're at this point with AI now that it's moving faster than any technology in human history. Education, formal education could already not keep up with technology changes, right? In most cases. And now it's going like infinitely faster. So by the time that the studies that are currently being conducted come out, that technology is going to be, I don't want to say obsolete, but...
significantly different and improved, which is a crazy thing to think about.
Seth Fleischauer (09:24.462)
Yeah, I mean, I was definitely sucked in by that whole hockey stick argument at one point, and I'm less so now because I hear about the cliff of information, right? Like no longer having information to feed the models, and therefore, it's not going to continue to improve at the pace that it has been. And then I think just fundamentally, like I don't think that...
that how these models work is fundamentally going to change. They might get better at what they do, and you might rely on them for more things. They might be able to, for example, operate without prompting to really solve problems for you, just by you stating the problem. I think that is a possibility. But I hope that some things will stay the same enough that we're able to study it, measure it, understand
what it is helping, what it's not, when it's helping, when it's not, and be able to act on some of that research. In the meantime, obviously what we have is our best guesses and the brave experiments of some of the people that you're working with. And here we are, this is gonna come out at the beginning of the school year, this is a back to school episode. We're recording it, of course, earlier in the summer. And so whatever you say right now might be totally changed by the time we get to...
Lindy (10:45.803)
You will.
Seth Fleischauer (10:46.798)
to September when it comes out. But you know, one of the things we were talking about before we started recording was AI literacy. You said that you're really leaning into this, that it's a concept that's very important. Say I'm a teacher and I've never heard of AI literacy before. What are you talking about? Why should I care about it?
Lindy (11:01.483)
The way I'm thinking of AI literacy is that it's another branch of the digital citizenship tree, right? We've been talking about the importance of digital citizenship and teaching students to use technology responsibly for years in education, and as parents too, for those that have kids. And now AI literacy is just another branch. I feel like that's a way to think of it and make it more of a manageable thing.
And it's basically the idea that we have to teach people to use this technology responsibly. And it's much more than that. The best framework and definition I found is from Digital Promise. So if you just search Digital Promise AI literacy framework, it should pop up in your search right away. They give a really good definition and they break it down into three areas. They have use, evaluate,
and, shoot, what's the third one? Understand. Understand, use, and evaluate. Basically, the three categories of AI literacy are: we have to understand at a basic level how the technology works. We have to use it, right? You have to be integrating it into your daily life, because if you don't use it, you don't know how to use it responsibly. And as an educator, if you don't use it, you don't know how to teach your kids to use it responsibly.
Right. And then the third being evaluate: just like we've talked about with media literacy for years, we have to know how to look at output and say, I'm not sure that's quite right. Hallucinations are a big deal with AI, currently, in the way that large language models work. I believe that will become
less and less as the technology gets better. I've already noticed it's less and less. Even from November of 2022, when ChatGPT came out, I get way fewer hallucinations now than I did a year and a half ago. So that will improve, but it will never not be there. And there will always be a reason for us as humans, with our human brains, to look at anything and say, hmm, maybe I should fact-check that, right? Maybe I need to double-check that and make sure it's right. So that's kind of the idea
Lindy (13:20.331)
behind AI literacy and why it's so important: AI is a very different type of technology. I've been talking to a lot of people that are building AI. They're the ones creating AI systems, creating large language models, training AIs, building training data sets, all of that kind of stuff. And I've asked every single one of them, how could we do this wrong as humans?
Because there's so much uncertainty and fear around AI. So just saying to them, you really understand the technology better than anyone in the world, how could we do this wrong? And every single one of them has said: to ignore it, or not use it. Because, again, in order to understand and evaluate, you have to use. And they said, if we don't use it, we don't understand the technology, and then we don't understand how to teach people to use the technology responsibly.
That's AI literacy in a nutshell.
Seth Fleischauer (14:22.062)
So you're saying that the creators of AI, they're saying if you don't use it, then other people will use it or when you do use it, you'll use it irresponsibly. Is that the idea or like what is wrong about not using it?
Lindy (14:36.459)
Yeah, yeah, more the idea, and I guess I'm interpreting what they have told me, is that if we as humans ignore it, act like this doesn't exist, don't embrace it as part of our educational system, for example, then, with such a powerful technology, it's exactly what I said: if you don't use it, you don't understand it, and if you don't understand it, you don't
know the limitations, the concerns, what it can do well, what it can't do well. You don't understand things like bias and hallucinations and knowledge cutoffs if you're not using and understanding it. And those are all, not just important, vital digital citizenship skills for any human from this point forward in human existence to understand. Does that answer your question?
Seth Fleischauer (15:36.366)
Yeah, yeah, I mean, it's making me think a lot of things. You know, one of them is that,
Recently, Google's AI search has gotten a lot of attention. This will be like two months ago for people listening, but it's gotten a lot of attention for having famously bad responses, things like, you should lock your dogs in the car, that's good, and this is how many rocks you should eat every day. And I have kind of a different attitude toward that story than, I think, a lot of people who,
you know, rightfully are concerned about the danger that exists there in terms of false information or bad ideas getting into the populace in a more efficient and more visible way. But I really enjoy the fact that it seems to be pushing the conversation about media literacy in general to the forefront,
where people understand that, okay, they scraped this information from the internet somewhere and whether people are getting that information via the AI summary at the top of search or if they're getting it by the first thing that they clicked through, which, you know, is also AI organizing the order of the search results that they see. It is critical when you're investigating important information.
to cross -reference that information and make sure that it is accurate. And I say if it's important, right? Because it would sort of cripple the entire internet economy if we double, triple, quadruple checked every single thing we read on the internet, right? Like there's a certain amount of like we are finding sources that we trust and we are...
Seth Fleischauer (17:28.846)
maybe cross-referencing some of the important things that we're learning on those sources. Because even though sources can be wrong, generally speaking, if I see it on this source, I'm going to believe that it's true. And I think what is concerning about AI there is that they don't often tell us the sources. So here's this trusted source, Google, all of a sudden less trustworthy, because they're scraping from The Onion to get the facts that they're putting in the AI summary.
But again, though, what I really appreciate about the moment is that this powerful new tool, which does hallucinate much like people do, which does give you false results, much like much of the web does, is really bringing to the forefront the need to have better media literacy. And when I say media literacy, it's AI, it's stuff that's on the web, but it's also stuff that's written in books. It's stuff that is on billboards. It's commercials. It's like, you know,
Who are the people putting this information out there? What is their motivation? What do they want out of me? Asking all these questions, having that moment of mindfulness around it, that moment of objectivity, I think, is really critical for being able to pursue a better life of media literacy. And so I appreciate that AI is giving us that opportunity. And, you know, another thing it
got me thinking about, sorry, I'm gonna pause here. What was I thinking about? I've got nothing. Do you have a response to that?
Lindy (19:04.043)
Yeah, I can talk a little bit about... yeah, I love what you said about media literacy and AI bringing it to the forefront. And I think the ultimate thing is we have to be very specific about teaching people how to be critical consumers. And that's not a new thing. I loved how you said that: it is not a new thing whatsoever. We've needed to teach this for
Seth Fleischauer (19:05.742)
Yeah, go.
Lindy (19:30.699)
a very long time, right? That's another part of the digital citizenship tree, which for some reason I can't say today: this idea of being a critical consumer. Part of being a critical consumer is being mindful, not just believing the first thing that you see, knowing where the source comes from. I get so many spam text messages a day now. That's the new thing, the spam
text messages. So being able to look at that and go, you know what, I really don't think my bank would probably contact me in this way, right? Like that's an example of being a critical consumer. And yeah, AI is just, it's bringing that to the forefront because it's bringing us information in new ways that we're not used to as humans.
Seth Fleischauer (20:20.174)
Yeah, and the other thing that what you were talking about got me thinking about, in terms of Digital Promise's AI literacy framework, the understand, use, evaluate: I think the understanding is not just understanding what it can do, but also how it does it, right? Understanding that it's just predicting the next word that comes along, that it's scraping the internet for all of this information, that it's working on multiple levels at one time, that it's
working in terms of meaning but also in terms of grammar and syntax, and it's got all of those things going on at once. And if you understand that, maybe you're less likely to anthropomorphize it, and you're more likely to understand the results it's giving you. The use, I think, is critically important: just understanding what it can do, not just in general but for you specifically. Because I think that there's this kind of,
I've found, and maybe you agree with this, maybe you don't, but I've found that there's kind of this spectrum of any person's given depth of skill and how helpful AI can be for them. And I feel like on one side of the spectrum, you have people who have no depth of knowledge about a given subject. If they use AI, it can be very dangerous, because they can't evaluate it, right? They're going to get something and they're going to be like, that sounds great, I'm just going to copy and paste that and
send it off as if it's true and fact, right? And then on the other side of the spectrum, you have people who have really, really deep knowledge about things, including veteran teachers who we've trained to try to use AI for things like lesson planning or evaluating student work. And what happens there is they can't possibly write a good enough
prompt that's going to replace what they would say, you know? And so for some things, not for all things, but for some things, those teachers simply aren't going to get a lot out of the tool, because they have such a deep well of knowledge around that subject area. And then you have that sweet spot in the middle, where people know enough about a given topic that they can be critical about the output, but they don't know so much that
Seth Fleischauer (22:39.086)
it's not going to be helpful to get some real help from AI in terms of generating some of that work. And so the evaluate piece is so critical. But there's a certain amount of non-AI experience that is critical in order to be able to evaluate. Right? Like, we think, how is this changing education? Should we not require students to write essays anymore, because now the technology can do that? And I think that,
when it first came out, I was really kind of attracted to that idea of, maybe this will revolutionize everything that we're asking kids to do, because the sliver of achievement that is possible by humans but not by computers is getting thinner and thinner each day. Maybe we'll start to focus on those things that really make us human. But I think I've come around to the idea that the output is
always in need of evaluation. And so in order to be able to evaluate, you have to build the foundational skills to get there. In order to know whether that five-paragraph essay it spat out was a good essay or not, you have to know what a good essay is. And so I think that, in that way, it's not going to revolutionize what we expect people to be able to do. It may change how we get them there.
Right? Like, it may be teaching people those critical thinking skills in order to be able to use AI more effectively, rather than teaching them the critical thinking skills in order to write an essay. So it's the same skills; they're just going to be applied differently, and the path to get there is going to be different. But I don't think that, fundamentally, it's going to change what we do all that much. Or at least the jury's out on how.
But I wonder what you think, from your vantage point, how do you think this is changing education?
Lindy (24:42.891)
Yeah, no, I love that question. I love everything that you just said because I can't tell you how many conversations I have had in recent months about this exact same thing. The way I think about it and how I'm teaching educators for them to then teach their students is in order to collaborate with AI, which is where we're moving as humans. From this point on, we will never not collaborate with AI in some way or another.
It doesn't mean every second of every day. Like you said, there might be times, if I have really deep knowledge of something, where the technology, as it currently stands, can't assist me or save me time. But a lot of the time we're going to have to think about that. In order to know how to collaborate with AI, you have to think about the when and the how. You've got to know: when, during accomplishing this task, whatever it is that I'm trying to accomplish, writing an essay, creating a podcast,
whatever it is, right, when can AI help me? And in order to know the when, you have to understand what you're doing with enough knowledge to identify that, right? So there's still that foundational knowledge that we need to have as humans to identify the when. And then once you identify the when, you've got to identify the how, right? Which tool am I going to use, and which tool is right for the job?
And you're not going to know that unless you interact and try different tools and figure out what it's good at, what it's not good at, where it can help me, where it cannot help me. And that's, nobody has the answers to that currently. Like people are trying to figure that out. I have spent hours upon hours upon hours in several different AI chatbots still just trying to figure out even in my daily work. Like,
Seth Fleischauer (26:21.454)
Mm -hmm.
Lindy (26:35.051)
where it can help me, and then extending that to the effects on education. I liked what you said about, you know, maybe the ultimate output is the same, but we change the process. Because that's exactly how I've been thinking about it too. And using writing specifically, because writing is what's really in the middle of the dartboard right now when it comes to AI. Because AI is really good at text-to-text, input-to-output, currently.
Seth Fleischauer (26:55.662)
Yeah.
Lindy (27:03.595)
That's what it's really great at. And so yes, we still need to know what good writing is, what a good essay is, to be able to evaluate that output. But if we don't change and focus more on the process of writing, then we're going to be in this constant battle in education of fighting the technology, which I don't think is healthy for anyone. We've done that for years in education. I
use the example, I feel like every time a new technology comes out, it's like we have to strap on our battle gear as educators and go stand on the front lines, like, I'm ready to fight the battle. We can't do that. It's not healthy in terms of wellbeing, and it's not what's best for our students and their futures and how the technology is going to affect their lives, right? So instead we have to think about
Seth Fleischauer (27:34.542)
(laughs)
Lindy (27:55.371)
changing the process, focusing more on process over product. And for example, with writing, we can't not use AI during the writing process; we have to teach students how to use AI, right? Because it can be really powerful. A lot of people struggle with writing. And
part of it, a lot of people will say, is that a blank page just intimidates me, right? I don't know where to start. I don't know how to organize my thoughts. Well, right there, AI can help a learner make an outline, at least get a start, give them a draft of something to begin with that they can then learn how to edit. But it takes a very mindful, thoughtful educator, one that understands how to do that and understands the technology, to help
guide students in determining how AI can best support them on that path. But if we decide to totally say, nope, we're going to completely ignore AI in the writing process and just keep teaching it how we've always taught it, to me, that's not where we should go in education at all, because we know that they're going to use it. We naturally as humans try to save time.
Right? Time is our most precious commodity as humans. So think about giving a task to a student, an employee, a person of any kind, that they can just go and copy and paste from someplace else. Let's not lie. As adults especially, we're very busy. We have jobs that take way more time than we have, especially teachers. If you really stop and put yourself in those shoes, would you not?
If your boss gave you something that you could just go and copy and paste, would you not do it? Right? Like, if it would save you a ton of time, maybe, if the output was good. Some of you might be like, no, no, no, no, I would never ever do that. But really just stop and think about that, right? So anyway, that was a very rambling way of going around it.
Seth Fleischauer (29:53.774)
even if the output was good.
Seth Fleischauer (30:04.814)
Yeah.
Seth Fleischauer (30:08.718)
Yeah, no, you know, I think about, like you were talking about, just experimenting with this. What's so crazy about this technology, and I think in the beginning you talked about it, what did you say, rules-based computing as compared to intent-based computing, right? It's this idea that it's giving you something different every time, and the relationship that each person has to the technology is going to be different. It's almost like the tool kind of
adapts itself to the needs of the person. And this is unlike anything that we've ever had, right? So the experiment that we're running right now, of all of these users across the world trying this out and seeing what's good for them and what's not, I think is what needs to happen in order to understand how best to harness it. The question for me is, okay, so what about kids, right? And how young, right? What are your thoughts on that in terms of
when is too early to introduce kids to a technology like this?
Lindy (31:11.051)
great question. I have thought about this so much. I've had so many conversations about this. There are people out there that are in my shoes that are saying like, no, like elementary kids, we shouldn't even do it. We shouldn't even go there. I don't agree. And my don't agree is followed by many, many months of thought and many, many conversations going back to the idea of AI literacy, right? We have to teach how to use responsibly.
they're going to be using it at home, right? We went through this, right? We went through this about 2011, 2012. I was known as the iPad girl, because all the schools were buying iPads. Most people didn't even know my name. They're like, you're the iPad girl, because all these schools are buying iPads and need to know how to use them. So I was going in and doing lots of training and support, and they're like, the iPad girl, the iPad girl is coming in, right? And at that point it still
Seth Fleischauer (31:44.622)
if they have access, you know.
Lindy (32:09.131)
wasn't super common that every kid had one at home, or that every parent necessarily had a smartphone. And so we were having the same conversations: should we have the kindergartners on the iPads? And I had schools, K-12 schools, and just using this as an example, maybe they didn't start using the technology till third grade. But then guess what? We realized that they go home and they're using that technology.
But this is what I really want educators to think about: yes, we have digital natives, they've been around technology. Technology typically comes pretty easy to Gen Z, and Gen Alpha especially, right? But they know how to use technology for fun. They don't know how to use technology for learning and productivity. So that's what we have to do in schools. And we can sit and debate
on and on and on about whether this should be the job of educators or the job of schools, but it is what it is. It ultimately falls on schools, which then ultimately falls on educators. And I think that's where we're currently at. So to me, I don't think we can set a rule like, you know, only kids after third grade can use this technology. That does not mean, and I want to make sure to add this disclaimer,
that we don't make sure that the technology is safe, secure, age-appropriate, developmentally appropriate. That is so, so, so important. And part of that is, again, having educators that understand the technology, that know the pros and cons, that understand the limitations, to be able to choose the right tools that are developmentally appropriate for the different ages. I do even have some
primary teachers that are already using school-friendly AI tools, not necessarily having students access the tools directly. Let's use the example of story time. We read a book, we learn about a rabbit. Well, let me give a better example. We read, what's the book? You can tell I'm not a primary teacher, but the caterpillar, the hungry caterpillar book.
Seth Fleischauer (34:26.702)
yeah. Yeah.
Lindy (34:30.347)
That's not the exact name. Let's say we're reading The Hungry Caterpillar, right? The teacher could then create an AI to talk to the hungry caterpillar, or to talk to the author of the book, and that could be part of a follow-up. So we don't have kids interacting directly with it, but it's up on the screen and we're having this conversation. And as part of that conversation, the teacher is able to explain, hey guys, this is an artificial intelligence. Do you know what that means?
It means it's not a human. It's not real, right? You talk about how it's a machine and how it's trained, and you start introducing some basics. You can see me turning in my chair like I'm doing this, really getting into the role for those viewing rather than listening to the podcast right now. Teacher mode here, teacher mode here, right? But it opens up that opportunity to have those conversations. And guess what? Say, hey,
Lindy (35:27.691)
you know, the hungry caterpillar, as the AI, said this. Do you think that's something the hungry caterpillar in the book would have said? No? Why don't you think that's something the hungry caterpillar in the book would say? Well, guess what? AI doesn't always get it right, just as humans don't always get it right, right? You're opening up that conversation. You're starting that responsible-use talk that has to happen, so that when they go home and they grab their tablet,
smartphone, whatever it is, and start accessing these tools. Which, by the way, we're already at a point where you don't have to have an account to use the free version of ChatGPT anymore. That changed in the last month or two. Meta AI is the new chatbot on the scene. It came out in the last month or two, and it does not require an account either. So literally anybody can go to the website and start using it. And there's a lot of discussion about the goods and the bads, but
it's the reality of where we're at right now. So to think that just because a kid is five, six, seven years old, whatever you want to set the age at, they're not using and having access to this technology outside of school, even if you're creating that walled garden within the school, a lot of them, not all, are going to have access outside of school. So I think it's imperative to have those conversations in school with kids of any age.
That's my long-winded way of coming to my conclusion.
Seth Fleischauer (36:58.094)
Yeah, and I think what you were illustrating there are some different levels of use, right? There's modeling the use of it. There's teaching about how it works as a first step. There are a couple of steps before you actually put the tools in the hands of the students, at least in a way that is sanctioned. But yeah, I do think these tools clearly are going to be critical to people being able to do their jobs well, or many jobs well,
in the future, and I think that denying them access to a tool like that at school is an equity issue. And you can kind of start and stop there as an argument for teaching the use of them in schools. How we do that, I think, is the topic of maybe another podcast. But Lindy, thank you so much for coming here today. Where can our users, sorry.
Somebody's opening my door. I know it's a cat. Lindy, where can our listeners find you on the internet?
Lindy (38:05.707)
Yes, so I am on all the social medias at Lindy Hockenberry. L-I-N-D-Y, and then Hockenberry has a B-A-R-Y at the end, tricky. And then check the show notes and you'll get links to my websites, where you can connect with me there as well.
Seth Fleischauer (38:21.262)
Awesome. Well, thank you so much again for being here. For our listeners, if you would like to support the podcast, please do tell a friend, follow us, leave a rating or a review. Thank you as always to our editor, Lucas Salazar. And remember that if you'd like to bring positive change to education, we must first make it mindful. See you next time.