Legal Lunch Room

Artificial Intelligence, or AI, is a hot topic for almost every industry these days, including education. What happens when students and teachers want to bring new technology into the classroom? What are the inherent risks with any new technology? Will AI change teaching and learning forever? On our next episode, we dive into the transformative role of artificial intelligence within educational institutions. We hope you’ll tune in!
Be sure to subscribe to the Legal Lunch Room wherever you get your podcasts.

Legal Lunch Room is a production of the KingSpry Education Law Practice Group. It is meant to be strictly informational and does not constitute legal advice. Should you have any questions about topics covered on the show, please consult with your local legal counsel. You can also visit our website at www.kingspry.com to find more education law information and resources.

Our audio engineers are Steel Pixel Studios. Our theme music is by Don Loughney and our research assistant for this episode was Morgan Inman. Have a suggestion for a future episode? Write to us at legallunchroom@kingspry.com.

What is Legal Lunch Room?

The Legal Lunch Room, a new education law podcast from the KingSpry Law Firm, invites attorneys and school leaders to sit at our table and share how current education law trends and court decisions impact school boards, students and our community.

Jonathan 00:19
Artificial intelligence, or AI, is a hot topic for almost every industry these days, and that includes education. What happens when students and teachers want to bring new technology into the classroom? And what are the inherent risks with any new technology? Will it change teaching and learning forever? In this episode, we dive into the transformative role of artificial intelligence within educational institutions. I'm Jonathan Huerta.

Rich 00:40
And I'm Rich Campbell. We're attorneys with the KingSpry law firm in Bethlehem, Pennsylvania. Welcome to the Legal Lunch Room. Each episode, we'll be looking at the laws that affect school boards, administrators, and students in our community. Thanks for tuning in. Joining us today is education attorney Nikolaus Baikow. Nikolaus and Jonathan have been presenting trainings and answering questions for our school clients on AI. So welcome, Nick, it's great to have you with us on the podcast.

Nick
Thanks, Rich, it's great to be here with you.

Rich
So, just to set the stage: AI in its current state is more accurately described as generative AI, based on large language models such as ChatGPT and Meta's LLaMA, and generative AI used to create images, code, and audio, with prominent examples being DALL-E and Stable Diffusion. Now, I asked ChatGPT before we came on what the most frequent concerns regarding AI and AGI were, and, to name a few, it listed job displacement; perpetuating and amplifying bias and discrimination; privacy; relinquishment of human control to AI systems; security risks; social inequality; and even existential risks, such as the long-term threat to human existence. But today, we're here to talk about education. So, Nick, let's dig in. What kind of privacy concerns should districts be cognizant of when considering using or implementing AI in the classroom?

Nick 02:41
Well, Rich, number one is AI and FERPA: student information cannot be released without parental consent. Districts need to read the fine print of the program they are using to see if a profile of a student is identifiable or being sold, and contract provisions should be reviewed to strictly prohibit the release of FERPA-protected info. Number two is confidentiality: districts should be transparent about what information is being collected if a program uses AI, such as if the program uses a student's timed answers and keystrokes to help them learn, or if lessons are being recorded. Many teachers worry that their own information will be used beyond the program's intended use, for things like targeted advertising, or for student-centered conversations like mental well-being or personal strengths and weaknesses. The best policy is thorough review and transparency.

Rich 03:27
What's your instinct on a district plagiarism policy?

Nick 03:31
Schools should maintain a zero-tolerance policy for graded assignments: work substantially produced by AI is plagiarized work and should be treated that way.

Rich 03:40
Well, that's a good concept: work substantially produced by AI. Do you have any sense of how districts can enforce this policy? Are there AI programs, for instance, designed to police the AI programs?

Nick 03:55
Well, Rich, the answer is yes, and they are being developed right now. There are, in fact, some options right now for districts to do this. This gets into what we call AI detection tools. There are a number of programs geared for this, but they vary in their accuracy and how far advanced they are. One of these is Turnitin, which is geared toward detecting AI-generated writing, and one of Turnitin's concerns is limiting false positives. So I think one of the challenges for students is how to write well without sounding like their work is AI-generated.

Rich 04:45
What about using AI for purposes beyond plagiarism, such as hiring and reviewing resumes or even lesson planning?

Nick 04:55
Well, Rich, when it comes to misuse, like false information provided by AI, gibberish essays full of buzzwords, and the intrinsic bias of AI, different aspects of running a school district can run into trouble. But all aspects can be helped if AI is treated only as a tool, not as a replacement: a human employee should review and oversee anything done by AI. This is called the human-in-the-loop approach.

Rich 05:19
To your knowledge, are districts reviewing or integrating this human-in-the-loop approach, or is that something that we would advise clients to do?

Nick 05:27
This is something that we would advise clients to do. There is no statewide policy on AI use in schools within Pennsylvania. Oregon has already put one together, and we expect that we'll see more guidelines come about in Pennsylvania.

Rich 05:40
I guess there are two frameworks you can look at: one for the students and one for the teachers. Do you see a distinction between the two, in the sense that students might be submitting AI-generated work, but teachers should be allowed to lesson plan with AI if it can assist or enhance their ability to help their students?

Nick 06:01
There are a number of different approaches right now, and one is allowing teachers to generate content. However, when teachers do so, they just have to be mindful of what the risks could be, and if they are, it could benefit their curriculum.

Rich 06:14
Let's come back, then, to this human-in-the-loop approach. Does that require any specialized training? For example, are colleges or universities providing any kind of coursework or degrees designating a human as qualified to oversee AI?

Nick 06:32
So, Rich, I think the short answer is not really, right now. Human oversight has been seen as a key mechanism for the governance of artificial intelligence and its development, and it is a key focal point in the EU Commission's policy on AI safety. Human oversight right now is described in pretty general terms. We look at things like understanding the capabilities and limitations of the AI system, being aware of automation bias, being able to correctly interpret an AI system's output, and being able to decide not to use an AI system, or to disregard, override, or reverse what an AI system recommends or suggests.

Rich 07:12
It seems like this is a very valuable skill set, perhaps, looking into the future: the ability to oversee AI. So perhaps we'll see some evolution there in terms of accreditation or educational offerings.

Nick 07:34
Right now, when we look at a quality AI system, Rich, we look at things like aligning with high-quality learning, including the enhancement of equity. Systems must be inspectable and explainable; they should provide human alternatives to AI-based suggestions; they should minimize bias and promote fairness, while avoiding additional time burdens for educators and students. They must account for the context of teaching and learning and be accessible to teachers while in the classroom.

Rich 08:03
Thinking about the data that the AI is trained on is a really important concept. So, Nick, why is equity such an important part of AI in education?

Nick 08:15
It's because of the type of data sets that are used to create AI. AI is pattern recognition, and its initial data sets usually come from a small, limited pool of people who may not represent all of the learning capabilities of all students. So students who are not represented could be at a disadvantage if the system is unfamiliar with their particular needs.

Rich 08:34
Do you believe that programs or systems should be developed that evaluate a particular AI model's ability to incorporate equity, such as assigning a score based on a series of questions designed to assess equity?

Nick 08:50
Absolutely, and they are being developed as we speak. One approach is called "equity as code": each time a model is updated, it must undergo testing before being deployed and used. Continuous testing, monitoring, and observing prevent biased models from being used. We call this new approach to mitigating AI bias "equity as code" because the tests that enforce equity are built into the automated software itself. One example: Amazon's software recently detected its own bias in resume promotion and hiring, and they were able to point that out and solve it before using the technology.
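[Editor's note] The "equity as code" idea Nick describes, automated tests that gate each model update, can be sketched roughly as follows. The metric (a four-fifths selection-rate ratio), the threshold, and the data are illustrative assumptions, not any vendor's actual tooling:

```python
# Hypothetical "equity as code" gate: before a model update ships, an
# automated test checks its predictions for disparate impact across groups.

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per demographic group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def passes_equity_gate(predictions, groups, min_ratio=0.8):
    """Apply a four-fifths-style rule: the lowest group's selection rate
    must be at least min_ratio of the highest group's rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()) >= min_ratio

# Example: a model that outputs 1 (recommend) or 0 (reject).
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A: 3/4 selected; group B: 1/4 selected -> ratio 0.33, gate fails.
print(passes_equity_gate(preds, groups))  # False
```

In a real pipeline, a check like this would run in continuous integration on every model update, blocking deployment when it fails.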

Rich 09:42
So let's pivot a little bit, then. Let's talk about disruption, because that seems to be one of the topics that is top of everyone's mind. What about when someone uses AI to be disruptive?

Nick 09:54
AI is a tool, and it should be used as a tool, not as a replacement. If it becomes a disruption in the classroom, the school should investigate whether more stringent controls should be placed on its usage.

Rich 10:08
Nick, let me ask you this. Any other advice you would like to give districts on the subject of AI?

Nick 10:28
It would just be to be mindful of the technology you're using. Technology is a valuable tool; it can enhance the classroom and the curriculum. But like other forms of technology, it can be misused or abused, and it's important to be mindful of that.

Rich 10:41
Well, thank you very much, Nick. It was a pleasure having you on the podcast.

Nick 10:48
Thank you, Rich. Likewise.

Jonathan 10:54
Joining us today are Northern Lehigh School District superintendent Dr. Matthew Link, and the director of technology and operations for Colonial Intermediate Unit 20, Dr. Tom Kalinoski. Welcome, gentlemen. It's great to have you both with us in the Legal Lunch Room. Thank you. All right, let's keep this conversation about AI going. How are you seeing AI used most in the classroom so far? Matt?

Matt 11:14
For us, right now, it's on a limited basis. We're just kind of scratching the surface of it as a school district; we have a lot of work to do on professional development and making sure that our policies are in place to do it in an effective and responsible manner. Right now, it's kind of simple things, things that have been around for a little bit, like using virtual and augmented reality glasses, maybe for science experiments, where kids can get inside the structure of a cell or an organ. We're also starting to consider it more often from what would be an administrative and faculty lens: how can we use this to be more efficient in developing curriculum?

Tom 11:54
I think efficiency is key. I mean, we, too, are probably in the exploratory phases. That being said, I think that applications have used technology like AI for some time; it just hasn't been called AI. So think of the piece of software that you have that reinforces skills your teachers are already teaching. When the students go through and start working with that program, what will happen is, depending on how they answer the question (Did they answer it right? Did they answer it wrong? Did it take them 10 seconds to answer, or 60 seconds?), the application is already modifying, in the background, which question they're going to answer next. And that's sort of AI. So while ChatGPT and things like that are certainly new on the horizon, there have been very similar technologies in the past that many schools have used for quite some time. But I'm absolutely excited for what it has to bring, especially as it relates to efficiency.
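[Editor's note] The adaptive behavior Tom describes, software adjusting the next question based on correctness and response time, can be sketched like this. The thresholds, step sizes, and difficulty scale are invented for illustration, not taken from any actual product:

```python
# Rough sketch of adaptive question selection: after each answer, the
# software nudges the next question's difficulty up or down based on
# whether the student was right and how long they took.

def next_difficulty(current, correct, seconds_taken,
                    fast=15, slow=45, lo=1, hi=10):
    """Pick the difficulty of the next question (1 = easiest, 10 = hardest)."""
    if correct and seconds_taken <= fast:
        step = 2        # quick and correct: jump ahead
    elif correct:
        step = 1        # correct but slow: advance gently
    elif seconds_taken >= slow:
        step = -2       # slow and wrong: step back to reinforce
    else:
        step = -1       # quick and wrong: likely a gap, review
    return max(lo, min(hi, current + step))  # clamp to the valid range

print(next_difficulty(5, correct=True, seconds_taken=10))   # 7
print(next_difficulty(5, correct=False, seconds_taken=60))  # 3
```

Real products use far more sophisticated models, but the core loop (observe response, update difficulty, select next item) is the same.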

Jonathan 12:52
Yeah, so in my experience, districts have certainly taken one of three approaches: review, integrate, or ban. It sounds like both of your schools thus far have taken maybe the review approach, with some integration. Is that fair?

Matt 13:05
I would say review and kind of starting to creep towards integration. Yes.

Tom 13:09
Very small pockets of integration, I guess, meaning, okay, let's see what this can provide us. So let's use this tool with the individual, not the whole community, with guidelines on what to do and what not to do, and let's report back on what worked well and what didn't work well. But yes, definitely review.

Jonathan 13:32
Is it a building by building kind of decision? Or is it really coming from the central administration and filtering down?

Matt 13:38
I would say, at Northern Lehigh, it's almost like: who are our teacher leaders, who are the ones that are maybe brave enough to start trying it, either to help with lesson preparation or to actually get it in front of, or in the hands of, the students? We are certainly not banning it. If we have some teacher leaders who want to try it, we're not necessarily getting in their way. But we are watching, and we are learning.

Jonathan 14:04
You know, from that same perspective, then, are you getting more and more contracts that you have to review for AI usage? Or do you see educators allowing students to use openly available tools like ChatGPT under supervised conditions? And then there are certainly concerns with, say, COPPA or CIPA.

Tom 14:19
Just like with all contracts, you of course want to make sure that the education agency maintains ownership of that information. You want to make sure that the vendor isn't using what it learns from your students' behavior and sharing information that it shouldn't be, just like you said, with regard to FERPA and COPPA.

Matt 14:40
So just to build upon what Tom said: we look at all contracts, right, whether they're related to AI or not. But now my director of educational technology, curriculum and instruction, my technology director, and I are looking at things like, well, who really owns the data? Student data, teacher data, student privacy, teacher privacy, those types of concerns. Other legal concerns, and perhaps even ethical concerns, for me include accessibility for learners. If we're ever putting it in front of our students, can all students access it at the same level, whether it's a child with a disability or a child who is maybe an English language learner? If we're expecting them to use it at home, we have to be cognizant of where our families are coming from; maybe some families have those resources, and maybe some families don't. So as a school district, we need to try to level that playing field if we expect our learners to actively use it. In our district, whenever we're using technology, we always go back to consider our acceptable use policy. Now we're trying to figure out how current AI models fit into our acceptable use policy, or whether we have to change the policy.

Jonathan 15:52
Now you're saying all my favorite terms. During the last segment, we talked a little bit about individuals using AI to be disruptive. What do you recommend in terms of the policies we're talking about, the acceptable use policy, when either a student or an educator uses AI to be disruptive?

Tom 16:07
Again, I guess I would probably fall back to the acceptable use policy. So whether a student uses AI, like you said, for a deepfake or to initiate a swatting call, that acceptable use policy needs to outline what the acceptable behavior is, as well as what the repercussions are for using technology to do something that they shouldn't. And we need to make sure that we are clear with our students, so they can really be good digital citizens. We don't want to ban it. I once related it to scissors: yes, scissors are really dangerous, but we're not going to not allow our students to use scissors; we need to teach them how to properly use them. And I think the same is true of AI. So if your school has a digital citizenship class that the students participate in, that would be a great opportunity to educate them on AI specifically, as well as on lots of other tools. But with regard to discipline and using it inappropriately, you would probably follow whatever your acceptable use policy says at this point.

Jonathan 17:10
You mentioned the term swatting, and, for those listening out there, some may have been fortunate enough to never have experienced that. Can you just briefly describe for them what a swatting call is?

Tom 17:20
Sure. Swatting is when an individual purposely tries to solicit a response from a law enforcement agency. So if I were to say something like, "Hey, Billy has a gun, and he's in the bathroom right now prepping to shoot up the school," that's something a student might do to try to solicit that kind of response.

Jonathan 18:01
Matt, what are your thoughts on policy?

Matt 18:03
Again, going back to the acceptable use policy: if it's a student that's trying to use it in a malicious type of way, or, and certainly we hope not, if it was an employee trying to do something like that, we're going to look at the policy. We're also going to look at things like our student handbook, our employee handbook, and our student code of conduct to see where it falls. But again, we have to revisit those things every year to make sure they capture something like somebody misusing AI. Certainly, from case law, we know now that if a student is doing something on a district device, during the school day, or on the district network, we have kind of a domain over that, and we can impose consequences; our school board has the authority to do that, and our administration has the authority to do that. But if it's happening outside of the school day, not on a district device, and not on the district network, it's hard for schools to intervene. However, and Tom referenced this in the swatting context, school districts can also notify our local police departments. It may not be a school code violation or a violation of the code of conduct, but it can still be a criminal activity, depending on what it is. So we have to work together to educate our students, whatever that looks like, at the appropriate grade level.

Jonathan 19:32
Defining the schoolyard has certainly become much more difficult in recent years, both with technology like AI and certainly with social media, and we have a whole other podcast specifically on that. So we've talked a lot about students' use. What about the employment perspective? Have you started to bring AI at all into the employment context, whether for reviewing applications or resumes or for generating job descriptions? Let's go to Matt first.

Matt 20:02
Sure, thanks. So, not necessarily to review candidates' materials; we're not doing that yet in Northern Lehigh, but maybe that's an area for us to look into. But one of the things that we like to do during the interview process is to build in real-world scenarios for the candidate. So we've possibly used AI to generate, you know, what are some things that a third-grade mathematics teacher may be expected to do? What are some things that a director of buildings and grounds may be expected to do? And then we've accessed AI to help us come up with those types of scenarios.

Jonathan 20:41
And, Tom, what about your role as director of technology and operations? Is familiarity and strength in AI, and maybe even in code, something that you're looking for?

Tom 20:50
In a technology and operations department, it certainly provides advantages, and I think the staff in the technology department are probably naturally more adept with technology as a whole. So they're a little bit more comfortable playing with it and working together to see how AI can help them solve problems. And while we are only in that review and small exploratory phase, it definitely has its advantages, for instance in writing Excel formulas. Anytime you're dealing with student data, superintendents or school boards are always saying, "Show me the data." So you're talking to the application and figuring out: okay, this is where the assessment data is, in this column; this is where that data is, in that column; help me write a formula that outputs this. It's something our staff certainly have the capability of doing on their own, but they might have to work their way through it. Again, though, back to that education part: when interacting with these tools, we of course don't want to put in any student- or staff-specific information. And that's just part of the ongoing instruction that needs to be provided to staff: saying, hey, yes, you can use this to help come up with a formula to output data, for instance, but we're not putting our student names in there, we're not putting our student ID numbers in there, et cetera.
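[Editor's note] Tom's rule of thumb, strip student identifiers before anything is shared with an external AI tool, can be sketched as a simple redaction pass. The field names ("student_name", "student_id") are hypothetical examples, not any district's actual schema:

```python
# A minimal sketch: remove identifying fields from a student record before
# any part of it is pasted into or sent to an external AI tool.

IDENTIFYING_FIELDS = {"student_name", "student_id"}

def redact(record):
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

row = {"student_id": "12345", "student_name": "Jane Doe",
       "assessment_score": 87, "grade_level": 3}
print(redact(row))  # {'assessment_score': 87, 'grade_level': 3}
```

The point is that the redaction happens in code, before the data leaves the district, rather than relying on each staff member to remember what to omit.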

Jonathan 22:50
Makes sense. Well, then, the final question for both of you: what do you see as maybe the biggest opportunity, or what are you most excited about, with AI? And then, the flip side of the coin: what are you most cautious or concerned about? We'll go to Matt first.

Matt 23:08
What I'm most excited about is: how can we make our people more efficient? How can we make our employees more efficient, to get whatever the project is done, but also give them the oversight of actually saying, "I need to ultimately make the decisions"? So AI is a tool, right? For our students, I'm most excited because this is where the workforce is going, and if we can help them get out ahead of that and truly understand how to use AI in a responsible and ethical way, then they're that much better prepared to go into higher education or directly into the workforce. So I think there's a lot of opportunity there for both our students and our employees.

Jonathan 23:49
Absolutely. Tom?

Tom 23:51
You said the efficiency word again, and I'm going to continue on it. I mean, it doesn't matter what school district you're coming from, you're always asked to do more with less. So if you can use a tool like this to help you do exactly that, do more with less, I think I'm all for it. With regard to cautions, or things to watch out for, it's tough. I think we as schools need to take a look at how we're assessing our students, because, and we didn't necessarily get a chance to talk about it yet, people may be using AI in ways that they shouldn't, or unethically, like cheating on tests or exams. That certainly exists. So I think there's going to be a lot of back and forth, working with teachers and making sure that, as we're providing instruction and assessments to our students to verify competency in specific areas, we probably need to rephrase the questions on those assessments so you can't necessarily just look the answer up with AI, and so it gives the student a chance to actually demonstrate their competency. So I do have a little caution there, in just making sure everyone catches up and that we're using the tool to the best of our ability, on both the teacher and the student side.

Jonathan 25:10
That's a great point. You know, the comic book nerd in me feels compelled to quote Spider-Man: with great power comes great responsibility. So I want to thank you both, Dr. Link and Dr. Kalinoski, for joining me today and talking about at least where we're starting to go. Both of you are good examples of how schools don't have to be afraid, don't have to do an outright ban, and can simply start to dip their feet into the water that is AI. So thank you again for joining me today.

Matt 25:37
Thanks for having us.

Rich 25:42
You know, Jonathan, it's interesting to see such diverse and substantive issues raised so quickly with this new technology, and it really gives me pause to think that this technology is in its infancy. I've interacted with ChatGPT and DALL-E at length, and I believe folks need to have serious conversations now and assume that AGI will arrive sooner rather than later. The implications are far-reaching and profound, and it makes more sense to be prepared rather than caught flat-footed after the fact.

Jonathan 26:13
You know, Rich, I couldn't agree with you more. As we conclude this insightful journey into the realm of AI in schools, it's impossible to ignore the potential it holds to revolutionize education, from personalized learning experiences to efficient administrative processes. But it's also crucial to remain vigilant about potential ethical concerns and data privacy issues, and to ensure equitable access for all students. Let's continue to navigate this technological landscape with care and consideration, prioritizing the well-being and development of our students above all else. Thanks for joining us today on the Legal Lunch Room. If you like our show, please subscribe wherever you get your podcasts. You can also visit our website at www.kingspry.com to find more educational resources and programming.

Rich 26:59
This podcast is a production of KingSpry's Education Law Practice Group. It is meant to be strictly informational and does not constitute legal advice. Should you have any questions about our topic today, please consult with your local legal counsel.

Rich 27:14
And this episode was produced by Steel Pixel Studios. Our theme music is by Don Loughney, and our research assistant for this episode was Morgan Inman. Have a comment or suggestion for a future episode? Write to us at legallunchroom@kingspry.com. Until next time, I'm Rich Campbell.

Jonathan 27:31
And I'm Jonathan Huerta. We hope you'll join us again in the Legal Lunch Room.