The Transform your Teaching podcast is a service of the Center for Teaching and Learning at Cedarville University in Cedarville, Ohio. Join Dr. Rob McDole and Dr. Jared Pyles as they seek to inspire higher education faculty to adopt innovative teaching and learning practices.
This is the Transform Your Teaching podcast.
Ryan:Hello, and welcome to this episode of Transform Your Teaching. In today's episode, Dr. Jared Pyles reflects on his experience and presentation at the Ohio STEM Innovation Summit twenty twenty five.
Ryan:Thanks for joining us.
Jared:Welcome to the Transform Your Teaching podcast. A little different today. My cohost, Dr. Rob McDole, is sick. And in the past, when I've been sick, he has continued on through.
Jared:And I thought, well, if he can do it, I can do it. And I'm one-upping him: I'm doing it by myself. Full disclosure, Ryan's over here as well, but Ryan also has a really rough voice, so it's almost like I'm doing it by myself. I figured, I used to teach high schoolers, and I taught college students as well, and sometimes that was almost like teaching to an empty room. So this feels very similar to that.
Jared:So we'll see what happens. We'll see if I can go our normal twenty five minutes; it might be more like twenty. As an educator, I never liked it when the speaker at our professional developments would end like five minutes early and say, we give you the gift of time. I'm like, that's not a gift. Time is a thing. You just can't give it or take it.
Jared:Well, you can take it. You're taking it right now. But maybe I'll give you the gift of time with this episode; we'll see. What I wanted to talk about is something that is near and dear to my heart: I enjoy going to conferences and going to learn.
Jared:I'm a big fan of being a lifelong learner, and I figure I have to model that. So I go to conferences to ingest, and sometimes I've been known to present. I'm not totally comfortable with presenting; as you may know, I am an introvert. I think I've said that on the podcast before, and it tends to drain me a bit to do public speaking.
Jared:But I do enjoy doing it. I like to pass on the knowledge that I've found and researched to others, because I know how engaging and informative professional development can be, and those conferences especially. So recently, I went to the Ohio STEM Innovation Summit. I want to reflect on the conference, and I also want to share what I presented on, which was generative AI and prompt engineering. If you're in Ohio, I would strongly recommend attending this conference.
Jared:I've been to several conferences in Ohio, and this is by far the most active and engaging conference that I've ever been to in the state, and it's only in its third year. I've been every single year. Now, again, full disclosure: my sister works for the Ohio STEM Learning Network, and I believe her title is project specialist. She runs this entire summit herself, and it's a one-day conference. It's just brimming with excitement.
Jared:The sessions are incredible. It's a very vibrant community. I think it comes with being a STEM educator, as well. If you're in STEM, you know the tribe of people that you are around. They're very engaging, they're very collaborative, because STEM itself is very engaging and collaborative.
Jared:So imagine all those people at a conference together; it's pretty exciting. The sessions cover innovative teaching strategies and manipulatives. There's always robotics, and there are Legos and puzzles and everything to get students thinking that way in STEM, more interactive stuff. And then collaboration as well. There's a lot of talk about collaboration and networking.
Jared:Interestingly, there is not as much on generative AI. I mentioned that I spoke on that; mine was one of only two sessions on generative AI, but it's slowly sneaking its way in there. More people wanna know more about it. And again, it's not really a fault of the conference organizers or anything else.
Jared:It's because, like we've talked about on this podcast, it's a fairly new endeavor and a new technology, and we're at the point, like with the calculator or the Internet, where we're just not sure where it fits in education. And considering the field of STEM, where you've got computer science and programming that your students are learning, it could just be, and this is all speculation at this point, that a lot of STEM educators see it as the end product of what they're trying to teach: the programming languages and the concepts of computer science. And I'm an outsider on this. I mean, I was a language arts teacher. I started integrating technology into my teaching fairly early in my career, and then I transitioned over to higher education and instructional design.
Jared:So I've never really been in the field of STEM, and I've always felt like I stuck out, like language arts didn't really fit, because my big struggle was: yeah, but how are they gonna read books and write essays in a STEM school? And I've been educated by those at the STEM summit and others over the years about how it fits, and it's been a very rewarding experience. But I think generative AI is at this point where they're trying to figure out just where it fits. So my presentation was how to approach GAI through prompt engineering. And I just wanna share a bit about the presentation and the framework that I showed them.
Jared:Again, this is all readily available on Google, but I feel like it's very overwhelming to do a search for a prompt engineering framework, because everyone has their favorite, or everyone, it seems, has a prompt framework to use. But I found one, and I guess I'm just adding to the pool at this point, but I wanted to share my experience in using it and what I presented to these educators at the STEM Summit. As we've talked about on the podcast, generative AI is very daunting because it's new. When you try to integrate GAI into anything you're doing, sometimes you'll get strange results, things you don't really understand, like, I asked you for this and you gave me that, or you hear stories of people getting strange results. I shared with them, and I've shared on this podcast too, my story of trying to do the trend of taking a picture of your dog and having generative AI turn it into a human.
Jared:And the results I got were terrifying, and I showed the results at the STEM conference, and they agreed with me. They were terrifying. But if you don't have an understanding of how to use it, I think that can be an issue. Not knowing how to use it is difficult because it's a relatively new innovation. And if I go back to Diffusion of Innovations by Everett Rogers, there are different attributes of a new innovation that affect whether someone will adopt it.
Jared:There's relative advantage, compatibility, complexity, trialability, and observability. I won't define all of those, but I want to hone in on complexity, because that can be a negative factor in trying something new: if it's difficult to learn how to use it, it's less likely to be adopted. And that's where support and training come in for those who want to use it. But right now it seems like there's not as much training on using generative AI, especially with prompt engineering. So that's how I addressed it at the conference.
Jared:I said, let's kind of peel back some of those layers and help out by providing a framework and talking about prompt engineering, because, quite frankly, it helps get rid of those extraneous, frustrating, confusing, and wrong results that you get. With generative AI, there is a definite need to control the input so you can control the output. If you go into it with just a random prompt, for example: I used it to help me prepare for this podcast session. I could have gone in with just a simple, hey, I'm doing this by myself. I need twenty five minutes of talking points on generative AI prompt engineering.
Jared:Who knows what results I would have gotten? They would have been okay, but I could have spent more time trying to refine them than I would have if I'd just used a well-engineered prompt in the first place. So it is a valuable thing. But there are things that happen behind the scenes with generative AI and the bot that you're using that I think we've talked about on the podcast before. Things like temperature.
Jared:Temperature is a parameter that controls how random the AI's responses are. It's usually a range of zero to one: zero being more focused and analytical, and one being the most creative. You can usually check it in the free version by just going to settings. It's typically set at about 0.7, so it's definitely more on the creative side. So say you've used the free version of ChatGPT, for example, and asked it to review a student's paper using a rubric.
Jared:You have all good intentions about using it, and then you scan through the results and realize it's adding sentences into the essay that aren't even there. It could be because the temperature is more on the creative side. Right? If you're able to bump that temperature down, that can help. You can also run into issues with tokens.
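When you call a model through an API instead of the web interface, temperature is just a field in the request. A minimal sketch of the JSON body for a chat-completions-style endpoint follows; the model ID is a placeholder, not anything named in the episode:

```python
import json

def chat_request(prompt, temperature=0.2):
    """Build the JSON body for a chat-completions-style API call.
    A low temperature (here 0.2) nudges the model toward focused,
    analytical output; the common default of about 0.7 is more creative."""
    return json.dumps({
        "model": "example-model",      # placeholder model ID
        "temperature": temperature,    # 0 = focused, 1 = most creative
        "messages": [{"role": "user", "content": prompt}],
    })

body = chat_request("Review this essay against the attached rubric.")
```

Sending that body to a real endpoint with temperature lowered is the programmatic version of "bumping the temperature down" in settings.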
Jared:Tokens are like currency: the models spend tokens to process input and generate responses. Both your input and your output cost tokens, so if your prompt or your source material is too long, you'll get an error about the token limit, or your output will get cut off. And if the conversation goes on for too long, you can run into input and output limits. The other one that I think is really important is the context limit, or the context window.
Jared:The way I described the context limit to these educators: it's like reading a book where every time you turn to a new page, you have to go back and read every previous page first. So imagine doing that. That's kind of like the context limit or window. Every time you input a new prompt and the output comes, there is sometimes a limit, a window, on how much the bot then goes back to read. It doesn't keep everything you say in its quote unquote memory.
Jared:It has to reread everything that's already been said, and there's a point where it doesn't read any more. There's a window there, and the window shifts with every prompt and every response that you get. Before I knew about this, I was so upset, because I had this fantastic bot that I'd spent a long time refining, since I hadn't engineered my prompts well to begin with, and I was super happy with my results.
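The token budget and sliding context window described above can be sketched in a few lines. The four-characters-per-token estimate is a rough heuristic of my own (real services count tokens with a proper tokenizer), not anything from the episode:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    # Real services use a proper tokenizer to count exactly.
    return max(1, len(text) // 4)

def trim_to_window(messages, max_tokens):
    """Keep only the most recent messages that fit in the token budget,
    mimicking how a model 'forgets' the oldest turns in a long chat."""
    kept, total = [], 0
    for msg in reversed(messages):          # newest first
        cost = estimate_tokens(msg)
        if total + cost > max_tokens:
            break                           # oldest turns fall out of the window
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order
```

Once the conversation exceeds the budget, the earliest turns silently drop out, which is exactly the "it starts to forget what I'd already said" behavior.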
Jared:And then all of a sudden the results start changing. The output starts changing and it starts to forget, quote unquote forget what I'd already said. I'm like, what is going on? Until Rob and Hai Song in our office were like, well, what's the context limit? And I was like, I don't know.
Jared:What's that? They also asked me what the temperature was at one point, and I said 104. And Rob went, that's the temperature of your CPU. I was like, oh, I don't know what it is then. So they laughed at me.
Jared:That felt really good. So the important part is prompt engineering; that's where my focus was. When I talked to them about frameworks and how to engineer a prompt, I said: you can Google search and find tons of frameworks to help out. We did this in the office, and it's just nonstop. But it really boils down to three elements that I presented, and I think this is important for you to know as well.
Jared:Task, persona, and context. Task, meaning what specifically you want the output to do: do you want it to explain, summarize, compare? Persona: what role do you want the model to have? This can shape its tone, style, and expertise.
Jared:If you are asking it to read over chapter one of your dissertation, not that I would know anything about that, you would say: you are an expert in dissertation introductions. Read this chapter one and give me feedback. And then the context: what background information, audience level, or even constraints do you want to give the bot so that the output can perform the task effectively? The context guides how the model performs the task. If you just remember task, persona, and context, that's the really important part.
Jared:TPC: task, persona, context. Every framework is going to have those three elements. So here, again at a STEM conference, is the prompt that I shared with them: You are an engaging sixth grade science tutor who uses real world analogies. There's the persona.
Jared:Here's the second sentence: Explain the process of photosynthesis step by step. There's the task. Then the context: your audience knows basic plant parts but has never seen chemical equations. Keep it under 200 words and include one everyday analogy, like a solar panel.
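The sentence-per-element pattern above can be captured in a tiny helper. The function name and the ordering (persona first, then task, then context, matching the example) are my own sketch, not part of the presentation:

```python
def tpc_prompt(task: str, persona: str, context: str) -> str:
    """Assemble a prompt from the three TPC elements: persona first to
    set the role, then the task, then the context, one sentence each."""
    parts = [persona.strip(), task.strip(), context.strip()]
    if not all(parts):
        raise ValueError("task, persona, and context are all required")
    # Make sure each element ends as its own sentence.
    return " ".join(p if p.endswith((".", "?", "!")) else p + "." for p in parts)

prompt = tpc_prompt(
    task="Explain the process of photosynthesis step by step",
    persona="You are an engaging sixth grade science tutor who uses real world analogies",
    context="Your audience knows basic plant parts but has never seen chemical equations",
)
```

Filling in the three arguments reproduces the photosynthesis example as one prompt string.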
Jared:So, again: task, persona, and context. And it's not that difficult if you just break it down sentence by sentence: your task is a sentence, your persona is a sentence, your context is a sentence. What I shared with them is my current favorite framework, partly because of the name alone: CRISPE, C-R-I-S-P-E. I love the CRISPE framework because you can come up with some crispy prompts.
Jared:That's what we came up with when I presented. So let me break it down. It's an acronym, right? C-R-I-S-P-E.
Jared:The C and R are capacity and role. In other words, you define the function or perspective, and I'm going to include all this in the show notes. Again, I didn't invent this; I found it online, and it's by far the best one I've found, in my opinion.
Jared:But I'll put the link up for that so you can find it where I found it as well. So, capacity and role define the function or perspective. Insight: provide relevant background or context. That's the I, and that's also where your context goes. Capacity and role, again, is the role.
Jared:So where does the task come from? The task comes from the rest of it. S is your statement, where you specify the task clearly. There's your statement. Personality is the P.
Jared:Determine tone and style. And then E: experiment. I was a big fan of this part; it's what really sold me on presenting CRISPE. Experiment means encouraging multiple examples or approaches.
Jared:So it gives you a chance to say: give me three examples, give me two examples, give me ten examples. It gives you a chance to look over all of them. You have options; you can maybe synthesize, or you can choose which one you want. Now, a CRISPE prompt is a lot longer than just the task, persona, and context one. So here is what I shared with them as an example of capacity and role, insight, statement, personality, and experiment.
Jared:And it goes in order, so here's the first sentence, the capacity and role: You are a renewable energy consultant specializing in solar power solutions for educational institutions. There's the role, the capacity. Next is the insight: Schools are seeking sustainable and cost effective ways to reduce their carbon footprint and educate students about renewable energy. Now here's the statement, the task: Develop a comprehensive solar power plan for your school that includes installation, cost analysis, and educational components. Then the personality: Be professional and encouraging, fostering innovative thinking and practical problem solving.
Jared:And here's experiment, my favorite part: Provide at least two different solar power setup proposals, highlighting their benefits and potential challenges. So again, CRISPE: capacity and role, insight, statement, personality, and experiment. We did some exercises with photosynthesis. I mean, these STEM educators love doing this stuff. And so I gave them plenty of time to practice and give examples.
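The five CRISPE elements map naturally onto a small data structure. This dataclass is my own sketch, with the solar-power example's sentences as the field values:

```python
from dataclasses import dataclass

@dataclass
class CrispePrompt:
    capacity_role: str  # C/R: the function or perspective the model takes
    insight: str        # I: relevant background or context
    statement: str      # S: the task, specified clearly
    personality: str    # P: tone and style
    experiment: str     # E: ask for multiple examples or approaches

    def render(self) -> str:
        # One sentence per element, in CRISPE order.
        return " ".join([self.capacity_role, self.insight, self.statement,
                         self.personality, self.experiment])

solar = CrispePrompt(
    capacity_role="You are a renewable energy consultant specializing in "
                  "solar power solutions for educational institutions.",
    insight="Schools are seeking sustainable and cost effective ways to reduce "
            "their carbon footprint and educate students about renewable energy.",
    statement="Develop a comprehensive solar power plan for your school that "
              "includes installation, cost analysis, and educational components.",
    personality="Be professional and encouraging, fostering innovative thinking "
                "and practical problem solving.",
    experiment="Provide at least two different solar power setup proposals, "
               "highlighting their benefits and potential challenges.",
)
```

Calling `solar.render()` yields the full five-sentence prompt in CRISPE order.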
Jared:It was overall a very rewarding experience for me. But there's something I ended with that I want to discuss with Rob at some point, because Rob and Hai Song both brought it up to me when they helped me brainstorm and research this: we found an article that said AI prompt engineering is dead. And I thought, what a great way to close this session on prompt engineering. It sparked a lot of really good conversations, because the headline was a bit misleading, kind of one of those gotcha headlines, clickbait headlines.
Jared:It didn't say that prompt engineering was dead. What it was saying was that it's changing. The way we currently approach it is going to change, because as AI advances, these bots are being trained by people using trial and error, which is what you do when you refine a prompt anyway. It's a lot of trial and error: input, output, revise, iterate.
Jared:This is what's so great about applying this to STEM education: STEM education is a very iterative process for students. It's always, let's do projects, let's get feedback, and let's revise. It's that cycle. It's really, really great. But as this happens more and more, the argument was that the need for having a framework, especially the prompt engineering idea, is going to go away.
Jared:Because the bots are going to get smarter, and there won't be as much need for us intervening and doing trial and error. I gave the example of NeuroPrompts. I'll put a link up for this as well. NeuroPrompts is an adaptive framework that automatically enhances a user's prompt to improve the quality of generations produced by text-to-image models. So basically, imagine if I had had NeuroPrompts when I tried to get a model to change my dog Oliver into a human, which sounds so weird to say out loud: you know, turn my dog into a human.
Jared:He'd be the worst human ever, because, I mean, he's a great dog, but, anyway, I digress. If I had used that prompt, I could have fed it into NeuroPrompts, and what it does is communicate with the other model and refine my prompt. And there's a great picture that I'll put up as well that shows the human-written prompt versus the NeuroPrompts version. The difference in the images is staggering.
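The general idea behind tools like NeuroPrompts, using one model call to rewrite a weak prompt before the real request goes out, can be sketched as a simple meta-prompt. This template is purely illustrative; NeuroPrompts itself is a trained, adaptive system, not a fixed string like this:

```python
def refinement_meta_prompt(user_prompt: str) -> str:
    """Wrap a raw user prompt in an instruction asking a model to
    rewrite it with an explicit persona, task, and context before
    the real request is sent."""
    return (
        "Rewrite the following prompt so it names a persona, states the task "
        "clearly, and supplies any missing context. Return only the "
        "rewritten prompt.\n\n"
        f"Original prompt: {user_prompt}"
    )

meta = refinement_meta_prompt("Turn this photo of my dog into a human.")
```

The string `meta` would then be sent to a model, and its answer used as the actual prompt for the image generation.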
Jared:Is this the future? Is there even going to be a need for prompt engineers? So I was stuck on this in my office, and I was like, well, what is the future of this? So what did I do? I asked ChatGPT myself: what is the future of prompt engineering?
Jared:So I used o4-mini, and this is the answer that I got. Prompt engineering isn't obsolete. It's evolving into a core AI design skill integrated with data retrieval, tool orchestration, automated prompt tuning, and model refinement. And this is the part that's really interesting. It said humans will act as architects, curators, evaluators, and stewards, defining what AI does, why it matters, and how it stays aligned with our needs.
Jared:And I think that's where the future is, because there are still going to be humans involved in this process. It's just going to be different. Instead of it being prompt engineering, where people can get concentrations or certificates on being a prompt engineer, there's going to be a need for creators, architects, and curators of AI, defining what it does. That, to me, is the future of it. It's going to shift, but there's still going to be a need for humans, because as Rob and I have talked about often, it's just a tool.
Jared:The tool is going to need to be sharpened and refined. There could be new tools that come out, but it's still a tool that we need to know how to use correctly. So I think that's what the future of it is. So, yeah, I mean, I enjoyed it. Again, I presented and felt like I needed a nap right away.
Jared:I felt like I should probably go home first, and I almost napped in my van. I'm kidding, I didn't do that. Again, it's the Ohio STEM Innovation Summit. If you're new to STEM, or if you just want to attend, especially from higher education: right now there's a lot of K-12.
Jared:There could be some nice higher education influence if we get some people to go. But again, it's a great experience, a very vibrant community. And they're curious about AI just as much as we are here in higher education. Sometimes I feel like there's a bit of a divide between the two, but they're facing the same challenges that we are. And so I think it's important to have this conversation, not only in higher ed, but in K-12 as well.
Narrator:Thanks for listening to this episode of Transform Your Teaching. If you have any interest in the Ohio STEM conference, or if you just wanna chat with Jared about his response, please feel free to reach out to us at ctlpodcast@cedarville.edu. You can also follow us or connect with us on LinkedIn. We'd be happy to chat with you there. Also, don't forget to check out our blog, cedarville.edu/focusblog.
Narrator:Thanks for listening.