Podcast by State Support Team 11
Eric Neal:
Welcome to the State Support Team 11 podcast. I'm your host, Eric Neal. Today we are joined by Dr. Thomas R. Guskey. Dr. Guskey is an award-winning author and recipient of many prestigious honors, including being named a Fellow of the American Educational Research Association, and is professor emeritus in the College of Education at the University of Kentucky. Welcome, Dr. Guskey. It's a great honor to have you on the podcast.
Dr. Thomas Guskey:
Well, thank you, Eric. It's an honor for me to be here.
Eric Neal:
That's great. Can you tell the listeners about yourself and the work that you do?
Dr. Thomas Guskey:
Sure. I am professor emeritus at the University of Kentucky. I'm coming to you today from my home in Lexington. I began in education actually as a middle school teacher, teaching grades seven and eight. I moved on to the high school level from there, and then decided to go back to graduate school to learn how to become a better teacher. So I began in Boston, and ended up at the University of Chicago, where my advisor and the chair of my dissertation committee was Benjamin Bloom. It was from that background that I became keenly interested in notions of educational measurement and evaluation.
Bloom at that time headed a program at the University of Chicago called the MESA Program, and MESA was an acronym for measurement, evaluation, and statistical analysis. So I graduated from that program and began working with the Chicago Public Schools. I began as a program evaluator, and then became the director of research and development for Chicago. I did that for several years, and then decided that I really wanted a university position, so I accepted an offer here at the University of Kentucky and moved to Lexington from Chicago. I've had opportunities to go and teach part-time at other universities, but this has been my academic home now for some years.
Eric Neal:
Yeah. That's interesting. It sounds like you've always, in your career, been focused on making improvements and helping others to make improvements. The bio on your website describes your work as helping teachers and school leaders use quality educational research to achieve better outcomes for students. What are some of the barriers that educators face as they move to turn research into practice?
Dr. Thomas Guskey:
Well, I think that one of the dilemmas we have is that those worlds often don't overlap. You might know that in recent years, I've been engaged in a lot of work in the area of grading and reporting student learning. In 2016, the American Educational Research Association was celebrating its 100th anniversary, and as part of that celebration, they put out a call to the field, asking different scholars and researchers if they would take on the challenge of trying to summarize the research in these different aspects of education over that 100-year period. So together with my friend Susan Brookhart, we put together an amazing team of scholars and researchers in the area of grading, and we were able to summarize that research base for the 100-year period. What surprised us the most in looking at this, I guess, is to understand how much we know, how long we have known it, and how little has found its way into practice.
I'm convinced there's just not another area in education where there's a greater gap between our knowledge base and our practice. And it's simply because our worlds, as researchers and practitioners, often don't overlap. I recall some years ago, I was invited to present at two different conferences. The first was the American Educational Research Association, which is actually the largest group of researchers in the world. The other was ASCD, the Association for Supervision and Curriculum Development. They both hold their national conferences in March or April each year. That particular year, they were two weeks apart.
I was able to look at the attendees of those two conferences; each had over 10,000 attendees. And among those 10,000, I just did a merge of the attendance lists of the two conferences, and there were fewer than 20-
Eric Neal:
Oh, wow.
Dr. Thomas Guskey:
That attended both. Yeah, so it just shows that the worlds of practitioners and researchers often don't overlap very much, and often we don't talk to each other very well. I think that if we're going to make progress, what we need are more people who are willing to take on that role of translating these amazing research results we have into what they mean practically, and what they can mean for teachers who are teaching under very demanding conditions and trying to do the best job they know how.
Eric Neal:
Yeah, I definitely see that. A lot of my work revolves around continuous improvement, and I see that all the time. I've received a lot of professional development and attended a lot of trainings, and I do my own research when I'm trying to find ways to help and support people. Part of what I do is act as that bridge, to take that knowledge and make it accessible, but also to weave it into the context of where people are at, what their capacity is to absorb that information, and how to integrate it into things they're already doing so that it doesn't feel like something extra or something different. That is one of my big challenges.
Dr. Thomas Guskey:
Yeah, that's really the case. And I think that it's important, especially for your organization and people in your role. More and more, people are recognizing the importance of that role, to really serve as that conduit between the research community and the community of practitioners. Too often in education, we get caught up in what's hot, what the hot ideas are that are out there, or who the top presenters are, who is the most entertaining and most engaging overall, and not really asking the question of how this is going to translate into things that are better for our students. I think if that question guides our work, then we can be a lot more productive and a lot more effective in what we're trying to accomplish.
Eric Neal:
Definitely. I discovered your work on planning and evaluating effective professional development a few years ago. Specifically, the five levels of data used to gauge impact. It really stood out to me because I found myself falling into the trap that a lot of people do, where we do a lot of things and people leave, and you're like, "All right, well, we've kind of just accomplished that." But could you explain the concept behind the five levels of data used to gauge impact?
Dr. Thomas Guskey:
Sure. I've always been fascinated by this notion of teacher change: how teachers change, and what prompts them to change. As a former classroom teacher, I was always trying to be aware of how to get better, and never really sure quite how to do that.
But when I graduated with my degree in measurement and evaluation, I began working with what at that time was called the National Staff Development Council. This is a council that was formed in the 1960s to bring together people in different schools and education organizations throughout the country that were engaged in this work of professional learning. And one of the major challenges that was posed to me as a new graduate in measurement and evaluation was, how can we evaluate professional learning experiences for educators? I had met with some of the organization leaders at that time, and this was a problem that they recognized and really needed some guidance on. So I spent some time investigating it, and I came upon a model that was developed back in the 1950s by a fellow named Donald Kirkpatrick. Kirkpatrick was interested in evaluating programs to train employees in business and industry, and he had developed a four-level model.
The first level was just people's reactions to the training. The second was what they gained from the training in terms of their skill development. The third was how they used that in on-the-job performance, and the fourth level was the increase in their productivity. So I thought that would provide a model for us that we could use in education, and I translated it into what it would be in terms of education, and specifically, professional learning programs for teachers.
So we talked about their reactions to the experience, what knowledge and skill they gained, how it was implemented by them and how it affected their classroom practices, and then finally, results in terms of students. And we tried that on several occasions over a couple of years, but things kept breaking down and we weren't sure why. When we began investigating this a bit more, we discovered that oftentimes it had to do not with the educational experience of the training itself, but with the organization. We were doing everything right from a learning or training perspective, but then sending people back into organizations that were not set up to support them in doing the things that we were asking them to do.
Oftentimes, we found that organizations actually imposed barriers or obstacles to that implementation. So we revised that evaluation model to include a third level that we called organization support and change. The five levels of the model are their reactions to the experience; what they learn, the increase in knowledge and skill; then organization support and change, which asks, is the organization really providing what's necessary for people to achieve high-quality implementation; the fourth level, implementation, the change in practices and policies, or the implementation effort in the school; and then finally, the results in terms of students. That seemed to be providing us with really valuable information about what was working, how it was working, and how we could make improvements. So we wrote about this first in, gosh, the late 1990s, and then developed a book called Evaluating Professional Development in 2000 that talked about these five levels. There are really three major implications of that evaluation model.
The first was that all the levels are important. Each requires a different type of evidence, and each requires gathering it in different ways, but each level is important in its own right. Second, each level builds on those that come before, and once it breaks down, it goes no further. People really need to have a positive reaction to any learning experience to learn anything. I mean, if they're sitting in that room and they're so cold that they're worried about their body temperature, they're not going to be able to learn very much. So that comes first; once they have a positive reaction to the experience, then you can look at what knowledge or skill they've gained from it. Then you turn and look at what organization they're going into, and whether the organization is set up to support them in achieving high-quality implementation. Fourth, you look at the implementation and its fidelity with regard to what the program specified. And finally, its impact on the students. Each requires a different kind of information, and each is important, but once it breaks down, it goes no further.
But the last implication, and probably the most powerful, is that when you plan professional learning, that order is reversed. In order to get to level five, that is where you must begin. You must begin planning professional learning experiences for educators in terms of what impact you want to have on students. What student learning outcomes do you want to influence? What evidence would you trust to verify that impact had been made? Then you take a step back, and you say, "Well, if that's the impact we want, then what are the policies, practices, curriculum, or instructional strategies we'd want to incorporate in order to gain those improvements?"
Take another step back. What does the organization have to do to support people in using that? What do we need in terms of materials, resources, or time? What kind of support do people need? What kind of administrative leadership is required? Then you take a step back and say, "Well, if that's going to take place, what knowledge and skill do people have to have in order to be able to implement it really well?" And the last thing you ask is, what professional learning experiences can we plan to provide people with that knowledge and skill? So those are the three major implications of the model, but again, it's that last one that seems to be the most powerful.
Eric Neal:
Yeah, I love that you included that part about how if you miss one of them, that's it, we're not going any further. I think I've seen that lots of times in my work. Most of the professional development that I've attended stops at level one. They just ask, "Hey, did you like it? Did it meet your needs?" Why do you think people stop there and think that just a survey is going to be enough?
Dr. Thomas Guskey:
Yeah. Well, because it's easiest, and we probably do that better and have the most experience doing it that way. But that is probably one of the reasons, too, that so much of what we do becomes ineffective. Just because you enjoyed it doesn't mean that it's going to have an impact on your practice.
Eric Neal:
Right.
Dr. Thomas Guskey:
And I know I've gone to these, and I think this is especially true when you go to large-scale conferences, national conferences in particular: you'll leave a presentation with a truly dynamic presenter. And people will leave that presentation and say, "Wow, that was so wonderful. It was so great. I really got so much out of that." And I'll say, "Well, that's great. How's it going to affect your practice?" "I have no idea." They really didn't gain anything from it. Or, how are you going to find out if this makes any difference for your students? Because if it doesn't make any difference for the students, why are we doing it? And so that's why our field tends to be dominated by, I think, well-intentioned consultants who are trying to do a good job, but we focus more on the entertainment value and the engagement value than we do on the value in terms of how it's going to translate to what we do at the classroom level, and how that translates to important improvements for students.
Eric Neal:
That makes a lot of sense to me. Level two measures participant learning. In my mind, I go right to thinking of a pre- and post-test, or something like that, where they can demonstrate that they actually developed the knowledge, skills, and abilities that you wanted them to leave with. Are there any important differences between this and how we think about assessing student learning when you make your lesson plan and deliver instruction?
Dr. Thomas Guskey:
Yeah. No, Eric, there's really not. In fact, if you go to the evaluation guidelines that were developed in conjunction with the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education, that book was actually dedicated to Ralph Tyler. Very few people know Ralph Tyler, although they do know Benjamin Bloom. What's ironic is that Ralph Tyler was Benjamin Bloom's mentor, and Ralph Tyler is pretty much known as the father of evaluation. And what he said in his early writings on evaluation is that if you are an evaluator, you don't have to answer all the questions. You just have to know what questions to ask. And he said, "When you plan any program, the first thing you need to address is what outcomes you want to achieve, and what evidence you would trust to verify that you had achieved those."
So the same would hold true in professional learning. If I want teachers to leave with a certain set of knowledge or skills, then as the teacher or leader of that session, I need to be able to describe what knowledge and skills those would be, and what evidence I would trust to verify that that impact had been made. Now, one of the challenging aspects for those who plan professional learning is that not all stakeholders trust the same evidence, and you need to be aware of the people who are going to make decisions and what evidence they most trust. For example, when I'm asked to defend a professional learning program to a board of education, I always use testimonials. I parade teachers in front of them who say, "I was frustrated. I was burned out. I was getting ready to take early retirement. I got involved in this program. It's the best year I've ever had. I am so much more effective. My kids are doing so well and I'm so happy with what I'm doing."
I parade students in front of them who say, "I hated school. It was disgusting. I was missing 30 days a year, just hated going in the morning, got involved in this program. It is amazing. I can't wait to get to school. Learning is exciting to me. I'm doing two hours of homework every night, and the letter from Harvard arrived yesterday."
Now, from a technical perspective, testimonials are a very poor source of evidence. They are highly biased, and they're based on a very limited sample of behavior. Why do I use them? It's the kind of evidence that board members trust. Board members are moved by testimonials much more than they are by charts and graphs and other displays of data I might give them. And so I always adapt to the stakeholder group to which I'm presenting that particular evidence. Now, if pressed, you need to be able to provide evidence that it is leading to the benefits that you would hope for, that impact on students or on teaching practices. But I think being sensitive to the stakeholder group that's going to be involved in decision making about your program is also very important.
Eric Neal:
Yeah, I think that makes a lot of sense. And I think something else I've seen as kind of a barrier is that we almost feel hesitant to treat participants who are adults in a way that might be perceived as more student-like or childish, which is being assessed.
Or I think maybe sometimes people might think, "Well, wait a minute. I'm a professional. I've come and I've listened to you. Of course, I've received the information." But it really is important that you're able to make sure that they left with the understanding that you wanted them to have, and that they're able to go and do it. I do something a lot of the time where I'll ask people at the end, "Do you feel like this met your needs, and do you really understand it in a way that you can go and implement it?" And everyone will raise their hands, and then I'll say, "Okay, can you go teach someone else to do it?" or, "Could you come up here and model it?" And all the hands go down pretty quickly, and it's like, "Well, it feels good, but I'm not confident enough to go out and really show you that I've mastered it."
Dr. Thomas Guskey:
Right. Well, you make a good point. Because it's absolutely true that when we think about evaluating what people gain from an experience, the sources of information you use are going to be dictated by the nature of the learning goals. So if it's just a knowledge base I'm trying to expand, I can get at that by asking questions, or even with a paper-pencil instrument.
But if I want to know whether people have developed a set of skills, then paper-pencil instruments aren't going to do that, just as they would not with students. I have to go to skill demonstrations, examples, practices, simulations, things like that, and be able to give people feedback on how well they displayed those skills, so that they can make revisions and adjust accordingly. You adapt your evidence to the learning goals. So when you plan professional learning, when you think about those learning outcomes you want people to gain, that's the second question: what evidence best reflects those? And can I gather that information as part of the learning experience so I can give people feedback on their learning? Then we can make adjustments as we go along, not only to judge the quality of their learning, but to judge the quality of our professional learning experiences.
Eric Neal:
Oh, definitely. I think of it in that way: this is an assessment of us, the people who are providing this, not an assessment of the people who are participating. Just like with a teacher, if students aren't doing well, it's not a sign of the students; it's a sign that the instruction wasn't meeting their needs.
Dr. Thomas Guskey:
That's exactly true. I think we, for some years, got off onto this track in professional learning that somehow adult learning is different from student learning. But as we look at this more closely, we're discovering that these are human traits; they're not age-dependent. The ways that students learn well, I mean tying it into their experience, giving them opportunities to practice, relating it to other experiences in their life, giving them ways to build on that knowledge, work just as well for adults as they do for students. And so we need to build those kinds of activities into more of what we're doing from a professional learning perspective for our educators.
Eric Neal:
Definitely. Level three looks at organizational support and change, and I see this as the conditions to support implementation of this new learning. I've had some training in implementation science, and they have a tool that I really love, an initiative inventory, which is a de-implementation tool. We do a great job of adding things to people's plates, but we don't do a good job of taking things off of people's plates. Do you think we do a good enough job of preparing instructional leaders to do this type of work, to make the changes in the organization that are going to support this? I feel like sometimes we ask a lot, especially of principals, to spin straw into gold and just keep finding more time in the day to make things happen.
Dr. Thomas Guskey:
Right. Well, no. Again, that's a really crucial point, and I think you're absolutely right. In my experience working with school leaders in all parts of the country and around the world, they are truly dedicated individuals who want to do a good job; they just don't quite know how. And with guidance, and with direction, and with suggestions on what might work, they're much more willing and able to make those particular differences. I find, too, that we also need to be very contextually aware. There was a concept developed back in the 1970s by researchers out of the RAND Corporation, when they were looking at program implementation, that they called mutual adaptation. Mutual adaptation implies that change from any innovation takes place in two directions, not just one. The people have to adapt to the innovation in order for it to be useful in that context, but the innovation also has to be adapted for them to use in that context.
Now, you can go too far one way or the other. If the people are asked to change too much, then you'll get implementation, but it'll be very mechanical and won't really be sufficient. It won't be applicable to that particular context. But if the innovation is changed too much, you might lose critical elements and undermine the fidelity of it overall. Leaders have to be able to find that critical balance, and that means offering guidance to teachers on how they can adapt this to fit their context while maintaining the principal elements of fidelity to that overall scheme. Since this evaluation model was developed, I looked recently and found over 6,400 articles have been written on its application. There have been over 25 doctoral dissertations based on the model, and it seems that most of them are looking at level three, just as you suggest.
And they consistently find, as you mentioned as well, that the principal tends to be a critical aspect of level three. In fact, the participation of building leaders, especially principals, is so crucial that if I'm called upon to work with any staff, before we begin our day, I always have a conversation with the principal, and I say, "This is my one rule. My one rule is, if you leave during this session, the one thing you can trust is this: I will be right behind you."
That communicates so much to the faculty, because if something in your schedule is more important than this, then they won't think this is that important. And most of them, well, I should say at this point, everyone I've ever talked to has said, "Oh, okay, I understand that," and they're willing to do it, but no one's ever said that to them before. I think we seem to be so overcome with everyday tasks that sometimes we lose track of the bigger picture and what can really make a difference. But it's their help in making that mutual adaptation, being able to take ideas and adapt them to the unique nature of that particular context, recognizing that in a school, we always have greater variation between the teachers within a school than we have between schools, between districts, even between states.
There's huge variation, and what that means is that in every school, there are some truly remarkable teachers who are doing outstanding work. They are far above anything we could even imagine. And in every school, there are some teachers who need help. To be able to make those judgments and see those differences, and to be able to provide appropriate guidance for those differences, tends to be really, really important. And that's where the sensitivity and sensibility of principals really comes into play.
Also, in terms of scheduling follow-up: we find that if any program is going to be really useful, it has to be implemented with some guidance and direction. The most serious questions people have always come once they start implementation. And you need someone with building-level responsibilities to set it up. It doesn't have to be a big deal, but you need a principal who's willing to say, "Okay, on Thursday after school, I'll buy everybody a Diet Coke and a donut. We're just going to sit down and see how things are going here." Now, I've gone to a lot of those meetings, and I'm supposed to be the expert. Rarely do I say a word. Because one teacher will bring up a problem, and before I have a chance to say anything, another teacher says, "You know, I had that same problem. Here's the way I've worked it out." But again, it's that leadership to provide those structured opportunities that seems to be most crucial in bringing that kind of collaboration about.
Eric Neal:
Yeah, it's a challenge to help them develop those skills. Level four reviews participants' use of the knowledge and skills, and I think of that in terms of your backward planning, starting at five and going backwards. If you do this part well, it'll really help you understand what needs to be done at level three. But I think a lot of times, if we really want teachers to implement something, they need support, which means you're not really being evaluative. So how can we measure participants' use of the new knowledge and skills they've received in a way that doesn't feel punitive to them?
Dr. Thomas Guskey:
Right. Well, just as we talk about assessment that's good for student learning, we need to focus on the formative aspects of it rather than the summative aspects. And this, again, is very similar to what we would do to assess student performance. To assess student performance, you need to develop a rubric of what excellent performance looks like. We know that in developing that rubric, there are two essential elements. The first is to identify the criteria of what excellent practice or use looks like. We've got to get that down to three or four things. People can't keep track of 10 things, just as students can't either. Get it down to three or four criteria of what it would really look like. Then you identify levels of quality, gradations of quality, within each of those criteria. You use that as a part of your educational experience.
You use it as part of your professional learning experience, so people know the criteria by which the performance is going to be evaluated. You give them feedback as part of the training on their demonstration of those criteria. And then when you go into the classroom to see if it's going on, we use the same criteria. Now, it could be that a particular innovation requires more than three or four criteria, and that's okay. It's just that in any single observation, just as with students, we've got to get it down to about three or four. So it could be that when I'm first starting the program, I identify these three things that we really need. And I might observe a group of teachers and give them feedback on those three things: here's what you did really well, and here are some ways in which we can make it a little bit better.
But after maybe the third observation, everybody's doing one and two really well. So now we might look at three, four, and five instead, and that would be okay. You can build on those, but you have to get it down to something that's reasonable as a learning tool. It does require us to be very specific about what those criteria are, and how we would make note of them. When it's being implemented well, how do you know it? What would you see that would say to you, "This is really being done well"? And if those criteria are clear, we find that not only does a principal's evaluation reflect those things well, but teachers' evaluations of themselves and the principal's evaluation are almost identical. People become much more attuned to being reflective and looking at themselves in this, and they have a sense of when things are going well and when they're not, and what changes they really want to make.
Eric Neal:
Yeah, that's really the good part about that: making sure that the criteria are clear, just like you would with students. It's not a gotcha or a come-around-and-check; you're building that into the actual training. I did one once where I facilitated the group through creating the rubric together. It was a way to measure that they really understood it, because I could prompt them if I needed them to go a little bit deeper: well, does this really capture all of it? But it helped them feel like it wasn't evaluative, because when we came around later to look, they had developed it and they knew exactly what people were going to be looking for.
Dr. Thomas Guskey:
Yeah, that's exactly true. And just as we said before, what works for students also works for us as educators. When students are engaged in the process of developing the rubric, looking at an example of excellence versus one that isn't so good and talking about the differences between them, carrying that same thing over to what we need as feedback to get better has a lot of similarities.
Eric Neal:
Yeah. So the last of the levels, level five, the one that we start with when we're doing our planning, measures the impact on student learning. It seems like this should be the most straightforward, but in my own thinking, it's one of the most challenging for me. Not identifying an outcome that you want, but being able to tie that outcome to the action that's going to create it. What are some ways we can support educators in not just saying, "Oh, we want to increase something 10%," but actually getting them to look at data in a way that they can connect to their actions?
Dr. Thomas Guskey:
There are two important ideas at this level that seem to be crucial for success. The first is what we described a little bit earlier: not everybody trusts the same evidence. I did a study some years ago where I took 15 different indicators of student learning, and these ranged from things like large-scale standardized assessments and state assessments, to district-level assessments and end-of-course tests, down to things like quizzes and homework completion rates. I had 15 of them. And I gave them to different groups of educators and said, "These are all considered legitimate or valid sources of evidence of student learning. Which do you think is most trustworthy? Which do you think is the best indicator? If you had to choose, which would best indicate to you whether students have learned and are able to do what we set out to teach?"
And I asked them to rank order them, so that they had to compare each one with every other one. After they rank ordered them, I compared the rank ordering of school administrators to that of classroom teachers. It was almost exactly reversed. Administrators always identified those large-scale assessments, the state assessments: that's what's in the newspaper, that's what the board cares about, that's what we use for evaluations. Teachers said, "Those aren't all that important. We do it once a year. We don't get the results back for two or three months, and by that time, my kids are off with another teacher. I trust my own evidence. I trust the things that I see." So what does that say to us as evaluators? It says you can't trust a single source of evidence; you have to think in terms of multiple sources of evidence here.
But the second aspect of this is that if implementation is going to be sustained, evidence of impact has to come pretty quickly. A lot of times we say, "Well, we want to make sure that this has some impact on the state assessment results," but teachers are not going to persist for nine months using a particular technique unless they know it's making a difference. I always equate it to being on a weight reduction program: you don't want to wait a year to weigh yourself. You need some evidence pretty quickly that this is mattering. I can recall some years ago, I was being interviewed, and I was asked: when teachers are engaged in a new innovation, a new strategy, a new instructional approach, a new curriculum, how soon should they see results? At that time, good friends of mine, Andy Hargreaves and Michael Fullan, were saying, "Oh, change is a long-term process and needs to be sustained over multiple years. We might not see this change for a year or two afterwards."
I said, "In two weeks." They were stunned, absolutely stunned. And I said, "Because all the evidence we were gathering on teachers, showed that if they don't see results pretty quickly, they're going to abandon it and go back to the tried and true things they've used in the past." And it's not because teachers are afraid of change. It's not that at all. Teachers are so committed to learning of their students, they're unwilling to sacrifice their students for the sake of innovation. And there is this belief that if they persist in this innovation, their students might learn less well, and they're unwilling to sacrifice students for the sake of innovation. So what that means is, that in planning any professional learning experience, we need to think of what indicators would teachers be able to gain fairly quickly, to show that this is really making a difference.
Instead of looking at end-of-year assessments, you look at classroom quizzes, you look at formative assessments, you look at evidence that teachers might be gathering on a more regular basis: homework completion rates, engagement levels in class, confidence in learning situations, things they can see and gather on a very regular basis from their students. Because if I see it's making a difference, then yes, I'm going to keep doing it. Teachers don't mind working hard, and working even harder, if they see it's really having an impact on their kids. But we have to be able to provide that evidence to show them that it does.
Eric Neal:
That's great. So State Support Team 11 is excited to be bringing you to Ohio for a workshop on March 15th. I personally have been looking forward to this for a long time. What can participants expect?
Dr. Thomas Guskey:
Well, I would hope that the people who are coming are those committed to really improving the professional learning experience for educators. We will start off looking at studies that have tried to figure out what makes professional learning experiences effective and what doesn't. We're going to dispel some myths that have been out there for a long time about what really can contribute to effectiveness, and what seems to make no difference at all.
Then we're going to talk about gathering evidence on effectiveness. We're going to talk about the five levels, and then we're going to spend a major portion of our time thinking about that backward planning process, and maybe take people through the experience of: if I started with what I wanted to accomplish with students and worked back from there, what would it look like? What would my questions be? What kinds of things would have to go through my mind to plan in that way? Rather than planning around what new topics we're going to take on, what new things we're going to explore, or who the exciting motivational speaker is that we're going to bring in, let's think about what we're really going to accomplish with regard to the kids.
Eric Neal:
I can't wait for this. It's going to be great. So if people would like to know more about you and the work that you do, where should they go?
Dr. Thomas Guskey:
Well, I do have a website. It's just TGuskey.com. If you go there, you can find articles written on these topics. If you go to resources and articles, you'll find a whole section on professional learning. You can go to books and see the books we've developed on the topic. There's a section of reports that might be helpful, too. From there, it offers a lot of ways you can branch out and explore other things, but that would probably be the major resource I'd recommend.
Eric Neal:
Great. Well, thanks again for joining us, Dr. Guskey. It's been a real pleasure.
Dr. Thomas Guskey:
It's been my pleasure, Eric. Thank you so much for inviting me, and I look forward to our work together on the 15th.
Eric Neal:
It's going to be great. Well, that wraps up this episode of the State Support Team 11 podcast. If you'd like to know more about us and the work we do here at SST 11, go to our website sst11.org. Give us a call at (614) 753-4694, or hit us up on Twitter, we're @SSTregion11. If you'd like to get ahold of me, I'm at eric.neal@escco.org. Until next time, I'm Eric Neal. Thanks for listening.