SST 11 Podcast

Collaboration is a cornerstone of educational improvement, but structured collaboration is the key to making a real impact. Host Eric Neal is joined by Barbara Shreve from the Carnegie Foundation for the Advancement of Teaching to discuss Networked Improvement Communities (NICs), their relationship to Improvement Science, and the components that distinguish them from other collaborative frameworks.

Creators & Guests

Host
Eric Neal
State Support Team Region 11 Consultant and Podcast Host

Eric Neal:
Welcome to the State Support Team 11 podcast. I'm your host, Eric Neal. Today, we are joined by Barbara Shreve. Barbara is the managing director and senior associate in the Networked Improvement Science Group at the Carnegie Foundation for the Advancement of Teaching. Welcome, Barbara. How are you today?

Barbara Shreve:
Hi, Eric. I'm doing very well. It's great to be here with you.

Eric Neal:
It's great to have you on the podcast. I've been really looking forward to this. Can you tell us about the Carnegie Foundation and the work that you do there?

Barbara Shreve:
Sure. The Carnegie Foundation for the Advancement of Teaching was founded in 1906, and it's had a really rich history and legacy of supporting different changes in education. So Carnegie introduced the Carnegie Unit, or the credit hour. We introduced TIAA-CREF, the GRE, and Pell Grants, among other things. And more recently, we've been working in improvement science. Today, our mission is to catalyze transformational change in education so that every student can live a healthy, dignified, and fulfilling life.
And so we're really dedicated to enhancing educational and economic opportunity for low-income, first-generation, and underrepresented young folks, and we're doing that by building rigorous and engaging learning experiences that prepare kids in our PK-12 systems to thrive. We're working to transform post-secondary education to advance the social and economic mobility of underrepresented students. And we're also advancing the use of improvement science, which is where I particularly focus, in service of addressing our most pressing educational equity challenges.
So a lot of what I do focuses on building capacity for people and organizations to use improvement science in their own work. We help to create learning experiences, make resources and evidence accessible, and I collaborate a lot with other organizations to help elevate examples of success.

Eric Neal:
Yeah, this is all very exciting to me. There's so much crossover with the work that I do around continuous improvement and equity, and just all around trying to get better outcomes for kids. I'd like to start our conversation talking about how I discovered your organization's work. During the pandemic, there was a lot of time to dig deeper into subjects than we've had in the past when we get so busy. I took a course, or a series of courses, on edX around improvement science. I'd been trained in implementation science and had a background in that, and I was curious: what is this?
There was a MicroMasters that was offered through the University of Michigan and a lot of the people and resources that were in that course were from the Carnegie Foundation. That really made me even more curious. And as I started applying some of the things that I'd learned, I dug in and got really interested in networked improvement communities and that's how we crossed paths. I ended up taking a course from you from the Carnegie Foundation around networked improvement communities. That was actually a lot of fun. I really got a lot out of that.

Barbara Shreve:
Thank you. We loved having you.

Eric Neal:
So can you talk a little bit about improvement science as a discipline and what are the six core principles of improvement?

Barbara Shreve:
Sure. Improvement science is something that really started in manufacturing, where it's often called quality improvement, and later it was brought over into healthcare. And Carnegie was looking for ways to really think about how we go after learning to improve, getting better at getting better, and was looking to other industries to learn how they do this well and how we could bring that into education. And so improvement science is really just a disciplined way of approaching change.
And our former president Tony Bryk and his co-authors, Louis Gomez, Alicia Grunow, and Paul LeMahieu, in their book Learning to Improve introduced these six core principles for improvement in education. And they set out what you pay attention to as you go through an improvement effort and how you think about this approach to change. So the first three of the six principles really talk about how we understand problems and what we're working to improve or to solve.
So principle one is being problem-focused and user-centered, and really this is calling out that improvement focuses on solving a specific problem and engaging the people who are closest to that problem, who experience it in their daily lives and work, as part of finding those solutions. So for example, if we were tackling a problem like low literacy rates in third grade with an improvement approach, we'd name that problem specifically.
That's a very specific problem to work on, and we'd engage parents and early-grade educators in understanding the reasons behind the literacy rates and use that to really explore and understand what was happening, rather than what I've often experienced in my career in education of quickly pivoting to let's pick a solution or let's pick up a new curriculum to try in the moment. Principle two is attend to variability.
We're trying to reduce variation in the system so that we're not just finding something that works and moving up average performance, but understanding what's underneath that average score and all of the discrepancy or disparity that might be there. We really know that underneath an average, there's a whole lot of students or a whole lot of performance above that average, as well as some falling below. And so we want to look at how we can understand what's happening better and find solutions that are specific to those different groups.
How can we learn to make different innovations succeed for different students in different contexts, so that we're finding what works, for whom, and under what conditions, and thereby move the average up? Principle three is all about seeing the system. It's this idea that it's hard to improve what we don't understand. Improvement science says quality is a system property. It's not the result of individual motivation or people's innate talents or will in a system.
So we use investigation tools to really try to see what's shaping people's behavior, what's shaping people's ability to act. We look at mapping out processes and talking to different stakeholders to understand their experiences and also here really importantly bringing in research, recognizing that we need to change systems to be able to see different outcomes. So we want to see those forces and try to redesign in ways that we can realize different results.
If those three principles really speak to how we understand what we're trying to improve, principles four and five shift us into how we're improving and learning, grounded in that understanding of the problem. And so they're really the science at the heart of improvement science. Embracing measurement is principle four. We don't want to just wait for end-of-the-road measures of success or failure; we want to track along the way whether the changes we're making are leading to improvement.
And so that speaks to finding data that comes out of actually doing the work every day, data that we can look at and that will fuel our learning. So we want to know quickly whether what we're doing is resulting in what's expected so we can adjust. If we go back to that third grade literacy example, it might mean that we are looking week to week or month to month at what's happening with students' reading fluency, because we know longer term that's going to help us understand what we might expect to see at the end of the year in their overall literacy results.
And that's going to give us some quick feedback to know if we're making progress. Principle five is using what we're measuring and learning from it: learning through disciplined inquiry. The problems we're working on are really pressing. We're talking about students and young people's lives, teachers' everyday work experiences. So we want to work and learn and improve quickly. So we try to be really intentional and really pay attention to evidence and reflect on that evidence.
So we use inquiry cycles to help us articulate what we think is going to happen, and then collect data to test against that to see if the things that we're trying are showing potential. And so in this, we're expecting some change ideas to work really well and some actually not to pan out and need to be abandoned. We start small because we're expecting some failure along the way and we want to make sure we're learning from that failure so that it isn't a problem. It's only the failure to learn from it that would be a problem.
The final principle is about how we go about doing this: organizing as networks. Networks are really powerful sources of new ideas and innovation. They also create contexts and social connections where we can accelerate our learning, and they allow us to see patterns that we might not otherwise see if we're looking in a small setting to think about what we might change. And so we talk about organizing as networks because it's such an accelerant for learning.
So those principles ground how we think about going about change and approaching it, and they show up in different ways throughout an improvement journey, but they work together in that process to help us think about what we mean by improvement science.
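
To make the attend-to-variability principle concrete, here is a minimal sketch in Python with invented fluency numbers (nothing from the episode): two schools can post the identical average score while the students underneath that average are having very different experiences. The 70-words-correct-per-minute threshold is purely illustrative, not a real benchmark.

```python
# A sketch with invented numbers: same mean, very different variation.
from statistics import mean, stdev

school_a = [92, 95, 90, 93, 91, 94]     # scores cluster near the mean
school_b = [130, 128, 55, 132, 58, 52]  # same mean, wide disparity

for name, scores in [("School A", school_a), ("School B", school_b)]:
    below = sum(s < 70 for s in scores)  # illustrative 70 wcpm threshold
    print(f"{name}: mean = {mean(scores):.1f}, "
          f"std dev = {stdev(scores):.1f}, "
          f"below threshold = {below} of {len(scores)}")
```

Both schools average 92.5, but School B has half its students far below the threshold, which is exactly the disparity the principle asks us to look underneath the average for.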

Eric Neal:
Yeah, I love that. That was a very good synopsis of a very big subject. I love that one thing that distinguishes improvement science is that focus on the problem. A lot of other approaches are focused on the solution, but I'm really interested in the impact of seeing the system that produces the unwanted outcomes. Can you talk a little bit more about some of the ways that improvement science goes beyond a simple root cause analysis tool, like the five whys?
I'm thinking of empathy interviews or other things that we discussed during our training. Actually, now that we're sitting here talking, I remember an example about a student who was checked in with years after having struggled to learn to read, and the team learned some really interesting things about each teacher and that student's learning experience as they went through the process.

Barbara Shreve:
Yeah, I think seeing a system is really a tricky concept, because the systems in education that we work in, all of the things that are interacting around and in a child's education, are complex. And any one of us, from where we sit, can't really get visibility into the whole thing, but we can really clearly understand what's in front of us and feel like that's the problem to solve and what's really compelling.
So we talk about really trying to find tools that can help us see our blind spots, can help us widen our aperture or step back and see more of what's happening, like those experiences of that young person, to understand what we may not see in our day-to-day practice. And so gathering the perspectives of people who are working on the problem or who are experiencing the problem is incredibly critical. That's where empathy research comes in.
Sometimes that looks like shadowing that student through the day or going back to talk to them to map their experience over years and see how that's changed or what we may be missing in the transitions year to year. Sometimes that's also sitting down for an empathy interview, just having a conversation with someone who's very close to the experience about their experience, what they see that we may not. I know you've had folks on your podcast talking about empathy interviews and they're just an incredibly powerful way to see something you might not otherwise.
I think about when I was a math teacher, I would have very clear ideas when I saw groups of students disengaged in particular lessons or units about what was going on and what I might need to fix. But in talking to them, they would often reveal something I was entirely unaware of that was actually very different and sometimes even an easier solution. Something about what they were walking into class thinking about from their last class, for example.
It had nothing to do with my lesson, or sometimes it had everything to do with my lesson because I was using one word that never resonated. These were things I wouldn't otherwise understand if I wasn't talking to them. We also try to make work visible to check that we have shared understanding with folks about the processes. So we use things like process maps or system maps that help us see what's happening. One example of doing that is working to improve attendance and chronic absenteeism.
We might have worked with folks who've mapped out the flow from a student's absence getting recorded to figuring out that we need to communicate home with a family because absences have accumulated. And mapping out that process can actually reveal where there are bottlenecks or just sticking points, steps that some people are skipping.
In some cases, it's helped people realize there's a set of steps that are really reliant on the individual teacher, but there's not a consistency of support for teachers to do that classroom to classroom, so practices end up looking really different. But we won't know that until we sit down and make the work visible and can put it out on a piece of paper and be able to talk about it. So process maps and system maps help us engage those kinds of questions.
We also, coming back to that principle of variation, really try to see a system by digging into data and being able to see how it looks different in different places. So we look for variation by units, as we say, right? How does it look different across different schools or different classrooms? How might data look different over time, so we can see trends or patterns, or where there might be anomalies?
And that can lift up incredible bright spots to learn from, where a system is working really well, and it can also point out or help us see differently how a system may not be serving a particular group, or where particular breakdowns might be happening. So those are some of the things that we try to do to dig in and help make visible what can be invisible around us as we're swimming in the water.
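
Here is a minimal sketch of the look-for-variation-by-units move Barbara describes, with hypothetical school names and counts rather than real data: the district-wide chronic-absence rate alone hides both the bright spot worth learning from and the school where the system is breaking down.

```python
# Hypothetical chronic-absence counts: (chronically absent, enrolled).
chronic_absence = {
    "Lincoln Elementary": (12, 300),
    "Adams Elementary": (60, 310),
    "Grant Elementary": (33, 295),
}

absent = sum(a for a, _ in chronic_absence.values())
enrolled = sum(n for _, n in chronic_absence.values())
print(f"District rate: {absent / enrolled:.1%}")  # the single summary number

# Disaggregate by unit (school) and sort by rate to surface
# bright spots and breakdowns the district average hides.
for school, (a, n) in sorted(chronic_absence.items(),
                             key=lambda item: item[1][0] / item[1][1]):
    print(f"  {school}: {a / n:.1%}")
```

In this made-up data, the district sits at 11.6%, but disaggregating shows Lincoln at 4% (a potential bright spot to learn from) and Adams at over 19% (the place to investigate).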

Eric Neal:
Yeah, it was so helpful to me when I went through the training and learned about some of these things, because this really is what my work is. I work with district leadership teams and building leadership teams. I think people say things like root cause analysis, but it's very different having heard of it or knowing a little bit about it versus being able to lead that kind of work.
And so having those different points to use as coaching touch points has been very helpful for me, to say we're not just looking at outcome data and then using the think-feel-believe method to figure out, oh, well, this is why we think that's happening, but to go in and map it out, because seeing those visual representations is huge. Or to say, I had this conversation once. People said, "Well, they're chronically absent because they're just not motivated."
And I was like, okay, well, what data do you have that tells you that? Did you survey the students and they said, "Hey, we're not motivated," or did you ask them one on one, are you motivated and they told you no? And it's getting beyond that what we think is going on into really actually seeing what's happening.

Barbara Shreve:
I think the improvement you're describing really requires you to be curious and to enter into what you think you know with some wonder and humility, hoping to test and challenge yourself to be wrong. And I think that's one of the reasons that I really love working with other folks who come into this work with that curiosity, because it also opens up so much more potential for us to make progress on some of the things that have been challenging us over time.

Eric Neal:
Oh, definitely. And I think helping people to be successful when they're improving cuts down on teacher burnout and helps with retention and all of these other things that we're trying to do. So having this methodology and this framework to approach it really helps us do that, rather than just treading water and trying to stay afloat.
So Plan-Do-Study-Act, it's a fairly universal way of testing improvement in organizations. In Ohio, we use the Ohio Improvement Process. It's aligned with that. It seems pretty straightforward, but I often see teams struggle to make the process meaningful. What are some of the challenges to making PDSA cycles produce results?

Barbara Shreve:
Great question. Because I think when you hear about the Plan-Do-Study-Act cycle or the Ohio Improvement Process, those cycles seem deceptively simple. It seems like, oh, we've got four steps and we can move through them. But actually carrying out each step can be challenging. A couple of things in particular come to mind with your question. The first is being specific about the change you're testing and how it's related to some larger hypothesis or theory about what you think is going to move whatever you're trying to move in terms of your problem.
So connecting to some idea of what might happen and why helps you make sure you're not just testing random things, but learning towards a larger objective. So if you're testing a discussion protocol, what do you think that's [inaudible 00:18:14] for students in class, for example? What do you think that's going to move, and what are you hoping to see happen? Keeping really good track of that helps you focus and helps you know what data and evidence you might need to collect to check if it's having the impact.
We can often have very different things we want to learn at different points in testing something. And that's the second challenge: narrowing down what you want to learn from a particular test is really helpful. So in an example where you're testing an instructional protocol or something in class, if it's the first time you're trying it, what you might want to learn from that test might actually just be, is this feasible?
Can I explain it quickly in a way students can understand and follow, even if it's not yet leading to the student outcome you're ultimately after? Once you've worked out some of the kinks and you know it's feasible, then maybe you want to shift to asking: is it leading to students interacting in the ways that I wanted, the ways I believe are going to lead to my larger outcome? You shift your questions as you go, and then you start to look for different things to collect, and I think that helps focus the learning from a PDSA cycle.
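
One way to keep a PDSA cycle that specific is to write down the change, this cycle's narrow learning question, and the prediction before running the test, so the Study step has something concrete to check against. This is not a Carnegie template; it's just a sketch of a record a team could keep, and the PDSACycle class and example field values are hypothetical.

```python
# A hypothetical record for keeping one PDSA cycle focused and honest.
from dataclasses import dataclass

@dataclass
class PDSACycle:
    change_idea: str
    learning_question: str   # narrows what THIS cycle is for
    prediction: str          # stated before the test, so Study has a target
    evidence_to_collect: str
    observed: str = ""       # filled in during Do
    next_step: str = ""      # Act: adopt, adapt, or abandon

cycle1 = PDSACycle(
    change_idea="Pair-share discussion protocol in one math class",
    learning_question="Is the protocol feasible? Can students follow it?",
    prediction="Pairs start the protocol within two minutes of the prompt",
    evidence_to_collect="Teacher tally of start-up time and confusion points",
)

# After running the test, Study compares what was observed to the
# prediction, and Act records what happens next.
cycle1.observed = "Most pairs started quickly; directions were re-read twice"
cycle1.next_step = "Adapt: simplify directions, then test in a second class"
print(cycle1)
```

Because this first cycle only asks about feasibility, a later cycle would swap in a new learning question and new evidence, for example student interaction patterns, once the kinks are worked out.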

Eric Neal:
You're so right. While you were talking, my brain was going back to all of those steps. If you're not problem-focused, it makes it super difficult to have a potential solution to test that is aligned with the problem, which makes it difficult to make a plan and develop the people who have to implement it and do all those other things. And we often get to the point where you reach the act step, or in Ohio the implementation and monitoring phase, and people are like, "Well, what do we talk about?"
And because you didn't have that clear line drawn from the problem to your theory of change to the development of the people who are going to do it, to all those things, if any one of those things is missing, it does get hard to have those conversations, and then people are kind of lost a little bit.

Barbara Shreve:
Yes. I think it both focuses that conversation you have at the end and helps you make sure you've collected what's going to fuel that conversation, because there are so many things that we could be paying attention to as we're executing any one of these steps or changes. How do we make sure we've got the right things to check against the predictions or theories we brought into that conversation? I think that's what really brings those final monitoring or act steps to life, where we get to reflect back.

Eric Neal:
Definitely. Speaking of getting results, as I mentioned earlier, I took a course you led called Organizing the Work in a Networked Improvement Community. It was very beneficial to me. Can you talk about what a NIC, a networked improvement community, is and how working in this way has been shown to get results?

Barbara Shreve:
A NIC or a networked improvement community is a scientific professional learning community, and it's a little bit different from other networks in that very particularly, all of the members of the NIC are committed to working together towards a very specific common aim, something that they intend together to achieve and they want to learn together as they work towards that. So they share this common understanding of the problem to start with, what are those key leverage points they want to work on.
They have some hypotheses about what's going to lead to improvement. And really importantly, they have a shared definition of success, so they have a way that they want to measure the outcome they're seeking to move. And that grounds their conversations and their work together. Then they can use all of these different improvement tools, the PDSA cycles, being able to look at data in different ways, map processes to be able to share their learning together and to share honestly also what they're learning about what doesn't work, right?
And because they have this shared commitment to the goal, sharing their learning opens up a different space for learning from failure as well as success, along with a commitment to each other to be really transparent about it, which I think is really important. By working together in that way, NICs seek to accelerate learning.
So if we have five schools working together on chronic absenteeism, they may not all be working on the same change idea at the same time, but they've agreed on some big areas where they want to learn and that they think are going to advance success. And so as they refine those changes in different schools, they can actually share with each other, so the next school that picks up a change idea doesn't start from scratch.
They start from building on all of the learning that's already taken place and seeing what happens in their own context. In that way, they're really able to accelerate how they're learning together.

Eric Neal:
Yeah, the course was very good, and it was full of examples and outlined a real clear process for administering and organizing the work in a networked improvement community. Is sticking to the process important from the perspective of a networked improvement community? Or can you see benefits anytime you work collaboratively with others doing similar work?

Barbara Shreve:
Well, I think personally there are huge benefits to collaboration. I had a sign on the wall in every classroom that I taught in that read, "No one of us alone is as smart as all of us together." But I do think that NICs are especially powerful because of that commitment to shared action. Many of the networks and collaborations that I've been a part of that may have been focused on a particular problem of practice were really more focused on sharing.
And I learned in those networks, but I sometimes had to do a lot of translation from what I heard from someone else to figure out how it applied in my context. Or I've been in networks where we do a lot of sharing of ideas but don't actually commit to sharing evidence of how those ideas work, which can also leave me with different questions and different things I need to learn when I try to apply them.
So I think NICs are really powerful and unique because of that shared goal and theory of improvement and the common hypotheses that guide their work. Their sharing is really specific, and they're sharing evidence of what they're learning, evidence that everyone has agreed is worthwhile to talk about. We may be able to say, I'm trying something and you're trying something, and we're looking at the same results in our systems to see which one has impact, or how, or for which students, in ways that can be really powerful.
I think I already mentioned this, but that joint accountability to being really open about the things that are working and the things that are not working along the way is another feature that is incredibly powerful for accelerating our learning.

Eric Neal:
Yeah, I would have to agree. I've had great experiences collaborating, but in my opinion, you risk missing some of the benefit if you're focused more on the topic or the subject than on the aim of solving the problem.

Barbara Shreve:
I think that's well put. Yeah.

Eric Neal:
So how does the Carnegie Foundation support the work of improvement science?

Barbara Shreve:
Well, we are committed to building the capabilities and the capacities of others to use improvement to go after equity challenges in their own local practice, and also to bringing people together around that. So we offer opportunities to learn, like the things that you engaged with, Eric. We have other courses that just give people a chance to understand improvement concepts and get familiar with some of the tools, as well as more applied courses really digging into Plan-Do-Study-Act cycles, improvement teams, or even supporting some analytic activity in a network.
We also share resources and research with the field so that people have more tools at their disposal, and we try to support a community of improvers through our annual conference, the Summit on Improvement in Education, because we recognize there are so many people doing powerful work in this area, and the opportunity to learn from each other's successes and efforts along the way is really an important asset to the field.

Eric Neal:
Yeah, I went to my first summit last April and it was great. That's where I met our friends who came on the podcast to talk about empathy interviews, and there was so much benefit, not just in the information, but in the networking with people out there who are committed to making positive changes. That was amazing. So can you tell us about some things you or the Carnegie Foundation have coming up in the near future? I know next year's summit is coming up.

Barbara Shreve:
It is, and we actually just opened registration for it. We're really, really excited. The summit is that gathering of people who are coming together focused on equity and improvement in education, and they come to learn from each other and connect and build momentum. And this year's program is going to be full of people who are presenting stories about how they have impacted outcomes and are seeing progress, and they're also going to be sharing how they use different improvement science methods and tools in their work.
So certainly encourage people to check that out. We'll also be debuting a new introductory course about improvement science in the new year, which I'll be excited to share in a month or two. So those are a few of the things we have on the horizon. There's such an amazing wealth of folks working in this space right now, and so we're really excited for any opportunities we have to partner with folks and elevate other people's work.
And I think that's one of the particular reasons we're excited about the summit this year. There's lots of those opportunities built into it for people to not just engage with the ideas, but really engage with each other.

Eric Neal:
That's great. Well, if they'd like to know more about you or that work and the things coming up, where should they go?

Barbara Shreve:
You can learn more about our work at our website, carnegiefoundation.org. There you can learn about the improvement work, as well as our other work around re-imagining secondary school learning and innovating in the post-secondary sector. In the improvement sections, you can find a bunch of different resources for doing improvement work, information about our course offerings, and in particular some resources about measurement for improvement. So I invite people to find us there.

Eric Neal:
Great. Well, thanks again for joining us, Barbara. It's been a real pleasure.

Barbara Shreve:
Thank you, Eric.

Eric Neal:
That wraps up this episode of the State Support Team 11 Podcast. If you'd like to know more about us and the work that we do here at SST11, go to our website, sst11.org. Give us a call at 614-753-4694, or hit us up on Twitter. We're @sstregion11. If you'd like to get ahold of me, I'm at E-R-I-C.N-E-A-L@escco.org. Until next time, I'm Eric Neal. Thanks for listening.