Best Ever Podcast

How do we thrive as humans in a world shaped by constant technological change? In this episode, Scott talks with Richard Culatta, CEO of ISTE and ASCD, former Director of the U.S. Office of Educational Technology, and author of Digital for Good: Raising Kids to Thrive in an Online World. Richard shares how he’s navigated a career spanning classrooms, government, design thinking, and global education leadership, always with one mission: to make learning more human and more meaningful.

From practical ways to use AI as a baseline for self-management, to the nightly “rabbit hole” learning ritual he shares with his son, Richard models curiosity, empathy, and humor. This conversation explores what it means to create healthy digital cultures, the mindset shifts we need to thrive in a transformational age, and why the best leaders don’t pretend to have all the answers: they create the conditions for exploration.

(02:14) Why ISTE and ASCD merged: tools + problems finally aligned
(07:47) From Spanish teacher to education policy and IDEO design thinking
(13:54) How tech and teaching converged early in Richard’s career
(15:25) The dump truck and the sports car: Richard’s favorite analogy for AI
(23:41) Using AI for feedback, board prep, and better communication
(28:32) Are we cheating with AI or just evolving like calculators?
(34:18) Freeing time for deeper thinking and higher-order skills
(38:35) What it means to be a “better human” in digital spaces
(43:53) How to create welcoming culture online and off
(46:27) Richard’s personal routines: faith, curiosity, and rabbit-holing
(49:45) Why “screen value” matters more than “screen time”

Explore Richard’s work: iste-ascd.org/ 
Get the book Digital for Good: amazon.com/Digital-Good-Raising-Thrive-Online
Connect with Richard on LinkedIn: linkedin.com/in/rculatta/


Ready to have the Best Life Ever? Start here: www.courses.eblingroup.com/p/best-life-ever-course

👉 Connect with Scott Eblin
Website: www.eblingroup.com
LinkedIn: www.linkedin.com/in/scotteblin

✉️ Inquiries
For anything podcast-related: hello@besteverpodcast.com
For business or speaking inquiries: contact@eblingroup.com

📬 Get leadership insights and updates
Join the newsletter: www.eblingroup.com/sub

What is Best Ever Podcast?

The Best Ever Podcast with Scott Eblin is your insider’s guide to what it takes to lead at the highest level at work, at home, and in your community. Each week, Scott sits down with remarkable leaders for real, revealing conversations about the mindset shifts, self-management habits, and everyday routines that fuel extraordinary leadership impact. Drawing on his 25 years of experience as a top executive coach, Scott brings a coach’s lens to every episode to help you bridge the gap between intention and action.

Scott - 00:00:10:

Welcome to Best Ever, the show where we explore how effective self-management creates the foundation for positive leadership outcomes. I'm Scott Eblin, and in every episode, I sit down with notable leaders to uncover the routines, mindset shifts, and strategies that have helped them lead at the highest level, and the difference that's made for their organizations, families, and communities. My guest today is Richard Culatta, a leader who has spent his career helping people learn, adapt, and thrive in a rapidly changing world. Richard began his career as a high school Spanish teacher, but went on to serve as the director of the U.S. Office of Educational Technology in the Department of Education. Today, he's the CEO of ASCD and ISTE, two of the world's most influential organizations for educators that have recently merged. Together, they provide resources, professional learning, and a global community to help teachers and school leaders be more effective, innovative, and connected. Richard is also the author of the book Digital for Good. And beyond titles, Richard has been a passionate voice for how we prepare not just students but all of us to use technology thoughtfully, to lead with humanity, and to become, as he puts it, better humans. I'm excited to explore the lessons he's learned about self-management, leadership, and change at scale. Richard, welcome to Best Ever.

Richard - 00:01:35:

Thanks for inviting me. I'm very excited to be able to talk to you today.

Scott - 00:01:38:

Yeah, just so the audience knows, Richard and I will probably allude to past conversations we've had. I guess you still live in D.C. I used to live in D.C., and we traveled in some of the same broad circles, it seems like a million years ago. It wasn't that long, but a while ago. And I've followed your career with great interest over the years. And I've touched base a few times via LinkedIn and email and so forth along the way. But I just thought you'd be a great guest for this show because, you know, the whole premise is the intersection of effective self-management and creating positive leadership outcomes. And you're a very thoughtful person. I know that from knowing you, and I know that from reading the work that you share online and elsewhere, and from listening to you in different videos you've done. And I think you're also a person who thinks a lot about technology. And technology is obviously just such a key component of that connection, you know, between self-management and leadership outcomes. So I'm really looking forward to a rich conversation together. I introduced you earlier, but I would love to hear... Like when you're at a party, or out in a group of people you haven't met before, and you do the standard who-are-you-and-what-do-you-do conversation, how do you describe what you do? What's the summary on that?

Richard - 00:03:02:

I don't know. I'm still trying to figure out how to say it and not have everybody just, like, run away and talk to other people at the party. I often say one of two things. If it's a group of people that care about learning, I will say, you know, my role and my passion is to help make learning experiences far more meaningful for everybody, for as many people as I can. That's one response. If I'm with people who understand technology and care about technology, I will often say something like, my goal is to help people learn how to use technology to help them be better at being human.

Scott - 00:03:36:

Mm-hmm.

Richard - 00:03:37:

Between the two of those, you know, every once in a while, somebody will still hang around and talk to me.

Scott - 00:03:41:

Yeah, yeah. Okay. So in the intro, I mentioned you're the CEO of two organizations that have combined like peanut butter and chocolate, I guess: ASCD and ISTE. You can unpack the acronyms if you like. But just essentially, what do those two organizations focus on, and what are you focused on as CEO of the two?

Richard - 00:04:06:

Yeah. So the two groups: ISTE is a group that has been around for a long time, about 50 years, really focused on innovation and technology to support education and learning. So that's one side of the house. The other side, ASCD, is a group that has been around even longer, actually closer to 80 years, focusing on curriculum and instruction. How do we, particularly in public schools, make sure we're doing learning in the right way? Interestingly enough, those two groups had very little overlap before the merger. And that's really why we decided to do it, why we made the case to do it, because we said, in this world that we're living in now, you can't think about innovation and technology for learning if you're not deeply thinking about what we are teaching and how we are teaching it. And on the flip side, you can't say that you are being a responsible steward of learning experiences for people if you're not also considering how the technology is being used to help make that happen. And in some ways, it's a little crazy to think that it's taken this long for those groups to merge, but we're really excited about it. And the response from the community has also been really powerful. I think people on both sides have felt that there was a little bit of something missing without having the two come together. And so that's really the work that we've been doing.

Scott - 00:05:33:

So it sounds like you're really kind of bringing together the two questions, what are we doing and how are we doing it? Is that a fair summation?

Richard - 00:05:40:

Yeah, I think that's right. In some ways we were both asking those questions, but there was a tool set and a set of approaches being used over here that weren't being used over there. And in some cases, over on that side is where the more important problems are being addressed, right? And so it's like, there are some great tools for tackling tough problems, but the real problems are over here. If we can get the tools lined up with the problems, we actually can do some really cool stuff.

Scott - 00:06:06:

So as CEO of the two, what is your dream success scenario? How will you know that it's time for me to hang it up and declare victory and go home?

Richard - 00:06:16:

Well, I think there will always be more to do. And at some point, it'll be time to, you know, let somebody smarter than me take over. But as long as I'm here, my goal is really to bring these communities together. I particularly want to see higher quality ed tech products out there. I think we have been a little complacent with the quality of products. There's some really great stuff out there, but there's a bunch of garbage too. We want to make sure that all products are really high quality. And I think we've also become a little complacent with the experience of school. You know, I worry when I talk to kids. We work in K-12 schools, and also with higher ed to some extent. So when I talk to students, whatever level they're at, often we talk about what's exciting, what they love about school. And unfortunately, many don't love school. But those that do will often talk about the great sports they're participating in or the clubs they're participating in. And the learning part is often the least interesting part of their school experience. We cannot continue to do that. I often joke, Scott, you know, on the internet they have those "you had one job" memes. You can have great sports programs, you can have beautiful welcoming murals painted on your walls, you can have great lunches, but you have one job, and that is to make learning awesome. And if the learning isn't awesome, you're failing.

Scott - 00:07:38:

Yeah.

Richard - 00:07:39:

And so, as long as I'm here in this role, I'm going to be focused on how we make sure learning always feels like the most awesome part of school.

Scott - 00:07:47:

I love that. So let's talk about awesomeness over the course of your career. You started as a Spanish teacher. Is that right?

Richard - 00:07:56:

I did.

Scott - 00:07:56:

Yeah.

Richard - 00:07:57:

High school Spanish teacher.

Scott - 00:07:58:

Probably not a lot of tech back in that job, was there?

Richard - 00:08:02:

Can I tell you, I got to tell you. So I was one of the first people to use tech in the school.

Scott - 00:08:06:

Okay, let's hear it.

Richard - 00:08:07:

There was a projector on a cart, and I figured out a way to plug it into who knows what I was even plugging it into back then, probably a desktop computer. And I would use, you know, really basic video. At one point, I made a really basic video of one of the topics that we were studying. And I remember the kids just getting excited and emotional. But if any of you saw what I was doing, you know, back then, you'd just laugh at how rudimentary it was. But I saw there was really power in using technology to help engage the learners. And that's kind of the spark of where it all started.

Scott - 00:08:43:

A step beyond film strips is what that sounds like. Several steps beyond film strips, yeah. What year was that, roughly?

Richard - 00:08:51:

That would, oh gosh, that would have been 2000. 2000, 2001, somewhere in there?

Scott - 00:08:56:

Yeah, about 25 years ago. And so I didn't really do justice to your career path when I introduced you. I kind of leaped from a Spanish teacher to being a big deal at the Department of Education, U.S. Department of Education. What else stands out as highlights for you over the course of your career?

Richard - 00:09:17:

I was very lucky, in between those two, to have a chance to work in the U.S. Senate on education policy. And that was really helpful. I did it through a kind of fellowship program, but it gave me a chance to work in a senator's office, to work on policy, to just understand at a deeper level how education policy is set in this country, which was really helpful. I really appreciated that. The other piece that was, I think, really useful for me is I had a chance to work for a company called IDEO. IDEO, for those who aren't familiar with it, is a design thinking firm. They help mostly companies build new products, new services, new approaches that are human-centered. And that's their whole big deal: human-centered design.

Scott - 00:10:01:

Give the audience an example of one of the great IDEO success stories because there's a lot of them.

Richard - 00:10:06:

There's many. One of the ones that I love to point back to is the toothbrush. We all have toothbrushes. Now, all of your toothbrushes have these sort of rubbery grips. But if you remember back in the day, at least when I was growing up, they didn't have those. They were these hard plastic-

Scott - 00:10:20:

They're plastic, yeah.

Richard - 00:10:21:

Right. And so when you used them, they would constantly get wet and they'd slip and they'd fall, and your toothbrush is falling on the ground in the bathroom. You don't want that. And so IDEO and their team observed how people use them, and this is how it works: they watch people using products, they see these issues, and they build products around how to make those issues better. And so they came up with the rubber grippy toothbrush, which now everybody uses, right? That's one of many. Many of the products in your house have been designed by IDEO. You don't know it because it's all under the brand name of the company that was doing it. But that's the work that IDEO does.

Scott - 00:10:58:

Okay. Before you move on, let me push the pause button for a second and just ask you about the IDEO experience. How many years were you there?

Richard - 00:11:06:

I was there for about three years.

Scott - 00:11:07:

Okay, that's a pretty good amount of time. What did you take away from that that influenced the rest of your career?

Richard - 00:11:12:

One of the great things about having that opportunity to see how a company that is built around human-centered design works is, one, some strategies for doing human-centered design, right? That was really helpful. But the other is a realization of how much we build, whether it's processes or products, that is not built around being human-centered. And you start to see it everywhere. It drives me crazy. We go to a store and the line is backwards from the way people are walking in, or the sign is confusing. And you're like, these are all things we do that are not built around human needs, around making things easier. So I've tried very, very hard in all of the work that I've done, whether it was in government or the work I'm doing now with ISTE and ASCD, to apply those principles. What does something look like, whether it's an event or an online course, if it is designed really around understanding the needs of humans? And just one quick example, Scott: every year my organization shuts down, and everybody goes out into schools and we shadow teachers, in order to build empathy and to keep that user-centered design embedded in our work.

Scott - 00:12:26:

Yeah. And so it's funny, our episode of Best Ever that just dropped earlier today, in late August of 2025, is with a fellow named Donagh Herlihy. And Donagh was the CTO at Subway, the sandwich company. And the very first thing he did, even before he started his job officially, was go to a store and become a sandwich artist, right? Because he really wanted to understand it from that customer and frontline perspective. I love the fact that your folks are going into schools.

Richard - 00:12:56:

You forget very quickly. You know, I was a teacher. As you mentioned, most of my team worked in schools. And you can have this idea, well, I know what it's like, I used to work in schools, right? That was 20 years ago. Even two years ago, stuff has changed. And so thinking that we know an industry, that we know what an experience is, because we experienced it a number of years ago, that is a recipe for disaster when coming up with new products and services.

Scott - 00:13:26:

So you're... We didn't finish your career path. I'm sure we'll get to it here and there along the way. Your interest in technology probably predated the high school Spanish teacher experiment that you described. At what point did you become focused on it as an enabler? How has that interest evolved over the years for you? I'd love to hear a little bit more of that.

Richard - 00:13:54:

Yeah, so the spark really was that moment when I was teaching, you know, using this clunky old projector in that high school classroom. But I had an interesting thing happen after, which was that the school I graduated from, Brigham Young University in Utah, was redesigning their teacher education program. And one of the areas that they had identified, with now this thing called the Internet out there, and technology like projectors coming into classrooms, even if they were still on carts, right, was that they needed to be thinking about how to prepare future teachers to use technology effectively. And they went looking for people who were doing this. And as a recent graduate, and one of the few people that was actually doing stuff with technology, I got the call, and they said, hey, would you come and help us think about how we should be training future teachers? And so that was when I really started to dive deep. Like, what do future teachers need to know about where technology is going? And it allowed me to really look at what technology I thought had real potential and what was just noise. And trying to sort that out was exciting. And from there, as they say, the rest was history.

Scott - 00:15:05:

So the technology 800-pound gorilla these days, obviously, is artificial intelligence, right? How are you thinking about best and worst practices when it comes to AI and learning? I want to start in the schools, but then I want to extend it beyond formal education.

Richard - 00:15:25:

Happy to. And I think actually the answer that I'll give is probably equally applicable in both places. And I would say, I think largely we're getting the conversation about AI wrong. Much of what I hear in the conversations about AI misses what I think is the most important thing, which is, back to how I sometimes introduce myself, how are we using AI to help us be better at being human? And we don't teach that well in schools. We don't teach that well in the workplace. We don't teach it well in college. An analogy that I use, Scott, that may be helpful here when I'm trying to explain this is to think about a dump truck and a sports car. Two different vehicles with two different purposes. The dump truck is really good at some things, like moving piles of dirt, right? Not really good at others. The sports car, very good at some things, going fast, being comfortable, not good at hauling rocks around. That's a good analogy for how to think about human intelligence and artificial intelligence. There are things about human intelligence that are just a better fit for certain tasks. Certainly when we talk about our relationships and humor and creativity and empathy and leadership, there are so many things that fit into that bucket. There are also things that we have to recognize the human brain is just not very good at, right? If we're the sports car in my analogy, there are things the sports car brain struggles with. The human brain is not very good at analyzing lots and lots of data, right? That exhausts us pretty quickly. We could do it, we've done it for hundreds of thousands of years, but it's not a good use of our sports car brain, right? Here's another one: brainstorming. Human brains actually really struggle to brainstorm, because we have all of this lived experience that shapes how we see possibilities. And that is good. It makes us really good at some things. It makes us not very good at coming up with lots and lots of different possible solutions. Whereas AI is really good at some of those things, right? That dump truck AI, if you've ever done this, where you say, hey, brainstorm five different ways that I can make chocolate chip cookies, whatever, here you go, give me 10 more ways, okay, it will just keep creating solutions. It's really good at synthesizing information. We as humans have gotten really used to consuming so much information that is below or above our level. And AI is very good at adapting information to be right at our level. And so the point of all of this is we need to understand when we want to use the sports car and when we want to use the dump truck. And if we get that wrong, and we start hauling rocks in the sports car, or trying to drive quickly across the country in the dump truck, we're going to be in trouble. And that's what we've got to get ahead of.

Scott - 00:18:35:

Okay. So you gave a couple of examples of where AI is probably better than the human brain, right? So one is synthesizing lots of information and summarizing it. Another one you said was brainstorming because it doesn't have to deal with the lens of lived experience that we have as humans. Where is the human brain better than AI? When should we rely more on our brains than the artificial brain?

Richard - 00:19:01:

Yeah, so back to that idea of lived experience: because of all that lived experience, we're actually quite good at choosing the right option when presented with options. And so, you know, handing the decision off to AI on what to do is something that I would not recommend. That is handing a sports car task off to the dump truck, right? So us being able to choose, with our thinking expanded by options from AI, that's a nice example of using both for what they're optimized for, right? Here's another one: building relationships, right? Holding relationships, maintaining relationships. That's a human task, and it matters, and it's critical that we can do that. And sometimes humans are really bad at knowing how best to work together. We struggle with that. AI can do a really good job of saying, here are the different skills of people on your team, here's how you might work with this person better, here's how you might overcome some barriers to do that. It can be really powerful at helping us know how to work better with other people.

Scott - 00:20:08:

Okay, let's dive deeper on that because there's a leadership application right out of the gate, right? Team dynamics and leading your team. How do you use AI well to determine what the different people on your team are good at? That's the first thing you said. What's a good, ethical way to use AI for that purpose?

Richard - 00:20:29:

I mean, I think one of the simple things, and I've done this before, is you just say, here's a project that we're working on, here are the people that are working on the project, what viewpoints might be missing? AI will do a pretty good job of giving you an answer on that. Is it going to be perfect? Is it going to be right? No, not all the time. But is it going to say, hey, I'm just noticing everybody on this team is male over 40, or whatever, right? Or maybe everybody on this team is from North America, and the project you're describing is an initiative that has global impact, right? And I'm giving really obvious ones, but it can actually be much more nuanced than that. And so I think that's a good way of saying, again, not that it should make the team for you, but it should help show you-

Scott - 00:21:11:

Where the gaps are. Yeah.

Richard - 00:21:13:

Right. Exactly. Exactly. Here's another one, Scott, that I'll mention, which is maybe a little bit tangential to this. Humans struggle sometimes at telling stories. There are certainly humans who are fantastic at this, right? But sometimes we struggle. We over-rely on text because it's hard to, or was hard to, produce media or graphics that were really compelling. And so we write a lot of long narrative. But minds and hearts are often captured much more powerfully by a short video or a short image or graphic than by long narrative text. And so that's an area where you can partner up, right? Using that sports car human brain to recognize when we need to win a heart and mind, and using the AI to help us generate some of those storytelling artifacts that can make that story much more powerful. And by story, I don't mean the kind of stories from back when I was a kid. I mean telling the story of why your company matters, or telling the story of why the data that you've just collected from a survey should change the way people think about a problem. Those are the sort of stories that I'm talking about. And I think AI can help us be much more compelling in our storytelling.

Scott - 00:22:30:

Have you used it for storytelling yourself recently?

Richard - 00:22:33:

All the time. All the time.

Scott - 00:22:35:

What's a great example from recent history?

Richard - 00:22:38:

One of the things that I do in all my presentations is use images that are created by AI. And I always put the prompt at the bottom, just because I think it's fun to demonstrate what I'm doing. So that's one. I also use AI to create video and graphs. That's something else I use it for. Just the other day, I said, you know, I'm trying to explain this. It was a situation where we needed to make sure we were training somebody more in a particular area. And I said, I need a good analogy. Can you give me a good analogy? And it gave me four or five really powerful analogies that I was able to use to just drive that point home.

Scott - 00:23:16:

So, since we're talking about you and how you use it, back to my two legs. I should have three legs of a stool, but I only have two in this show: self-management and leadership outcomes. How are you using AI to enhance your own self-management? I'm sure there's a time management application, but beyond time management? And you can talk about time management too, but let me just not qualify it. How are you using it for self-management?

Richard - 00:23:41:

I mean, I will tell you, and this may be... I think a lot of people talk about using it for efficiency purposes. And you can, right? Time management and writing drafts, saving time on drafts, which you should. No human should ever draft anything ever again, honestly, right? Unless it's a fine work of literature. Any email, anything, don't draft it yourself. Have AI give the first draft, right? That's what AI is good at. AI is good at drafting. The human brain is really good at refining. And we spend so much of our human brain's time drafting and don't have much time for refining, which is exactly the wrong order of operations. But I would say that where I find it most helpful is giving me feedback. You know, as a leader, I don't always get as much feedback as I need. And I know that. And so, you know, I have a coach and he's fantastic, but I need more than that. And so often I will say, here's something I've written that's explaining a concept to my team. And I'll put it into AI and say, if somebody were going to be confused about this, where might this be confusing? Right? Or if somebody were to be offended by something I have in here, where might they be offended? Or maybe it's just as simple as, is there a way that I can explain this that's more concise so that people aren't wasting their time? I get really good feedback from AI on that. And that helps me improve my leadership.

Scott - 00:25:05:

Right. You're kind of using it as a really skilled editor almost, right?

Richard - 00:25:09:

A hundred percent. And even to prep me for conversations. So if I'm going to meet with my board and I'm going to say, here's a big change that you need to make, I will go into AI and I will literally put in the prompt that says, I need to explain to my board a change that we're making. Can you please respond as if you were my board and ask me the type of questions they might ask?

Scott - 00:25:27:

Mm-hmm.

Richard - 00:25:28:

And we'll just go back and forth with the dialogue of the question so I can practice what I'm going to be presenting to my board with AI. And it's pretty darn good at that.

Scott - 00:25:38:

This is a great opportunity to ask about something I've been thinking about for a while personally. So I went to Davidson College, and Davidson still today, I believe, and certainly when I went there, has an honor code, and the honor code is a big part of how Davidson is organized. And one of the biggest violations you could make of the honor code was to plagiarize work. So as somebody who, you know, in their teenage years and early 20s was super careful about not plagiarizing work, to have AI write every first draft I do, that's a shift. How do you think about that? And how should everybody be thinking about it, if they're concerned that this is not my original work? How should they be thinking about that these days?

Richard - 00:26:26:

Yeah, that's such a great question. And by the way, I recently wrote a blog piece about this just because I was getting so many questions about it. So if anybody wants to dive more into this, you can just search Richard Culatta, AI, and cheating, and you'll get a whole bunch of information about it. But let me tell you this. Look, we've got to reassess our idea of original work, because it doesn't make a whole lot of sense. You know, I was talking to my wife the other day, who's a professional musician, a very, very talented musician, and we talk about composition, about when somebody writes a piece of music, right? All Western music is based on ideas from classical composers. So let's not pretend that we were the inventors of all of our ideas. Now, that is not to suggest that there isn't a unique value add that people bring to something. I don't mean to say that at all. But this idea that we aren't already tied to the works of others is, I think, a little misleading. I think what is so critical, and this is the big shift we need to pay attention to, is: what are the areas where we really need humans to add value? And then just be transparent about that. For example, the other day we actually hosted a big event, an art competition. This was an art competition for students who were telling stories through art. And it was AI-generated art. Now, interestingly enough, if you went and talked to those students and said, is this your art? They would say, absolutely, it's my art. It came from my ideas. I had to regenerate and reprompt the AI until I got what I wanted. Right? Now, if you said to them, did you paint that? They'd say, no, of course I didn't paint it. I'm not a painter. And so I think it's just being clear about what we mean. If the thing we are measuring, if I am teaching a class and I am measuring somebody's painting ability, their ability to use a paintbrush, it would be totally inappropriate to use AI and say, I painted this.

Scott - 00:28:32:

So, you know, the analogy I'm thinking of as you describe that is, in the early days of humankind, the paintings were on cave walls and then canvas was developed and we painted on canvas. And then photography came around and we, you know, we create art through photography. Is this just another step in that lineage?

Richard - 00:28:53:

Exactly. And you know, it's such a great example that you brought up. It's like saying everybody who is a photographer is cheating.

Scott - 00:28:59:

Yeah.

Richard - 00:29:00:

They're cheating because they click. They just click.

Scott - 00:29:02:

Right. They didn't have to paint it. They didn't have to paint that.

Richard - 00:29:03:

Yeah. Well, no photographer said they painted it. They weren't saying they were a painter. They're saying they're a photographer. And so I think we've got to really think about that. And part of what this means, Scott, is being brave enough to realize that there are some things that we ask students to do that really are not a good use of their time. You know, I have a kid who just recently graduated, and he's in college now. For one of his senior activities, we spent a lot of time formatting a document he'd written in MLA style, right? Lots of time, lots of review. What a terrible use of time. Nobody, ever in their life, is going to need to manually format MLA style. That is gone now, right? So that is time that was taken away from deeper level learning. And here's one final example to make this point. I was talking to a bunch of teachers once, and they were really concerned, mostly the English teachers, about how this was going to, you know, destroy all of their work. And there was another group of teachers that were just kind of laughing. And it got to the point where it was actually a little distracting. And I stopped and I said, sorry, pause for a second. What's going on? What's so funny? And they said, oh, sorry, we really didn't mean to be disruptive, but we're math teachers. And I said, okay, like, does that mean you're obnoxious in meetings? Like, what's the deal? And they said, no, no, no, look, we've just been through this before. I said, what do you mean? AI is new. And they said, yeah, AI is new. But calculators.

Scott - 00:30:33:

Yeah.

Richard - 00:30:34:

We had a moment where calculators came in and we thought it was going to destroy math as we know it. It will ruin everybody's brains and nobody will be able to do math. And then he turned to his group and he said, how many of you require a calculator in your class? All of their hands-

Scott - 00:30:49:

Require. That's interesting.

Richard - 00:30:51:

It doesn't mean we don't want to see basic mathematics. We do need to know that they understand basic computation, but once they do, we can go far further into concepts, faster, by using the calculator. And they just said, we know for sure that our English teachers and other teachers will catch up to this too. They just have to go through what we already went through with calculators. And at some point they will understand that we can now go far deeper into the skills we need when we're not spending so much time on the logistics of writing.

Scott - 00:31:19:

To pull back the lens from a writing example and get more general: from your experience as an educator and as a leader, and you've been in the private sector, the public sector, and now the nonprofit sector, you've been in all the sectors. What's the best way, or what are the best ways, there's no one single best way, to coach people, teach people, whatever verb you want to use, to do higher-order thinking, to go deeper in their thinking? Because I think, from everything you've said so far and everything I've read, that's what's going to keep humans in the game, right? That kind of value-added critical thinking that maybe can't be replicated online. So what have you learned about that? And what would you pass on to folks listening to this or watching us that they can implement wherever they are as leaders?

Richard - 00:32:21:

Let me just pause for a second, because I think this is the most important question we need to be asking. You know, when I talk to schools or companies and they're worried about cheating with AI, I'm like, you are just focusing all your energy on the wrong thing, right? What you brought up, that's what we've got to focus on, which is how do we make sure we're not just replacing things. We do a lot of these laborious, logistical tasks right now as humans, far too many of those. And if all we do is just move those over to AI and don't reinvest that time into deeper, more meaningful, engaged activities, we have not come out ahead, right? So I think those are the conversations we need to have. Again, let's just take the example of my kid, right? How does the fact that he is no longer spending so much time doing an early initial draft, or formatting something for a particular style, free him up? And what should he be doing with that time? That's the question we need to be asking. Now, in that particular case, I think he should be spending far more time refining. And generally, that's my answer, right? I don't know if you feel this way, or maybe some of your listeners do, but we spend so much time just getting basic tasks done that I don't have the time to put into the refining that I would want to, on a presentation, on something I'm writing, even just on my thinking. And so if I can be smarter about offloading some of the basic block-and-tackle drafting of things to AI, can that allow me to have a bit more time to go deeper into refining my work and my thinking? That's what we need to be careful of. Because if we're not careful, truly, if we're not careful, that can get eroded. And we end up, again, just sort of replacing the logistical part and not getting the benefit out of the extra refining and time that we need to be putting in.

Scott - 00:34:18:

So what are some good structured ways to do that, though? I mean, you know, so now... The good news is I've got more time to think. The bad news is maybe I don't know how to use that time very effectively. So what are your top 10 tips, your top three? I don't care what number you use. How do we think more effectively?

Richard - 00:34:40:

Right. I mean, this is something we have to learn to do. And I think the first part is just being open and honest about needing to learn how to do it, and talking about it, right? So for example, one of the things we try to do on our team, and I think other companies can do this as well, is just have conversations and say, what are some things that you are able to do differently now in this AI world that we're in? Hear what people are saying. And you'll hear some examples and you'll be like, oh, I never thought of that, right? So that's the first one: being open and encouraging those conversations. I worry that a lot of us are figuring this out, this meaning how we add our human value in new ways, quietly on our own. And that's a really bad way to learn. So being open and sharing, I think that's really the main thing. I think the other thing, though, is being able to look at some of our historic practice and recognize where we've become complacent with something that really could be much better. And you might say, well, what does that have to do with AI? We'll get there. When we've produced things the way we've produced them forever and ever because it was the only way to do them, sometimes we need to pause and go, there are actually far better ways to do this that we never could do before because they would be too time intensive. You know, having eight different versions of something at different levels for different people is something we never would have considered when I was on a team of instructional designers building a course, right? Now that's very easy. It's very easy to have eight different versions of something. And so I think some of it is also just taking the moment to look again at what your work looks like through fresh eyes. And that's tough. But that's where, back to the beginning of this conversation, those human-centered design ideas come in. Spend some time shadowing your end user. See what their experience is like. Don't pretend you have the answer. Don't pretend you know what the solution is. Just sit with the problems a little bit more. Watch for those areas of frustration, and then use some of this extra time that we have to go back and say, all right, where and how might we redesign some things with these new tools to address these problems a little bit better?

Scott - 00:36:56:

I love it. You're a fan of the old movie Airplane!, are you? My wife-

Richard - 00:37:03:

Don't call me Shirley.

Scott - 00:37:04:

Exactly. Please don't call me Shirley. My wife and I have all our inside jokes, and I'm sure you guys do too. And one of our inside jokes is the scene where Lloyd Bridges asks Johnny in the control tower, Johnny, what do you make of this? And Johnny says, well, I can make a brooch or a hat. I mean, we love that question. What do you make of this? Right? Because there are so many things, so many different ways you can answer, right? And that's kind of what your advice reminds me of.

Richard - 00:37:30:

Right.

Scott - 00:37:30:

You know, it's like, I have time to step back and go, well, you know, we could do this. We could level this out for eight different learning styles or eight different levels of competence or whatever, and give customized solutions, because now we've got the toolkit to do that.

Richard - 00:37:46:

Right.

Scott - 00:37:46:

And what would have taken weeks to do now takes five, ten minutes to outline, whatever it takes, you know, prompt, prompt, prompt. Yeah. Okay.

Richard - 00:37:55:

It puts an option on the table that wasn't ever even possible to be considered before. And like, Scott, how awesome is that? Like, how awesome is it that we have this moment? I don't know that we are ever going to have this moment again, at least in my career of having something that's disruptive enough that it allows us to go, wow, we actually can think about redesigning everything. At least it gives us the excuse, if nothing else, to be able to say, should we even do the things that we've always done this way? And that, even if at the end, we don't use AI or don't even change anything, just the gift of being able to stop and go, should we keep doing this the way we've done it?

Scott - 00:38:34:

Question the assumption.

Richard - 00:38:35:

It's magical.

Scott - 00:38:35:

Yeah, totally. Well, let's talk, let's kind of bring it down to individual life a little bit. You used the phrase earlier, better humans. I've seen in doing the research for our conversation, you use that phrase a lot. You know, how do we become better humans? What does that mean to you to be a better human?

Richard - 00:38:57:

I feel like there is so much need for humans to be more creative, to be more empathetic, to be more civil, just to be kinder, to be funnier, more willing to be engaged and tied closer to our communities and our families. These are all things, and we could go on, these are things that are uniquely human. And I just feel like for so long, it feels like those important attributes are kind of the afterthoughts. And I would just love it if that could be flipped. And if we could be in a world where those things, those skills were the most important thing. And that we were leveraging every tool set we had to help us be better at those tasks.

Scott - 00:39:43:

So you're a leader of an organization doing important work with how many people on your team?

Richard - 00:39:49:

About 200.

Scott - 00:39:50:

200. And there are many thousands of people who rely on your organization for inspiration and guidance and whatever else. How do you lead in ways that encourage uniquely human interaction and engagement?

Richard - 00:40:07:

Well, yeah, man, if you ask my team, I'm probably the last person they'd point to. They'd point to a bunch of other people, which is totally fine. So it's a little awkward to pin this on me, Scott. Way to do that. But look, I think I try very hard to push for more creativity. A lot of times my team members will come to me with ideas that are good. And we're used to taking a good idea and running with it. And so part of what I try to model is to say, it's a good idea. Can we get to a great idea? Can we go back and find four or five other possible ideas that are equally as good before we choose this one? So I try to model tapping into that human creativity in a way that sometimes we're willing to move through too quickly. Separately, I spend a lot of time talking about how we can be kinder in virtual spaces. We do a lot of work training teachers and students, and in some cases even companies, on how you create the conditions for a really healthy digital culture. And that's an area that I hope I've been able to model a bit: how do we use these virtual spaces to spotlight other people who are doing amazing things, to make people feel more welcome in the virtual spaces that we're in? I think those are all ways that we really need to be leading.

Scott - 00:41:44:

How do you make people feel more welcome in virtual spaces? Because I would think whatever you do virtually would also translate to the physical in some fashion, right?

Richard - 00:41:52:

Correct. And I love that you say that, because I think part of the problem, part of why our virtual world is not as awesome as we want it to be, is that we've made the assumption that how we've learned to be respectful humans in physical spaces will just automatically transfer over. And it turns out it doesn't, right? And so I think we do need to do exactly what you're saying, which is talk about what that looks like in virtual spaces. So look, here are some examples. I think really easy ones are: if you are ever in an environment where you see somebody making a comment that's disrespectful or unkind, being able to respond to it, not in a fight-back way, but with simply a post like, hey, we respect all viewpoints here, even people who may disagree with us. Right? Just a simple post like that can really, can really-

Scott - 00:42:37:

You know, like on a chat thread or comment thread?

Richard - 00:42:40:

You know, somebody's Facebook post, and somebody has to make that snippy comment, and being able to respond and not attack them back, but just say, I welcome all opinions here, thank you. Right? That's one. But there are also so many other ways that we can do it, too. I think one of the ways is just highlighting, in not a braggy way, how we are using digital interactions to help those around us. Simple thing, you know: hey, here's this GoFundMe account that I created for a family that's struggling. What do you think? Right? With my kids, something that's really interesting is they see dad or mom just at their computer or at their phone again. And they don't necessarily see that there are ways we're using those devices to help improve our community. I served as the president of our school PTA, right? And so the other day, my kid came up, and again, there's dad at his computer. And I took it as a moment to say, hey, we're trying to do this activity to get more volunteers to come into the school. Come take a look at this post that I'm making. What do you think? Right? It's just a simple way to show and to model that we can be using these technologies to really pull our community together.

Scott - 00:43:53:

That's perfect. Thank you for sharing that. I want to talk about what makes you you, a little bit. Anybody listening to this has, I hope, concluded that you are a very high-energy, very creative guy. If they haven't concluded that, they're not paying attention. When you think about how you are at your best, what great looks like for you personally, what's the list? What are the words that you hope others see in you, or that you feel and see in yourself, when you're in your own version of peak performance mode? What's the descriptive list?

Richard - 00:44:30:

Wow, what a great question. Curious. I think when I'm at my best, I'm curious. I don't think I know all the answers, and I'm interested in understanding the problem more. So I think that's probably the first one that comes to mind. You know, we've used this word empathy a couple times now: being empathetic, trying to understand what an experience feels like for somebody else. And I think, you know, funny. I actually worry that we've lost humor. Based on my role in government and some other places, I've had the responsibility of dealing with some really tough challenges, some really tough problems. And I feel like one of the ways that I've been most successful at being able to move through those is being able to bring humor, not to make fun of a tough problem, but to be able to treat tough issues with a certain amount of lightness, to help us be able to keep coming back to the table around them. And my book, you know, my book is basically about how we stop cyberbullying, right? How do we help kids stay safe? And there's nothing funny about cyberbullying. But I think we can approach a tough topic with a sense of humor, because if we can laugh at some of the areas where we've made mistakes, it actually makes it easier for us to not make those mistakes in the future.

Scott - 00:45:52:

I love that. I want to ask you about routines that help you be curious and empathetic and funny. And all those feel very authentic to me, even just from this conversation, let alone other conversations we've had over the years. Physical, mental, relational, spiritual are the categories that I typically think of in terms of routines and self-management. Any that stand out for you as being particularly important in any of those domains to help you be the person that you want to be?

Richard - 00:46:27:

Wow. Yeah. Gosh, those are all good areas. I mean, I will say, I don't talk about this on a lot of podcasts, but I do have a faith base, and it's quite strong. I've served in our church, I served a mission for two years for my church, and I've spent a lot of time in church service. I do believe that that's really key to staying grounded for me. And it also helps me baseline the problems that I'm tackling, right? When there's a sort of a broader-

Scott - 00:47:00:

Say more about baseline. What does that look like?

Richard - 00:47:04:

You know, I'll speak for myself here. I can get very caught up in the details of a particular problem that I'm trying to deal with. And that's part of being excited about fixing problems, right? But sometimes I need to be able to zoom out and recognize that while this problem is important, in the grand, and I would say the eternal, scheme of things, it's actually much smaller. And if I can keep that perspective in mind, it actually makes it easier for me to solve the problem. And so that sort of eternal arc helps me keep grounded. So I'd say that's one thing. The other thing, Scott, is I would just say having a habit, a routine, of always learning helps me. I'll tell you a sort of funny example of this. My son and I have a shared notepad app on our phones. And throughout the day, whenever we come across something that we want to learn more about, right? How heavy is a cloud? Or who was the first person to eat sushi? Whatever it is, we jot it down on that notepad. And then at night, before we go to bed, we do something called rabbit holing, where we pick something off the list, and for 10 minutes we go as deep as we can into the rabbit hole to find out what the heck we can learn about clouds or sushi or astronauts or whatever it is we've chosen. It's just a nice way, every day, to have a habit of learning something.

Scott - 00:48:40:

How long have the two of you been doing that? I have never heard that routine. That's a great one. So how long have the two of you been doing that?

Richard - 00:48:47:

Oh, we've been doing it for a number of years now. I think it started when my son was little; we would read at night before going to bed. And that's great. He now reads far more than I ever do. And so as we outgrew that, we were like, well, what's the next activity we could do? And we're always curious. We're always talking about things. And so we said, well, why don't we just make this a tradition every night before we go to bed?

Scott - 00:49:09:

I love that. And so one of the things, when I coach folks about routines, is I always encourage them to look for high-leverage routines that touch on at least two of those four domains. You know, like in that case, mental and relational, you're just hitting a sweet spot on both of those, right? There's the curiosity factor, which, you know, one of your words was, I'm curious, right? And so you spend your day being curious, you capture what you're curious about, and then you and your son go down the rabbit hole. I love that.

Richard - 00:49:39:

Go down the rabbit hole, that's right.

Scott - 00:49:41:

I love that. You mentioned your book, Digital for Good.

Richard - 00:49:44:

Yeah.

Scott - 00:49:45:

And that's about healthy tech habits for kids. You mentioned cyberbullying just now. What are the takeaways from that book, focused on kids, that really apply to everybody?

Richard - 00:49:59:

I think one of the biggest lessons, again, that applies to everybody is this idea of balance. And I start with it right at the beginning of the book, because it's the thing everybody worries about. Screen time comes up a lot: how much time do we spend on screens? And screen time, and I won't go too deep into this because you can read the book if you want to, is actually a pretty misleading way to think about it. It starts in a good place. It starts as a way to find balance. But the problem with time is it doesn't ask you to look at the quality of the digital activities that you're doing. And so in the book I make the case very strongly to stop thinking about screen time and start thinking about screen value. What is the value you're getting out of a particular activity? Or what's the value that one of your kids is getting out of a particular activity? Some digital activities have great value and could be done for lots of time and still be appropriate. Some have very little value and probably shouldn't be done for much time at all. And so it's thinking about how we find balance between our physical and digital worlds, and between the different digital activities that we do, really keeping that front and center, and always evaluating: are our digital activities bringing us value?

Scott - 00:51:10:

That's sort of like quantity versus quality, really. What are the best ways to assess the value? Because value sounds objective, but it can be a pretty subjective judgment, right? How do you assess value, or what are your recommendations on that?

Richard - 00:51:28:

So I give some very specific things to look for; I'll just share a couple here in the interest of time. One of them is how it makes you feel. When you've done this activity, do you feel better afterwards? Do you feel more excited, more rejuvenated, healthier? Or do you feel worse? I compare it to eating. Sometimes you sit down and eat and afterwards you're like, man, I feel better, I have energy now. And sometimes I sit down and eat four Twinkies and go, ugh, that was not good. I do not feel good.

Scott - 00:51:57:

If you're not watching on YouTube, you need to go watch that moment on YouTube, because his facial expression was amazing.

Richard - 00:52:02:

Yeah, you can tell I've been there. So I think that's a similar thing: how do you feel? Another is, what's the purpose of the activity? Sometimes the purpose of a digital app or engagement is literally to have you watch more ads. So that's a question you have to ask: is me watching a bunch of ads giving value back to me? Am I just helping somebody else make money, or am I really getting value back? That's another one. One more I would ask is, does it help connect me to other people, or does it isolate me from other people? There are some great digital activities. My kids get on Minecraft in the evening and play with their cousins. Their cousins live all over, and this gives them a way to connect. That's an activity that's actually bringing them together and helping them create memories, very different from an activity where I'm sitting by myself and it's pulling me away from other relationships. There's more in the book, but those are a couple of ideas you can think about when evaluating whether a digital activity is really bringing you the value your time deserves.

Scott - 00:53:13:

Yeah. Thank you. Those are great ideas. I know we need to wrap in a moment, but just an observation and then a couple more questions, and we'll wrap it up. The observation is that everything you just suggested is a really great example of self-observation: looking at yourself as not just part of the system, but trying to step back and look at the system. I'm sure you're familiar with the term double-loop learning, from Chris Argyris, and Peter Senge popularized it. I can either be in my do loop, or I can get out of the do loop, look at the impact of the do loop, and start asking questions like the ones you just asked, right? Terrific, terrific stuff. So, one more substantive question and one more fun question. What's your one best piece of advice to leaders who are navigating this incredibly... technological... how would you fill in after technological? What age are we living in? Transformational? The transformational age we're living in. What's your one best piece of advice for leaders in any field?

Richard - 00:54:21:

I think the biggest thing I would say is don't feel like you have to have the answers. As leaders, we often feel like our role is to be the one who comes in and provides the answer, provides the structure. And in some cases I think that's needed, right? That very much is the role of a good leader. But in this case, I think we need to realize that our job is to be explorers. We need to create safe space for people to explore, and we need to be exploring ourselves. The answers will come. But if we decide too quickly that we know the answer, in a world where the technology is evolving so fast, we will end up backing ourselves into a corner. I was actually working with a country, at the national level, and they'd come up with some guidance for how their employees were supposed to be using AI. They were asking for my opinion, and after I read it through, I stopped and said, did anybody who wrote this actually use AI? And there was this sort of awkward silence. Let's not pretend we have the answers when we haven't taken the time to explore.

Scott - 00:55:28:

Perfect. Here's my fun closing question that I like to ask everybody who comes on Best Ever. What's been in your ears lately? What are you listening to that's inspiring you, shaping your thinking, or is just fun?

Richard - 00:55:42:

Well, oh, interesting. What a great question. The audiobook I'm listening to right now, and I'm a little late to the game on it, but it ties to what you were just talking about, is the book on habits, right? I know a lot of people have read it, and again, I'm a little late to the game, but I'm just reading it now.

Scott - 00:55:57:

Is that the Charles Duhigg book, The Power of Habit?

Richard - 00:55:58:

Yes, exactly.

Scott - 00:55:59:

The Power of Habit, yeah.

Richard - 00:56:00:

And I'm reading it. I tend to read two books at once, so I'm reading the habits book, and at the same time I'm reading Dan and Chip Heath's book, Switch.

Scott - 00:56:09:

Oh, yeah. Those are two classics.

Richard - 00:56:11:

Again, both are ones I should have read a long time ago, and both talk about how we rethink tough problems by looking at them in very different ways. There's this kind of magical combination of, in one ear, hearing about habits and how, if you can get your habits right, it can be transformational, and in the other ear, thinking about how you can switch the way you look at things and solve problems in the right ways. It's just this cool mix of two ideas together that maybe will turn into something great.

Scott - 00:56:38:

I love it. Yeah, peanut butter and chocolate. That's how we started. That's how we'll end.

Richard - 00:56:42:

Yeah.

Scott - 00:56:43:

If people want to learn more about you, your organization, your work, your creative output, where would you send them?

Richard - 00:56:51:

You're welcome to follow me on LinkedIn; I post a lot of my videos there. If you're in the education space and interested in the work of ISTE and ASCD, you can go to iste-ascd.org and find us there. And certainly, as a shameless plug, go buy a copy of my book, especially if you're looking at how to create a healthy digital culture at home without the tension of too much tech or no tech at all. There's actually a really happy medium you can find, and I hope the book will help.

Scott - 00:57:19:

And the book is called?

Richard - 00:57:21:

Digital for Good: Raising Kids to Thrive in an Online World.

Scott - 00:57:23:

Thank you very much.

Richard - 00:57:24:

If I'm going to make a shameless plug, I might as well name the book.

Scott - 00:57:26:

Yeah, you might as well tell us the title of the book. It makes it easier. Hey, listen, so great to talk with you again and to have such a spacious conversation. And thank you so much for sharing everything you've shared, both the energy and the ideas, Richard. Fabulous stuff.

Richard - 00:57:39:

My pleasure, Scott. So great to connect again. Thank you. Next time we won't wait so long to reconnect.

Scott - 00:57:44:

Yeah, I hope not. Let's not. All right, coachable moments from the conversation with Richard Culatta. The first takeaway for me is that Richard has a ton of positive energy, and I love that in a leader. He's just so enthusiastic, and his enthusiasm is grounded in personal practice and lots of great professional experience. He's got a point of view on things. You can agree or disagree with the point of view, but I love a leader with a point of view who presents it with energy, and Richard clearly does that.

A couple of things he said really stuck with both me and my producer, Cece, as we were talking about our takeaways from this episode. Two things in particular. One is the concept of baselines, and Richard used that in a couple of different ways in the conversation, near the beginning and then again close to the end. The baseline at the beginning came up when we talked about best practices for using AI. I think we're all still figuring that out, but one best practice that seems very valuable to him, and that I've done myself in different ways for my own purposes, is to use AI as a baseline to test yourself and your assumptions around your communications. I loved what he was talking about: if he's writing an email to his team, or, and maybe he didn't say this but I could easily see it, you're working on a talk for a town hall meeting, you feed what you've written so far into ChatGPT or Claude or whatever platform you're using and ask it, what's missing from this? What groups might be offended by what I'm saying? What might resonate with different groups? And you can be very specific; the more you input, the more it learns about you and your operating environment. So using that as a baseline for improving the communication before he goes live with it struck me as a really, really great point.

The other baseline story he told connects to his point about learning throughout your life. Cece and I both loved the story about him and his son. They're using a shared notes app, there are a lot of them out there and you can find them easily, where they both can take notes in the same app during the day. As little random thoughts pop into their heads about things they're curious about, they jot them down. Curious was actually one of the words Richard used to describe himself at his best, and I think he totally embodies curiosity in the very best possible way. I'm sure his son does too, based on this story. So they jot things down in the app all day long, and then in the evening they spend 10 minutes together in what Richard called rabbit holing: let's go down the rabbit hole and learn everything we can about this crazy, curious thought we had during the day. Every day they're learning something new. I said to him, it's such a terrific example of doubling down on routines, getting the most leverage you can from the routines in your life. It's a terrific mental routine; it keeps you sharp and fresh mentally. And what a great relational routine. He and his son have been doing that for years, and I'm sure they've had a lot of laughs, a lot of big ahas, and really, really connected as a result of that technology.
Another great point he made about connection and technology: he talked about his kids playing Minecraft online with their cousins, using technology to connect with extended family, as opposed to going down some individual rabbit hole online for hours. And we do that as adults too, right? We had some really good conversation, which I'd encourage you to go back and listen to, or look at the show notes for, about how you make the digital experience more, and he used this phrase, uniquely human. That's a really powerful thing, that uniquely human experience online.

So, three takeaways, the coachable moments for me from Richard. One is the authenticity he brings as a leader. He's really clear about who he is: he's empathetic, he's curious, and he's funny. I think he checked all three of those boxes in our conversation, and he's doing things in his life, professionally and personally, that reinforce that. That's one of the big premises of this show, and I think he's a great guest in terms of embodying it. The second is the point about learning every day, and being so intentional, as in the example with his son, about how they do that together; the relational routine comes along with that as well. And then finally, that transitions into the third point about keeping it human. He stays in the question about how to do that, and what I really love about him is that curiosity keeps him in the question. It's always about continuous improvement, and about applying that to leadership in a technologically transformative age.

So, a provocative conversation. You may or may not agree with everything Richard said about AI, but I'm okay with that, as long as it makes you think, ask some new questions yourself, and look for applications you can put into practice in your own leadership and your own life. I'm good with that. So thanks for listening to my conversation with Richard Culatta, and thanks for listening to Best Ever. If you found today's conversation valuable, be sure to follow Best Ever on your favorite podcast platform and leave us a review and a comment on this episode. I want to know what's landing with you, and your engagement really helps others discover the show. And if you're looking for more on how self-management fuels lasting leadership impact, connect with me through eblingroup.com. I've learned it takes a village to make a podcast. Thanks to executive producer Cee Cee Huffman and editor Mark Meyer, both of Wavestream Media. And thanks to my other team members, Lindsay Russell, Mary Motz, Sophia Shum, and Diane Eblin. Best Ever is a production of the Eblin Group. Thanks for listening to Best Ever, and until next time, keep taking those small steps that lead to your best ever outcomes.