Unbound is a weekly podcast, created to help you achieve more as a leader. Join Chris DuBois as he shares his growth journey and interviews others on their path to becoming unbound. Delivered weekly on Thursdays.
0:00
On today's episode we discuss AI, data literacy, and the importance of soft skills as technology advances. Are you a leader trying to get more from your business and life? Me too. So join me as I document the conversations, stories, and advice to help you achieve what matters in your life. Welcome to Unbound with me, Chris DuBois.
Kevin Hanegan is the chief learning officer at Qlik, a data and analytics company. He is also the chair of the advisory board for the Data Literacy Project and an author of multiple books. In a world where technology advances rapidly, Kevin has taken an interest in how our brains process information, make decisions, and deal with biases. Coupling that knowledge of soft skills with his experience in data, Kevin has a unique take on how we can get more from our professional and personal thinking. And today, he's sharing that with us. Kevin, welcome to Unbound.
0:55
Perfect. Thanks, Chris. Looking forward to this one.
1:00
Yeah, you're welcome. Thanks for joining the show. Let's start with your origin story.
1:05
Yeah, well, yes, absolutely. It's actually a unique one. I have technical training by trade. In university I was a computer science and math geek, every semester learning new programming languages and math, and unfortunately I didn't learn much of the soft skills. But as I started going out into the workforce, I realized I had a knack for teaching complicated things and, not dumbing them down, but simplifying them. So I got into learning, went back and got my master's in adult learning, instructional design, and so forth, and I had this lightbulb moment that technology is moving faster than our brains are keeping up on the soft-skill edge, as you mentioned. Right now I enjoy working at a company called Qlik, where we get to do customer-facing product training and internal learning and development. The company is a data analytics company, so there's always this technical edge, but we also teach people how they should apply their soft skills toward it. Right?
1:59
That's awesome. And I'm sure we're gonna get into the difference between hard skills and soft skills, and how you can work on each. But first, I want to go into what you're saying about technology moving faster than our brains are able to keep up. In your words, why is it important to maintain the human touch in all of this as AI starts to handle more analytics and more of our daily tasks?
2:27
Yeah, I mean, that's the thing. Even now, with generative AI, there's a big portion of stuff that gets done that people used to do. But we're not losing out; they're meant for two different things, right? AI is really good at processing data, especially machine models and number crunching, things that our brains can't do as fast. And we're good at adding the human element, the context, the different perspectives, and they're supposed to work together. There is that fear that as more and more technology comes out, we won't be needed anymore. That's not true. We always need the human element, because there's this deficit: we keep learning the new technology, learning what it is and how to program it, but we're not learning how to use it. We're not learning how to consume the output, how to ask the right questions, how to validate those things. It's like a black box where we're making the black box faster, quicker, smarter, more technical, but we still need to know the inputs and outputs. And that's where the human element comes in.
3:28
Right, yeah. I was interviewing Paul Teasdale, and one of the things we brought up was that anytime you create systems and processes and all these automations, you should always have a human in the loop, so you can make sure the right inputs are going in and the right outputs are coming out. How do you approach that phase, to make sure you're doing the right thing?
3:51
I mean, it seems, I wouldn't say paradoxical, but that's where I think there's a gap: people think that machine learning algorithms or AI are infallible. They aren't, in the sense that we have to give them directions and we have to use the output. Those directions are created by us, and they're fallible because we're human; we could be sending in the wrong training data. There are famous examples where massive things like Google's autonomous cars and crime detection software had the most amazing technology, but where they failed, and failed miserably, was that the training data going into them was biased. It was missing a population, or didn't account for a different economic group. And the model doesn't say, "Hey, I'm missing this data." It says, "Here's my whole data, let me build around this," and it's completely misleading. So that human element, just like your previous guest said, is critical, because you have to ask the right questions and give it the right data, and all of those things are susceptible because we're human. Right?
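The biased-training-data failure described here can be sketched in a few lines of plain Python. This is a made-up toy with invented numbers, not any of the real systems mentioned: a simple "model" fit only on one group's data confidently flags the group it never saw, and nothing in the model itself signals the gap.

```python
# Toy illustration of the biased-training-data failure.
# A "model" fit only on Group A data is silently wrong for Group B;
# it never says "I'm missing this data", it just builds around what it has.

def fit_centroid(samples):
    """Average the 1-D feature values seen in training."""
    return sum(samples) / len(samples)

def predict(model_centroid, value, threshold=2.0):
    """Flag anything far from the training centroid as 'anomalous'."""
    return "anomalous" if abs(value - model_centroid) > threshold else "normal"

# Training data drawn ONLY from Group A (feature values near 10).
group_a_train = [9.5, 10.2, 10.0, 9.8, 10.5]
model = fit_centroid(group_a_train)

# Group B is a legitimate population the training set never included
# (feature values near 15). Every member gets flagged.
group_b = [14.8, 15.1, 15.3]
labels = [predict(model, v) for v in group_b]
print(labels)   # ['anomalous', 'anomalous', 'anomalous']
```

The point is the last line: the model never reports the missing population; a human has to know the training data was incomplete.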
4:53
And I can't remember the term for this, but the idea that because AI is spitting out something in a language that feels very genuine, like how we would say it, we tend to believe it more than if we viewed it as just the output of an algorithm. And so I'm sure there are challenges there.
5:12
And that's one of the unique challenges I think we have today as a society. There's a corollary there. We see visualizations on news channels, in articles, in newspapers, and we're all susceptible to misinformation. The reason is that the brain is a very powerful supercomputer, but it has limitations, right? It can't process everything that's coming in. So if something looks like it makes sense and sounds like it makes sense, the brain says, okay, it makes sense. You might see a visualization that says there's a correlation between our marketing spend and sales, and your brain is gonna say, okay, that makes sense, because it conforms to my beliefs: if you invest in marketing, you'll probably sell more. But then there are the famous examples where the visualization has different variables, same kind of data, like more shark attacks because we're selling more ice cream, and the brain goes, I have no idea how those things are related. Does that make sense? Now flip to AI. It's sending out stuff that's conversational, easy to read, intuitive. Same analogy: it looks like it makes sense, so why would I challenge it? But that's the downside. It might make sense, but it might not be accurate. Right?
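The ice-cream-and-shark-attacks pattern can be shown with a small invented dataset: two series that share a hidden driver (hot weather) correlate strongly even though neither causes the other.

```python
# Spurious correlation: two series that share a hidden driver (temperature)
# correlate strongly even though neither causes the other.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Monthly temperature (the confounder) drives both series.
temperature    = [5, 8, 12, 18, 24, 30, 32, 31, 25, 17, 10, 6]
ice_cream_sold = [t * 10 + 20 for t in temperature]   # sales rise with heat
shark_attacks  = [t // 4 for t in temperature]        # more swimmers, more attacks

r = pearson(ice_cream_sold, shark_attacks)
print(round(r, 2))   # strong positive correlation, close to 1.0
```

The brain's "does this make sense?" check can't distinguish this from a causal relationship; only asking about the third variable can.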
6:29
So do you have a system, I guess, for approaching the output in a way that asks those questions? Like, is this a reasonable output based on what I put in? Yeah.
6:42
So there are a couple of ways. At the lowest level, I'm a huge fan of Socratic questioning: just question everything. Why? Is there another perspective you could get? Is there another scenario where you would read this differently? Are there other variables that could be driving this result? It's all based on questioning, and actually generative AI, like ChatGPT and others, is really good at helping you with that, because those are skills we kind of forget and don't use that much. So the first level is using that questioning. Then, taking a step back, at an organizational level, build a framework that's systemic and also systematic: include different perspectives, bring out all of the assumptions you have, verbalize them, and get those different perspectives in a structured way. You're going to minimize those situations, because we look at it as "it makes sense to me," and it might not make sense to someone else, because they have different experiences and beliefs. And if you do that in a structured way that follows the same process more or less every time, you're gonna lessen the chances. It's not foolproof, but it definitely lowers the chances of misinformation, of being susceptible to bias, or similar.
7:52
Right. And so I guess, with all the algorithms suggesting decisions for us, what are some of the ways we can make sure we're applying better judgment and ethics as we review those recommendations, to decide what we're actually going to do?
8:10
Yep, it's very similar to the inputs. You build a framework that questions the outputs. Is this output ethically sound? Is it marginalizing a group? Go back to the credit card fraud examples: if the training data doesn't include a certain economic group, and the model spits out "we're always going to decline this person's credit card," then factually, that might be a true statement based on the model. But that's where we as humans have to interject and say, okay, is that the ethical thing to do? Is that the rational thing to do? What are situations where we could update the model to incorporate these groups? And again, it takes diverse perspectives, which is why I'm not worried robots are taking over the world. If anything, we need more people looking at it, not fewer.
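One minimal sketch of this kind of output-questioning framework, with illustrative group names and an arbitrary 2x threshold (assumptions for the example, not a standard): before acting on a model's decline decisions, compare decline rates across groups and flag large disparities for human review.

```python
# Minimal sketch of an output audit: before acting on model decisions,
# compare outcome rates across groups and flag large disparities.
# Group names and the 2x threshold are illustrative assumptions.

from collections import defaultdict

def decline_rates(decisions):
    """decisions: list of (group, declined_bool) pairs -> decline rate per group."""
    totals, declined = defaultdict(int), defaultdict(int)
    for group, was_declined in decisions:
        totals[group] += 1
        declined[group] += was_declined
    return {g: declined[g] / totals[g] for g in totals}

def flag_disparities(rates, ratio_limit=2.0):
    """Flag groups whose decline rate exceeds the lowest nonzero rate by ratio_limit."""
    baseline = min(r for r in rates.values() if r > 0)
    return [g for g, r in rates.items() if r / baseline > ratio_limit]

# Invented decisions: group_b is declined six times as often as group_a.
decisions = [("group_a", False)] * 90 + [("group_a", True)] * 10 \
          + [("group_b", False)] * 40 + [("group_b", True)] * 60

rates = decline_rates(decisions)
print(rates)                    # {'group_a': 0.1, 'group_b': 0.6}
print(flag_disparities(rates))  # ['group_b'], a prompt for human review
```

The flag doesn't decide anything; it just forces the "is that the ethical thing to do?" question before the output is acted on.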
8:58
Yeah. Yeah, I mean, that's how I view AI. It gives people a way to move faster and get more done, but it doesn't necessarily remove the need for what humans actually do.
9:10
So if anything, I think we need more teamwork on the output, not just one person. Agreed.
9:18
Okay, so let's get into some cultural stuff, as far as working in organizations where you're leaning on data as a requirement. How can you facilitate making data such a critical piece without sacrificing creativity within your organization?
9:34
Yeah, it's a good point, and just to level-set some of that: there are some organizations where they don't use data, they just use their intuition. They're like, I've done this for 30 years, and this is what I'm going to do because it works. And there are other organizations where it's, well, the data told me to do X, so I'm going to do X. I'm a big fan of the view that anything at an extreme is at an extreme for a reason; you've got to be somewhere in the middle. If you're using your intuition, the downside, and a lot of people don't realize this, is that intuition is actually data and information. It's all those things coming into your brain over the past however many years you've been alive. Everything your brain thought was relevant is stored somewhere in long-term memory, and unconsciously, when you make a thought or a decision or a judgment without thinking, your brain is doing it for you. The problem is, as we mentioned at the beginning, technology is changing, the world is changing, and things that were relevant 10 years ago are not relevant now. Long-term memory might be relevant for an answer in 1980; it's not going to be relevant for business in 2023. On the flip side, when you go completely to the data, there could be missing data, there could be uncertainties, there could be changes in the model. The over-exaggerated example, probably an urban legend, is the person who drove off a cliff. When they asked why, he said, well, the GPS told me to turn right. You basically take out all of your common sense and critical thinking and blindly trust the data. You really need to be in the middle. So going back to your question: the more people use data, the less they use the creative side to think of different alternatives. It's like they think it's a calculator: you enter, you get the answer, you move on. It's a mind shift.
And that's where it's hard, because sometimes people are not open to mindset shifts, just like some people have a fixed mindset and some people have a growth mindset. The shift is that the output isn't the answer. It's a starting point. It's a hypothesis, something to look at and then try to invalidate. And unfortunately, like you said, it makes sense, it looks intuitive, it looks right. It's this really expensive black box that gave us an answer. We see a calculator and we say two plus two, it says four, and there's no way it's not four. But the black box isn't a calculator; it's a different model that we're not used to. So, same thing: build practices where you question it, where you use it as a hypothesis, use the scientific method, get different perspectives. Ask, when is it not true? Sometimes we'll actually say we want two answers: don't just give me one, give me two. The benefit is the brain then does the pros and cons of two, as opposed to just trying to confirm one of them.
12:10
Right, I like the idea of viewing it as a piece of the puzzle, or, building off what you just said, it's almost like you're getting insights from the input. The AI or whatever data tool you're using gives you an output, but now it's on you to actually create that final output, based on the insights you just received. I think that's probably a really strong way to look at it.
12:36
Absolutely.
12:37
Yeah, good.
12:38
I was gonna say, there is a scenario where the decision or the problem is so operational that you probably wouldn't stop. That's where you go into decision intelligence, and you have, you know, RPA and automation. Do you want to give someone a coupon at the back of their checkout? There's no time for a human to stop and say yes or no; it just makes the decision. In those situations you still have to do what we talked about, but you do it after the fact and then update the model with the results. I don't want people to think I'm saying nothing can be automated; decisions can be. They just should be things that are operational, not strategic, like what's the next five-year business strategy for my company. And even the operational ones you're going to assess and evaluate; you're just going to do it after, and then iterate for the future. Right,
13:28
just continuously train that model so the results keep getting tighter. So that's great for understanding how not to become over-reliant on data, so you're still actually thinking critically about everything. Are there any mistakes, I guess, that you see businesses making, separate from that, as far as how they're using data versus their own human touch inside that data? Yep.
13:58
I think the biggest thing I've seen, going back to that, is there's an over-reliance on data and an under-reliance, or maybe an under-appreciation, in that we're not practicing soft skills. And this is something I've said for a while. We grow up and we come out very curious. I have four kids; they ask why every day, and it drives me crazy. But that's because, going back to the brain, it's empty. It's a blank canvas, so they don't have anything to compare against. So it's why, why, why. Then you start going to school, and at least where I'm from, if you ask why, the teacher's like, don't talk back to me, I'm the teacher. So we kind of suppress the questioning. Then you go to your first job, you stand up in a meeting, and your boss is like, don't talk back to me, don't try to show me up in a meeting, this is the answer. So I feel like from childhood to your first job, you kind of learn the opposite. You learn not to question, you learn not to be creative, you learn not to challenge, and it's unfortunate. So I think businesses then don't know there's a problem. They don't know that people might not be speaking up when they should be, that they might have groupthink, or that people might say, well, I don't know how to be curious. I think you're gonna see it more and more in the coming years: organizations are going to kind of go back to the future and teach critical thinking, teach communication, teach active listening. I say it all the time: I've been in school for 12 years, even longer, and never took a course on listening, but it's the most important communication skill we have.
15:25
Right? Yeah. So I guess the expectation would be that technology advances so much that it levels the playing field, and now soft skills become the differentiator.
15:37
Exactly. It's the combination of the two. And
15:40
there you go. Yeah. How do you recommend people actually prepare for that? Yeah,
15:48
I mean, I don't want to tell people to stop learning technology. There's still a gap there; automation can't do it all. You still have to learn the tools to do your business, you still have to learn the various programming languages, and although there are no-code and low-code solutions, there's still a need for that. But I would tell everyone: if you have time in your day to learn one thing, go back and take a course on critical thinking. If you don't think that's relevant for you, then take a step back and take a course in psychology, or in how the brain makes decisions. Realize that bias is a real thing; realize the brain is great, but it's limited. Then you might want to go take that critical thinking course. If everyone starts there, I think we get exponentially better as organizations, because it's not a lack of wanting to do it. It's a lack of awareness that we need to do it.
16:38
Right. Yeah, there are a couple of good books: Thinking, Fast and Slow by Daniel Kahneman, and Think Again by Adam Grant. I think both of those were very good books for asking, how am I actually viewing this, and how am I thinking about a problem? I'd recommend those to listeners. A valuable way to approach skill building I've found with soft skills is that it's really hard to quantify them. Actually, let's talk about this first. How do you go about quantifying soft skills, so you can actually know if you're improving on them?
17:18
Yeah, I mean, there are some psychological assessments. Take emotional intelligence. The challenge is that a lot of the ones that don't cost anything, the free ones you see online, are honor-system tests. We don't like to believe we're fallible, so if someone asks, how often do you do this, the result is most likely going to be skewed. You really need to be put in a situation, a psychological scenario, where they can extrapolate: okay, I'm using critical thinking here, I'm not using it there. Those assessments usually cost a lot of money, but I have found many of them useful; we use them in hiring decisions and when we're doing team building. I can take them multiple times, and they show me the progression I want to see. The other thing is harder to do, because it's easy to say, take a quiz, see the score, and want the score to go higher. It's a little bit harder, but it's getting feedback, and that's why we do a lot of 360 reviews. I might think I'm improving, but if my peers and my leaders and everyone else think, no, I don't use these skills, that's kind of an eye-opener. So I like to balance assessments with a 360 survey from different levels in the organization to get that perspective.
18:33
Right. Yeah, that makes sense. Something I've done with soft skills specifically is to break them down into the individual actions you would have to take within that skill. So for, say, how do you get better at networking and meeting people, I'm just gonna focus on making eye contact and smiling when I meet someone; let's see how many times I can do those two things. If you can find the right individual skills to work on, they actually play into multiple different areas. So identifying what those are for you, I think, is pretty great as far as skill stacking goes. I want to shift back to data for a second, though, and talk data literacy. I think that in itself is, let's say, a fleeting skill; it's something a lot of people don't pay attention to. They see data and say, oh look, we have the numbers, but they don't look for what story that data is telling us. When I have a dashboard of multiple reports, what am I actually seeing? What insights can I pull from this? How do you recommend people go about improving their ability to see the story within the numbers?
19:48
Absolutely. To take a step back, I think the first challenge is that people hear the term data literacy and say, I'm not a data person, this isn't for me. And that's the first challenge, because, like you said, you might not be building data, but you might be reviewing a dashboard for your key performance metrics. You're using data. You're consuming it, you're trying to make decisions out of it. Just a few weeks ago, I bought a coffee pot on Amazon; I looked at the reviews, I looked at the feedback. That's data. So data literacy is fleeting partly because some people say it's not for them when they hear the word data. But to your point, there's something about an insight, and data literacy wants you to validate the insight. That's where critical thinking, creativity, and curiosity come in; that's where the soft skills come in. But then you have to do something with it. You have to act on it, you usually have to change something, so you have to communicate it. And I've seen situations at one end of the spectrum where someone tries to share an insight using the most technical, convoluted visualization, and everyone in the room looks at it and has no idea what they're saying, all the way to someone who just tells a story without using data at all. Studies show there's a sweet spot in the middle. You have to use data, because using data makes people more engaged and more persuaded, but you have to do it at a level the audience understands. It's an art of communicating by simplifying and prioritizing what's relevant. A common example: if you're just trying to show a result, like the average sales number, you don't have to use a big visualization with all these different variables on it. That's going to confuse people; the cognitive load goes through the roof. Just show the number. It doesn't have to be a long, complicated visualization.
Those are useful when we're trying to come up with the insights. But if it's a simple answer and a simple insight, just show the number. Right?
21:46
Yeah, actually, an example from another guest: there was a medical meeting around a conference table, and they put paperwork in front of everyone that had the year the patient was born along with all the other medical information. So everyone was trying to do the math to figure out, okay, how old is this patient, because only the date was there, rather than just putting "it's a 44-year-old male." Instead of giving that insight, you're forcing people to go figure it out, so they're not paying attention to everything else; you've lost them in your presentation. And sometimes it's as simple as pulling out the view on the data as well. I've had clients where a weekly or monthly chart doesn't look very exciting for the growth rate of the company, but as soon as you pull out to a quarterly view, it's like, whoa, that's huge growth. We can see it now, because we just pulled back. And I don't know that a lot of people are thinking that way, asking, how do I need to view these numbers in order to get what I need from that information? Absolutely.
22:48
And you hit the nail on the head. To me, one of the biggest lightbulb moments, to summarize what I think you said, is that it's most important to know the question you're trying to answer and the problem you're trying to solve. A lot of times we see these ad hoc visualizations someone puts in front of us, and it's not answering the question we want. To your point, if the question is, how are our sales trending over the first five years, then you want to look at five years, and maybe even beyond that. But sometimes we see a point in time, someone hands us a data sheet, and it's not answering our question. But again, going back to "it makes sense": the brain says, makes sense, so we believe it to be true.
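The "pull out to a quarterly view" point can be shown with a small, invented sales series: the same numbers look flat month over month but show obvious growth once aggregated by quarter.

```python
# Same data, different zoom level: month-over-month can look unexciting
# while the quarter-over-quarter view makes the growth obvious.

monthly_sales = [100, 102, 104, 107, 110, 113, 117, 121, 125, 130, 135, 140]

def quarterly(values):
    """Sum each consecutive block of three months into one quarter."""
    return [sum(values[i:i + 3]) for i in range(0, len(values), 3)]

def growth_rates(values):
    """Percent change between consecutive periods, rounded to one decimal."""
    return [round((b - a) / a * 100, 1) for a, b in zip(values, values[1:])]

print(growth_rates(monthly_sales)[:3])         # [2.0, 2.0, 2.9], around 2% a month
print(growth_rates(quarterly(monthly_sales)))  # [7.8, 10.0, 11.6], growth pops out
```

Neither view is wrong; the question (monthly operations versus the company's trajectory) decides which aggregation answers it.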
23:31
When it's not. Yeah. I want to go back to looking at inputs for algorithms, just jumping way back. We talked about making sure you're actually putting in the right inputs and you're not throwing in your biases and everything. But how can you mitigate some of those things, or even use the technology to talk back to you and say, hey, make sure you're including these types of data points, in order to reduce the chance that we're gonna get a fallible output?
24:06
Yeah, so take generative AI, for example. It's come along really well. The difference now is that traditional AI will use existing data and find patterns and trends, while generative AI will create new data and new content, and people think of them as kind of mutually exclusive. But to answer your question, you can use generative AI to help you vet the input for the traditional AI. You can ask it: pretend you're giving me Socratic questioning; what data would be more relevant here? What could I be missing? You could even ask it, what are some potential fallacies in my insight statement? And again, it's not 100% perfect, so you can't just assume it to be true, but it's a starter. It could say, hey, you might be missing this. So I'm a huge fan of using generative AI and then, going back to diverse perspectives, using other people and getting their thoughts as well. But I do have to say that the results I get from something like ChatGPT, to help me find potential things I should be asking of the model, have been really positive. Right?
25:13
Have you ever tried debating within the tool, to get more of that? I've tried this before, where, using GPT, I've had multiple agents created within a single instance. So it's like, agent one, this is your view; agent two, this is your view; agent three, you're the mediator. Now I need you to debate this question. And they'll actually go back and forth, showing who's talking, and the AI will debate itself until it gets to a logical answer, where the mediator says, this is what we've come up with and agree on. I find that a super strong approach, because now it feels more validated, right? Because it's actually showing you the steps behind its work to get there. I don't know if you've done anything similar. Yeah,
26:02
I mean, you're daisy-chaining the prompts or the commands to get to that point. I haven't actually gotten it fully automated, but what you do there is really cool, because that's one of the tools we use outside of AI: just set up two different opposing views, like a devil's advocate, and go back and forth. And the more you can automate that, it ties back to how, in business, we should use the scientific method. We should have a hypothesis and try to disprove it, and that technique helps you try to disprove it. Whereas most of the time, just because we're tired, we're busy, we're going fast, and our brains are limited, we try to find data that proves our argument. What you said is a perfect way to say, okay, here's my argument, debate it, and let me see the output. You're basically automating the scientific method, which is awesome.
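The debate pattern described here can be daisy-chained as one simple loop. A sketch under stated assumptions: `ask_llm` below is a hypothetical stand-in for a real model call (it just echoes here, so the structure runs without any API), but the loop shape is the technique: opposing roles take turns on a shared transcript, and a mediator summarizes at the end.

```python
# Sketch of the multi-agent debate pattern: two opposing agents plus a
# mediator, daisy-chained in one loop. `ask_llm` is a hypothetical stand-in
# for a real chat-completion call; here it only echoes, for illustration.

def ask_llm(role, transcript):
    """Stub: replace with a real model call that sees the role and transcript."""
    return f"[{role}] responding to: {transcript[-1]}"

def debate(question, rounds=2):
    transcript = [f"Question: {question}"]
    for _ in range(rounds):
        for role in ("advocate", "skeptic"):            # opposing views take turns
            transcript.append(ask_llm(role, transcript))
    transcript.append(ask_llm("mediator", transcript))  # mediator wraps up
    return transcript

log = debate("Should we trust this sales forecast?", rounds=1)
for line in log:
    print(line)
```

With a real model behind `ask_llm`, each turn would receive the full transcript plus role instructions, which is exactly the "agent one, agent two, mediator" setup Chris describes.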
26:47
Yeah, and it's a lot of fun to see what it spits out. So exactly.
26:51
Well, it'll come up with different counterarguments you wouldn't have thought of.
26:56
Right. What I've found with most debates is that they come up purely because we're not aligned on definitions. If we could agree on the definition of whatever we're trying to solve for, usually the debate solves itself; that misalignment is the actual core problem. And so through AI, we're able to, maybe not come up with the actual definition of whatever we're trying to do, but remove some options from the table. That reduces the scope, and we can focus better. One of the quotes that I love, and I don't know who to attribute it to, and every time I think of the quote I forget to go look up who it's from, so someone can email me later and tell me, is the idea that you should not believe everything you think. I would love to talk with you about some of the benefits of questioning our beliefs. I know we've already touched on critical thinking and all of these things, but I want to hear your perspective on it.
27:56
Yeah, it really goes back to understanding how the brain works, and how everything sits in our long-term memory. So I'll tell a story. I mentioned I have four kids. When I was growing up, a neighbor kid would always shoot BB guns at our house and was just a brat, and he had red hair. I didn't realize it, but as I was growing up, that was in my brain: every time I would see someone with red hair, I'd think, troublemaker. And then there were even movies where the troublemaker had red hair, and I'm like, see, there's my point. That was there for like 40 years. Then my third kid comes out of the womb with flaming red hair. Most kids don't have a lot of hair; he had a lot, and it was red. And I'm like, well, he wasn't born evil, right? That was the shock to my long-term memory: wait, I just made a stereotyped, incorrect judgment. And that's one of the triggers that got me thinking, okay, how do our beliefs impact our decisions? In that case, it wasn't even a conscious belief. That's why it's scary: I wasn't going around saying I hate redheads; it was somewhere inside of me, based on a generalization from one person. I could have used any other attribute, but for some reason the brain latched onto the hair, because it was the one that stuck out. And again, all unconscious. I'm not against redheads, obviously. But the point is that we see these situations happen, sometimes without all the context, and our brain makes a decision, and that gets stored. Then, when we unconsciously see similar things, it doesn't have to be exact, that decision comes out, and that's how stereotyping and snap judgments happen. The good news is, we know how it happens. The even better news is, we know how to fix it. But we need to get those judgments out into the forefront and have them be challenged.
And there is a segment of the population that doesn't want to be challenged, because they're like, nope, it's not a judgment, it's a true statement. I'm not talking about them. I'm talking to the people who do it unconsciously, who don't mean to do it and don't think they're doing it. Bring it to the forefront and change those beliefs. Hopefully everyone doesn't have to have a situation like mine, which wasn't a bad one; my son being born was just the trigger. But the more we can bring these to the surface, the less judgmental our beliefs will be. Right.
30:15
I think you just gave a good case for actually spending some time being self-aware. With so much content and media out there for people to consume, there are very few times where we actually sit back and think about how we got to some of our feelings and thoughts. Hey, Kevin, this has been a great conversation. I've got three more questions for you, with the first being: what book do you recommend everyone should read?
30:46
I was gonna say Thinking, Fast and Slow, so I'll pivot here. I guess another good one, if you've ever heard of it, is Atomic Habits. I'm a huge fan of trying to figure out good habits, bad habits, and how to break them, and the book, by James Clear, I believe, does a really good job. It's helped me immensely in life and in business.
31:05
Yeah, that's a great book. What is next for you professionally?
31:10
I think continuing the same thing. I think there's a niche here: teaching soft skills to technical people, and at the same time, on the other end of the spectrum, teaching data literacy to people who freak out when they hear the word data. If I can keep evolving those two pinnacles, writing articles, doing papers, doing training classes and workshops, to educate everyone that data's not scary, and to educate technical people that we still need soft skills, I feel like I'm making a difference. Awesome.
31:39
Finally, where can people find you? Yep.
31:43
So if they want to go to LinkedIn, I think there are only two Kevin Hanegans. One of them is not me, but the one who writes about data literacy is me. And then I have a website, turningdataintowisdom.com, where you can find articles, blogs, and white papers related to all these different things.
32:00
Awesome. Thanks for joining me, I appreciate you coming on.
32:04
Awesome. Thanks for having me.
32:10
If you enjoyed today's episode, I would love a rating and review on your favorite podcast player. And for more information on how to build effective and efficient teams through your leadership, visit leading four.com. And as always, you deserve it.
Transcribed by https://otter.ai