IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.
We emphasize strategic planning, intuitive UX design, and better collaboration across business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.
We explore topics about product, strategy, design, and development. Hear stories and lessons from how our experienced team has helped launch hundreds of software products in almost every industry vertical.
Technology doesn't erase old skills. It changes the ones that are valuable. And we are heading towards a world where deep thinking is becoming optional.
Speaker 2:All these AI systems we're now using to reduce cognitive load, what is the effect of that going to be? Is it helpful, or is it really hurting the whole species?
Speaker 1:Hello, everybody. Welcome back to another episode of the Make an Impact podcast. I'm your host, Makoto Kern. And today, I think what we're covering is something very interesting: we're living through a major shift.
Speaker 1:It's not just technological, it's cognitive. AI is no longer a tool that we reach for just occasionally; it's becoming something we lean on constantly. So today we ask: what happens when we start outsourcing thinking itself? And of course, today I have my cohost Brindley Evans. He's our UX strategist at IIIMPACT, and someone who's not just thinking about this but actually helping implement these systems in the real world.
Speaker 1:Welcome, Brindley.
Speaker 2:Hey, listeners. Thank you. And this is a topic that I've been giving more and more thought to, or rather, is it less and less thought the more I use AI? I guess that's the question we need to ask ourselves. I mean, AI is amazing.
Speaker 2:The speed at which we can receive answers to problems and decision points is blinding. So we've got to ask ourselves: at what point do we stop thinking for ourselves and really begin outsourcing the thinking to AI? And what's the cost of that? All this and more in today's episode of the Make an Impact podcast.
Speaker 1:Great. So I'd like to start off by framing the landscape. There's been an evolution in AI: it's gone from chatbots to being more of a helper, to becoming a partner, and now it's more of a proxy. And some recent things we've seen: AI agents are obviously the big hot trend, basically replacing your marketer and your salesperson. There are AI tutors now, which I think Khan Academy has released.
Speaker 1:It's basically helping out students, but is it too much? Is it removing critical thinking?
Speaker 2:That's the question there.
Speaker 1:Yeah. If AI can write for us, decide for us, and remember for us, do we still need to remember or decide or create ourselves at this point?
Speaker 2:Exactly. I was thinking about it from an evolutionary standpoint: we're organisms wired for efficiency, and that efficiency drives a bit of laziness as well. If we go back to caveman times: are we going to cross the long plain with the saber-toothed tiger just to get to the woolly mammoth over there? Or should we get the woolly mammoth right here? It's right next to us.
Speaker 2:It's easier. We just go for that. Less energy. Everything's about energy conservation, and I think we need to look at it in terms of cognitive energy conservation too, because our brain is actually an energy hog.
Speaker 2:I didn't realize it, but the brain uses something like 20% of the body's energy for just 2% of our body mass, which is pretty crazy. So we've evolved to optimize energy usage by making those kinds of decisions. And one of the things, from a cognitive perspective, is just trying to reduce cognitive load. When we don't have to think, we tend not to, or we tend to fall into different categories of thinking. We naturally form habits, or really shortcuts, that are more heuristic, that rule of thumb, to avoid thinking through every single solution.
Speaker 2:We're also designed to avoid effort unless there's really a clear payoff. So there's a categorization of thinking that was popularized by Daniel Kahneman, the Nobel Prize-winning psychologist: system one and system two thinking. System one is that heuristic thinking, the rule of thumb we develop, like brushing your teeth. You do something without incurring too much cognitive load.
Speaker 2:Then system two is the slower, deliberate, intentional thinking, where you really have to analyze something and it takes a lot more effort. And I think that's where we're seeing a potential problem today, or something we need to recognize: we're starting to shift away from that system two thinking and cognitively offload it to AI, staying instead in a light system one mode. What are the effects of that? That's where I think we can start: by looking at different generations and how it's affecting them, and then touching on it in different areas throughout this podcast.
Speaker 1:When you think about it from the different generations, you've got your Xennials, is that right? They grew up with basically a pen and a modem. Generation X, I guess.
Speaker 2:It's that sweet spot between Generation X and millennials. I think I'm a Xennial, actually. I think I just hit that Xennial window.
Speaker 1:Yeah, I'm there too. But I'm very immature, so I think I'm still a Gen Zer. But I agree, we grew up right at that age, before all this screen time and everything.
Speaker 2:And we chatted about it before: that analog era, straddling analog and digital.
Speaker 1:And it's funny too, because my kids love that stuff. They love buttons and physical, retro-style things. They appreciate that and think it's cool now. Yeah, I mean, the AI natives, anybody born in the 2020s, are gonna grow up assuming ask the bot is normal. Absolutely.
Speaker 1:And I think one of the things, whenever we're growing up, even with your own kids or somebody you're close with: when you ask, Hey, how do I do this? the answer is, Hey, figure it out. It's that whole thing of struggling through the task and figuring out how to do something. Now when my kids say, How do I do this? I'm like, You have the power of an omniscient being in your hand.
Speaker 1:Before, we had to go to the library and look through things. You had to go through the effort of trying to figure something out with a much more difficult process, but you learned a lot more by struggling through that task. Now they can just pick up the phone and look up YouTube videos and online stuff, or ask AI. The value of that journey, and the speed at which you get these answers, makes knowledge almost a commodity. If you know a lot of things as an individual, people aren't as impressed, because it's like, oh, well, I could just ask AI and basically have more knowledge than you do. That's the kind of thing where I think it's gonna be interesting to see how it plays out.
Speaker 2:And what are the consequences of not having that? There was an achievement in every single project or task we took on back then, before the internet. If you had to sort something out in the evening and the library was closed, you really had to figure it out. Maybe you had an encyclopedia set at home, but if you didn't find the information there, you had to think about it. You had to create your own. In UX we're always talking about mental models and how powerful they are for users, but we were doing exactly that: creating our own mental models of things, where you could analyze and had the time as well. There were fewer distractions, so you could refine your opinions, mull over the things you were working on, and keep refining.
Speaker 2:There was so much to that. And eventually there's an achievement where you're like, I thought of something new, or I did something different there. Whereas with the speed now, I look at my son, and his first thought is, okay, well, let me speak to that ChatGPT on your phone. He's graduated to his own phone now, but he's like, I want to ask the AI something. It's just completely different. Yeah.
Speaker 2:It's a great tool, but how much of it is a crutch as well? And I'd love to hear from the listeners: leave comments on how you rely on it. If you have kids, do you feel these are tools that are helping them learn? Or are they just turning them into regurgitators of prompts: tell me how to do this, then start reading it without really taking it in. You're like, I'll just go back there if I need the rest of the answer.
Speaker 2:Or you haven't consumed the answer properly. You're like, Yeah, got what I need. Moving on. Not storing any of that. It was interesting as well.
Speaker 2:I was chatting about this topic with a good friend of mine over the weekend, and he said that just that morning he'd been having coffee and overheard some medical students talking. They must have been quite far along in their studies, but they were saying, no, AI is the best thing, because I just get ChatGPT to write my thesis and I just Netflix and chill. And you think: one of the most crucial parts of your education is developing a unique opinion on something, where you define something unique, and that's your achievement, and that's how you learn. If you're robbing people who will hold quite critical roles in society of that, what else are you taking away? What else are you shortcutting, so that those people won't learn as well and won't be able to serve as well?
Speaker 2:Or do you see it another way? Is it something that's just allowing them to speed through and get where they need to be quicker?
Speaker 1:Yeah. Again, there are tools for students, and check out our earlier podcast where we interviewed a teacher about AI, how their school is thinking of integrating it, and how they're using it. But the thing is, I have three kids at different ages and stages of their lives: 17, 15, and 12. One's going into college, trying to figure out what he should do. Another is just starting high school, and the other is in grade school.
Speaker 1:So they're all using AI, and their teachers are at different levels as to when they think they should introduce it. But I have friends whose kids are graduating college and basically saying they're unemployable, because they don't know how to use AI and companies are using AI to replace the introductory roles within an organization. So on one hand, yeah, I agree with you. I remember going through engineering school, and our lab professor, man, he was tough. Every time we asked, he'd tell us: you're engineers, figure it out.
Speaker 1:That's your job: learning how to learn, and how to figure out tasks you don't know how to do. That connecting of the dots is so insightful. It's pattern recognition; that's what makes us really smart as human beings, because we can recognize patterns. I struggled in math and in control systems in engineering, but when something clicked, I was like, wow, that is absolutely amazing. Your eyes are opened to a whole other world of things.
Speaker 1:And are we taking that away from people? You know, it's like a calculator: I don't need to know long division off the top of my head because the calculator's there, so why should I learn it? And is the same kind of logic happening with kids, where they're like, well, why do I need to learn that when it's just right there? I can use my brain for something else.
Speaker 1:And so we get into those things. Sure, you can find the answer in seconds, but are you able to ask the right questions? Is that cognitive offloading helpful or harmful? And you know the Google effect: as Xennials we went from saying, Hey, just Google it, to what we have now, which is Google it on steroids.
Speaker 2:I still think back to the early days of Google, when you wouldn't have the AI answer right there. There was a bit of an achievement in saying, well, I need to look at that site and that site and that site, piece it together, come to a middle ground, and then find my answer, you know, using some critical thinking. But that was the start of it.
Speaker 1:Yep. In the early days, it was literally like having a digital library. You still had to go through things. You still had to read.
Speaker 1:You had to research. So there was still effort; it was just easier, because you didn't have to go to a physical library and you could find things faster. Now everything is basically one prompt away. So are we building a hive mind, or are we losing our minds?
Speaker 1:Yeah.
Speaker 2:That's it. The whole concept of cognitive offloading is interesting: how we shift mental work onto external aids, like the calculator or the spreadsheet you mentioned. Another interesting one was GPS, where they actually did studies afterwards and found that relying on GPS erodes your spatial reasoning, your navigation ability. So people who are used to it, when they try to return to navigating by themselves, are like, I don't know how to do this.
Speaker 2:I'm lost without this GPS. And I think that's where we've got to say: all these AI systems we're now using to reduce cognitive load, what is the effect of that going to be? Is it helpful, or is it really hurting the whole species? I think there are two sides to it. But you have to wonder whether, over time, it comes back to that system one and system two, where system one is the quick thinking and system two is the labored, concentrated, analytical thinking.
Speaker 2:Is it gonna make us less practiced at that, where we go, I don't really wanna go there, this is putting on cognitive load, not interested, I wanna defer that? And is it then gonna make us more reliant on whatever tool that is, mainly AI now, on that tool's output?
Speaker 2:It's obviously gonna make us faster, but is it gonna make us more error-prone as well, if the tool fails to give us what we need, gives us the wrong data, or gives us data that's biased in a way that's going to affect us?
Speaker 1:I'm curious. In video games, cheat codes let you skip months of grinding to unlock special abilities instantly. Have you ever wished for something similar for your software challenges? What if there were a way to instantly access twenty-plus years of specialized expertise instead of developing it all internally? What if you could solve in weeks what might otherwise take months or years?
Speaker 1:Would you agree that most organizations face a steep learning curve when implementing new software solutions? At my company, IIIMPACT, we serve as that cheat code for companies looking to transform complex software into intuitive experiences that users love and that drive real business results. Would it be valuable to explore how this might work for your specific situation? Visit iiiimpact.io for a free strategy session focused on your unique challenges. I'm just gonna make a comment.
Speaker 1:I think I'm the one who always gets lost. If I come out of a building, I'm like, okay, which way did I just come from? So I love GPS, and I have to have it. Otherwise my wife is asking me where I went, because I get lost. That way I can focus on actually doing the task, like getting there, versus having to figure out the route.
Speaker 2:So now it's taken care of. You get there, and even with the thinking you do need to do, you're just a really passive participant. Don't ask me, just ask this thing here. It got me here, and I'm all good with it.
Speaker 2:And what I was trying to describe is what I think this is taking away from us. The best term I could coin for it was micro-accomplishment deprivation. Think of all those little achievements: if you're working on a report, or a project at school, or even coming back to the thesis, you make a series of micro-accomplishments every few minutes. You're having to think about how you structure something, what you want to say in that presentation, in that writing assignment, in that paragraph. You're continually having these wins on the way to the superior product you're building. We see it in UX as well.
Speaker 2:If you're building a design, you've got a whole series of micro-accomplishments on the way to a great design. You start building the base, and all these things are clicking. But what does it mean when those small decisions are just being offloaded? You think, well, I could decide what I want to write in that paragraph, but I could also ask AI what it thinks would be best. That slowly, I think, starts wearing at your mental strength.
Speaker 2:This is my thought: I'm having to stop myself and say, No, I don't actually need to ask it for that. It's more difficult doing it myself, but I know I'm still building that resilience, that mental processing strength, which would otherwise be eroded. Because I know if I just keep deferring, it's going to get more and more difficult to make decisions myself. And I think that's one of the biggest things we're sacrificing with AI. What we need to be conscious of with younger generations is how to stop that from being eroded, and to make sure they can keep accomplishing things themselves, even if it's those micro-accomplishments.
Speaker 1:Yeah. And I think this is a good point to introduce the three potential futures that could play out as we evolve with AI. The first option is co-evolution: humans plus AI in partnership. This is where you still introduce curiosity and discernment, taught alongside prompt engineering. You can give people harder things to solve, especially at school: give them a more complex task or project, leveraging AI for the more time-intensive stuff, and have them do the critical thinking of putting things together.
Speaker 2:Yeah, absolutely.
Speaker 1:That's what they should do, because you're able to do more. It's like calculators and long division: with the calculations done automatically, you can build much more complex accounting spreadsheets and the like, faster, while you figure out the strategy.
Speaker 2:Yeah, exactly. I don't know, Makoto, if you've heard of automation bias.
Speaker 1:Possibly.
Speaker 2:It was interesting. It came up when I was doing the research, and it's the tendency to favor suggestions from automated systems and to ignore contradictory human judgment or data. So as part of this co-evolution, you think, okay, how do we handle automation bias as well? Because you can't just repeatedly choose that system one thinking backed by the machine, going the easier route with the machine as opposed to coming back to those micro-accomplishments. It's about deferring back and saying, all right, we can use it as a great tool, but how do I keep my own independent thoughts and use it as a support, not a source of complete truth and direction?
Speaker 2:Just wanted to add that in there, because I thought with co-evolution that's something we'd definitely come up against and have to think about as we move forward.
Speaker 1:Definitely. Now, the next one is atrophy. This is where deep knowledge and critical thought start to erode, and generative sludge becomes the default output. There's a lot of AI slop on social media being pumped out. Google's mentioned before that they have an algorithm to try to combat it, but whether it's X or Google results or LinkedIn and YouTube, it's just filled with it, and you have to discern what's real, what's not, and what's actually useful.
Speaker 1:Take, for instance, what I saw the other day: somebody created an AI bot on X saying, Hey, learn how to make $100,000 a month by starting a YouTube channel today. Of course, I've been posting myself, so I'm like, you can't even start monetizing or earning until you have at least three thousand hours of viewing. And it's just AI slop. Everybody's like, oh yeah, send me information, send me information.
Speaker 1:So people are probably buying some crappy course that was auto-generated by AI, thinking they're going to earn all this money, but there's no way. Not to toot our own horn, but we're getting close to 100,000 subscribers, which is amazing, and we're about 65% of the way there before you can even start monetizing. People just get gullible about what's out there, and you don't know what's real and what's not unless you do real research.
Speaker 2:And this all falls back into everything becoming quicker and easier. Then I go back and feel like some really old guy who's like, oh, back in my day, that was hard work, we worked a full day. And I feel like everyone's attention spans are just getting shorter and shorter, and these short videos are driving that. So the idea of hard work is gone, and people are looking at those quick wins.
Speaker 2:A $100,000 a month? I mean, where do I sign up? That sounds great. I wanna do it in the next two weeks. As opposed to asking, what was your journey? How did you get there?
Speaker 2:And coming back to the point about deep knowledge and eroding critical thinking, I think we've got to look at the outcomes as well. What is that going to leave us with? Are we going to have a generation, or multiple generations, with a diminished ability to reason properly or reason independently? I mean, they could reason through AI, but it's like: just one second while I check... okay, that's my opinion, after I've consulted with AI. And what's it going to mean for spotting bias, or, coming back to critical thinking, for evaluating conflicting evidence and saying, well, this doesn't seem right to me? Having that internal thought process.
Speaker 2:And there was a term someone coined, fast-food cognition, for drifting towards a society of instant answers without reflection, which is worrying, because that's what it is. To some degree, that's what I saw with my son: got the answer, moving on. There's no reflection. There's no moment of, alright, I'm gonna digest that and think about whether it sits well with me, whether it fits my beliefs. It's just, okay, got the answer, let's move on. There's no depth there.
Speaker 2:So we've got to be careful that our education systems don't utilize AI in a way where we're training people as AI operators rather than as independent thinkers. This is what you need to do to operate the AI, and the AI will tell you how to get where you need to go and how to answer all the questions once you're there. That comes back to the GPS analogy I mentioned earlier: if GPS erodes our ability to navigate, how does this start eroding our ability to think? And all the other things as well, like a reduction in the way we collaborate and empathize with people.
Speaker 2:You know, what happens when we're not just looking to the AI for data, but we're sacrificing our social cognition: things like negotiation, empathy, teaching? All of that gets outsourced to AI as well. So you get these weird social interactions that are all mediated by AI, which means what? Reduced interpersonal problem solving and reduced empathy. It becomes, couldn't care less.
Speaker 2:AI will tell me whatever I like. You know, cheer me up, AI.
Speaker 1:Yeah. Or it's like, instead of have my lawyer talk to your lawyer, it's have my AI talk to your AI.
Speaker 2:Exactly. Yeah. Just spin up a fancy video or a meme or two for me so I can laugh. It's interesting. And those are some of the darker sides.
Speaker 2:You know, there's so much positive you can spin. But yeah, it's just scary. Will we become functionally dependent and lose a lot of what we have?
Speaker 1:Yeah, and I think that's where we get into our third future scenario: hybrid. This is where AI literacy becomes part of the school curriculum, and maybe analog thinking becomes a luxury skill. It's almost like a job interview where you're put in a room with no internet access, just a whiteboard and a problem you have to solve, no prompting anything. That could be one of the most valuable and highest-paid skills in the economy of the future. Otherwise it's just another episode of Idiocracy 2, the movie.
Speaker 1:Yeah, I mean, it's a scary thought, that analog thinking becomes the premium skill in the future.
Speaker 2:And I think that's where we need to look at things like cognitive amplification: where can we use AI to actually augment and amplify, or, what's a better word, refine our system two thinking rather than completely replace it? How do we get it to the spot where it gets people to think more broadly, to explore counterfactuals faster, and to really democratize access to expertise? And I think it needs to be done in such a way that it's not providing you with the answers, but providing you with ways to think, and with points of maybe conflicting views. Where you could say, look, I'm looking for details on this, and instead of giving you the perfect answer, it says, well, based on who you are, these are the pros of it and these are the cons of it.
Speaker 2:And it leaves you with more information to make a better decision, versus saying, this is the decision it's made for you, and you thinking, great, that seems like a good decision, I'll go along with that. There are probably ways we can emphasize what I believe is called metacognition, which is thinking about thinking. That's really what I think it needs to do: knowing when and how and what we should be offloading, and what we shouldn't. Just some thoughts on where that hybrid would really work, if we wanted to do it.
Speaker 2:But again, coming back to the evolutionary standpoint, in my opinion we're wired to take the easy way out. So it's gonna have to be those people who are dedicated enough to say, no, I'm keeping my own cognition and just augmenting it with AI. I'm not outsourcing it.
Speaker 1:I think you have to, almost as a teacher or even a researcher, understand when it's important to introduce it. It's like the decision to give your child a phone or an iPad: when do you do that? When does it start to diminish rather than enhance their thinking? You don't want to introduce it too early, because then it'll stunt their cognitive growth. Yeah.
Speaker 1:I mean, hopefully that's something that will evolve for the better rather than the worse as things move forward. And to close on this: technology doesn't erase old skills, it changes the ones that are valuable. We are heading towards a world where deep thinking is becoming optional. So there are gonna be people who choose to still do it, you know, to still be bored.
Speaker 2:Funny time to be living in. Yeah, I think I'm still gonna think for the next few years.
Speaker 1:Maybe I'm done thinking today.
Speaker 2:Yeah. And I was thinking, where does it go from here? Because our entire journey with UX has been about reducing friction. With everything we've done, you wanna make the user's journey easier. So at what point does something that's reducing friction actually become detrimental? At what point does it become, well, we're just going off the rails now, and you're on your own?
Speaker 2:And I kind of think of the inventor of infinite scroll, who's like, damn, I wish I hadn't done that, because it's eroded people. It's made things so much more addictive and engaging that you think, wow, that was something that was created and we can't really take it back, because it's such an effective pattern.
Speaker 1:There are apps to stop you from doomscrolling.
Speaker 2:I know, I know, but again, like choosing to think, you have to actually install those. So I guess, to close up, we ask: at what point do we say enough's enough? We should be augmenting our cognition, not offloading it.
Speaker 1:Agreed. So I think this is a good place to end this podcast. Great topic. And if you have any comments or ideas on what you think the future should be, leave them below, whether you think them up yourself or have your AI write them. But thanks for joining in again for another episode.
Speaker 1:And we appreciate everybody who's been liking and subscribing to our podcast. We're hoping to hit that 100,000 mark before Christmas. It's been an exciting ride so far. So tune in again for the next episode, and take care, everybody.
Speaker 2:Yeah. Thanks for listening.
Speaker 1:All right, have a good one. Have you ever played a video game and discovered a cheat code that instantly unlocks abilities that would have taken months to develop? I'm curious, what would it mean for your business if you could access a similar cheat code for your software challenges? What if you could bypass months of trial and error and immediately tap into proven expertise? You know, I've noticed that many organizations spend years developing specialized software expertise internally, often through costly mistakes and setbacks.
Speaker 1:Would you agree that's a common challenge in your industry as well? At my company, IIIMPACT, we function as that cheat code for companies looking to transform complex software into intuitive experiences. Our clients gain immediate access to twenty-plus years of specialized knowledge and the experience of launching hundreds of digital software products across many different industries, without having to develop it all internally. You might be wondering how this actually translates to business results.
Speaker 1:Well, companies we work with typically see go-to-market times reduced by up to 50%, their overall NPS scores climb, and their product development teams' efficiency significantly improve. Instead of struggling through costly mistakes, they accelerate directly to solutions that work. This is why organizations from startups to the Fortune 500 partner with us for years. We consistently help them solve problems in weeks that might otherwise take months or years. If you're responsible for digital transformation or product development, wouldn't it make sense to at least explore whether this cheat code could work for your specific challenges?
Speaker 1:From boardroom ideas to code, this is what we do best. Visit our website at iiiimpact.io. You can see the link below to schedule a free strategy session. It's just a conversation about your unique situation, not a sales pitch. And you'll walk away with valuable insights regardless of whether we end up working together.
Speaker 1:Thank you.