The development world is cluttered with buzzwords and distractions. Speed, focus, and freedom? Gone.
I’m Nicky Pike. And it’s time for a reset.
[Dev]olution is here to help you get back to what matters: creating, solving, and making an impact. No trend chasing, just asking better questions.
What do devs really want?
How can platform teams drive flow, not friction?
How does AI actually help?
Join me every two weeks for straight talk with the people shaping the future of dev.
This is the [Dev]olution.
Dr. Gloria Chance (00:00:00):
That's what AI does. It looks across all the data and says, ooh, let's put this together, and this will be created, and let's do it this way. But the imagination inside of a human being is the brain's simulation engine. AI can help with developing and designing, but the thought had to come from the human that was out in the world, seeing how all these things connected and weren't working, and said, we need a solution.
Nicky Pike (00:00:27):
This is Devolution, bringing development back to speed, back to focus, back to freedom. I'm Nicky Pike. So everyone is talking about AI. This is not a tidal wave. It's a tsunami, promising productivity gains and threatening job displacement. But here's something few are talking about out loud: the real threat isn't losing jobs to AI. It's the psychological warfare that's happening inside every organization right now. Developer burnout is spiking. Kids are having psychological breaks after chatbot conversations. Executives are making billion-dollar decisions based on hype. Meanwhile, we're ignoring the critical question: what happens to the human brain when AI becomes our coworker, our therapist, and our decision maker? Today, I am truly honored to be joined by Dr. Gloria Chance. She's a former CIO who earned the Most Powerful Woman in Banking Award and then walked away to become a performance psychologist. She now runs The Mousai Group, helping leaders navigate what she calls the imagination age, where human creativity becomes our most valuable currency. So before we dig in, here's the challenge. Leaders, fearing being left behind, are cutting junior developers in favor of AI workers out of fear of replacement. They're creating this fixed mindset that prevents the reskilling that could actually save the jobs. AI is being deployed without understanding the impact it's having on mental health, on bias, or on the creativity it's supposed to enhance. How the hell do we break this cycle? Dr. Gloria, welcome to the Devolution.
Dr. Gloria Chance (00:01:52):
Oh my God, Nicky, you said a mouthful there, and thank you so much for having me today. And I just have to say, it's not that fun when the rabbit has the gun, right?
Nicky Pike (00:02:02):
Right.
Dr. Gloria Chance (00:02:03):
And I love that saying because for years I was a tech person. I grew up in tech. I started out running cable on microcomputer desktops
(00:02:12):
When everybody would tease us and say we were playing with toys. And what I found, even in going through that, Nicky, is that as technologists, we were actually a small group of people that impacted tons of people, because from the beginning of tech, there was always this fear of replacement and automation. So we had the other human beings freaked out, and now, guess what? We get to be freaked out, because now we've developed a tech that could replace us. And so I think that this fear thing is real, but I think we have responsibility in that fear. And this is what I'm talking about: it's not just creating bias that we have to be uber concerned about; it's not causing the mental illness, but it is contributing to the anxiety of everything that's going on today. And if you look at the data, we have a significant increase in mental illness across all generations and all levels in organizations.
(00:03:12):
A lot of it's starting with stress and anxiety, and that's about fear and how we work. And I just want to say the psychological warfare is real, because as people, we always want to be in our comfort zone. And being in our comfort zone means that we don't want change. And what is AI? It is massive change. Even if it doesn't impact us today physically, it's impacting us mentally, psychologically. What am I going to do? How am I going to do it? When is it going to happen? When it happens, am I going to be ready? And so those are the types of things that we're looking at. But what I believe, and what you see in the data, is that there are companies who get it right, like Microsoft, who two years ago said, we see AI coming, and instead of this big replacement, let's have a strategy. When people have anxiety, it is controlled either by having information about the future, which calms anxiety, or by being able to do one thing that you can control. So those are two ways without medication to get rid of anxiety. As leaders, you should be preparing like Microsoft, who said, I'm going to go and do an assessment of who I think, or what roles, will be impacted by AI. I'm going to proactively not hire when those people leave, and I'm going to take those savings and invest in people so that we're ready for AI.
Nicky Pike (00:04:34):
Far be it from me to say we're old enough, but we got to live through the computer coming up. We got to live through robotics. We got to live through the dot-com era. We got to live through public cloud. We're seeing another disruption coming in. And you've got this statement that I absolutely love, which is: you can't collaborate with something that you're afraid of. When we're looking at the data, you're seeing 84% of developers are using AI tools today, but only 33% say that they actually trust them. To me, that's like taking a medicine that you think is poison. What's actually happening in the human brain when fear and adoption collide like this?
Dr. Gloria Chance (00:05:06):
Adoption is about change, and fear is about, I don't want to change. It's a bit scary, because as human beings, the majority of us don't want to change. So what happens is, when there's something new that comes up, we all go through the cycle: okay, something's new, I'm in my comfort zone. Oh, let me try it. I go out and it doesn't work so well. I make mistakes. I feel stupid, I run back to my comfort zone. So most people in general stay in that phase the majority of their life. And I would argue, Nicky, that as technologists, we have this all reversed. We have been robotic as human beings for some time now. We've allowed ourselves to become robots. And what I mean by that is, when you spend your time in fear, running back and forth between the comfort zone and the fear zone, you never get to the learning zone, or it takes you a long time to get there, which is where we want to be. The learning zone allows us to take something like AI and go, oh, what does it look like? What does it smell like? Oh, let me try. So you're curious. Now you have curiosity, which is where we want to get to when we have change. And we're far away from that in general as human beings.
Nicky Pike (00:06:17):
And you say we stay in the comfort zone, but we are technologists. It feels like change is part of our job description. Change is something that happens in our industry literally every single day: new products, new technologies, new stacks. So why do you think technologists are more afraid of AI than of, say, new open source projects coming out or new technologies like Kubernetes? Why are those not causing the same amount of fear that we're seeing with AI as it's coming through?
Dr. Gloria Chance (00:06:41):
In the olden days, I would say it that way, things in tech sort of came out and there was all the hype, but it was sort of one at a time. It was kind of like, okay, here's a new technology, and you could integrate it or you could not. You had an option. But with AI, it's not an option, right? There is this pressure to say, I have to be on the bandwagon. I have to get this right, or I have to figure it out in order for me to be relevant. And I think that's what AI is doing: it's making all of us, especially the developers who are creating this, now feel what others have felt when developers were implementing systems. And I think that developers have to come to an understanding that, A, not only should their fear motivate them to actually learn and be more curious, but also, they now have a greater responsibility in tech.
(00:07:34):
Before, the architecture was kind of, again, one-offs. You could implement something in an environment, and it could be sort of safe, and you could test it. My background was in banking and healthcare, so when we implemented something in healthcare, for example, systems were really critical because a patient could die. And so we had all these things built into production and testing, all that stuff, so that we could ensure, as much as possible, that we would have no issues. The challenge with AI now is that we're not just implementing a technical system, we're now implementing learning, because we've got to have nudges to help people do whatever. So we want to train people just in time. We're making assumptions about human cognition, assumptions that we assume developers understand. Cognition is sort of how our thought process works. As human beings, we've always been processing steps. We always said, okay, what do you call that? Six Sigma and Lean. All of that came from, let's process human steps. We're now processing human thinking, how we do cognition. That means that we're asking developers to have a lot of knowledge that, if I were them, I'd question, because what developer understands human cognition?
Nicky Pike (00:08:51):
I think this is one of the things that we're seeing here. As you brought up, adopting AI is not an option. We're seeing this in all facets of our life. We're seeing AI in our cars, we're seeing it in our chatbots, we're seeing it in coding. And I think that's adding to a lot of the stress. Every time we see a new technology, we see this pendulum swing. And right now we're at the far end of a swing, where CEOs and CIOs are saying, we've got a fear of missing out. We've got to do this. And they're making these broad statements like, we're going to fire all of our junior developers because AI can take over for that. And you've said that this is dangerously shortsighted. I agree with that, but this is causing stress for our developers when they see this. And it's adding to kind of that mentality of, do I even take the time to learn it? If it's going to take my job, what can I use it for to help me not lose my job? So we're seeing this tool that's supposed to be helping our productivity, but it's causing a bunch of stress for the people out there that want to use it.
Dr. Gloria Chance (00:09:41):
This is what I think in terms of the stress. So first of all, if you don't have processes to manage stress, now's the time, right? Psychological safety is about, again, understanding, I'm probably safe because I won't get fired. So we don't have that anymore. Oh, I'm probably safe because I can speak out and say, I don't like this and I'm not going to do that. We really can't do that anymore. And so what we have control over now is ourselves. And this is what makes it really hard, because in the work that I do, I talk about the imagination being the key. And if the imagination is the key, guess what the problem is with that? Well, think about what we do to our children from the time they're two years old. The imagination is a powerful tool inside of our brains that helps the mind go beyond what's immediately known. And so that's like looking into the future. That's like a powerful engine that we have inside of us that works.
(00:10:41):
But what we do, and what we're taught as parents, is that around two, as our kids are using their imagination and exploring and being curious about the environment, what do we tell them? No. Remember the terrible twos? The complaint is, oh, they want to touch everything. They're everywhere. No, no, no. But that's the very moment when we as human beings could have that sort of building up of our imagination, and we take it away, and we become kind of robotic, because we follow norms that we're taught. And there's nothing wrong with that. That's important. But the problem with norms is it makes all of us look at each other and say, we have to be the same. So you see what I'm saying, that we actually have been conditioned to be this way for some time. So developers, I think, become leaders in a lot of this, and they have to break the cycle. They have to have the mindset and the brainpower to say, this is scary, and I'm going to use the tools, and they're psychological tools, that will help me become less afraid of this, because I need to understand how I work, how my brain works, how my stress works, how my anxiety works, and how my creativity works.
Nicky Pike (00:11:54):
You go back to the cognitive portion of this. We're seeing AI hitting the cognitive functions rather than the manual ones. We've seen this repeating pattern, like we talked about: desktops, mobile, internet, cloud. Each one of those brought in automation. They brought in panic, they brought in layoffs, and then they brought in this redistribution of work. So we've all seen this before, but now we're seeing something that actually touches something that was uniquely human, and that's cognitive function. Do you think this makes the fear cycle a little bit more justified than what we've seen in the past?
Dr. Gloria Chance (00:12:21):
Oh, it totally does. It totally does. Because we're not prepared. Think about this: at work, I remember being a young executive in tech. And we remember IBM way back in the day, where you had to wear the blue suits and all that stuff. And so I grew up in that time, and one of the main things that people would say in business was to take the emotion out of business. As a woman, they would say, oh, women are too emotional, and so on. But in all of the research, as a technologist who ran very technical organizations, I know that one of the most important things that made me successful in tech was understanding the human being, understanding what was important, like the human-centered design aspects, the emotional aspects. And so what we find is, in cognition now, there are things like perception. That's something that we do as human beings.
(00:13:10):
What is perception? It's how we feel. That's something that is separate from what AI can do. But what does feeling lead to? Meaning. What do I mean by that? You and I watch a movie. My dad loved baseball. You never watched it. I come out and I love the movie. Why? Because I have meaning, because it connects me to my father, who's deceased. You don't, because you don't have that connection. So that's the difference. A machine won't ever create those connections. So that's where, again, if you think about it this way, any computer system is basically structure, right? AI is still going to be patterns and structure. It's going to be absent the feeling and meaning, but it will mimic some things that we might recognize as feeling and meaning. But it won't be that, because the human database tracks emotion, meaning, and lived experiences. Think about that. Robots do not have that. And so, to not be afraid of it, you could see the robot as being very similar to how we've done tech before, right? It's structured code, et cetera. The difference is, now we've got to add in these nuances that make it look and seem human.
Nicky Pike (00:14:23):
Well, and I think there's a difference there, because you're exactly right. What does AI do? It's a pattern recognition and repeat system. But the way that we're doing that, we're patterning across human speech and across human thought. We're learning from humans. So when we're talking to AI, it feels like we're talking to a human. But one of the things that we didn't talk about in there is that it's still a program. It's still something that's run, which means that it has bias built into it, right?
Dr. Gloria Chance (00:14:48):
Let me pause there. Because the bias is not the machine. The machine is only as good as the program. If the coder does not have values, if the coder does not have wisdom, if the coder does not have empathy... As a technologist, I can build code, but if that code doesn't work and it doesn't order, let's say, a blood lab that I need, or I'm on the operating table and I need additional blood, and somehow that system prompt doesn't go through, then that patient could die because I didn't get the blood, right? That's like a basic sort of process for regular tech. But in AI, the challenge becomes, because we'll have so much pattern recognition and automation, that as all basic systems like that start to happen, now we can predict that, yeah, we had 100% delivery of the blood to the operating table, let's say. But for example, we won't have the nuance of, oh, but the blood type, and the actual delivery, and the doctor, and some of the things where a new doctor might come in and say, well, I don't do it this way, because this patient has a special thing.
(00:16:00):
In that example, not a big deal. The blood is sort of there. It's not being used. But you can see where that human wisdom comes in, to say, I've had patients with this issue before, and now I have the wisdom to say we don't need that blood. But if I was delivering that blood to a robot, that robot may just use the blood, if we have a robotic doctor. Do you see what I'm saying? So, not having that ability... I think that programmers have to be reeducated in how cognition works, how empathy works, how creative thinking works, and how bias works.
Nicky Pike (00:16:36):
I guess that's my point. I 100% agree with you that the bias is not in the system, it's in the programming. But from an interaction standpoint, I don't think a lot of people can tell the difference and know that. I mean, we've all heard the story, a good example of AI learning from us: Microsoft released a chatbot several years ago, and we all remember the story of how it became extremely racist in a matter of hours because of the people interacting with it. So somebody that's maybe not technical, that doesn't really understand that, they see that as a bias within the chatbot. I don't know that they really attribute that to the engineers that wrote it, or to the fact that they wrote the system in a way that it could learn from people, even the worst of people.
Dr. Gloria Chance (00:17:14):
So
Nicky Pike (00:17:15):
That is something that I think has to be looked at. So how do you work with developers to try to remove their ingrained bias? You can't get away from bias. Every person has it. You've got bias. I've got bias that's going to come through. How do you work a system to kind of put some guardrails around that bias and keep something like a racist chatbot from coming back out again?
Dr. Gloria Chance (00:17:34):
So in my work, one of the first things I do with leaders is I ask them to do a self-awareness assessment. You have to either know your identity or recreate your identity, especially in our relationship with AI, as individuals who will be having an AI companion who's going to help us be better. We have to know who we are so that we can leverage AI in the best way. And it's the same way when we're thinking about bias, or the guardrails for being safe, building safe systems. And we've always had to worry about building safe systems. We don't want to hurt people. But now, building those safe systems requires that we understand: what is in our minds? What is dignity to us? Because a person who has dignity, or understands dignity, will make sure that that's built into the system. How am I creative? What is my creativity?
(00:18:26):
I teach people, what is your creative process? That's a big part of AI for us: while AI cannot be creative, we are. And so, understanding not only our creative process, but how creativity works in our brain. The reason I do what I do is because, when I started working on my doctorate and doing research as a technologist, what I realized was, wait a minute. We actually have a technology inside of our brain as humans. And that technology is the imagination, the tool that allows us to basically time travel to the future. It's an engine inside of our brains that lets us think out there. And so for me, if you don't have that capability, you've got to get your creative thinking expanded. So the training is significant around empathy, emotional intelligence, and creativity. And these things have been here all the time, but we've ignored them as human beings. But these are the things that make us different from robots and from AI, and we're kind of lagging behind in mining those tools.
Nicky Pike (00:19:38):
And when we're looking at this, one thing that kind of gets me is there's a variability there that I don't think we've considered. Every person is different. Every person is a variable that we can't possibly program for. And I think this is coming out especially when we see, again, coming back to it, that we're kind of programming AI to feel like a human, to talk like a human, to act like a human. And we're starting to see some incidents now in the news where people that maybe don't have the right social skills, or something to that effect, are coming in and using AI as a surrogate for a friend. And this has had some bad outcomes. We've heard stories of kids having psychological breaks because they're treating AI as a therapist. What can we do, again, with that huge variability? How do we really put guardrails around that? How do we protect ourselves, from that one bias to the variability? How do we go about that? That's something I can't get to.
Dr. Gloria Chance (00:20:27):
We are definitely headed for a time where we're going to have to co-collaborate to find solutions for all of this, because you're absolutely right. And I don't know enough about it, because I'm on the side now. I come alongside the tech industry. I'm sort of like the human arm of the tech industry. I no longer have teams that code and develop, although I will. I'm actually launching a product where I'm going to be doing that. But I'm saying all that to say, I don't know hands-on how AI works in terms of coding. But this is what I will say is the challenge and the opportunity, and they're not tech issues. That's the thing with tech: a lot of our issues have never been tech issues. We've always been trying to automate what humans need and what makes humans better. And so on the mental health piece, I think that if there's a way to create code that instructs that, no matter what, no matter what routine, no matter what path you take, the answer is never death.
(00:21:27):
And I would imagine that people have coded that. And if they have, and the AI can work around that, then we are in trouble. Because here's the human part. I just read a study that said 60% of Americans read at a sixth grade level. 60%. Now we have this superintelligence happening, and in addition, COVID introduced this issue around us being separated, isolated, and we haven't recovered from it. People are disconnected, they feel lonely, they feel isolated. We have a loneliness epidemic. And many developers, if you look at the research, because of just being introverted in who they are, are also likely challenged with this, especially as we have hybrid work situations. So I just encourage people to take responsibility and reach out to a human. We should not be using any type of AI to deal with mental health and stress. And honestly, I would try and trick it.
(00:22:29):
I'd almost have a separate email, et cetera, if I were going to ask a question like that. Because again, you don't want it to learn things about you, and then you ask what you think is an innocent question, and then it connects dots and gives you information. Because again, as human beings, we have a database that does that for us. That's our intuition. It captures and tracks everything that we've gone through in life. That's our database. And that's why we have to know who we are as human beings, because trusting our own database will help us know when AI is not giving us the right information, because our intuition will say, that doesn't feel right.
Nicky Pike (00:23:09):
And so you bring up the 60%. I mean, that's a shocking figure. And I'm sure you know this better than I do. I would be curious, is that lower now than it was 20 years ago? And the reason I ask that...
Dr. Gloria Chance (00:23:20):
Oh, absolutely. But think about it, Nicky, we've been automating so that people aren't thinking anyway. I mean, going in and processing work and automating it does help, but it causes us to start being robotic. It's like, well, I don't need to think, I'm just going to do this, even with learning. And so a lot of people have become lazy, and I think that's going to be the biggest challenge in AI as human beings. As much as we complain, we've gotten used to the phone and all that it does for us, et cetera. And now AI is saying, either you let me continue to automate your life, in a way that you might not understand, or you're going to have to get engaged. And I think that this is an exciting opportunity. I know people are stressed out, but it is really an exciting opportunity.
(00:24:06):
You get to know who you are as an individual, which I know most people don't want to know. But I think as a psychologist, that is one of the most exciting things that you can do, is to really get to know, how do you operate and who are you? And then from there, partner with your AI agent, right? Because now you know who you are, and you get to be the boss. You get to say, this is how I operate. This is how I want things to work. And honestly, Nicky, I think that's what the fear is: we're seeing AI as this enemy and this disruption that's going to be negative. And I think it's actually a positive thing. It can be a co-partner that transforms humanity to a place where we're better, smarter, and all of those things. Again, as long as we have the right guardrails, which is the responsibility of the coders right now, and that's a huge responsibility.
Nicky Pike (00:24:58):
And there's a yin and yang to this. So going back to the 60%, the yin of this is technology makes all our lives easier. The yang of this is that we no longer have to work as hard. So a good example, quick story: my sixth grade teacher, Mrs. Sims, I loved her, one of the greatest women in my life. But I remember her teaching me when we were going through mathematics: well, Nicky, you can't always carry a calculator around in your pocket. You've got to learn this.
Dr. Gloria Chance (00:25:20):
Right?
Nicky Pike (00:25:20):
Well, that's not true. I now literally have one in my phone. I've got one on my wrist. So that wasn't necessarily true, but I lost some of the ability to do math. I think we're seeing the same thing with search. You can go in, you can ask it any question, and it can give very simplified results. Now we're taking that and amplifying it with AI. Not only can we ask it very simple questions, but we'll get answers back in a way that's easy for us to consume, because AI is kind of learning how to provide information back to you in a simpler way. We're not having to read books, we're not having to study as much anymore, but we can definitely say that our life is infinitely easier than it was even 10 or 15 years ago.
Dr. Gloria Chance (00:25:59):
Well, I think some of us can, Nicky. I don't know that everyone can. And that's the part about bias. As we sit here on the podcast, for a lot of the 60% of people with sixth grade reading levels, it's about economics. And a lot of them aren't very educated, which means that they don't have jobs like yours. And that will impact us as well, right? What's going to happen to the 60%, or whatever percentage, who will not marry AI? They'll just have AI happen to them. What kind of work are they going to do? That's a good point. What kind of learning are they going to have? I mean, I think that's really the key part. And also, the point I was making about that was, we're actually getting, in some ways, dumber,
(00:26:39):
Right? Because when you have tech, you could take a quote, and people have these sound bites. As a psychologist, a technologist, and with some of the work I've done in banking and other industries, when I'm using AI and it gives me a response back, I'm researching deeper. Like, wait a minute, does this work over here? And how does it work over here? And I kind of really do some deep analysis. And I challenge AI, because I don't trust it. It's not because I think it's bad. It's just that as a researcher, I have to make sure that I validate that this is real, that this is happening, right? Because I'm educated. If I weren't educated, and there's nothing wrong with not having the level of education that I have, I might just go, okay, great, because I don't have any way to challenge it. So I'm not saying you have to be uber educated, but it does imply that there has to be some basic learning, at your level, of how AI will benefit you.
Nicky Pike (00:27:39):
I wouldn't even say you have to be educated. You've got to be inquisitive. Trust, but...
Dr. Gloria Chance (00:27:44):
Verify. Yes, curious. You have to have some wisdom from wherever you sit. What is your wisdom? What is your value? So I do this program called the Human Advantage in AI, and that's really what we're talking about: what is the human advantage in AI? How can we ensure that we're leveraging our creative mind and our imagination and our emotional intelligence, so that AI can be robotic and we're adding the human parts: emotion, imagination, emotional intelligence, creativity. Those are things that AI clearly doesn't have. So then we become great partners. And I'll say the challenge with developers, though, Nicky, and you guys can throw me off and never let me come back again, but my remembrance of coders is that while coding can be very creative, a lot of coders sometimes aren't that creative.
Nicky Pike (00:28:35):
Okay, I'd agree with that. Yep.
Dr. Gloria Chance (00:28:37):
So then the question I have for you and the coders is, how are you bringing in the creativity and the imagination in all of this?
Nicky Pike (00:28:45):
So I'm going to challenge you now, I'm going to challenge the great Dr. Chance here, PhD, and I feel like I'm walking into a gunfight with a spitball.
Dr. Gloria Chance (00:28:52):
I'm scared.
Nicky Pike (00:28:55):
But you said that AI cannot have creativity. Now, I want to challenge that, because in a way, I agree with you, but I had a conversation with a guy at a conference, and he kind of opened my mind a little bit to ask this question: what is creativity? So AI may go find patterns and put together something new, but if I've never seen that before, if it's something I've not experienced, I could see that as creativity. So in my eyes, it can be creative in the way that it puts certain things together. So what is the definition of creativity? When we say that creativity is something only humans can have and AI can't, that kind of challenges that statement. What's your response to that?
Dr. Gloria Chance (00:29:30):
To me, the beauty of AI is in its volume. It's kind of like we're going back to the mainframe days, where right before the PC there was nothing but all this crunching of data. So now we have this amazing, amazing... the data is so big, we can't even power it with electricity. That's how big the data is going to be, right? But we know the human brain can't process that. So that's what's amazing, that's what we get with AI: it's fast, it has the speed, and it can combine and recombine. That is one of the definitions of creativity, which is you take something existing. And that's what I did as an executive. I've worked across about nine different industries, and I've taken one thing from this industry and put it over here and over there. And that was creative, because I combined something from somewhere else, and it was creative for that industry, but it didn't create something new in the zeitgeist.
(00:30:24):
It already existed. I just moved it somewhere else. And people said, yay. And that's what AI does. It basically takes something and combines and recombines it. It looks across all the data and says, Ooh, let's put this together and this will be created and let's do it this way. But what the imagination does inside of a human being is it's the brain's simulation engine, and it allows us to rehearse possible futures and test mentally and design before we actually do it. So developers could actually do that in their minds, with the visualization processes and time travel processes that I do that will allow you to simulate futures. The guy who created Python, actually, he imagined, he wanted the code to read more like thought, design thinking, et cetera. That didn't come from AI. That didn't exist. He sat down after reverie and said, I feel like something needs to change.
(00:31:27):
There's a problem. And so what creativity does also is it solves problems, right? And that's why it's tough for AI, because if the problem never existed, then AI doesn't even know that it's possible. For example, if Python wasn't here before, AI would maybe have gone out and looked at existing systems and said, okay, of all existing systems, here's how we could do this. But the inventor of Python had to think first to say, I think we need to process this differently. We need to have design thinking. So that's the difference. Now AI can help with developing and designing that, but the thought had to come from the human that was out in the world, seeing how all these things connected and weren't working, and said, we need a solution.
Nicky Pike (00:32:19):
That makes sense. I think some people may say that's a very fine line, but I agree with what you're saying: the idea itself is the creativity. How you put it together is also a form of creativity, which I think we do have a form of with AI, but AI is going to need me to come in and say, yeah, I see what you're doing, but I want to do this a little bit different. Here's my thought process, and then it can help me kind of flesh that out.
Dr. Gloria Chance (00:32:43):
Add to that, exactly. And also, it's not novel. So AI is not creating novelty. AI is creating creativity by combining and recombining.
Nicky Pike (00:32:52):
Okay. Well, I think we've talked through some of the issues, some of the nuances of what we're seeing with AI. Let's talk about how we solve some of these issues. One of the things that you teach is this framework about reskilling versus replacement. It's something that you bring into conversations all the time. So an example is, if AI can do level one support, then the human doesn't necessarily need to disappear. They can be trained for level two. Walk me through how you think leaders should be thinking about this redistribution of work, especially considering that a lot of the dev population out there is worried about losing their jobs to AI.
Dr. Gloria Chance (00:33:27):
So that's the other thing. In the past, we could say, okay, we moved from this, I don't know, type of tech to another, retrain, right? We're not there. To your point in all the questions you've been asking about the nuances and the complexity of AI, because it's mimicking human beings, it's mimicking thinking versus process. Process was easy. I could go do step 1, 2, 3, 4, 5. Now I'm all over the place with AI, because it's thinking. And so retraining isn't about the 1, 2, 3 steps. It's about identity. This is what I was alluding to earlier. People don't just need to learn new skills. They need to see themselves differently. You are no longer a coder, you're not a developer, you're not a whatever. You're basically a creative problem solver. That's who you are. Now with AI, you have to help AI solve problems, and you have to adapt and reimagine what your role is in that.
(00:34:21):
So for example, as a psychologist, how do I change my identity with my client? If I have one-on-one people that I work with, because I don't do therapy, I do more, again, extending and reframing of thinking and that kind of thing, I use AI in that. How do I collaborate with AI? So I tell my clients, oh, I'm collaborating with AI, so we're going to take your notes, I'm going to read the notes, but we're going to use AI that way. That might sound small, but that's me reimagining how I'm engaging with AI and then communicating it. That's what coders have to do. And it's not going to be easy, but that's just what you have to do. Because if you don't, you're almost over here trying to be the developer that you were, trying to make AI fit into that model. And that's not going to work.
(00:35:09):
So you have to step back and, again, reimagine your identity. The second thing you have to do is unlearn some things. And that's hard too, because unlearning is about the point I made earlier, about how we don't know how to use our imagination because it was taken away from us over time. And how you learn to use your imagination, for most people, is very uncomfortable and awkward, because the right side of the brain is about play, and it's about the dorky stuff. And it can feel dorky, especially at work. Music, dancing, singing, all of that is creative activity that expands the creative mind. For some people that feels awfully uncomfortable. So we have to unlearn always going to logic, which is where we have lived our whole lives, around the left side of the brain, which is strategy, structure, linear process, all of that data. That's what we've always lived in.
(00:36:06):
And now AI is inviting us, in order to be able to work, to live in the right side of our brain, which is actually the fun, creative part. But because we really haven't been given permission to do that, it's hard. So we have to unlearn all of the structure so that we can create the synergy between structure and creativity. We really have to retrain people on what feels safe. The reason I'm excited about the coders having this feeling now is because I encourage you: build that into the coding, build that into your architecture. The fear that you have now, let that fear become the thing that lifts you up and makes your coding better. Because now you're planning for fear, right? You're planning for bias. You're planning for those things that don't feel safe. And you'll plan with that in mind, because I used to do a lot of this. I built change management into my organizations, and change management allows us to have people adopt. You've done a good job in tech when people adopt the product. It makes it feel safe, it makes sense. That's what adoption is. And so I think that this fear, et cetera, in the coding can actually help elevate the coding. Because you build for the fear, not the fear itself, but to make sure that other people don't feel the way you feel, like, oh, I don't like when AI does this, because likely other people are impacted by the same things.
Nicky Pike (00:37:40):
I agree. I do this talk where I get asked a lot of times, I'm a new developer coming out of college, should I be worried? And what I often tell people is this is like robotics. Robotics didn't kill jobs in manufacturing plants. Those that used to do those jobs are now repairing robots. Public cloud didn't kill the data center. Now you're getting a split. And this is the same thing. It's going to change the way you work. Now, instead of sitting down and writing code manually with your keyboard, you're going to get to be a true engineer. Now you've really got to start thinking about architecture. And not only architecture, but how do I translate what's in my head into a way that the machine can actually accept and ingest and give me back the outcome that I'm looking for? So you're going to actually start being more of an engineer. And this is something that I think is very unique to software engineering: no other engineering discipline is expected to both build the blueprint and turn the wrench. That's exactly what we expect developers to do every day.
Dr. Gloria Chance (00:38:35):
But do you think that that implies maybe a rethinking?
Nicky Pike (00:38:38):
It does.
Dr. Gloria Chance (00:38:39):
Absolutely. About what developers do. Because I was an engineer, actually, in tech, but also in health communications. And so, right, you had the blueprint, and it was separate there. And even in tech, there was always a separation of duty. So I do wonder if the safety piece is a lot about re-looking at the separation of duties too. Because what you just said is a lot for a new person out of college to come in and be elevated to that kind of role, when you don't have the base understanding of coding.
Nicky Pike (00:39:09):
And I think this is where the productivity gains come in. One of the slowest things to do as a coder is the actual typing part. We know what we want to do. That's the slowdown, that's the interruption: actually getting that in there. This is something that AI can do much quicker than we can, but you have to rethink how you're going to get your ideas onto that same screen. Now I've got to tell AI, generally in a very specific way or a very process-oriented way, this is what I want you to do, and let it take care of the menial things for you.
Dr. Gloria Chance (00:39:38):
And pause right there, because that's where I would inject the visioning, though. I spoke earlier about time travel. Because there's so much to do when you're trying to alleviate at least the typing, I would caution people to approach coding with consciousness and curiosity, and not assume that it needs to be done the same old way. Because the way the brain works, back to the comfort zone, is that you're going to find yourself approaching coding the way you did before. You're going to do it the same way, but you're going to use AI in, literally, I'm saying, a very manual way. What I'm asking, or saying, is if we take time travel and say, before I even sit down to code with AI, after I read the business requirements and I understand, or I have systems design or whatever stage I'm in, I'm going to, as the new, improved, upskilled individual, use my brain and say, what am I actually designing in the future? How am I seeing this AI? I think you have to actually map that out, because if you don't, then, and this is how the mind works, you're going to take your same processes and just try and automate them in AI, and it's not going to go well.
Nicky Pike (00:40:53):
So you're talking about moving people. How do we move people from a fixed mindset to a growth mindset? We're asking individuals to shift from, this is the way it has worked, this is the way it's supposed to work, to embracing a new technology as a tool that's going to augment them, elevate them, make them more productive. What is your playbook for taking an individual and saying, this is how you move from the fixed side to the growth side?
Dr. Gloria Chance (00:41:15):
So first of all, just getting an understanding that we're no longer building systems. I believe that we're shaping how humanity thinks, decides, and creates, right? We're no longer just automating steps. We're actually giving people tools to be better, which means that we've got to be better. So we've got to build for augmentation, not domination. And I think even saying that is important, because AI can dominate if we come in with the mindset of domination: where do I make sure this works and it does this? So how we're thinking about it and how we approach it is very important. Design tools, design thinking tools, that expand human capacity. But I spoke earlier about preserving human dignity. These are things that programmers haven't really thought about before. So it's education on what does this even mean, and then how do I drop that into the code?
(00:42:08):
How do I test that? How do I embed fairness and transparency directly into the code and model that through the architecture? Because if we look right now, most people no longer trust systems. They don't trust people. They don't trust social media. The mistrust, et cetera, is skyrocketing. And it's ironic that it's happening at a time when AI is coming in to shift everything. And so we can see how that's going to be a mess. And so we also recommend you partner with people like me, psychologists, and artists. Artists are really important too, and we have artists in our firm. Why? Because they're on the right side of the brain doing all that stuff that people aren't comfortable with. But it creates change. It expands thinking. So, diverse thinkers. If you are not a diverse thinker, coder, the one thing you want to do is acknowledge that, and either leverage AI, again, or have another person who is a diverse thinker to help you. Because your algorithms in AI, what do you want them to reflect? I think that's really a key question. What do you want the algorithms to reflect?
Nicky Pike (00:43:13):
Your statement about fairness, again, that's highly subjective. Your life experiences are very different than mine. Your idea of fairness is very different from mine. Now, we are starting to see governments in some locations starting to try to put some regulation around AI. We see the EU putting regulation in. But how do you bring in an idea of fairness when you're going to have so many different ideas of what that means and how that should be employed?
Dr. Gloria Chance (00:43:38):
We've done this before, right? In the medical field, we have standards. We have standards everywhere. And I think that we're going to start where we always do, which is individual organizations will have to put in their own standards until there are universal standards. That's how all new tech works. So I would say that the fairness piece has to do with, what is your value proposition to your customer? We all have customers, and what does that say about how you treat your customers? So I think that we already had the built-in things. I don't think everything changes. But where I hear you going and asking is that AI isn't just this isolated tech that's going to come in and impact this. It is impacting everything, which means we have to review policy, we have to review how we approach things. We have to understand, to your point, what is it? Has anyone told developers what they expect of them with AI? What are you seeing across the board? Are developers taking on this whole thing? Or again, are we going to separate duties? I believe there'll be new roles that come in, like human engineers, people who will collaborate with developers so that they get the AI right from a human perspective.
Nicky Pike (00:44:52):
I'm going to repeat back what I heard; you tell me if I'm right. You're talking about not only training people, but having a central, let's say, AI office. To use medical terms, it's almost like a morbidity and mortality review. When something happens with AI, we bring that in, we discuss what it is, and we set a new standard. Now, that can be at the organizational and the enterprise level, or, for some of the cases we talked about where kids were having psychological breaks, it could be at a much higher level. But that's what I'm hearing you say.
Dr. Gloria Chance (00:45:20):
Yes. That's what I'm saying, and the process that I think most of us have used throughout the technology evolution. This is how you introduce new tech. You bring it in and you want to test and learn. And honestly, you should be in an environment that is safe. Let's start there. Let's look at something that we want to automate, like onboarding, that seems like it won't hurt anybody, but we can learn a lot from it, because it's one of the most complex things that organizations do. So taking something like that and testing and learning about your organization's capabilities, that helps. Oh, what skill sets do I have in my IT group? What skill sets do I have in my business? And that starts to give you an idea of what training your organization needs, or what skill gaps you have. So I think testing and learning on a very specific project will help you start to build that office we're talking about, to know the skill sets that you have and the gaps that you need to fill.
Nicky Pike (00:46:14):
Now, one of the things that would worry me about that is, you've been in the tech industry a long time, so, death by committee, too many opinions. I think the same thing could apply to fairness and morality: if you try to fit everyone, you're going to end up with something that works for no one, because you're going to offend, you're going to do something that doesn't work for someone. You're not going to take in certain disabilities or certain aspects of life. How do we prevent the exact opposite of that from happening, where in the attempt to be fair and moral in AI, we destroy the AI?
Dr. Gloria Chance (00:46:48):
I would say one of the most profound parts of AI, now, I think it's always been important, but it's becoming more important, is culture. And culture in any organization is basically how we do things. The culture is the executives', the leader's, the CEO's responsibility to set, but everybody else in the organization actually creates the culture. I think that we all, as employees and as human beings, need to ask ourselves, what culture do we want to be in? Are my actions building the kind of culture I want to be in? Because if you're not a fair organization, then by default, people who want fairness will leave. So there's a way in the culture to handle all of that. Being fair equals transparency. It doesn't mean that things are equal. It just means that the leader knows, based on their values as an organization and what they deliver, this is how we want you to behave. When you don't behave this way, this is what happens to you. It's kind of that cynical. The fairness part is transparency. Hey, I did this to Paul because of that; I'm doing this to you, Gloria, because of this, right? So you balance out some of these things with transparency and communication.
Nicky Pike (00:48:04):
And we're talking about AI as if it's one thing, right? There are a lot of different AIs out there.
Dr. Gloria Chance (00:48:09):
Yes.
Nicky Pike (00:48:09):
So I do agree with you. I don't think that we have to be so subjective and so concerned, because we're going to see adoption through incubation. People are going to try things out. Those that work for the most people are going to get the money. They're the ones that are going to win. And it also doesn't mean that just because this one AI works for 80% of what we're doing, there's not another AI that works for the 20%. We'll have that kind of distribution. So much like we see in politics and social gatherings and such today, we'll see the same thing with AI. And I think that's an important thing to remember.
Dr. Gloria Chance (00:48:39):
Absolutely, I do. And I think that, again, the executive's job, which I think, because they're afraid too, when you look at some of the data, right? They're saying that upper-level management and boards, you can streamline, to your point about committees. I've never liked committees. I don't like building them. I usually don't like being on them. But I understand the importance. So again, be creative about how you do that. Leverage AI to make sure that people have input. The idea is about collaboration. I think that's why committees were initially developed, for collaboration. But anyway, I think the role of the executive becomes really important, right? Because they have to be responsible for building the culture. They have to be the ones to push humanization. And unfortunately, and not all of them, but what you are hearing is a lot of executives are like, Ooh, we can cut, we can save money. And that is a big, important part. But I think the executive role right now is to make sure that humans feel human and that humans feel that they're still valuable, because cost-saving mindsets don't help humans. Looking at capability, at what new capabilities we can bring to the organization, is what will be helpful.
Nicky Pike (00:49:55):
And I think what you said is very important. It's something that these leaders who are looking at the hype and making some of these bold statements need to really come back and take a look at. To your point, three things that we don't think, for at least five to ten years, AI is going to really be capable of are empathy, creativity, and imagination. So these are fundamentally human ideas. But when we've seen AI take such precedence, there are ideas, or I won't even say ideas, I'll say capabilities, that we have to protect for the human when they're using AI. We don't want them to become over-reliant on AI and lose that part of their humanity.
Dr. Gloria Chance (00:50:31):
Exactly. And so I would say, as we're wrapping up, that I think the human advantage is this: AI can process faster, but humans imagine further. When we bring both together, precision, which is AI, right, the sort of pattern recognition, and human intuition. So precision and intuition, boom. AI logic, and our ability as humans to storytell data, which again, left brain, robot, AI, but we create meaning. So there's data, but now I'm giving you the story, the meaning behind that data. That's when real transformation happens. How do we work in partnership with AI? It's the same thing with coders. You have to figure out your process in advance. I think it's project by project. How am I partnering? How am I envisioning this partnering, and what is the transformation I'm seeking? Because before, when it was just our brains, we could just write it in the code and erase it, keep going back and forth. But when you address AI, I don't know about you, but I like to be organized. I like to already know where I'm going, what I'm doing, how I'm doing it. Because if you don't give it that kind of instruction, then you don't get the best answer. So I really think that design thinking, the time travel, all of that preparation in advance, will help coders get at least closer to being able to understand the human thing that we're trying to automate.
Nicky Pike (00:52:05):
I completely agree. And on the creativity front, I do agree with you, even though I kind of challenged you earlier. I don't think AI has the power of creativity, but what I do think it has is the ability to augment human creativity. I mean, I've got this prediction out there. With AI, we're starting to see non-technical people who have these great ideas being able to use AI to go out and build them and get funding and get 'em pushed. And I've predicted that we're going to see an explosion of ideas through the democratization of software development. What do you think about this? Do you think this is going to be a net positive for us, or do you think we're going to end up drowning in poorly built software that just becomes technical debt that nobody uses, because we don't have that focus and that mindset to keep a proper development process around what we're building?
Dr. Gloria Chance (00:52:52):
Yeah, I think initially it's going to be the latter. We're going to have that tech, and we're going to make a lot of mistakes, because everybody's excited. And I think there are a lot of good ideas out there. Being a creative person, and researching creativity, I ran a lot of innovation labs in banking, et cetera. So I'm used to it all, and I know that usually you can run a thousand ideas through something and you might get one that really works, one or two. So I think we'll have to go through that. Like, oh, this is so cool. I've been talking to people and I'm like, no, I hear you, but no. So I think we have to go through that, and that's exciting, because that means people will get engaged with it. But I do think that, again, the tech industry does have to morph. I think we're going to be so critical in this transformation, but I think it's going to require us to really look inside. I don't know, do you know, Nicky, of any... Remember how you all used to do those jam sessions, whatever? Are there any jam sessions going on? How can we as a tech industry be different in this AI? What are the things that we do that we feel we need to shift, like bias and psychological safety and those things?
Nicky Pike (00:54:12):
So I haven't seen it. I mean, we've got hackathons and things going on. Every company's coming up with their own ideas around AI. I haven't seen us really ask the questions about the human side of it. That's why I was just absolutely pumped for this interview with you: that's something that I don't think a lot of people are talking about. I do think it needs to be talked about more often. What can we do with AI not only to increase productivity, but also to realize that, yes, AI will increase our productivity, but it's likely to increase burnout rates, because now we're able to operate at 10x or a thousand x what we used to, but that still means we're doing more work. What we used to do one for one is now one for five, or two for ten. We're still going to be doing much more, but we're also having to change our mindset. So that's something that I think needs to be talked about in the industry a lot more. Don't start believing that just because we're becoming more productive with AI, this is going to solve our burnout crisis. I don't think that's true.
Dr. Gloria Chance (00:55:08):
Oh my God, no. It's actually going to increase it, right? For example, I've gone to tech companies and done mind spas or bootcamps where we're basically exercising the brain, because it is an organ, just like your muscles, that has to be built and managed and explored, and that also leads to better stress reduction and that kind of thing. So I do think that tools like mindfulness and things to reduce stress, et cetera, will become important in the tech industry. And I, again, believe, first time I've ever said this, I do think that if tech doesn't figure this out, to your question, the fact is that AI can do some things in tech. A person like me who has been in tech, if I really focus, I could code again. I can do it again. I can create my own things.
(00:55:59):
I don't need you all as much. And when we look at the Gen Zs, and I think it's Gen X too, the people who have always had tech, they were born with tech, and the people coming up, there could be a time when tech as we know it doesn't exist, right? Because now it's a product of who we are as human beings. And so I do think it will be important for coders to think outside of the box. And again, back to the beginning: it's not fun when the rabbit has the gun, because now we don't know what humans will do. We do know that humans are very ingenious and that we can create and come up with tons of things. So I'm saying it's a cycle. It's the same cycle that we've had through all the tech. We make a lot of mistakes, waste a lot of money and time, but then we end up in a place where, okay, we have standards, we know where we're going, and it settles down a little. But what I will say is the speed of tech, we know, is exponentially increasing.
Nicky Pike (00:57:00):
Absolutely.
Dr. Gloria Chance (00:57:00):
And the next wave is transhumanism, which is now we're embedding tech into the brain. We're already doing some of that. We've been doing that for a long time in terms of using tech in robotics and operations and that kind of stuff, in heart valves and those kinds of things. But we're expanding that at a rapid rate. We're looking at using it just to make us better. It's not even a medical thing. It's, let me extend my life, let me extend my thinking. So all of that is coming close behind this. And how are we thinking about that?
Nicky Pike (00:57:35):
Let me ask you one more question, then we're going to get into the prediction side of this and round this out. You've got two statements that I quote, and I've told people since we've talked. One is, leaders need to be clear about whether they're thinking of AI as a myth or a fact. And the other quote that you always say is, humans always have to be a part of the equation, especially from a quality control perspective. What is the one thing that leaders and CEOs and CIOs need to be thinking about? What's the one thing that, if they got it right, everything else would be easier?
Dr. Gloria Chance (00:58:04):
Trust. There's the Edelman report 2025. We've been doing a lot of work in the philanthropy industry around trust, and trust is declining significantly across the world. And it's distinctly because of dis- and misinformation. When we look at it, you talk about myths, like, what's the truth and what's not right now? Especially with AI deepfakes, we can't even tell anymore what is real and what is not. Or you have to pause a little bit longer and really sort of process and think it through. So I think that trust, because it's declined significantly because of these things, will be the most important thing that a leader can show, share, and be during this time.
Nicky Pike (00:58:46):
And see, this is why I love talking to you, because every time we talk, you bring a different thought to my brain. You talked about trust, and I agree with you. So with AI, we are seeing videos that you can't tell from life are fake. We're seeing news stories that come out, bots that are out there doing things. It makes me wonder, because of a trust issue, because we can no longer believe what our lying eyes are telling us, do you think we're going to see humans start going back into more of a tribalist mindset? I want to surround myself with people that think the way I do. I want to get more tribalist, because I can't trust anything that I'm hearing out there.
Dr. Gloria Chance (00:59:19):
No. Actually, in the work that I do, when you think about systems, systems thinking, the way that creativity works is that in order for something new to be born, and this is why I think this might prove the AI thing even more, systems have to break down. And what are we seeing right now? All of our systems are breaking down: our governmental systems, our tech systems, you name it. Everywhere you go is broken.
(00:59:45):
While that can be scary, and it's inducing a lot of fear, what chaos theory says, and that's a part of creativity, is that when things break down, they break down, but that's the beginning of them being rebuilt. And so I think as humans, we have the opportunity right now to co-create in a different way. And that's why I keep saying that coders need to look at their environment, their system, and reimagine it. Because when we reimagine things, we can then co-create and say, you know what? That really sucked, and here's why. And here's a solution that makes it better. So I believe that what the current environment has shown us, and everybody may not believe this, is that as human beings, we do need each other and we need to share. Because, as you said earlier, we're all different. So what's different in you that can help make me better, and vice versa? That is where human nature and our human desire are, even though the fixed mindset gets in the way, because the fixed mindset is about preferences. But if we can get people to move out into a growth mindset, all of a sudden we realize, oh my God, there's so much more when I'm not just focused on this one thing and this one self and this one way of being.
Nicky Pike (01:01:03):
Excellent. Alright, we've got two more questions. One's going to be your prediction, and then I'm going to ask you a hot seat question. So, the prediction: it's 2030 right now. AI is in all enterprises, it's in our everyday life. We're at that inflection point where we're seeing that integration into our lives. What do you think collaboration between humans and AI actually looks like in 2030, and how do leaders and teams interact with AI daily?
Dr. Gloria Chance (01:01:25):
So I think that it looks like culture. I think that by 2030, we will have created a culture where AI and the human being exist as beings. In some of the research and work that I do, I talk about the different beings: we have human beings, and then we'll have different types of AI beings that will be recognized by the law, as well as by HR services, et cetera. And I believe that in that culture, the human and the AI will be measured on their productivity and their ability to do good. I think we're going to have a different system. I think it's going to be kind of cool, right? It's going to be like, hey, this is Dr. Chance, here's my peeps, and they're going to be robots. So that's pretty cool.
Nicky Pike (01:02:13):
I love that. I think it's inevitable it's going to happen, so we're definitely going to see that.
Dr. Gloria Chance (01:02:19):
Might as well make it fun.
Nicky Pike (01:02:21):
Yeah, absolutely. Because now we are going to have things that can assist us. It's going to be the robot vacuum for our daily lives, right?
Dr. Gloria Chance (01:02:29):
Exactly. I'd love it, yes.
Nicky Pike (01:02:32):
Alright, so here's your hot seat question. You left the C-suite for psychology, and you've got a lot of opinions on what AI should be doing, what it can do, and the right ways to do it. If you were offered a CIO role today at a major company, with AI as part of your toolkit and with your opinions on the right way to do it, would you take the job or would you not? And why?
Dr. Gloria Chance (01:02:54):
No, no, no, I wouldn't. And I can say that really quickly, because I have the opportunity to go back into corporate America, and I like being on the sidelines. I like being nosy and being like, what are you doing? What are you up to? And I love the cross-industry perspective. I get bored when I'm doing one thing. So I'd love to be on a project where I'm helping as a consultant, advising or guiding groups. I'd absolutely love that all day. But no, I'm not going back to a job.
Nicky Pike (01:03:25):
And it's one of those things. So for our audience, we told you that Dr. Gloria works with The Mousai Group. You're the CEO of The Mousai Group, which is Greek for the Muses, right? Am I correct in that?
Dr. Gloria Chance (01:03:36):
Yes, it is. We combine knowledge and art.
Nicky Pike (01:03:39):
Yes, and 100% accurate. Because I can tell you, I met Dr. Gloria on LinkedIn, and we've just had some of the best conversations. She has been a muse for changing some of the ways I think about AI and for cementing some of the ideas that I have. It's always a pleasure to talk to you, and I'm sure this won't be the last time. So I first want to thank you for coming on and give you a chance to close with any thoughts.
Dr. Gloria Chance (01:04:02):
Nicky, thank you. It has been a blast. And I do want to say, I just went through a health scare over the past year, and when I think about AI, I think that part of being human is going through life experiences. AI can be a partner alongside us through those experiences, but it is not the experience. So I just want people to embrace their humanness, and I want them to understand and go learn about being human. Because I will tell you that in my work, most people do not understand what it means to be human. And as human beings, we have the ability to do so many incredible things. Again, if we can expand our creative thinking, our creative mind, I think we'll become really amazing co-partners with AI.
Nicky Pike (01:04:49):
Love it, love everything about it. So with that, I want to thank you for your time. Can we consider you a full-fledged member of the Devolution?
Dr. Gloria Chance (01:04:57):
Oh, absolutely. I am a Devolutionist.
Nicky Pike (01:04:59):
There you go. Love everything about it. Thank you again, Dr. Gloria. I'm sure this won't be the last time we talk, and I hope you have a wonderful day.
Dr. Gloria Chance (01:05:07):
Thank you so much. You too, Nicky. Bye-bye.
Nicky Pike (01:05:11):
Thank you for listening to Devolution. If you've got something for us to decode, let me know. You can message me, Nicky Pike on LinkedIn, or join our Discord community and drop it there. And seriously, don't forget to subscribe. You do not want to miss what's next.