Practical AI

As AI accelerates innovation and adoption, leaders are facing rising cognitive load, shifting systems, and new emotional realities inside their organizations. In this episode, Deloitte’s Chief Innovation Officer Deborah Golden joins us to explore how AI is reshaping leadership, why vulnerability and empathy are critical in this moment, and how anti-fragility, not just resilience, will define the future of work.

Sponsor:
  •  Framer - The website builder that turns your dot com from a formality into a tool for growth. Check it out at framer.com/PRACTICALAI

Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Host
Daniel Whitenack
Guest
Deborah Golden

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Narrator:

Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Blue Sky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Narrator:

Now, onto the show.

Daniel:

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Chris:

Hey. Doing very well today, Daniel. How's it going?

Daniel:

It's going good. It seems like the weeks are all frantic in 2026, and having a lot of AI agents help me throughout each of my days to get through it, it seems like. Definitely makes me think about my own human role in my day-to-day work, and I'm excited to kind of dig into some of those topics and others with our guest today, Deb Golden, who is Chief Innovation Officer at Deloitte. Welcome, Deb.

Deborah:

Thank you so much for having me. I appreciate both of you.

Daniel:

Yeah, it's great to have you with us. Maybe just to start out in kind of a general introductory way, for those out there that maybe have heard the name Deloitte, could you give us a quick introduction, specifically how Deloitte is involved with AI work, as is the topic of this podcast, and then maybe how the Chief Innovation Officer role, your current role, kind of fits into that? That would be great.

Deborah:

I mean, excellent. I mean, at the highest level, and to be super quick and brief on this topic: Deloitte is not just a service provider, but I would say a major global industrial architect, if you will, particularly of our current AI era. And so as I think about our multidisciplinary approach, whether that's everything from audit and tax to consulting advisory services, you think about how all of these combined really help to rebuild the foundational pipes, if you will, of not just our own global enterprise, but any global enterprise and or individual enterprise looking to make AI-native operations actually work. And so again, whether that's from advising on technology to actually rebuilding the foundation, we are engaged in the soup to nuts associated with that.

Deborah:

So it really is quite interesting, not just to see how we've evolved our hundreds of years of background into being in this AI era, but also looking at how each of those disciplines not only has an impact on the world individually, but also cohesively across the board. So when you think about audit and tax impacting R&D, as an example, it's a huge impact, particularly in our tech-forward way. It's actually one of the things that has also given me the ability to traverse lots of different opportunities across multiple industries. We operate in every single industry, from a commercial landscape to the government and public services sector landscape to different types of technology, whether it's advise and implement to operate, and or products-plus and the commercialization of products and optimization. So it really is the gamut, which can be complicated at times, or complex, but it also actually speaks to some of the complexity, I think, that we see in the world today and the world around us. And specific to AI, we've created a quite substantive, significant investment around AI, and it's been ongoing for many, many years now as we think about how we can shape not just the AI world but the other parts of that world that have an impact.

Deborah:

And again, whether that's products, whether that's clients, whether that's our own internal operations or how we look at addressing the client marketplace.

Chris:

I'm kind of wondering, that was a great intro to Deloitte, and I appreciate that kind of level-set for folks that aren't familiar with it. One of the things that's really interesting to me personally is I know you have a really unique background and you bring a very unique perspective to the role of innovation in your organization. I'm wondering if you can tell your own story a little bit, kind of how you got into the position. And one of the things that's curious for me is that a lot of times in large organizations, and I'm in a large organization myself, you don't tend to have the most innovative people rising to the top and putting their mark on the kinds of work they wanna do. And so I'm kind of curious a little bit about how you beat the odds on that to get there, and to be able to bring your own personal brand to the types of work that you love to do.

Deborah:

Well, thank you for all of the questions today, but certainly that one, because I do think it's the foundation of how I approach things. And so I think if you were to look at my resume, you'll see the what. You'll see the decades of navigating high-stakes risk, leading massive transformations, and candidly managing the systems that keep the world turning in a variety of different ways. And I say that because I've been at Deloitte now for thirty-some-odd years. I did work somewhere else for two years before that.

Deborah:

And there is definitely a through line, high stakes risk in terms of being calculated in that capacity, looking at massive transformations and everything from digital cloud native to back in the ERP days to even financial and finance transformation, while at the same time thinking about how to manage these systems that, again, turn the world. I think what you won't see, and to get to the heart of your question, is the part that really actually matters, not just for the future that we're building but candidly how I operate, which is the how. I've spent years what I like to call unlearning the very logic that makes me successful. I spend a lot of time understanding everyone else's business to discern how to best try and help solve for the problems. I love to solve incredibly complex situations and issues.

Deborah:

And ultimately, at the end of the day, it's my inquisitive nature. I learn by asking questions. I learn by trying to understand that the logic that helped me yesterday isn't going to be the logic that necessarily helps me today. And I'm not afraid to actually change that logic. But in order to do that, I learn by asking the questions, what do you know?

Deborah:

How do you know it? What is the piece that you may know? And by looking at it in not having to be in a role or having a responsibility, it could be, I'm learning the most from an anthropologist because anthropologists know how to solve really hard problems. And what can I take away from that situation to then perhaps have me see the world in a very different way? And so when I think about whether it's an AI strategy, whether it's an innovation strategy, whether it's cyber and by the way, it's not just strategy.

Deborah:

It goes to how we do implementation and execution. Those things candidly aren't found in a spreadsheet. They're really found in empathy and judgment, particularly when systems fail. And so my secret sauce, if you will, is based on my own way of thinking. But it's not because it's my way or the highway.

Deborah:

It's not because my way is right or better than someone else's. I've just spent a life journey candidly of trying to understand how my brain works. And then based on how my brain works, how can I take those things that might be intuitive to me or intuitive to others and hone those skills? And so a lot of my life story is based on personal triumph. I lost my mother at a fairly early age and then I too had some very severe life threatening situations where I almost died.

Deborah:

And so when you take those foundations, I didn't have a choice to sit there and feel sorry for myself. I had to figure out how to solve for these problems; I needed to solve for these problems. And it's not about not dealing with the emotion of the moment, but I did, candidly, have to separate some of the emotion, because I was so emotional about the situation, to actually see things very black and whitely. And it's the reason why, candidly, I'm very good in what I would say are crisis scenarios, or solving for problems, because what I find, at least, is that a lot of people who are trying to solve problems are afraid to make drastic change, whether it be because they built the systems that need the change, whether because there's an ego tied to those systems that need changing, or simply because they can't see another way forward.

Deborah:

And I think, just because of the life experiences that I've had, I don't look for empathy or sympathy, but at the same time, I've just honed the way that my brain looks at really hard problems. And my hope, at the impetus of all that, is that no child should ever see their mother die. And if the way that I ask questions and the way that I build solutions and help other leaders come along enables that, then I will have been successful in my own life journey.

Daniel:

Yeah, thank you so much for sharing that, Deb. It's so inspiring and helps us kind of understand the framing through which you view innovation. While we were talking, I was definitely thinking that obviously AI can impact real human lives and is impacting real human lives. But also, it's one of those things that is very tied up in people's experience and emotion, whether it's the board who really has to see AI transform their company, or the CISO who is terrified of new liability, or the engineer who is afraid of losing the core part of their job. I guess, as you've seen this kind of emotion and almost these crisis-level things emerge around people's day-to-day work, how do you frame that particular set of emotions and change from your perspective?

Daniel:

I think this is also exacerbated by the hype around AI and people not knowing what's real and what's maybe not real. For those that you're engaging with, how do you bring a level of real understanding to where folks in an organization are coming from, and what actually needs to change in the organization with respect to this adoption of AI?

Deborah:

Yeah, I mean, I think one of the most fundamental things: now, there's an easier way to go about it, and then there's the hard work that actually needs to be done to change the foundation. I'm going to talk about the latter first, because I actually think sometimes, in the race to quote unquote adopt AI, we lose sight of what the true hard work is that needs to be done for actual adoption. And what I mean by that is, I hear a lot of times people saying, well, we've been in this AI world before with cyber or digital or cloud. And actually, realistically, we haven't been. Those worlds were built on a very deterministic system, so zeros and ones.

Deborah:

You're building systems, people, processes based on an if then statement, something that is very expected to understand the outcome. So if this happens, then this will happen, and that is the expected outcome. In an AI driven world, we don't actually have that. We have a very probabilistic system that's actually learning as it goes. And so when you think about the work towards adoption, there is actually some hard work that needs to be done on the underlying again, I say systems, but let's assume that I mean that broadly, not just physical systems.

Deborah:

It could be logical systems. It could be people or a process. It's actually, again, how do you get them collectively to unlearn what they think they know? Because if you're building an AI system on top of a deterministic if then statement, it's already going to be bound to fail. And what I talk to a lot of leaders about is, again, this race to AI adoption where, funny enough, speed has now become the net new metric.

Deborah:

But that's only important if you actually understand your baseline. And most people don't actually understand their baseline. So when you're like, well, how fast are you getting to adoption? Well, did you know what some other metric was before that? That's question A.

Deborah:

And then B, when you see individuals struggling with, well, we built this quote unquote perfect AI model in a perfect sandbox, and we don't know why it doesn't work or why people aren't adopting it, again, it's predominantly and probably built on the fact that it was built with the same playbook and the same logic that got people to a zeros-and-ones world, which, again, is just not the world that we live in. And again, I've heard people talk about this like, we need to have all the guardrails possible for an AI world. I'm like, okay, well, the AI is going to out-learn you in sixty seconds, if not faster than that. So having this finite list of playbooks and guardrails and guidelines is going to be really difficult in that scenario. And that's a perfect example of unlearning what you think you know to make this technology the best it could possibly be to enhance the world and what we have.
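As a side note for readers following along: the deterministic-versus-probabilistic contrast Deborah draws can be sketched in a few lines of code. This is a toy illustration only; the routing example, threshold, and scoring rule below are invented for the sake of contrast and are not from Deloitte or any real system.

```python
import random

# Deterministic "if-then" system: the same input always produces the same,
# fully expected outcome -- the zeros-and-ones world described above.
def deterministic_route(amount: float) -> str:
    if amount > 10_000:
        return "manual_review"
    return "auto_approve"

# Probabilistic system: the output is drawn from a distribution, so the same
# input can yield different answers run to run (simulated here with a toy
# score rather than a trained model).
def probabilistic_route(amount: float, rng: random.Random) -> str:
    p_review = min(0.95, amount / 20_000)  # invented scoring rule
    return "manual_review" if rng.random() < p_review else "auto_approve"

rng = random.Random(0)
same = all(deterministic_route(15_000) == "manual_review" for _ in range(50))
outcomes = {probabilistic_route(15_000, rng) for _ in range(50)}
print(same, outcomes)
```

The point of the sketch is that guardrails written as fixed if-then rules pin down the first function completely, but only bound the second one statistically, which is why playbooks built for deterministic systems tend not to transfer.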

Deborah:

And it's an and, right? It's not necessarily an or. There's things you can do to keep moving the things along. And I don't say that because the systems work that's necessary is going to take years to go do. Maybe ask a flip question.

Deborah:

Maybe the question is, do I even need this system or process at all? We seem to be utilizing AI to automate the things that currently exist. Operational efficiency is table stakes. It has to happen. The true unlock is going to be when we are creating net-new business models, net-new competitive advantage, net-new ways of looking at the world collectively with an AI model, not how do I utilize AI to turn a deterministic system into being better at what it did previously.

Deborah:

Again, I think that's a little bit of complacency, but it's also a little bit of ease. That's what people know, and it makes people really nervous when you have to look at, well, I might have to pivot my thinking and my way of being in order to get to the highest probable answer.

Sponsor:

So your team is deploying models that generate code, write entire documents, and you're automating workflows that save you weeks and days now. But updating a headline on your marketing website? That's a three-day ticket somehow in the backlog. There's something deeply ironic about an AI company, which is pretty much every company now, that can ship a model to production in hours but can't push a landing page without a deploy cycle. And yet, that's most companies right now. Framer fixes this, and it's not even close.

Sponsor:

It's a website builder that works like your team's favorite design tool. Real-time collaboration, a CMS built for SEO, integrated A/B testing; your designers and your marketers own the entire .com from day one. Changes go live in seconds, you get one-click publish, no engineer required; your team reduces dependencies and hits escape velocity. And before you think, no code, cute: Perplexity, Miro, Mixpanel, all of them, all their marketing teams are running on Framer. Enterprise-grade security, premium hosting, 99.99% uptime SLAs. The infrastructure is serious.

Sponsor:

The workflow just removes the bottleneck. Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/practicalai for 30% off a Framer Pro annual plan. That's framer.com/practicalai for 30% off. Rules and restrictions may apply.

Chris:

So Deb, I wanted to kind of follow up on that last one; you really got me thinking with that answer. And I also want to bring in a little bit of one of the things that I know you are a very strong believer in, which is empathy, and kind of bringing that human element. And we're at this moment that you were describing with leaders where they are struggling. There's a new way of thinking to get them moving forward from where they've been. As you've mentioned earlier in this discussion, the logic of today and yesterday may not be the logic you need for tomorrow.

Chris:

And kind of built into this, there is this fear within leaders in various organizations, where they're worried about all the potential bad outcomes that they're thinking about and the guardrails that they'd like to put around those things. That really harks back to kind of human fear, human vulnerability. How do you see the role of acknowledging that vulnerability that people have in this capacity, and the empathy that might be applied there, to make really hard decisions, to go a different path from what they have been led to believe, what they've learned through their career, and what they've always done up until this moment? How would you guide them through that process? How would you suggest that they take that next step?

Deborah:

Yeah, and if I may, maybe I'll provide a little bit of sentiment in advance of answering the question, which is: I think on paper, we appear more connected than ever, right, if you think about all the ways that we can connect. But on a human level, people feel invisible. And I think this is paramount. You think about it: we had a period where we were all at the office, then you have a period where no one was at the office, and now we have a hybrid return to work. And no matter how you define that, because I don't want to get hung up in what that definition means or doesn't mean, I'll use this analogy as an example.

Deborah:

We have some simulation of hybrid in an office, and yet, in most situations, we've never changed the semantics of what an office means. And so when you think about that, even if you're telling people to go do these things, it's really easy to very quickly say, not only do I feel invisible, I'm not sure how to communicate in a world where maybe the physical structures also haven't changed to meet the demand of where we are today. And so you put these things together, and then couple that with individuals who feel like they've been replaced by efficiency. You look at these things. The way that I might shift the question, too, is this: the logic even of yesterday would tell us that vulnerability is a liability. And I would actually argue against that point completely, because there is something hidden in that, to be a true, polished, I wouldn't even say an executive persona.

Deborah:

I would just say a human. In this world where we have what I'll call authority economy, vulnerability could be your greatest asset. It's candidly, right now, the only thing that AI can't simulate. It's not to say in a world of tomorrow, it won't. But if AI never felt the weight of a life altering decision or the grief of a lost foundation, what might be different?

Deborah:

And so that's kind of where maybe I shift my thinking even a little bit as it relates to vulnerability is not a bad thing. Vulnerability is a very good thing, but it does take individuals very purposeful energy to put that vulnerability out. And that, by the way, is also probably hard because look, we all judge ourselves. We're probably our worst critics in terms of, should I do this? How will people interpret it?

Deborah:

What if I say something that's not appropriate? What goes along with that? But I do think, if you think about where we're headed, vulnerability, and leading through vulnerability, is incredibly important. And at the end of the day, that is how real empathy starts, right? Real empathy starts with saying, I don't have all the answers, whether that be today or tomorrow.

Deborah:

But I'm committed to finding out and understanding them through and with you, and creating not just psychological safety, but, again, I think something that's broken in today's day and age is that the goals, roles, and metrics don't necessarily support that. And so if human behavior inside of an organization is driven by goals, roles, and objectives, and those don't tie up to allowing us to be truly vulnerable and empathetic leaders, it's going to be really hard to see people do it more and more. And truly, if you actually do it in a way that is accretive, empathy then ends up being not just a quote unquote nice-to-have soft skill; it actually becomes a high-level diagnostic tool, right? If you really, truly listen to the struggles and aren't just being nice, you start to uncover the system paradoxes that are actually slowing you down. And I think that's where we see people struggling.

Deborah:

It's because the legacy systems we force them into are at war with the tools that we've given to them. And I do think empathy is going to be a way for us to see that friction. And that's something that the dashboards or the lollipop charts often truly miss. And so again, I don't think we're ever short of ideas. I think we're short of actually really understanding how people view things and view the world.

Deborah:

And candidly, I've had that across my career. I may come up with lots of ideas or lots of ways to say, I think we should look at things in a very different way. And it's really been hard because people will say, well, Deb, you're just so different. And the connotation of difference over my life has really led me to think it's a negative connotation. My difference doesn't make me negatively different.

Deborah:

My difference makes me who I am today and the way that I see the world. I needed to do the work to understand how that difference can make me more successful. And I mean that personally, not necessarily professionally. But if I lean into understanding some of those things, I do need to hone it. Like I get from A to Z quicker maybe than most.

Deborah:

Does it make it better or worse? But it does mean then I may need to pivot how I have people learn and understand what I'm saying because I need to make sure that that path is really clear. So I do think understanding that, understanding judgment, understanding empathy, understanding vulnerability is going to be a way that we can see through, if you will, the sameness. And I do think in an AI world, AI democratizes innovation. It also democratizes very different cognitive thinking, which I think is going to be critical to us being able to solve the most complicated problems.

Daniel:

Something, Deb, that I was thinking about: I was trying to think, in my own experience over the preceding months, where I've had this sort of cognitive load, or maybe fear, around some of the paradigms kind of shifting under my own feet. I think one of those things has been, as I engage with our engineering team, as I engage hands-on; my role for a very long time, at least as far as my engagement in kind of technical things like coding or infrastructure setup, that sort of thing, always sort of made sense to me. Now I'm kind of wrestling with these agents and connected systems aiding me day to day. That's been great. As I mentioned to Chris early on in the conversation, I'm able to get a lot done, but I've noticed a different sort of cognitive load on myself where I get some things up and running, right? Like, oh, I'm changing this design over here with this agent to help my engineering team visualize what I'm trying to say.

Daniel:

Then I context switch over here to email, and I want this document summarized over here, and then I context switch back to another email. There's a lot of cognitive load that I'm experiencing now that I wasn't before, as I go back and I see, oh, my agent over here kind of went off the rails, and then I have to remind myself of what to do and redo this. The overall flow of that has shifted, and in a lot of ways it is more productive, but I've noticed a different strain on myself. I'm wondering, as you engage with people from an empathetic standpoint and see them experiencing this new technology, whether you're also seeing shifts: any different cognitive loads that people are experiencing, or ways that this technology is affecting them in a profound way personally?

Deborah:

I mean, for sure. If you're utilizing AI in this capacity, any capacity, by the way, learning, etcetera, there is going to be a different cognitive load, and it's not just in the way that you described it, because I agree, by the way, particularly if that's not something that you learned as a way to work through problems. So it's problem solving in a very different way. Again, it's not good or bad. It just is.

Deborah:

And all of our brains think and learn differently. And I do think that that's something that's really important, because the way you learn may not be the way that I learn, may not be the way that everybody else learns. And that, to me, is probably one of the most empathetic and most vulnerable things you can have, because I think there's a part of this which would say education has taught us that we should all think and be in one way. And again, I just fundamentally don't agree with that. I mean, we have five senses.

Deborah:

Some could argue we have six now, but we have multiple senses. You learn from all of your senses. Standard day education teaches you to learn with two senses. You have all these other senses that you learn with and people do learn with. I think that actually is probably one of, again, the most vulnerable and empathetic things you could say or do to someone is that the way that you learn and understand and evolve is not the same for everyone.

Deborah:

And so even just understanding that nuance is super important because, again, the old paradigm, if you think about the strain, was checking boxes, moving files, following a manual. The new paradigm, AI does its thing, right? But that doesn't mean that there's not a perspective for human thinking or human intersectionality, particularly when we talk about things around whether that's hallucinations with the AI, whether that's AI bias, whether there's a whole host of new things now that we have to go learn. So it's not just about how do I build the algorithm. It's all these other things we also have to be thinking about.

Deborah:

And by the way, even if we're not thinking about it, AI is inherently learning in that context. And so I think that it's like context switching on steroids because it's not just the thing that we used to address. It's all these new things that we have to add in. So you're not just context switching on tasks. You're actually almost switching on states of reality.

Deborah:

So you're moving from creator to judge, from empathy to data analytics, and you're doing it at a very rapid pace. And so we used to talk in terms of hard work being about hours and output. Now, today, work is cognitive synthesis. So when you think about how you engage with AI, you aren't just typing, right? Like before, it used to be we're typing, and we'd see it on the screen, or you'd see the squiggly that comes up when something's misspelled, and you have time to go back and change it.

Deborah:

Now we're constantly adjudicating between what the model says, what you know to be true, what the organization needs, what you think might be false. It's a lot just to put into the scenario. And to me, what that means is a constant interrogation of the truth, that becomes heavy. It becomes heavy. You get exhausted even just after an hour of prompting because you've done a day's worth of executive judgment in sixty minutes, right?

Deborah:

So I do think this load is real as we think about it. And again, I'll apply it back to a statement I made earlier. It's not just the load. It's switching between human logic, i.e.

Deborah:

Nuance, EQ, and probabilistic logic: what are the likelihoods, patterns, and averages? And I've termed this thing of, I think everybody is now becoming a neural athlete. And that's like running a sprint on a treadmill that keeps changing speed and incline without warning. We aren't designed to live in that state forever.

Deborah:

We're not designed to be in perpetual high-velocity synthesis. And that also reinforces the need for a pause. We aren't just tired. Perhaps we're becoming cognitively brittle. And again, to me, in order to be the most effective, elite neural athlete that you can be, you really have to think about not just increasing utilization, but how you manage cognitive energy.

Deborah:

And that's not just for you; it's for you and for your team. And sometimes that means stopping is more important. Sometimes that means pausing to question is more important. And sometimes that means the work that you did in sixty minutes, you're okay to throw away. Whereas in the past, throwing away work after sixty minutes, I mean, I don't know about you, but I think we pretty much all cringed when we're like, oh my gosh, my whole life's work was in that sixty minutes, and now it's gone.

Deborah:

Again, part of this gets back to my comments on unlearning. I don't think the luxury is speed, even though the world is moving very fast. It is focus, and how we can actually think about becoming that neural athlete who has the clarity to know which problems are worth solving and the empathy to bring along the right people for the ride.

Chris:

So Deb, I think you really hit a point that is a big, big deal there from my perspective, and something I've spent a lot of time thinking about. It's something that Dan and I have talked a good bit about, both on the show and off. You talked about, and I love your term, being a neural athlete, and the fact that it's maybe something of an obstacle course; everything is changing every moment or two. There's the speed and the exertion that you have to put in, and not only the cognitive load Daniel described, but you pointed out the notion of the synthesis that goes with that and the context switching.

Chris:

And if you put all these things together, that's asking a lot from the humans at this point, in terms of trying to navigate our rapidly changing world. And I know you can extrapolate out to what we see in the world every day, in the news and such. The world is changing really, really fast. In some ways, that's better. In some ways, it's quite hard.

Chris:

There are a lot of people out there who may find it particularly challenging to navigate this. And so, you know, the question is, what are your thoughts on that? Not everybody is dealing with innovation on a day to day basis, and new ways of thinking with AI and so on. And so, you know, this little discussion group is quite unusual compared to the rest of the people out there who are just living their lives in a more traditional way, in whatever culture and part of the world they're in.

Chris:

How should they be thinking about this when this is not part of their normal day to day thinking? How do they navigate productively, or maybe productively is the wrong word, safely into a new world without, you know, stumbling, which is the great fear? I think we're seeing that in politics and in just about everything these days. Do you have any thoughts around how to shape a larger world so that people can kind of catch up?

Chris:

What are your thoughts? I know everyone has their own thing, but I'd love to hear yours.

Deborah:

Yeah, no. And look, I mean, the world's going to evolve whether we want it to or not. I mean, that is just what is going to happen. And so I look at it and, well, yes, there are days I'm trying to figure out how we look at solving really complicated health care issues, which, again, I realize everybody might not be trying to solve for. But I also look at it and say, again, even in my own life, in the day to day, how do I just lean into it?

Deborah:

Because maybe the outcomes that I'm looking for will help inform my decision. Again, they don't have to be better or worse. I I love data. I'm a nerd. I love data.

Deborah:

I love to utilize that. And so I think this is also a bit of a stumbling point too, because some people look at this and say, I don't know where to start. Even in my day to day, like, I don't know. And I'm like, okay, think about this. We all do whatever we're doing every day.

Deborah:

There's a constant question like, what do we cook for dinner? Mine just seems to be such a challenging question some days. It's like, what do you cook for dinner? And so I will use AI in that example. I will take video or photos of my pantry and my refrigerator and my kitchen, and then I will say, make a recipe.

Deborah:

And look, I can get more sophisticated, which I do. I don't just say make a recipe. I say, make a recipe that works with my blood type, that works with certain nuances that affect me and my own health. Or hey, I've got to cook for four different people who have four different appetites, and I have no idea how to do that. Give me a recipe that can still use these ingredients, but that I can slightly modify so every one of these people gets the thing that they need out of dinner tonight, as opposed to what would normally happen, which is that I would stress insanely about what to cook for dinner.

Deborah:

And I enjoy cooking, by the way. Love it. I love to cook. But it takes that little bit of stress off of me, because, by the way, very quickly, inside of two minutes, I can do an assessment of my house and what I have. I can get multiple recipes to look at.

Deborah:

And there's really no downside. So what if I mess it up? So what if I didn't do the right prompting or I didn't do the right thing with it? So what? I learn how AI is interpreting my questions.

Deborah:

I learn that, by the way, if I misspell dinner and put diner or if I misspell spaghetti, I get two very different answers. So I actually start to learn the day to day of AI in a way that does not seem overly intrusive or overly complicated. But candidly, I am learning things. I'm learning about bias. I'm learning about how a misspelling can impact an outcome.

Deborah:

I'm learning that if I give it more direction, it gives me better analysis back. I learn that if I can give it more of what I need it to do for me, it will take away the friction that I need to be able to go do the thing that I need to do. And when I think about how to take, quote unquote, your daily life, that's just how I've done it. We're redoing a design in part of my house. And so I'm pretty creative in general, but art is not my thing.

Deborah:

Maybe in certain circumstances, but it's not in my day to day. And so I could for sure go hire lots of experts. And by the way, I have. But I also want to give them some insight into that. So how do you give them insight?

Deborah:

Again, take a photo. I'd love a modern organic view of what this looks like. Can you help me? And I'm a very visual learner, so the fact that my model can produce back out to me some visuals, it maybe takes away the trepidation I have when I meet with some of these contractors or vendors. Like, I have no idea what they're talking about, but now maybe I have a better way to address that conversation because I've educated myself utilizing AI in advance of that.

Deborah:

Now, there's some dangers again to that, right? Because we know AI hallucinates. We know AI could tell us what we want to hear. We know there's other bias that's just inherently built into some of the ways that we ask questions. The nicer you are to your AI, the nicer it is to you.

Deborah:

I do think you have to understand that. And again, you can use a fairly benign daily activity to go learn that. And I would just encourage people to do that because that's going to be what helps you become more comfortable with AI, to learn AI, to understand how to dip your toe into it. You don't have to be and you shouldn't be in a role of innovation to understand how to advance some of the things that are happening in every single part of your life that are being automated by AI either today or in the future.

Daniel:

I love that building up of intuition that you're talking about, Deb. Chris knows I like to ask selfish questions of our guests because I learn a lot personally from these conversations. So my selfish question is this: I've seen a lot, whether it's on engineering teams that we're working with as a company, or with developers that I'm working with at workshops and that sort of thing. You mentioned that Deloitte works soup to nuts. We just talked about everyday life working with these models.

Daniel:

I think some of that intuition is maybe shifting for developers where they bring in some sort of intuition because they've been experimenting with models in terms of single back and forth interactions with a single model. And now they're forced to think about how to apply AI in kind of software integration and technical circumstances. What I often see is them trying to pack everything possible into a single thing that the AI could do. I've seen a lot more kind of shift towards the things that actually succeed in implementations being thought of as systems. I think you used the system word quite a bit in your discussion.

Daniel:

Now AI doesn't really mean a single interaction back and forth with a model, but a set of things, some of which are done by AI and some of which interact with other tools and connections to a database or an MCP server. It's really a distributed system now. I'm wondering, just selfishly, if that kind of shift, from thinking about an interaction with a model to thinking about a distributed system with AI, is something that you've run across, or a shift that you're seeing.

Deborah:

I mean, for sure. The one thing I would add to the distributed system analogy that you provided: it's not the distributed ecosystem that perhaps you once knew. So even if it's distributed, pick your favorite competitor. Your favorite competitor can now be your favorite foe or friend, right? The ecosystem itself has also shifted, because we've shifted all the pieces from maybe a single-model approach to a multi-model hive.

Deborah:

So when you think about that, the legacy intuition was to create the best, quote unquote, model in one single centralized approach that does everything. But that creates a massive systems paradox, particularly when you pack everything into one model: you inherit all of its biases and limitations, and a single point of failure. And again, given where AI is taking us, it's funny to me because people still think that AI is a search bar: if I ask it a question, it gives an answer, and that's 100% correct.

Deborah:

I think that's a 2010s way of thinking because we need to have continuous orchestration. So in your question, we were continuously orchestrating maybe on one single thread. Now this is about how do you actually create continuous orchestration in a multilayered agentic approach where maybe it's actually running in the background. It's not really about the prompt. It's about the connection between the models.

Deborah:

If your AI strategy is based on single interactions, candidly, you're probably not building for the future. It's just a faster encyclopedia as you think about how you're architecting things. And so I do think that's another skill. We talk a lot about resilience. I talk a lot more about antifragility, because I think that's how I've always lived my life. But that's also, I think, how we need to learn in an AI world.

Deborah:

And what I mean by that is antifragility is allowing yourself not just to fail and learn. We love to say we're a failure culture. It's actually the second part that's most important, which is truly learning from the failure, adjusting your way of thinking to become stronger. And in order to do that, if you're not actually pushing yourself even harder, you're truly not failing. We tend to live in this world where we've built a failure mentality based on the things we know.

Deborah:

I mean, I don't know about the both of you, but to me, that's not actually pushing ourselves into a failure mentality. A failure mentality is actually saying, look, I expect that 20% of what I do is going to fail. And the antifragility of that is to say, I am going to become a better person when I fail from it. That is how I candidly live my life. Now, honestly, that's a little scary sometimes because, I mean, I could be failing while other people around me aren't, and they could be seen as, quote unquote, more successful just because they don't have a failure metric in their brains.

Deborah:

I think that's the only way that I, personally and also professionally, will actually be able to remove even my inherent biases. I mean, everybody has a bias. We're built with biases, whether we know them, whether they're cognitive, whether they're sitting somewhere back in the deepest part of our brains. We all have cognitive bias. And the only way we can even start to pivot that one iota of a bit is to allow ourselves that moment of failure and actually say that I expect it to happen.

Deborah:

If we expect it to happen, then we can become something more, built around antifragility. And that, to me, gets back to your cognitive load across multiple models and agents. We're creating a system that actually handles the hurricane of modern data, of modern processing, of modern load. It actually allows for this intelligent disobedience. Like, I'm not being disobedient, quote unquote, for the sake of being disruptive.

Deborah:

I'm actually being disobedient because I hope that we force a different question, a different change, a different semantic than what we've groomed around the status quo. And so if one model is hallucinating, the others can flag it. It's kind of a new way of looking at checks and balances that honestly has been missing in our legacy structures, whether personally, professionally, or inside of a corporate organization.
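The cross-checking idea described here, where multiple models act as checks and balances on one another, can be sketched in a few lines of code. This is a hypothetical illustration, not any specific implementation discussed on the show: the model names are invented, the answers are plain strings, and a simple majority vote stands in for a real consistency or hallucination check.

```python
from collections import Counter

def cross_check(answers: dict[str, str]) -> dict:
    """Flag model answers that disagree with the majority vote.

    `answers` maps a (hypothetical) model name to its normalized answer.
    The consensus is whichever answer the most models agree on; any
    model whose answer differs is flagged for human review.
    """
    votes = Counter(answers.values())
    consensus, _ = votes.most_common(1)[0]
    flagged = [name for name, ans in answers.items() if ans != consensus]
    return {"consensus": consensus, "flagged": flagged}

# Three hypothetical models answer the same question; one disagrees
# and is surfaced as a possible hallucination.
result = cross_check({
    "model_a": "Paris",
    "model_b": "Paris",
    "model_c": "Lyon",
})
print(result)  # {'consensus': 'Paris', 'flagged': ['model_c']}
```

In a real orchestration layer the vote would typically be replaced by semantic comparison or a judge model, but the shape is the same: no single model's output is trusted on its own.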

Chris:

No, that's a great insight there. I appreciate that, and the embracing of failure in order to learn and move beyond it very quickly, in the way that you put that. As we're looking at closing out here, you've done a great job of bringing us into a new way of thinking about AI and the world in which we live, how it's impacting people, and people in their vulnerability having to navigate through this. As you look ahead at the upcoming few years and the rapid change that we're having, the fact that people are having to adjust and maneuver their way through a new type of cognitive load and feeling that whiplash, can you talk a little bit about where you see things going over the next few years?

Chris:

What would you expect next, both from the technology and from the people involved in it?

Deborah:

Yeah, and if I may, before I answer the question, let me give you an anecdote about how I'm going to come up with the answer. If you were ever to look at my profile on LinkedIn or somewhere else, you'll see that one of my favorite side hustles is that I train service dogs. I've been doing it now for many, many years. I'm what's called a, quote unquote, puppy raiser, which means I get a dog from age eight weeks old to 16 or so. And they learn quite a substantive amount of commands with me by the time that they leave me. And then, through the foundation I work with, they're ultimately matched with a veteran or first responder, free of charge for the rest of their life, so that individual can lead a more independent life.

Deborah:

And when I think about the things that I have learned in the world, I've learned more not just from training a service dog, but actually from the things that people have done to fight for our freedom, and what and how these things have an impact on them and their lives. And I correlate that to leadership. There's nothing like thinking on your feet when you've got an eight week old puppy in your hand and you're in the front of a boardroom and it's got projectile everything coming out of every way you could possibly think of. I can't stop the board meeting. I can't stop what I'm there to do.

Deborah:

Like, what is it that I need to understand in that moment to be able to do that? But even more so, trying to force a dog to learn over the course of an hour when it's done learning after five minutes. It's like, hey, I can't do it. I'm a nine week old puppy. You learn how to adjust the training style based on the thing that you're looking at.

Deborah:

And by the way, we do all of our training with positive food reinforcement, all positive reinforcement. We don't have the command no, as an example, in any of our commands. We actually use eye contact and name recognition or other voice recognition as a way to distract and get the focus back on us. Because at the end of the day, a lot of these dogs, whether it's traversing a busy street, or being there to help someone who's blind and deaf and relies solely on this dog to get across a busy street, they actually have to be trained for the edge scenario. And so yes, of course, I have to make sure the dog can walk the person across the street, but I have to train for the edge scenario.

Deborah:

So when something goes wrong, because it inevitably will, how can that person have the utmost confidence that that dog is going to be able to guide them to safety? And that's what I think about for the future. When I think about what the future is going to bring, it's thinking about the world in that capacity. It really is about the edge solutions, the edge challenges, the edge capabilities, built in a world where we have ethical trust. Again, the same thing is happening here.

Deborah:

The trust and the bond between the dog and its handler is hugely important. Even if I taught it every single command, quote unquote, perfectly, if that trust isn't built, it doesn't matter. And so when I think about the way that I've evolved personally over the last fifteen plus years of doing this, the amount of what I've learned that translates from service dog training into leadership: I could have learned more from that service training than from any leadership course. But also, by the way, the empathy of understanding veterans and first responders, or anyone suffering or struggling, and knowing what it means to have that empathy, is truly important, because how I may get in a rideshare or how I may get on a plane, train, or automobile is very different. And so understanding that and then designing for that is what we need to think about in the future. It's not about designing for ourselves.

Deborah:

It's actually about designing for others. And look, I know a lot of people can look at that and say, that sounds like motherhood and apple pie, Deb. That's really nice. But it is really core to this. When you think about some of the greatest technology that we've ever built, years past or years forward, it's because it was solving for something that didn't exist, for a different set of purposes.

Deborah:

And that, to me, is really how we're going to look to the future. Again, I think if we're real about all the things that we talked about, the emotional toll is real. Once the busy work is gone, we're left with the hard work, which is judgment. And at the end of the day, that is how we can get to manifestation. Learning at a different rhythm is not a failure.

Deborah:

It is, in my mind, power, and a different kind of power, as we think about the foundation for tomorrow. That's going to be how we solve the really, really hard, truly complex problems. And it doesn't mean that you have to be the chief innovation officer. It doesn't mean that you have to understand tech. It means that you're willing to be vulnerable and try things that perhaps you didn't think you knew, or things that you actually want to go and change about the way that you view the world.

Daniel:

Well, I think that's an amazing perspective to bring as we close out here, Deb. It's really encouraging for me personally as we head into this year. Yeah, I just want to thank you for taking time out of your work to join us and talk through these things. I think our listeners will really appreciate it. So thank you so much for joining us, Deb.

Daniel:

Hope to talk again.

Deborah:

Thank you so much for having me. It's been a true pleasure.

Narrator:

Alright, that's our show for this week. If you haven't checked out our website, head to practicalai.fm and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show. Check them out at predictionguard.com.

Narrator:

Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.