Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to Cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - cloudrealities@capgemini.com
CR112 Evolving role of AI across industries, AI mini-series part 1 [AAA team podcast]
[00:00:00] That was good. How did it sound, Robert? Uh, it's good. Uh, that episode, Marcel? Sorry, is that a question to Marcel? You should use Marcel's name then, not Rob's. What I mean is, you are absolutely on fire today.
Welcome to Cloud Realities, an original podcast from Capgemini, and [00:00:30] this week, a conversation show setting up a miniseries that we are doing, looking at the current state of AI. I'm Dave Chapman. I'm Esmee van de Giessen. And I'm Rob Kernahan. And joining us for today's conversation, and for the rest of the miniseries on the state of AI, we have Craig Suckling, who's gonna join the production team and bring in some of his ex-colleagues and friends across the industry for us to talk to. Craig is a colleague of ours here at Capgemini. [00:01:00] Craig, how are you doing? Doing really well today. Thanks Dave, and it's great to be part of this conversation. Good to see you. Why don't you just tell everybody a little bit about your day job?
Sure thing. So my role is leading AI across Europe in Capgemini, and I get to work with many different clients, many different organizations, figuring out the complex, exciting and dynamic world of AI. Fab. Good to see you. Can I add confusing to that list as well? Confusing? Confusion's a good add. [00:01:30] Dynamic, and you had a few others in there.
I'm just gonna add confusion in, if that's all right, 'cause I think lots of people are confused as well. That is normally what you bring to the table, Rob, isn't it? Just... Yes, it is. But not on today's episode, because we are doing what we refer to as an Access All Areas, or AAA, episode today. Es, what's an AAA episode?
It is Access All Areas. So it means that you get a look behind the scenes, and it's mostly about digesting and unravelling what you think about [00:02:00] these topics. And I think an AI miniseries has a lot to unravel, so I'm very pleased with that. But that's what we do in AAA: everything can be said. But I think that's what we do in every show, don't we?
A little bit, a little bit. This one, these things, are a little looser, a little less structured, and you get to see some of our thought processes that are gonna go into some of the state of AI shows which are to come over season five. So to that end, what I thought we'd do today, [00:02:30] in terms of framing a little bit of our conversation and using some sort of external input to it:
I actually did some research. I mean, that is quite surprising, Dave. I mean, look, I felt like we needed a bit of a drum roll for that. Yeah, I think one of those like little applause sounds after I said it. Like, define research. Yeah. There you go. Thank you, Marcel. Yes. So I've had a look around, and I was [00:03:00] trying to find a sort of a document from a research organization or thinking organization that, like I said, would provoke some conversation today, and I managed to find one that I think pokes at a few different useful points about where we are. So let me shout that out. It's Gartner's Top Strategic Predictions for 2025 and Beyond, and it's called Riding the AI Whirlwind.
Um, you can find this online. It was actually [00:03:30] published towards the end of 2024, and it's a decent read. It basically covers a series of recommendations and planning assumptions. I haven't shared this with the team; we're gonna go through it now, and we'll have this conversation relatively live against what we're doing, and at the end we'll return to Sam Altman's three key concerns about AI that he's talked about in 2025.
And that's gonna be something that we consistently ask each of the guests in the state of [00:04:00] AI miniseries. So we'll give our thoughts on what that looks like in today's conversation. Sound good? Yes, let's go. Sounds great. I always like a surprise, so let's see what happens. Let's get into it then.
So first of all, let's just talk a little bit about some of the key findings. The key findings, starting at the end if you like, are that the AI whirlwind challenges our privacy and personas. A hundred percent. Personas here being AI representations of human beings that might be used [00:04:30] by business to simulate an employee's likeness, knowledge and behavior, to extend the value derived from them.
That's number one. Yeah. Number two, that operational risks are at the heart of the AI whirlwind. And number three, that the AI whirlwind threatens traditional management structures, using AI to automate decision making and leading to a flattening. So the recommendations, before we go on to our thoughts on them, are: first, that [00:05:00] you revise worker employment contracts to account for the rights to use their AI avatars, in order to reduce legal risks associated with the ownership of names, images and likenesses.
The second, to identify the secondary effects of AI operation, for example energy. And thirdly, to embrace AI's role in influencing critical management decisions, as well as the redesign of organizational structures. And I know you've got plenty of thoughts on that [00:05:30] one, Rob. So let's just unpick this a little bit.
So first of all, challenges to our privacy and personas. Privacy, for me, has been something we've talked about before, but this notion of persona is interesting, isn't it? So it brought to mind two things which I think are gonna become a reality as we all start to embrace copilots, whether we call 'em avatars or agents, that are helping us in our work.
The one [00:06:00] is, these AIs are going to perform activities on our behalf, but they're also gonna create data on our behalf. What does that mean for our personal information? If it's an agent that's acting and creating information on our behalf, is that also our data, or is that an extension of it? And then when you think about the privacy of our information, how does that change how we consider that in the organization?
So I think that's an area that starts to become very gray, if we think about this as an extension of ourselves. [00:06:30] And the second thing is reward. If we have agents actually driving value in the organization on our behalf, and sometimes that could be when we're asleep or over the weekend, do we still get compensated for the value that that's creating as an extension of ourselves?
So I think those are two interesting dimensions. I have no answers as to how that plays out, but I think they're interesting things I hadn't even thought of. And I like the connection you brought there between an agentic infrastructure and [00:07:00] the agent being an extension of yourself. I must admit, when I first read it, my first reaction to it was: what?
They're gonna try and kind of create a series of digital clones of me, but then I've signed away my likeness and knowledge, and value is gonna be driven outside of me as an employee to the organization. I hadn't thought of it quite in the way that you put it, which is that I actually might deploy an arsenal of agents in my own likeness, or from my own perspective, that basically [00:07:30] helps me become more productive.
Right. Well, you could take it a step further and say, well, there's the scenario where you might give an agent information; they then go and talk to an organization that you are morally or ethically not aligned with, and then they pass information about you to that agency, because they're trying to, you know, execute a task you've handed them.
How do you control that? There's that point about the agent accidentally giving data to an organization you don't want it to. But the second one is, if I [00:08:00] get a Digital Dave and I use Digital Dave, and Digital Dave says something, who has the rights to the thing that Digital Dave told me? Cloud Realities Productions, Robert, that's who's got rights to that, right?
Yeah, exactly. So if I use a Digital Dave and they do something, who owns the... And then obviously the other one, which is where the agent does something unwittingly that you didn't want it to do. Mm. I think it is a privacy nightmare. Yes, it is. You're right, it is. But it's also quite simplified, like, you know, [00:08:30] we've got highly caffeinated Dave, we've got Monday morning Dave.
Oh, that's not pleasant. You've got Sunday evening Dave. Sunday evening Dave is one of the worst. Well, no, I think it's the same with personas now in the marketing landscape, right? We've crossed that bridge. We don't really believe in personas anymore, at least that's what you hear a lot, because it's quite binary: this is it, and this is your persona.
And you also have so many different parts. You are a dad, but you're also a [00:09:00] husband. And, you know, I'm a wife. So what part of you are you gonna let the agent be? What tone of voice, what? So fascinating. Es, with your interest in human-centered transformation and organizations,
what occurred to you in terms of the human-agent interface in that conversation? What was your first reaction to it? Well, the confusing part is: how can it be authentic? I think that's still the case with chats, or [00:09:30] every interaction that you have, and it's also the same discussion that we've had about consciousness.
You know, to what level? If you talk to an agent version of me, can it actually bring all of me into that? And do I want it to do that? So yeah, where's the line in that? It's tricky, especially if Robert's fascination with implants, you know, comes to pass. Then, like, if an agent is speaking, and you've got some [00:10:00] sort of chip, right?
And one or a number of your agents is having a conversation with somebody else, are you getting that data directly beamed into the chip? Is that what you think, Robert? So that's not quite how I see the human-brain interface, I suppose. But you've got this, I suppose, you talk about orchestration of your agents.
You send them out autonomously, or do you get them to check and balance their activities and actions, which maybe could be some interface like that. But it was more to do with technology enabling us to be better individuals, as opposed to that then creating some bot network [00:10:30] out there.
Although you could potentially create that type of control interface. The whole point, I think, is: we use a keyboard to communicate with a computer, we might use our voice. The English language is a very clunky interface. If we can bypass that interface and suddenly improve the performance, then we could probably get so much more done.
You know, [00:11:00] using the humble mouse is not a great way to tell a computer to do something. Well, let's use that, Rob, let's use that as a bridge into things like operational risks being at the heart of the AI whirlwind, and understanding, and I guess at least being aware of, if not managing, secondary effects of AI operation, like energy sources.
So operationally, Craig, what are you seeing? Is that disruption? Is it simply what Rob was saying [00:11:30] there, which is it's gonna create radical efficiencies? Is it more than that? And what risks sit at the heart of it, do you think? Well, I think, and it kind of picks up on what Esmee was saying as well, is that, you know, efficiency is definitely one side of it, but I was just wondering: how much do we start to sacrifice creativity?
Mm-hmm. Because it's like, yeah, it's great that we don't have to use a mouse, so I look forward to that day, but some of the most messy, inefficient conversations are where the best ideas are. Hmm. As well. And so, how [00:12:00] do we balance that stuff at the same time? I think that's something we have to figure out as well, as we start to become a lot more augmented by AI.
Hmm. So I think within that sphere, within the operational risks sphere, some of the stuff that I can see, if you just take one area of operations, and of course there are millions of areas of operations that this kind of thinking could be applied to. But if you look at a supply chain, and a supply chain effectively becoming agentic to a [00:12:30] high percentage: what are the operational risks that that introduces, do we think? Clearly it will introduce potentially faster responses, more efficient operation, et cetera, but there must be a lot of risk involved in that level of automation, particularly in the early years.
So I was having this conversation earlier this week. I'll give you guys maybe two thoughts on this. The one is, and supply chain's a nice area maybe to focus in on: if you [00:13:00] have an agent that's helping to automate how you procure and onboard new vendors and purchase new goods that you're using to manufacture,
what threshold do you need to establish? 'Cause it's great that it's doing it faster, but do you really want the agent... it's fine, maybe, if it's buying something for 10,000 pounds, but do you really want it going away and executing a million pounds' worth of procurement for you? Right? And, you know, to what level do you trust that kind of threshold of risk for it to take?
And then the second thing is, you know, the threat of [00:13:30] ruthlessly optimizing for singular objectives as well. Like if you are saying, oh, Mr. or Mrs. AI agent, on my behalf go away and maximize the level of cost efficiency that I need to drive in terms of procuring goods through my supply chain, will it do that at all costs, in terms of unethically sourcing,
Mm. with the worst carbon footprint, from a vendor, because it's the cheapest? Mm. And those are simple examples, but you can see how those extrapolate out. Well, it's when you ask a human to [00:14:00] do that, you assume there is a basis that they will operate to: moral, ethical, understanding your viewpoint. They sort of put that in, whereas if you coldly...
Just building on that, Rob, sorry to interrupt your flow, but literally in most corporate organizations, not only is that true, but actually it is a terms-and-conditions-level issue, where we have to do training on ethical behavior, we have to do training on, you know, anti-bribery, all of those sorts of things, which [00:14:30] is corporate social responsibility, isn't it?
Yeah. And we assume the human intelligently applies a wider understanding of how the world works, so that when they go do something, it's sensible. So we give them autonomy and empowerment. And there is this bit about, you ask an agent to go do something, to your point, Craig: how do you codify all those guardrails in effectively, so that you know, when it undertakes an action with a sizable potential risk or value attached to it, that it is actually going to do it in a way that we want it to?
And you can [00:15:00] just see it. Somebody will do something, it'll work 10 times, they'll unleash it and it will go and do some something. Awkward and unexpected. And so that AI blooper thing that I know we've spoken about before, they have happened and they will continue to happen as we learn. But that's quite a big, if you think about the human mind and what it contains by the time we get to work for a corporate, it's got a lot of back story in there about how society operates to, to allow us to integrate and deliver what we need to deliver.
It's funny from the other side as [00:15:30] well: what if your AI or your persona is flawless, and it actually ingrains your values? Like Marcel. Yeah, yeah, yeah. He's the closest to perfect. I mean, if there was a dictionary definition of a flawless human, it would say our producer, with accurate precision, yes, indeed.
But what if your digital persona was, like Marcel, so flawless, and you meet me in person, or a company in person, like customer service in person, and you're just a crushing disappointment? [00:16:00] Yeah. Then your digital version is way better, flawless and perfect, and then you meet the human and it's like, oh damn, you really are.
Yes. I love the idea that your digital twin is actually more appealing to other humans than you, the human, yourself. In your creation of it, you've just done the entirely idealized version of yourself, which is how you see yourself: the Instagram digital twin. Yeah. The perfect version. Yeah. And you wanna marry that person, right?
Definitely not in real life. Yeah. Well, [00:16:30] that's a whole other podcast right there. Well, what's the TV show where he clones himself, but they have slightly different personalities? Or he's got a robot that's copied himself, and he sends his robot out to meet people. And exactly that happens, where the robot's more charming and they prefer talking to the robot.
And then when he turns up... Behind the scenes, you know: replace myself with the robot, now we're getting somewhere. And they're disappointed, and he gets all his conversation wrong and everything else. Oh my God. So that's gonna end up becoming prophetic. That's disappointing. But anyway, let's [00:17:00] zoom out and talk about the impact on management structures.
So here again, embracing AI's role in influencing critical management decisions, quote unquote, and the redesign of organizational structures. Now Rob, I know you've got a lot of thoughts on this. This is the mother lode, isn't it? Ah, this is it. The autonomous organization: replace an entire department with a configuration. So you basically define the outcome you need from the [00:17:30] organization, and then it goes and configures itself, gets in the right agents to do the class of problem you need to solve, chains them together, and then creates an autonomous, you know, finance department, autonomous supply chain, autonomous HR, whatever that might be. I play about with the dates; I think something significant will happen next year, as we've discussed on the season five launch. It's just:
It's just. What happens after somebody makes the autonomous [00:18:00] organization work? The scramble associated with how do you implement that in your own existing org, where you've got loads of other problems that you're struggling with. The skills gap is sort of access to capability and such like, and does something else rise and slingshot straight past you because they're able to do it faster than you.
So I think there's something very dramatic in the autonomous organization. So Craig, I think what, what it feels like to me building on Rob's point is over the course of, let's say. Even just the last three years of generative AI scaling for want of a better term. And then of course with [00:18:30] the, the introduction halfway through that of, of agents and agentic infrastructures, it is now reasonably straightforward.
I think to start, well actually maybe I'm, maybe I'm overstating that we're beginning to understand. Uh, the implications that this might have at organizational structure level, I would say just right the beginning of that, uh, process. Uh, but you could get quite ambitious with your thinking in the way that, uh, Rob is articulating for us there.
What are you seeing on the ground? [00:19:00] Yeah, so good question. I think it's moving fast. I agree with Rob: next year we're gonna see some serious breakthroughs in this area. Where a lot of people are at right now, you can think of it as almost just a first, process level: let's just start to automate how we do demand forecasting, or how we predict risk, or how we engage in a contact center. So single-process-type steps. And where this is getting to really quickly is where you start to go across functions. [00:19:30] How am I connecting how I'm doing good customer experience with how I'm pricing my products and how I'm creating my marketing? That's gonna be the next level: we start to connect functions. And then Rob's holding ground is the full autonomous organization as well. I think a lot of people are gonna get in the way of that, to be honest, because I think culture, that's gonna be a big barrier, because it's gonna suddenly, obviously, be very strange.
It's gonna require a lot of trust and a lot of letting go to let this happen, and a lot of shifting to [00:20:00] just pulling strategic levers across the organization and letting the agents run. So there's gonna be these friction points that we're gonna encounter, and one of those is gonna be culture. The other is actually exposing so much of the inefficiencies in our organizations, where today, mm-hmm, those are actually quite useful inefficiencies.
Those are actually quite useful inefficiencies. Right? There's a lot of politics, there's a lot of silos. Those things help different people to be successful across an organization. Mistakes get made, people send emails to the wrong people. All sorts of stuff. All sorts of stuff, right. Happens in orgs. [00:20:30] Which, which, yeah, I mean, imagine, imagine, I mean, especially if it was a significant email, I mean that, that would be devastating.
Agents would never make such a foolhardy mistake. They would never make such a mistake, would they, David? Never. Never. Even if it was a David clone, right? Digital Dave would not make that mistake. He would not have done that, Craig. He would not have done that. It is an exact example of Esmee's point: the crushing disappointment of reality. One of the analogies that I keep coming back to on this is, if you look at autonomous driving and how that's playing out. [00:21:00] Yeah. It's like, you know, I think we're all comfortable now in letting go of the wheel if we're just in automatic cruise control on a highway, and if you grab a Waymo in the nicely, neatly gridded streets in the US, it'll take you around in a nice, predictable way.
We're not ready to have an autonomous car drive through the streets of Naples, or somewhere in Italy, 'cause it's just absolute chaos, and unpredictable. It's the same if you think about organizations: how comfortable are we [00:21:30] right now to let go of the wheel, and where does that kind of sit with us?
The predictable, structured places? Great. The messy center of Rome of our organization? A little harder to let go of the wheel on that one. Not so much. Es, when you think of the sort of organizational changes and the impacts of it, and Craig already just touched on this, particularly things like culture:
what occurs to you, both in terms of how much it will get in the way, and how much it will [00:22:00] potentially extend the adoption lag piece that we've talked about previously? I think it really depends on the level of gravity within that culture. So if you already have the agile mindset predominantly there in the organization, they are curious, innovative, explorative in what they do.
And I think if you have that, you know, if the biggest part of the organization has that and that's your center of gravity, they can actually drive it further, I think. But if your center of gravity is still in [00:22:30] control, and it's also very closely connected to your reason of existence as a company...
Like if you, for example, work in accounting, the mindset of an accountant is not so explorative, to be honest; you know, it's preventing risk. So that, I think, is holding you back when it comes to really opening up all kinds of possibilities and letting agents run the entire system.[00:23:00]
All right, so we've talked over the recommendations. Now let's look, in those three sections again, at what Gartner set out as the strategic planning assumptions. I'll give you some data points now, and let's see what we make of them. So, back to privacy and personas. A couple of data points. By 2027, so that's, you know, 18 months' time, [00:23:30] 70% of new contracts for employees will include licensing and fair usage clauses for AI representations of their personas. So that's 18 months out from now. By 2028, technological immersion will impact populations with digital addiction and social isolation, prompting 70% of organizations to implement non-digital policies. That's 24 months from now? No, a bit longer, 36 months from now. [00:24:00] By 2027, 70% of healthcare providers will include emotional-AI-related terms and conditions in technology contracts, or risk billions in financial harm. And by 2028, 40% of large enterprises will deploy AI to manipulate and measure employee mood and behaviors, all in the name of profit.
So, who would like to go on that? Let's start at the beginning and just talk about this notion of persona, [00:24:30] and that being in Ts and Cs, Craig. How did that land on you? Yeah, so I think it's gonna be spiky. I think that we will see some of the more AI-native, fast-moving, digital-native companies probably move faster on that.
I think there's gonna be slower segments of industry that are a little bit slower to move. I would love to see what those Ts and Cs say as well. It's gonna be very hard to be specific, isn't it? Reminiscent of [00:25:00] the actors' strike recently in Hollywood, where, and I think the writers did a similar thing, they went on strike because of exactly that:
their persona being replicated, either before or after their death, on an ongoing basis, and them losing the rights to it. It's almost like an individual version of that, isn't it? It is. And, you know, when you work for an organization, often a clause of the contract is that any IP you generate in that organization stays with the [00:25:30] organization.
Yeah, yeah. Will there be a T&C around, you know, your agents that represent you, saying the IP they create is not your IP? I mean, is that an example of a T&C? I'd love to think about that. Yeah, I suppose it gets complicated, 'cause you think about the creative one: if AI gets really good, I'll just create an actor from scratch that's compelling and likable to the audiences and everything else.
But when you get into the workplace, what happens if it's a fusion of three personas into one, for a highly effective result? Mm. You've got this [00:26:00] merger of capability. So yeah, I suppose the corporates are gonna go, it's just all ours and we can decide what we want to do with it. I mean, it feels to me that both the timeframe and the number, 70% of new contracts, feel a bit too short and a bit too high to me, that one.
So I think that's quite aggressive; I'd be surprised if that happens in such a short timeframe. From a geopolitical perspective globally, [00:26:30] you're likely to find that's the sort of thing that will force legislation to say it's illegal or it's legal. Yeah, I think that's right. It's not gonna be a corporate-by-corporate thing, it's going to be a country-by-country thing.
So you can imagine some saying, sorry, citizens' rights: they always own their own identity and persona, versus other countries that may say, well... Like the Personal Data Protection Act, right? Yeah, a Personal Persona Protection Act, I think, is quite likely. See what you did there, Dave?
Very good, mate. I mean, you are on fire. That incredible amount of caffeine [00:27:00] you've consumed today has really put you on the edge. It's put me on the front foot. My palms are sweating. Yeah, you'll see some that will just say no, can't do, and it'll have to be at that level.
If you get into the state of, did you once work for X, Y, Z, then it's implicit you signed this, and then they sell your rights on and all sorts of stuff; you get into a horrible social mess. Doesn't it sound great? Like the writers' strike, I wonder if unions across all workforces will really slow that down.
Yeah. And from an adoption side, I know people, I'm not gonna mention any names or organizations, that [00:27:30] still have trouble using their webcam, or using, you know, a digital photo in Teams so that you can actually see, oh, that's how the person looks. But there are some people that just present using their picture.
So that's gonna be a huge leap, to actually, you know, give away your digital rights. Agreed. Never gonna happen. Agreed. Now, the next one, very briefly, Es, I'll come to you with, which is the one about: by 2028, technological immersion will impact populations with [00:28:00] digital addiction and social isolation,
prompting 70% of organizations to implement non-digital policies. Now, the first part, my take is, I think we're already there, aren't we? Not so sure about the second part. What was your reaction to that? It's opening up a lot of possibilities to do tai chi during work time. Oh, I can only welcome that.
I can only welcome that, speaking as somebody who is, you know, well versed in the art of tai chi. Yeah, no, I do [00:28:30] see that happening, to be honest. Because maybe we need that. It's the same as what we discussed with Jitske Kramer in the last season: maybe we should create space to go offline, and really create space to connect and to be creative. And maybe that needs to be forced, because everything is digital already. And if we do that very consciously as an organization, you create that space. And I also believe that, you know, then you can unite energy altogether. [00:29:00] But you do have to facilitate that, and scope it and frame it.
But yeah, I can see this happening, especially from an HR perspective. Not in my world. When we're all connected to the net and you can't escape... Rob's gone the other way: he's gonna be online all the time. Never offline, multiple personas banging away. No, no, no. It's the matrix. The matrix.
The matrix. I, was just wondering about that because you're like, you know, for me, like when I'm really struggling to get an [00:29:30] idea at work, I'll go take a walk. Mm. And like when, when you're alone and like relaxing, some of the best ideas and clarity come to you. But that's data coming to like Ralph's point.
You reckon, if we're connected, that would be a really rich source of data for AI? It's like all that creative, unstructured data that AI doesn't have access to. So will that be a really rich input now? Yeah, but it's not consciousness, that's something different. And I think that's also what Dave has been saying in the past episodes: is it really consciousness? 'Cause that's something coming from a [00:30:00] space that AI is not gonna connect to. So does the consciousness matter, though? Take the Turing test: if I believe it's a human, it doesn't matter if it's conscious or not. The outcome is, I believe it's a human, therefore I'm only interested in the outcome. I don't care about whether it was consciously created.
I think it matters. For me, it matters, Rob, on the human level. On the human level? Sure. But where I was going with it is this distinction between AI as a tool and AI as a [00:30:30] functioning member of society. And to me, the difference between it being a tool and it being, you know, kind of at human level:
it has got to be the spark of consciousness, hasn't it? Because up until that point, it's just a very clever tool. Well, this is the Anders Indset view of, how do you know you're not in a simulation? Is the simulation real? Right? If you are experiencing it and you believe it to be true and conscious, that's it, isn't it?
Then you perceive [00:31:00] consciousness. Also, I just think the line is gonna start to blur, because you look at, you know, Japan, where there are a lot more robots helping in healthcare and in old people's homes. That's a tool. But now they're also relying on it for emotional support a lot more as well.
And so it starts to become an emotional connection as well. That does get incredibly blurred. Yeah, the lines around it. It does. Let's move on. Let's talk about some of the data that underpins some of the operational risks that we talked about earlier. So I'll give a couple of [00:31:30] examples of this.
By 2028, 25% of enterprise breaches will be traced back to AI agent abuse, from external and malicious internal actors. And then by 2028, 40% of CIOs will demand guardian agents to be available to autonomously track, oversee and contain the results of those nefarious AI agent actions. So Robert, how does that stack up for you?
[00:32:00] Corporate AI warfare? It's like an arms race on both sides. So let's take a scenario, right? I implement my contact center and I make it agentic. But on the other side of the fence, I'm a human; I get my agent network, and then they start ringing up the contact center on behalf of me, and it's agent to agent to agent, and then it just goes up and up, and up, and up, right?
And meanwhile, the poor planet is getting hammered by the amount of energy required [00:32:30] to deliver this thing. So for every agent I build, I need other agents to watch it. How do you manage and configure that well? And do we just get agents to configure it all again? And it's like, wow, where does this end up?
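Rob's "agents watching agents" is essentially what Gartner's guardian-agent prediction, quoted above, describes. A toy sketch of the shape of it might look like the following: a watcher that reviews a stream of worker-agent actions and quarantines any agent that breaches simple rate or spend rules. The action schema, limits and names here are all assumptions for illustration, not anything from the show or from Gartner.

```python
from collections import defaultdict

MAX_ACTIONS_PER_MINUTE = 60    # agents act at machine speed, so cap the rate
MAX_SINGLE_SPEND_GBP = 50_000  # no single action above this without human review

def guardian(action_stream):
    """Watch worker-agent actions; yield (agent_id, reason) whenever one is contained."""
    per_minute = defaultdict(int)  # (agent_id, minute) -> action count
    contained = set()
    for act in action_stream:      # act: {"agent": str, "minute": int, "spend_gbp": float}
        agent = act["agent"]
        if agent in contained:
            continue               # already quarantined, so ignore its further actions
        per_minute[(agent, act["minute"])] += 1
        if per_minute[(agent, act["minute"])] > MAX_ACTIONS_PER_MINUTE:
            contained.add(agent)
            yield agent, "rate limit exceeded"
        elif act["spend_gbp"] > MAX_SINGLE_SPEND_GBP:
            contained.add(agent)
            yield agent, "spend threshold exceeded"

if __name__ == "__main__":
    # 61 actions in the same minute trips the rate rule on the 61st.
    stream = [{"agent": "procure-bot", "minute": 0, "spend_gbp": 100.0}] * 61
    for agent_id, reason in guardian(stream):
        print(f"contained {agent_id}: {reason}")
```

Rob's recursion worry shows up even in the toy version: the guardian is itself just another agent, so something has to watch it too, which is where the practical cap he mentions next comes in.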
There has to be some cap on it practically, and it's going to be something like energy and capacity in compute. Well, on energy, which I'm glad you brought up, 'cause I was gonna bring that into the conversation: it says that through 2027, which is 18 months out, Fortune 500 companies will shift [00:33:00] $500 billion from energy operating expenditure, e.g., you know, paying your fuel bills, to microgrids, to mitigate chronic energy risk and underpin AI demand. So I'm assuming by microgrids they're talking about those container-sized SMRs, yes, small nuclear power stations that sit next to data centers, and things like that. Craig, does that feel feasible within that timeframe?
I think that's happening already. It's definitely happening already. You look at all the hyperscalers, and they're [00:33:30] already putting CapEx into modular nuclear reactors right next to the data center. So we're seeing a decentralization of energy, for sure. Alongside that, we're also seeing huge, you know, advancements in technologies like battery storage as well.
And so we're gonna start to see the ability to generate and store energy in a more decentralized way happening. And obviously, you know, renewable energy is increasing as well. So I think we will definitely start to see... [00:34:00] there's a couple of things here as well, when we think about AI consumption of energy.
The one is, AI is becoming increasingly efficient as well, and we're starting to also see AI being able to operate and deploy on devices. So we will also start to see less reliance, hopefully over time, on always going back to these big, bulky data centers, and more micro use of energy. We can already run some machine learning, some AI, on our phones, and, you know, OpenAI came out with a version of GPT that can run on a powerful [00:34:30] laptop.
So we're gonna start to see AI fragment onto device more, and with that we'll see energy consumption also fragment as well. Just to go back to the notion that we started with in this section, around cyber and agents in cyber. My take on some of those numbers, which is, by 2028, 25% of breaches will be traced back to agent abuse:
I think 2028 is both quite far out from now, and 25% [00:35:00] feels low. And similarly, 40% of CIOs by 2028 wanting guardian agents: again, that feels like it will be a higher number in a shorter timeframe to me. I.e., I feel that the impact of this in cyber is kind of, if it's not already with us... it's not gonna be 2028, is it, Craig?
I fully agree, and I agree with Rob that this is an arms race, because there's gonna be agents on both sides of this, and it's gonna be a constant battle to keep up with advancements. Yeah. This is already [00:35:30] happening now; we're already seeing this. You look at things like X, formerly known as Twitter.
Right. And, you know, most of the conversations on there now are claimed to be exactly that: bots. Right. And so, dead internet theory, isn't it? It's a dead internet. And the other side of this, that someone spoke to me about the other day, is, you know, this could risk actual model collapse as well.
Because if a bunch of bots are actually creating data to skew results, let's say around customer sentiment, you know, you could [00:36:00] easily hack an organization that's trying to figure out, who are my core customers that I should be selling and marketing to? If they churn out a bunch of synthetic data, how do we know that it's synthetic data that completely skews what we see in the market and changes what we're doing? So there's gonna be a lot of different attack vectors, some we won't even have thought of, but a lot of it's happening today. And did you see that wonderful thing that happened on Twitter, where they knew it was bots pushing misinformation?
They responded to the bot and got it to do like an [00:36:30] SQL-style injection, and reverse-engineered the prompt, yeah, by sticking something in, and got it to reveal its configuration and the fact it was a bot. And that's a very clever thing to do. But it was that sort of realization that there are lots of bots out there, and they're configured, and they're just hammering it, and it's going. And, to one of the conversations we had earlier, the speed at which these transactions are gonna happen: it's not gonna be human speed, right? It's gonna be ultra-fast speed. So [00:37:00] let's move on to the final section then: management structure, and a couple of data points around this.
So through 2026, so next year as we speak, 20% of organizations will use AI to flatten their structure, eliminating middle management positions. And by 2029, 10% of global boards will use AI guidance to challenge executive decisions. Now, I think 2026 and 20% to flatten [00:37:30] is probably the right timeframe, maybe a little conservative in terms of the amount of organizations responding to it. And then 2029, 10% of boards:
I think that's way too far out and way too low a number. I would be expecting, in 2026, 10% of boards using AI to at least support their decision making, wouldn't you, Craig? Yeah, fully, I think so. So again, to the earlier point, I think the world is jagged and [00:38:00] messy, and already today we're seeing middle management levels starting to flatten in certain aspects.
If you look at contact centers, agents in many organizations are handling the first triaging of calls, so that's a flattening. If you look at software engineering, we're seeing this move really quickly, where a lot of middle-management engineers or product managers are also starting to flatten.
And so this is gonna happen in different spaces at different times. I think that prediction, in terms of the flattening, is well within the gift. [00:38:30] But the point on the executive boardroom, I think that's gonna have a slower uptake. I'm less bullish.
I think that there's a lot more emotional conversation, a lot more political dynamics, that happen in boardrooms, and I think that's gonna create a friction point. And there's also this problem: if you just insert AI inside your organization and get it to control parts of the org, will it overwhelm the human parts of the [00:39:00] organization?
'Cause it will work at a blistering pace, 'cause it can. And then what happens to the rest of your organization, you know? Right. And it's this: have we properly worked out what the human-AI fusion is, and how we need to protect ourselves from just getting overwhelmed by this system constantly hammering us with requests and activity and everything else? Will we be able to cope is an interesting point. I don't think we are. Working with a lot of leadership teams [00:39:30] and management teams, one of the things they say is, yeah, this will give us transparency.
We will see what the teams are doing and how, you know, decisions need to be made. But they really do not want transparency. Most of the time, most people want to hide behind the veil of opaqueness. So I think this interface of culture with kind of highly rule-based agents is actually a pretty fascinating subject.
Yeah. But what I think might happen, [00:40:00] and I'd maybe bet a decent bottle of red wine it's a bit faster than 10% by 2029, is some form of Alexa-like device in a boardroom that can be queried. So instead of needing to, you know, kind of have weighty packs, you could maybe query an Alexa-like device that is a ChatGPT front end with agents at the back that's automating things, to either ask your organization certain information about itself, [00:40:30] or maybe even trigger certain actions that would then be released through kind of deep automations across an organization.
So it's not that it would necessarily be a conversational part of the board; I would think of it as a seer, like a data seer initially, and then maybe executing a few little things. Initially, I think that's probably implementable now, if you had, you know, kind of all the time and money in the world. It wouldn't be a cheap thing to implement, [00:41:00] for a lot of reasons, but I think you could get there quite quickly, couldn't you? I fully agree. And I think already today, you know, some of the boards I talk to are using copilots to help inform thinking and to source information really quickly. And there's that layer just beneath the board, which is often an entire army of
people that are grabbing data, grabbing stats, preparing packs, creating all of these different scenarios to pore over. That's gonna go quick. Of course, Craig, then the most logical thing to do next is just replace the whole board and get the AI to do it.[00:41:30]
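Dave's boardroom device is, in today's terms, a thin natural-language front end routing questions to agents that own different data domains. A deliberately simplified sketch, with keyword routing standing in for what would realistically be an LLM intent classifier; the agents, routes and figures are all invented for illustration, not a real product or the show's design:

```python
# A board member asks a question; a router picks the agent that owns that
# domain; the agent answers from (here, canned) organizational data.

def finance_agent(question: str) -> str:
    revenue_by_quarter = {"Q1": 41.2, "Q2": 44.8}  # illustrative figures, in £m
    return f"Revenue by quarter (£m): {revenue_by_quarter} (source: finance data mart)"

def people_agent(question: str) -> str:
    return "Attrition is 8.1% on a rolling 12 months (source: HR system)"

ROUTES = {
    "revenue": finance_agent, "margin": finance_agent,
    "attrition": people_agent, "headcount": people_agent,
}

def boardroom_assistant(question: str) -> str:
    """Route a natural-language board question to the agent owning that domain."""
    for keyword, agent in ROUTES.items():
        if keyword in question.lower():
            return agent(question)
    return "No agent owns that topic yet; routing to the executive pack team."

if __name__ == "__main__":
    print(boardroom_assistant("What was revenue last quarter?"))
    print(boardroom_assistant("How is attrition trending?"))
```

The "data seer" stage is just these read-only routes; the riskier "trigger certain actions" stage Dave describes would add write-side agents behind the same front end, which is exactly where the guardrail and guardian patterns sketched earlier would need to sit.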
All right. I mean, fascinating stuff. It is fair to say, just in the conversation we've had so far, the impacts of AI and the depths of those impacts are startling, aren't they? I mean, I think it's unprecedented, really, the impact that technology may well have, not just [00:42:00] in IT running a little bit better, or processes running a little bit better, or being a little bit more regulated, but right at the heart of decision making at the highest point in organizations, and, you know, having a giant cultural effect.
I think to that end, Sam Altman earlier this year, 2025, raised three concerns about AI, which we've pulled together into three basic bullet points. So let me just put these out for [00:42:30] us, and, within the context of what we've just been talking about, how do these land on you? So, number one: misuse by malicious actors, which warns that superintelligent AI could be exploited to develop things like bioweapons or disrupt critical infrastructure, posing immediate and severe risk. The second, loss of control, highlighting the long-term danger of AI systems becoming too powerful to shut down, and emphasizing the need [00:43:00] for robust alignment efforts to maintain human oversight.
And then thirdly, accidental over-reliance, which is the concern about society gradually ceding major decisions to AI, probably in small chunks over time, including emotional dependence and blind trust in systems that we don't fully understand. The irony of the person driving the arms race warning us of the danger of the arms race.
[00:43:30] I'm glad he has. I'm sorry, but that is just brilliant. Yeah, I just love it. Say, oh, we'd better watch out for all this stuff, but I'm the one implementing it. Great. Brilliant. Well, it's a bit like Musk, you know, kind of spearheading writing the letter to pause development of AI, just shortly before he released his own AI.
Yeah, I'm sure there's nothing cynical about corporate behavior there, but oh my. Well, I mean, they're all true, right? A hundred percent, all true. Unintended consequences: we have unleashed the Kraken. [00:44:00] It will be implemented, and it'll be the classic: we didn't really stop to think, we just ended up doing it, and then we'll go, oh dear me.
That happens, and we'll find that the damage is probably already being done, but we won't realize it until three years' time or whatever. Like the internet becomes dead, or we get model collapse because, you know, the data sets are no good. And I think it's just that, but once something happens like AI at this level, and the human-technology [00:44:30] interface is redefined, can you actually stop it? What we've gotta do is learn to cope with it, really. And yeah, okay, we can look at certain things, and we're creating roles like AI ethicists and suchlike to deal with it.
But I mean, you can't suddenly... we're not gonna stop. It's happened. That's it. We're going, and we've just gotta try and do the best we can with the thing that's just happened. The genie is firmly out of the bottle. But I like to be positive: you know, at least we are aware of these big catastrophic things that could happen, and we have some [00:45:00] time to think about how we safeguard against them.
The second one, on, you know, AI just starting to really go independent or autonomous and act without our control: there's this entire thing of, how do we align it, and how do we solve AI alignment? And that's a real challenge, because, you know, even between all of us here, who I'm sure have a lot of similarities in our thinking and ethics, I bet you there are some things we see separately.
Mm-hmm. And, you know, everybody has a slightly different take on what it means to have really [00:45:30] strong ethics on a particular thing or topic. And so when we say alignment, to whom? And will we end up having... and maybe this is the answer to this: just like humans learn from diverse conversation and listening to each other and debating, do we need to have a diversity of AIs that allows for many different types of alignment to come together and debate?
And is that a way to get around that topic? I don't know, but at least we have awareness of these issues. Feels right, though, doesn't it? The idea of the diversity [00:46:00] being there, considering it represents a planet, lots of humans who are very diverse. So you sort of think, well... but you could get some megacorp rise, and one AI gazumps the lot, and then that would be a shame, wouldn't it?
So the whole concept of sovereign AI, as in the AI that represents me and my nation state and my ethics, almost needs to be protected, doesn't it? Yeah. Yeah, I'm thinking about simplification, 'cause we were talking about flattening organizations, and the over-alignment got me triggered. Alignment is, I think, the biggest challenge [00:46:30] for a lot of corporates, right?
Oh, we need to align on this, we need to align on this. Especially with silos. So if you have AI help you, well, take down the complexity of the organization to a level where you can actually understand what's going on and you can make decisions, then I think it's really gonna help leaders. Especially with the over-alignment and the complexity; it's all there.
It's like an octopus, right? AI now, intelligence is everywhere. So I think if it can help us bring it down to a level where we can actually [00:47:00] understand what's going on, then it can help us be better in alignment and make better decisions. So that's what triggered me with the over-alignment.
Indeed. Indeed. I think my reaction to these three things, and I think it was similar, I think you said it, Craig, is that we're probably already seeing them to some extent, in each of these three categories, and it's how far we allow that to go, and how we deal with it. So the misuse by malicious [00:47:30] actors, to a certain extent, ties quite clearly to some of the Gartner data that we talked about earlier.
Like, you know, that's underway, people are responding to it, and there is telemetry to tell us that that might happen. So how does that, you know, kind of come under control, or will it ever? Do we just need to get used to dealing with it? Yeah. The one that I find the most likely, and it's also the most kind of human, haphazard thing, is accidental over-reliance, which to [00:48:00] me feels like that idea of ceding not only decision making, but, Craig, to your point earlier in the conversation, transactions: you know, businesses transacting with other businesses at higher levels, with greater stakes attached to it.
That to me seems like the one that could quite straightforwardly happen, and, you know, we might even see it happening as early as 2026. And Dave, on over-reliance, we're starting to see some of that creep in now already, [00:48:30] because if you think about some of the surveys and research we're seeing coming out of educational institutes, more learners are increasing reliance on AI.
Are they losing their ability to have critical thinking skills? And that's a worry, right? So how do you actually guard against that? We all have to stay mentally active and fit, and not just fall into this trap of, AI is gonna do all the thinking for me and I'm just gonna copy-paste it. So we have to keep critical thinking skills [00:49:00] up alongside AI.
Wonderful point to end on, because I think that's at the crux of all of this, actually. I was gonna make a similar point; it was not as well articulated as yours. I was gonna say something along the lines of: those of us that are in responsible positions in the implementation of this all have an accountability, I think, to think about this ethically and to present these dangers.
And to your point, Rob, even Sam Altman, you know, who is driving a lot of the [00:49:30] pace around this, being clear that there are dangers with this that we need to watch out for. Before we close out, a quick little round the table: anything else on anybody's mind that we haven't at least touched on, in what's been a fairly wide-reaching conversation? I suppose the one springing to mind as we went through was: we all deploy our agents, our agents work together, our agents interact on our behalf. Do we end up killing societal conversation and [00:50:00] connectivity, because everything's through a digital interface, and we create a new generation that is actually just in its cocoon, being fed results back?
And I suppose that's a bit of a fear: that we kill conversation as part of this. I don't know where it's going to go or what that future might hold, but you can definitely see a situation where, on the over-reliance point, social connection actually starts to tumble. Well, that's exactly what [00:50:30] Esmee was guarding against in the first episode of the season, right, Es? Absolutely.
It was that thing about, how do we protect what we have as a society? We don't want to lose the human touch. So you actually do not want to have a chip in the brain? That's contra to what you said before. It's a win, it's a win. You've talked me round, actually: oh, I won't take the chip interface yet.
Although, if you are the only one with the interface, you win. Once everybody has an interface, it's probably a dystopian future. So on that note, Craig: [00:51:00] between now and Christmas in this season, we are gonna have a series of episodes released looking at the state of AI with some of our colleagues across industry.
So do you just wanna set that up as a wee bit of a teaser? Yeah, sure thing, David. So, super excited about this miniseries. We've got a great selection of leaders from across different industries, great friends and colleagues who are, you know, at the frontier of how they're thinking about AI in their [00:51:30] respective organizations and roles.
And we're gonna have some great conversations. We're gonna talk about the realities of what it looks like now, and the multifaceted aspects of AI, not just the technical side but also the cultural, the literacy and the organizational sides, and how to lead from the front on AI. And we're also gonna jump into the future, like we've been doing in some of this conversation, around what this looks like for these organizations as they start to really drive this forward,
and as we all start to embrace this future, coming [00:52:00] next. Thank you, Craig. So you'll find those episodes released in a dispersed fashion between now and Christmas, so please join us for those. Good to see you today, Craig. If you'd like to discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at cloudrealities@capgemini.com.
We're all on LinkedIn and on Substack. We'd love to hear from you, so feel free to connect and DM if you have any questions for the show to tackle. And of course, please rate and subscribe to our podcast; it really helps us improve the show. [00:52:30] A huge thanks to Craig for joining us, our sound and editing wizards, Ben and Louis, our producer Marcel, Mr. Perfect, and of course to all our listeners. See you in another reality next [00:53:00] week.