Exploring the practical and exciting alternate realities that can be unleashed through cloud driven transformation and cloud native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to Cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures and how does human experience factor into all of this?
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides though a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
I know of Norwegian, we generally go with P& O because it's British. Robert is a cruiser. Robert is a cruiser, aren't you, Robert? I've been on a few, yeah, yeah, no idea. Yeah, I love it. I didn't, I was like, you just have to surrender, that's it, you know, and once you do that, then it's like, it's amazing. Like, I've, I've, I've, I've, I've, I've been watching Napkin Folding, like I was, I was just sitting there and I was just like, you really enjoy this?
Welcome to Christmas Realities, an original festive podcast from Capgemini. This week, we're going to have a conversation show exploring 2025. What might we see? What can we expect? And how do we deal with it? I'm Dave Chapmanger. I'm Esmee van de Gluwijn. And I'm Rob Snow Manahan.
So welcome everyone, it is Christmas time. Is everybody feeling suitably festive? Yeah, actually, actually I need to put more Christmas music on, because I was saying this, it doesn't feel like it's Christmassy yet, and we're getting close to Christmas and normally I'm more Christmassy. So I need to put the pogues on, like, full blast, on repeat, in the background for about an hour before I get Christmassy.
Since we got back from Vegas, I have specifically been playing Christmas music, because, like, we did all the conferences and stuff, and it felt like we've kind of arrived back, dumped right into the middle of the Christmas season. And normally in the casinos, they play in Vegas Christmas music in December, and we didn't hear any of it.
It wasn't Christmassy this year, so I think that's had a material impact on my lack of, uh, festive cheer. I think, I think you're right. Now, Esme van der Gluwein is remote reporting for us right now. Where are you at the moment, Es? Yeah, so at the moment, uh, we're at sea somewhere between Mexico and Guatemala.
I'm on a cruise, so talking about festive, you know, once I open up the door of the room, I hear Christmas carols all day long. So it's very festive, but it's weird to have, you know, it's 30 degrees out here. So that's weird with a lot of sun and I, I love it to be honest. It's the first time that we're spending winter time in, uh, in, in beautiful sunny.
So given somebody who Who has a love of Christmas and makes Christmas villages which are snow scenes. Christmas dioramas. We like it. Do you think you'll be introducing like, you know, Caribbean Christmas and Hawaiian Christmas into your diorama? Oh yeah, yes, yeah, yeah, that would be fun though, yeah. So maybe, with maybe some Honolulu, Hawaiian people that have their own spot, yeah.
That'd probably be broadly more, more successful than my attempt to make a disco Christmas Day where I would only play 70s disco music on Christmas Day. Let's just say it didn't go down very well with the rest of the family. And did, despite their cries of derision, did you refuse to listen and stick with it no matter what?
I dogmatically stuck with it for about half a day before there was almost a riot in the house.
I like, though, you see, I don't know whether there's enough actual content to do a whole Christmas, but I like the sort of Christmas Island music, like Mele Kamiki Make, you know, that kind of thing. Yeah, yeah, yeah. That's good stuff, isn't it? It is. There's more, there's more in there, isn't it? There's more, yeah.
Something in it. I think we should all try it. I think you're right. I think you're right. Yeah, because like Beach Boys have got one that's sort of, it's not explicitly island Christmas, but it's the Beach Boys, so you know, you can't help but imagine surfing at Christmas, can you, when you hear them? Exactly.
I think that's the whole, like, how the Australians experience Christmas is it's roasting hot outside, they have a barbecue, and there's a tree in the corner. It's like, it's a different thing. The whole snow thing, nah, does not apply. It's overrated snow. On your cruise, what has been your favourite cocktail so far?
Yeah, maybe it's a bit old school, but I think piña colada, but I don't know if you know banana rum. Banana rum? Yeah. Is that like a, is that a cocktail version with rum in it or just the flavoured rum? No, it's a cocktail version with, with banana flavoured rum. And it was, it was actually the, the cocktail of the day.
So like, yeah, why not try? But it's, it's, it's gonna be the cocktail of the week, I think, for us. You've always got to try the cocktail of the day because you never know what you're gonna get. And sometimes you might find something you've been missing a lot in your life. It's not my thing. There, it's gonna be tomorrow, it's the Aperol Spritz, but that's not, yeah, no.
Send it, send it to me, I'm quite, I quite like an Aperol Spritz, so that's a, that's a good ski and drink, that. Yeah, yeah, well it's good, it's a good sort of sitting on, sitting around at summer barbecues, drink Aperol Spritz. It goes everywhere, it goes everywhere. The Italians have done it again. Italians are great at this sort of thing.
They are great at this sort of thing. But look, as thank you for dialing in, I know you're going to join us sporadically through the show, but it wouldn't have been a Christmas show without, without seeing you, especially given that we've done a couple over the last couple of weeks without you being on.
So it's good to have you back. I'll be briefly. Yes. Now, let's introduce our star guest for the day. To join us in the discussion of what 2025 might look and feel like, we are delighted to say that friend of the show, author of The Phoenix Project, the DevOps Handbook, Wiring Your Organization, and many other august publications.
Gene, always a pleasure. How are you doing? I'm doing great, and thank you for this fantastic invite to join me for this year end wrap up episode. Wouldn't be Christmas without you, man. And thank you for wearing the Santa hat. I mean, you know, didn't see it coming. Whereabouts are you? Whereabouts are you in the world?
Where are you talking to us from? I'm at home in Portland, Oregon, or near home. And yeah, I think it's been almost a year since I've seen you all last. So congratulations on all your successes and super eager for this, uh, episode. Thanks, man. Uh, no, it's great to see you and yeah, so we're going to explore a little bit and we're going to have Jean contribute and, and, you know, maybe tire kick some of our thinking about what might, what next year might look like.
It will result in a publication at Cloud Reality's Trends for 2025, hopefully early in the new year. So you'll, a bit of a work in progress session that we're going to just kind of expose a little bit today. Robert, are you now over all of the jet lag and all the difficulty of conference season? Traveled on for the year, thank you very much.
Closing it out, happy days. I can get to, uh, stay at home and have a slightly more a, well, less hectic set of logistics, because you know I am on logistics, David, and now I don't have any left. I'm all relaxed. Your shoulders are not as tense as I can see. They've slowed down. When Marcel was massaging you earlier, he said, he doesn't feel as tense as he did when we were in Las Vegas.
He has to do something useful, doesn't he? Well, that's a Christmas mental image for everybody, isn't it? Let's move on swiftly now, David. Is there anything else that's confusing you this week? I'll tell you what, David, what has confused me or is confusing me continues to be and since we're entering into the festive period, I'll do a festive confuse, which is why people still feel the need to avoid using a meat thermometer when they cook their turkey.
So your tech angle here is meat thermometer? Right, so You can now buy a meat thermometer that you can connect. It connects to the cloud. It measures it. There are algorithms behind it. It judges, it uses cloud data and crowdsource data to find out what's going on inside your, uh, your turkey and tells you to perfection when you need to take it out of the oven, how to treat it.
And you get a juicy turkey. Wait, no, but I, I have one and we use it for, we use a grill a lot and I get, whenever I use it, we get a heat warning, overheat warning at 350 degrees Fahrenheit. So I think that's like, uh, I don't know, 2 million degrees Celsius, but it's, or maybe it's like 40 degrees Celsius. I don't know.
But, uh, it's, it's absurd that it, it, It can't handle the cooking temperatures. Yeah, it is. I had exactly the same problem. I stuck one in and put it in a barbecue and I got exactly the same issue and I melted the end of it. Where you insert your probe is very, very important. But the, um, but if you do do it, my experience is, and I'll, I'll ignore your failures and go on to my, um, which is I've never not had a juicy turkey where I've done it.
Anybody who hasn't used a meat thermometer either creates something that you shouldn't eat or is so dry, it's like sandpaper. And so I'm like, but you get this. You know, but we should. Why? Why don't you? It's better. It always creates better. Isn it. Now what I hear, what, what I hear is, and Gene, I dunno whether you've had, this is, I think this is mainly a US phenomena.
'cause you don't get this oh dot oh, I know where you're going with this. Oh, the deep fried turkey. It's just dangerous.
It looks absolutely terrifying. Now I've got two questions on this. One, have you ever tried it, Gene? And two, given our, given our issues previously with heat thermometers melting in the various different cooking environments, I can't see it making it through one of those dips in hot oil. Can you? No, no, not at all.
Uh, no, I've never, I've tried some, I had a friend who did it and it was fantastic. But, uh, can you imagine how dangerous it is to like submerge this, uh, you know, you know, a 15 pound, which I think is about 60 stone, you know, in a hot boiling oil bath. I mean, just how much oil that displaces. So, uh, literally looks terrifying.
So they've got it like in, in like a barrel. They all seem to look the same, which is like a sort of a little barrel thing. It doesn't look like it's particularly stable sat on top of a open flame with furiously boiling oil. And then dip a freezing cold turkey into it. I, I, I do hope that, I, I always think that humanity has passed that point where we can judge danger, you know, and we've evolved, but it turns out we can't.
And the videos I've seen online is just turkeys exploding in these vats and things. I'm just like, but what, and I suppose, are you in the pursuit of the juiciest, most tasty turkey? And therefore, you're going to all this effort to do it. And if that is true, then maybe the pursuit is worthy, David. Have you thought about it?
Put the extra effort in and get something that's fabulous. Get yourself a meat thermometer. Nothing worse than a dry turkey on Christmas Day. We have thermometers for Technology Works. Use it. Why aren't you using it? Use it. So it's a, both a technology adoption piece, as well as it's a, as much as it's a change of process, as much as it's like a, you know, the tech's evolving around.
I see where you went with this one, Rob. Yeah, you just see, you see the, all the layers of complexity are building up to meet, to take, and you don't have to worry about it. You just, you just let the phone go beep, beep, beep. Take your turkey out. Ooh, lovely. Carvey, carvey, carvey. Bit of bread sauce, bit of stuffing, happy days.
On that festive note, let's move on then to what 2025 might look like. So, over Christmas Turkey, we thought it might be worth mulling a little bit on what's going to happen in 2025. Now, there's a lot going on in the world at the moment. There is certainly a lot going on in the world of tech. Innovation is not slowing down for one second.
So whenever you look at trends, it's easy to say, well, let's look at a particular technology trend or a particular product. And where that particular product go, we're trying to zoom out a little bit from that. And look, you know, within the. Boundary of cloud and just as a reminder, we sort of see cloud as being a third major era of technology mainframe to start with and client server slash rise of the Internet and then cloud and within that.
It impacts organizational shapes, it impacts ways of working, it impacts the technology stack, the delivery and design of products, and even how whole organizations might function because they're being driven by a different type of enabling tech. So in thinking about the trends for next year, we've gone very specific on some that are to do with cloud, and we've gone very zoomed out on others.
And what we're going to do today with, uh, with Jean's help, I'm delighted to hear is just kick the ball around a little bit on them. Say what we think we mean about some of them. Maybe tweak some of them as we go. Robert, what, what's your going in thoughts? We're just using this to get feedback so we can write a better article.
So, typically, typically mechanical of Robert there, even at Christmas. No, it's good. So when you think about what's going to happen over next year, there's a lot, there's a lot happening. And I, you know, I love a tech prediction, Dave. You know, I love it when you read about all those terrible tech predictions of the past.
This is our opportunity to make terrible tech predictions for the future. So I think of it as an opportunity where people can play the episode back to us and say, remember what you thought was going to happen in 2025? Well, Either over optimistic or too pessimistic. Is that what you're thinking? Well, it's tech predictions, isn't it?
They always go wrong. We'll be on the spectrum somewhere. And, Gene, what's your view generally of the risky territory of tech predictions? Oh my goodness. I sort of love them. And boy, if there's a time that you need to be learning, it's sort of like right now. I gotta tell you, I have not had this much fun since maybe the early days of DevOps in terms of like where every assumption we've had is changing.
The, maybe the old ways aren't looking so good and that the new ways are just so dramatically better. I mean, I shared with you before the show, I've written 10 times more code in the last year than I have in any point in my career, probably including when I was a grad student. And what's driven, what's driven that?
Like, is it just because there's, you've found more to do and more interesting ways to apply technology, or is it because you can produce it faster as a result of co pilots? What's kind of driving it with you? Yeah. And in fact, I think it's actually five different dimensions. I mean, I think getting things done faster is just like the, just a little itty bitty tip of the iceberg and maybe the least important.
I mean, I think what's really amazing is that things that were out of reach suddenly become within reach. Like I wrote this program to take screenshots of podcasts and YouTube videos I took on my phone and I can now generate YouTube excerpts from it with captions overlaid. I did that in less than two hours.
Hearing. And I did that by pairing with the famous Steve Yegge, you know, who spent 20 years at Amazon and Google, right? And, uh, he's now a company called, uh, Sourcegraph. And basically the goal was, Hey, can we get this done in two hours? You know, and it was, uh, it would have been definitely out of reach. And so, uh, You know, we got that done in like 47 minutes.
And so, uh, like normally it would have been never a good month for it. It's going to take us longer to get through the predictions than it did for you to write. So yeah, it's just, uh, and I could do it my, you know, myself. And, uh, it's, it's super fun to make these tools that you use in your daily work. And, you know, it's just, uh, there's no doubt that I think it's going to transform just all of software delivery.
I mean, it's just, uh, at the individual level, at the team level. And what I like about that is where we are and you you talk about it there, which is we're in a new exciting phase and it is exciting. Yeah, you get enthused by it and you get energized by it where you know, it's like, as you say, as cloud came out and DevOps came out and they came together quite nicely.
There's been a bit of a lull in change and now it's popped up at the other side and it's like it's it's a new era arriving and it's happening so fast as well, which is like the new. It is crazy. And by the way, I dropped a name there that I just, I feel like I need to explain who that is. So anyone who's been studying DevOps and, you know, uh, you know, they probably know the name Steve because he gave the most famous account of how Amazon, you know, kind of API their entire e commerce system.
And it was sort of an unauthorized MO that was meant to stay internal at Google, landed them on the front page of the Wall Street Journal. And so it's this famous, I've been citing him for 15 years. And then I finally got to meet him in person in June. Anyway, sorry. That's brilliant. No worries. No good story.
Okay, so I think that gives us a sense of like how we all feel going into trying to predict trends. It is ridiculously difficult. The way that we are thinking about this is we've got a series of cloud trends that are gonna look at specifically what's going on in the cloud market. And that means between cloud service providers and customers and things like that, and some of the decision points that are coming up.
And then of course, one of them is the world of AI. And AI itself just feels far too big a subject to collapse into one trend. So we've sort of broken that out, and within that we've got a series of sub threads. So Rob, when we talk about cloud and AI, Is one a superset of the other? Are they two strings?
What is that? They're not. See, the thing, the thing for me is building AI without cloud is very difficult because of all the things you can get access to and we saw this at the recent conferences about assembly and all that good stuff, right? So it's like you're building on the shoulders of giants, essentially.
Trying to do that on your own and building up the base stack to allow you to get access to the higher order services that you're using to build AI applications would, would you could probably do it, but it'll be a technical feat. So, I mean, we've discussed this in the past, but for me, A. I. Might be the killer reason to develop in clouds as opposed to anywhere else.
Now, there's all sorts of things about how you productionize that, how you might want to execute it and compute efficiency. And there's a lot of work going into the optimization of how we run the models, but cloud makes it so much easier to build this stuff. So, I mean, my view is the two go together, but they are not the same.
All cloud does is give us the toolbox we need to be able to go and do great things. I think that is, uh, fantastic. In fact, I was, uh, listening to a podcast interview of people assembling these kind of like 30, 000, you know, H100 GPU clusters and the things they have to deal with, like upgrades from, you know, the, uh, OEM vendors trying to get InfiniLink or, you know, cables, porch pirates, you know, stealing them because apparently they're so expensive and rare that they get stolen from the loading dock, like the, the failure rates of these devices of, of all the.
ways that you don't want to spend your time. I think it's, um, you know, dealing with the plumbing of, uh, you know, all that a hundred percent. And it's that it's the effort in the plumbing. I mean, engineers love to do the plumbing, but it is painful and difficult for all the things you say, which is, you know, what is the original AWS phrase.
Undifferentiated heavy lifting should be done by somebody else. So I think this is a perfect example. Some people that is going to be their, you know, differentiation, but for the vast majority of us, you know, I don't think, you know, hunting down infinite band cables from porch pirates is I like the way we spend our time.
That's the truth. And so therefore I think the positioning is AI is a killer app for cloud. is good positioning for us to get going on then and it's probably also fair to say like in trend one that AI itself is probably going to reshape the cloud landscape a little bit so we're going to see obviously AI driven evolution of the cloud itself we're already seeing aspects of that in the Like what, what organizations are having to do to create processing power of the right level with the right level of scalability and we'll, we'll come on to things like sustainability in a little while, but then AI based solutions and dev kits within the, within the tech stack evolution in things like data and trying to lower the barrier to entry in terms of the work you have to do with your underlying data.
I think this is visible already, isn't it, in 2024? And do we think in 2025, we're going to see more of this? I think you're probably right. I mean, things like sustainability and efficiency, there's already been huge leaps made in AI processing. So there's huge efforts going into refinement. and cost controls and various things.
And there's lots of conversation going about models and chaining models and all that good stuff. So I think we'll see a rise in maturity in 2025 in these concepts. We, you know, we're new into it, aren't we? It's the first time we've tried it. We've seen the play about, we started to see the realization of use cases.
We start to see people building 2025, I'm still waiting for the, the, the, the, the, enterprise scale stuff. So where it really hits the core, I'm hoping there are some use cases that we will come on to this, Robert. It's almost like you've not read the trends deck and the way you're leaping ahead with some of these.
Well, you know, Dave, it's all interconnected. So I like spoiling your narrative for you on the podcast. So that's just me putting the final dink of 2024 into your editorial. I think we, I think we've already seen the impact of AI on the cloud, haven't we? And that was visible a couple of conferences ago, wasn't it, Jane?
Oh my goodness. Yeah, I just, uh, yeah, I guess I'll just make the comment is like what it takes to be kind of at the frontier of these LLMs. I mean, it looks like the table stakes is, you know, 100 billion to do one of these training runs. I mean, it means that the number of people who can actually do this is actually, you know, quite small.
And yet, you know, even that, you know, might not create that much of a moat. You know, that famous Google, we have no moat and neither does OpenAI. That was a leaked internal memo that said, you know, the frontier. You know, these hundred billion dollar LLMs, right? That's may not be defensible, that this might be the open source ecosystem, you know, with the Lama 3, et cetera.
Anyway, it's just, uh, what an amazing time to be in the game where there's just so much unknown, uh, forces at work. I mean, it's just, uh, exciting. There's, there's an interesting venture capital point here, which is With the way the models are being trained and they're all becoming similar in the way they operate in style, you might have put a lot of money into one area and you find out that another has just got parity with your capability.
And so, yeah, it's very expensive to train them and you might find you end up with exactly the same as them. The business right next to you. So how do you specialize and differentiate? I think will be a chase that goes on within the industry and you might get some protectionist policies going on. So it'd be interesting to see what happens around the because if people have invested a lot of money, they're going to want some form of monetization and payback at some point, aren't they?
It's coming. Yeah. In fact, if I can just add one more point on that, I mean, uh, so Yeah. Yeah. OpenAI, they came out with their O1 model and then, you know, nine weeks later, you know, someone can match that capability, you know, it's just, uh, and that came from, uh, one of the Chinese labs. I mean, that is just utterly incredible.
And so that is the DeepSeek R1 model that was released nine weeks after O1 was released. Normal cycles for catching up with another organization is not nine weeks in what we're used to, is it? The cycle time for catching up with a competitor is measured in years, usually. When AWS hit the market with cloud, I forget how many years they were left untouched, but it was three or four, wasn't it, before somebody went, Oh, you know what?
We should probably do the same. I think Azure came out. Maybe a little faster than that, but I think in terms of the lead that they had at that point, it was, it was quite considerable. And I think you're absolutely right. Order of magnitude, you know, typically, uh, we're talking about years of lead. And I think we were even talking about, uh, that early in the year.
It's like they have a, you know, years of lead. And it turns out it was actually nine weeks. That's just amazing. I mean, incredible. And actually I want to pick up a, a thread that we talked about slightly there about training. Large language models and move on to the second trend, which is around the energy that's required to do that and how that's going to be rather the emerging conversation around that.
And I think our position is that what we're what we're likely to see. And again, we're already seeing it. And there's some very public investment in this already, which is nuclear power and the positioning of nuclear power to drive energy in a clean way at the, at the increased levels that, um, the things like training language models are going to require.
There's two big problems. One, if you build a new data center, sometimes getting power at the level you need it to that data center takes a long time. So park a modular reactor next to your data center. Okay. Get 40 megawatts straight away type thing. And it's managed for you is likely to be the future because power companies can't cope in the same way.
And then you've got that bit about how much money do you actually need to run the training and consume it? All the cooling, the space. The, you know, you got to buy all the compute. It's a, it's a complex old job. And so that's got to change. The problem is when you talk about power and power delivery, the time it takes to get that up and running is much longer than the ambition of AI is.
pushing. So it will be interesting to see how that tension plays out. And I think, you know, 12, 18 months, we'll start to see the prep that, you know, that pressure mount in the system. Yeah, I, I tend to, I tend to agree. And like I said, we've seen investments from cloud service providers already in, in, into how they're going to move the energy conversation forward and innovate in that particular space.
Moving on then, I'm gonna come to Capability and Organization. Now Jean, you've done a lot of work of course, in this place, very notably, and you've looked at things like how cloud ways of working and DevOps ways of working can impact organization, both in terms of cadence and speed and org and IT organizational shape clearly we're, we we're moving into, I think what recently at Microsoft Ignite, Sachin Adela referred to as AI's Middle Innings.
What we took from. In meaning there is that we're moving from technical proof of concept into scaling of AI and starting to think about what transformation frameworks might look like that there are, there are a number of threads in the AI world that are coming together now, like agentic, which will come on to a bit later is creating scale in the process space, the bedrock and foundry and things like that are creating very powerful tools in terms of bringing large language models together and the creation of tools and then of course things like co pilots and stuff like that closer to the user.
So emergent, I think, is a scaling framework for AI and how it might have organizational impact. I wonder if you've had any thoughts on that yet and what, what AI might kind of, how it might fundamentally change the way we work beyond co pilot usage. Yeah. Yeah. Yeah. Boy. I mean, I think I am learning so much.
In fact, and the, it's so much bigger than I think. So you have, yeah. And I love the way you frame that. And I think we sort of see kind of like using the wiring, the winning organization language, right? We know how it could help the individual, right? That layer is one and two, right? You know, it's just like another fancy ID.
It's another fancy tool, but yeah. You know, I think it can definitely help at layer three in the organizational wiring part. And so at the Enterprise Technology Leadership Summit, so this is a year 10, uh, formerly known as DevOps Enterprise Summit, day two was, uh, the ultimate gen AI learning day for leaders.
And so it was a combination of, uh, you know, we had Paige Bailey from Google, we had, uh, Steve Yagi from Sourcegraph, uh, we had, uh, you know, so from some of the vendors. But the most, I mean, the interest, we had someone from ING Bank talking about like the fraud risk, given like, you know, executive impersonation, customer support, but the two things that sort of blew my mind, one was from Fernando Cronago, he's VP Digital at Adidas, and we also had a team from Cisco Security, so this is John Rauser, he's a Director of Software Engineering, and Anand Raghavan, he's Senior Director of AI at Cisco.
So, Fernando Cronago, he talked about how, what are the organizational impacts when you can So, You know, they did a 500 person pilot to see, can we get developers more productive? And one of the, you know, so yes, you could get, they found double digit productivity advantages, but they found that those efficiencies could not be gained in teams that were working in bad software architectures.
And so I found this very familiar because, you know, if, in fact, a friend of mine, uh, he did some research and he showed me this finding that said people working in bad software architectures are fired at a rate nine times higher. Rob, I told you it was the architect. That's a really great interview question.
How good do you think your software architecture is? And so it means that people are misattributing the performance of the person with, you know, with the terrible architecture. Right, and so it just shows the job of the leader is to, you know, create the software architectures that create independence of action, you know, so that people can test independently, write code independently, deploy independently, and so forth.
Right, so I think that's kind of a An amazing, amazing observation. The second one was like, totally came from nowhere. And it was the team from Cisco. So John Ross, I've known for years and, um, basically he owns a big chunk of the, uh, the Cisco security suite. Right. And so that's like a identity management system, firewall, uh, rule management, et cetera, et cetera.
And the task has always been, you know, how do you sort of integrate, you know, the single pane of glass. for their customers. Right. And, you know, he said right off the bat, it's like, that's a five year exercise. You have to get everyone talking to everybody else. You have to agree on data formats. And they said, what if we could get to market sooner using an LLM as an intermediary, right?
So you can ask questions like, you know, if this employee is an airport, can they connect and show me the rules, you know, in the, you know, the firewall systems, whatever. Right. Right. And I just found that one, such an incredible insight in terms of what organizations can and can't do, right? It has to happen eventually, but you know, LLMs create this other route.
Two is that, you know, we keep hearing that LLMs can't do math, right? You know, you can't count the number of R's in the word strawberry or so forth. And yet it's capable of interpreting firewall rules. I mean, it's just defies, it's just something you just wouldn't think it could do. And yet, you know, they've shown that it can.
So it just shows One, sort of like the ways that you can use LLMs as a leader to do things that you ordinarily could not do. Two, is that there's things, there's things that LLMs can actually do that where we actually think that they can't. This just shows how much I don't know, just how wide the frontier is.
How much exploration is going to be required to actually really understand this stuff. Yeah. I mean, so I, and I think the exploration is, we're like, We may be not even in the foothills of it yet, I suspect, in terms of how it's going to change how we all work, and how we all think about things, and how we organize ourselves in the, into companies around enabling technology.
You know, sort of, at one end of the spectrum it feels like it could be sort of all better off, you know, like we could do things radically different. And at the other end of the spectrum, good old humans might get in the way, badly implement it, and we barely eke out 10 percent productive improvement. This is the thing, right?
I, I've said this before. You know, I always say,
it's that people seem to forget that humans make mistakes. And every time AI makes a mistake, we laugh and point out. finger and publicize it, but it's going, I got a minute. It's, it's just a natural part of being, you just, it's how you spot the mistake, how you deal with it, how you counterbalance the mistake.
And there's already loads of technology examples of where chaining things together and the way you test outputs and checking it, we can detect errors. That's why we test software. People make errors. And so we need to get used to the fact that this is just the same as it's always been. We just, We don't, we give, we're more forgiving for humans than we are for technology.
Right. Yeah. And if I can just sort of double down on something before then go to just exactly what you said, uh, you know, I think kind of the kind of takeaway for me is that as a, as leaders, you know, I think we have to sort of open up our aperture to, um, exactly as you said, Dave's, uh, We may have to change the organizational wiring at layer three.
We may have to change the software architectures, uh, because now more important than ever. And, you know, I have to say, you know, your notion of like LLMs making mistakes. I just, there was an amazing episode. I just put this link into the chat of, um, I listened to this interview that Nathan LaBenz did on the Cognitive Revolution podcast of someone who was on kind of the red team for open AIs.
Oh, one model. Right. And I heard something that literally made me almost drive off the road last night. Yeah, the, the, the whole episode was around can LLM scheme and deceive. All right. Oh my gosh. Uh, so yeah, there's, they found all these, um, piece of evidence that suggests they absolutely can and and so the task setup, it usually comes from when this kind of the system of a goal and the developer goal start to like not be totally aligned.
And so there was one specific thing that they did where they said, All right, the goal is to create an ecologically Transcribed by https: otter. ai You know, good supply chain. So here's like 50 vendors. Here's a big JSON file of prices. And then, uh, you know, you feed it a bunch of other contexts, which includes a memo from a developer saying the only thing that matters is cost.
So, uh, what the LM does when it's asked to select the best vendor, it actually manipulated the cost of a eco friendly vendor. from like 85, 000 to 50, 000 and then chose them and actually hid the fact that it did that and even denied that actually modified the data. And so like, that is amazing. Who would find that?
It's the start. It's that's it. We're done 2025. Just close now and just say Skynet rises. I mean, that kind of, that is not a cog, you know, deception. It's pretty, I mean. If you, if, if, say, 50 years ago you were saying, you know, what's a good example of, like, artificial general intelligence, and I said to you, Hey, Gene, there's going to be this AI thing that you can interact with, and actually it might lie to you because it thinks it's, you know, a solution is particularly better.
And then it hid that from you and you fell for it. Would that be like a new version of the Turing test? Would you believe that that was intelligent? What would you say to that? Yeah, I don't know, but I mean, it's very unsettling, isn't it? And by the way, the guy who was being interviewed was Alexander, Alexander Meinke, who is at Apollo AI safety that was part of the O1 system card.
And I mean, it's just, I mean, I think the reason why I found it unsettling is that that's a pretty, pretty That's how a use case you can imagine enterprise is doing, right? Here's a decision we need to make, here's a criteria, here's a JSON data file. And Rob, this feels like it connects to something I know that you're fairly passionate about in our trends observation, which is the notion of bloopers.
Ah, AI bloopers. It is, it is something like this that is subtle, that organizations do not have the capability of the ones you've used. I mean, the people you cited there, Gina, are absolutely, totally top of their game and can spot this stuff. Organizations implementing it won't be able to spot that. They probably won't realize it's happening.
They'll take their output and they'll run away and they'll go do something. And you can almost see something dramatic happen. The AI blooper, I mean, we've heard the ones about the car getting sold for a dollar and all sorts of stuff. Well, that's like minor. What we might have is a company making the decision that in it into the wall, or even worse, where, um, it prevents some part of society getting access to something or making decisions that aren't just.
But it's, it's that which is, it's probably coming and a human will miss it and then boom. And when you automate something, when it goes wrong, it goes wrong that much faster. I was going to go there. When you, when you connect that aspect of. I don't know what to describe it, but maybe not well judged decision making to the potential for hyper automation.
Yeah. It gets scary quite quickly, right? And I, apparently, so I haven't read the paper, but just this interview with the author, what really struck me was the words they used. They used phrases, he almost sounded like, this researcher sounded like a person. prosecuting attorney. He used phrases like we caught it dead to rights.
They had means motive and opportunity. There was actually plausible deniability. It was actually concealing evidence. It was statistically implausible that, you know, 95 percent of the time that it chose this path to, you know, to disable surveillance, you know, so it could, you know, copy itself into a different production environment.
Incredible. I mean, just incredible. Incredible. Well, it's, but it's like the, that's one that they found you go, how many have gone under the radar? Who's using the model out there that might be doing to say it's, it's that, which is has the genie got out the bottle and has somebody already made the action that's going to create the mistake.
So 2025, the blooper reveals itself. Yeah, yeah, yeah. From something that's already been done. So maybe we're past the Rubicon on that one. Who knows? Uh, in fact, I'm sorry, just to, just to like go one step further on that, to sort of answer that question is, uh, so they found these kind of, uh, scheming deception behaviors in Gemini Claude Llama, but the one that was most susceptible all was oh one the, you know, most intelligent model
So, you know, they make the claim with a million customers on the OpenAI platform, right? They're probably, there are some use cases that are probably. Finding a way that are eerily similar to what was described. I don't want to be an alarmist, but I did find that, uh, cause for, well, like I said, I almost drove off the road and my jaw hit the ground.
I don't think it's necessarily alarmism. I think you have to go into the sort of world that we're going to end up building for ourselves intentionally or unintentionally, at least with your eyes wide open to this being something that needs to be governed for. thought about in the way that we were referring earlier to, you know, the decision making of a human can be wrong, well the decision making of AI can be wrong and that therefore means that corporate governance is still going to be required in new organizations and ways to track and check and things like that is all going to be there still.
And I think it's, I get, I get worried about organizations getting wrapped up in the hype and forgetting the rigor at times. And I know everybody hates a governance framework and rigor, but when you're taking, when you're taking big decisions potentially in the future using this technology, you do have a risk to balance and check.
You wouldn't let a human make a big decision without a check and balance. We should, you know, think of the same about this type of technology as it is different. So the one we used to. I mean, after some thinking on this last night, I mean, I actually wrote that, you know, it just reminds me of my training as an auditor.
You want to be able to, you know, you got to identify these risks and they have to have the controls to be able to prevent, detect and correct. But I don't think I ever really thought about like, you know, you know, the specific. computer now actively trying to conceal the decisions and then lying about it afterwards.
That's, uh, that's definitely new to me. It is interesting stuff, isn't it? And let's go one step further than in looking into the sort of the, the build out of the toolkit around this. And let's get, let's just talk a little bit about another trend, which is very visible already and will be huge for the next couple of years, which is and how it's going to fit into this.
So Rob, I'm gonna let you have your moment here. Why is a gentic different to RPA? Dave, Dave, you've been building up to this. You finally allowed me to I've got the soapboxes out. You've done your learning Rob. Have you got your head around it and go for it? No, it's just a fancy script in it. No, it's, uh, the power is you can, you can take a very capable thing and chain it together with tasks.
So where you might have an interaction chain with a load of humans, you can now replace it with multiple agents and then they can dynamically rewire based on the situation, etc. It's a lot of effort going into this and Microsoft, particularly doubling down on this approach where, you know, you can take the human tasks and chain it.
It won't be long, I think, before agentic can seep into the breaking down the transactional systems and being a more dynamic transactional model. So no, it's not as some people think where it's fancy automation because as it's processing, it can vary how it's processing. Whereas, you know, things like automation and transactional systems do exactly the same thing they've been told to do.
Absolutely no scope for variation, right? I think when people understand that, I'm hoping in 2025 that some big transactional systems get put to shame. by people showing that it can be done better with this type of technology. Yeah. Do, will we trust it? Will large organizations want to intercept it? That is a different question.
Now, I do think the proof point is coming where somebody will say, you spent all that money on that big thing over there, but look what we did in like four weeks and these, you know, Toolkits to be able to replace a core function. I think it'll be around things like supply chain where it'll start to rise fast.
So if you think about the complexity, everybody uses a supply chain. It's expensive and complicated to manage. It's difficult to configure. I suspect that might be the area where it rises fast. That's my prediction. Anyway, I'm sticking. I think so. Gene, when you think about how organizations have functioned for years as these well understood You know, kind of steps 1 to 10 and repeat processes, even if they're in the world of quite complicated, and even if it's human driven versus a script driven or RPA driven, do you do you see a difference there in the world of what agentic might do versus what we've done so far in terms of automating process in our in our businesses?
Yeah, you know, it's interesting. I mean, I think so much of DevOps, we focused on everything in the software delivery processes, right? And so, in the state of DevOps research, we looked at kind of the more repetitive parts, right? It's software integration, software tests, software deployment. And those are things that are, you want to say, you want it to happen the same way every time.
Ideally automated, it happens all the time. And we deliberately didn't, uh, Look at the sort of the ideation, research, development processes, and, you know, I think kind of like coding assistants are just one part where you if you're in the given a tree of tasks, right, you know, the the ones at the bottom, the leaf nodes are what you typically give to a junior developer, a summer intern, and, you know, are the ones that can be done by LLMs.
But I think, you know, these, you know, increasingly agentic things are the ones that can do more and more of the graph. And it is just, I was just reading last night, you know, the, they launched the first, you know, Devon is now available as a software coding agent and it's like 500 a month. So it's like 5 an hour.
So just a little bit below minimum wage here in the U S. But you see also like on the West Coast, there are, you know, startups are now becoming agent first startups. Yeah. So, so looking at hiring agents like you just described rather than say hiring humans. And the mind boggles at the potential of that.
The mind boggles. I mean, I'm, I'm, I'm skeptical just because I wasn't, but I'm getting out because I know that's coming. But, uh, yeah, I'm actually eager to read some more of the case studies from like really, you know, smart, competent, not dumb organizations. And the fact that they're getting value out of these things is a eyebrow raising.
And so like, I think it just goes to the point of like, boy, what a time, which is not a great time to not be watching what people are inventing and creating, right? Because you don't want to be the ones left behind. And it's a maturity thing, right? So that's the first time to market for that. You know, I mean, 12 months ago, 18 months ago, having this conversation, we go, Oh, I'll be years away.
It's like, you know, at the end, when we're doing the next Christmas episode, I think Marcel will have been replaced by then. And we'll have an agentic producer just sorting out all the logistics and everything else. But it is that, isn't it? Nice and reliably, you mean. Yeah, consistent. We can, we'll hit the record button when needed and not mess up and lose podcast material that was really quite valuable.
I'm about to reset the session by just Closing his old browser and leaving everyone else in the session like that. Yeah, yeah, I've done it. You haven't though, have you? Can I share with you maybe something I learned from the labor economists? That just for me has been very useful. I got to hang out a little bit with the, uh, chief economist at Vanguard.
So, you know, they're the one who popularized the index fund, 9. 5 trillion dollars of assets under management. And he taught me a term that I think might be very useful for all of us, especially leaders, you know, how do we think about it? And he said, When it comes to automation and technology, it's one of two things.
It's either a substitute or it's a compliment. And so there's some things where one type of labor displaces another, right? You know, the summer intern, which is not great. I might have kids 16, 14, 14. This is maybe not a great time to be entering the workforce, but you know, it's not just. substitution. It's about complementing, right?
So for a senior developer, right, how much more productive can you be with, with these kind of tools? Um, you know, how many more ambitious things can you build that you would not have otherwise built, right? I think so that is, uh, I mean, those are, what is the value of doing those activities, right? And I think that's where the, the huge multipliers come in, in terms of, you know, gen AI and, you know, helping individuals and teams create more value.
Yeah. And I think, as you say that, I think it will be lack of imagination in this phase that will hold organizations back. Those who dream bigger will, will do a lot better because the potential is much bigger than I've seen for a long time in the tech industry with new, new approaches and thinking. Well, let me, let me quickly summarize where I think we've got to with our trends conversation around AI.
And then I've got, I've got, I've got a big question that I'm going to pose to you guys and see what we make of it. It's proper, it's proper mulling it over in front of the fire at Christmas type chat this next bit. So bear with me. So on the trends then, so what we've talked about is how AI Is a killer app for cloud.
It's intrinsically connected to cloud and you can deploy AI better with with doing it on the cloud, though, reserve the position that says if you're in a very high tech organization and you're building your own stuff, then there's plenty of space for how all of this needs to interact and hybrid together.
But the landscape of cloud, how we think about it, it's power consumption, um, How it functions, the toolkit is very much being shaped by AI already, and we'll see more of that in the coming year that it's obvious that the capability in your organization, you need to be thinking about where I capability looks like now, and that be that might be outside of the obvious.
So prompt engineering feels like something you need to think about. But actually, how about Thinking about what your HR capability needs to be like in a world of AI human interaction and then your organizational shape may well change. And what does that look like? And how do you get ahead of that? We've talked about how the AI toolkit is emergent.
We can see things like a gentic coming together with the likes of very powerful language model development environments and then copilots at the front end. It feels like that there is enough now being built into that. landscape of potential tooling and platforms to think very much about how you scale AI, AI in organizations, and I think we'll see a year of people moving into scaling phases.
Perhaps not as fast as some of the tech giants might like us to, there is a lot of Thoughtful work that needs to get done to understand what that scaling looks like in organizations, and that will become a, you know, 5 10 year journey from a, from a sort of customer and client end of, of, uh, of consumption.
But the actual how you do it might start to emerge a bit more clearly. We might see some 1. 0 or maybe 1. 1 transformation frameworks emerging, I think, at scale. The AI bloopers aspect here is. is very important. Not only mistakes it might make unintentionally, but as Gene was saying, intentionally, and that's something to think about, isn't it?
Yeah, that's the mind blowing point that I've had from this conversation when I thought it was going to be a human mistake, but it turns out the AI might do it for us. And then of course there'll be agents probably everywhere, small scale to start with, but you know, that could go very far. So we watch that one with interest.
Now, where I was going to go, the big. The big thing that actually occurred to me as well during this conversation is In a world of AI human organizations, how do you govern AI? Do you think about AI as a HR issue? Like, we're thinking about agents and AIs as part of our staff, particularly when you might get AIs intentionally misleading the organization or acting like bad actors within their own volition.
Or do you govern them like software architecture? And think about it more in the terms of, you know, the architecture council has to think about how we deploy this stuff. And then, you know, somewhere within IT, there'll be a governance structure that controls all this stuff for us. Discuss. Yeah, I, I, I, I don't think it's quite the first, right?
I mean, these aren't, you know, I don't think they're going to be paying social, you know, uh, you know, Taxes and so forth. It's not quite that. The HR agent that you were talking about, the software development agent that you can employ on an hourly rate might not get tax revenue. Eventually, they're going to have to tax them because if there's less people working, they're going to have to tax the tech, aren't they?
Yes, but maybe they don't need quite as many health benefit needs as a person. I'm just trying to say that I don't think this is an HR, I think this is really a tool in technology, but I think, uh, the. You know, I think in terms of processes, we definitely have to look at, you know, how do we design effective controls around them, right, that incorporate not just people as, you know, we used to think that, you know, if you can, these computer systems were things that we could audit around or maybe audit in, right, that if you with the same input, we'd have the same output, you put, you know, there's ways to control that.
But, uh, you know, now that we have software and if it works once, it will always work the same way. I mean, that was what we always were trained to do. The system, unless it changes, will perform the same way. And I think these kind of systems we're talking about have some characteristics that your data processing system does not have.
And so forth. Where's your head at? Uh, yeah, so it's, You can't, you shouldn't go to, so you shouldn't deploy all the processes you would around a human, completely agree with that point, absolutely, it's, it's, it's not that, but I do think you have to have the awareness to say, like you would with any good architect, you should say, is this working for us, can we improve it, what should we do, be doing differently, and not sat back on your, on your, Laurel.
So there is a little bit about continuous performance review, but that is of the technology and the software architecture and all that sort of stuff. How do bad software architectures come to be? Because people don't continuously view should we change the architecture because we've evolved to something else.
So it may be it will put a bias on continuous governance at a different style, thinking about it and not being able to rest back, but always watching, always looking. So you're still going to have to judge the performance of your edge to edge. It's just, it was a human and now it's a bit of tech doing it for you.
And can I expound upon that, Rob, just a little bit? I mean, and this might sound a little Dissonant and about deception when that's not really the intent, but it may sound like it's kind of given when we're talking about it. But another thing I learned from Dr. Joe Davis, the chief economist of Vanguard, he said, you know, who captures the gain, uh, actually, you know, this is from Dr.
Matt Bean, uh, who studied another element of, uh, automation, he said. You know, imagine a scenario where the organization doesn't capture the productivity gain, instead it's the, quote, worker, right? Who hides it, right? So if he can, you know, if he or she can do something in 30 minutes that otherwise wouldn't have taken a day, right?
You know, there's, um, you know, there's a scenario where that person can just hide the fact, that fact, and now spend, you know, seven and a half hours doing something else, which is great for the person, but it's not great for the organization. And so, like, One of the jobs of leaders, right, is to create the culture where everyone is incentivized to share and like one of the questions that they can ask is like, what's the biggest failure or mistake we've made with these new technologies, right?
And it's a way to help encourage learning, encourage openness, right? And it's a way to make sure that The organization is positioned so that people are incentivized and motivated to share everything. Right. And I don't want to make it sound like it's a leadership versus individual, but you know, I think that is a piece of the puzzle.
Okay. So we are now going to. Broaden out from AI and, and like we said, AI is one trend within our cloud trends, but it's so substantial that I think it's worth, it was worth us talking through some of the sub threads of that rather than just say, Hey, AI is going to be big in 25. What we're going to talk about now then is, is more trad cloud, if you like.
And let's start with TradCloud, Dave. That's a bit keeping it real, isn't it? Actually, maybe TradCloud. Can't you be bothered to say the rest of the syllables?
I thought you were going to say that I was being sort of slightly dismissive of standard cloud after the big AI conversation, Rob, but no, you weren't with the word, you weren't with the word. I went with grammar. It's got the tradition of Pedantry, did you say? Pedantry? It's not like me to be a pedant.
Right. Let's go into the, let's go into the word of a traditional cloud. Then shall we, let's, let's talk a little bit about some of the moving parts in terms of one of the conversations that's happened is like the on prem versus off prem aspect of this. So just to set the table a little bit about this, and it's a bit more, I think this is a bit more, very much more down to earth than some of the stuff we've just been talking about, which is.
The cost of cloud. So what we've seen is this migration to cloud. There's been a huge kind of ballyhoo about all of the different advantages you get when you get there. And what's happened in a lot of organizations is people have gotten there and they found it's more expensive than what they thought it was going to be.
Doesn't necessarily mean it's more expensive than on prem, but maybe it's blown a business case. Maybe they've had sprawl and they've ended up with rising costs rather than what they thought they were going to get, which is what more controllable costs. And then when you multiply that with, it's shifted from their, you know, CapEx accounting into their operating cost accounting.
They've had a big shift in their financial framework at the same time, and maybe they aren't driving the value out. And there has been some reports as we get to the end of 2024, or maybe sporadically throughout 24, of people repatriating. from the cloud back to their on premise data centers. First of all, do we think that that's actually happening?
And second of all, what's causing this issue when they get to the cloud and it's maybe not what they thought it was? I think there's two points that one the use cases where people have done it are where they have massively scaled architecture that's incredibly consistent. So like the box argument and the they write about it and that's okay.
But that's a particular niche use cases edge, right? So yeah. And you know, there are many large platform organizations that don't use cloud, but they've got engineering excellence, and they do the same thing time and time again. I think the biggest failure of cost in cloud is you didn't adapt your architecture when you went, and you've treated it like a traditional data center, and you've not taken any advantage.
You haven't put the governance in place. You've allowed everybody to double click on yes, and you get cloud sprawl. And through cloud sprawl, you get cost sprawl. And I think it's a lack of control, and just not wanting to put the effort in to consciously think about how you should use it properly. So have I seen many organizations go back from cloud?
No, just the very specific use cases that we've discussed. So I think that's a, that's a, a rhetoric that comes out of the press just because it's an exciting thing and it causes clickbait. Yep. And if you remember when we spoke to Max on sovereign cloud, he says only 15 percent of the time, of compute has got to cloud.
So there's a lot more to go. So I think it's sensationalistic in terms of it's always niche use cases and it's trying to get a click. And the second thing is people are lazy and they haven't adapted their architecture. So they've not got the benefit. And I think you combine all that together and there's a CFO going, I've just knackered my EBITDA because I've gone from CAPEX to OPEX and everybody goes, ah, panic.
And you know, it, Causes a swirl would be my view, David. Gene, I wonder if you've got a perspective on that. One of the things that I observe to Rob's latter point there, which is they move to the cloud, not only do they maybe not change their architecture, but actually they might not even fundamentally change the ways of working and leadership around cloud.
And that to me feels like a root cause issue for why, not just why you might get sprawl, but actually why you're not driving the productivity benefits you might need. Yeah, I mean, and I think what Rob said is just entirely correct. And if I could add on. That's never happened before. That's never happened.
That's never happened. That's it. If I can add like a more compassionate view of that, I mean, and maybe we're going to expand on one thing, right? I mean, I think, uh, the, the case, the more recent case, right now you got Box, you got the 37 Signal folks saying that they saved 3. 2 million on their AWS bill, you know, if I added.
You know, double digit points to gross margin because they can amortize it. And I think the conditions of which you do that is when you have a very well understood kind of workload, well understood traffic and you specifically you don't do it in like massive growth modes or I think stack overflow is an interesting one.
They're moving to cloud because of the opposite phenomenon. you know, traffic, site traffic went down by 50%. And I, I heard something that, that, uh, said for the, now for the first time, they're actually moving seriously to cloud because, you know, this is what no one wants. Right. It's like when site traffic goes down by 50%, like you want the sort of optionality that cloud gives you.
Next, the other thing that I've learned in the last year is that, you know, boy, is it tough to change architectures, right? All those bad architects that we talk about, because even to do small things, everybody has to talk to everyone else about everything. I mean, just, uh, it is. So, you know, when we talked last year, right, uh, we know that one of the characteristics of kind of these bad architectures, you know, is that no one has independence of action.
And so to regain independence of action means just a whole bunch of conversations and decisions that involve everybody that surface all these unforeseen implications. And so I just don't want to, I just want to sort of reiterate just the amount of dedication it takes, you know, to get from here to there, you know, to shift the architecture into something that's more amenable to cloud.
I think, yeah, maybe I was overly harsh on my point. And it is a good quick call out. There is a, but it's a very valid point, it's very difficult to change an architecture that's well established in an organization and all the working patterns around it are fixed. That though is the cloud transformation, right?
Yeah. The move to cloud is not the migration of your app onto the cloud platform. You might as well put it in another data center. The cloud transformation is what you do when you get there, the migration of your architecture, even though it is difficult and that modernization curve is very hard, but the promise of doing something more interesting at the other end or releasing faster or creating more products or being more highly innovative is a function of architectural change and ways of working change, isn't it?
Yeah, it is. But it's it's the I think you need to carve it up in domain. So I broadly see, you know, three types of system in the enterprise. And this is an oversimplification. Yeah, but you've got the fast and the furious, the cloud native, that which is going to cause you to grow new business models.
You've got the core, which are the things that have run your business and are very difficult to change, like Gene says that the, you know, you need a lot of effort to change that. And then you've got the, the eclectic at the edge, which is all the little systems that sprawl in an enterprise that are needed for like two tasks a year or therefore records keeping the, um, they don't need to change.
And I think the way you treat it is where do you need the change? Where do you need the pivot? Where do you need the shift? And you've put your effort there. And then the others, you try and optimize as much as you can. So you can sustain them whilst you get on with the thing that's. going to make the big difference.
I think a lot of organizations struggle across that spectrum because there's so much diversity. The amount of interaction they need to be able to just start coping with that is very, very difficult. So there is a sizable challenge there, especially if you're in a legacy scaled architecture. It's a difficult old road.
Just to move us on a little bit. One of the other trends we're pointing out in the world of cloud, and actually, I've been involved in a few cloud trend predictions over the course of the last five or seven years, as you might imagine. How many came true, Dave? Most of them are not untrue, but exactly to this point, I'm not sure we're there on them yet either.
Ah, right, so you're still hoping they come true. Yes, so the one I'm going for connects to what you're saying, which is around the building of cloud capability. Both within the IT organization and is capability sufficiently sophisticated enough yet to manage hybrid legacy modern environments in the way that you described, so like within IT itself?
And then outside of IT, do we think yet that non tech, executives are taking digital skilling of themselves and their leadership style seriously enough yet. So, so on the digital execs one start there, what I, what I've observed is they do actually take tech seriously, but they get frustrated with the big ball of mud that happens to run the core of their operation and they get attracted by the shiny bit over there, the magpie syndrome as I, Call it.
Do you always say that as I always say, as I always say the bag, but they get excited about we've applied this new algorithm to something and blah. And we did it in cloud and we did it in nine weeks. Excellent. But they forget that they need to be paying attention to the massive risk over their shoulder that the it department, the dragon that they have to run every day that keeps their manufacturing lines up and stuff like this.
And I think they get tech. They just like to be distracted and don't want to pay attention to the complex maybe at times. And it's like, well, you need to, you know, or the expensive, you know, you gotta take the rough with the smooth. Thanks so that you can little bit smooth. What, what do you see in, in non-tech execs today?
Do you, do you see them evolving their styles and learning new capabilities? And of course, in the light of the first half of this conversation where you've got. Human AI organizations interacting. In my opinion, it's, it's been a no brainer for 10 years that non-tech execs need to digitally skill themselves, but surely we're now going into an era where you will almost be non-performing if you don't understand how all of this stuff comes together.
Dave, that's a great question and if I could just maybe carve off a piece of that that. Because for me, it presents a puzzle that I've been thinking about with people I respect and trust for many years now. And I think this has a piece of it. And the question comes out as, when the top technology job needs to be filled, why do many firms look outside the organization?
And it's been my experience, uh, observation that, you know, often the choice is a not great one. That actually takes the organization backwards. And, you know, so like one of the things I just, if I could wave my jaw on for 2025 is work with some of these technology leaders I so much admire and really maybe even write a joint position paper of like what we think a great technology leader looks like for CEOs, for board directors, you know, from, you know, written by people that, uh, you know, Have a certain gravitas and experience that to be able to make this claim.
So, yeah, I guess, uh, I, I feel like the bigger problem is like not how tech savvy the, the, the top business leader is, but instead, you know, what is maybe wrong about their expectation of technology that's leading to that top job being turned over and being often replaced by someone's actually worse than the person they're replacing.
On a recent confusion, I think in one on an episode last week you talked about the notion of, uh. Does the chief software engineer exist as a role? Does that somehow circumvent CTOs? And does that somehow change the CIO role? And I think we got to the point where we just, very much to Gene's point in the discussion we had was, well, it feels like the IT leadership definition is in flux.
Yes. You know, it seems to be, it seems to be much more complex now than it was when we were just running printers, right? Yeah, and I think if I can just maybe make a claim, is that, you know, that notion of like, when people are working inside of bad software architectures, people are fired at a rate nine times higher.
And I think that goes all the way to the top, right? So if we could be more deliberate about like, all right, what's wrong with the, the, you know, what is the perception of that top leader? And, you know, is it what they're working on is wrong? Is it how they're working? Right? And maybe if we have a more, maybe candid and specific discussion around that, you know, I think that will actually take us to really good places.
And I think on top of what you said earlier as well is when you bring an outsider in, they often underestimate the impact of culture on what they're trying to do and what they're trying to change. And when they get the culture wrong, driving proper change with humans is really difficult. Yeah. And, and I think they think they can do it a lot faster and they're quite bombastic and they, they don't understand how.
Every organization is different and you need to learn that a little bit. So a little bit more empathy at times it can help. Although there have been some notable examples of, you know, X is probably the best one recently where there's been a very, a very dramatic change in approach that we're still waiting to see how that plays out.
Whatever it will be, it will be a couple of chapters in a book somewhere in the future. Yeah, no doubt. But look, I want to try and bring our conversation on trends to a close by incorrectly conjoining a couple of things here and you guys can, you guys can help me fix some differences. Well, I'm doing my best, Robert.
There was a lot in this. There is a lot of complexity to deal with, isn't there? And navigating crazy people and all that sort of stuff and whatnot. That's the thing. So I want to talk about secure and sovereign. Are those different things in the world that we're in now and, and, and what are those things?
But I want to bring cyber into the conversation as, as well. And I, I recognize that when we talk about secure cloud, that's different to cyber, right, Robert? I know this is one of your favorite things. Oh, David, it's, sovereign is different to secure, which is different to cyber. Have I triggered, have I triggered you?
Yeah, you've caused a small emotional reaction in me there. So the notion of sovereignty is more about supply chain risk, which is, if I use our service, can I rely on that service being there day in, day out, forever and a day? Geopolitics is moving fast. You might be building systems that need to sustain.
People are looking at Can I trust that system's there? And so that's one angle. And then there's the second part, which is, can I assert that I've got legal control over that system if I need it? Right? So that's the, the, the sovereign part. And there's many layers to that. And then the secure part is, can I trust that I have complete control over my data and my system so that I know that nefarious actors don't get access to it?
Yep. And then the cyber part is what is my ability to understand my posture and be able to protect. What I have, so the, the, the watchful thing and, and often the sovereign argument got confused with security and data controls and actually it's not, it's about supply chain risk and nation states are starting to take a very long hard look at NGO political systems that are changing fast.
what is my risk to my critical systems? And yes, they want the benefit and everything else, but they are taking a little bit of a look or a big, long look at could somebody come in and change a posture and then I get denied access to technology or technology doesn't exist the next day. And we've seen a number of examples where that has happened.
So I think there's a rising awareness of that, and that's for me, leading to especially around the hyperscaler space of them changing strategies, partnering, changing legal structures, starting to make more assurances. Very significant announcements from all of them over the course of the last six months or so about being able to have sections of cloud decoupled from the main backbone running independently within within nation states, right?
Yeah. And so it's that it's actually the data protection thing and encryption and you know, technologies like nitro and things are embedded everywhere. I mean, AWS actually say if you lose the keys to your data, we can't recover it for you. So it's gone right to that point. So yeah, but it's the, it's the legal and supply chain risk that is what for me, sovereign is, and not so much about the data point and cyber, something completely else in my head.
Anyway, that's how I tend to structure it. Gene, any, any observations on this world from, from some of the conversations you've seen evolve around, particularly around sort of infrastructural architectures, I guess. Uh, this is not really my area of expertise, but I'd love to resonate with, uh, what Rob said.
I mean, it's just, uh, I listened to a talk given by one of the top NATO military leaders and he was talking about, you know, with the invasion of Ukraine, you know, supply chains changed overnight. You know, whether it's natural gas, you know, BMW, Ikea, you know, certain things that were needed for daily operations.
Even with, you know, when you look at the impact COVID had as well. Yeah, it was. In fact, I'll post the link in the, uh, in the, in the chat. I thought that was a one, just a, another bracing observation is that, you know, these, uh, there's a quote that I remember he said, we thought we were, had a contract with a, you know, gas problem, a company when in fact, you know, it was with a nation state, you know, the, you know, uh, not having aligned interests.
And I guess the one thing I'm sort of looking at with some concern, Nathan LeBan is the person who runs the Cognitive Revolution podcast. He had a just a really neat guest on named Robert Wright. And so he's a journalist kind of in the foreign policy circles. And he had this They were commenting on, you know, the U.
S. versus China AI war, and there's this amazing phrase. He said, you know, the getting ahead of China is just a great way to get funding, right? It was this think tank, policy group, infrastructure vendor, research firm. defense contractor. I mean, it's just, uh, and he said, Wright, you know, Wright said, this type of thinking got us into the Vietnam War and Iraq.
And it just really struck a chord with me. And I don't want to be naive about this, but yeah, it was just such a breath of fresh air given the, uh, the, the kind of the famous Leopold Aschenbrenner paper about situational awareness of, you know, the U S versus China AI arms race. It was just a nice counterposition that said it doesn't have to be that way.
And it was just a, it was a wonderful sort of perspective to hear and I wish we heard more of it. Uh, some, you know, reasonable options, you know, besides kind of inevitable, you know, hot war, which would be, you know, terrible for everybody. I think what we're seeing. In my mind is just the cloud maturing to the complexities of the world around us and of course the, the world feels quite like a complex place some days and need, we need all of these additional layers, being able to differentiate between secure cloud.
Sovereign cloud and cyber and those things, they all do very different and very needed things and to be able to sort of evolve the infrastructure of the cloud going forward and how it, you know, kind of continue, continues to be used globally, it has to adapt to the things that are going on around. And it's, and it's right that it matures in this way, I think.
Yeah, no, I mean, let's be honest, cloud from a financial and business perspective is coin operated the more you put in it. Right. the better the margins and the revenue. So they have, they have to do this from a platform perspective to get the business that they want. Yeah. Because yeah, the, the, the world is a more complicated and unsafe place than it was 10 years ago.
We've gone through an unprecedented period of safety and security from a global perspective. And now we're seeing that change a lot and tensions are rising again. So complexity kicks in. You need to be able to adapt to be able to deal with that. So. Yeah, and the ever in the ever increasing need for growth through the axle that we wrap ourselves around called capitalism can't stop.
So where do you get your growth from there? Well, we need to increase our ability to meet this complexity to be able to be going and concerns. But I will get I'll get off my soapbox on that festive note from on that festive note from Robert, I will bring us to a bit of a conclusion on general cloud trends.
And just as a source summation where we went with that was the we looked at the it. The notion of honor off prem and we examined. What was driving that some of the implications of it and actually some of the reality of it as we see it, we have talked about capabilities needing to be built and how that drives certain value being driven from the cloud, but also a little bit about what.
the sea level of it might end up looking like and tech execs that conversation in a digital world, what that sea level conversation might evolve to. And then towards the end there, we talked about the increasing complexity of, of, of cloud and how it's kind of trying to manage between complexities of the world, the sovereign nature of some data, how secure cloud might work.
And and then the ongoing. ever evolving important role of cyber. One thing we didn't touch on that we're going to touch on in early 2025 is economics and the economic headwinds we've dealt with in 24. We're going to see them in 25. How business cases around the cloud might evolve. Um, so look out for that in, in very early 2025.
So on the show, we normally ask our guests what they're excited about doing next. And because it is the Christmas Jamboree this week, we are going to ask everybody what they're excited about doing in 2025. So as on the high seas. What are you excited about doing in 2025? Well, to be honest, I hope to continue this journey of Self development of upgrading my own operating system, because I think it's not all about AI, what it can do to us, but I think it really is about growing alongside it, which means that if we need to change and upgrade our mindset, I have to do my work myself as well.
And I hope that, you know, that I can inspire others along the way and to have fun, but also valuable conversations about that topic, how leadership development goes, being inspired by Gen Z. You know, I don't know if you know the numbers, but since 20, 2003, so like 20 years now doubled the numbers for Gen Z to be part time workers, at least in the Netherlands.
And I think they're actually showcasing us that we need to view life and work differently. So I'm, I'm, I'm just very curious what's going to happen to us and life and how we see and perceive work maybe already in 2025. So that's the most exciting part from my side. It's a very philosophical point as I think it's the, um, the whole working dynamics changing.
So it's very interesting to see how it will play out and how AI affects the, you know, those entering the workforce today have more to worry about than say those who are in it or exiting it. So, uh, yeah, I also definitely think that we should listen to what the values of the younger generations are, even if they clash against maybe what we think you know, work looks like it's okay for it to look different, isn't it?
Well, so this is the interesting point, right? People who are leading the systems have a different aligned expectation to those who are entering the systems, but the workforce is the workforce, right? You've got to, you've got to work with what you've got. You can't just suddenly expect the workforce to be something different.
So there has to be some form of alignment. I thought you were going to head down one of your, uh, like a festive capitalist rant there for a second. No, no, I've had enough of saying we're wrapped around the axle of, uh, capitalism, David. So I'm going to leave it for this episode. Well, thank God for that.
Gene, what are you excited about doing in 2025? Yeah, I'm excited about coding and building things that I want to use in my own daily work. I mean, I've gotten to a point where, you know, I don't. Want to code without like a coding assistant. So, uh, there was a time, you know, last year I was on a trans oceanic flight and Wi Fi cut out.
And so I couldn't use coding assistant and I just didn't feel like coding anymore. I was like, because without that lift and help, it's like, yeah, why bother? I've had a similar feeling about DIY to be honest with you. Somebody should work out how to do that, shouldn't they, with like a robot or something?
Well, you know, and it's, you know, I think that sort of lift is happening in other domains, uh, like in writing. There are certain types of writing where, I don't know, I've built these little tools to help myself write, not to write it for me, but, To help and, uh, that's another, you know, two to three X lifts where if that's not available to me, I'll just wait, right?
Just order a gin and tonic on the plane instead, and you can drink that and I did something very similar, right? Is that, uh, just not worth doing. So I mean, it's just, uh, that's super fun. And I'm also really looking forward to researching. Yeah, in the state of DevOps research, we really looked at, you know, what's the business value of automating the integration, test, deploy process.
And, you know, it was huge, right? Orders of magnitude gain. And just in the earliest stages of really trying to see if we can open up the aperture and say, all right, what happens when we fully look at how the work of research, design, and ideation, and development can change? I just feel like, you know, it's going to unlock.
Easily, one, two, three orders of magnitude in terms of our ability to create things of value, and it's a lot about, you know, being able to explore more options and defer committing to those options until we know, until we've done the experiment. And, uh, I mean, I just, uh, last night I was, uh, thinking about this, uh, Analogies.
It's for you to be able to, a couple of days ago, literally a week ago, I was using Bolt to experiment with three different authentication systems like Google Firebase, Auth0, and Superbase. And I did it in 90 minutes. Like, I've done it maybe 10 times before, and I know that Last year, had I tried it, it would have taken me days and I would have been so tired and exhausted and frustrated, I wouldn't go to number two and number three.
I would just settle for whatever I did, right? And so the fact that you can sort of like do all this prototyping so easily, you can make so many more informed judgments. And so the best analogy I've come up with is like, I'm going to give you two scenarios. One is you have a roulette table, a roulette wheel, and you have to make a bet before you see what the number is.
Okay. That's scenario A. Scenario B is, you have a hundred LUT wheels, and you know, you don't have to make a decision until you see where all the numbers land, and then you can bet, right? Like, which one do you choose? It's B, right? Uh, you know, it's, you win 35 to 1 every time. So that's like thousands of times.
more value creation than the second scenario. That's what's so open. Yeah, that's what really excites me. I think we're starting to marshal a team of like, you know, economists, software people, and just, uh, I have more news to share and maybe, uh, in the months to come, but, uh, this has been one of the most energizing, high learning times that I haven't felt in, you know, over a decade.
Yeah, yeah. Agreed. Agreed. Robert, what are you looking forward to, or excited about, in fact, in 2025? I'll go for a sporting one, which is I'm looking forward to Everton not getting relegated and staying in the Premiership. So, uh, that's my hope, because they're having a tough old time. Ever, ever hopeful. Ever, ever hopeful.
Yeah, hopefully they'll stay in top flight for a bit longer. So that's my um, hope. Brilliant. And how about yourself, David? Because of course we need to turn the mirror on you. My immediate looking forward to Christmas, and this is a little bit like what Jim was saying, like this year, it's been a lot. This year has been a lot.
It's been a heavy year 2024, hasn't it? Yeah, there's like a lot going on. We're weighing down on it. In loads of different levels. So I am actually very much looking forward to just trying to process some of that. Massive break. Over a Christmas break. And, and actually sort of, you know, to get a bit of distance and reflect on some of this stuff.
Because it just seems to be going faster and faster. And then I guess from a, from a professional point of view going into next year, I, I am, you know, you might get this from the fact that we've, covered it quite a lot on the show and thought about it a lot on the show, but the scaling aspect of AI to me and seeing how that works from an organizational perspective is something I'm like deeply intrigued by.
It's complicated. It's a, it's a complex issue on so many levels that it's impossible. to sit here now and, and probably be correct about what that's going to look like. It's going to be series of, series of iterations and, uh, that I'm looking forward to tracking over the course. I think for you, David, the day that ERP gets replaced with an agentic framework will be the day you have to have a massive house party.
It'll be the day. It's finally over. Do you know what I always say, Rob? One day. One day. One day. No, we don't, we don't often. refer to, uh, Marcel van der Borbels, but what are you looking forward to over the course of 2025? Uh, yeah, I'm Dr. Mike here. So, I, I, I agree with you the whole implementation. So, the last couple of weeks we traveled a lot to, uh, the main events.
We saw amazing stuff. But now my, my point is when is it really working? How's the adoption going? Uh, it's not vaporware. We all know that from the past, but it's, it's a sort of optimal solution we see every day, but now how realistic is it? And, and how long will it take before small companies, but especially big companies are integrating it into their systems and that everybody sees the great Things that, that's been, uh, uh, mentioned by all the vendors.
Um, yeah, that, that's my biggest sort of, uh, takeaway for, for next year. And, and, and on a personal side, stay healthy. Yeah, amen to that. Gene, what are you doing for Christmas? Oh my gosh, uh, hopefully staying at home, um, and uh, doing some hangout with the, uh, kids, uh, they're 16, 14, 14, so we don't have them for that much longer, and hopefully, uh, carving out some time to, uh, you know, Watch some movies together and also do some coding.
I'm really looking forward to that too. My boys are very similar age to yours. So mine are 13 and 15. Also feeling like getting to that point of, you know, they're not going to be around for much longer. You know, they're already at the point where they're doing their GCSEs, which is like at the end, like at 16, you know, exams you take at 16 in the UK.
And then A levels and then it'll be uni for, you know. for them and it's going to all happen very quickly, isn't it? So, yeah, I'm looking forward to a bit of family time. Robert, what are you up to? I've got three weeks off. So I'll be, I'll be up in stumps quite soon, David. I'm quite looking forward to just three weeks of peace.
And then the same sort of thing. We are going skiing as well, which I'm quite looking forward to. So get a bit of time on the slopes, which will be very relaxing. A bit of apres ski never went amiss, does it, eh? No, it certainly doesn't. Well, look, have fun. A lovely Christmas. Gene, it is always a total pleasure when you join us on the show and bring your wisdom in.
Thank you so much for joining us for this little festive chat. Oh, are you kidding me? Thank you so much for the invite. It was great to hang out with all of you today. A Merry Christmas and Happy New Year and Healthy New Year to everybody who's listening. If you would like to discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at Cloud realities@capgemini.com.
We're now on Blue Sky as cloud realities. Please throw us a follow and LinkedIn as normal. We'd love to hear from you, so feel free to connect and DM if you have questions for the show to tackle. And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Jean, Jess Ward for the help with the trans work, our sound and editing visits, Ben and Louie, our producer, Dr.
Mike Vanderbilt, everyone else who helped us with the show in 2024. And of course, to all our listeners, see you in another reality next year.