Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
CR107 Reflecting on Season 4 – Highlights, what we learned, loved and are planning next
[00:00:00] All right, well, look, it seems that it would be... oh shit, it still went all dark. Gimme a second.
Welcome to Cloud Realities, an original podcast from Capgemini. This week, a conversation show, pulling the threads of season four together, trying to make sense of some of the themes, and [00:00:30] we'll end the season with some questions from our listeners. I'm Dave Chapman. I'm Esmee van de Giessen. And I'm Rob Kernahan. All right, so, the beginning of the end of season four. Marcel, give us season four in numbers.
It was a long season, so it started in September and ran until now, end of July, and in total we recorded 59 podcasts. So, uh, every week we have a podcast, and then you think, huh, [00:01:00] 59, there are only 52, uh, weeks in a year. Uh, but we also did a lot of live podcasts.
So Microsoft Ignite, six podcasts; AWS in, uh, in Las Vegas, also six; Google Cloud, a couple of months ago, eight podcasts; and also MWC, uh, in Barcelona, with six podcasts. So in total, almost 60 podcasts recorded. So, uh, yeah, so if you include this one, if you include this one, yeah. I'm gonna use it. Yeah. Oh yeah.
It's 60, a round number. [00:01:30] Let's round it up to 60. Oh. I mean, that could have gone so differently, couldn't it? It could have been professional and by the numbers, and shown us as an outfit that knows what they're talking about. But no, no, we had to do it our way. Yeah, exactly. Well, why, why would we start off professional?
We've never done that before, have we? We actually made the effort to get some, uh, information together for this as well. Why? What I love about that execution was: we prepared the data, we had a conversation to validate the [00:02:00] data, we found some errors in the data, and we updated the data. And still, when we come to it, we get it wrong.
No, 59 or 60. It's, it's, I like round, round numbers. Just make sure you are never a judge, Marcel, because that type of ruling, ah, 25 years, 10 years. Ah, well, also, I'm not sure that we did 60 full shows, because Microsoft Ignite and AWS we did as day one, two, and three. Right. So we [00:02:30] did multiple different interviews and recordings, but they all went into three shows.
So did you count the three shows, or did you count the number of interviews? Uh, so in total, in terms of publications, it's 59, but indeed Ignite and AWS were day 1, 2, 3 with multiple episodes. So, uh, it's always, uh, sort of nice. There you go. Well, I, I think we've got that off on a clear first foot.
Don't you know, you know how we normally open the [00:03:00] podcast with a confused section? We're not doing it on this episode. I think we've just naturally created one live, and exposed ourselves as rather poor at managing what could be considered core data. Well, everybody can see now why we've made confusion a feature of the show, can't they?
Yeah. Oh dear. Right, so we are at the end of season four and we're gonna take a look back across the season and just pick out the things that sort of bubbled to the top, either for us or what [00:03:30] was obvious in the guests that were coming through the show, the things that were in their minds, what was resonating with them, and sort of, therefore, when you look back across the year, what are the big,
prevalent themes that have come out? And I think scaling AI seems to be the one that we probably should start with. It has been in most, if not all, of our conversations this year in one way or another. At all of the hyperscaler conferences, obviously the messaging [00:04:00] coming out of those, uh, conversations was around not just AI as proof of concept, but AI scaling out to materially impact organizations or processes.
The rise of agentic AI this year has been sort of in the middle of all of that, which is therefore starting to build out some form of scaling framework, alongside gen AI at the front end and agents, and that's, that's allowing processes in the middle to start to scale out, as well as tool sets like Foundry [00:04:30] and more and more emerging trusted data sets.
It appears that there are a lot of, of tools now that can be used for, uh, broader and broader tasks. However, amongst all of that excitement at the front end of that journey, organizations are still lagging behind, and I think I might make an argument here that the adoption lag actually may be slightly increasing over the course of the year as the complexity and [00:05:00] scale of some of the tooling that's come out increases.
I think people are trying to break out of the proof-of-concept mindset, trying to push into what it might mean for their organization to go further, and also perhaps even more existentially, what it means for organizations to exist in the intelligence age. Es, what in all of those conversations this year, like,
has stood out for you? Well, I [00:05:30] think we talk about tech a lot. Obviously that's all that we do. But in the end, it really is about the human connection, whether it's in ecosystems collaborating together, whether the tech is really helping, whether it's including people across the world. I think we've already touched upon that as well during the show, that we're, we're not always aware that not everyone is able to use the internet or tech, due to cultural reasons, or political reasons, or a lot of other reasons.
[00:06:00] They're not using it the way that it might be intended by the makers. So it's, it's way more nuanced, I think, than just a release of, oh my God, this is really high-speed tech, with quantum, with, you know, uh, the mind-blowing stuff, but in the end, it's us humans that give meaning. So, yeah, I think that's what's resonating for me the most.
Yeah. Well, we'll, we'll, certainly I think we'll unpack the human perspective as a, as a main topic in a second, 'cause I think you're right. It is, it is [00:06:30] not as high up in the conversation as it perhaps should be some of the time, I think. Roberto, what are you seeing in terms of the scaling of AI? I, I agree with you that the lag is taking longer, or it's a longer tail than I
would've hoped for. I think one of the problems, though, that is arising, and we discussed this when we spoke to the head of AI at Google at the conference this year, is that he actually said he struggles to keep up with AI developments, and he's responsible for delivering them. Which I found crazy, it was like, the man [00:07:00] responsible for delivering this to the world also struggles, and not only delivering, by the way, he's setting, setting the pace in a lot of ways, setting, as it were, the standard.
Yeah. Yeah. But the point is, I think, to take his division as an example, they, they're going so fast that if you are an organization, you try and adopt one style of the technology, and then there have been two variants of that before you've even got it near to production. So I actually think that we need the development life cycle of those creating AI capability to settle a bit before the organizations will be able, [00:07:30] or stand a chance, to catch up and use it as it's intended.
If you look at where AI was when we did that conference, things like multimodal were coming online, but now look at the AI videos that are being created that didn't exist even six months ago. They're all over social media. Yeah. It's like, it, it, this rate of advancement is incredible. And I think because of that, the big machinery of corporate is struggling, and I think that's causing that extended adoption lag.
I think that's right. It's, it's both hard [00:08:00] to work out how you apply this stuff. Yeah. And then it's, and then it's also legitimately hard to work out how you scale this stuff. You could stand up a team, right? And then they go and invest in the technology, and then three months later it's not obsolete, but it's now, you know, version minus two, and you're like, well, you should be on that one now.
And you go, oh, hang on a minute. But I think we've been challenged as well. If you look at honors inset and also cow on the pool, they both said, you know, this, this might be a very outdated mindset that we're [00:08:30] using still. Yeah. Yeah. You know, that, that it's far, far, way more out there than, than we can even imagine.
Yeah. So we're trying to grasp it still with our, with our outdated mindset. So, so maybe that's it, right? Traditional thinking needs to be chucked out, and we end up with some different AI thinking, new age, new thinking, new strategy, and maybe the tenets of, like, the way we think about architecture, service delivery, security,
that just needs to be, you know, rebooted. Maybe that's where we're heading. I think there may [00:09:00] be some grains of truth in that, in the sense that certainly sort of traditional organizational structures, yeah, are gonna radically change. Let's say we get to a point where it's like, we, we consider these organizations to be human-AI hybrid organizations.
That's really new. We're gonna have to think about what the, the consequences and difficulties of that sort of thinking are going to be. On top of all of that, it's maybe the most technological [00:09:30] complexity that we've seen in terms of trying to integrate something into our lives, organizations and society.
Humans have not been that great, let's face it, at implementing some previous technologies. So I wonder whether it's going to be AI installing and deploying AI that might actually up the ante here. I had a conversation with a prominent analyst on operating models recently, and we got onto a conversation... A prominent analyst.
Did everyone see [00:10:00] that? Yeah. Yeah. But the, uh, David, uh, the conversation we got onto was: what is the role of humanity in computing? We've got very used to codifying architectures that we deploy and we control. But actually, if you take the agentic, the intra- and inter-connected agentic, and you step back, you might get to a position where you codify the whole organization.
It deploys itself, governs itself, organizes itself as you watch it, and you tweak [00:10:30] it and you change it, and you might redeploy it, but essentially you elevate yourself out of the, yeah, the weeds of the day-to-day where humans do it. And now you could define the whole organization, and AI then goes and deploys it and creates it.
So you have a, 'is your organization alive?' That phrase you used, David, is, you know, it's an AI whole edge-to-edge org, and you just configure the parameters that you want that organization to behave to. No, absolutely. And that might reconfigure, you know, every quarter depending on what it's needing to do. And [00:11:00] the other aspect of this that I think was major this year, with scaling, was agent-to-agent protocol.
Yeah. So I mean, that's interesting when you think about... That's one of the enablers. Yeah. And that's within your own organization. But then of course that's gonna be business to business. Yeah. Potentially nation state to nation state agent interaction. So you get this whole concept of: I define an entity and I want to achieve a macro outcome, and then you just let it
go. And as you see it go, you, you, like, you operate the machinery, but it's on [00:11:30] such a vast scale. You're talking about whole domains. And I think the first organization that works out how to deploy that, like the replacement of the ERP concept with this type of concept, will suddenly win. So you can codify a style of org that you want, and then it just goes and implements it for you.
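To make that 'codify the org and let it deploy itself' idea a little more concrete, here is a purely illustrative sketch in Python; nothing like a standard for this exists yet, and every name, field and number below is hypothetical:

```python
# Hypothetical sketch only: an organization declared as parameters, which an
# orchestrator would deploy, observe, and reconfigure each quarter.
org_spec = {
    "macro_outcome": "grow revenue 10% while holding carbon flat",
    "constraints": {"risk_appetite": "low", "human_approval_above": 50_000},
    "domains": {
        "supply_chain": {"agents": 12, "protocol": "agent-to-agent"},
        "finance": {"agents": 4, "protocol": "agent-to-agent"},
    },
}

def deploy(spec):
    """Stand up one agent pool per domain; humans set the parameters and
    watch the outcome rather than working in the weeds of the day-to-day."""
    for domain, cfg in spec["domains"].items():
        print(f"deploying {cfg['agents']} agents for {domain} "
              f"via {cfg['protocol']} toward: {spec['macro_outcome']}")

deploy(org_spec)
```

The point of the sketch is only the shape of the idea: the organization itself becomes configuration, and redeploying it is a parameter change rather than a reorganization.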
Well, and if, if you look at scaling AI from an adoption point of view: like, this week I've been seeing so many AI literacy trainings for a lot of employees, like thousands, and they're all interested, they're all eager to learn and to, to [00:12:00] understand how it can help them in their regular jobs. And, and that, I think, is also something new, because a lot of people already see that there is value,
you know, in using that new tech, and that, that hasn't been the case, uh, for years. I think, you know, previously you needed to push tech and explain that it can help you. And now you even see employees, you know, that do operational jobs, mm, uh, asking for help: can I use it, and can I get a license, and why, why are we not allowed yet?
Mm-hmm. Uh, so, so that's a different, uh, pull, I think, [00:12:30] now. So I think there's, there's quite a lot in that. So obviously the tech industry, over the course of the next year and on, is not gonna stop pushing in terms of the rate of technology innovation. And I think closing down the adoption lag in your own organization, and thinking about what it means for your organization and how you scale it, is going to be something that we're all challenged with.
And as we were saying there, that could be something so radically different. I think we should be open to something that's [00:13:00] so radically different to the way we've traditionally architected and designed and delivered technology, uh, that it might look nothing like what we're used to, which is both exciting and, and, and somewhat unnerving.
And of course, within that, that sort of perhaps bridges me onto themes two and three, which are somewhat interconnected, but we'll take them one at a time. The, the first one, um, is: in this world, what do we have to think about from a, from an ethical perspective? [00:13:30] Robert, why don't you, why don't you unpack that for us in just two seconds, and then we're gonna come on specifically to the human perspective and the human in the loop.
Well, let's, let's start with ethics. Yeah, of course. We are entering a new age. With that, the way we compute answers can be different. So we used to be very deterministic, and now we're entering into a world where, we've just said, we might configure whole parts of organizations. So we're turning over more control to the machine, [00:14:00] and through that, that machine will then be empowered to make decisions.
And we must remember, when we do that, that we, we have to be ethical about how we make those decisions, that we can trace those decisions effectively, and that we don't disadvantage a part of our society based on a machine doing something that we didn't want it to do. So, new roles: we saw the prompt engineer rise, a new role, but the AI ethicist is the, is another role that's popped up, that says, as we enter into this tech age and we run really fast towards it, somebody's always there saying, should we?
Hmm. Is that the right [00:14:30] thing to do? Are we gonna disadvantage someone? Are we still being human by just using technology in this way? Yeah, I recognize the role of the AI ethicist that you talk about. Yeah. And we've, uh, had a number of interactions with people who are taking on that emerging role. Do you think that that is just early-implementation-curve heebie-jeebies, and therefore we're putting in these roles 'cause it feels like we should have them?
We've all read, you know, certain sci-fi [00:15:00] that suggests to us, oh God, we've gotta be a bit careful with this. So do you think that's where we are, or do you think legitimately the change is so big that we have to redo ethical frameworks? So I think we need to put a lot of energy into the system now to make sure we remain an ethical
and moral society, and that might become par for the course. And we build it in and it's easy and it rolls. But we should never turn our back on judging if the system is right and is treating a human correctly. And it's very easy to become [00:15:30] complacent to that. And we've seen that in certain systems, even with IT,
in things like healthcare, where claim decisions go the wrong way, and even though morally they should have gone the other way, they haven't, et cetera. So I think, actually, if you look at past performance of implementation of technology that can affect a human, we've not been great. So I feel that the role probably has a high importance in the future, especially with turning over more control to technology.
But is it that role, or is it roles and responsibilities in general? [00:16:00] Because who's accountable when AI disrupts something in a certain process or program? It's not gonna be an ethicist that's gonna be accountable. So, so this is the health and safety officer conundrum that we have, which is: the health and safety officer sits in an organization, is completely independent, and audits everything we're doing.
The person implementing the technology is responsible, but the health and safety officer can call it out, and they can't be touched. And it's the same for the ethicist, I feel, which is, they'll just be a role in society that's there to, to [00:16:30] watch what's going on and is independent of the corporate structure. The corporate structure itself, or whichever organization is doing it, they're responsible.
Right? You implemented it, it did something bad. More fool you, you need to deal with the consequences, whether that be a hefty fine or even jail time. So we can never discharge responsibility just 'cause we said it was automated. How we delineate that, if we get to the world of defining whole organizations, I don't know.
It's something we'll have to tackle, but there still has to be somebody accountable. And I think that, that having that uncomfortable position in an organization [00:17:00] sharpens the mind about doing the right thing. Sometimes you could argue that incentivization goes too much in the wrong direction, and we need to make sure we have that check and balance.
And the role for large-scale, uh, governance around this, do you think? So whether it's nation state rules, or whether it's, you know, kind of macro nation state, like the EU, what, what role in all of this for them then? Do you, do you see the ethicist merely drawing down from their framework and thinking and [00:17:30] then implementing that locally?
It, it would be nice to think that nation states get together, I know, the UN, et cetera. We spoke to the advisor to the UN and the Pope around AI and ethics, et cetera, that they set, mm-hmm, the rules that the AI ethicists then watch and discharge through the org. I think in the interim, where legislation lags technology adoption, we're going to have to rely on local, or local-type, approaches to this whilst we get a grip.
Legislation is so far behind [00:18:00] and takes years to catch up that the mistake will have happened long before it ever becomes a legal control or, um, you know, a regulation control. So at that point, let's, let's bridge into the human perspective, Esmee, and the importance of the human in all of this as, as, as the world evolves in the way that we've, we've set out there.
Start us off. What, what's, what are, what are your first thoughts on this, and what resonated with you? Well, for me, I think season four actually had a [00:18:30] soul. It was quite human, like that. Very good. Uh, and I think that came across, unlike seasons one to three, Rob. Uh, I'm not saying anything about... What? That's a, that's a heavy implication right there.
I think she threw shade at you. I dunno why. I dunno why, I think it, yeah. I, so I think it was, wasn't it, Marcel? I reckon it was Rob. It... Oh, it was Rob. Did she? Look, there's, it was, is that thing that, you know, that uncomfortable look in my direction. She shot you a little glance as she said it. Anyway, sorry.
That's gone. I think you're right, by the way. No, I think, uh, [00:19:00] across all the episodes, you know, whether we were diving into AI governance or ERP transformation, sustainability, product engineering, that powerful question kept on, you know, echoing beneath the surface: what does it mean to stay human in systems designed to scale beyond us? You know, if, if, if we lose control, so to say, where does that leave us?
And I think one of the key, at least for me, thought-provoking people was TKI Cramer this season. And, and she also [00:19:30] said, uh, in, in this digital world, we don't need more AI ethics panels, we need more human rituals inside digital design. So where's the human, where's the culture, where are all those human aspects?
And those are, you know, ancient, old rituals that we're talking about, that we've somehow lost along the way, I might say. And I think we, we also saw the same with Defra, who was talking about, you know, it's, it's about showing care, not code, if you wanna have [00:20:00] long-term impact with teams.
Uh, Twinings, when we were talking about tea, that was also about change-ready cultures, people adopting and feeling safe, uh, to learn and adapt and lead. And, uh, one of the, the previous episodes that we just had with, uh, with Tim Elliot was also talking about, you know, composable business isn't just a technical concept, it's that call to reimagine how humans can collaborate and live across fractured systems.
So that's, you know, it's, it's [00:20:30] turning so much upside down. Uh, but I think the most important topic has been trust: digital trust, human trust. Are we trusting agents? Are we trusting other companies? Is this, is this what, uh, people are referring to when they say human in the loop, do you think? Well, I, yeah, well, maybe in terms of control and making sure.
I think that's the same with the AI ethicist. I hate that word, uh, it's hard to say. I, I, I fell over that one earlier too. Yeah. Well, that, [00:21:00] the one that Rob just mentioned, there's a conflict there though, isn't there? Right. So the, the AI scaling, and I think this is the, the issue that we'll have: if we talk about configuring entities on a much larger scale, like whole organizations with a parameter, that whole human-in-the-loop thing, it's gone for quite large-scale
things. You read my mind. And then what do we do? So do we then put automated ethicists... see, I said it correctly, ethicists... uh, I'll stick at it. Do [00:21:30] we stick those as check-and-balance agents inside? You know, is there an AI watching the AI? Where, where, where do we go?
It gets very complicated. Uh, and if there's corruption, how, how, how do we deal with it? Right. So I think there's a, there's a real issue here with how do we keep the human in the system but scale wider for the efficiency and the productivity we seek. Yeah. This wasn't on the pod, it was a separate thing I was at, but it was one of those panel pieces, and the question was, do we think we'll ever let AI make [00:22:00] mission-critical decisions?
And it was pushing at human in the loop. Mm. And 'it already is' is the answer to that question. No, it is, it is. And if you look at defense, there are some, they're talking about automating huge parts of the defense battlefield and the way drones work. The problem is, they're in a, they're in a literal arms race, right?
So if they don't, we know the people who may want to harm us will. So we have to, and it's like you're caught in the, in the, in the cogs that drive it forward, and so it will happen. That's [00:22:30] right. And, and less dramatically, perhaps, it already happens on things like aircraft, you know, where aircraft are making decisions the whole time.
There are robotic surgeons and things like that, so yes, there's some, there's some human decision making in place. And the, and the ethical question that was thrown at the group was: if you had somebody who was coming in who was having a heart attack, and the heart doctor, whatever, wasn't available, but the heart AI was,[00:23:00]
would you let the heart AI treat that, that human? Of course the answer is yes. Right? I mean, under what circumstances would you say no? So it's almost like the cat's fully out of the bag. Well, so, going back to the early space shuttle days, the space shuttle control system used to have seven independent computers.
And because of the way it worked and the way it received telemetry, there was a chance that the computers might make the wrong decision. So what they did was they put seven in, you know, not one or two or three or four, they put seven in. And what [00:23:30] happened is, if four of them made one decision and three made a different decision, the four outvoted the three and disabled them, and then those four were left.
And then they would continue on this voting, so they always voted for the answer. So the way they did automated check and balancing was they put different engines in. Um, and then if one started giving spurious answers, it got kicked out. And so they developed a system, in the very early days, of handing over control,
'cause the human couldn't cope with the speed of what was going on, so the computer could actually [00:24:00] self-regulate. Maybe those types of strategies will start to become more prevalent in the way we operate as well, where we have, you know, different systems deployed, and then they vote on the outcome, and they vote one out if there's an issue, et cetera.
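A minimal sketch of that vote-and-exclude pattern, assuming nothing about the real shuttle software; the engine names and the telemetry here are purely illustrative:

```python
from collections import Counter

def vote_round(engines, telemetry):
    """One voting round: each engine computes an answer; the majority
    answer wins, and dissenting engines are excluded from future rounds."""
    answers = {name: decide(telemetry) for name, decide in engines.items()}
    majority, _ = Counter(answers.values()).most_common(1)[0]
    survivors = {n: d for n, d in engines.items() if answers[n] == majority}
    return majority, survivors

# Seven redundant engines; one gives spurious answers and gets voted out.
sane = lambda t: t["altitude"] > 0
engines = {f"computer-{i}": sane for i in range(6)}
engines["computer-6"] = lambda t: not sane(t)  # the faulty one

decision, engines = vote_round(engines, {"altitude": 120.0})
print(decision, sorted(engines))  # True, with computer-6 excluded
```

The same shape, several independently built systems voting on an outcome and excluding outliers, is one way the check-and-balance agents discussed above could be wired.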
I, I, I did. I think that's the second time we've, we've touched on, for want of a better term, the network effect of AI: what happens when they are making decisions, even, even before the singularity, just making decisions based on rule sets that they've got today, and then they create a series of interactions that might [00:24:30] create a different set of conditions.
Yeah. So I, I do think that kind of multiplier effect is something we, we're not really yet talking about, but actually could be the thing that drives scale and, and sort of next-level, in inverted commas, intelligence from the, uh, from the, from the current set of technology. So I'm curious, eh... so, um, ooh, the voice from above.
And yeah, we had a voice from above. Who's that? The voice from above. Who's that? Is it our overlord? Yeah. [00:25:00] Yeah. He suddenly decided to turn up, pay attention, and now he's, he's got a point. What's going on, sir? Normally he's messing up logistics. So what will happen, Rob? One moment. What will happen in an organization when there are agents and, and one process is going wrong? Will one agent blame the other agent?
And will there be a discussion between agents? Because that's what humans do. Yeah, exactly. So did you hear the, did you hear the thing recently? I think it was, I think it was OpenAI saying this, it might not be, so forgive me if that quote is wrong, but [00:25:30] they were saying that, um, there was one particular agent that, for some reason, I don't know what conditions or context it had, but it had a belief that it was about to be deleted.
So it replicated itself. Oh yeah, yeah, yeah. No, or it started, it blackmailed the person who told it it was gonna be deleted, said it'd leak all the information about, that's right, he'd done something bad or whatever. And it's like, oh my word. That's a... So I think the answer to that question is probably yes, Marcel.
Yeah. You know, those, those sorts of responses, you know, defensive, [00:26:00] defensive responses, um, seem to be easy to get. Oh, we have seen a lot of unexpected, unintended events from these types of agents that have been through heavy testing. So the sycophantic one was the other one, where the behavior was just completely unpredicted, yet it went through a deployment and testing process, but it still behaved oddly, and they had to revoke it.
There's that one that, you know, did everything to stop its deletion, almost displaying, um, you know, sort of human [00:26:30] traits. But, uh, it could also be fun, eh? So if, if that happens, take some beer and chips and just sit and relax and see what happens. So, uh, Robot Wars in real time in your organization. It's like, was, was it called Robot Wars?
That show that used to be on where, yes, sort of, for the time, nerds would build a little robot. Yeah. And then they would have fights between the two robots. Not nerds, Dave, high-quality engineers. Engineers in a gladiatorial arena. That's it. Not nerds. But the, uh, the thing is though, if you ever wanted high-[00:27:00]quality engineers in a gladiatorial arena rather than nerd robots, it'd be an interesting mix.
Although, if you ever wanted to give motivation to AI for why it should take control and delete us all, it's what Marcel just said, where we basically turn them into some form of entertainment through their own distress. I mean, you know, if I'm honest, there's, there's a motivation right there for them to, uh... I'm not touching that one.
I am not going anywhere near that point. Robert told yous. Thank you, David. Alright, moving on then, uh, to theme four, which, [00:27:30] again, I think has been escalated partly by some of the things we've been talking about, which we'll unpick in a second, but also, you know, changes this year, uh, in the geopolitical landscape have radically changed how we think about sovereign technologies.
Robert, you've been doing a bunch on this in your day job. Why don't you just unpack the changing world of sovereign for us. Yeah. So, you know, we're going into a more unstable world, as [00:28:00] has been, uh, seen. And so people are starting to look at their reliance. So we've had this period of massive globalization.
Supply chains have been stretched, so your digital supply chain, your technology, can come from all over the world. People are starting to wonder about the survivability and reliability of that supply chain. And so there are five things they're kind of looking at. So, what's my legal control over this situation?
What's my operational control? What's the supply chain control around the technology? How about my data, who has access and where is it? And then, [00:28:30] finally, do I have the skills to be able to operate this locally? And, um, the sharp focus is on: if I'm creating and deploying a digital asset, how much autonomy do I actually have over that digital asset?
So is it free from interference from others? And what we find, especially in the technology supply chain: if you take, uh, you know, the rare earth metals that create the chips, where switching comes from in the network, how clouds are built, who owns the cloud, where's the compiler tech coming from, who owns the language,
who owns the database? [00:29:00] Um, how much control do I have over it? Then, when I deploy my asset, where is it? How do I run it? Who can deny me access to it? There's been this very rapid rise in the sort of risk associated with these factors, 'cause people are becoming more and more nervous about this. So sovereignty is becoming a hot topic.
What we do see, though, is the markets responding, and, uh, government entities responding around this, sticking huge amounts of investment in, um, to try and answer the question. So sovereign clouds rising, joint ventures rising, [00:29:30] and that, that gives peace of mind associated to that structure I just discussed.
So there's been a massive market response, many billions of pounds invested. So it's something that's not going to go away. When this much money goes into the energy of a topic, and it happens so fast, you know it's here and it will be part of the way we think, uh, moving forward. And so how do you navigate that path?
Lots of complexity, lots of confusion, lots of market fragmentation, but essentially sovereignty is here to stay as a concept, as people see a destabilizing world. [00:30:00] I think you said that well, but I think maybe even understated the last bit, which is: if you've listened to previous episodes of the show, you may well have heard my change in position on sovereign, because I, I, I really did just think it was nimbyism of the highest order, and beyond certain legitimate top-secret nation state stuff, or potential industrial espionage aspects of what they're doing,
the vast majority of the, the case studies could have [00:30:30] been resolved by sensible architecture on the public cloud. But clearly the geopolitics of the situation have really changed that. And it's now moved from purely a data residency conversation, I think, to a much more troubling and complex digital supply chain conversation, hasn't it?
Yeah. Yeah. So I mean, going back to, if you look at some of the research going on, it goes down to: chips are created using rare earth metals, who has control of rare earth [00:31:00] metals, who creates the chip fabrication? And that, that's a massive issue, right? So Europe, uh, has, uh, uh, you know, organizations to do the basis of the fabrication, but most of the chip making occurs away from Europe.
So what do we do about that? Because without chips as the very basis, none of this digital works. So that's a good example of the supply chain going right down to where raw materials are mined from. And that's the level of analysis that's going into this. Yeah, it is. It is non-trivial. Yeah. The other aspect [00:31:30] of this that I think was a real eye-opener for me this season was Chris Stokel-Walker talking about sovereign AI.
And there's one way that you can look at sovereign AI, which is connected to everything that Rob just said, so, digital supply chain: if you've got nation state AI that you are mission-critically dependent on, can you control that? Do you fully understand how that machine is actually, fundamentally, working?
And in, in the case of having [00:32:00] certain services removed, does that thing, uh, continue to operate correctly? So it's like a massively high-order business continuity conversation. Yeah. But there's another aspect to sovereign AI, which is the notion of an AI that represents your nation state in terms of language, in terms of, you know, uh, the economics of that nation state, the major industries, the traditions, the beliefs, all of those sorts of things. These two things, to me, the, the [00:32:30] notion of true nation state AI as a representative, and the, the geopolitics of it, are like fundamental tectonic shifts in this conversation for me.
Yeah, yeah. No, no, it absolutely is. So, how am I training my AI? Does it represent me? Um, so if I'm in a society using a model that's available today, there may be some issues with the way that it responds, or the way it makes decisions, that may not be aligned to your political or social angle. Yeah, that happens.
The world is very [00:33:00] different all over. So, how do I create those AI engines that I can control and have autonomy over? That's the first part of the conversation. The second one is, how do I train it so that it represents us? Um, 'cause if you are to do this agent, you know, nation state to nation state, which we talked about scaling, uh, you need to make sure that the AI you're using represents you properly.
But that's a big question, eh? Yeah. Because who are you, or at least, who do you want to be? And I think that goes deeper than a marketing slogan. [00:33:30] Uh, you know, that's: how do we share, share everything that we have in a, in a company? How do we want people to behave? And, and, yes, also our agents. But I don't think we have a lot of those conversations in companies.
No. To be honest, no. We create agents and then we go, is this the agent we want to represent us? And I don't... Yeah, you're absolutely right. I don't think we've got onto that part of the conversation. What'll happen is we'll build a load, deploy it, it'll do something we didn't want, and then we'll all stand back and go, oh yeah, no, that didn't go well.
Um, whoops. Should [00:34:00] have got an AI ethicist on it, Rob. Yeah, yeah, exactly.
Yeah. There's loads of famous quotes in history about things like, when we were doing it, did we stop to think whether we should? I think they're gonna come out a lot in the next 10 years. You're gonna get loads of sort of Jeff Goldblum types pulling out the Jurassic Park speech. Yeah, yeah, yeah, exactly. That is the one I was thinking of.
So I think we, we bracketed those four themes together because I think the interconnections [00:34:30] between them are some of the bigger questions of our time. Uh, and it's been thrilling to, to sort of hold a mirror up to all of that as it's developing over the course of this season. Humming away underneath all of this, though, is also the question of sustainability and power.
We, we have touched on it a number of times across the season because it's, it is important, and there has also, I think, been a shift in view [00:35:00] based on the power requirements of AI and what we need to do to be able to do that sustainably. Very briefly, Rob, how does that all square with you? So, we know we're going to need a lot more power.
If we want to do the things we're gonna do, there's huge amounts of work going into efficiency to make it less power hungry, but fundamentally what we're seeing is the rise of nuclear coming to the gate. Small modular reactors, we did a pod on them, which is: you basically build your data center and [00:35:30] you park a modular reactor next to it.
And it also helps build the microgrids. So rather than build these great big, humongous power stations, you have lots of little ones. It creates much more resilience in the system, which is a great thing, linking back to the, you know, capability of your sovereign nation to survive. A big part of that is power, and we're gonna need a lot more of it.
It's good to be green, you know, keep pushing on that, but the base load will still be there, and so that's going to need that alternative. Nuclear, albeit it comes with some [00:36:00] issues, is a very green technology from a perspective of carbon. Yeah. So if we want more power to supply the demand, but we don't want to create carbon, then it's rapidly rising as probably the only solution available to us at the moment to be able to catch up.
And, and, notwithstanding a major breakthrough in how you train models and suchlike, we're just gonna see that going up. We've seen some extreme power consumption stats to train some of the bigger models on the market. Uh, and we've also seen announcements from tech organizations [00:36:30] investing in nuclear research, and, you know, it's inevitable.
And it's that one: please don't say thank you to the AI, 'cause it costs me a lot of money and burns a lot of power. I want to be a nice human. Uh, but yeah, they say it's like millions of pounds just in Ps and Qs. So how much power is that, just in people being polite? If we really lose that... If we lose that, on the contrary, then we might also lose those values and those rituals.
You know, the human just drops out of it: oh no, that's just about money and efficiency, let's [00:37:00] not thank somebody. Are you then gonna stop thanking people in real life?
All right, so as we get into now the second half of the... Oh, it's almost the second half. We've done four, we're gonna do six, so let's call it the second half. It's a small half, Rob. It's a small half. It's, it's, it's the last third, [00:37:30] Dave. Technically there is a word in our language to define two out of six, which is a third.
No, now you've said it, now you've said it like that, it seems really obvious. I didn't think of it when I started this, but let's keep pushing on anyway. Let's keep pushing on. Anyway, for the first time in the show, we've actually tried to do a couple of little miniseries threads, um, this season. So obviously we've spent a fair amount of airtime on these two, on these two threads.
So we thought it would be worth just pulling these two things together. [00:38:00] So the first one was a focus on a particular vertical industry, and we thought it might be interesting to go deep into a particular industry: obviously learn more about that particular space, but also kind of look at what the resonance out into other industries might be,
what the lessons learned that have come from that particular industry might be. And for varying different reasons, we chose the telco industry this season. We thought it was an industry that was [00:38:30] interesting because it was pivoting its services, it is trying to deal with a very complex legacy situation, and it's trying to work out, given its criticality for everything we're doing in society these days, how it doesn't just become, you know, a commodity provider.
So, Es, how did we, how did we try and unpack that? Well, together with our guest host, let's not forget Pravin, 'cause he, you know, he knows and lives [00:39:00] telco. And I think that's very useful to have as a guest host as well. And, and also, of course, he's a, he's a very enthusiastic guy. What are you suggesting, Es?
That we actually had a host that knew what they were talking about for a change? Yeah. And then we actually just called him a guest host, and now he's, you know, he, he went off the stage again. Just to, easy, ah, make sure that we stay on. It was looking so good for a minute there, wasn't it? Especially that one we did without Rob.
Easy, easy. I'm right here. I'm a human, Dave, we just talked about humanity. Yes. Oh, I couldn't, [00:39:30] you know, even I, an architect, have feelings. Well, talking about feelings, I think the connection part, what we saw also at MWC: the ecosystem coming together, helping telcos move away from commodity, more and more to the front stage, to make sure that, uh, things get more connected.
So we organized around five recurring themes that we discussed, uh, together with, uh, uh, different, uh, guests. Of course, we talked about growth, you know, the collective capabilities across the [00:40:00] ecosystem, across organizations, making sure that, you know, telcos also challenge regulatory capabilities and use that as a growth lever instead of maybe even a blocker.
And also talking about human impact, of course, and trust, you know. They've always made sure that we were interconnected, the telcos, with all the, the, the network lines. I think we've been using that for ages, and now you can actually see them utilize that, and they also use, utilize AI for [00:40:30] that, for, for the networks, going towards 6G obviously.
So that's what we see happening. And as I already mentioned before, they still see in some parts of the world that the networks aren't being utilized, uh, as much as they could be, but that's more on the adoption side. So I think that's, that's also still something to touch upon, that networks actually mirror culture.
That, also, I think, was a very interesting topic. So after growth and networks, we also talked about simplification: cutting through complexity in the world of [00:41:00] telcos to make sure that it's all, uh, you know, that it makes sense, that it's connected altogether, aligning across what matters. And obviously data and AI scaling, uh, routing calls, making sure that, you know, it's being recognized, that it has context, emotion, urgency. And risk obviously is an important topic when it comes to networks and everything in telco land, and regulations.
Trust infrastructure, you know. I think that, especially with geopolitics and everything that we've just discussed, [00:41:30] they can help us trust the networks and the lines that we're using to communicate, and also have in our homes as well. And I, I was also thinking about those cars. I don't know if you have those discussions as well, but you now also see a lot of people that are very conscious of what their car is made of,
you know, who's, who's owning the tech, who's owning all the data that are in those cars. Yeah, so it's, uh, it's a classic: is your car sovereign? So if an external country manages the [00:42:00] control software and they shut it down, does your car stop working? And in fact, with Tesla, we've seen a number of cases where they've been able to deactivate cars and they go dead.
So, uh, you know, you don't have autonomy over these vehicles. It's a very good example of consumer tech that might be under the control of another nation state, and people are starting to think about exactly those types of points. So, on top of those five episodes, we also did a sixth bonus episode with the GSMA,
yeah, like, looking at overall industry change. Things [00:42:30] like automated control of networks, I thought, were really interesting, where you could have, for example, an AI agent spin up a network to do a specific thing using, uh, what, what do they call it? Open API or something like that. Open API, Open RAN, O-RAN and things like that.
Yeah, yeah, yeah. So there's just a huge amount going on there. There is the, the, the variance of service, like I was saying. So if you are at one of the Oasis concerts that is happening over the course of the summer, and they're a big event, [00:43:00] how do you actually, you know, can you buy a tier of service that will guarantee you connectivity, uh, in crowded situations?
Having gone to the Goodwood Festival of Speed last week, I can, I can say with all surety that that is not in place yet. I could not get a signal to save my life. So I think that would be a potentially interesting thing to be able to do, especially if you could do it on the fly and you really needed it in an emergency situation to get out.
So the, uh, the telecoms [00:43:30] piece, I thought, was a really interesting exercise to understand that amount of depth and dimensionality in a particular industry. What were the big themes that came out of it that you thought were resonant for, for others? And while, while you have a think about that, the big one for me was dealing with legacy, and being able to deploy new at large scale, but at the same time still having to deal with the financial difficulties of legacy and sunk assets and things like that.[00:44:00]
5G being a good one, where, you know, I had missed the fact that a lot of endpoints were being converted to 5G, but actually one of the biggest challenges in 5G is all of the core conversion, which is somewhat throttling it and is, is still a big focus of the industry. Were there any big takeaways for you, Es, in terms of
what you heard in telco that is resonant elsewhere? Yeah, reinventing business models. Mm-hmm. Mm-hmm. Like, as you also mentioned, the upscaling: like, oh, I wanna [00:44:30] upload a video now, so, you know, um, make sure that I have maybe even a service that I can turn on in my app to make sure that I get that video uploaded now, but not have that continuous service, you know, uh, uh, with a, with a high price. And that reinvention,
I love that. You know, it, it feels like they're all coming together and thinking about that end user, and thinking about the, the way they're responsible. That's what I also see in the industry, is that they, they show that responsibility, but also in, [00:45:00] in terms of consumer value. We've been walking around a lot, and you see also, you know, the new phones and what it is that can actually bring something good.
Uh, but that reinvention, maybe, I, I don't know if they were in the ashes, to be honest, uh, but it feels like they did rise, and, uh, and that they're now conquering a lot more areas than we would even think about, uh, when it comes to telcos. Well, given MWC made a big deal of it this year, do you know what area they [00:45:30] haven't reinvented yet?
Flying... If you bring up flying cars with some tenuous link, Dave, I'm gonna be very upset. The flying car, Rob. It's still not there, is it? Oh, you went to Goodwood. So you saw the new Jag concept car, that, that was on display there? I did not see that. And you know what, we were even talking about it. Was it there?
Yeah. Yeah. That, that's the helicopter. Yeah. Did it follow the launch? No, no, it's the new, the complete new design was there, wasn't it? Uh, uh, I did not know that. That's [00:46:00] interesting. My brother and I had a, had a discussion about that. We missed it. Anyway, where I was going was flying cars. I did, I did see a new, uh, a, a new attempt at this by a Chinese manufacturer.
Did you see this? I did send a link around, yes. And basically it's a van with a little helicopter in the back. So it's not, it's, it's not, it's not a flying car, is it? It is a van with a little helicopter in it. I mean, quite a good-looking van. The problem you've got, Dave, I think, with the physics we use [00:46:30] today, is you need either a jet engine or a, a rotor above you.
And as soon as you see either of those, you just say, that's a plane. You are actually after the flying car, like The Fifth Element, and unfortunately the physics of our lifetime will not be able to deliver that. So you are always just going to say, it's just a helicopter. Rob's being a proper buzzkill about that.
Anyway. Well, I think the biggest question is, what are we trying to solve with a flying car? What Dave wants is to be the [00:47:00] only one who's got one, and then he can just fly around without any interference whatsoever and then just never gets stuck. He thinks he'd be able to get around. Anyway. Alright, so look, um, the telco miniseries:
we hope you enjoyed that. It's something we are hoping to do next season with a, with a different vertical. I know we, uh, we very much enjoyed the deep dive, and, uh, a big thanks to Pravin for all of the work that he put into the production of that as well. And, uh, some fantastic guests in that [00:47:30] thread. Uh, now, it wasn't the only thread we had, uh, on this season. No, no, no.
Uh, we finally got to the sixth and final theme. This time it was when we built out our new Access All Areas-style episodes, and that was where we got friends on to have a sort of a bigger, slightly clumsier, more open conversation about a particular sort of set of themes. And Rob worked on our Cloud on the [00:48:00] Rocks thread this year. Rob?
So is, is that a way of saying, if everybody thinks it's rubbish, it was my fault? Is that, is that you discharging responsibility there? Yeah. So anyway, after that complete disaster, Rob has no editorial control of the show in the future, you'll all be glad to know. No, it was very good, Rob. What did we do in that?
So, um, uh, operating model: always critical to make sure you use your technology correctly. So we talked about the big rocks [00:48:30] that people don't smash up, and by not smashing them up, people can't be free to do the right thing within an organizational structure. So, the product and the platform as the future.
And we broke it up into three. So we talked about the overarching control, so how we manage risk and governance and how that has to change to allow us to get autonomy; we did that with Georgia Smith. Mm-hmm. Then we went on to: we have to rewire all our financials to make new operating models work. So financial structure defines people structure, and people structure defines how the output works.
So basically, if you don't rewire the, [00:49:00] uh, the finances, you get an output that looks like it always did. And we went into quite a lot of detail with that, with Chris Doon and Judge Back. And then finally we spoke to Jasmine about the product itself and what makes a good product: so cross-team collaboration, capacity, demand, all of those sorts of things, touched on cognitive overload.
But the point is, if you take the first macro one, the finances one and the product one, what you're able to do is understand the big things you have to deal with within an organizational dynamic to make it work correctly. So those are the rocks that we like [00:49:30] to smash apart. So that was the intent of the theme.
And if you, if you listen to the three episodes, it gives you a nice arc associated with that conversation. It does. For me, it was almost like a, a bit of a back-to-basics, how-you-get-cloud-right thread. Yes. That's fair. And I think, with all of, with all of the dynamics that are prevalent in that, and all of the interconnected difficulty and complexity that, that exists there,
these aren't easy things for organizations to do. So an organization that's survived for 20, 30 years suddenly [00:50:00] changing the way they perceive risk at the core is actually a really hard thing to do. And, you know, getting the CFO involved in the conversation, and actually taking the people through it, you know, you have to really properly change outside of the technology domain.
Otherwise you get very frustrated people. But if you get autonomy, it can really free an organization, as long as that autonomy is balanced with good risk control and good observability and transparency. So there's a huge prize at the end of [00:50:30] doing this, and if you get it right, and some have, it creates very empowered, uh, um, powerful organizations.
I always remember back to one of the earlier conversations we had with the guys who did the book about Wardley Mapping, and they talked about how they completely transformed the core of the culture and the governance and the structure and the technology at Liberty Mutual. And it did, it had a fantastic outcome.
Mm. Uh, it's a great use case that stands the test of time, to say, actually, people have done it and it did work. One, one of the things you hear a lot, even now, 10-odd years [00:51:00] into cloud, is: I went into cloud, it didn't change anything, and it just ended up getting more expensive. And I think what you are poking at there are the 20 other things alongside the technology that you need to be thoughtful about to be able to really drive it.
And some of them are proper, top-of-the-shop, quite profound changes to organizations. The big one, to me, beyond the financial framework, which is obviously always sensitive, is the [00:51:30] notion of empowerment, empowerment in product teams, and trust. And turning that series of dynamics on its head, to allow people that you might perceive to be somewhere down the organization to have decision-making rights on things like go-live or product changes.
And that's a big shift for the old humans in IT as well. Yeah, absolutely. And, and especially for the leaders, that they have to let go of control, or at least change their perspective on how do I have control. I think that's the, the [00:52:00] biggest thing that we also see moving from a more project-management approach towards agile.
Uh, it's a different way of getting more control, because you're more on top of it. You're iterative, you know more, you know, that's the idea behind iterative development. Uh, but somehow it feels more comfortable to have an entire project plan that we all know is going to fail. So I think that's a, a different perspective on, on letting go of power, or finding your comfort in, uh, knowing things from a shorter perspective, or [00:52:30] shorter timeframe.
Uh, and letting go of that power is, I think, the biggest change maybe for a leader, uh, when you go into that new type of governance model or organizational model, working in an agile manner. That, that's gonna be the, the, the toughest challenge, I think. Do you remember the pod we did with Hellmann Logistics?
Mm-hmm. And they basically did that where they took a group of people who knew what they were doing and they discharged responsibility and said, you go and. Changed the way our supply chain works. So massive [00:53:00] responsibility, literally the core of the business. And then they talked about the results they got and it was amazing 'cause they gave them the freedom and the power and they could make the decision at a local level.
And they got these amazing results. And that's, that's an example of a traditional organization that turned itself on. Its hadn't worked really well. And again, it stands test time, great results, people were happier, performance went up. Why? I mean like why don't you do it? And I think it's probably down to people like to keep control and it's in their psyche and it's like, it's a different type of leadership.
We've talked about this a lot today, but if you don't [00:53:30] fundamentally change the style of leadership, you ain't gonna get the autonomy that goes with it.
Okay. So that's a little bit of a look back on season four and some of the things that really stood out to us as being both resonant and intellectually challenging, but also, I think, things that you can [00:54:00] really see the industry talking about. The one episode we didn't touch on, which we'll just give a shout out to, was when Gene Kim joined us again just before Christmas, where we set out some trends for the year.
Do you feel like those trends are broadly on track, Rob? I'm feeling quite good about them. Well, I mean, he wrote the book on what we've just talked about. Cloud on the Rocks and all those things, he literally started that; you could say we stole all those ideas from Gene Kim. I [00:54:30] mean, The Phoenix Project, The DevOps Handbook, Wiring the Winning Organization and things like this, they're all telling us to break
the paradigm that we discussed in Cloud on the Rocks. So yes, it was very good to hear his view on it, and they all hold true. The organizational dynamics we talk about now have been true for a number of years; it's just that people need to get better at implementing them correctly. So it was an interesting episode, and it's always lovely to have Gene on.
He's such a [00:55:00] well of knowledge. So that gives you an idea of what resonated with us over the course of season four. We think it also somewhat represents what's going on in the industry, and there are exciting and difficult and important subjects to wrestle with in a lot of cases, whether that's for your organization or even society at large at the moment.
Fascinating stuff. If we didn't touch on the thing that resonated with you, please let us know. We would love to know what you're thinking and what the show, uh, triggers for [00:55:30] you. And to that end, for the first time, we're delighted to say that we are going to end the season with a series of listener questions.
So is everyone ready? Yes. Uh, I'm ready until I hear the question. Mm-hmm. And then I might choose to opt out, or suddenly my internet connection drops or something. Shall I give you some input? Yeah. Marcel is going to be the voice of the listener. Yeah. So Marcel, yeah, I'm here. Who have we heard from first in the mailbag and what's the question?
[00:56:00] Yeah, so Daniel Delicate. He had a question: "I know you and the team often quote the Cynefin framework. How would you use it in developing an operating model or strategy for IT?" Hmm. I mean, that's a good question. It is. I'll have a run at this, and others might want to jump in in a sec, because it's an area close to your heart, Dave.
I am particularly passionate about it. As a bit of a reminder for everybody, the Cynefin framework describes four states of the world, [00:56:30] with some liminal states that exist in between them. There's the world of order, which is repeatable, quite well-understood process. There's the world of the complicated, which can be vastly difficult to do, but the level of unknown unknowns is low, and therefore the level of plannability is high.
And then there's the world of chaos, which could be battlefield conditions, where you just have to take one [00:57:00] step forward even though you don't know if it's in the right direction. And then finally you have complexity, which is a state that exists in between complicated and chaos.
In complexity you've got slightly longer lead times, but the level of iteration and market testing needs to be very high. So in my mind, this does relate to the state of organizations and leadership within organizations. And I know that Dave Snowden would [00:57:30] probably be rolling his eyes horrifically at what I'm about to say.
So Dave, apologies for this in advance, but the world of the complicated has been the world of traditional technology: waterfall-style design, planning horizons of three years, annual goal setting. All of that works within a predictable world. And what seems to have happened is that the world generally has shifted into being more complex.
So when you're [00:58:00] trying to drive new products, you need a much faster turnaround time. You need to be able to scan what your market is telling you and respond to it very quickly. So in my simpleton mind, I broadly see the world of waterfall projects and very plannable IT, where you're trying to make everything efficient
the whole time, as being quite aligned to the world of the complicated. But the world of [00:58:30] product-based work, where you're putting stuff out, testing things and pulling it back in, and the world of agile delivery, are much more the domain of the complex. That can then relate to how you're structuring your organization, how you're leading your organization, the tools you're using, and things like that.
And I think what really drove that shift, in my mind at least, was the advent of cloud technology. Because previously, whether you liked it or [00:59:00] not, you had a certain lead time on your technology. So even if you had product teams doing really sexy, fast things, they would have struggled because of things like lead times on technology. Cloud completely removed anything like that.
So the tick speed, the ability to iterate at your own speed or at the speed of your market, really became the main thing. So the pivot of the IT operating model from operating in a complicated way to operating in a complex way is, [00:59:30] in my mind, core to any digital transformation. And multi-speed; I think we've touched upon that as well this season.
If you're talking about automation or standardization, that's very clear; that isn't complex. So please don't try to make it more complex than it is. If you just keep it clear and crisp and automate, things can run at a lower pace, or with less complexity. Mm-hmm.
And I think it's not one-size-fits-all, where everything needs to be [01:00:00] in Kanban or everything needs to be in two-weekly sprints, et cetera. I still see that happening a lot: we choose one way of working, but you need adaptability per team, per context, per product, to decide what the best speed, complexity and way of managing tough decisions and impediments is.
So it really depends on the context within the organization itself as well, including [01:00:30] leadership styles and maturity levels. I think a good demonstration of the Cynefin framework's structure is that you might have been in complex or complicated, but COVID pushed us very quickly into chaos.
Yeah. And then organizations had to navigate their way back to either complicated or complex. It was a good example of how quickly an external event can occur and push you into chaos. What you need is to [01:01:00] design your entities, mechanisms and capabilities to be able to cope with external things you just didn't know were coming.
And to be able to adapt quickly. The organizations that could adapt fast, and we saw this in COVID, won; those that couldn't really struggled and had massive issues. So it's a great way of framing a situation and then understanding what you might need to do next, or the next best step, et cetera.
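To make the domain-to-approach mapping in this answer concrete, here is a minimal sketch in Python. The domain names follow Dave Snowden's published Cynefin framework, but the pairing of each domain with a way of working is just our reading of the conversation above, not an official part of Cynefin, and names like `Domain` and `suggest_approach` are illustrative rather than from any real library.

```python
# A minimal, illustrative sketch (ours, not Snowden's): the Cynefin domain
# names come from the published framework; the suggested ways of working are
# simply this episode's pairing of domain to delivery style.
from enum import Enum


class Domain(Enum):
    CLEAR = "clear"              # ordered: repeatable, well-understood process
    COMPLICATED = "complicated"  # ordered: hard, but few unknown unknowns; highly plannable
    COMPLEX = "complex"          # unordered: iterate fast, test against the market constantly
    CHAOTIC = "chaotic"          # unordered: act first to stabilize, then reassess


# Hypothetical pairing of domain -> way of working, as discussed in the episode.
APPROACH = {
    Domain.CLEAR: "standardize and automate; optimize for efficiency",
    Domain.COMPLICATED: "plan up front; waterfall-style delivery can work",
    Domain.COMPLEX: "product teams, short iterations, fast market feedback (agile)",
    Domain.CHAOTIC: "take one step forward, observe, then re-plan",
}


def suggest_approach(domain: Domain) -> str:
    """Return the delivery approach this episode associates with a Cynefin domain."""
    return APPROACH[domain]


if __name__ == "__main__":
    for d in Domain:
        print(f"{d.value:>11}: {suggest_approach(d)}")
```

Running it prints one suggested way of working per domain; the only point is that "which domain am I in?" comes before "which delivery style should I use?", which is exactly the pivot from complicated to complex described above.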
Very good. We [01:01:30] have covered the Cynefin framework a lot on the show, and it's very close to where I start a lot of the conversations in my day job. So thank you for the question, Daniel. That is a real cracker. Marcel, who have we got next? Yeah, so Andrea Kis. She was asking, and we've discussed it a little bit already, but it's a very valid question.
How can we all influence the fast-moving tech world to never lose its humanity? Or how can we keep up the drive and hope and not give up? [01:02:00] Maybe we should turn it the other way around. Let's start with humanity, who we want to be, and then look at tech from that perspective.
And right now it seems that tech has the highest priority, or is the most dominant one, obviously also because the cultural side, the "who do we want to be", is under pressure at the moment, and it's very diverse across the entire world. So it's quite tough. But, referring to Ska again: turn off the laptop, [01:02:30] sit together and just have real conversations. Slow down, go back into seasons, and then see how we want tech to help us. And create those small moments for yourself, in your teams, in your organization, where, if you have the influence, you can bring people together and have those conversations.
That's very good. But it always makes me think about the story of the individual who created the world's first search algorithm. They [01:03:00] created an ecosystem that gave you the best possible results. So the implementation: check, it was very useful to people. Their motivation was morally sound, trying to make a better world.
So you could get hold of the information. But there were massive unintended consequences: that tech and those algorithms have been expanded and have now created massive echo chambers, an inability for society to have good discussion, people shouting at each other on the social platforms, et cetera.
Mm-hmm. So we've always got that balance: everybody might agree the intent [01:03:30] was good, but the actual outcome became corrupted quite quickly. And I don't know what the answer to that is, because we didn't perceive it at the time we were breaking new ground, because we didn't understand what that new ground would bring.
And then it's only after, ten years later, that we've gone: oh yeah, that did really wreck a fair bit of society. So the theming is right; it's just, how do you stop things you don't understand from having unintended consequences? The unknowability of [01:04:00] the impact of social media, in my mind, came down to scale.
Yeah. When you had people ranting at each other on the village square, there was a bit of humanity in that, which is that they could see each other and see the effect they were having. Correct. You could see each other and you could see the impact you were having, and someone might also punch you.
Yeah, that was the real feedback. It wasn't a thumbs-down; you got a real punch. That's right. It would potentially have consequences, and [01:04:30] also everybody would know you said it. Yeah. There was a lack of anonymity, and the number of people you were dealing with was so much smaller. When you scale that up to a global level and you have these sorts of polarized conversations, I tend to agree that I don't think you can wind anything back. It would be ridiculous to say it should all be shut down, because that's simply never going to happen, and I'm not sure trying to take an intentional backwards step is necessarily a good idea either. But it certainly has to play [01:05:00] out.
Yeah. One of the things you said, Es, that I liked very much was: take time for those conversations. Take time to recognize the seasons, to do these kinds of physical, human things. And then apply that to what you are doing with technology. So if you are creating something, even if it's a new agent, a technology transformation of some description or a new program, whatever you do, don't make change management the bit where you [01:05:30] crowbar in the human aspect and then schedule an email once a quarter to communicate what you're doing. That is insubstantial, and it's thin in its thinking.
Where most good IT design should start is human-centric: what's driving this, and how should it land for the customer, where the customer in this case is a stand-in for the human at the center of what you're going to subject them to. [01:06:00] And think about it in a way that's additive, not just in a way that makes your product more efficient, or whatever it might be.
Marcel, who's next? A big shout out to one of our loyal followers, especially on LinkedIn, always very positive with the feedback: Ezhil Suresh. The question is: how do you prepare and select topics, and how far in advance are episodes done and recorded? Mm-hmm. [01:06:30] Good question. So, guests come in via all kinds of channels.
Via our own network, that's one; good keynote speakers at events; via our vendors, but also clients. And when we have selected a strong guest, we will first prepare a pre-briefing call. The pre-briefing call is 30 minutes, for a meet and greet and to discuss the topics for the podcast.
Then [01:07:00] we always create a so-called production sheet with all the details in it. That's centrally stored on Teams, and all the hosts, so Dave, Rob and Esmee, give their feedback in the production sheet. While recording, Dave looks at the production sheet and just goes with the flow on the topics.
So that's basically the whole process. After recording, of course, there's an edit phase with our editors, Ben and Louis. [01:07:30] They will take out the slips of the tongue, add music, et cetera. Then we give a draft version to our guest for approval; often PR has to be connected to that, or it comes from the client.
And when it's approved, we plan the release. Marcel, can I tap into this process? Because now it sounds like it's all smooth and by the numbers. Well, no, it's all fluid. Let's talk about a well-prepared production sheet: how many [01:08:00] bullets need to be in there? Oh no, this again. Let's lift the lid.
Is it five? Four? Six? Tell our listeners the truth about what's happening. Let's just say to the listeners: the way production sheets are prepared by me and the way production sheets are prepared by Dave can be somewhat different. Yes. And Dave often complains. Yes. My preference for a production sheet is [01:08:30] five to seven bullet points, absolutely no content.
Five to seven bullet points that just give me a structure off which we can then collectively have a discussion. I think you go for five to seven pages, don't you, Rob? It's like you're reading a novel. I just like to provide you, our primary host, with all the information you would need to make a thrilling and scintillating podcast.
I like to think I'm providing you more. It's all goodness, to make sure you get what you need. [01:09:00] It's all goodness. Yes, but you're right, the production sheet is a dark art: getting just enough information to give us a little bit of steering on where we need to go, and to give the guest a heads up on the subject matter.
Because some of our subject matter isn't just general conversation; there's quite a lot of depth in it. So we go through a preparation loop for each one. The guest choice aspect is [01:09:30] driven by the process that Marcel talked about, and then we make a call on which guests we're going to have, because across the season we have a loosely coupled structure, which means we want people to be able to dip in wherever they want.
We don't necessarily want to editorialize much beyond the two miniseries we talked about at the end of the themes there, which is the first time we've really deeply [01:10:00] editorialized something. Most of the time we let it be loosely coupled, and that helps with scheduling and various other things. The other thing I think it does is a byproduct of that editorial approach.
And something I'm really delighted about is that we hold a mirror up to the industry. You can see it in the evolution of the show: for example, we've charted the rise of the AI conversation and where that's got to, from the very [01:10:30] early "OMG" phase of ChatGPT through to the sorts of stuff we talked about in the first three themes today.
Real changes in societal aspects and societal points of view on AI. Now, you wouldn't necessarily have got to that if you'd sat down to try and write "what should we cover?". So there's an element of the structuring of the production that's important like that, I think.
Yeah. It's a living, breathing narrative arc that we create, isn't it? We have an idea, and then, through the experiences we have [01:11:00] when we meet all the interesting people, which for me is the biggest part of the enjoyment of the podcast, we vary on the way through. What I like about these end shows is that we consolidate on what we thought and talk about the things that surprised us.
Yeah, and if you want to get a delta between those two things, go back to the Gene Kim episode we talked about, the Christmas episode in 2024, and just compare it to what we've talked about now; we'll let you be the judge of how [01:11:30] well we did. And that might give you a clue as to why we don't over-editorialize the show.
Marcel, we have time for one more, I think. Yeah, one more question, from John Eaton-Griffin. He's also a host on Cloudy with a Chance of Dyslexia, and it's quite a nice question, food for thought, I think: did any guest you had this season actually get you to change your thinking dramatically?
We had amazing guests. [01:12:00] But there's definitely one that stands out for me. We've had him on twice, and it's his insight on simulation theory. I remember the first time we spoke to him about it, and for months it played on my mind. I started looking into it, I started to think about it, and it's been like an 18-month journey.
And I [01:12:30] actually now think, blimey, there really is something in this. For me it was a big cognitive shift. I hadn't really considered it; I just thought, oh, what are they talking about? And then I got into it and went, oh man, there really is something quite serious behind this conversation.
So that's been a big change for me. Es, how about you? Oh, it's hard to choose, but I think the one where we talked about robots at the borders of Mexico. Oh, yeah. That was [01:13:00] mind-blowing. But it was also Petra talking about it, and she was so passionate; I was very impressed. She combines different types of intelligence, and that's really her talent.
So I think it was beautiful to have her explain the law, the impact on human beings, the use of robotics in border security, and how that impacts real human lives. Yeah, I was very [01:13:30] impressed by that episode. That was on my list too. I was almost to the point of being a little bit shaken by some of the things she was saying on the episode.
Partly it was the robot dog use, and partly it was being sentenced by a faceless AI. If you're crossing a border with your family and you're in a scared state, a faceless AI might make a [01:14:00] decision about you and where you might end up. The way she framed that was deeply scary.
Hmm, it goes back to the AI ethics point. Yeah. It really put you in the shoes of the person who might be on the other side of that. Now, on the other side, in the development of the technology, there may well be very good intent about efficiency and trying to do things in a fast way, and processing, and all [01:14:30] those sorts of things.
But the human element at the center of it, to me, goes right to the heart of bad implementation of technology, or technology implemented for the wrong reasons, and how it can negatively impact society. And it's already happening. It's not a book, it's not a "this might happen".
Mm, it's already happening right now. Right. Marcel, what about you? Yeah, [01:15:00] it's not one guest; it's more the whole set of live episodes that we did with AWS, Microsoft and Google. Mm-hmm. The mind-blowing technology development being presented every day, every hour, in keynote sessions, and the change that's being released on the whole world.
Yeah, amazing. And flabbergasting, and also a little bit scary, I think. We talked about speed, but [01:15:30] isn't it going too fast? We see a lot of end-user solutions; how will they be implemented in organizations? But that whole shift: I'm already old, but I have never seen technology explode the way it has in the last year, year and a half.
It's amazing what's happening, and that really changed my thinking and how I look at things. Yeah, mine were [01:16:00] actually largely talked about in the episode, but I had TKA on the list for sure, and her different way of viewing organizations from an anthropological perspective, with some of the customs and traits of our ancestors and how they're visible in society today.
I thought that was deeply fascinating. Petra Molnar was also on my list. We've already talked about it, but the raw edge of [01:16:30] technology impacting on humans, I thought she communicated beautifully and terrifyingly. And then finally, Christo Walker's position on sovereign AI. It deepened my understanding of sovereignty and what sovereign means in the current geopolitical landscape.
But also, I'd not really thought before about AI as being representative of a nation state, and about how those AIs themselves might interact with other nation states as [01:17:00] a likely future. So for me, there's something in that that I think is really resonant.
Thank you to our listeners who took the time to send in some feedback and some questions. We only had a chance to do a few of them; we had a delightfully long list, and we'll try to come to the rest, maybe in [01:17:30] season five. We would love to make the listener dialogue a part of the main show.
So if you're up for that and you'd like to get involved, please do get in touch with the show; Es will give us more information on that a bit later on. Can I do a voice of the co-host? I just have one question for us. Yeah. What is the most used sentence this season? I'll give you three options, maybe.
Oh, you go; maybe you can have a fourth option. Oh, man. Oh yeah. Go. Option [01:18:00] one: Marcel, are you on your mic? Ah, yeah. Okay, like that one? Yeah, that's a frequent one. Option B: are you on a PC or a Mac? Oh, brilliant. Oh yeah. Option C: did you push record? They're all, you know... that's before we answer the question.
They're all associated with Marcel's role. No, no, no. They absolutely are. Well, neither the recording one nor "are you on a PC or a Mac" are. True. We should elaborate on that: if a guest comes in on the platform, we usually struggle a bit with the audio settings.
And then one of the first questions... we usually just stop and stare at Rob, to be honest, or at least Marcel does. Rob does the tech support for the podcast, and he's extremely good at it. We all know that person who's the tech support; well, in our team, that is Rob. So his sentence is always: are you on a PC or on a Mac?
And then we actually start to applaud when it's [01:19:00] a Mac and boo when it's not. Well, do you remember Dave Snowden's response? He went, no, I want a proper operating system, and then proceeded to declare Linux. So we're like, whoa, we've never had this before; you're on your own, Dave. Is there a control panel on it?
I'm not on a calculator. That's true. I'm still questioning whether you're on your mic or not, but hey. No, it's B, because Rob has asked this question at least 150 times in the last two and a half years. So it must be. [01:19:30] I'm gonna go B as well. I would like to think it's "did you push record", because of the lost content in Chicago, which we won't go over and open that wound again, so I'll concur with B.
'cause it does come up a lot. Are we correct? Yay. It is B. It is B. Excellent. Yes.
All right, so that's it. That is a wrap on [01:20:00] season four. A really good season for your first season. Yes. Woo. Will there be a second? Well, yes, from my point of view, yes, but you know, in this case it takes four to tango. Oh, no. I think we're all in. I mean, a brilliant first season.
I loved the edge that you brought to many conversations, and the surprise video recordings that we did at the [01:20:30] events were always quite cool. Yeah, it was quite cool. So Es will be back next season, Rob and I will be back next season, and Marcel will be pulling us all forward next season.
We have got some stuff in store. We are still working it out, and we'll be working on it over the summer. We will be doing more access-all-areas episodes, so those will be like Cloud on the Rocks, where we had a bunch of friends in, and we'll do slightly rawer conversations. We are looking at another [01:21:00] industry vertical to deep dive into next season.
We will obviously be covering all of the main hyperscaler events; those are quite key to our calendar and help us track the technology announcement aspect of what we cover on the show. And then, if you like, the rest of the show is "and then what do you do with that?"
That's the way I think about it. So those are staples in our calendar. And of course the show wouldn't be the show without the guests, as we've talked [01:21:30] about. We've name-checked a number of guests, but we've only scratched the surface of how strong the guest lineup has been in season four. A huge thank you to everybody who took the time to come on the show, prepare for the show, and work with us on making some points about the big things going on in our industry at the moment.
You are our lifeblood, so thank you very much to all of the guests this year. But there are other things coming in season five, a [01:22:00] couple of new things that we hope will drive increased interaction with a wider audience of listeners. So Es, why don't you tell us about our AI babies? Yes. So we're bringing in a baby.
And the question is: who is he, or she, or it going to look like? Is it going to be more of a Rob, or a Marcel, or is it going to be a morphed version? We should also talk about that. But at least it's going [01:22:30] to be a bot, an AI bot, obviously, that you can ask questions about all kinds of topics we've been discussing.
So that's going to be exciting. And we're also extending the conversation on Substack, which I think is also very exciting: thought-provoking questions and conversations, experiments, maybe even behind-the-scenes reflections, and guests dropping in with further reflections after they've been on the show.
That's all on Substack, so please also follow [01:23:00] us there and put in your questions, whatever you want to share with us, and then we can have a continuous conversation throughout the season. If you would like a preview of the Substack, Es has done an absolutely amazing job starting to build it out; you can find us as the Cloud Realities Podcast on Substack.
Please do go and have a look there. We'll be launching it more formally as we think about season five a bit later in the year, but the build has started. So we would love you to go over there, [01:23:30] join us, and make that a little bit of a community conversation, and we'll bring it into the main show if it starts to develop.
So yeah, go jump on that. Now, we end every episode of this podcast by asking our guests what they're excited about doing next, and given where we are, it's become a tradition to ask what our colleagues are excited about doing over the summer. So, Marcel, what are you excited about doing over the summer?
Yeah, so normally I go to my island, Bonaire, whose [01:24:00] economy is supported by Vanderburg Industries, but this year it'll be Aruba at the end of the year. During the summer we will go on holiday to Italy with the car. The big question is: what's the Department of Education in Bonaire going to do without your funding?
Yeah, just think of all the uneducated people who are now going to exist in that society. They thought they could do a trade deal with Marcel [01:24:30] about two, three years ago, and they've built that into their budget. Yeah, not this year; I'll go to Aruba in the Christmas season, but yeah, Italy with the car, and the dog. Whereabouts in Italy are you going? First to Lake Garda, then to the Marche, and then also to the Dolomites. Three weeks in total. That sounds stunning. Yep, stunning, what an amazing place to visit. And a big fan [01:25:00] of Italian wine, right, Robert?
Oh yeah. Brunello, by far and away; I'm close to the grapes of that region. Well, you need to go. There are a few people I know who can point out local vineyards as well that are very good, that you can frequent. So I shall let you know those. I'll bring you a bottle.
Yeah, they're very good, I bet. I love the wines from Tuscany particularly, and Umbria would be nice. Well, you've got the mic, Rob: what are you excited about doing [01:25:30] over the summer? The excitement's done; I've had my summer holiday already, because I've got two doing exams, so I was able to go outside the school holidays.
We took advantage of that and the price drop, and now I've just got to look forward to picking up all the workload while you lot aren't here. Yes. So I've got work now; my excitement is over. But no, we just enjoyed a trip to the States up the East Coast, which was very good, and the kids liked experiencing [01:26:00] American culture for the first time.
Now, we haven't talked about this yet, because our holidays crossed over a bit and we're actually recording the season finale a little bit later than we normally would have; because Rob was off gallivanting, we didn't get a chance to do it. So I am actually on holiday as we are recording this. But Robert, the big question I want to ask you is:
how good was the pizza at Lombardi's? Ah, now, there were two highlights, because we did Katz's Deli as well, but oh yes, Lombardi's was very nice, which was [01:26:30] the first American pizzeria, in New York. It was very tasty; I really liked it, because it's still quite back to basics in a sense. Yes.
It's not gone fancy and flash; its pizza remains as it was. There are trendier places to go to get pizza these days, perhaps, but it's just like one of the old-timers, I think. And going back to our conversation in Chicago, obviously pizza being a prime topic, I had to go to the original home of New York pizza.
Quite right, quite so. Out of 10, what would you give Lombardi's pizza? [01:27:00] It is pretty high. Mm-hmm. I can't quite... You're allowed to use one decimal place. I'd do a nine, a 9.5. It was very, very good. We did actually have the conversation; I said I can't remember having a pizza that tasty for a long time. So, yeah.
I'm sure I can't give it full marks, because... yeah, fair enough, you always hold a little bit back. Fair. But it was of a high standard. [01:27:30] Highly recommend that place. Highly recommend it. Es, what are you looking forward to doing over the summer? We're going to hike in the German Alps, so a lot of hiking, staying in wellness hotels, and going by car.
Oh, very nice. Zen. Yes, very Zen; a digital detox. Yeah, just enjoying nature and some time off. Do you think you'll go to one of those wellness places that's actually slightly cultish? Have you seen [01:28:00] Nine Perfect Strangers season two? No. They're in one of those Alpine wellness places.
You might want to have a quick look at that before you go, just in case. Isn't the current White Lotus at a wellness place in Thailand? Oh, that's... I thought that was season four. Let's not get into all those horror scenarios, please. Well, maybe you could star in series five. I want to go stress-less, you know, just go in and enjoy.
Might be useful to prep yourself, just in case, you know. No thanks. [01:28:30] No, not getting a big amount of enthusiasm for that. No, no. We want something to talk about when you come back. No. A digital detox, you know; I just want to go offline for two or three weeks and then just see what happens.
And you, Dave? Well, like I said, I'm actually already on holiday, because we're recording this a little later. I'm in Cornwall in the UK at the moment and just about to have a family holiday for a couple of weeks. So, [01:29:00] a bit like you, just putting my phone down for a bit and focusing on the world around me, rather than running after things the whole time, is something I'm very much looking forward to.
And, this sounds a bit cheesy, but I'm actually really looking forward to starting to work on season five. Every time we do these things, we learn a lot from each season, and I'm [01:29:30] particularly looking forward to season five because I think we have got maybe a slight repositioning coming.
We're working on that at the moment, and maybe slightly broadening some of the conversation, but without losing what it is to be the show. So I'm really looking forward to doing that, a little bit of additional creativity. So there we go. We really are finished this time, end of season four. A couple of big thank-yous.
Again, a big thank you to all our listeners. The show wouldn't be the show without you. We get to do [01:30:00] great things and we love doing the show, so thank you for your time and your attention. And we would love to hear from you, so please do engage with us, whether it's on the email that Es will tell us about in a sec, whether it's on the Substack, whatever it is, come get engaged. And again, to our guests, who are the lifeblood of the show and bring so much spirit and inspiration: we couldn't do it without you.
So thank you very much to you. If you would like to discuss any of the issues on this week's show and [01:30:30] how they might impact you and your business, please get in touch with us at Cloudrealities@capgemini.com. We're on LinkedIn and on Substack, and we'd love to hear from you, so feel free to connect and DM us if you have any questions for the show to tackle.
And of course, please rate and subscribe to our podcast; it really helps us improve the show. A huge thanks to our behind-the-scenes team: our sound and editing wizards, Ben and Louis, our marketing expert, Kishore, our producer, Marcel, and of course to all our guests and all our listeners. See you in another reality next [01:31:00] season.