Cloud Realities

Sovereign AI is set to become a cornerstone of national strategies, enabling countries to develop their own AI infrastructure and capabilities, thereby enhancing strategic resilience and fostering innovation. It will also play a crucial role in sectors such as healthcare and finance, driving advancements and improving security in an increasingly interconnected world.
 
This week, Dave, Esmee and Rob talk to Chris Stokel-Walker, journalist, author and communicator specializing in tech, AI, and digital culture (including YouTube and TikTok), about the notion of sovereign AI and how that might develop as part of the AI world going forward.

TLDR
04:50 Rob is confused about Net Neutrality
08:40 Cloud conversation with Chris Stokel-Walker
45:42 Superintelligence and how the public and policymakers understand and react to technology 
54:50 Going to an art exhibition opening and e-textiles 

Guest:
Chris Stokel-Walker: https://www.linkedin.com/in/stokel/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett:  https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Creators and Guests

Host
Dave Chapman
Chief Cloud Evangelist with nearly 30 years of global experience in strategic development, transformation, program delivery, and operations, Dave brings a wealth of expertise to the world of cloud innovation. He is also the creator and main host of the Cloud Realities podcast, which explores the transformative power of cloud technology.
Host
Esmee van de Giessen
Principal Consultant Enterprise Transformation and Cloud Realities podcast host, bridges gaps to drive impactful change. With expertise in agile, value delivery, culture, and user adoption, she empowers teams and leaders to ensure technology enhances agility, resilience, and sustainable growth across ecosystems.
Host
Rob Kernahan
VP Chief Architect for Cloud and Cloud Realities podcast host, drives digital transformation by combining deep technical expertise with exceptional client engagement. Passionate about high-performance cultures, he leverages cloud and modern operating models to create low-friction, high-velocity environments that fuel business growth and empower people to thrive.
Producer
Marcel van der Burg
VP Global Marketing and producer of the Cloud Realities podcast, is a strategic marketing leader with 33+ years of experience. He drives global cloud marketing strategies, leveraging creativity, multi-channel expertise, and problem-solving to deliver impactful business growth in complex environments.

What is Cloud Realities?

Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.

Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.

They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.

Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.

Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - cloudrealities@capgemini.com

CR092: Sovereign AI with Chris Stokel-Walker, Journalist and author
[00:00:00] Yes, I have also got that document open in front of me.
You wouldn't believe how many people don't do that. We get that with the presenters sometimes.
Welcome to Cloud Realities, an original podcast from Capgemini. And this week, a conversation show exploring a different aspect of AI and AI development: the notion of sovereign AI and how that might develop as part of the AI world going forward. We might also look at how some of the narratives are being formed around AI at the moment.
I'm Dave Chapman. I'm Esmee van de Giessen. And I'm Rob Kernahan.
And I am delighted to say that joining us for that conversation today, we have got Chris Stokel-Walker, journalist and author of books like How AI Ate the World and TikTok Boom, [00:01:00] amongst others. And Chris, I think you also present and lecture. Good to see you today.
How are you? You just gonna say hello? Yeah, not bad, thanks, Dave. Yes, I am a jack of all trades and absolutely a master of none. At least you're honest about it. Exactly. Self-aware. Self-aware, that's it, isn't it? That's the corporate version of honesty. The self-diagnosed version. And whereabouts are you today? Where do we find you? I am in not-so-sunny Newcastle, but it is a place close to us. We were talking earlier about how we get on despite sharing some commonalities in geography, but also a few rivers and a big city that divide us. We are all united in hating Sunderland. Sorry to everybody who is a fan. Poor old Sunderland. There we go. But you're right. I'm from Middlesbrough, and for those who don't know, that area of the world is particularly noted for some of its, uh, rivalries. Rivalries. Yeah. Right, let's just call it rivalries and leave it there.
Robert's here. How you doing, Robert?
Roberta? [00:02:00] Uh, I'm okay, David. How's it going? It's been a, uh, quieter week, and that's put me more in zen. So we're all good. I'm feeling, I'm feeling positive before I enter into travel chaos over the weekend. Yeah. Yeah. I heard you, I heard you heading off. So you've had a nice relaxing week this week. Next week, a bit of travel. Yeah. Yeah. Yeah. So it sounds like the job's really quite straightforward, doesn't it? It's ticking along nicely, isn't it? How's, how's what you've been doing?
I've actually had the complete opposite. Yeah, me too, by the way. I've been to Parma in Italy for a client. Uh, so workshops all day, which is intense. And then you have a lot of carbs, pizza. I mean, that doesn't sound awful either. And we actually had really good workshops. I'm sorry. I know that others have experienced different types of interactions with clients, but for, for me, it was really good, but I'm still digesting all the food, and it's so good. So I want to tap into the pizza. It's, it's, it must be [00:03:00] Italian. I know you've been talking about other pizzas, but... Oh yeah, well we did the great Chicago pizza. Yeah, yeah, I just, it's, ah, yeah. Important journalism, Chris. Working our way around deep pan pizza shops.
Well, what I loved was, what I loved was the head of AI at Microsoft was a, was a pizza aficionado. It was absolutely amazing. The depth of this man's knowledge about American pizza and its cultural past was outstanding.
It was one of those little moments. We, we had this little, you know, like a little bit on the show where we'd go out the night before. We were traveling, covering the Microsoft conference, and we'd go out the night before. We'd specifically go to deep pan pizza places, and then we'd come on and, like, rate the pizzas and have a bit of a chat about them.
And yeah, literally the last guest that week. As if you planned it like that. Knew everything you want you could possibly know about Deep Pan Pizza. So like we, so the bit he comes on, he's like head of AI or something. We did about 15 minutes on, you know, his role as head of AI and AI and then we did like [00:04:00] 25 minutes on, on Fargo Pizza.
A, uh, podcast with its finger on the pulse, David. So that's what it is. I'm discussing the important issues of society today. Yeah, not so sure that deep pan pizza is pizza though. I always knew that. Yes, thanks,
Chris. Controversy. Controversy. You had one that had, like, literally like a pie crust. Which I just was not a fan of. I was quite taken. I was quite taken by that one. I quite liked it. It's very filling. Do you remember the one where, uh, they were from Detroit? And they wouldn't say anything just in case they offended anyone. Because of the pizza, I mean, it's like... think Northeast rivalries in the UK. Yeah. Detroit, New York, Chicago, yeah,
yeah. Anyway, Robert, look, you've had plenty of time sitting around, having some mid afternoon snoozes, feet up. What's been confusing you this week? Losing track of time.
Yeah, though, uh, it's net neutrality, David. It's net neutrality. Neutrality? Now, it's a thing that's gone under the radar, right?
[00:05:00] But recently, uh, at the start of January, the American rules were reversed again, which says that carriers can bias traffic. Yep. So this concept of all traffic on the internet should be treated equally. Were you a bit thin for subjects this week? This is really important, David. This is really important.
Keep going, keep going. Thanks for the support. Um, and they reversed again, so the idea that, you know, without the internet, our lives don't work, right? It's like it would have to be a very different lifestyle if you couldn't have the internet these days. And so the importance of that carrier, you know, shunting your data around equally so you can use any service without bias is really, really important.
And what we've just seen in the U.S. is a shift back to carriers can bias information. And we've seen that yo-yo every four years with the administrations, and it's calling into question again, uh, net neutrality, and the carriers could potentially start to interfere. And the [00:06:00] bit I'm confused about is: where is that going to go?
Will that creep into other nation states? Because it could get to the point that, under one internet service provider, you can't get access to a service with the same quality as maybe another one, and then you start to get this fragmentation of standards, where something that's so cohesive to society suddenly, potentially, could be under threat again.
So the confusion is, where's all that going to end up? Because it's really important that net neutrality sustains, because it literally connects our society together.
So where is the, where's the regulatory conversation? So there isn't one, and that's the thing about the internet, isn't it? Who controls the internet? ICANN, address names, DNS. But carriers have, carriers have aspects of regulation that are around them, right?
And so what the first administration did was say, you can't bias, and the regulation said you can't bias traffic. So if I want to use Netflix on, you know, uh, you know, any carrier, I get the same service up to the speed I've bought.
Then the administration flipped to say carriers can [00:07:00] interfere, and they don't have to, and they can bias traffic. And then it flipped back, and flipped back again. So there's this thing where the regulators keep changing what the carriers can do, and then the carriers could potentially interfere, and then you get this, this concept of: what was a, uh, nice system that's grown up by agreement, and is, um, decentralized and works very well, is now something that could potentially start to be interfered with.
Capitalism ruins everything, eh? It does! Oh my god! Oh my! The first, the version that was for society kind of worked, and then Friedman made it all horribly go wrong, didn't he? For the greater good, we, you know, yeah, yeah, don't, don't get me started, I'll be ranting in the chat for a while on that one.
Um, where am I going to go? I did have a, until you said, I literally had a response until you said that, and it was like, no, that, I mean, that's literally the perfect answer. Um, I think we should just leave it at that. There's no confusion, as long as we
all agree that capitalism wrecks everything. Yeah, I think we should just leave it at that, shouldn't we? [00:08:00] Yeah, that's it. Bye. Answer solved. We're done.
Chris, glad you came. Good. I'll give you an ending for that. Good, uh, good contribution. Good. I feel for Dave on that confusion. That's not really in your sphere of interest. I can't even remember what it was. It's really important! When you go back over it... I was like, yeah, I can see where he's going. Why do I bother? Hey, we love you, Rob. Who'd work with Dave Chapman?
All right, then, on to today's subject. Okay, so we have talked on the show a number of times about sovereign cloud, and why sovereign cloud, uh, is of increasing importance in the world today. And, and just briefly, before we move into [00:09:00] sovereign AI, Rob, why don't you set the table from a sovereignty point of view? Like, why is it important? And why might it be increasingly important over time?
So, um, sovereignty is... I mean, the world has ever-increasing geopolitical tensions, and people are looking at how supply chains deliver critical national infrastructure and such things, and we've seen regimes of sanctions come in and various things. So when people look at where they put their important processing, they think very hard about who they're using.
The security of that supply of compute, and the availability, et cetera. So what we see is the rise, especially in Europe, of the need for sovereign clouds. And a lot of the hyperscalers are moving towards that, deploying it so that the compute organizations are using, and rely on critically day to day, is underpinned by a supply chain that's controlled by that country's legal system.
Yep. So it can't just be disconnected and switched off, et cetera. So there's this thing of: it'll stay there for as long as I [00:10:00] need it to be. And the second part is that when they want to host particularly sensitive things, those clouds are run and managed by organizations that are vetted and known and trusted.
So we have security of data, but more importantly, security of supply chain of service. And so sovereignty has become a big thing. Maturity is rising in organizations' understanding of it, but also we see the CSPs, the hyperscalers, starting to produce ever more sophisticated solutions for this: disconnected clouds, clouds that are, you know... you've got the sovereign cloud from AWS launching in Germany in 2025, et cetera, you've got partnerships all over, and we see that maturity rising as the understanding of why sovereignty is important begins to kick in more and more.
Thanks for that answer. Chris, when you're thinking, then, about sovereign as it applies to AI, is it the same thing? Is it, is it a similar set of concerns that are going on in your mind, or does the need for sovereign AI come from a different place?
I think that, you know, Rob's point about [00:11:00] the requirement to kind of have control if something goes wrong is still vitally important.
I think that that is particularly important given what we know about big tech in the past and their kind of control of this stuff, their willingness to dictate the rules. Often people overlook this, and frankly, our listeners won't be among those people, because they will think about this day in and day out and probably have it keeping them awake all night.
But ordinary people, if you think about the person on the street, they will often build their livelihoods on social media, where the rules of the game can be dictated by, you know, individuals. So you can have a change in policy that can alter things significantly. So I think for sovereign AI, there is this risk of putting all of your balls in one basket and eventually having, I suppose, a private company saying, actually, you know what, we are changing how we operate our system, and there is a race going on, so they change quite a lot.
Uh, that materially affects the performance of those models. So the idea of having a sovereign [00:12:00] AI, I think, is a good kind of mitigation against that risk. But then at the same time, I think there is a more fundamental thing, which is: it's all well and good us talking in English on a podcast in the UK about this sort of stuff and recognizing that AI systems are unlocking real opportunities for us, but what happens if you are speaking Swahili, or if you are trying to develop a business somewhere else in the world that the training data these systems are based on is not so cognizant of? Then suddenly, I think, sovereign AI becomes a real game changer, because actually it allows you to tap into your own training data and capitalize on it.
And to not only have that moat so that you are kind of protected from any business disruption, but also to kind of fine tune it to your own language, to your own specialisms, and then ultimately to make it work better.
Right. So in, in your mind then, when you think about sovereign AI, it's [00:13:00] training AI models to the needs of that country.
Is that what you've got in mind? And, and does that mean also that, that you're developing the technology in that country as well?
Yeah, I think so. And I think that's becoming... it's, it's interesting, because actually we've seen this go in and out of vogue over the last two-and-a-bit years since the advent of, you know, ChatGPT kind of, I guess, pushing this to the forefront of lots of people's minds.
So we had, for instance... I wrote a story in, I think, March 2023 about the sort of potential for sovereign AI, the opportunity that that unlocked, and individual attempts at this. We'd seen other countries, mostly in the Nordics and elsewhere, trying to pursue this pre-ChatGPT, but the opportunity I think they saw from OpenAI's software was that actually they could start to do this more readily.
What's interesting is that that then went away for a long time, I guess because of, frankly, commercial incentives and the race continually going on. But if you look [00:14:00] at the last few weeks, we've obviously seen, uh, you know, Donald Trump's attempt to kind of, you know, regain supremacy in the race for AI in the United States with this kind of half-a-trillion-dollar project.
We also saw, frankly, Japanese businesses, um, being sort of, I suppose, encircled with the security of AI, with a joint venture between SoftBank and OpenAI that's designed to, uh, I guess, roll your own models to enable those to operate better. You know, I think now that the hype has settled a little bit, there is an idea of, okay, well, everybody went mad for the last two years.
And truthfully, actually, a lot of these software providers went mad for the last two years. So now we have the breathing space to think about, can we do something that works better for us? Because, you know, if you're a German trying to rely on a large language model, you're going to have good performance when it comes to patent applications.
If you want to ask an LLM about patent applications, great, [00:15:00] but if you want it to do conversational stuff, then, because of the way that these models are trained, a huge hoovering up of all the data on the web, the majority of that is in English. And that, I think, has implications for bias, for representation, and a lot of countries, and companies within those, are trying to think a little bit more about it.
I think the, the complexity increases even more if you have an international company, right? So where does that leave you? Does it cross borders, or how does that work?
Yeah. And this is one of the problems, right? Is, is trying to figure that out, particularly not just cross borders, but also I guess cross jurisdictions.
The EU and the US are increasingly going in divergent directions around this. And I guess, you know, we can look to the history of big tech more generally, and particularly regulation around tech, and see that actually what tends to happen is you go for the kind of lowest common denominator, the, you know, the easiest, in most circumstances, the easiest kind of [00:16:00] barrier to get over.
Or if you are one of those organizations that does cross those borders, then you kind of, you build your wall really high and you say, actually, you know, we have to meet these demands. Usually, generally, that is the European Union, but also when you look at what parallel jurisdictions are doing, I mean, Australia with its kind of very strong anti-tech approach, and increasingly so the UK, developing lots of regulation around this. It does become very difficult for businesses to try and figure out a way through.
So I'm, I'm trying to sort of get a mental model of how the, how the thing would work. So I fully get, fully get it from like a, um, uh... what AI enables within that country, and ensuring that that's trained for the culture and language and needs of that particular country and the industries of that country, et cetera.
What I haven't got clear in my head yet is: with sovereign cloud, for example, you still have, say, the big cloud providers involved in sovereign cloud, though there are a few popping up [00:17:00] that are specific to nation states at the moment, but in the main, it's the public clouds that are setting up sovereign instances of, at least, the technology, if not the organization, in different territories, which then allows that to be, you know, decentralized a little bit. In the world of sovereign AI, is it, is it leverage of the same sort of backend technology, or would a country be fully building that technology, as much as training the LLM, for example?
It's kind of different
approaches. So, and this is part of the challenge, is that we're in essence trying to kind of pin the tail on a really fast-moving donkey. Um, you know, you've got the Nordic approach, which is: we are going to build our own. And that is something that predated ChatGPT. And truthfully, I think ChatGPT kind of just made that end up in the bin, frankly, because [00:18:00] suddenly you have an off-the-shelf package that can do it way better, and you're never going to be able to match that, although we do have DeepSeek looming on the horizon, which I think could change the equation again on this. It becomes very, very difficult to think about, but I think for a lot of people it will be the equivalent of... you know, there'll be many of our listeners who work with governments, and they will know that there are proprietary versions of ChatGPT that are specifically designed both to limit the outputs of, and the control of, and also to feed in the inputs of, those systems, to kind of more specially target stuff.
This is kind of going to get very boring and into the roots of things. Um, it's kind of like, you know, RAG, you know, Retrieval Augmented Generation, the idea that you can kind of, you know, tweak the training data a little bit in order to ensure that you're getting more rigorous answers. If you can fortify your LLM with a few supplements of, [00:19:00] um, you know, language data or specialisms that are of interest to you.
Then suddenly you move from a kind of off the shelf package to something that I think is much more sovereign, much more focused on what you're doing. And also you can, of course, then put your foot on the pedal a little bit of what it sees as important and not important and reflects more the morals and the culture of your society.
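The retrieval-augmented-generation idea Chris sketches can be illustrated with a small toy example. This is a hypothetical sketch, not any vendor's API: documents are scored by simple keyword overlap (a stand-in for real embedding-based retrieval), the best matches are prepended to the prompt, and whatever model you use then answers with that local context in front of it.

```python
# Minimal RAG sketch: retrieve locally held documents by keyword overlap,
# then prepend them to the prompt before it goes to a language model.
# The corpus and function names are illustrative, not a real system.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy retriever)."""
    doc_words = set(doc.lower().split())
    return sum(word in doc_words for word in query.lower().split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Fortify the prompt with locally sourced context, as described above."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# A sovereign-flavoured toy corpus: local-language, local-specialism documents.
corpus = [
    "Patent applications in Germany are filed with the DPMA.",
    "Swahili is spoken widely across East Africa.",
    "Deep pan pizza originated in Chicago.",
]

prompt = build_prompt("How do I file a patent in Germany?", corpus)
```

A production system would use vector embeddings rather than word overlap, but the shape is the same: the model stays generic while the retrieval layer supplies the country- or domain-specific data.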
I think, I think we discussed this the other week, which was the provenance of the models: where they're trained, how they're trained, how they operate, etc. That's starting to rise to the fore as a conversation, and DeepSeek supercharged that. You see, you know, Italy and Australia just saying: banned.
Um, and then you get to this point about... actually, the sovereign approach gives you an option to adopt a model that comes from a trusted source, or a source you might prefer to ally with. And then that starts to deal with that sourcing and supply chain conversation about, you know, if I need to be sovereign, I need to consider where this [00:20:00] has come from, how it's been trained, what it's been trained against, etc.
So it's almost like we'll get a fracturing of the way this happens inside. I'd be interested in your view on that. And is that, you know, sort of two years away, five years away, or will we continue to try and come back to the core?
I think at the minute it is incredibly difficult to understand, because, um, we're in the middle of another one of those fragmentation phases.
Um, and it feels like tech companies and the private sector are kind of messing up the puzzle, or reshaking the kaleidoscope, every single time that the slower-moving governmental approaches settle on a sort of way forward. So the perfect example of this is, you know, as we are recording this, not to get too, you know, inside baseball, but as we are recording this, you know, [00:21:00] people are gathering in France for the AI Action Summit.
The great and the good from politics, from, you know, uh, society, and from, uh, private industry are coming together to essentially have a chat about where they think the direction of travel should go. This comes the month after Donald Trump said we are going to essentially develop our own sovereign AI approach.
We are going to kind of, you know, build that wall very, very big. We are going to tamp down on compute. It comes two months after Joe Biden leaves the White House saying, actually, nobody else can get access to these computer chips that are going to power this. And it comes just a couple of weeks after DeepSeek shows the opportunity to do this.
And if you'd asked me this question, Rob, about a month ago, prior to particularly the Trump Project Stargate stuff, I would have said, well, the idea of a sovereign AI is dead in the water. But then Trump kind of signals a direction change. Then we have DeepSeek kind of changing [00:22:00] everything and saying, actually, you don't need, you know, 500 billion dollars and a load of GPUs in order to do this.
You can actually do this comparatively on the cheap, which is ultimately what governments are after, right? Value for money while improving productivity. And so I think the conversation might have changed a little bit now, in the last few weeks, where they might have gone: well, this wasn't economical when ChatGPT came out and ruined our business model.
Then we've kind of taken the approach that we would have to throw a lot in with big tech companies and we're not really comfortable with that. So we're not going to pursue sovereign AI, but no, hang on. Actually, we've realized that bigger isn't always better. And so maybe we can rekindle those conversations.
So, so Stargate then, which you touched on, um, which was an announcement... As we're recording, let's just say there was an announcement in January, um, this year, and the announcement, which was, I think, shared with OpenAI and a number of others, um, it was an announcement to spend half a trillion dollars on [00:23:00] developing AI in the U.S. Now, um, what actually, in your mind, Chris, were they going to do with that? Do you see that as the beginning of building a large, entirely U.S.-centric sovereign AI, or was there something broader in its intent? Well, I think
fundamentally, Dave, if I'm being honest, I think it was an attempt to let Donald Trump feel like he's announcing something in the early days of his presidency.
A lot of this stuff did exist already, and it is, as with all things government, a re-announcement of a re-announcement of a re-announcement, a kind of rebadging of this stuff. But I think behind it, there is this idea that the US and China in particular are kind of caught in this tussle, a geopolitical race to have tech supremacy.
They are number one and number two in all, pretty much all AI rankings. The UK is a very, very distant third. Um, and I think that it was an attempt to try and head off that risk. And I think [00:24:00] to say, um, well, we've had 20 years of, um, tech, more generally speaking, with an American accent. We recognize AI is the technology of the future that is going to be laid into pretty much everything.
Even if it's only in 10 percent of all that it could be laid into, and all that's been discussed that it will be laid into, that will still be pretty significant. So we need to ensure that for the next 20 years at least, that it still continues to have a US accent.
Gotcha. And DeepSeek then comes along.
DeepSeek then suggests to other countries that they can build their AI much more cost-effectively, potentially; that's still to be played out. In my mind, I'm trying to work out, then, as a consuming organization, how does all of this impact me? So, you have tools now like Foundry by Microsoft, for example, that allow you to pull varying different [00:25:00] data sources and language models together.
To sort of aggregate a series of AI tools... would you see sovereign AI existing in an organization's portfolio of things that they could pull in? So if I'm in the UK, I can pull in the UK AI alongside, potentially, you know, multiple other data sources that might come from private and public organizations.
Is that how you see it playing out? Yeah, I think so. I think that there are kind of two elements. Obviously this is going to be used to try and underpin and develop and deliver government services, but I think you can imagine that private organizations will be tapping into that. I mean, we are seeing the direction of travel, again, in the United States, where, you know, private sector and public sector are increasingly kind of blending together.
We've seen what Elon Musk is doing in terms of government data there. And so, you know, while other countries might not go down quite as extreme a route as that, I think the idea that you can call upon a kind of [00:26:00] government AI system that is tailored sovereignly to your country and its interests, I think, makes good sense. And, you know, maybe that is, for instance, in customer service: you have a business that is operating across different jurisdictions, with very different cultural connotations of how customer service is perceived. Maybe you can kind of plug into that sovereign AI to present a very British way of dealing with problems versus a very American way or a very European way.
Have a cup of tea. Yeah, exactly. You get the nation state. Calm down. The stereotypical response to problem management. I love that idea.
But it's something that could be, you know, relatively easily done. And I think, you know, it is one of those things where we've, we've all read those headlines, right? Of the productivity gains that AI can give us.
And frankly, you can take them sometimes with a pinch of salt, those big massive numbers. But if that is to be the case, and it is what's going to happen, you can imagine that a lot of this stuff gets more [00:27:00] centralized, and actually it does benefit from that kind of localization, I suppose. It's the equivalent of your, uh, a16z teams, right?
Of kind of like, how do you, how do you switch from one to the other?
Although there is a, there is a bit to what technology has done in the past. I absolutely accept your point, but what technology has done in the past... you take one of Dave's favorite subjects, ERP platforms. It was easier for the organization to just take the platform and change their operations to use the platform, as opposed to sort of configure the platform to work for them in their specific way. Is there a scenario here where somebody creates a version of AI, it is a huge improvement to a way of operating or something, and companies have to change to adopt that technology because they don't have the opportunity to reverse it?
Is this back to you still confusing AI with RPA, Rob? No, no, well, yeah. Oh, he's brilliant, I mean. No, but the point is, [00:28:00] it's quicker to just take the technology, get the advantage, and change yourself, as opposed to trying to build your own to work in the way you need it to, because we've seen technology do that, where organizations change around the technology as opposed to configuring the technology to work for them. Is that what you think?
Again, a pre-DeepSeek conversation, I would have said yes. Because ultimately, to a certain extent, and I know that there are kind of like nuances in this, and kind of different ways that you can utilize these off-the-shelf tools, and, you know, if you pay a certain price, you can get a kind of very specialized version of this stuff.
But generally you have kind of, you know, very basic off-the-shelf tools that you are sharing with everybody else, and it is too much hassle for you to kind of develop your own AI system, or too expensive as well, uh, just in terms of the basics of it. With DeepSeek, you, I think, can make that leap more easily.
And so I would [00:29:00] suggest that, because of the open source, open model aspect of it, and the way that, frankly, you know, I have friends who run very, very small businesses who are running localised LLMs. Not DeepSeek, they're running Llama, Meta's open source large language model. But they are using that and kind of, you know, quizzing that for their own business purposes.
And it's completely outside of it. And they're not really that techy. I mean, they are, they have a mechanical keyboard, so they are that level of techy. They're a little bit nerdy, but they are not like full-on coders. And one of the things I suppose that you can do is, you know, I'm not brilliant at coding, but I have used, um, you know, ChatGPT to kind of help me sort out coding problems that I have.
So it's kind of that self-fulfilling prophecy, right? Where you come up against issues, provided they are not, you know, proprietary bits of data that you're worried about handing over to a private company, you can maybe get over those problems, those hurdles, a little bit easier than you could in the past.
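For listeners curious what "running a localised LLM" actually involves in practice, here is a minimal sketch. It assumes an Ollama-style local model server listening on localhost:11434; the endpoint, JSON field names, and the model name `llama3` are assumptions about one popular local runner, not something discussed on the show, so check your own runner's documentation.

```python
import json
import urllib.request

# Assumed local endpoint of an Ollama-style model server. Because the
# model runs on your own machine, the prompt (and any business data in
# it) never leaves your network -- the privacy point made above.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """POST the prompt to the local model and return its text response."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running local server with a pulled model):
# print(ask_local_llm("Draft a polite British reply to this complaint: ..."))
```

Nothing here needs a GPU farm or a cloud subscription, which is exactly why the small businesses mentioned above can quiz a local model for their own purposes.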
The, uh, it's quite interesting, actually, that you talk about, um, [00:30:00] the coding angle, because, to test it, I asked ChatGPT and DeepSeek to write Tetris in Python, and the DeepSeek version worked, and the ChatGPT one needed a fair bit of hacking before it kind of came together. But there was this bit about, you know, how writing Tetris in Python would have been a lot harder.
You do realise you can get Block Blast off the App Store, right? I've got a bone to pick with you, David. I downloaded that and I'm hooked, and it's ripping our family apart. I forgot to mention that. Dave recommended a game on Android, and it was like, oh my word. It is, it is ludicrously addictive. It is addictive. What's your high score, what's your high score?
Oh, I don't know. I'd need to check. But I was, I've started, it's stealing my life. It's your fault. But the thing is, a game like Block Blast, uh, Block Blast, to use an example, easy for me to say, you could ask AI to write that now and it would be able to take a really good stab at it.
I saw a guy, more than that, I [00:31:00] saw, I saw somebody refer to, and he referred to it as something like large prompt windows or something. And he goes, you know, so TikTok got banned in the U.S., for eight hours, for 10 minutes. But he goes, if that hadn't been eight hours, why don't you just ask AI to build you another TikTok?
Yeah, that's the point, isn't it? It's about the, it's about the ability to create things at such a higher order. You go, that's the bit about DeepSeek. Back to, just, we want the one with the most capability, and therefore we will always gravitate to the capability, as opposed to, I suppose, the sovereign angle, which is you look at your sovereign model and you go, yeah, but it can't do what that one does.
And then you end up bouncing the boundary. Yeah, but they're no longer different.
Right now, at least right now, and that will change, Rob, but like, currently the state-of-the-art best is the free one that you can download and tweak yourself. And I think that's what's really interesting. Now, again, tech moves incredibly fast. I make a stupid living of not only doing journalism that is fast and [00:32:00] reactive, but I also write books on tech, and all of the reviews always say, oh, this is a bit out of date, isn't it?
Yes, that is the point. Right now, until OpenAI, you know, piggybacks, or Anthropic, or whoever, piggybacks on what DeepSeek did in terms of eking out that extra performance, and then adds on their GPUs like it's a supercar and kind of speeds away, you don't need to change that sort of decision. You can just pick the best and the one that you can tweak the most.
Yeah. It's that thing, will somebody create one that's so good and so advanced that everybody just runs to it? But actually, I mean, if you look at what's actually happening, it started highly proprietary and it's opening and opening and opening. Where does it end?
And there was, there was, OpenAI have accused the DeepSeek team of nicking a load of their research and all this sort of stuff. And they can't do anything about it as well, because of the way nation states are structured. So it's a very interesting pattern at the [00:33:00] moment. And will it ping back because of what DeepSeek's done?
Will you now get a very protectionist view on the technology, I suppose. And I suppose that's a mini Confused in the middle of the episode, David. You sprung that on us, Rob. Surprise! I wasn't sure you were listening. You glazed over a bit. Marcel's fallen asleep. He did it 10 minutes ago.
Moving on. Um, what I wanted to do, maybe for the last section of the conversation today, is talk about governmental or state development of AI versus commercial, and the sort of race aspect of it. So one of the things we've talked about, and it's talked about everywhere, we are not unique in this, is that AI being a purely commercially driven race at the moment has a series of attendant dangers.
You know, they're sort of numerous, and it depends on how, you know, what feels like current sci-fi you get with your definition of AI, but actually those [00:34:00] consequences might be within the next single-digit years away from where we are now. And there was a James Cameron... Terminator, everybody talks about as being a good example of AI gone bad, I guess.
And I saw a recent, um, interview with Cameron, it was like the 30th anniversary of Terminator, or 40th or something like that, and he was reflecting on the sort of model that he'd taken when they were thinking about Terminator, which was that it had been governmentally developed AI that was then militaristically used.
And he said the world we're in at the moment, with commercially driven AI and where we're going with like data and data access and data privacy and the things that that could do, is potentially, you know, scarier than the scenario that he had sort of dreamt up in Terminator. I wonder, having been thinking about these themes, Chris, I wonder where you're landing on sort of state versus commercial race in [00:35:00] AI.
Yeah, I mean, I think the first thing to say is, I understand the people who are concerned around the Terminator and things like that. But it is generally only two groups of people that are really shouting about that very loudly. One is the AI companies developing that stuff, and I appeared in an event that you did and described myself as a tech-skeptic tech reporter, and I will always say, try and maybe follow the money, and think about: does it serve them well to say that they are developing world-changing technologies that are going to get to artificial general intelligence?
And the other group of people that say this stuff have been saying this for 30 years. Um, and they are a little bit like the boy who cried wolf, and maybe they will be right. But, um, maybe they're also just Cassandras constantly shouting about this stuff. So the idea that, you know, the governmental paradigm kind of changes the thinking on AI, I do think, is still a concern, however, not in terms of kind of sentient killer [00:36:00] robots, but in terms of killer AI being used more generally. We've seen Google very, very recently taking out a kind of longstanding commitment to never use AI in the military context. And that was, you know, not to get too deep into the weeds of the history of it, but that was a prerequisite required for them to be able to buy DeepMind way back when, right? And they've just chucked that out the window in 2025, saying that actually this technology is so ubiquitous, so important, that, you know, it might become the case that we do need to use this in a military context.
But the kill chain conversation is a really interesting one, because there's obviously a huge moral dilemma associated with it. But as soon as one nefarious nation state puts AI into that chain and completely automates it, all the others will have to follow suit to be able to defend against it, because it may be too fast and too sharp. And so we see it now with, um, the U.S. Air Force have created an autonomous fighter jet, and it wins.
Yeah, it can win, and it started to win quite quickly. So [00:37:00] I think it's a classic example of, you know, Pandora's box has been opened. It's just a matter of time before it gets done. And then it is a literal arms race, in the sense that once somebody does it, everyone will have to follow suit, or you might find yourself on the wrong side of history.
Which is where I'm concerned by the Stargate stuff, and the Japan joint venture between SoftBank and OpenAI specifically, kind of picking off the Japanese market and delivering an AI flavoured to their own culture and society and business interests. Because up till now, and the key thing to point out with all of this stuff is that we are still incredibly early in the history of generative AI.
Um, you know, really, as a thing that most people are aware of, we're just over two years old. We see, you know, you get kind of like hipsters who say, well, I knew about, you know, AI back in 2017 when this was... That's what Rob was saying. Yeah, [00:38:00] exactly. But you know, he thinks he's got AI swagger, Chris. Yeah, exactly. And then we have to spend, like... What an idea! Episodes and episodes of the show educating on agentic versus RPA.
I mean, let's be honest, Dave. The only reason I'm on this show is it's a reverse way to educate me, so I actually am competent at my day job. It's just a massive corporate ruse. We don't actually have any listeners. It's all elaborate actors. And you know what, there's a little bit... it's also quite a scary thing to consider that this is all just a simulation. Yeah. Oh, there you go. There you go. We go to the natural reaction of what we're discussing around AI. Um, so, for the last two years we've taken the approach that actually private companies can run their race, and they have done, and they've kind of broken a lot of stuff, but that's fine, because they have the hacker way that literally says move fast and break [00:39:00] things, so it's unsurprising that they would do so.
However, we have kind of relied, I suppose, on governments acting as a check and a balance towards that, and they have said, actually, we are going to work slowly, we're going to work in unison, we're going to try and draw a lasso around this stuff, but we're going to do so together, even though there is this competition between the U.S. and China and elsewhere. What I am concerned about, Rob, a little bit, is that actually we are seeing those individual countries being picked off one by one. They're kind of going, actually, we tried that, we've tried that for two years, it hasn't really worked, and so we're going to kind of just slightly, slightly kind of sidle out of the party and just start our own stuff.
And then, I think that's concerning, because then who is going to be there to say, actually, you know what, we are all now in a race and we all need to slow down?
Yeah, but it's the way, the way the world's structured, that's almost an impossibility, isn't it? Because there's always somebody trying to race ahead with a different moral position on the subject.
And [00:40:00] then your face was fatalistic. Yeah, I'm sorry. Turns out you're, you're even more of an idiot. Don't get him started on capitalism.
Oh, no, yeah, no, we won't go there. But, but that's, that's the folly of it. We know we should slow down to consider what we're doing as it is going to deeply affect society with some potentially bad things that we discussed.
But there are many who won't. And that's, that's then what we're faced with. It's like, it's the point about the Pandora's box before. You could try, but I don't think you can hold back the tide of what people are going to use this for. Yeah, I don't know. Sorry, that is quite a depressing way to go.
I want to bring this back from the brink, though. How do we end on a high? How do we end on a high? You can make nice pictures out of it. You can make your own Tetris-alike using that skill. Yeah, it's good, isn't it?
I ask it to write haikus and cheese jokes, so that's my... So, actually, maybe to bring us to a bit [00:41:00] of a close before Rob drags us down even further. Uh, it's always dangerous to cast out, uh... like, you know, you mentioned you put your books out and then everyone goes, well, that's a bit out of date now. And sometimes even within, you know, weeks or some months of recording these shows, which is a faster iteration than the effort that goes into writing a book.
We can look back on some of these shows, even the ones we haven't released yet, and go, oh, that's no good anymore. Like the DeepSeek thing, you know. We had five shows in the pipe when DeepSeek came out, and there's like five shows going out, often talking about AI, not reflecting, yeah, another massive shift.
Um, so, I say all that as preamble to asking for a dangerous prediction. Everyone loves a tech prediction! Pick your time frame, like this time next year, in five years: when do you see an emergent sort of settling of the picture? So if you think about Cloud, for example, you [00:42:00] know, in the early days there were no patterns of how you adopt it.
There were competing Clouds, there was a race to get PaaS out. It maybe took, I'm going to say, what, six years, seven years, to sort of settle into known adoption. It feels like we're racing through AI much faster, but actually patterns of adoption and patterns for scaling, for example, are only really emerging at the moment.
So for you, what does, what does that maturation cycle look like over whatever period you dare try and predict that?
I think we've still got a long time to go, to be honest. Um, I mean, we're two years in. And we are still throwing spaghetti at the wall. Like if you speak to, you know, some of the people who developed even ChatGPT, they say actually they think the idea of a chatbot is not the supreme way that we are going to do this in the future.
And so we don't really know. We haven't yet fully got a handle on multimodal models, truthfully, never [00:43:00] mind the idea of kind of then going beyond something else there. Um, I think for businesses it's going to be a long time until they can actually be sure-footed. If you're a particularly small-c conservative business in terms of your spend,
You're probably thinking about five years, I would say, before you have any kind of idea of actually, we have a settled basis on which we can make a decision on what to do here, and until then, you are always going to be chasing the technology's tail a little bit. It doesn't mean that you shouldn't be doing it.
You should still be doing this, because ultimately, if you're not doing it, your competitors will be, and you will get left behind. But I think the way that you should be thinking about it is always kind of, not quite as a quick fix, but as something that you can roll out relatively agilely, and something that you can spool back relatively agilely, because the next thing will happen. You know, who would have known that [00:44:00] DeepSeek would do this, and that you can actually unlock it?
What I do think is interesting is that that moment does give you a bit of breathing space. And I think that it's kind of resettled the paradigm a little bit. We were relying on big tech companies just doing the same thing, the same thing, the same thing. Whereas now, we know that actually it is back a little bit within your grasp, because you don't have to have that huge computer, you don't have to have that huge budget to do this stuff.
You can kind of, rather than be constantly listening to what the big tech companies are saying about what the opportunities are in AI, you can maybe draw your horizons, you know, envisage your dream a little bit closer to home. And so you can say, you know what, maybe I don't follow the latest trends entirely.
Maybe I step back a little bit, realize that in a DeepSeek or in a Llama I have a really strong model that will still do quite a lot of stuff that can benefit my business, and I just kind of park it there for a bit. And I [00:45:00] say, actually, let me use that. Let me wait and see how this fully shakes out.
What the next huge rupture is that is going to change the direction of travel. And then I can kind of take stock, figure out what I'm going to do. Being aware that the adoption cycle takes a little bit of time to kind of shake itself out fully.
So throughout this season, we've been talking about the importance of narratives. Dave Snowden, Jitske Kramer, they both mentioned the importance of stories in cultural change and transformation. But now we're talking to a journalist here today, so I think we should also take that angle. There's an influential voice on journalism and its impact on AI. And I don't know, you've probably already heard of the book that people talking about AI mention a lot, Superintelligence by Nick Bostrom. And he actually warns [00:46:00] that how the media reports on AI directly shapes how the public and policymakers understand and react to technology, right? So those are narrators, actually.
Uh, so they don't just report on it, they shape it, and also determine whether we fear it or whether we're really eager to get our hands on AI. And I was actually wondering, Chris, what's your take on that? Because you're actually one of those narrators, right?
Yeah, I'm to blame, that's me. So, so much. Nicely said. It was nicely said, though, wasn't it? It was, it was kind of the nicest bollocking you've ever had in your life. It was an iron fist couched in a velvet glove.
It's interesting that you say that, because actually I've just done a story, um, that will, by the time this goes out, be up on New Scientist's website and in the magazine, um, looking exactly at this. So it's not just Nick Bostrom that is thinking this; there are also kind of academics that are starting to reckon with what impact we have as, [00:47:00] I suppose, narrative makers, as journalists, as the media, on how we perceive AI. And what they did was they looked at, and I literally, pfft...
Brilliant that you mentioned it, because I literally just filed the copy yesterday, so it was fresh in my mind. They looked at around about 1,200, um, American citizens over the course of, uh, 16 months. It was actually 12 months that they surveyed them for; four months they had a glitch in the data, but it was still publishable and still really interesting.
And they asked them open-ended questions about, um, their perceptions of AI. And then what they did was they coded the responses and tried to figure out actually how ordinary people, this was a representative sample of the US, think about AI. Um, and in the ways that they thought about it, they often came to things like, you know, this is a machine, this is a kind of calculator. Often, sometimes, they said, this is a genie, because they had this kind of idea,
the process that Rob's working through, Chris, [00:48:00] there you go.
You see exactly the model. This is helping. This is like there is catharsis occurring on the podcast. It's great.
One of the things that was really interesting about it was that, over that year that they did collect the data, they realized that two kind of key things came up. There were lots of really interesting wrinkles, not least that people of color, um, women, and generally older people tended to anthropomorphize AI more than others, which I thought was interesting and actually speaks to a whole bunch of different things that we need to discuss, probably at some point in the near future.
But over the course of that year, they found essentially that people had become warmer to the idea of AI. And part of the reason that they had become warmer to the idea of AI was because they started to anthropomorphize it more. And that is, I think, a big issue, actually, that we have in the media, which is we waver between two unfortunate [00:49:00] poles: we either swallow wholesale the marketing hype, which is designed to make this seem easy, your best friend, your supporter, AI companion, all of that stuff, or we just highlight the really big issues.
And I think, because of a whole bunch of reasons, including the fact that we don't have subject matter experts as much as we would like on this subject, because frankly everybody is learning about it simultaneously, and also just because of a whole other issue, like, you know, the inherent economics of the media and the fact that we are being asked to do more with less, we tend to prioritize the former over the latter. So we are taking a really kind of techno-utopian view of stuff that is often not hugely inquisitive, and I think that is perhaps a little bit of a concern. Um, so it doesn't surprise me enormously that, you know, we are seeing that sort of stuff, [00:50:00] because the media are ultimately huge tastemakers.
And I think that we need to get better at discussing the issues as well as the challenges.
That's a super interesting point. I don't want to digress too far off it, but your point there about tastemaking, I think, is fascinating. Um, with social media, and the current drive against traditional media that's going on, one of the things that we're just throwing away, um, is this notion of tastemaking. And a really trivial example of it is, I am really into music, and it is almost impossible now to find good music. Because one, you're either overwhelmed by it, and it's just impossible, you don't have the hours in the day to get through it, or you revert to, you know, Jump by Van Halen, which is Rob's sweet spot.
Um, and there's no scenes anymore, as a result of the fact [00:51:00] that there's no, there's no tastemaking going on. Um, what's your take, from that perspective, on the erosion of media being able to tastemake, based on what's going on with social media? Have you, have you thought about that, Chris?
I'm sure you have. Loads. As someone who wrote the book about TikTok and how their kind of personalized algorithm has run a coach and horses through this enormously. I mean, we've seen a huge collapse in popular culture, right? For sure. We used to have water cooler talk. Yeah, we no longer do, because your media tastes and consumption will be entirely different to mine.
So, you know, I can't say, oh, did you see this random YouTube video about theme parks that I watched last night, in the same way that, you know, you can't say, did you go to this gig last night? We have very different tastes. I think that we do lose something in terms of our collective understanding there.
It's, you know, part and parcel of that is, [00:52:00] I suppose, just the way that the algorithms on social media are changing us a little bit. It's also that we are leaning into this, and maybe, you know, we're kind of being directed in this way by some people that it benefits not to have a kind of cohesive thinking.
Um, if we are all kind of divided and just cooped up alone a little bit and don't have that kind of rallying point to get around then that can sometimes be useful. Um, but I do think that we lose a lot from that and I think that it ironically makes AI more useful, um, because we've seen the rise of AI companions.
Um, and, you know, in a world where things are really complicated, increasingly so, and where we might not have those touch points to connect with one another as humans, who do we turn to to try and understand that? Well, probably the superintelligent computer that's been presented as all-knowing and all-seeing on our phones, under, you know, the little apps tab next to, uh, the knock-off Tetris.
So you can imagine a [00:53:00] situation where we all, you know... what's that, um, uh, the Philip Pullman book, His Dark Materials, where everyone's got a daemon creature? But the daemon creature is like an AI... I don't want to use the term AI assistant, it's not sexy enough, but like an AI friend or companion.
And then, then you could get, you could get the, the social scenes back by getting the AI agents to all talk to each other. And then they can sort of aggregate up what's going on. It's like, what are Rob and Esmee watching on TV this week? And then it'll come back and be like, oh, you know, you don't want to know what Rob watched, but Esmee watched, you know, blah, blah, blah.
And, and, you know, I sort of get that. I get that. I don't know whether I like it though. I get it, but I don't know whether I like it.
To end on a positive note... I know a Dutch, uh, um, woman who's actually highly, um, secured. How do you say that? She has a lot of security around her, because she actually testified against her brother.
It's a huge story in the Netherlands. Ask me about it later. [00:54:00] And she's a lawyer by heart, but she had to stop working because she needs to be, you know, completely off the record. And she actually uses ChatGPT as a companion, but also to challenge her on topics. And the entire day she talks to it. And then after a couple of days or weeks, she actually wrote a book.
That's how we know who it is. Uh, and she asked, do you know who I am? And then ChatGPT actually said back, based on all your input, I think you're... and named her. Wow. That's a bit scary. Other than the login name was the same as that, so it's just the login. Spoiled the mystery, Rob. Damn. Well, look, what a, I mean, what a fascinating discussion today.
Chris, thank you so much for taking some time out on a Friday afternoon to come along and talk to us. Thank you. Now we end every episode of this podcast by asking our guests what they're excited about doing next. And that could be, I've got a great restaurant booked at the weekend, or it could be something in your professional life, or maybe a bit of both.
Chris, what are you [00:55:00] excited about doing next?
I am excited about going to an art exhibition opening later this evening. And then unfortunately I am back at the grindstone, doing yet more stories talking about how big tech is interesting and scary for our lives. And would that be at the Baltic? It's not, actually, it's in Ouseburn, uh, which is the up-and-coming hipster area of Newcastle.
So I'm very excited about it. 36 Lime Street. Very nice. And what's the exhibition? It's kind of e-textiles. So, uh, my partner is in the kind of crafty computer science, HCI space. And so it is a PhD student of a colleague of hers, uh, who is opening their exhibition tonight. So that should be fun.
What are e-textiles? Oh gosh, so you have... I could probably dig some out if I could find it. Um, it's like you have wool, for instance, and they run, uh, wires through it, so you can do stuff, you can make it kind of tactile. So you can have, this makes it sound very crass, and my girlfriend will kill me for describing it in such a way, but you can have like a cool hipster Build-A-Bear, in the sense that rather than having a button built in within the middle of the padding, you can just have like a sheet where you can tap it and then it will play, for instance, audio or whatever.
That is the real future, never mind AI. Wow, um, wow. It's a long, it's a long way from those t-shirts that would change color based on your body temperature. Do you remember them? By the way, you might not. Back when you could get NME, you knew where you were with life. You could put one of those t-shirts on and go out for a good dance. I mean, just the concept, did anybody actually wear one before they marketed it? Because it just, it just looked awful.
No, it's an, it's an, it's an unusual concept, isn't it?
If you would like to discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at cloudrealities@capgemini.com. [00:57:00] We're all on Bluesky and LinkedIn. We'd love to hear from you, so feel free to connect and DM if you have questions for the show to tackle. And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to our sound and editing wizards Ben and Louis, our producer Marcel, and of course to all our listeners.
See you in another reality next [00:58:00] week.