Exploring the practical and exciting alternate realities that can be unleashed through cloud driven transformation and cloud native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to Cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
CR100: Intelligence age ethics (in 2025) with James Wilson and Philip Harker [AAA]
[00:00:00] In America, right? So what you've got is basically all the social info is basically: I'm eating the hot dog whole. Now I'm eating the burger, now I'm eating some chicken wings, and now I'm seeing my cardiologist.
Welcome to Cloud Realities, an original podcast from Capgemini, and this week it is a conversation show, exploring the deep importance of ethics as we move forward into the intelligence age. I'm Dave Chapman. I'm Esmee van de Giessen and I'm Rob Kernahan.
So it's CR 100. I think it's, I think we should mark the occasion by talking about our centenary, don't you? When you say CR 100, uh-huh, what does that refer to? Because we've done more than a hundred episodes, David, we've had this conversation before. So quite frankly, as I always am, I'm confused [00:01:00] 'cause this isn't our centenary record.
Now we, we have got a CR number that's on the front of every episode. This is CR 100. Now we've done a number of other episodes 'cause we do live episodes, so we've done 30 or 40 of those. And we've done some specials. Yeah. Like the recent telecoms miniseries, for example. So this is effectively number 100 of the standard studio record episodes.
That's right, Marcel. That's correct. And, and when we look at the back office, uh, sort of the back office tool that we use, it's episode 155. 55. Yeah. So are we just gonna celebrate random events then? Is that what this is? Is this just an excuse to go Cheers, clink and off we go again. It feels like that 'cause like the numbering system is like Hmm.
But you mentioned, you mentioned the live ones, Dave. Yes. And uh, basically we recorded, uh, 46 episodes. Yeah, right. [00:02:00] But, but, mm-hmm. Sometimes we combine episodes for live, so it's day one, and we have multiple recordings. So basically we published 46 episodes that were live. Yeah. But basically we recorded 62.
There you go. So we've done 100 plus 62 plus the telecom specials. That's four. Four of them. 166, 166. It doesn't feel like a number we should be selling. No. Basically this is a massive sham to the listeners. Sorry. I must apologize for our poor accounting. Well, I think Marcel, you and I, we can celebrate the 100, shall we?
Yeah. Champagne. Champagne. You are the producers, eh? Exactly. Yeah. Every moment has a champagne moment, so it's okay. If you took the sham that was the recording we did this morning — Marcel's unable to hit a recording button, Dave logs in and finds it straight away — I'm not sure that it isn't [00:03:00] producer in name only.
If we go in — because it's definitely not a producer that helps. What's that thing where you have to do like 10,000 hours before you actually get to be good at anything? Must be 20,000. You got a fair way to go. Yeah. Marcel, this morning's situation. What I loved was, I can't see the button.
It's not working. It's not working. It's not working. Dave logs in, we start recording. It's like, what? It's something personal. So, so I saw the button, it was a red button, record start session, and when I hover over it with my mouse, it disappeared. I, uh, it was like David Copperfield. So it was, it's like the system knows it wants to frustrate you, but it has respect for Dave.
So it didn't, of course, disappear the button. Of course, it's the same with your mic. We continuously keep saying you are not on the right mic. It sounds like you're in an echoey place shouting into a bin. No, that's my house. So I have a design house. So you know what, we should let the listeners know that Marcel always records his episodes from the bathroom.[00:04:00]
That's what it sounds like, that you're in some sort of live from the tub. Yeah, live from the tub. Normally under the shower. So, oh, anyway, look, it is, it is a centenary of some description. Um, and it's just worth saying a quick thank you to everybody who listens to the show. Uh, we really, really appreciate you taking the time out to spend an hour or so a week with us.
We hope you get something from it. And we've got some brilliant stuff coming up over the course of the next few weeks. And then we will take a bit of a break over the summer as usual, and we will be back for season five. So with that, congrats everybody. Uh, let's move on and to help us with such a weighty subject.
I'm delighted to say, in this Access All Areas episode — and just as a reminder, um, this is a bit where we just get some of our friends in that we work with on a day-to-day basis and we just kind of kick the ball around and explore, um, some of these subjects. So I'm delighted to say we've got Philip Harker and James Wilson with us.
Uh, [00:05:00] our friends from, uh, Capgemini. Philip, just wanna say hello and tell us a little bit about yourself and what you do. Hey, I am Philip and I lead our insights and data advisory practice in the UK. Good to see you mate. And James? Yeah. Hi guys. Uh, so I'm James Wilson. So I'm an AI ethicist working in the global AI labs, and I also spend a lot of time working with Philip on the data powered industries and domains stuff, given the subject matter today.
James, glad you're here. Glad you're here. Sounds relevant. Yeah. You can lead us through a murky and difficult, but probably one of the biggest topics of our age, I think it's fair to say. Absolutely. Robert, frame us up. How are we gonna get into this? Uh, well, okay, so AI's coming, it's coming fast, and there's lots of... It's here, it's here.
Similar to, yeah. Yeah. Uh, and the impact is, uh, not properly understood and there's lots of areas we need to explore. What's it mean, uh, for us from a human perspective, what's it mean for us from a societal perspective? What's it mean from a work perspective? And we haven't got to grips with it yet. Hmm. So today's all about a [00:06:00] conversation that basically says, what is the ethical thread we have to weave through this new AI that is impacting us?
And how do we deal with it, and what's it mean for the future? And sort of the road ahead. Now, Rob. Yes. Is this a smokescreen for something? A smokescreen for something? I'm not sure ethics could be considered a smokescreen, David. I mean, there's a moral conversation in there somewhere. Are we gonna get into the middle somewhere and you're gonna bring RPA into this conversation?
You gonna go 'it's just fancy automation', Dave? No. No, we're not going there. No, this is a serious, serious and deep conversation that we have, and I'm not sure a lot of people are thinking about it in the right way. What's the structure, what's the framework? What's the way we should approach it? And it goes a lot further than most people think it does.
Right. A lot further. And I think it's only through a continuous conversation that we're probably gonna be able to explore it. But I'm a great believer that society should be having these sorts of discussions much more than we do — you know, the classic, we'll just shout at each other on [00:07:00] Twitter for a laugh.
And we don't actually get anywhere fast. That is the world we're going into. The algorithm has a lot to answer for, and now we're implementing more of it. So what's gonna happen? No, I, on a serious note, I I couldn't agree more that. These conversations are to say the least multidimensional. They, they touch on exciting and, and sci-fi aspects of what we're doing at the moment, but they also touch on the, the unknowable challenges that, that go around this sort of stuff.
And it is important to, to step back and, you know, safe space, analyze this and, and turn the subject around and look at it in, in multiple different directions. On that note, let's roll in. So we're in a position where we, I think we framed that up quite nicely. We understand in the world at the moment, obviously the ri the rise of AI is happening all around us.
It's difficult to track exactly what that means. Maybe we'll dig into that a little bit in the next half an hour or so. Uh, and we also know that that needs to be framed in a way and leveraged in a [00:08:00] way that we can understand the potential consequences of it. Sometimes at the moment, that is also unknowable.
Rob, why don't you kick us off. Just frame up a little bit what the rise looks like. What, how do you frame up the capability and, and how is it, and how is it showing up today? So we see sort of four themes emerging around ai and it means we need to pay attention to how those themes are gonna affect us as a society, as a business, as a human, et cetera.
So we're seeing AI become very adaptive in the way it behaves. So it can be used for a lot of things. So we're seeing it permeate right across, uh, our world. Hmm. It's starting to become proactive. So AI will start to prompt us and tell us to do things. So it's going to start to be not just something we engage with and get an answer from, but actually it's going to start becoming part of things that tell us what to do in a day, or might be directing us, et cetera.
So it might be in control, uh, as a theme. Um, without us knowing, you mean? Yeah, yeah, [00:09:00] basically. And then there's the, uh, rise of the complexity of it. So ever more sophisticated tasks. So we're giving it more, uh, responsibility as we go, and we're starting to see that kick in, and then it's becoming autonomous in that capability.
So we're starting to see it do things on its own. Yeah. So we're giving control away to it. Maybe for good, maybe not so good. So the whole concept of: is it right, is it ethical? What's the impact? What does that mean for the human? All of that context is starting to come to life now. So we're starting to see this maturity rise in the concept of understanding AI, and that leads us to: so what do we do about it? And we're starting to see new roles emerge associated with that. So Philip, does that broad framing resonate with you in terms of how it's showing up in the world at the moment? I'd put an umbrella over that.
It's a reality, right? People are thinking about this and that thought process isn't over. I think the thinking behind why we do it, um, and the 'oh, what if', and the implications, I think is kind of the important bit that often gets glossed over, racing to, oh, this nice, shiny thing. I think there is a wonderful variety of thinking from every typical context, from society to enterprise and everything in between.
And I think it's a wonderful discussion to have. Mm-hmm. But we need to, in parallel with that thinking, operate more. We have to action stuff, because the pace of change is so phenomenal that we can't always be thinking. We need to kind of be thinking and doing at the same time. And that's kind of quite exciting.
James, what about you? Does that framing do it for you? Anything missing in it for you? No, I, I think it's a good framing, but, uh, but there are a couple of things that I, that I would add to it, which is that, uh, it, you know, we are a progressive species in general. You know, if we, everything we do is about progressing what we do.
And, and technology's been a big part of that. And you think, you know, in 1800 the [00:11:00] average life expectancy was 43 and 90% of the population lived below the poverty line. We're now at a sort of average life expectancy of about 75, and the poverty line is about 10%, I think, based on the World Health Organization figures.
So, so, you know, pop technology's done a great deal for us, but we've always, in the past done things using, uh, a concept of intelligent design. In most of the things we do. Right. So, you know, take the space race. Okay. As my, as you can see from behind me, I'm very much a, a nasa, uh, freak. And, um, you know, so we did land on the moon then.
It's official. We did do it. Yes. Yes. Can we just get that out of the way? We did actually land on the moon. It was James. It was James. It was your footprint. Yeah. And so how much time have you given to thinking about Kubrick versus the actual moon landing, James?
It's just seriously, that would be the, the, the cost of that conspiracy. Yeah. Right. Would be more than actually just going to [00:12:00] the bloody moon in the first place. It would be you, it would be like, it would be the most perfectly executed conspiracy theory ever. Wouldn't it? And that would be harder also than just going to the moon.
Yeah. Yeah. They'd just be kicking themselves. Now if they'd done that, you know, why don't we just do it? It's that thing where they might have started with the conspiracy and then they're all sat around the table going, should we just build a rocket and go to the moon? Be easier. Exactly. Exactly. Exactly.
And there's a person that says, I'll make the flag fly. I'll put the light in the right place. Yeah. Geez. But yeah, so getting back to what I was saying: intelligent design. In the past, everything's had a level of intelligent design. And you think about it: there's intelligent design from a technology perspective, and there's also intelligent design from a societal perspective, and we've not always got that right.
Um, the Industrial Revolution was a nightmare for about two generations for most of the people that lived through it. You know, very, very few people actually benefited from the two generations' worth of stress and illness and relocation of people and all that sort of stuff. But we've [00:13:00] always had an intelligent design in place, whereas now we don't have that intelligent design.
Right. As of September, 2022. It's just like we're chucking data at this thing and it's gonna do something good. We know it. Yeah. We know it. We're sure of it. It's almost like you read my mind on that. I was, I was, I was going to go to this place of unknowability around it. Mm-hmm. I dunno whether you did this on purpose, Rob, but as you were going through those four things at, at the end of each one, there was almost like this little, oh, and I.
We're not really sure about that bit yet. Yeah, we're not really sure about that. A lot of doubt. Well, not not, not a doubt. I wouldn't use that word. It's, sorry. It's not fully understood. Yeah. A lot of unknowns in there, isn't there? Right. Yeah. So we don't know where the path leads us. It looks like a fun path.
There's pretty views on it. It appears to have been fun so far, but, you know, you might round a bend and... Oh no, but that's, that's kind of the fun bit, right? We don't have the trajectory in front of us. No, no, no. Seriously. We don't know what's around the corner, but it's not a [00:14:00] straight road.
Right. But we, but how do we think in the right way, way so we can kind of plan what could be around the corner. Mm. Is kind of the wisdom bit we gotta bring in. Yes. And not blindly assume oh, it's a straight road. 'cause that's kind of, that's just implementing technology for technology's sake. Right? I think that's right.
I, I was sat next to someone at dinner. Um. Maybe it was a couple of years ago now, so maybe they can be forgiven for having this view at the time. Uh, but even at the time I thought, Hmm, have you really thought that through? And, and they said, well, ai, it's just gonna be like robots in factories, isn't it?
You know, like purely coming at it from a controlled automation perspective. And I was like, I'm not sure. I mean, it might be, you know, humans are great at taking great technology and not doing a good job of implementing it. You know, we've done that forever. I think we sort of stand in the way of ourselves quite often for a whole host [00:15:00] of different reasons.
Um, but it feels to me like AI, James, is a bit more than robots and factories, isn't it? Yeah. And this is, so if you do roll it back two, three, four years, people were still in the mindset that it was going to be... it's more of a paradox in the opposite direction.
Rob kept saying it's just like RPA, it's just like the RPA revolution all over again, David. But it was all gonna be about actually, you know, helping us — robots doing stuff and moving stuff. But the thought leadership said the white collar jobs were going to be the last to go and it would be the blue collar stuff that went first.
And that's obviously proved to be entirely the wrong way round. Absolutely. Um, so yeah, you know, we're still not at the stage where a robot can crack an egg properly. I know there's some demos that say it can, but it's not in the wild. No.
Like in a perfectly controlled scenario, you know. Exactly, 'cause we don't have that. Large [00:16:00] language models are not designed to give the AI a proper world model and what it needs from embodiment for it to be able to actually learn how to work in the world. You know, if part of the training does not include, um, don't put your hand over a candle, a robot would just continually put its hand over a candle.
It would never learn that that's gonna hurt it. Whereas a human being does it once and then, you know, hopefully never does it again. Yeah. It still can't crack an egg, right? It's actually totally true, isn't it? On that though, James, is that just the LLMs of wherever we are, May 2025?
Mm-hmm. Or is there something, do you think, in the tech somewhere — for the laymen among us, from a technical perspective — that you can see might ultimately get those learning traits, versus kind of regurgitating the information that it has? Yeah. And here you have to be slightly careful 'cause there's kind of an anthropomorphization [00:17:00] thing that happens here where we're starting to treat them as, uh... I mean, I'm gonna be honest with you, I cannot listen to Sam Altman anymore.
His voice actually makes me gag. So I, I, I, I'm, I am that annoyed with everything he says, but he talks about, you know, AI is at this, it started off at the level of a junior school kid and now it's moving up to PhD and so on. It, it's, it's, uh, it's not learning from a world model. It's just regurgitating facts.
Even all this stuff about reasoning models that are reasoning — they're not reasoning. It's just a very clever chain of prompts that actually takes it through a series of different levels to come to an answer. It's not a proper reasoning model. They're not thinking machines. Uh, you know, we haven't actually achieved what Turing set out
in his paper 75 years ago. It's still just a process at the end of the day. I don't think large language models... and I said this about two years ago at a Microsoft conference and got a bit of a dismayed look from the Microsoft [00:18:00] account lead that was standing next to me, but I said that I think large language models actually could end up being an evolutionary dead end.
Mm. Yeah. Because they don't have the concept, you know — we understand how gravity works. If I drop a ball, yeah, I know what's gonna happen. I'm able to predict it. Large language models do not do that fundamentally. Exactly. So we have to augment them. So we want AGI, but, you know, we have to augment to get there.
Yes. Whereas the Altman view of the world is, if I just keep chucking data at this, it's gonna become intelligent. And you go, does it work? Does the science really support that? Yeah. It's one of those, isn't it? Absolutely. What do you think about this situation where, up until maybe about six months ago, most of the AIs were learning and the language models were based on previously created human data, and we're now at a point — or at least a guest that we had on the show a week or two back made the point — that actually now there's more AI created data that the AIs are learning from than [00:19:00] human created data.
What are the implications of that, do you think? This is, uh, a brilliant question, 'cause I actually wrote a paper on this quite recently. First time, Dave. Well done.
Yeah, it had to happen sooner or later. I mean, 150 episodes, and there it is. So, my old employer, uh, Gartner — they've predicted that by 2026, 80% of the internet content will be generative AI created. Right. And if you think about that, so I use the Ouroboros, uh, kind of concept.
So you know, the Ouroboros, which is actually an Egyptian symbol — it's a snake eating its tail, in a ring, effectively. If you think about it, as more and more of the content is created from AI, what you're actually doing is moving the ratio to the point where, when you actually train an AI on that content, you'll get to the point where most of the content's coming from [00:20:00] generative AI. Two effects of that.
Firstly, a lot of that content is just rubbish, you know, not true. So you're gonna get more and more disinformation actually becoming fact as far as the AI is concerned. Right. Right. Um, the other thing that's really important, and this is one I'm very, very passionate about, is this — Philip knows I'm a great reader.
I read a lot and I love, uh, exploring different styles of writing and so on. It's that, because they're effectively probabilistic, stochastic models, they're effectively picking, uh, you know, the most likely answer for something. What you're gonna see is, if you take the entirety of — let's just take the English vocabulary, for instance, or English language.
If you take the entirety of it and you look at how words are used in the English language, mm-hmm, you will see that there are certain people, like Roald Dahl, Shakespeare, people whose words, if you look at a bell curve, are gonna be kind of on the edges of that bell curve. They're gonna be outside.
Right. Because of the way that AI LLMs [00:21:00] work, you are actually gonna see that bell curve of what it contains narrowing and narrowing and narrowing, and you are going to lose that kind of language. You're gonna lose the variety that is human. And as you keep going through that — there's a great example where they took an image, it was an image-based one, but they took an image of some numerals and they passed it through 10 layers of LLM training.
And by the 10th layer it was just generic trash. You couldn't pick out a single number from it. Interesting. Yeah. Dave Snowden, who runs the Cynefin Company, and who we've had on the show a couple of times — a real big thinker. He does a lot of interesting work about dealing with complexity and leadership and things like that.
Uh, and he's got a — and I'm not paraphrasing you, James, necessarily, but what your bell curve point there reminded me of is one of Snowden's views, which is that AI makes us dumber in a lot of ways. Yes, it does. The rich pageantry of options, or the color spectrum that we have as humans, will be [00:22:00] filtered down to — I'm not saying black and white,
Mm-hmm. But it'll be primary colors if it's overworked. At what point in time do we allow it to grow again, or to... I dunno. To your point there, Philip — you're gonna love this, 'cause this is coming back to something you were talking about earlier on — it's Orwellian.
Because actually when you think about it, it's, it's how, it's how, um, I can't remember what the language is called, but the language that they use, which is a very, very structured, very limited vocabulary. You know, things are double plus good or plus good and so on, they don't have, uh, different descriptive terms.
That's where we could end up. Yeah. One, one of the things that also, that occurred to me, and maybe this is one of the places that we need to keep the human in the loop, is I, I'm rather attracted to the notion of what's the role of human critical thinking in the intelligence age. Philip, do you have a, do you have a thought on that?
Well, we have to think a bit more, right? Um, and we have to think a bit more about: [00:23:00] is it right, rather than just racing to it, whatever it is. Mm-hmm. But let's use the pace of change in technology to broaden the horizon of thinking and think about the implications.
And those implications are vast, and there is phenomenal good if we get it right. And I don't want to be majoring on all of the negatives, because I'm not a negative person. Yeah. There is enormous opportunity to get things right, but let's not have unintended consequences as a result. Let's look at that human in the loop, to look at the human impact more than anything else.
And not be myopic in terms of the West or the East, or the rich or the poor. It needs to look at the societal impact of things, to look at unintended consequences. And that's the kind of [00:24:00] professional, mature, critical thinking process that we should be baking into the race to AI, whatever that entails. And it's starting to reveal a new role for the human in the loop, doesn't it?
So we started out with, oh, there's a new role — prompt engineer didn't use to exist, now it exists. But now this whole ethical dilemma that's coming up, and what it actually means for society, to your point, um, is becoming ever more critical, and we need to think differently about how technology will affect us.
'Cause like you said, Dave, at the beginning, we implement technology and it just goes down a straight path. Now the path's really windy, so we need to adopt, as humans, a different role in the whole system, don't we? I couldn't agree more. But isn't there also something — and I think it was Philip, the bell curve definition and Philip's spectrum analogy, down to black and white.
There's something indefinable in there, perhaps, about what a human brings to things. And let's bring up, Rob, the conversation we've had a few [00:25:00] times on the show, which is AI generated music — oh, hot topic — and what is the role for that? And will that engender the same relationship
you could have to, you know, insert your favorite song name here. And what is it about the human creativity versus the kind of automated or artificially intelligent creativity that creates a differentiator? James, have you got a view on that? Yes. I mean, so for me, when I see a, um, a great example, right?
Joe Cocker at Woodstock, singing With a Little Help from My Friends, right? That, on stage — that's music. To me, it's the human effort, the endeavor, the passion, and so on. An AI generated version of that does not represent what I'm looking for. I don't think it's the same for everybody.
I think I've heard people raving about the fact that they could take their favorite Black Sabbath songs and pipe them in, yeah, or get the Beatles to do it, create a new one for them or [00:26:00] whatever. Um, but for me, I actually appreciate more the effort and the passion that's gone into something.
It's the same with art, you know? I mean, for me, part of it is — I don't think I consciously do this, but I think I unconsciously do — I recognize the brush strokes and the effort that went into it. Whereas, you know, an AI artist is gonna print something digitally for you. It's not the same. It's not the same.
I have this feeling that we may end up at a point, rather like we had 20, 30 years ago where you had the Buy British kind of thing, or buy British beef, all that sort of thing — I have a feeling we could end up with a kind of 'buy human', uh, movement, where people are actually valuing human creativity.
It's that how and how you frame it is really important and what you, it's important to you. Right. So you sort of said about the human endeavor, the effort you get that it's smash AI is smashing apart the creativity industry. Now you get Coca-Cola famously doing their [00:27:00] Christmas advert, which was always a big thing now totally generated by ai.
Mm-hmm. Did you enjoy it? Was it entertaining? That's one measurement. But then there's the whole: do you remember it? Yeah. If you can remember. Yeah, exactly. But then there's this whole undercurrent that says part of the enjoyment is appreciating how we got to the end state. Yeah. Which is the point. And I think there's that balance, and you're right, it's splitting the audience.
So someone's saying, if I sit there and I'm entertained, it's a great application of it. If I sit there and I'm more entertained because I appreciate the endeavor that created the end product, well then that's a conversation with a level of sophistication that I think will probably play out a lot more in our society.
I can't see myself ever being sort of brought to tears by a performance from AI. Uh, and I may eat those words at some point, but the reality is, you know, I don't see AI trying to make me cry in my future. It might make you cry, but for all the wrong reasons. Yeah. I think it touches upon the same thing as what you just said.
Uh, if I was envisioning [00:28:00] a word cloud with the bell curve, you only see highlighted the words that are being mentioned the most. But we've had this on the show before. I did research way back, uh, for my masters, and, uh, I did research on Chatroulette. I don't wanna dive into the topic of Chatroulette again, but one of the key things of that was that you were really surprised by who was popping up on your screen, which made it also very much fun because, you know, you would never search for that kind of person or to connect with that type of person.
But it's the same with search engines, right? They also feed you with the same linear mm-hmm. Thoughts. Um, so did we actually, we we're talking about progression, but are we actually progressing if it's doing the exact same thing? So now we're, first we think, oh, it's really cool, it's creating images. But now even, I think we all have that, like, oh, we see that that's an AI generated image.
Mm-hmm. Like, a year ago we were impressed, and now it's like, oh yeah, that's AI generated. So are we actually progressing? But a certain portion of the population won't notice[00:29:00]
a difference. Are we overly precious? That is right for certain things — certain things have to be authentic for us to appreciate them, those things that are there. That's society, that's the arts, the cultures. But certain aspects we shouldn't care about; it doesn't matter if it's a commodity space.
Mm-hmm. And filtering those things through. So, horses for courses — it's a phrase very often used, certain horses for certain courses. So let's not waste the resources of AI on polishing certain things. You know, there is a need and a desire to take away the burden of life by doing certain things, in which case let's not over-polish it, let's just get on and do it. Mm. Um, but we've gotta put some rationale behind that and some guardrails. [00:30:00] That's kind of ethics, I think, in terms of: we apply those things where it matters, and it matters a lot; where it doesn't matter, hey, it doesn't matter. And I suppose that's the role.
The AI ethicist role, right? Which is somebody who thinks on our behalf that can guide us in the right way 'cause with a busy life and all of that. But I, I know J James, this is your area. I enjoy people who think on my behalf, Rob, I find it extremely useful. Don't I have to think on your behalf and I work with you to prejudge what's gonna happen and stop terror from occurring.
So yeah, I like it. Well, I've got the first AGI in the world. But, James, I suppose it's a good point to discuss the concept of somebody thinking about this full time as a new role, how we deal with it, and what's the framework they would sort of use to get into it. I mean, it's a really important point.
'cause what, what we've basically discussed is the world's changing. We need to think differently. Well, we need to get people to do that on a full-time basis. Otherwise we might miss something critical. So, yeah. Great. I mean, [00:31:00] I, I, but before we do, just quickly going back to Philip's point, it's, um, I think you just have to be aware that we, there is definitely a generational difference here as well.
Mm-hmm. Um, you know, Gen Alpha, Gen Z are very much — their attitudes to privacy are totally different. Yeah, absolutely. Absolutely right. So you will see a different, uh, appreciation of AI to what we have as a slightly more, uh, elderly group. Esmee excepted. Thank you. Thank you for noting that. Got away with that.
Um, but yeah, anyway, Marcel's the oldest, just for the record, by a country mile as well. It's no small margin. Basically, he's got a painting in his loft — we should call him Mr. Gray — which means he comes with wisdom. Yeah. Well, I'm not... No, you think so? ... Wisdom teeth, but yeah.
Wisdom teeth, but Yeah. Um, but, uh, um, sorry, myself. Um, but um, but yeah, so, so the role of the ethicist that the one thing we should get get clear is what I'm not, [00:32:00] uh, as an ethicist. And I think that it's kind of some of what you said there, Rob, is could maybe make, make people think that I'm the moral sponge for the organization.
Um, oh, we were hoping that, so I don't have to be moral. Are you saying I still have to have morals? Oh, come on. I can't discharge responsibility. I told you before. But, um, yeah, I mean, if you think — do you remember, Douglas Adams wrote, uh, obviously the Hitchhiker's series, and when he got bored with making the number 42 funny, he went off and did the Dirk Gently books.
Oh yeah. Yeah. They're quite good, the Dirk Gently books. I like them. Yeah. Cracking. But there was a character in there, a very minor character, called the Electric Monk. Oh yes. Yeah, I remember. An Electric Monk was someone you could pay to do all your believing for you. I love it.
Yeah. It's a great concept, isn't it? Outsourced conscience. Yeah. Yeah. Outsourced conscience. I am not that, okay? I am not there to basically take that on — I'm not your Electric Monk.
Okay. What, what I am, what I am there to do is to make sure that we are having the conversations and deciding, you know, doing that, thinking about what are the potential [00:33:00] concerns that we need to explore and get the right people together to actually make a decision. Uh, I do not own those decisions myself.
My point, the point of emphasis is, as I said, not to be the moral sponge. And so is there a way of framing that in your head, and how you educate people around AI and ethics? Because, I mean, what we've basically said is there's great potential, but there's potentially great doom in there as well. Yes.
And I do see you as a sort of Sherpa in your role. That's how I perceive it, to keep me honest and to keep me going in the right direction. But is there a way of framing it, or thinking about it, that can get people doing the right thing without sort of creating tragedy? Before you come back on that, James, I wanna add in a little question about tick speed — so how fast these iterations are running, and maybe should run. And the thing in my head, and I actually mentioned him earlier, is, uh, Snowden's Cynefin framework, where he's got the world of the complicated, and the world of the complicated is very knowable, very plannable.
There [00:34:00] might be risk, but we've done it before. The world of the complex and the world of chaos: chaos is unknowable, so you just have to keep taking one small step forward and assessing. And then the complex sits in the middle somewhere, where you, you know, test, receive results. Mm-hmm.
Adapt, test, receive results, adapt one. Am I right in kind of holding that the process you are talking about here around ethics sort of, sort of fits in the world of the complex? Yes. And and how does that function, do you think? It it, it does. Yeah. Absolutely. So, so I mean, you know, there, there is a, there has to be a governance around how we're, we're implementing AI within an organization.
And it's not just a technical governance, it's very much a, a, it needs to be even broader, I would say, than, than just necessarily the sort of HR interactions with, with the, with the governance to make sure that we're dealing with the, the, the people aspect of change within the organization. It needs to look at societally what's the impact beyond the organization.
And if you do it right, then [00:35:00] you, you actually can start sort of recognizing that ethics or AI ethics is actually not just a, a, a, a, a sort of bureaucracy or something that, not, bureaucracy is the wrong word. Um, is, is not a, um. Uh, a, a resource strain and a, a sort of necessary evil. It actually becomes a benefit to the organization because, you know, it being seen as a good, an ethical organization, the way you're implementing things and the impact you're having more broadly is actually better for the bottom line for the organization as well.
So they, you know, they have to see there is value in doing this. Right. And the way of doing it right, for me — to your Cynefin framework example, Dave — is you absolutely do have to deal with this, in the early stages now, where we are now, as being very much around this sort of chaotic side of things. We've got to have, you know, every sort of major decision or every substantial decision around anything to do with AI —
We need to have a discussion about is it hitting our company values? [00:36:00] Is it hitting our societal values? Um, you know, and, and make sure that you have those, you know, we have a code of ethics. Uh, you know, most organizations will have a code of ethics of some sort. Um, a code of AI ethics, and there'll also be, um, a, uh, you know, the, the societal values that we, that we want to adhere to as well.
We, we need to be challenging all of those. But what you want to be doing is trying to move more and more of that process back into sort of ethics by design, where actually, you know, 99.999% of the people that work in your organization are going to be hugely ethically concerned about what's going on, uh, with what they do.
Um, and we need to, or at least I hope that's the case. And then we need to be making sure that we are leveraging that and, and giving them the tools they need so that they can start, you know, cutting these things off before they become a problem, before we have to have the difficult conversations. Um, but to start with, it has to be a conscious effort to do this and we need, you know, regular governance discussions around what is gonna be the impact of this.
And it [00:37:00] doesn't go away once something is deployed or whatever, it's, it, it's there for the entire lifecycle, uh, right up to decommissioning.
Well, we've talked, I think, about some aspects of impact so far in terms of, like, the human in the loop and how that might work. Yeah. And how it might actually, counterintuitively, make us less intelligent. So in abstract terms, I think we've touched on some of that. I wonder if we just get a bit more concrete on what we think the impacts might be in society that might need thinking about.
The impacts might be in society that might need thinking about. Um, Rob, you wanna kick us off with some, with some thinking? Uh, there's, I, I suppose there's, there's a couple which are looming on the horizon fast. So, and I know it's one that we've talked about before, but the, you know, with the rise of Ag agentic and that sophistication, is this gonna dramatically reshape the workforce?
So if I have a 10,000-seat company, does it become a 3,000-seat company? [00:38:00] What happens to the 7,000? Do we retrain? How do we do that? How do we cope with that? Because people will get radically more productive; you'll be able to do the same with less. Right? So impact number one is there may not be the same jobs around.
Impact number two, then — well, the flow-on is what do we do about things like how we pay for society? So there's a conversation going on about, do you tax robots? Mm. So you've replaced a human with a robot. Do we tax the robot, because it's creating output? So something that creates output —
sometimes you tax it; that's the fundamental concept of taxation, to pay for society. And do we use a universal wage instead? And, you know, a Star Trek future where everything's perfect and everybody's well taken care of, and we all live a happy life. And then kind of the third one there is also: when these systems become highly autonomous and adaptive, do they become impenetrable for the human that doesn't get the result they need
Outta the system and how do they raise the exception and how do we deal with that? Mm-hmm. So it's so [00:39:00] streamlined. You'll always get a few who don't work within that structure, the exception to the rule. Um, are they disadvantaged? How do we deal with that as a society? So if you think about those three, we're actually starting to see those types of impacts turning up where people don't get the service they need and they can't penetrate the system to be able to complain and get the, you know, the right answer.
The woman in the Chinese hospital famously on the video, having to use a robot to book a, an appointment, the robot keeps saying no. And then she attacks the robot because she becomes so frustrated with the system that she can't get into for what she really needs help with. Um, and it, you know, it comes out as a frustration.
So there's those three. It's a bit like you trying to use an iPhone, Rob. It's just Apple need to get better with their interface. I mean, great — if we tackle the impact on work: we started talking about, you know, the workforce going from 10,000 to 3,000. I think there's some really interesting things that
we're rushing ahead without really considering properly. But if you imagine right now, um, let's say I have a job [00:40:00] and I have three core tasks in my job, so I do A, then B, then C — that's my three parts of my job. If we start using more and more AI and we actually start saying, right, okay, AI can do most of A and most of C, and I'm left with B in the middle.
And there's a question there as to whether or not I'm going to still get the sort of self-actualization I need from the role if all I'm doing is pressing a big red button in between two AI processes every day. And, you know, if I'm a trained physician or something.
And all I've gotta do is, you know, do this one thing. I'm likely to get a little bit cheesed off and probably think about leaving. Now if I do, isn't that human in the loop? Yeah. Well, you, it is a human in the loop, but it literally is just a human in the loop. It's, you know, just pressing a button. It it, if that happens, if we, if we automate too much of a role, then you're gonna get into this situation.
One of my favorite books is The Machine Stops by E.M. Forster. Um, believe it or not, the guy who wrote, you know, A Passage to India and A Room with a [00:41:00] View wrote a brilliant sci-fi book in 1910, um, where the entirety of society is run by an AI. It's called The Machine — it's not called AI, obviously, at that stage.
Mm-hmm. And when that starts going wrong, they have no idea how to fix it anymore. Idiocracy, isn't it? That's the whole thing about — they dunno how to run their society, how to water their plants. Gatorade! I mean, it's got electrolytes though. It's got electrolytes. Exactly. Exactly. So we have that risk.
So first thing, you've got this risk of people to the people of actually not wanting to do the job anymore. Wondering, and you not being able to replace 'em with similarly skilled people because why would they want to, you know, why would you wanna train as a doctor if a doctor has just become pressing a big red button, uh, in, in the middle of a process?
And, and so we, we've already seen this. I was actually talking to a colleague recently and they were saying that, that the, a number of the, uh, hospitals in France have actually seen doctors actually moving to, uh, moving to Africa because actually their skills are much more valued in countries with less technical infrastructure.
Well, I've seen, I've [00:42:00] seen, uh, another argument around global medical footprint, but it was almost the counter one to a certain extent, which was, which was saying like, at the moment, a situa, you know, the situation to now has been in, say, Africa and, and, you know, kind of disadvantaged parts of the world.
They found it really hard to get very skilled, you know, physicians and doctors and things like that into those, into those areas. Well, how about with, with an AI assistant? Like everywhere, everywhere gets a skilled AI physician. It's just a self-service, you know, AI version of it. But the, the thing with professions for me that I'm, you know, that I, that I think about quite a bit and, and, and I don't know the answer to, uh, and it is a, it's concerning to me is with, with a lot of traditional professions, a lot of the, you know, the first 5, 10, 15 years in that profession, we're doing things that gen AI completely eats up today.
So what happens then when you're trying to breed [00:43:00] judges? Let's say, let's say the legal profession. So let's say the, let's say paralegals and things like that, that that AI could easily target that part of that profession and be extremely effective at it. But how do you therefore as a human, you know, train yourself?
Because that whole training path ends up, if it doesn't end up getting removed, it ends up getting changed very fundamentally. And is it as effective? And therefore, when you say, well, we'll always want human critical thought and human decision making at, at say, judge level or in a courtroom. Well, how do you get humans there?
Yeah, there's no path there anymore because they can't go through the junior clerk and all those kind of roles to get to get there anymore. Yeah, agreed. So do so do you think that's something that ultimately we will need to choose to protect, even though you can automate it, but we might choose to protect it?
Or do you just think that actually there is something else there — that, you know, maybe because you've got everything at your [00:44:00] fingertips, all the information you need, that part of the training is not required and we just don't understand that yet? You know, I'm sort of really wrestling with this as an issue. Or you short-circuit the whole process and you just go straight to Judge Dredd.
Yeah. Or Judge Dredd is the doorway, the gateway to... Nah, I just make the judging happen. Rob, that's, um — it is an interesting one. So, I mean, with the exception of Philip, who here remembers what they were doing on November the second, 1936? Marcel, what was it like back then?
Okay. Um, so, um — outrageous. Yeah, yeah, yeah. Oh, no, I'm not having that pay review conversation with you soon, am I? Um, but, um, no. So if you had been, uh, around then and you were in the UK, the chances are you might well have been watching Adele Dixon singing The Magic Ray in the first ever televised broadcast from Alexandra Palace.
Okay. So that was our first BBC television broadcast. If [00:45:00] you had been watching that, what you would've seen is her standing in a radio studio in front of the BBC Radio Orchestra, singing the song The Magic Ray. But the weird thing is, for the next 20 years, that's what broadcasting was.
It was people standing in radio studios doing the same thing they've been doing without visuals. But in, uh, in, in front of a television screen, okay? Mm-hmm. And it took us that long before we even, I mean, the biggest innovation in that period was renaming the BBC radio orchestra to the b BBC television Orchestra.
Cool. Um, and yeah, that's dramatic. It is dramatic. Be like renaming it to the BBC iPlayer Orchestra now. Exactly. But can you imagine, though, if the next thing after Adele had been an episode of Geordie Shore? Society was not ready for that. We don't know. The point is we don't really know where this is taking us.
We don't know what's gonna happen with society, but what we are doing is making huge leaps towards something, and we are not really defining what that something is. Yeah. And we're not actually defining a path to [00:46:00] get there. And that's why the ethicist role is part of the answer — it's basically saying, well, okay, it's great if we wanna get over here to, um — I can't remember, Rob, the example you gave — a sort of utopian, Star Trek utopian future.
It's great to have that in mind, but we're jumping way ahead in thoughts without actually thinking about the steps that we take to get there, and having gates in that process to say: are we actually doing this in a safe way, are we doing it in a human-centric, planet-centric way?
There's a great, great book that I've recently read twice, 'cause I was so captivated the first time I had to absorb it a second time. Um, and that's A New Social Contract by Minouche Shafik. She's an economist at heart, and she's saying that there's such a pace of change upon society at the moment that we have to stop and think what we want from that change.
Mm-hmm. And, and [00:47:00] is it about automation of XY is it about displacement of the blue collar, the white collar? Is it. World society around the, and have not, this whole thing around social contract. We need the society I think is an important aspect that, that is there. Like the, we, we should come in at it, Philip, from a point of view of the intentionality of it.
Like, we can let this happen to us and accidentally stumble forward, or we can be intentional as we go forward. Um, she's talking about it in respect of: change is happening, so how do we embrace that change for the good, rather than complain about it when it's just passed you by and you're left behind.
Right, right, right. And that's what's really good from an economist, to talk about that perspective, as in the reality of these big decisions. Um, she's absolutely a citizen of the world — an absolutely fantastic book. She's looking at it not just from [00:48:00] the Westernized perspective; she's genuinely looking at it from
societal good, whatever that society is and means for different neighborhoods around the world. Mm-hmm. And be careful what you wish for. So that relates back to the earlier conversation about the doctors in Africa. Well, what that means could be perceived as saying, oh, we're trying out this AI doctor bit — oh, let's try it out in Africa.
Right, right. That's wrong, right? So let's be careful what we wish for. And I think there's an underlying current there: over the last 15 years I think technology and its use has matured significantly because of this critical thinking; systems engineering has simplified and accelerated the adoption of some of the leading technology.
And, I say, long may that prevail. But there's an [00:49:00] additional thing on there — we talked about critical thinking, but also unstructured thinking in terms of: what's the implication of this? Because we have to address it from an ethicist's perspective. We have to say, oh, what if? Because we should be curious enough to ask, rather than just think, oh, you know, it'll all be OK.
Right? We, we should be just kind of pausing for thought, I think around, Hmm, should we, what's the implications? And that comes into societal contract, not just, oh, it's good for me. Whatever. 'cause we need to be kinder as people. Yeah. Maybe let's briefly then look at our readiness and, and where we're up to as organizations, as society and as individuals in terms of going down this path.
So, um, you know, are, do we have the quality of data that's supporting this? We touched on that a little bit earlier. The human, human driven data, the, the plethora of, uh, AI driven data now, and then the quality of that and, and whether it's helpful or not. [00:50:00] Do we think we have the skillset and do we understand even what those skill sets need to be?
James, where do you think we are in sort of readying ourselves as organizations and society for, for the sort of challenges Philip was setting out? Yeah, that's a really good question. And, and, uh, we we're kind of like looping in a lot of the things we've already discussed together, that they've all got a, they've all got a touch point into this.
So, you know, we've all, we've all seen the issues around, uh, the, the sort of, uh, intellectual property challenges, uh, of getting, uh, enough data to satisfy the, the existing brute force approach of just throwing as much data as you can and allow them to, to actually train it. We've al we're also seeing that within that data, as we discussed before, there's a, there's amount, an amount of it that's generative AI created as well, and that, that, that amount is, is expanding.
But, you know, we don't know how much data we need to achieve this, 'cause we don't really know what the end goal is. So we're not really quite clear on where we're trying to get to, uh, with this. Is the data of high enough quality? I mean, [00:51:00] historically, no. Because, uh, a lot of the data that we have from the past — I mean, you know, I think they've kind of
masked over this now in a lot of the image generation tools. But if you were to say, draw me a picture of a boardroom, it was gonna be, you know, 10 white men in their fifties sitting around a table — that's what you would've got. Because that was the reality of where society was in the historic data.
And, you know, and it's, uh, it's not where we are now and it's not where we'll ever be, thank God. So, so there are issues with historic data being useful for us, and, uh, and then, and then that then puts a counterargument that maybe actually synthetic data can be better. 'cause you can make it more, less biased in that way.
But I do think we have to be slightly careful here. There's a topic that doesn't get discussed that much, or it's starting to bubble up now — and actually, particularly since DeepSeek came out, uh, people have started to recognize this — it's about who we're actually asking to choose that data.
Who is choosing what data [00:52:00] is used to train the model, as an example? The reason DeepSeek gets brought up is, if you ask DeepSeek about Tiananmen Square, it'll go, nothing happened here. But, uh, you know, at the same time, if you think about the — is it Project Orion, in the US? Or Stargate?
Stargate. The Stargate, right. Stargate. He's, he's chucking money into all these different things to try and advance AI in, in the States. He's got a bit of a hold on there. So imagine I, I could definitely see a situation where some of the models in the states are gonna start having a political leaning.
Yeah. It touches, doesn't it, on sovereign AI? Yes, absolutely. State AI, where the AI imbues the language, values, mm-hmm, and economics of that particular nation state. And, you know, you don't have to take too many leaps of the imagination to get to a pretty interesting stage with, you know, kind of nation-state agents talking to nation-state agents and full [00:53:00] AI kind of global interaction.
Yeah, absolutely. I would predict that there will be models out there that are very anti-abortion. There'll be, you know, flat-earth models for all I know. And, you know, we-didn't-land-on-the-moon models. They can go together, can't they? They'd be two in a pub, talking about how the earth was flat and we never landed on the moon, 'cause we could never get there, or something. Is that a virtual pub? Virtual pub, yeah. Sat there. You can imagine the crazy conspiracy theorists with their, uh, Vision Pros on, interacting with the models. You know what, although I don't agree with the belief that the earth is flat or that we didn't land on the moon, I do think there's a business market there.
So maybe I can exploit it. Is that ethical? Right, Cloud Realities productions will work out the ethics later, Rob. But this comes onto capitalism, right? So you just said, okay, I can make money out of [00:54:00] this one. Yeah. Which is kind of one of the corrosive parts of AI: I can make money on this.
Yeah, and very much so; that is, I'd say, eroding some of the purity of it. Mm-hmm. I can make more money if I do this, but in which case, ethically, should I be doing that? And we have to kind of govern ourselves on those things, or else we're all capitalists, right? Yeah. But that's the thing. After Greenspan, you know, the East were defeated by the West with economics. We got wrapped around the axle, I've used this one before, and we just went, oh, money, and we just ran away with it. We didn't think about the implications, did we? So my view is there is quite a strong risk, because the incentivization of making money is so strong in our community and society at the moment, that it is a driving force we can't put the brakes on properly to say, ah, hang on a minute, do you really want to do that?
Yeah. I think that actually capitalism will be the reason why we make a huge set of mistakes with this. Yeah. The commercial arms-race aspect of [00:55:00] it is something we've noodled on before on the show, and it is concerning. And, from a political perspective, they fall back on the, if we don't do it, someone else will do it. Yeah, someone evil will do it first, as their argument. And actually, it just doesn't register, 'cause they're all thinking the same thing. And if we could just get some communication going between, naming countries, China and the US, you know, we could actually get some standards, which would mean that we could both benefit from this rather than both destroying each other with it.
Um, that's very existential, but yeah. Well, on that existential thought, maybe let's start to wrap up a little bit. I think we've had a pretty good walk around, and as always on this subject, when you just scratch the surface of it you realize just how deep it goes, and it feels like we've had a good roam around. I'm not sure, though, that we've even begun to fully explore the subject, but maybe that's multiple podcasts in the future, I suspect. [00:56:00] Excellent. Where I wanted to go, though, is I wanted to touch on something you talked about really early on, James, which was the industrial age. Hmm. The dawn of the industrial age, and how it took us a while to sort of accommodate that as a society, for want of a better expression.
As we move into the intelligence age, then, do you think there are lessons that we can learn from the previous adoption of technologies? Or is this so different that we're in new territory? No, I think there are lessons we can learn, and I don't think we are in new territory. In fact, I saw the Nobel Prize round table that they hold each year, where the Nobel Prize winners get put round a table to discuss topics. And Demis Hassabis, obviously, who won one of the prizes this year, well, last year, just said it was gonna be great, it's just gonna be like the Industrial Revolution. And thankfully there was an economist at the table who said, actually, do you know what actually happened during the Industrial Revolution?
Yeah. So the trouble is, the people that are leading these things don't [00:57:00] know history. And I think history is a really, really important thing for people to understand. Burn coal, 'cause it's gonna power everything; well, the implications are, et cetera. So the implications need to be considered, and the winners and losers are across society, or across geographies, or whatever, without being too dramatic about it.
We just need to acknowledge that. Yeah, we've gotta kind of remember it through clear glasses and say, hmm, okay, winners and losers. Because, I mean, yeah, absolutely, it was 30 or 40 years of abject suffering for the workforce that were displaced and had to move to cities, and all the diseases that came in as a result of, yeah, that sudden overcrowding of cities and all that sort of stuff. Now, obviously I'm not saying that's gonna happen; it's not gonna be that kind of, yeah, revolution. But it's still gonna have the same effect: you're gonna have people losing their livelihoods, or finding themselves, you know, not having the jobs that they enjoy, displaced in [00:58:00] some way.
And this brings us to something that you mentioned, Rob, which is around, you know, do we need a UBI kind of concept, or a tax, paid for with a tax on agents and so on. And there are steps that we should be taking now if we were going to do that, which we're not taking. So it feels like we're almost a little bit late already.
Uh, well, we know we are. We're late already in dealing with the societal impacts. If we were gonna tax agents, we'd need to have a clear way of identifying agents. Mm. They would need identities. And then you get into a whole ethical discussion around human rights versus AI rights and all that kind of stuff as well, which is another area I talk about a lot.
And it's, you know, a frightening area to think about, and we are not doing this. The Universal Declaration of Human Rights does not cover enough clauses to actually cope with us co-inhabiting with, effectively, another species, as in AI agents. [00:59:00]
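As a purely hypothetical sketch of what "agents would need identities" could mean in practice, and not anything the guests propose, an agent registry might hold records along these lines before any tax or accountability scheme could attach to them; every field name here is an assumption made for the illustration.

from dataclasses import dataclass
from datetime import date

@dataclass
class AgentIdentity:
    """Hypothetical registry entry an agent-tax or accountability scheme would need."""
    agent_id: str        # globally unique identifier for the agent
    operator: str        # legal entity accountable for the agent's actions
    model_lineage: str   # which underlying model(s) the agent is built on
    jurisdiction: str    # where it operates, hence whose rules and taxes apply
    registered_on: date  # when it entered the registry

registry = [
    AgentIdentity("agent-0001", "Example Corp Ltd", "general-purpose LLM", "UK", date(2025, 1, 1)),
]

The hard part, as the conversation suggests, is not the record itself but agreeing who maintains such a registry and what rights, if any, the entries carry.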
Es, after that, where's your head at? Yeah, I think it's touching upon a lot of things that we've already discussed, but maybe more from a philosophical point of view, and purpose and systemic transformation as well. So we have that shift from classical technology to quantum, right, and we are already talking about huge leaps.
So this is also gonna be a huge leap for us as human beings. Quantum forces us to move from certainty to probability, from control to interconnection. And that mind shift mirrors how organizations should also be evolving. And I think, in Frederic Laloux's words, I don't know if you know the book, Teal organizations is what he's talking about.
And he speaks to self-management, wholeness and evolutionary purpose. And with AI and quantum, [01:00:00] you could see them as accelerators, maybe. But we've already discussed, you know, is that really something that AI could do for us, to have those principles feel more actionable than ever, using tech for that sense of helping us survive as a species, or maybe go to a next level.
Maybe even so, leaders must also cultivate those networks that can dynamically scale and adapt, you know, from ego to eco. And I was thinking about that in terms of ethics, 'cause what we now see in ethics is more about preventing harm, you know, making sure that it's not harmful, that it's fair, that it's unbiased, that it's respectful of privacy.
But that's all with a fixed mindset: it's not harming us, what is best for me, am I gaining money out of this? But if you go into that other shift, that Teal, evolutionary organization, it could even be a compass. It could help us drive our purpose, and that's what he's talking about, [01:01:00] moving towards meaning. So instead of, okay, we can go to a Star Wars kind of world future, are we using AI to help us drive the compass and then see where it's gonna take us? That's a huge mind shift as well.
And I was wondering, do you actually have conversations about ethics that are more compass-driven instead of protection-driven? Do you already see that shift happening, or is it mostly protecting? I think at this stage most of the con... It's a brilliant point, actually; I love that concept.
I think that most of the conversations I have at the moment are definitely around the sort of protection view that you describe. But I think part of the reason for that is that we can't, or at least we are not ready to, accept and trust AI. In order for us to be able to say, right, you're gonna shift us to go over in this direction, we've actually gotta trust the AI.
And we're [01:02:00] not at that point at the moment. Philip, you wanted to say something? I was just gonna say that the trust will be eroded if you ask this AI thing one question and two minutes later you ask it the same question again and get a different answer. So we must be able to audit the decisioning that AI can give us.
Which is probably the industrialization of AI: how to ensure that it's consistently giving us the same answer. If a week later it's giving me a different answer, that's gonna kind of erode that trust. So we've gotta kind of protect ourselves in the expectations we put on AI, or else it's gonna start eroding our confidence in embedding this AI to help us answer things. And yet, actually, at certain times those answers are going to change, because it's learning, and we have to understand that; it's a living beast.
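As a minimal, purely illustrative sketch of what "auditing the decisioning" could look like, and not a method either guest describes, you might log every prompt and response and flag when the same prompt starts producing materially different answers over time. The function names, the flat-file store and the exact-match comparison are all assumptions made for the sketch.

import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_answer_audit.jsonl")

def record_answer(prompt: str, answer: str, model: str) -> None:
    """Append a timestamped prompt/answer pair to the audit log."""
    entry = {
        "ts": time.time(),
        "model": model,
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "answer": answer,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def answer_has_drifted(prompt: str) -> bool:
    """True if the log shows more than one distinct answer for the same prompt."""
    if not AUDIT_LOG.exists():
        return False
    target = hashlib.sha256(prompt.encode()).hexdigest()
    distinct_answers = set()
    with AUDIT_LOG.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry["prompt_hash"] == target:
                distinct_answers.add(entry["answer"].strip().lower())
    return len(distinct_answers) > 1

In reality you would compare answers semantically rather than verbatim, and a change driven by genuine learning should come with an explanation rather than just a flag, which is Philip's point about treating the thing as a living beast.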
Do you [01:03:00] think, if we're talking about the intelligence age, could we maybe make that the emotional intelligence age? Mm-hmm. We would love that, you know, that it actually helps us increase our emotional intelligence instead of the tech intelligence, if you know what I mean.
Yeah, absolutely. Right. And actually there's an interesting point on that. We've already talked a lot about the fact that, you know, it could potentially dumb us down, where we're no longer thinking about stuff so much. So someone, I think it might have been Yuval Noah Harari, but I'm not sure it's one of his, talked about the transition from homo sapiens.
To homo sentiens. Yes, where it's all about feeling as opposed to wisdom and intelligence. Which is right or wrong? Right. Not right or wrong; it's a feeling. And I think there's that point about humans changing their answers as they get older. So a question you ask a 20-year-old, and you ask it again when they're 50, 'cause they've got more knowledge under their belt, they've forgotten what they said. Yeah. But they can change. Yeah, belief systems, although very [01:04:00] rooted in structure for the human, and they don't move fast, they do move. Yeah. There is that as well, which is, you can change your opinion on something; it's okay, but as long as it's based on an experience, it's auditable.
So to your point, I had an experience, I now understand this better, I've changed my mind. Yeah. And I was with you, and your thinking there, right up until you said Star Wars is in the future. And actually everyone knows it was a long, long time ago. So, you know, it was a long time ago in a galaxy far, far away.
Let it go. I really don't like it, to be honest. I don't have a clue about it. I'm sorry I'm hurting a lot of people right now, but I really don't. This is too late in the show to bring up such a monumental issue. I know, I know. Oh man, I'm tired now. Star Trek, Star Wars, you can't confuse the two, right? I mean, they're worlds apart.
Well, I don't understand the difference between the two. Well, one's in the future, one's in the past; that's the first delineation between the two, isn't it? And one's right, one's wrong, right? [01:05:00] Well, one's actually happened. So one's truth, the other's prediction, right? One's literally history just being replayed out.
Yeah, true. Marcel, do you like it? I'm just trying to find somebody who's actually with me on this. Uh, good luck. No, I've never seen it. I don't believe you've seen... I do not believe you for a second. I don't believe you for a second. Well, look, on that extremely controversial note.
I mean, up until then, I thought the show was going very well. I quite liked Esmee and Marcel. Yeah, I know, now I'm gonna have to dislike them. I don't know if I can cope with this, Dave. It derailed right at the last minute, Rob. It's terrible. This is it; we'll have a production meeting as soon as this is done.
Yeah, it's just you and me, by the way, just to sort it. This is the new production team. Don't bring me in as a witness; I'm not gonna fall into that one. Right. All right. Well, look, thank you very much to Philip and James for joining our Access All Areas chat today. Good to see you guys.
My pleasure. Thank you very much, guys. It's been really fun. Now, we end every [01:06:00] episode of this podcast by asking our guests what they're excited about doing next. And that could be, you're looking forward to seeing the new Andor episodes that have dropped this week, which are excellent, style or content, Esme.
Or it might be something in your professional life, or it might be a little bit of both. So, Philip, why don't you kick us off? What are you excited about doing next? I've had a pretty phenomenal week actually, and I need to distil it in my thinking a bit. I've been working in a challenging area of work around submarines, and in that world we are introducing LLMs and AI and all that type of stuff. And it's phenomenal, and it's a brilliant thinking person's agenda; it's wonderful. We've gotta make it real, and my mind's exploding at the moment because the people we put in a room are just awesome. I'm gonna kind of distil it, so I'm looking forward to a three-day [01:07:00] weekend where I can kind of not think about things, let the subconscious work away on stuff, and I think benefit afterwards. So, yeah, I wanna kind of take time out. I'm looking forward to getting some sun at the weekend, doing a little thinking, and kind of getting back to some of the whole self. Well, enjoy yourself doing that, and I think it is gonna be a nice long weekend with a bit of sun, so it should be nice in the garden with a drink of your choice.
James, what are you excited about doing next? I've just been asked to be an advocate for an organization called Safe AI for Children. Brilliant. And I've committed that I'm gonna spend some time thinking about how I can actually help. So I'm really looking forward to spending the weekend just having a bit of a brainstorm with myself about what I can do to actually, yeah, contribute to what they're doing, which I think is fabulous.
I mean, a brainstorm with yourself? Is that like, um, schizophrenic? Yes. Is that like a bipolar [01:08:00] brain? Isn't that called a thought shower now? Isn't it called a thought shower? Yeah, because brainstorming's too violent, too aggressive. Yeah, it's too assertive and violent. Oh, it's a thought shower.
Okay. Thought shower doesn't quite have the same ring. Okay, sorry, I've learned something today. I'm just impressed with how meaningful both your weekends are gonna be. I mean, I'll be sitting and staring at the TV, Cassandra, right? Yeah, exactly, man. I've only seen the first two episodes of this week's drop.
Oh, I might treat myself to a Black Mirror episode at some stage. Oh, yeah. The first two were so depressing. The first episode, just absolutely horrible. Horrible. Yeah. Chris O'Dowd does a very good job; that was amazing. Really good, that was a good one. But I did wonder, spoiler alert, why at the end he didn't just drive her out of the county. Seems a much more... To turn her off? Shut her down. Shut down, when she got out of range. Yeah. So why didn't he just drive her out of the county rather than... Oh, I see what you're saying. [01:09:00] Yeah, yeah. Good point. That would've been a little more straightforward, wouldn't it? Yeah. Less dramatic.
Anyway, for those who didn't pick up on it, that's, I think, season seven, episode one, Common People. Mm-hmm. And very much worth a watch. Very much worth a watch, but not a happy place. So yeah, don't use it to cheer yourself up. I wanted to say it addresses ethics; Black Mirror is turning into a full examination of that, really, isn't it, when you sort of step back from it?
I mean, that's effectively what it's about. Is Charlie Brooker the new George Orwell, right? Yeah, one of the most absolutely preeminent thinkers of our age, really, doing it in a populist way, you know what I mean? Mm-hmm. Good science fiction is always the best source of ethical discussions that I come across, right?
Yeah. I used to say that today's science fiction is tomorrow's iPhone. That's talking about the technology side of things. But I also think that good science fiction always addresses the ethical issues that we don't address easily in real life. If you would like to [01:10:00] discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at Cloudrealities@capgemini.com.
We're all on LinkedIn. We'd love to hear from you, so feel free to connect and DM if you have any questions for the show to tackle. And of course, please rate and subscribe to our podcast; it really helps us improve the show. A huge thanks to Philip and James, our sound editing wizards Ben and Louis, our producer Marcel, and of course to all our listeners.
See you in another reality next [01:11:00] week.