Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
CR083: Jagvinder Singh Kang, Partner, International & UK Head of IT Law at Mills & Reeve
[00:00:00] If I'm not, if I'm trying to be funny and I'm not funny at all, let me know that as well. You'll be used to that with the rest of us. We do that all the time. I know. We're a very low bar of humor on this show. You're in good company. You're in good company.
Welcome to Cloud Realities, an original podcast from Capgemini. And this week, a conversation show exploring AI from the world of legal. And what are the impacts on society and organizations? And what is the law doing to protect us? And what do we need to do to respond to that? I'm Dave Chapman. I'm Esmee van de Giessen and I'm Rob Kernahan.
And I am delighted to say that joining us this week to talk about all things law is Jagvinder Singh Kang. He is Partner, International & UK Head of IT Law at Mills & Reeve. Jagvinder, [00:01:00] thank you for joining us today. How are you doing? I'm doing great. Thank you very much, Dave, for having me on the show today.
Really excited to be talking about AI. As you mentioned, I'm at, uh, Mills & Reeve. So we're a top-40 UK law firm, and at the firm I head up the AI, IT, Data Protection and Cyber law services. So I've been specializing in tech law for well over 25 years. But I have another side to me. I love tech. It runs through my veins, as in addition to being a lawyer,
I'm also a qualified software engineer. So I have a degree in computer science and software engineering as well. So when I'm working with organizations, I can speak the same language and align the tech with the law. So I look forward to delving into the world of AI with you. That is a coming together of two mighty professions, my friend.
Does that make you like a sort of a superhero at the, at the nexus of two things coming together? Absolutely, Dave. And we'll hear more about superheroes during today's podcast. [00:02:00] Looking forward to it. Esmee and Rob are here. Esmee, you good? Yeah. And I absolutely love it when there are two worlds coming together in one person.
So I'm really looking forward to those two perspectives, uh, in the reflections he's sharing. It's certainly how you get a new and unique perspective on things, particularly given that we live in the world of tech pretty much exclusively. So getting a different perspective on it is going to be very good.
Can't wait to hear about that later. What's confusing you this week, Robert? Well, David, this is an interesting one, or it is for me anyway. Are organizations over-embracing the algorithm at the wrong time? And so I'm going to use Netflix as an example. So Netflix releases a TV show. We've seen the recent one with Jeff Goldblum come out, about the sort of mythology, the gods, you know, a modern-day spin, quite an interesting story.
Oh yeah, Kaos, right? It's been cancelled, but you look at it: great production [00:03:00] values, good acting, the storyline has a decent hook on it. But because the algorithm told them that people didn't instantly click on it and watch it and stream it, they've killed it. But creatively, you could sort of see a nice four-series arc, I'm sure they designed it, it could have got really interesting, the story could have become very compelling. So it's like they're killing things off that, if given a little bit more time, could be so much better. And how many TV shows that would have died because of, you know, an algorithm actually got to survive and became the greats?
And so I'm wondering, in the pursuit of just relentless, ruthless objectivity associated with performance, are we forgetting that sometimes you have to let things live a little bit longer to see if they're actually going to be great or not? And that is the bit I'm confused about. Should we just embrace the algorithm, or should we not?
Money makes the world go around, huh? That is part of the problem, isn't it? Didn't get the [00:04:00] clicks? You're off, you're gone. Hollywood's had the same issue with building to scripts and things, because they think that's successful, but we're missing out on potentially great films because of, you know, a formula.
Well, I think a lot of the time, this actually reminds me of a couple of the previous debates we've had, on the sort of art versus science, or art versus AI, debate. Well, I'd just like to remind you, Robert, you came very squarely down on the side of AI-generated music, and I'm not sure you're still standing by that point.
It's good. It's good. No, if it's good, it's good. That's my argument. But I think where I would go with that is: AI-generated music can generate, you know, from the data set called music, and it will therefore generate things that have been created within that data set. And great art often breaks rules, or does something unique, or does something that's really against what data-driven decision making would have you do.
So I think I'd always come down on the [00:05:00] side of, you know, to quote Jobs, the side of the maverick who's going to do something different, despite sometimes all the commonsensical things and all the data telling you to do something else. So you need to write a strongly worded letter to the CEO of Netflix and tell him to un-cancel that TV show then.
I think we've got an AI agent running Netflix these days. Is it fully agentic? We probably have now. I mean, you wonder where they're going to go, because everything's just going to become bland. I mean, if I was a TV show producer, I'd find out how the algorithm worked and then
do whatever you needed to do to play to it. I'm guessing, I've not seen any of their data, but I'm guessing from my own habits and what I've talked about with friends, it would all gravitate to a level of probably reality TV. Yeah. On Board or whatever it is, On Deck or whatever you like.
Yeah. I alone did 13 seasons of nutjobs. You're [00:06:00] both here. I can't believe you watch it. But no, it's that, um, we're going to miss out on stuff. And here's the worst part: I invested in watching it. I really liked it. And then at the end, they cancel it. And you're like, oh, you shake your fist.
It's like, do you remember Chatroulette? I think the world needs a little bit more of Chatroulette. Do you remember that? That's an application where you go into a chat around the world, and then the algorithm decides who's going to be, you know, in your meeting. Right. Yeah. A lot of the, you know, seamy things happened, and then, you know, the people...
I don't think I can handle the stress of that. Yeah. Meeting random strangers. That's a terrible horror. You know, the fun of, you know, just letting things happen. Can you imagine deepfakes being introduced into the world of Chatroulette? Oh, that would just go horribly wrong, wouldn't it? Yeah. I think I'm solidly against it.
I think on one level being data-driven is a really helpful thing, but it's like most hammers, isn't [00:07:00] it? Really good at being a hammer, and just having a hammer is not as good as having a full toolkit. So I think it needs decision making outside of it. And maybe, you know, maybe there'll be a point where the algorithm can factor in artistic merit.
So maybe it could say, you know what, high production values, that looked like it was going somewhere. You know, maybe there's a decision point here that we need a human on, rather than just going, well, it didn't get, you know, kind of N million clicks in the first seven days, so therefore it's dead. But I guess time will tell, and we'll see how that goes.
I've got to say, I'm quite worried about it. The streaming world is going through a weird flux of momentum, isn't it? Yeah. Yeah. Not great. Anyhow. Let's get on to maybe some subjects that might help with this over time, and we're going to have a look at AI through the lens of law. So, let's start with AI itself and how businesses are perceiving it.
It's been around for a while, I [00:08:00] think, in various different guises. We've covered it on the show in a number of different ways. But in the way that you're framing it, Jagvinder, like, why is this being perceived as new at this point, do you think? I think it's new depending upon the audience today. So if you're talking to organizations in the industry, the IT
supplier organizations, they know that AI has been around for decades. When you're looking at it from a customer organization perspective, one that isn't in that sort of daily business of providing IT goods and services, it will feel new to them. I do a lot of sort of public speaking. And one of the things that normally comes up when I'm mentioning to audiences is that it's been around
longer than me. It's been around for over 60 years. I get two surprised reactions. Firstly, that it's actually been around that long. The second reaction I get is that they look at me and say, Jagvinder, you haven't hit 60 yet, with all of that grey in your beard. And I have to tell them, well, that's because I'm advising on GDPR all the time.
We say that about Rob's hair. Oh, [00:09:00] there's a fun subject, GDPR. But, uh, we've been living with it for decades. The reason why it does feel new to organizations is that it hasn't been labeled as AI until recently. So we've been living with AI, whether at an organizational level or in our businesses.
So it's been behind the scenes, whether it's been sat navs, or Siri on the iPhone, or facial recognition, or even watching sports. You know, a lot of the curated sports highlights are done by AI, but they've not been labeled that way. So what's changing now is it's being labeled that way, and that's the key change.
From an organizational consumption perspective, there's something perhaps different in terms of how the tool sets are showing up to them, the power of those tool sets, and perhaps how you have to sort of commercially protect yourself and legally protect yourself in the use of them. Yes, well, there's a couple of things, Dave, on that.
I think [00:10:00] firstly, it's the immediacy now that there's easy access to AI. So if we look at the gen AI platforms like ChatGPT, it's the fact that anyone can access this. Now you can sort of easily have that within your organization. You don't need a technical background to interact with it. Anyone can use it.
And those immediate results which you get now make it feel new and powerful. And then there's also the versatility of what it can do. And if we sort of, you know, look at the evolving nature of AI, it's now come to a point where it's become more powerful because of the power of cloud computing and the vast amounts of data analytics.
I mean, if I take you back, Dave, think about your youth, when both of us had less grey. Do you remember our good old dial-up modems? Oh, very well. US Robotics, here we come. Ah, the 28,800 baud, we all loved it. [00:11:00] And I was going to say, if you remember back then, and now we're going to sound like really old blokes, the dawn of the internet.
So when you were trying to go on a web page and you heard all those beeps and static, and it took hours to get onto anything, and you had a domestic incident in your household if someone picked up the phone and started dialing. Rob can do an impression of it. Make the noise. No, no, I'm not going to do that, Dave. But actually, you could get your ear tuned to the modem sounds, and you could tell how it was connecting and what speed it was connecting at.
So before it popped up, you knew if you were getting 52 or 38 and stuff like this. It's a skill. What's the difference between a 52 and a 38 sound, Rob? Do it. No, I'm not. Go on, do it. No, no, I'm not doing it. Dave, you do it. No, I can't. I can't do it justice. See, I knew you'd be able to do it.
Dave, I think all your listeners are going to sort of tune out. They're probably thinking we're still on, [00:12:00] you know, old speeds today. But, uh, this is childhood memories, man. Well, Dave was an adult, let's be honest. Oh yeah, for me it was childhood. For me it was a childhood memory as well, yeah. But if you think about those days, back in those days when life was a lot simpler, you were also very limited in what you could actually do with the internet.
If you look at where we are now, we can't live without the internet, whether it's our personal lives or our business lives. And I think it's the same thing now with AI. Now that we've got more powerful processors, we've got cloud computing, we've got data analytics available to us thanks to the internet, we're going to be able to do a lot more things, and that's why, again, AI is feeling a lot newer, and that's why businesses are going to have new challenges when it comes to sort of the legal side of this as well.
So let's maybe go a little deeper. I think we probably acknowledge that, in general, AI can do a lot of good. And I think there are examples of AI [00:13:00] doing interesting and good things, a lot of which are invisible in our lives, in the way you've just set it out. But I think it's true to say that it's visible in most industry sectors these days.
So, in terms of the adoption of them then, from your perspective, what do we need to be mindful of, and where do the downsides sit? So Dave, as you know, in addition to being a tech lawyer, I'm a software engineer, and I love my sci-fi. And I love my superheroes. So I'm going to ask you: are you familiar with Uncle Ben?
Yes, I know there are two Uncle Bens in my life, Jagvinder. Both are pretty important to me. The first one pioneered microwavable rice, which, you know, has got an important place in society. And the second one, Rob, come on, you must know it. Yeah, with great power comes great responsibility. Spider-Man's, uh, well, "dad" in inverted commas.
Uncle Ben. Literally, the clue's in the name, Rob. Well, that's why I said "dad" in inverted commas, [00:14:00] Dave. He basically raised him. Why didn't you just say uncle? Because he was his dad figure. That's why it's so sad when he dies. And I'm sorry if that's just spoiled something for you, if you haven't watched the films, but they've been out for a while.
Now I keep doing air quotes. Yeah, and nobody can see it, obviously. It's an audio medium. Yeah, I've done it again, Dave. I've done it again. Jagvinder, did we get the right one? You got both of them right. So I've now suddenly got a craving for some white rice. But also, with good old Uncle Ben from the Spider-Man universe, that line, with great power comes great responsibility, is absolutely key to all businesses.
And I think it's key to our society as we look at AI. So, looking at businesses wishing to adopt AI, they've got to keep that in the forefront of what they're actually doing. When they're looking at their objectives, they've got to look at doing this in a responsible manner, which is why AI governance is going to be [00:15:00] absolutely key.
So I think going forward, that responsibility is going to need to go hand in hand with looking at deployment. Just on that point though, when you think about the nature of the world as it is: we like to think we live in a socially responsible country, and there are many who have the same type of views, but there's always those rogue nation states who you probably can't trust.
And I often wonder, because they probably don't have the same ethical position, whether that's where the bad things will rise from. Do you have a view on, you know, how the world is coping with that? And the sort of geopolitical pressure that will come, because people don't want to miss out.
But actually, we do have to be responsible with new technology. It can cause harm if not correctly controlled. So I think this is one of the areas, when people start talking about existential risks, where people are sort of thinking, well, what's the point of having good governance, exactly like you mentioned, Rob, in one [00:16:00] area of the world, and not having that in other areas.
And the Nobel Prize winner, Professor Hinton, who has been described as the godfather of AI because he helped develop artificial neural networks: one of the things he mentioned was that it's hard to see how you can prevent the bad actors from using AI for bad things. And he does feel, even with his life's work, that this could pose a threat to humanity, due to the sort of unexpected behaviors learned by machines from the massive amounts of data which they analyze.
And I think this is why countries are now sort of coming together, whether it's through coordination at summits or conventions, to look at the ethical and responsible way of deploying AI. I saw a very discrete example of that, like, literally just today, as I was flicking around. And for some reason, I don't know whether it's because of [00:17:00] what's going on with the Starship at the moment, but I'm getting a lot of Elon Musk generally on my feeds.
It could be to do with something going on in the... Your phone's listening to you, Dave, that's why. Yeah, it could be that. Um, anyway, it was an example of a video on YouTube where, when they were live streaming the Starship takeoff and landing, there were a couple of other, what looked like live stream options.
When you clicked on that, it looked like, you know, a crowd around the bottom of the Starship, with Musk on stage talking to the crowd. And there was a QR code on screen. And Musk was saying, basically, if you want to get special access to different views, blah, blah, blah, just click on the QR code. And it was a crypto scam.
And the whole thing was AI generated. And it was completely believable. I mean, if you looked at it very closely, perhaps you might have spotted it, but if you're just browsing, especially browsing thumbnails... There's one example of, like, incredibly sophisticated, in terms of its [00:18:00] presentation at least,
scamming that didn't exist even a year ago. And it's those kinds of things where it's going to require more than just sort of AI to counter those. It's going to require education amongst everyone to sort of see these types of risks, just as organizations at the moment are sort of helping staff with phishing risks.
These phishing risks, these other scams, are going to increase. In fact, in one of the big cyber breaches, which involved Interserve, the regulator said that the biggest risk area was lack of appropriate training, amongst certain other aspects. So your point is spot on, Dave, that looking at AI deployment, looking at the use of that, these risk areas will require sort of an educational aspect for the workforce, and the public as a whole as well.
And I think there's another side to that, which is responsibility built into the technology. Which is, you [00:19:00] know, the phone camera scans the QR code, and it's connected to a knowledge base that says, actually, are you sure? And the technology should prompt you and say, are you really sure? And we kind of see that now in sort of browser technology and whatnot, but I think it's got to get dramatically more sophisticated, to the point of detecting AI and saying, are you aware this is AI?
You know, blah, blah, blah. Otherwise, you know, some people still struggle with the text scam that says, this is your bank ringing, you've been a victim of fraud. How do they cope in a world where this really sophisticated AI is replicating your son's voice or, you know, whatever, on the phone? It's a minefield for them.
But to look at that type of example, Rob, where you mentioned this sort of minefield of where people can use it for various sort of dubious things, there's been something quite interesting recently, where two Harvard students decided to link a pair of Meta glasses to capture [00:20:00] images. And then they've sort of taken those images in real time, used facial recognition software to find out who is being looked at, looked at social media feeds, and relayed that information back to the wearer of these Meta glasses. And they actually went out in public, going up to people. It's like me going up to Dave, Dave and myself never having met, and saying, Dave, how are you doing?
And Dave's thinking, hold on, should I know that person? He knows my name. And then I say, Dave, do you remember, we saw each other at Tower Bridge, because I saw something on Dave's social profile saying this. And apparently this was quite convincing when these Harvard students did it. They managed to trick individuals.
I saw that video. It was quite impressive. And I think it's also, do you look at it from: wow, this is cool, can you imagine what this can, you know, bring to all kinds of things in society that are actually helpful? Or, if you look at it from the risk point of view, you're like, ooh, are we there yet?
And [00:21:00] the point there is, society is generally inherently good. So if everybody was good, it would be fine. But, dot, dot, dot, we have one in a thousand people who are actively trying, I think the stat is one in a thousand, actively trying to commit crimes, et cetera. And they're the ones who are going to wreck it for everyone, aren't they?
It's that thing about, you can't trust everyone, and therefore you have to limit the capability, or try and control the capability, because, you know what, there are bad people in the world. But the question arises, how are you going to limit the capability? Because with that Harvard example, the Meta glasses weren't designed for this, so the Harvard students said, we're not going to release the code for, you know, how you can do this.
But the thing is, the fact that they've told everyone you can do it, it's not going to take long for somebody to work it out, is it? Yeah. So that is going to be the problem with AI: if people are going to use it for negative uses, [00:22:00] it's going to be very difficult to constrain that. You can have your laws and your regulations, but the criminal element are going to disregard those anyway.
Let's maybe use that as a jumping-in point to the perspective from the point of view of law. What do the safeguards look like at the moment? We've talked a little bit on the show about how we feel that regulation is running behind the rate of innovation, and that's particularly an issue at the moment because AI in particular is a commercial arms race for kind of superiority, versus, say, you know, governmentally controlled innovations.
And that is rushing forward for commercial gain, ahead of regulation. What's your perspective on that, Jagvinder, and how do you see the law catching up? The law's always going to be in catch-up mode. That's just a matter of fact when it comes to technology. Technology is always a constant innovation journey, and the law will always play catch-up.
What we [00:23:00] do have at the moment is the new shiny EU AI Act, so that is trying to provide some governance around this. In order to sort of illustrate how it can be helpful, it's probably worth just having a look at some of the examples of where AI has gone wrong. So if we cast our minds back to 2014, so a decade ago, with Amazon: they were trying to streamline recruitment, and they created some software which was going to help them with that.
It was trained on the previous 10 years' worth of data at the time. And as a result, what it actually did, and Amazon stressed that they didn't put it into live production use because they spotted this, but when CVs came through which had women's chess clubs or women's colleges in there, they got auto-rejected, because the data it was trained on, the previous 10 years of a male-dominated industry, was all blokes.
And what the software did pick up on was: if you put the [00:24:00] word executed or captured in there, because apparently guys use that in their CVs, I never have, but apparently it was the thing to do back in that decade, they got selected. Rob tends to overuse the word execution. I don't know why.
He claims context is everything, but I'm not so sure. Context is king, David, you know it well. And I'm expecting Rob to just use execute and capture throughout this podcast. Oh, just wait. It's only a matter of time. Yeah, I'm going to be waiting. We'll have bingo on that. But that is an example of where, if that had gone into live use, it would have been biased.
And if you think about the sort of negativity associated with that: the EU AI Act is looking at aspects such as that. It's looking at aspects such as transparency as well, to let people know that you are dealing with AI. Again, back to Rob's point, if you're dealing with something, to let them know it is AI generated.
And [00:25:00] therefore, to answer your question, Dave, you know, on regulation, what's happening: the EU AI Act is looking for a more human-centric approach, trustworthy AI, trying to build in this high level of protection on health, safety and fundamental rights. So again, going back to Rob's point that generally people are good:
if you can get enough organizations out there doing the right thing, we should have good AI. And for those who aren't doing the good stuff, the EU AI Act has got some quite sharp teeth as well, with some quite large regulatory fines that it can impose. How is it categorizing this then? I assume it's taking some form of risk-based approach.
Just dig into that a little bit for us: how does that kind of practically get applied? Yes, it is. It is a risk-based approach. It goes from the highest risk, which is prohibited uses, which are unacceptable risks. So these are the big no-nos, you just can't do them. Then you have, as the name [00:26:00] suggests, high risk, which are high-risk items.
Have you got an example of, like, what an absolute no-no is? So I'll give you an example of an absolute no-no. And I mentioned to you, Dave, that I am a sci-fi fan. So I'm going to give you a Trekkie example for this, because I'm sure you must have some Trekkie fans out there. I have no doubt.
Absolutely. But I'll break it into non-Trekkie speak as well, for those who don't understand it. So let me give you this example, and I'll show you why it relates to the EU AI Act. Those of you who watch Star Trek may remember an episode with the Ferengi. The Ferengi are an alien species, and the entire race is based on the sole pursuit of profit.
So some of the listeners may be thinking, well, that's just described a private practice lawyer. And I'd agree with you. And what this alien species tried to do was use mind control on Captain Picard. If you don't know who Captain Picard is, he's the guy who does the Yorkshire Tea adverts, Sir Patrick Stewart.
And they were trying to [00:27:00] control his mind to blow up his starship, the Enterprise. So that's cognitive manipulation where it's going to have a detrimental impact on someone. And the EU AI Act actually refers to that, and that's a definite no-no. So if you're going to be using any cognitive manipulation to do something detrimental, that's a prohibited act.
I mean, if you think about it though, there's actually regulation preventing mind control. I mean, we've actually got to the point of technology where we're legislating about mind control. It's proper dystopian 60s sci-fi come to life. Yeah. Well, let me give you another example, Rob. Let me give you a Tom Cruise example as well,
so that'll be sticking in your mind for prohibited uses. So you might remember that old movie, Minority Report. Good movie, it's on Amazon Prime if you haven't seen it. So Tom Cruise in that almost had as many screens as I do as a tech lawyer and software engineer, and he was doing predictive policing.
So, with the [00:28:00] predictive policing, he was trying to work out who was going to commit a crime before they committed it, and then arrest them. And if you watch that movie, you can see why that wasn't a good idea. The EU AI Act says that those kinds of things are an absolute no-no as well. So obviously the legislators have sat down with the Paramount channel and Amazon Prime and thought, right, these are good examples to rule out.
But there are other, you know, sort of more real-life examples which are no-nos: things like inferring emotions in the workplace or educational environments. So those kinds of things using biometrics, very intrusive types of things, those are prohibited as well. We had a guest on the show recently who was talking about the use of AI specifically at borders, and the use of, you know, things like facial recognition and potential, you know, sentencing.
So it sounds like potentially some of the AI Act, in the way that you just said, with inferred emotion, may well have an impact in places like that. Yeah, there are [00:29:00] some exceptions for law enforcement. Right. So there are going to be, for example, if you had a pilot that was falling asleep and you're going to use biometrics on that, that would fall into the lower camp, which is high risk.
So there are going to be some exceptions. The Act's come out; the guidance hasn't come out yet. So it's going to be a little bit like the GDPR, where organizations are going to have to have a watch-and-see type approach. But I think the general view that you can take is: if you're going to do something super intrusive or super creepy, that's probably going to be a prohibited use, and you shouldn't be going down that route.
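(For readers following along at home, here's a minimal sketch, in Python, of the tiered structure Jagvinder is describing. The tier names reflect the Act's broad risk categories; the example use cases and the classify helper are purely illustrative assumptions based on this conversation, not text from the Act and not legal advice.)

```python
from enum import Enum

class RiskTier(Enum):
    """Broad EU AI Act risk tiers, as discussed in the conversation."""
    PROHIBITED = "unacceptable risk - the 'big no-nos'"
    HIGH = "high risk - allowed, but with strict obligations"
    LIMITED = "limited risk - mainly transparency duties"
    MINIMAL = "minimal risk - largely unregulated"

# Illustrative mapping only, drawn from the examples given above:
EXAMPLE_USES = {
    "cognitive manipulation causing detriment": RiskTier.PROHIBITED,
    "predictive policing of individuals": RiskTier.PROHIBITED,
    "emotion inference in workplaces or education": RiskTier.PROHIBITED,
    "biometric monitoring of pilot fatigue (safety exception)": RiskTier.HIGH,
    "chatbot that must disclose it is AI": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; default to HIGH to force a proper review."""
    return EXAMPLE_USES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for use, tier in EXAMPLE_USES.items():
        print(f"{tier.name:10} | {use}")
```

(Defaulting unknown use cases to the high-risk tier is just the cautious choice for the sketch: it forces a review rather than assuming something is fine, which mirrors the "super intrusive or super creepy, probably prohibited" rule of thumb above.)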
There's a really good example of this, where facial recognition was deployed to spot people in the street. If you ask the community, do you want law enforcement to deploy facial recognition, there's a resounding no, we don't really want it. If you change the use case to, do you want to deploy facial recognition to find, say, missing children when they're lost, then suddenly everybody agrees with the use case.
And it goes back [00:30:00] to the fact that the technology is the same; however, the applied use case is different. One is for inherent good, and the other one is just knowing where people are, and people don't like that intrusion. The problem, though, is once the technology is created and deployed, people will turn its use case, because it's easy to do that as well.
So there is that danger that you get creep on the technology, and I think we'll have to watch it quite closely. And that's a good point, Rob, because with any law, and this happened with the GDPR and will happen with the AI Act, there will be this grey area for organizations to work out: which side of the line are we actually on? Which is where it's going to come back to good governance.
So just like the GDPR, where you've got to do data protection impact assessments, the AI Act will require you to do something similar, a bit more detailed, in terms of fundamental rights impact assessments, where you will have to sort of look at these risks. How are you going to deploy it? What are the benefits?
You know, what are the [00:31:00] shortcomings? Where can it have a detrimental effect? And then weigh all that up as well. So when you bring it back to the rate of innovation, a subject we were talking about earlier, and whether regulation is currently trailing the rate of innovation: does something like the AI Act, which is now trying to get its arms around this, just to take the counter-position from the one we were discussing, get in the way of innovation? Or do you think there's enough room in the Act, as long as you're acting responsibly, to innovate at the same rate?
I think it's going the way of sensible innovation and responsible innovation. The legislators aren't looking to stifle innovation, but they are looking to make sure that, as I said before, the genie doesn't come out, because you can't put it back in. If you look at GDPR, and everyone's eyes will roll when you say GDPR, because the way that it's been implemented has been incorrect [00:32:00] for a number of organizations.
If I get any more pop-ups that I have to close when I go onto a website as a result of legislation, Dave ain't gonna be happy. That's a good example of where it kind of missed, because you should build that technology into the browser and then use a global choice. It's good intent that lands very badly from a user experience point of view.
But yeah, they're trying to protect us, Dave. You're right, the legislation is trying to protect us. That's what I'll think about while I'm spending the 30 seconds closing all the boxes down, Rob, all for my own protection. Or you do what everybody else does and just accept it. Yeah, totally. It's the fastest way through the process.
I don't want that second pop-up where it gives you, like, 40 different boxes that you have to go through. But it is great, because you have to be allowed the choice. So it's a good example of where they've frustrated the process, and you want to get through it, so you just click accept all, because to reject, they say, well, you have to tell us which of these 40 you want to reject.[00:33:00]
Anyway, Jagvinder, do you think this is what it's going to look like? We've got our little AI bots that are going to ask us questions about responsibility before we can get on with... Well, the problem is how it's actually going to be implemented, and the consultants that you're going to have to help on this, and the internal staff you're going to have. Because, you know, taking back the analogy with GDPR, the whole purpose of that was to safeguard individuals.
Yeah. And if you think about it at its heart, if you do truly embrace it and you give it a nice big warm hug, it can actually help an organization. So if you're saying you're not going to store data for longer than you need it, and you're going to make sure it's up to date, it actually protects people. But if you adopt it with the sort of mindset that it's a tick-box exercise,
then the problem that you get is it just gets in the way of the day job. You're building a very weak foundation. It's not going to do any good. With the AI Act, it's going to be something very similar. It's a very technical piece of legislation. You're going to need staff, both internally [00:34:00] and externally, that are going to be able to help you with that:
the key stakeholders from IT, from your compliance teams, from your HR teams, et cetera, who can get properly involved and look at, what is the objective of those laws? It's to safeguard, it's for building up trust, it's to guard against risks. If you implement it in that way, it will be very useful. But if you're kind of looking at it as a tick-box exercise, it's going to be, frankly, a waste of time.
And what do you think that implementation is going to look like in this case? Can you just kind of give us an example of the sort of conversations or implementations that organizations are going to have to put in place? I think one of the main things organizations need to do, I mean, I've been advising on tech for well over 25 years, and the best advice I can give, when you're looking at AI deployment and you're looking at appropriate governance and doing this responsibly, is make [00:35:00] sure you have enough of a runway to your AI deployment.
Often in tech projects, what happens is there's an artificial timeline which has been sold to the board, and then everyone sort of crams to get it done. Corners are cut, so it doesn't matter what governance you've got: if you're cutting corners left, right and center, it just will not work.
So, having enough time to do it properly. In terms of some of the elements, this Act will require organizations that are involved in sort of high-risk areas to have risk management systems, and this will be a continuous, iterative process, right from inception, design, development and deployment, and looking at post-market monitoring, you know, what is there to guard against risks: looking at risk mitigation, not only with regard to foreseeable risks, but also reasonably foreseeable misuse as well.
Right. Looking at data governance as [00:36:00] well: where's the training data coming from? How's it being labeled? How's it being cleansed? A lot of the things from the AI Act actually do also map to GDPR. That's why the previous government in the UK was saying, we're not going to put in any AI-specific legislation, because we've got a lot of it already through laws
like the GDPR, because the GDPR covers data, it covers technical safeguards, it has record keeping, transparency, et cetera. I see. I see. And for organizations that are on the wrong side of this, what kind of punishments does the Act... does the Act enact? Can you say "the Act enact"? Can you say that? Well, clearly, because you just did.
I can't. I'm not sure. I think I've twisted the English language beyond its reasonable boundaries, but hopefully, Jagvinder, you get where I'm going. Fair. I was just going to ask you, Dave, to repeat it 10 times and see if you can carry on saying it. Peter Piper picked a peck of pickled peppers, [00:37:00] whatever it is.
But what does it do? Like, what are the punishments that are liable, I guess, both for individuals and for organizations? Well, the AI Act is looking at this from an organizational perspective. So there can be no, like, personal responsibility, like there can be for chief executives, for example, when they're signing off things that are related to finance or whatever it might be?
There is some personal culpability there. Will that be the same in this case, or is it purely on the body of the organization, like the legal entity of the organization? So the AI Act, from the fining regime, is looking at the organization itself. But that's not to say that, outside the AI Act, individuals couldn't be fined if they're, for example, not complying with their fiduciary duties as a director. But the fines are quite large under the AI Act. So there's a tiered system, and at the top end, if you get it really wrong
and you [00:38:00] do some of those no-nos, the prohibited uses, it is the greater of 35 million euros or 7 percent of annual global turnover. So we're talking very large figures; in fact, larger figures than the GDPR fines, which are the greater of 4 percent of global turnover or 20 million euros. And in the UK, thanks to Brexit, 17 million pounds or 4 percent of global turnover.
The thing you need to be mindful of, though, is that you can be fined under both regimes. So you could get something wrong, and since AI uses data, chances are you could end up getting it wrong under both regimes, and suddenly you're going to be whacked by fines under, you know, the AI Act and under GDPR.
But wait, there's more, Dave, there's more. And that's outside the AI Act: there are litigation risks. So if organizations are just getting this wrong, there are the claims which can arise from that. And even greater than all of this is the reputational damage for organizations. If they are [00:39:00] deploying AI in a way which is not trustworthy, which, you know, doesn't uphold their reputations, that can be damaging, not only internally for an organization with their staff, but also externally with trading partners and customers.
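(To make the "greater of" fine structure concrete, here's a minimal sketch in Python using the figures quoted in the conversation: 35 million euros or 7 percent of global turnover under the AI Act's top tier, and 20 million euros or 4 percent under the EU GDPR. The function names and the worked turnover figure are illustrative assumptions; actual fines are set case by case by regulators, and these are only the statutory maximums.)

```python
def max_fine(fixed_cap: float, pct_of_turnover: float, annual_turnover: float) -> float:
    """Both regimes cap fines at the *greater* of a fixed amount or a
    percentage of annual global turnover, as described above."""
    return max(fixed_cap, pct_of_turnover * annual_turnover)

def worst_case_exposure(annual_turnover_eur: float) -> dict[str, float]:
    """Illustrative statutory maximums per regime, using the figures quoted
    in the conversation. As noted above, a single incident involving
    personal data could breach both regimes at once.
    (UK GDPR has its own cap: the greater of GBP 17m or 4% of turnover.)"""
    return {
        "EU AI Act (prohibited-use tier)": max_fine(35_000_000, 0.07, annual_turnover_eur),
        "EU GDPR": max_fine(20_000_000, 0.04, annual_turnover_eur),
    }

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical EUR 2bn-turnover organization
    for regime, cap in worst_case_exposure(turnover).items():
        print(f"{regime}: up to EUR {cap:,.0f}")
    # At EUR 2bn turnover the percentages dominate:
    # AI Act cap = EUR 140m (7%), GDPR cap = EUR 80m (4%).
```

(The point the sketch makes is simply that, above roughly half a billion euros of turnover, the percentage prong overtakes the fixed cap under both regimes, which is why the numbers get so large for big organizations.)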
So maybe, just to bring our conversation to a bit of a close for today, give us a sense of when organizations should be in action on this, and if they're not, what should the immediate steps be? So, organizations need to act now, because every organization is looking at AI. Just as every organization is always looking at cyber,
they will always be looking at AI now, going forward, for innovation. They may also be looking at AI as part of their cyber risk mitigation. The Act is now in force; the various elements are a sort of staged process from next year and going into 2026. But what you want to avoid is what happened with the GDPR.
A lot of organizations sat on their hands until [00:40:00] close to the time, and then it was a complete nightmare for organizations to try to do their day job and get to grips with compliance. So organizations now need to be thinking about: are we going to use AI? How are we going to use it? So they need to have a look at that in terms of, what are our strategic objectives, and who is it actually going to affect in our organization?
Is there an HR element? There's definitely going to be an IT element. There's going to be a compliance element. There's going to be a legal element. Getting those key stakeholders involved, and then having a clear plan of, what are the benefits of what we're looking to do? What are the timelines within which we're going to do it?
Because the thing with AI is, you need to also look at the supply chain. And most AI will be using cloud computing, different providers, different countries. All of that due diligence needs to be done, and all of this takes a lot of time.[00:41:00]
So this morning I had to help my mother-in-law, and I think we all know these examples. She tried to send out important forms to the bank physically, but then they got stuck somewhere at the post office, and now she needed my help to get, like, photographs of that exact same document and then get it emailed.
And I've got this mother-in-law, and she's the best. You have to say that. No, no, no. That's required. No, no, no. We can even go on holiday, like, for three weeks, and we have the best fun ever. So I mean that. I'm very lucky. And she's a very powerful woman, and she's very independent. So even her asking me for help with this says a lot, because she also got stuck
with the chatbot of the bank itself; it didn't show any, like, phone number. So she was very frustrated. We've all been there. Ended up, yeah, we've all been there. So I was actually thinking about that on the longer term as well. So [00:42:00] we're talking about AI a lot, and I think Jagvinder also said it: narratives, and labeling AI, and knowing when it's AI and what, you know, you can expect of technology.
I think we talk about it all day long, and, you know, we're on top of it, but there are so many demographics that don't have that. They don't talk about technology all day long, but they're being confronted with it day in, day out. So how do we make sure, with all these regulatory documents, with these laws, that adoption, you know, the way we talk about AI and technology and how it's affecting everyone in everyday life, works with the unique needs and perspectives of all these different groups, the elderly especially? You know, with the fast pace of technology coming in, how do we make sure that they're part of it?
Do you remember the Microsoft paperclip that would pop up and it would say, it looks like you're trying to write a letter, would you like me to help you? That's what we need. That's what we need. It's [00:43:00] a little paperclip, an agentic sort of paperclip, that lives with you, sits on your shoulder and just tells you what to do.
It could be a Musk robot agent. It doesn't have to be little Clippy anymore, does it? Well, I was gonna say, Dave and Rob, you need to be careful about what you ask for, because there could be an existential threat if you go down the paperclip route. So, if I just tell you about this:
What ended humankind? It was a paperclip. I love it. You know, Rob wants paperclips, but just ask for one, don't have too many. So Nick Bostrom, he's a philosopher, and he came up with this thought experiment to say, if you had AI with maximizing paperclip production as its primary objective, it could just wipe out the human race, because it would just consume all of the resources that are out there.
So, with everything that we're going to be doing in AI, going back to Esmee's point, you've got to have this education piece. How are you going to do that? It's going to be [00:44:00] difficult, because even if you look at tech at the moment, forget AI, how are you educating the population at the moment?
Well, Rob, you're having some fanciful thoughts about paperclips, you know, that could result in the destruction of the world. So we've got to be careful there as well. I knew I'd do something great after all! But what would have to be the case, though, is that the personification of the AI that runs the paperclip machine has to be the paperclip Clippy thing.
You could only ever do it as Clippy. It has to be Clippy in charge of paperclip production. That's my first input on this. Highbrow insight from Chappers. That's what I'm here for, Rob. The second one, though, is back to your point, Esmee: in the UK at least, back when I was working in government during the noughties, you know, post the major dot-com bubble, UK government had in place, like, a digital inclusion scheme, which was exactly to your point.
It was to protect [00:45:00] sections of society, whether by age demographic or affordability or whatever it might be, to, you know, have options when they were kind of interacting. So there were requirements that, you know, UK organizations had to offer, you know, multi-channel access. It couldn't all push down a digital route.
And then secondly, things like broadband coverage and all of those sorts of things become incredibly important, because if you're living in an isolated community and you're in one of these demographics that are going to struggle with this, then you're effectively, unintentionally, off grid. Which can be, uh, which can be very damaging.
So it feels to me like it needs to be another one of those situations, potentially. And I don't know what you think, uh, do you think governments are active enough in it from an AI point of view at the moment? In the Netherlands, we also have the accessibility law. So I think it might be the same, I don't know at what level it is, uh, in terms of countries.
So from a UX [00:46:00] perspective, you have accessibility, and, you know, we check all the marks, at least for government sites; you need to be accessible. So I understand the accessibility side, but especially the, you know, how are we going to work together with technology, and all those narratives?
Cause that's the same thing we've been saying for quite some episodes already: we all have such a huge impact on the stories we tell about technology, but who's actually responsible for that main narrative? And that still puzzles me a bit. Like, how are we going to do this at this fast pace, especially for, for example, my mother-in-law? In terms of, you mean, like, how she understands the progression?
You know what I mean? And understands what a chatbot is, you know, whether there's somebody behind it. And now, with the agentic way we're heading, you know, how can they keep up and understand what is what, and who is who? Yeah, I would say it's not a new problem, but I [00:47:00] would certainly say that it's an accelerating problem.
You know what, it will be technology that solves the problem. And let's do a sort of far-out-there one, which is: there's lots of research going into merging carbon and silicon, so chips in the brain. We've seen eyesight be improved by chips in the eye and things like this. It's coming: just download an operating system upgrade. I think we've said this before, that you can just update your software to know how to deal with all the technology, and then it'll be a lot simpler. But, you know, you sort of think about it: you struggle with the human? Well, augment the human. There you go, we've just fixed it.
Yeah, exactly. Copyright Cloud Realities: augmentation. Exactly. Haven't you just described, Rob, what Elon Musk is already doing? Yeah, yeah, basically. He believes in that as the future. But it's like, technology fundamentally, under the covers, just makes everything go faster, right? So we're accelerating everything.
So you can't get out of the cycle. So Elon's view, he's just embracing it. And doubling down on it, isn't it? [00:48:00] You can't rage against that particular machine. On that cheery note, a great conversation today. Jack Vinder, thank you so much for joining us and sharing your insights on a Friday afternoon. It's been great to be here.
Thank you for having me. Our pleasure. Now, we end every episode of this podcast by asking our guest what they're excited about doing next. And that might be an excellent restaurant you've got booked at the weekend, or a new sci-fi movie that you want to see, or it could be something in your professional life.
So, Jagvinder, what are you excited about doing next? Well, preparing for today's podcast, I was thinking about all these movie references. I'm probably going to watch a lot of movies now, uh, Dave. And I'll give you a tip out there: look at Eagle Eye, watch Eagle Eye. It's an old movie, but it shows you the dangers of interconnectivity with AI.
But at a professional level, what I'm really looking forward to is that I'm going to be presenting at the flagship Lawyer conference on AI and gen AI. And I think it's useful to get this kind of messaging [00:49:00] that we're talking about on the podcast today, about some of the key benefits, some of the key risk areas, and how to have responsible deployment of AI going forward, for the benefit of all organizations and the world as a whole.
Thank you so much for that, Jagvinder. I think it was very inspiring to talk to you today. Listeners, if you have any questions, you can of course email us via cloudrealities@capgemini.com. We're also on LinkedIn, so drop us a DM if you have any, well, thoughts or questions, or maybe guests you would like to hear.
We're always open for feedback so that we can improve the show. A huge thanks also to Ben, Louis and Marcel behind the scenes; we couldn't do it without you. See you in another reality next [00:50:00] week.