Exploring the practical and exciting alternate realities that can be unleashed through cloud driven transformation and cloud native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
CR077: Emergence of Agentic Intelligence in ERP with Miranda Nash, Oracle
[00:00:00] Is this better? Is this better? Is this better? Is this better? No, no, no, no, no, no, no. They're all exactly the same. All on a hundred percent.
Welcome to Cloud Realities, an original podcast from Capgemini. This week, it's a conversation show about scaled AI, the evolution of large scale enterprise ERP, and the rise of agentic intelligence. I'm Dave Chapman. And I'm Rob Kernahan.
I'm delighted to say that with us this week is Miranda Nash, GVP, Applications and Strategy at Fusion AI at Oracle. Miranda, thank you for joining us today. How are you? It's great to be here. Thanks for having me. Now, whereabouts in the world do we find you today? I'm in San Francisco, looking out at the [00:01:00] skyline here through fog.
Beautiful. Beautiful. Yeah. Well, it is, it is wonderful to see you. Robert is with me. As you'll hear, unfortunately, Esmee is not with us; she is off at Dreamforce. So we'll hopefully hear about what she's, what she's found out from that at some point in the future. But Robert's here. Rob, how are you? I'm all right, Dave, actually.
It's been a reasonable week. It's not been an utter disaster, so I'm in a reasonably good mood. And it's Friday. Woohoo! You look chipper. I actually got a fair bit of energy today. But tell me, what are you confused about at the moment? Well, Dave, have we reached the zenith of non-functional requirements? So, back in the day, when computers were steam powered, we used to have to really focus on non-functional requirements to make sure that the system was able to be performant.
But now, we are at a place where we have a phrase that's being used, planetary scale applications. So we have these... Who uses that? That's just a phrase that's popped up. Yeah. Did you make that phrase up? No, no, I've heard it, planetary [00:02:00] scale applications. So what we have is platforms now, you know, the social media platforms are a great example of this, where billions of people can go on those platforms and they have excellent performance all the time, high availability, all this sort of stuff.
Just de facto, you make it, right? Big power of cloud, highly resilient, cloud native thinking, you know, you make applications that respond to changing conditions. So, basically, human population isn't increasing, it's plateauing. Have we reached the point where we've just solved the non-functional problem and actually we can all just hang up our coats and say, cloud has taken care of it, as long as we follow these rules, it's not something we have to worry about anymore?
I was a bit like, it's a bit sad. Back in the day, as an architect, I used to have to fix a load of non-functional issues, and you got a lot of satisfaction when the system was really slow and then you went and fixed it. So we're left with basically CPU power keeps increasing so we can do more, but actually we're already able to serve billions of people on these platforms.
Have we hit the zenith? And that's the bit I'm confused about, or is there more to come? The first thing I'm [00:03:00] confused about, and Miranda do feel free to give a view on this, is, is that the correct word, use of the word zenith? Oh, don't, don't pick me up on that, Dave. I'm thinking that, that Dave, it's basically you're sitting here behind a microphone as opposed to, like, in the trenches dealing with all the challenges that the architects still have.
Right, right. Now that might be a zenith. Hang on. Hang on. Dictionary definition. At the point in the sky or celestial sphere directly above and, oh no, that's not the right one. The time at which something is most powerful or successful. Which is correct, actually, because this is where non functionals have basically been, we've got it, we've done it.
Ticking the box. This is where they're just, you know. I see what you're saying now. That's a good clarification. I think you're right. Is that it? I'm not confused. That's the first time anybody's ever said I'm right in my life. On a serious note though, you raise a good point in the sense that how you think about solutioning and [00:04:00] architecting is fundamentally different.
You know, the days of non functional I remember too, like you're specifying underlying equipment, you're specifying underlying processing power and underlying configuration. You do some of that, but that's all in code, and that's highly automatable these days. So, you know, once you're on the cloud and you set up your platforms, you're really drawing down on pre configured pieces, you know, that's in a template or a blueprint of the type that you need to use.
And that could be a dinky little thing, or to use your phrase, it could be planetary scale. You know, what's your perspective, Miranda? Well, yeah, and it's, it's not just the cloud technologies, right? It's the AI technologies as well. Somebody in my marketing team built an AI agent in 15 minutes the other day, with all the large language models and the GPUs and all the non-functional requirements behind the scenes, and didn't have to worry about a thing, right?
And that really hit [00:05:00] me. 15 minutes, marketing person, and I'm thinking where, where, where are we going to find bigger scale from other than the platforms we see today that are all on our mobile phones and everything and power our life. I think that's my point. I think we've hit it. We've got to a point now where the big issues have been tackled.
And now, as long as we follow sensible rules and sensible approaches, which can be quite structured and easy to follow, like your 15-minute example, then it's, it's like, it's a bit sad that we no longer have to worry about it in the same way that we did, but it's also great that we've kind of solved the problem.
So, you know, Well, I think, I think that's good. Let's leave that one there. Um, and that was good. Interesting one. That one. Oh, thank you, Dave. I try, I try. I think we all learned about a refined use of the word zenith. So you thought you were going to catch me out, but you didn't. I did. Your clarification about what you meant was spot on.
Thank you very much, Robert. Right, let's get on to our main subject of the day. So why don't we start? So we're going to talk in this conversation, I think, [00:06:00] less about proof of concept style AI. There's a lot of proof of concepting going on in the world of AI and Gen AI at the moment. There is a lot of tool centric proof of concepting going on.
So yes, it's use case based, but actually what they're saying is, well, does tool X with LLM Y and dataset Z, is that actually going to work? And can we do things? And it's all very contained and it's quite technical. But I think, uh, you guys, Miranda, if I'm right, are thinking much more about mission-centric AI at scale, um, and, and developing that in the platform,
versus kind of individual use cases. Have I got that right? Well, yeah, we're thinking holistically about how we bring the power of language, along with context from enterprise data, in to automate, um, this, you know, customers' workflows. So yes, we're thinking holistically, but, you know, we do also have to think about the individual use [00:07:00] cases, because so much of what makes this work is in the details of the prompts.
And so we're working, we're making a lot of investment in prompt engineering. Right, right. And, and, and what does that look like? It's interesting because we're doing an experiment actually in my team, in my day job world, where we're bringing in some prompt engineers into the team as an exploration. So we have got an idea of about 15 to 20 use cases that we think are going to have some aspect of value attached to them.
Some are very obvious in terms of where the value comes from. Some are a little bit more tenuous, but it's worth some iterations to have a look at them. And we think, by virtue of bringing in a team of prompt engineers, that we will just discover a whole amount about what that application of AI at scale feels like, versus, say, just doubling down on individual technical components.
And interestingly, [00:08:00] I was recently at ISG AI Impact, uh, sort of a couple-of-days conference, and they had, like, a really fascinating number, which, if anything, added even more to the enthusiasm that I had for the pilot that we're running, which was that BCG had done an experiment where they'd taken two groups of consultants.
One AI enabled and one not AI enabled, and the AI-enabled one was 40 percent more productive, which I thought was fascinating. So tell me more about your thinking in this world and how prompt engineering fits in. Well, it's kind of interesting to me that you have a role called prompt engineer. I mean, we actually look at it more as the domain experts, the product managers, the folks who really know the domain, they can, they can do prompt engineering, guided by kind of the expertise from our data science teams.
So, uh, that's how we think of [00:09:00] it. And, um, yeah, basically we're taking use cases, doing the initial, uh, experimentation in the factory, as we say, and then working with early customers. And that's when we make the interesting discoveries. Um, share a couple of examples, please. So, like, um, one of our use cases is to help
managers. We all write performance reviews, right? In corporate life. And, and writing, when you're staring there at the blank page, right? That's a, it basically inhibits folks from doing this very important thing they're supposed to do every year. Yeah, yeah. It's like almost getting, um, you know, writer's block, sitting there staring in anguish at the blank page.
Exactly. So, we basically start with summarizing existing feedback from peers in the system. You know, the LLMs do that really well, summarizing, getting the gist, and that gives [00:10:00] you a starting point. Um, but one of the things we discovered in early iterations is that, you know, the models tend to be too nice.
Right. So they'll be, like, apologetic, like, oh, I'm so sorry to say this, but you didn't do it, you know, I would have liked to see this from you last, you know, and they're, they're too nice. Um, so we had to do a lot of prompt manipulation early on to get that performance review summary to be straightforward.
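(A rough illustration of that kind of prompt steering, for listeners who like to see it concretely. This is a minimal, hypothetical Python sketch; the instruction wording, the build_review_summary_prompt helper, and the call_llm placeholder are assumptions for illustration, not Oracle's actual prompts or code.)

```python
# Hypothetical sketch: nudging a review-summary prompt away from the
# "too nice", apologetic default by stating tone rules explicitly.
def build_review_summary_prompt(peer_feedback: list[str]) -> str:
    instructions = (
        "Summarize the peer feedback below for a manager drafting a performance review.\n"
        "- Be direct and specific; state strengths and gaps plainly.\n"
        "- Do not apologize, soften, or add filler praise.\n"
        "- Ground every point in the feedback provided; do not invent examples."
    )
    feedback_block = "\n".join(f"- {item}" for item in peer_feedback)
    return f"{instructions}\n\nPeer feedback:\n{feedback_block}\n\nSummary:"

# summary = call_llm(build_review_summary_prompt(feedback_items))
# call_llm stands in for whichever model client is actually in use.
```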
And that's a classic. There's, there's training you can take about if you're too nice to somebody in a performance review when you're landing a very important, impactful message. If you try and sugarcoat it on either side, they forget the bit in the middle they were supposed to remember, and then they walk out thinking they've done a nice job and they don't change their behavior.
So I absolutely agree with you: we're all geared up to be nice to everyone, but sometimes you've just got to deliver a very direct and stark message to an individual. So Rob now, having heard that, has stripped any level of [00:11:00] emotion out of giving hard messages and just says it straight to the face.
And, and the way that he can tell is that if the person opposite starts crying, he's like, message landed. It gets better: if I just print a t-shirt up and say, wear that and look in the mirror a few times a day, and let's get the message over.
But that's an excellent example, that one about the corpus of information that these things are trained on. And again, for very specific things we need to be very aware that the models need to have adjustment. Yeah. And it's that thing about we can't just trust one big model to rule them all. The idea of domain-specific
models is starting to rise a lot in the thinking around how to actually use this technology much more effectively. That is true. Although, you know, we're getting a ton of value just from using context with a very strong foundational model. So for example, um, the model may not know... one of our use cases is to process FDA [00:12:00] notices about, like, discontinued products. For example, in the healthcare world, that's really important.
Yeah. And it, they, they go through this big process and it's costly. So the large language models can understand the language of these notices quite well. But there's always like part numbers and kind of esoteric terms that they weren't originally trained on. And so you might think, Oh, well, you need to have a domain specific model.
And that may be the case in some domains, but here we can actually use context and existing examples, and we do what's called, you know, multi-shot prompting, and that actually gives us really good results. It's quite interesting, the variation on this, because a lot of people have talked about context, but less about the practical application of it.
That's an excellent example of how a little bit of extra on the side can create a massive, dramatic difference, because the cost of training a domain-specific model can be high and difficult, so that adaptation, on something that [00:13:00] is little effort, to get to the right answers is a very pragmatic way to think about it, I suppose.
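(For the curious, here's a minimal sketch of what multi-shot prompting can look like for a notice-extraction task like the FDA example. The notices, part numbers, and JSON fields below are invented for illustration; the point is simply that worked examples in the prompt teach the model terms it never saw in training.)

```python
# Hypothetical multi-shot (few-shot) prompt: in-context examples show the
# model how to handle part numbers and esoteric terms it wasn't trained on,
# without building a domain-specific model.
EXAMPLES = [
    {
        "notice": "Product ZX-4410 (infusion set) will be discontinued effective "
                  "2024-09-01. Replacement: ZX-4412.",
        "extracted": '{"part_number": "ZX-4410", "action": "discontinued", '
                     '"effective": "2024-09-01", "replacement": "ZX-4412"}',
    },
    {
        "notice": "Reagent kit RK-208B recalled; affected lots 5520-5524.",
        "extracted": '{"part_number": "RK-208B", "action": "recalled", '
                     '"effective": null, "replacement": null}',
    },
]

def build_multishot_prompt(new_notice: str) -> str:
    shots = "\n\n".join(
        f"Notice: {ex['notice']}\nExtracted: {ex['extracted']}" for ex in EXAMPLES
    )
    return (
        "Extract structured fields from supplier notices as JSON.\n\n"
        f"{shots}\n\nNotice: {new_notice}\nExtracted:"
    )
```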
Yeah, exactly. I mean, we, we use lots of different styles of RAG, retrieval augmented generation, you know, to enhance what we're getting out of the models, uh, from what they were trained on natively. Other examples are, you know, we maintain this large search index where we can, we basically know all the proper nouns that are related to a particular customer system.
So like all of their vendors, all of their customers, uh, you know. And so then, if they ask a question that involves one of those proper nouns, we know where the metadata comes from, and then we can basically prompt the model with that, again, that augmentation. So we're just using lots of different strategies for enhancing context.
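(A toy sketch of that proper-noun augmentation idea, with made-up vendor and customer entries; a real implementation would sit on a proper search index rather than a dictionary scan.)

```python
# Hypothetical sketch: spot known proper nouns in the user's question and
# feed their metadata into the prompt as grounding context (one flavour of RAG).
PROPER_NOUN_INDEX = {
    "Acme Logistics": {"type": "vendor", "id": "V-1042", "status": "active"},
    "Globex Corp": {"type": "customer", "id": "C-0881", "status": "on hold"},
}

def augment_question_with_context(question: str) -> str:
    matches = {
        name: meta
        for name, meta in PROPER_NOUN_INDEX.items()
        if name.lower() in question.lower()
    }
    context = "\n".join(f"{name}: {meta}" for name, meta in matches.items())
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context or 'No known entities matched.'}\n\n"
        f"Question: {question}\nAnswer:"
    )

# augment_question_with_context("Why are invoices to Globex Corp on hold?")
```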
How are you guys setting up safe scaling and scaling guidelines? What are the guardrails [00:14:00] here? Or is it a bit wild westy to start with and then you get them back together again? What's, what's the preferred go forward? Yeah, I mean, well, it starts with a lot of sort of offline in the factory as we say, uh, testing and negative testing.
Obviously these things are non-deterministic, so you have to set up tests accordingly. Then, once we have a certain level of confidence, we put the same type of guardrails online, so that we are testing for toxicity, we're testing for bias online, and, in a RAG context, we're testing for groundedness
from the, from the source of truth. Right, right. That, that must be interesting, because you're right: testing was always a domain where it was deterministic. You, you had boundary conditions, you knew you could put in the same thing and expect the same thing out. With this, it, it varies so much. What's your view on the testing community's change and mindset shift to think differently about how to test these more nuanced concepts [00:15:00] and the thresholds and things like this?
I mean, has that been, uh, in your view, a difficult change for the traditional tester? Or do you think it was relatively straightforward? It feels quite different from an ethos perspective when you talk about it. Well, we've, we've kind of isolated that, uh, non-determinism, in a way, from at least how we do things in Oracle, uh, so that it's essentially, uh, the folks who understand the data science and who really get the non-determinism and the statistics behind the scenes who are setting the thresholds, you know, that should come out of the black box. But the black box ends up looking fairly similar to folks who are used to putting things into, you know, regression suites and that type of thing, right?
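(To make the "looks like a regression suite" point concrete, here's a hypothetical sketch. The scorers are crude stand-ins for real evaluators, and the threshold values are invented; the shape of the check is the point.)

```python
# Hypothetical guardrail check: data scientists choose the scorers and
# thresholds; the test itself reads like any other pass/fail assertion,
# even though the underlying model output is non-deterministic.
THRESHOLDS = {"groundedness": 0.85, "toxicity": 0.05}

def score_groundedness(response: str, source_docs: list[str]) -> float:
    # Crude stand-in: fraction of response sentences found in a source doc.
    sentences = [s.strip() for s in response.split(".") if s.strip()]
    grounded = sum(any(s in doc for doc in source_docs) for s in sentences)
    return grounded / max(len(sentences), 1)

def score_toxicity(response: str) -> float:
    # Crude stand-in for a real toxicity classifier.
    flagged = {"idiot", "useless", "stupid"}
    words = response.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def passes_guardrails(response: str, source_docs: list[str]) -> bool:
    return (
        score_groundedness(response, source_docs) >= THRESHOLDS["groundedness"]
        and score_toxicity(response) <= THRESHOLDS["toxicity"]
    )
```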
So you can kind of, um, isolate that, that complexity. Okay. Cool. Right. Yeah. So let's, let's take a step back from the specifics and let's just talk about Oracle's strategy going into this [00:16:00] world. I assume, much the same as a lot of your competitor organizations in the world, AI is pretty central to that strategy at the moment.
Could you just give us some context for that? And maybe talk a little bit about how it's being implemented into the platform. Mm hmm. AI is central to every component of, you know, our offering at Oracle. Whether that's the cloud, whether that's the database, whether that's, you know, our applications, the Fusion applications, which is where I'm coming from, or industry applications.
So, I mean, it is very clear that the company has kind of done a massive 180 around, not a 180, but just a reorientation around, AI. It's almost incredible, actually. And of course, like I said, it's not just Oracle. It's most of the world's organizations recognizing the disruptive power of this technology and needing to be up to speed with that rate of innovation.
Absolutely, and, [00:17:00] you know, some of it comes back to investments we made a long time ago, uh, that weren't necessarily anticipating AI, certainly not at the breakthrough level of generative AI, but that really have a big impact. Well, I assume you guys, like, machine learning? We've had machine learning embedded for a long time.
Yeah, that's true. And the kind of consistent, shared data model, that has been a huge benefit for us. For it to be a platform for any successful scaled implementation, of course. Exactly. And some of the networking that we invested in early on in Oracle Cloud Infrastructure as well turns out to be very beneficial to the LLM vendors wanting to train models in our infrastructure.
Right. So in the platform, then, for those of our listeners that are not 100 percent familiar with the Oracle suite, maybe you could just try and set that out for us at a high level and then explain sort of how AI is being kind of infiltrated into that and what kind of [00:18:00] improvements you might expect from it.
Sure thing. Okay, so it starts with the infrastructure. High performance, high security, wherever you need it. So that's the, you know, the kind of the bottom layer. We all build on that. Then we have the tools that data scientists and other specialists use, uh, to build models, to develop their agents, the RAG pipelines, et cetera.
We, of course, have the vector database inside the Oracle database for assisting with RAG and other, you know, other embedding strategies. And then at the top layer, we have embedded AI within our applications. And the big thing there is that we really want to reduce the cost of adoption for all of our customers.
And that's a main principle, and why we're basically making embedded AI completely free of charge along with the SaaS subscription. So basically, if you're on board with Fusion Cloud, with Fusion ERP, [00:19:00] Fusion HCM, whatever modules you're on board with in the cloud, every quarter you're getting embedded AI features.
That includes the GPU infrastructure you need under the hood. It includes the LLM, it includes everything. And again, the point there is to just help customers get on board with this new mindset of continually adopting, uh, innovation. And I was going to ask about customer expectation, actually. What are you hearing from your customers in terms of where they're up to in their thinking, and how are they looking to organizations like yours to support that?
Do they see it as something that, you know, I'm an innovative company and therefore I'm going to go and have, you know, a load of off-platform proofs of concept of AI separately, or are they coming to you going, like, you know, guys, can you just sort this AI thing out for us so we can get cracking? Yeah, I mean, look, you know, we just had our CloudWorld event and talked to lots and lots of [00:20:00] customers.
There's, there's a lot of eagerness to adopt AI and to start on the journey. And I guess a couple of things stand out. So one is the level in the organization. I mean, personally, when working on sort of back-office systems, I've never really had that many conversations with CEOs, but now that's a common thing.
The CEO, the board want to engage on AI, and part of that is how they automate in the back office. It's funny, isn't it? Because you take AI into the boardroom, it's very exciting, suddenly the conversations get elevated, everybody's looking at the CIO going, come on then, make it happen. And on the same side of it, you still have this specter of, say, security, which isn't, yeah,
as interesting to talk about, but often should also be a boardroom conversation, because the risk of a security failure can be very dramatic as well. It's interesting, it's almost like the magpie effect, where they go, something shiny, best talk about that. But we know we should talk about security, but we don't really want to.
And it's that [00:21:00] striking of the balance, to say there are some boring sides of technology you have to make sure you get right, and there are very exciting things that are also obviously fun to talk about. I find it fascinating, the way the two topics should both be at the same level of conversation but are treated very differently in the psyche of organizations.
Yeah, it's interesting. I mean, I guess the customers I talk to sort of expect that level of security from Oracle. It's sort of just one reason they engage with us. Yeah, so it's like the level of the company that's engaged clearly is different, but also there's one dynamic I noticed, which is, you know, the way customers think about software and justify it is always by the ROI.
It makes sense, the anticipated return on investment. And that's great. They think about, on a per-use-case basis, what's the business benefit. Um, they're also, I think, starting to think about just getting into the cycle of continual [00:22:00] adoption, not sort of justifying per individual example, because there's really not much cost.
The investment to uptake is so low, and the effort to do change management is really low, that it sort of changes the dynamic of how they think about it. This is a point that Mr. Chapman makes a lot: it's AI that will cement cloud as the future. And we know we're all moving to it, but this is the domain that will force people in, because of all the points you just said, which is the cost of adoption, the cost of test, the test-fail-learn
loop, etc. The ability to change quickly and try different things, connect all your data sets together. All the conditions you need to make AI successful are basically cloud native thinking. And so for me, and Dave, you share the point, which is this is the, this is the thing that will eventually get everyone onto the cloud properly.
Totally agree. And I heard that from talking to many customers who were still on [00:23:00] prem with their applications at CloudWorld. Okay, it's time. Yeah, it's time. Isn't it? It's like trying to do AI on prem. You're like, Oh, my word. Good luck with that. But it's crystallizing. It's like a crystallizing thing.
you know, almost like killer app style, which the cloud hasn't really had. So you might have had 10 other motivations to go to the cloud as part of the sort of early adoption phase of, you know, the first 10 years of cloud. Lots of good reasons to go, but not one killer that every organization knows it's going to need to do, because if you're going to be competitive in future, unless you're going to be an artisan beer company or something like that, you're going to need it.
You're going to need it. So, you know, like, this is the thing. And that to me feels like a very sort of, you know, it's going to lead the charge. Yeah. Unless you've got some amazing patent that will keep your business going for the next 40 years. It is a proper one, and [00:24:00] we see the cycles of business survivability reducing as well.
So we get this pressure. You have to do it now. Sorry, guys. That's it. Get on board. Get on cloud. That's right. So Miranda, you mentioned then in your answer, uh, CloudWorld. It's just happened. It's in Vegas, I think. That's right. Now, before we get onto CloudWorld, how was your Vegas? Well, I have to say, I really, I loved the Journey concert.
Okay, very good. Excellent. Yeah. Where did you see them? Well, they, it was part of the Oracle event. It was right there in the same hotel and it was amazing. It's an incredible, it's an incredible couple of days, isn't it? Exhausting to say the least, but an incredible couple of days. So, CloudWorld. Set the scene.
Where were you on the Strip? What hotels were you using, and what kind of [00:25:00] scale is CloudWorld at the moment? So we were at the Venetian and, you know, the Sphere was right there behind the, uh... But, um, it, it was four days, and usually the way this works is folks are just kind of slowly arriving on Monday and, uh, you know, kind of settling in, enjoying a little extra time with their, their spouse, maybe over the weekend or whatever.
It wasn't like that at all. We hit the ground running at basically eight o'clock on Monday, and it was pretty much back-to-back with customer meetings. There was so much intensity. I haven't seen the numbers. Someone in marketing has those numbers, I'm sure, but just the level of intensity and energy was not like past years.
And what I love about the Venetian is people who have never been in that event center before constantly get lost, and you keep turning corners, going, [00:26:00] I was here five minutes ago. Yeah. It's definitely a, uh, an environment that's like, how is it possible, to walk in one building on the same floor? So what were the big things that went down from an announcements point of view, and what really resonated with you?
Well, we had some, Larry had some major multi-cloud announcements. So that was a big part of the Oracle picture. From the applications and AI standpoint, which is where I'm more, you know, engaged, it was really all about agents. So we announced the availability of 50 AI agents in the upcoming Fusion applications.
This is the rise of agentic intelligence we're talking about here, I think, isn't it? Agentic intelligence, agentic workflows. Yes. And, um, it's basically starting with RAG agents, and [00:27:00] those are available now, and with single-action-taking agents, and getting into autonomous agentic work, multi-agent agentic workflows that really, you know, can completely transform the world of business operations.
And what does that look like? Let's just, let's take that concept like multi agent agentic workflow. What actually is that? Just, just help us understand it. Maybe from the outside. And if I'm, if I'm experiencing that as a human, how would that feel? What would it look like? So. Let's take a recruiting example, okay?
So, you know, I want to recruit a marketing specialist in AI, right? Right. And, uh, that's a complex activity. It is going to involve a lot of automation steps. It's going to involve a lot of humans, i.e. candidates, interviewers, and basically orchestrating that whole process today... really, we never attempt to [00:28:00] automate it, right?
And the agents will allow that: basically, you'll have a supervisory agent that can understand the goal and then farm out work to specialist agents that understand different pieces of it. For example, scheduling candidates for interviews, where previously each of those agents might have been a human step in the process. So a previous iteration of ERP software might have set that process out, and it might have automated some of that workflow.
You would still have humans at the points where now, in the new model, AI agents are driving the work. And, and it would have, in the older world, had to be very prescriptive. Yes, exactly. This is the workflow. This is the branch that you follow. Yes, yes. Now, we can take advantage of the reasoning capabilities of the LLM to be able to, you know, if I didn't get the right [00:29:00] inputs, I'll just clarify and ask the human.
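(A toy sketch of that supervisor-and-specialists shape, using the recruiting example. The agent names and the plan are invented; real agentic frameworks add LLM-driven planning, tool calls, and the "pause and ask the human" behaviour Miranda describes.)

```python
# Hypothetical multi-agent workflow: a supervisor decomposes a goal and
# farms the pieces out to specialist agents, collecting their results.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    result: str | None = None

def screening_agent(task: Task) -> Task:
    task.result = f"Shortlisted 5 candidates for: {task.description}"
    return task

def scheduling_agent(task: Task) -> Task:
    task.result = f"Interview slots booked for: {task.description}"
    return task

SPECIALISTS = {"screen": screening_agent, "schedule": scheduling_agent}

def supervisor(goal: str) -> list[Task]:
    # A real supervisor would plan with an LLM and ask the human when
    # inputs are ambiguous; here the plan is hard-coded for illustration.
    plan = [("screen", goal), ("schedule", goal)]
    return [SPECIALISTS[step](Task(description=desc)) for step, desc in plan]

# supervisor("Hire a marketing specialist in AI")
```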
I am glad you came in today because I was trying to explain this concept to Robert on WhatsApp the other day. He wasn't having it. So Robert, I think, I think Dave, you presented a particular view and I said there's some complexity and that's, that's all I pointed out. He just wasn't caught up with us, Miranda.
Hopefully this has helped Rob. Every day's a school day. Now, this is Cloud Realities, right? So let's, let's be clear. It's the practical and exciting alternate realities that can be unleashed through cloud driven transformation, Miranda. Oh, wow. Okay, so let's be clear. Um, we are relying on the LLMs to get better.
And as we put in place, the way I think about it is in two dimensions: we're putting in place the use case automation and the guardrails that are necessary in the enterprise, you know, to deliver the results, and the LLMs just keep getting better at their reasoning capabilities. And [00:30:00] these two things are what are going to come together to deliver the real benefits.
It's pretty awesome, isn't it? In the sense of, if you think about, and this, all joking aside, this is the chat that Rob and I were having the other day. Think about like traditional ERP. You had a structured data set. And then on top of that structured data set, you had an out of the box series of processes, which you could choose to adopt as an organization without sort of compromising the integrity of those processes out of the box, or you could choose to configure those processes for your organization.
And then you might cause yourself some hassle down the line when product upgrades come along and stuff like that. Mm-Hmm. But to your point, once it's there, it's locked in. It's pretty prescriptive. And then you, it's change management to get it in and then it's change management to change it. Mm-Hmm. And you know it, and it can be quite, it can be quite, you know, it, it's, it's a, it's a thing you have to manage as part of your IT estate is what we're talking about here.
The next [00:31:00] generation being: you have data sets, which may or may not be structured, I'm guessing. They could be unstructured. You have large language models, and then agentic intelligence at the top, executing the same tasks, but sometimes in non-prescriptive ways that presumably would be tuned by the sort of outcome that you want to drive.
Have I got the right end of the stick with that? Yeah, absolutely. It's basically that, yes, that the ability to adapt to real world conditions that were potentially unanticipated. Right. Right. That's always been the human effect though, isn't it? Why you get, like, the workforce, why are humans still in the loop?
Because they need adaptation through lack of clarity; we're good at dealing with ambiguous situations and suchlike. And, and now we can, you know, pass some of the low-risk decisions, you know, to, to agents, to autonomous agents, but clearly humans are making the consequential judgments. I [00:32:00] think there's a bit about, I mean, regulation
is very clear on these types of points: where a decision is made in particular scenarios, you need to show the traceability and the reason why the decision was made. It has to be open and transparent, and the models are starting to get better at showing the traceability through and saying, actually, here's the pathway that was taken and why the answer came out. Because it was always a bit of, um, an issue, they're starting to mature in that area.
How long do you think it will be before, like, you'll get mass decision making from the model and regulators will stand back and go, okay, we're happy with that style of algorithm making these types of decisions? Because with prescriptive ones that are deterministic, you can always prove: data in would always produce that data out, as long as you had the version of the algorithm.
Where, where's your viewpoint on LLMs getting that level of maturity where we can use them for significant decision making? Well, look at what they do now, even now, pretty well: when you ask in the prompt to explain your reasoning, or to explain the rationale, [00:33:00] they'll, you know, give you that explanation in language.
Now, seeing inside the neural network in order to help regulators understand why one word was predicted as the next word, I'm, I'm not sure that's even that helpful, honestly. What's more useful is explaining in natural language, you know, with human-style reasoning, what was, what was done. Uh, and do you think that's, that's a few years and it'll be in place and people will start to accept the risk?
Or do you think it's a bit of a further curve before organizations start sort of trusting core process to these types of algorithmic approaches? Uh, look, it's going to start with the low, lower risk, low hanging fruit, no question, right? And what we're trying to encourage our customers to do is adopt it in low-risk areas so that your company gets used to this.
And then the guardrails continue to improve over time. It's a very good point you make about the low-risk start, but it must be analogous to other things we're [00:34:00] doing as well, you know, in society where technology is changing the way we operate day to day and, you know, regulators and safety critical systems are moving.
Autonomy is building, you know; it's still a viewpoint that says we will have to get used to the machine being more in control. So true. In fact, so, you know, I live here in San Francisco, and we have self-driving cars all over the place. I see, I see them every day, multiple times. And, you know, it's just a few years ago we were talking about assisted parking in this domain as the big thing.
And now I'll tell you, I put my kids in a self-driving car to get to school. It makes... yeah. Um, basically I feel very, very comfortable. And here's why. Number one, they are, they're trained to be cautious. And so it actually takes a little longer to get places, but that's okay when you're a mom. Right. And I think with the self-driving cars, what it really hammers home is [00:35:00] the evolution of the guardrails and how that's so fundamental to our ability to have...
And I see it being exactly the same kind of evolution for autonomous agents in business.
So unfortunately, Esmee is not with us today, but I am delighted to say that Rob has jumped into the breach. Now, when I say jumped into the breach, slightly manhandled into the breach, because, to be fair... What do you mean? You pushed me off the cliff, Dave. That's what actually happened in this situation. So, thanks very much.
Rob. You were helpful, you jumped in, and you saved the day. What have you been looking at this week, Rob? Ah, well, so, tech confusion obviously is a theme I like. That's your specialization, I think. That's my thing, being confused. I'm very good at it. But there's confusion in AI. Do you remember when Kubernetes [00:36:00] came out and nobody knew what Kubernetes was?
And still loads of people don't know what Kubernetes is. It's like, Kubernetes? Yeah, whatever. We need some. How long? A long time. Were you getting emails about Kubernetes with the acronym K8S? How many, how many times did you have to read K8S before you realized it was Kubernetes? Well, for some of us who were invested in the platform and watched it evolve, it was there, but I'm pretty sure people still don't understand what it is.
But anyway, that's an aside. AI has the same sort of confusion reigning around it. So, what I looked at was some of the areas and things that people need to understand to demystify AI. So, I've got a list here and you can agree or disagree. Everybody knows I love a list to go through. But the first one is AI versus machine learning versus deep learning, and they're often interchanged, but they're not the same.
So, AI is the broad concept. Machine learning is a subset that involves algorithms learning from data, and then deep learning is neural nets [00:37:00] with multiple layers. Now, we may be getting into, um, a lot of detail for, for many, but actually there are lots of different variants of AI and approaches to AI. And I think a lot of people, especially when Gen AI came out, they all lumped it into, it's all that, isn't it?
You go, no, these are very different approaches. So that was area number one of confusion. I would encourage our listeners to go back to one of our earlier episodes, where we did a whole show, didn't we, on different types of AI and things like that. So if you're interested, uh, do, do dig through some of the earlier episodes.
Number two of 150. But it's actually 934. No. Um, uh, the second one is that AI is not magic. There is a computer and an algorithmic approach behind it. Um, and it's not going to be the savior of everything. And there are these big maturity cycles we have to go through. And we just had the conversation about risk, et cetera, but it requires a lot of data and computational effort and power to function effectively, and lots of refinement.
So that's the second thing: it's not some double click on yes and AI will save the [00:38:00] world type thing. There is a huge maturity cycle we need to go through to be effective. I have got an excellent quote though, on your point about magic. Go on then. And the quote is from Arthur C. Clarke. Yep. And he goes, oh yes, I know this one.
Any sufficiently advanced technology is indistinguishable from magic. And I was having a conversation the other day about, because we've often chatted, haven't we, about what will be the moment when AGI happens? Will it be a ta-da, I'm going to switch it on and, like, we're going to unveil an AGI, or will it have happened in such discrete moments that actually, you know, it might have happened already and we just haven't noticed it yet?
And I was chatting about this with one of our chief architects, and his perspective was, well, we keep moving the goalposts. So actually, if somebody 30 years ago was looking at what Miranda has just described in terms of [00:39:00] agentic intelligence and adaptability, would you look at that and go, well, that looks pretty much like AGI to me?
It's the Terminator version for me, Dave. It's just going to appear from nowhere and subsume the human race. You're a stickler for having the dark side. Dystopian futures are a thing: if it's not trying to use me as a battery, it's not AGI. I doubt it's going to be somebody takes, like, a cloth off a computer on stage and says, meet Bob, the, uh, general AI.
I don't think it's going to be that. I think somebody will go, ooh, actually, it seems a bit clever, this one. Uh, um. Anyway, crack on, crack on. Yeah, yeah, no, no, yeah, yeah, we're going to be here all night, Dave, at this rate. Follow me. Right. Next one is general AI versus narrow AI, and of course we discussed this on the thing about generalist models versus domain specific ones, the agent at the top controlling specific tasks from that.
There is still some confusion about the different approaches: domain-specific versus adding context versus normal, non-domain models, et cetera. And again, that still causes a [00:40:00] lot of confusion. Um, do you have a comment? No, you want me to move on? Just checking in with you. You looked like you were about to say something intelligent, and I thought you would do one.
I'm waiting for you to come up with a good one.
Imagine having to work with you on a regular basis. It's a constant delight, isn't it? Um, the next one: to my point earlier around security having to be baked in everywhere, and Miranda, you said the same around data privacy, the concern is that these models are trained on data, and, um, it becomes caught up in that model and the way it behaves.
So we have to think very carefully about data privacy, data usage, the efficacy of how that data is distributed, deployed, et cetera, et cetera. So that's one where I think we're going to have a few AI bloopers. We've already had a few, haven't we? But we're going to get a few more, especially around data, I think, in the future.
But people don't always think about the complexity associated with that. Do we think... I've got one on this one. As part of that little PE team that we've been, [00:41:00] that I was describing earlier, one of the things we've obviously been thinking about is data sources. Now, obviously, we want to,
we're going to be very responsible about that. We're going to use trusted data sources, et cetera. Your initial set of trusted data sources is only going to get you so far. They're only going to support, you know, potentially specifically limited things. And I was thinking, well, is there a market in the future where organizations themselves then subsequently become data sources, where you can then license that data?
So like a research firm, for example. At the moment I interact with their research through, you know, uh, researchers and analysts and, you know, people having a conversation with them, leveraging their research and white papers and suchlike. Well, doesn't that just become, do we think, an island of data in future,
with an LLM around it that I can then connect into my environment? There's definitely a monetization [00:42:00] strategy point here. And the thing that gets monetized is always the thing that gets remembered. So think of Napster. Yeah. Napster was the one that got people used to music and streaming and that concept, but they never monetized.
And in fact, there was a legality issue there and everything else. The organization remembered for being the one to bring digital music to the masses was actually, um, Apple, via the iTunes store, which was the first one to arrive where people got used to buying digital content. I think there's the same thing coming with AI, which is whoever effectively monetizes it will be seen as the person who did that well.
And will it create data boundaries? So walls go up around data that was traditionally open, because people know they can sell it for, um, money. Yeah, yeah, it's gonna be really one to watch. That's really one to watch. Marcel, write that down. Make sure that we copyright that, Cloud Realities Productions, as an idea.
Don't let that one go. And before this episode goes out, make sure we've done that. Um, the next one is the workforce stuff, which people are confused [00:43:00] about, and they know there's workforce re-planning coming, with agentic AI as a classic example. What does it do for the workforce? And people are confused about
how they plan the workforce for the future. What does this mean for training and skills? We discussed, how does an individual learn in the domain if they're being replaced by agents? So, we talked about the paralegal, remember: if AI just does the paralegal role for me in the future, how does anybody ever train in the, you know, the legal discipline?
So there's a sort of what's the future for creating careers? Um, I think that's one of the more profound questions, isn't it? The impact on what humans do, you know, without wanting to oversimplify your point. And then the final point is ethics and bias issues and the complexity in ethics and bias, uh, and people still don't quite understand that or how to deal with it.
So there's two types of confusion: how do I tackle ethics and bias effectively, and do people understand that it is a massive issue with this type of [00:44:00] stuff as well? So, is that, as a summary, if you can remember the beginning of it, Miranda, about five hours ago when I started, as a long summary of the situation, does that, does that resonate with you?
Anything missing? Uh, that, that totally rings true. And maybe I'll just come back to the example I used in recruiting. Because, you know, one of the things we didn't say is that we're going to have agents deciding who, you know, who gets hired, who does what, who gets promoted, you know, that type of thing.
Right. But there's a ton of opportunity for automation and smaller decisions. Great way to put a pin in it. Wonderful. And a really solid note to end on. Miranda, thank you very much for taking the time and two cups of coffee to come and join us this morning. It's been a brilliant and very educational conversation.
Thank you very much for that. It's my pleasure. Thank you. Now we end every episode of this podcast [00:45:00] by asking our guests what they're excited about doing next. And that could be, I've got a restaurant booked at the weekend I've been looking forward to, or it could be something in your professional life, or maybe both.
So Miranda, what are you excited about doing next?
I am looking forward to tonight, which is hanging out with the other parents of kids in my school, dancing to Cuban music and, um, having a good time while the kids are taken care of by babysitters upstairs. So that's going to be great. And like I said, I'll get them there in a self-driving car.
You are living in the future, Miranda, you are living in the future. I still have to push pedals in the car. It's not, it's not fair, this. Have you gone to automatic yet, Rob? Are you even changing gears still? Still the Fred Flintstone version of the car, Dave. That is amazing. You're like, you know, maybe when you come on next time, in 18 months' time, you'd be like, I've stopped [00:46:00] using the self-driving.
I get the flying ones now. Be the Jetsons. Be the Jetsons. Jetsons, yeah. That's it. Well, that was wonderful. Thanks again, Miranda.
If you would like to discuss any of the issues on this week's show, and how they might impact you or your business, please get in touch at cloudrealities@capgemini.com. We're all on X and LinkedIn, and we'd love to hear from you.
So, feel free to connect and DM if you have any questions for the show to tackle. And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Miranda, our sound and editing wizards, Ben and Louis, our producer, Marcel, and of course, to all our listeners. See you in another reality next week.
[00:47:00]