How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I'm so pleased to be joined by Dr. Chris Marshall. Chris is the VP of Data Analytics, Sustainability, and Industry Insights-- I think I got those all-- at IDC Asia-Pacific. He's going to think about that and let us know if it was wrong.
And he joins us today to talk about what we should be keeping our eye on in AI in 2026 and possibly beyond. And so we'll be talking both about macro factors and trends that perhaps we don't talk about enough or at all, right down to the evolving story of applications such as agentic AI.
Welcome to the show, Chris.
CHRIS MARSHALL: Thank you so much, Kimberly. It's a real pleasure to be here today.
KIMBERLY NEVALA: Now, Chris, you are the very definition, I believe, of a lifelong learner. I had a little poke about and--
CHRIS MARSHALL: You found out my dark secrets.
KIMBERLY NEVALA: [LAUGHS] Your academic interests have ranged from a background starting with things like theoretical physics to finance to business administration, and of late, analytic philosophy. So talk to us a little bit about how your academic career, if you will, has mirrored or influenced your professional career, or vice versa.
CHRIS MARSHALL: Well, that's a great question.
I suppose I came to this as a hardcore quant, really. I mean, I’m a mathematician and physicist. And in those days, if you wanted to be a quant doing hardcore research, you nearly always had to be in the Defense Department, frankly. In the early days of AI, it was all about DARPA funding and Ministry of Defense in the UK, as I was. And that's how I started.
And then I slowly morphed into another area that required a lot of quantitative skills which was options trading. I got sucked into that for a long time. And then I came back to what I always felt was my first love, really, which is AI. And of course, with the developments in the late '90s, early naughts, suddenly we started to see the end of what was a bit of an AI winter.
And coming back, obviously, my days of coding are over, sadly. But what I can do is understand what the larger trends are in the marketplace for AI and how technology is impacting business more generally. And that's still my interest. I mean, I think AI is also interesting because it does poke at a few really important philosophical issues, and even questions about humanity itself. And that's one of the things I've been very interested to find out more about, the ethical dimension. Because that directly plays into the larger question of AI governance and AI sustainability, if you like.
KIMBERLY NEVALA: Absolutely. Yeah, it's interesting. We've talked to quite a few guests that have started in philosophy and then kind of worked their way over to ethics or into coding or all the way back around. So it was interesting to see that evolution from theoretical physics into philosophy as the latest landing zone. Although, I think a lot of the narratives that we're seeing right now make that quite logical in its own way.
CHRIS MARSHALL: That's right. It's interesting. I always remember a colleague of mine who used to say that ultimately every business problem is a philosophical problem. It's about ethics. It's about what's the most important thing here? What are the factors that drive this decision? What are the factors that really make a difference?
And analytic philosophy is just a fancy way of saying, chop up the problem into different pieces and figure out what's really driving it. It's very easy in a world of soundbites just to throw platitudes at problems. And I think philosophy is great training for anyone to think about: what really is important here?
KIMBERLY NEVALA: Yeah. I think we're going to continue to see a bit of a merging between our philosophical or some of these softer sciences, if you will, and computer science as we move forward.
CHRIS MARSHALL: Employers are already saying we want people with critical skills. And ironically, the days of a computer science degree being your passport to a lifelong career in computing seem to be over in many ways. And I think we're having to deal with that - all of us.
KIMBERLY NEVALA: And we're going to come back to that, I think, at the end here as we think about skills in the workforce.
But I'd love to take a step back and step out and widen the aperture a little bit and talk about some of the overarching stories you are keeping an eye on and you think that we should perhaps all be discussing more these days. And maybe we can start with a view of some market considerations that certainly boards, executives, and I think really all of us should be interested in. At least from a governance or risk perspective, if nothing else.
CHRIS MARSHALL: I have to be honest. I'm based in Singapore. My job is Asia-Pacific.
So I tend to look at this from a non-US, non-European perspective, increasingly. And I think there are some significant differences in how the rest of the world views AI compared to how it's viewed, say, in the US, or even the UK or Germany or whatever.
And there are some things that are very much on our radar, which I'm nervously contemplating and trying to figure out how we should think about. There are many things.
One would be, I suppose, nervousness about sovereign AI. I think that's a real concern for us right now, particularly in the region. We're between, obviously, the US on the one hand, with its big international vendors usually emphasizing closed-source AI. And on the other hand, you've kind of got China and India, to some extent Japan, a little bit Korea - countries that are really, for the most part, adopting open-source techniques as their vehicle into developing an AI capability.
And then we've also got geopolitical concerns in the background that are making people think, well, how big a deal is AI? We can talk about that maybe a little bit more later. But I think they're certainly viewing the potential impact of AI on national economies and taking that very seriously. And, therefore, realizing that AI capabilities are themselves a bit of an economic weapon, and in some cases even a geopolitical one. I mean, obviously, if there are heightened tensions between different countries, that makes a difference in terms of the adoption of AI.
And I think that the nervousness about sovereign AI is a reflection of that. Countries are saying, well, we need to deploy AI on domestic data, domestic platforms, domestic operations, domestic people, whatever, in our national economies. And this might not sound like such a big deal in the US, where you've got a huge economy and a huge range of very capable platforms to choose from.
But if you're in a smaller country, like, for example, I'm here in Singapore, we have a real issue with being able to seriously adopt sovereign AI because it's just not practical. I mean, we depend on AWS for cloud services. We depend on Google for models. Maybe we depend on Anthropic. Maybe we depend on NVIDIA. Maybe we depend on Intel. Maybe we depend on Cisco. I mean, the list of vendors that are from outside our shores is just huge. And it's completely impractical, honestly, for most countries in most of the world to think seriously about a sovereign AI strategy.
That said, they're all going to be pushing in that direction, and we're already nervous that certainly multinational organizations will be forced to have different AI platforms in different sovereign regions simply to deal with the potential imposition of some sort of national rules about where data can be captured, where it can be resourced. And we've already got it with things like data privacy. But it's just going to be taken to another level in this space.
KIMBERLY NEVALA: So for folks who may have heard the term and think they know, can you define what sovereign AI, in its purest instantiation, would look like? And then my question is: what kind of effect will this have on AI adoption and AI development? Is it just exacerbating tensions that are already there because we have such a mishmash of evolving regulatory regimes and such worldwide?
CHRIS MARSHALL: Yeah. I mean, there are many definitions of sovereign AI, but they boil down to the same thing. They boil down to where the data is captured, where it is managed, where operations are done, where models are developed, where models are maintained, where inference is performed, where compute, networking, and storage capabilities are found, and ultimately, who has control of those different pieces. And making sure that they are consistent with national regulations in some cases, but also, perhaps more importantly, with national economic agendas.
For example, Singapore may say, we need to have a national infrastructure to support LLMs, for example. And that's great. Maybe there's good reasons for that. Maybe there's language issues, for example, that might make a difference. Or maybe there's regulatory issues that force you to keep your data in one place.
The practical matter of it, though, is that sovereignty is always very, very difficult in our very interconnected world. And in practice, for example, you may have data sovereignty, but you might not have operational sovereignty. Or you may have operational sovereignty, but you don't necessarily have what I call technical sovereignty. In other words, we're using other bits of technologies from other parts of the world.
So there's a sense that we want to be less beholden to vendors from overseas. I mean, that's part of the story, certainly, because we want to build up our own national economies. And again, translate that story not just to Singapore but to the EU, to Brazil, to Russia, to Australia - the conversation is more or less the same.
And I think in the pushback away from the world-is-flat kind of attitude we had 15 or 20 years ago, this clearly is becoming an important topic. Especially if we say that AI is going to be so fundamental in changing our economies; then, logically, where AI is managed, maintained, and controlled becomes a national issue. And we already see that things like data privacy and security are probably the biggest single challenge facing most enterprise CIOs working on AI projects right now. And once they start getting regulated, that's only going to become more of an issue.
And the consequence of that is going to be cost. It's just going to be a lot more expensive to have-- I'm going to need a-- maybe I need a platform in Hong Kong. I need one in London. I need one in Frankfurt. I need one in New York. The whole point of AI is ultimately about economies of scale. If I'm suddenly having all these different platforms, that's a real additional cost on making AI useful because I can't share data. Maybe I can't share training data. Maybe I can't share resources, compute resources, storage resources.
It just becomes a bit of a mess, frankly. And that will inevitably slow things down.
And that's particularly true for things like GenAI. Maybe even, maybe more so, for agentic. Who knows? We're not quite clear on that.
KIMBERLY NEVALA: Yeah, interesting. And you've said, and others have said as well, that AI, by its nature, is somewhat naturally centralizing. It clearly affects large, as you said, multinational companies. But I wonder if that also potentially boxes out smaller businesses - small or medium businesses who want to participate in the global economy but are located wherever they are in the world.
CHRIS MARSHALL: Yeah. I'm sure that's true. I mean, within the AI industry itself, it's already happening.
Yesterday, I think, or the day before, whatever it was, we had the announcement from Microsoft and their work with NVIDIA and Anthropic, I think. And what you see is the big players, not always but often from the US, North America and Europe, ring fencing and saying, well, now, we're going to build partnerships. We're going to build vertical integration across our supply chain, all good things from a macroeconomic perspective.
I mean, they make perfect sense. They increase your flexibility and pricing. They guarantee your research ability to plan for the longer term. All of these things make sense. The problem is that inevitably that locks out potential entrants into this business. So if you're not in that little cabal of preferred vendors, if you like, you're going to struggle. That's just a fact of life.
And it also has knock-on effects to the users of the technology. Suddenly, the users of the technology, whether you're in Timbuktu or Delhi or wherever it happens to be, you don't have a choice because nobody else can compete, frankly, with these big players because they've got these scale economies that they've obviously worked hard to get. But it becomes a bit of a self-perpetuating thing, really, in many ways.
KIMBERLY NEVALA: Yeah. And you referred there a little bit to this market nexus, or the gravity of the market being within a few players. And that certainly plays into - I think we'd be remiss not to at least touch on it - since we're talking about some of these more global or macro narratives: is it a bubble? Is it not a bubble? Is this just a self-reinforcing economic cycle or not? So is it a bubble or not?
CHRIS MARSHALL: I am not a stock analyst. I am not--
[LAUGHTER]
KIMBERLY NEVALA: Do not take financial advice
CHRIS MARSHALL: If I knew that, I'd be a lot richer than I am. I tell you that.
But I think, let's not kid ourselves. There are some clear signals that there's a little bit of a hint of a bubble forming. I mean, just a massive investment. I mean, look at NVIDIA's earnings. I mean, look at the massive investment in infrastructure, and people are calling that investment in AI. That makes sense. It is an important investment, a necessary condition.
But the fact of the matter is that AI is more than infrastructure. It's more than chipsets. It requires investments in solutioning and data and skills and all sorts of boring things that maybe don't get the headlines. But without them, you're not going to see business value. And if that's the case, that means we've made all these investments in infrastructure. And they're not doing this out of charity. They're doing this because they want to see a return.
So where are those returns again? Trust me, it will come soon is not usually considered a good answer to that. So I think there's a real mismatch between the expectations, which have led to significant investments. Often, as you say - I don't know what the right word is - closed loop investments, you might call them. Where there are investments, as we saw the other day with Microsoft and NVIDIA and Anthropic, investments in partners. Which all make sense. But again, they have the effect of shoring up the barriers to entry into the market.
And I think one thing that we are seeing or worried about is that it wouldn't take much for management to start to say, well, you've talked about GenAI. Yeah, we've done lots of projects, but 30%, 40%, whatever number you look at, have been successful. But 60% have not been successful. And that's not good. I mean, it means that the ones that are successful have an even higher burden in terms of their ROI to overcome.
Agentic AI is likely to be even more so. So all this push to agentic AI just kicks the can down the road a little bit because it says, well, maybe GenAI didn't lead to the 300% returns that we thought we were going to get. But agentic AI will. But again, agentic AI, I think one thing people don't appreciate with agentic is that it's going to take years. It's going to take several years to be realized.
Why is that? Well, because agentic AI, by its very nature, needs to integrate with lots of different systems. It needs to change how people work in many, many different ways. Whenever you have integration with other systems, hard technology debt, if you like, within the enterprise, or you have to change people's ways of working, it's going to take a lot longer than you think. And then the question is, does management or the investors have patience for that sort of timeline? And are they going to get bored and want to do something else? And that might easily lead to a reaction.
KIMBERLY NEVALA: And there are any number of studies out right now - you see numbers anywhere from the typical 80% to 90% of projects failing with AI broadly, GenAI specifically, or agentic AI - to whether it's leading to productivity gains or not, and whether that's even the right objective.
And one thing that has, I think, been interesting, though, is as the generative AI narrative has morphed into the agentic AI or maybe bled into it. I don't know if morph is right - that probably sounded more pejorative than I intended.
CHRIS MARSHALL: No, I love it. I think it's great.
KIMBERLY NEVALA: But into the agentic AI narrative there is this sense - there has been a level of, I would say, hype - in the sense that it's inevitable and it's really happening right now.
CHRIS MARSHALL: But I think it's also important to keep in mind there is a-- I'm an evangelist for the technology. The technology clearly will change how people work eventually. I mean, the timeline is always difficult. And the hardest part about being a technology analyst is figuring out how long is this going to take? How much willingness do people have to wait?
And we're not in the sort of world where people are very willing to wait for things. And the ROI could easily take three or four years, especially for agentic AI. With GenAI, we've been a little bit seduced by some quite nice use cases, particularly on the productivity of end users, that deliver value almost immediately. And we say, yeah, that's kind of useful. It saves me a bit of time. I don't need to go and do this research anymore. Or maybe I do a better job of the research as a result of using the tools. That's great.
But it tends to make us think that agentic AI is just the same, but bigger. Not at all. Not at all. It's a very different kettle of fish, precisely because it changes the way people do stuff, rather than simply understand or gather information.
KIMBERLY NEVALA: You guys did some research for us that looked at people's perceptions of things like generative AI. And I do always wonder whether this issue with people expecting agentic AI to just be there and be really simple has come from this expectation. It kind of looks like you can ask an LLM about anything, and you'll get an answer. But then there's this idea like, well, can't it just then do this automatically?
And it tends to miss that fine, not so fine, point that an agentic system is an engineered workflow. It's made of multiple components and a whole lot of decisions that have to be made about which decisions are automated, what that even looks like, what that means, and how people are going to engage with it as well.
So how do you see that whole narrative around agentic AI changing as we move through this new year? And how do you see organizations starting to think about it - I don't know if it's thinking about it differently, or if they're just kind of hunkering down to do it with a little more deliberate intent?
CHRIS MARSHALL: Yeah. Let me deal with the first part of your question first.
The trust issue, I think, is so important. I mean, we may disagree with Sam Altman on some things, but on the other hand, I think he said something very profound and very important: AI, or GenAI, is exactly the sort of technology you should not trust, by definition.
But the irony is, if you ask the average corporate workforce that uses the technology to any extent, do you trust the technology? They'll probably all say, yeah, I trust it pretty well. I mean, it's not perfect, I know, but I generally trust it. I trust it as I would trust a colleague, for example, maybe. That's the attitude. And yet the reality is that in many cases that trust is not justified.
So again, we were talking about philosophy earlier. The definition of knowledge is usually considered to be justified true belief. In other words, it's got to be true. If it's not true, you don't know it. You've also got to believe it, which is an interesting fact in itself. If you ask a GenAI tool whether it believes anything, it'll probably say, I have no idea what you're talking about, because it doesn't believe anything. There's an emotional commitment to belief.
The other thing is that it's justified. You've got a reason for believing it. And one of the challenges with GenAI is that very often the information is completely unjustified. And so if we're not careful, we end up living in an echo chamber, if you like, where information is just bouncing around, and we're just riding along on the information as it flows around in this echo chamber. And that's a very dangerous thing for people to base their lives, careers, businesses, economies on.
And I think that unless we have this skeptical attitude about what AI can and cannot do, we're running big risks. Moreover, I spend a lot of time on risk management in my career and risk is one of those funny things. If there's a risk, eventually it will be realized. It's almost inevitable because that's how risks work. If there's a certain probability of something bad happening, eventually the bad thing will happen, unfortunately.
So it's really important that if you're serious about getting value from AI, you take the risk seriously up front. And this is why the work we did with SAS really focused on things like: what is the trust gap? The gap between whether it is trustworthy - the information we get from GenAI, for example - and the extent to which we do trust it. And they're quite different.
If you ask, I mean, you ask a bunch of kids-- I did this quite recently in a high school. And I said, who do you trust most? Do you trust ChatGPT? Do you trust your parents? Do you trust your teachers? Guess which one came on top?
I don't want-- you can make your own guess.
KIMBERLY NEVALA: I'm refusing to answer this on principle.
[LAUGHTER]
CHRIS MARSHALL: No, it's true, though. But the GenAI came on top because people are so attached to their mobile phones. And they said, yeah, sure. I trust it. I never trusted my dad. He's a real pain in the neck. And my teachers, no, I don't trust them particularly. They've got an agenda.
And that's kind of sad, really. That's not a good situation for, again, a person, a society, an economy, a country, a company to be in, where we don't trust expertise. Because guess where all that information from GenAI comes from? Ultimately, it comes from people writing stuff in reports and sometimes experts, sometimes not.
So we have to take that skepticism very seriously. And if there's a gap between what we say we believe and the truthfulness or trustworthiness of the technology, we have a problem. Because sooner or later that tool or technology is going to deliver things that we are surprised about. And maybe are just plain wrong. We've already seen high-profile examples of exactly that.
KIMBERLY NEVALA: Yeah. Yeah. And again, that tumbling downhill of that inherent or intrinsic trust we tend to get when we have this experience or interaction with a generative AI system.
Although, I will say, in defense of the kids, there was a meme going around not too long ago about, I think a middle schooler or something calling his parents out for their lack of understanding of pretty much anything. But I think his mom or dad said something, and they said, that's so AI. And they said, what do you mean by that? And they said, oh, you just made that up.
CHRIS MARSHALL: [LAUGHS]
KIMBERLY NEVALA: So all right, there's still some--
CHRIS MARSHALL: That's a wonderful American expression. I think it's fantastic. That's so AI.
[LAUGHTER]
KIMBERLY NEVALA: Yeah, maybe that is an Americanism. I don't know.
CHRIS MARSHALL: I think it's brilliant. No, it's probably true. But actually, that shows a level of wisdom in the student that is kind of interesting in itself, that they're seeing through the reality of the technology. Fascinating.
KIMBERLY NEVALA: Well, and I want to circle back to something you said about expertise. Because I think what we are also starting to see now - we're seeing this in some of the research and results - is that expertise can amplify the advantage of using AI. But if you don't have the opportunity to develop the expertise, you are potentially in a world of hurt.
And it goes back to that ability to have any sort of-- People will say they need to use it to help them with critical thinking. It is difficult to do that, though, if you don't have, to some extent, a baseline of truth or wisdom. We'll stay away from philosophically defining those terms for the moment. I'll use them loosely and against my own principles. But this is an interesting problem as well.
Are organizations, or are we more broadly, thinking enough or leaning in enough about the impacts on the workforce? Have we started to internalize and think more critically about what this might actually mean and take?
CHRIS MARSHALL: That was easy. No, we haven't.
No, it's true. I mean, we're already seeing the benefits of AI being somewhat distributed unevenly across the enterprise. So suddenly, we are enabling senior executives to have insights that perhaps they wouldn't have had otherwise. We're seeing experts in the organization who can actually query and argue with the data and have almost a dialogue with the tools.
In fact, this is really important. I mean, prompt engineering is just a fancy way of saying having a conversation with the tool and having an interactive discussion where you sort of poke out the holes and compare it with the results of other models, et cetera, et cetera.
And I think one fascinating thing there is that the people who benefit from the use of GenAI in particular tend to be the ones who are the more experienced people within the organization anyway. So everybody benefits, I think it's fair to say, from the use of the tools. But the people at the lower end, particularly entry-level people, benefit a little bit. No question. But the people who've been in the business a long time, who know where the bones are buried, if you will, benefit a lot more. And that means the gap between the experts and the guys at the bottom, or ladies at the bottom, only gets bigger.
And then at the same time, we've got the issue that people are saying, well, if that's the case - and imagine I'm a senior executive trying to figure out where I want to put my resources - then suddenly the marginal benefit of having more of these guys or ladies who are experts within the enterprise, whom I can leverage to presumably sell more stuff, whatever it is I'm doing in my particular business, means it's much more sensible to put more money into them than it is to put more money into the entry-level people.
But then, as you bring up the point, these people didn't come out of the egg suddenly knowing everything about the world or their particular business. No, they started off down here. So of course, what's going to happen is if we start getting rid of the lower-level folks, we throw away the rungs of the ladder, if you like, that produce the people who are ultimately going to be providing the real value within the enterprise as a whole.
So I think that transition is still very much an open question, and it's only getting worse. It's not getting easier. I think you've seen it most pointedly in the tech companies. And you've seen some high-profile companies that have let go of a lot of people. Sometimes they've retrained them. Sometimes they've ended up having to rehire them afterwards, having not realized those people were doing such an important job. But I think that kind of slightly disorganized chaos about hiring and retraining is going on.
And I think there are some insidious things going on as well, which I have a real issue with. For example, some companies are saying things like, well, we're laying people off because of our use of AI. In fact, they're almost proud about the fact that they're laying people off, which I think is, again, morally completely unacceptable.
I accept that companies have a responsibility and a need to develop returns for their investors. But ultimately, companies have a larger stakeholder base. They have responsibilities to the nation. They have responsibilities to employees. They have responsibilities to customers, as well as investors. So I don't think this can be understood entirely from a return-on-investment perspective.
KIMBERLY NEVALA: I was asked this really interesting question on a panel recently. And they said, how should organizations think about this question of the workforce and reskilling and restarting?
And I said, well, listen. At some point, we're all going to have to-- we don't really have an answer. It's not good enough, though, for organizations who are working with adults, hopefully mostly reasonable adults, to say, well, it's OK, you're just going to find another job, or we're going to reskill you. If you tell someone that you are going to be reskilling or upskilling them, there needs to actually be a plan to do that. Otherwise, you should probably be fairly upfront about what that looks like.
And that's not an easy place to walk. I think that's not easy at that intersection. So I don't know if you've seen organizations that are doing this well or that have had really good success in moving people. Maybe, I don't know if it's up the same ladder or onto a different ladder.
CHRIS MARSHALL: Yeah. I think the argument or the hope is, and again, the game is still very much in play. So I don't think anybody's got the answers here.
But the hope is that obviously, as we use the technology - particularly agentic, maybe less so GenAI, but certainly agentic - people become on the loop rather than in the loop. They become less of a constraint on the process and more analyzers of the larger process and the opportunities that can come out of it - maybe for reinvention or change or new products or new services, whatever it happens to be. That's the hope. At the moment, that hasn't quite happened yet. It might happen.
But if people are going to do that, they're going to need a level of experience. Which again points back to our thinning out of the lower ranks of the organization, and a general flattening of the organization, too. I mean, I think we're going to see much wider spans of control within the average enterprise across the world. In many organizations, my own included, everything is very globally organized, and we often have people with 10 people reporting to a single manager. And this is probably something that would have been impossible just 10 years ago, frankly, practically.
So the companies that have done this quite well - the companies that have actually adapted best to GenAI, in my view - are usually, in one sense, quite boring companies. I mean, I'm always impressed - I love manufacturing plants. Being from a banking space, it's always much more fun going to a place where they actually make stuff, real stuff, or energy companies, or health care businesses. Because there you see the practical aspects of how stuff gets done.
I was at a Hong Kong energy utility quite recently. And one of the things I was so impressed by was their use of AI - not so much GenAI, although that was there - helping their civil engineers worry about boilers: how boilers crack and when we're going to do maintenance on this particular system. One thing I found fascinating was the power and the extent to which these companies have deployed not just GenAI and agentic AI, which get all the headlines, but rather the sort of, I know it sounds old-fashioned now, predictive AI. And it's a bit sad that machine learning techniques that were the hottest thing 10 years ago suddenly sound old-fashioned. But predictive AI is incredibly powerful in many of these old-fashioned industries. And guess what? These old-fashioned industries are the things that put food on your table and power the lights and look after the hospitals and all the rest of it.
So I think we need to not think of everything too abstractly. It's not always about knowledge management or knowledge flows. Sometimes it's about fixing things that break. It's about being able to understand what are the potential indicators of defibrillation coming up in the next few months? And these are very, in one sense, very practical things and very powerful things.
And I think there is a slight danger that all the hype about GenAI and agentic AI has slightly taken our eyes off the ball from the more traditional techniques. Which, unlike GenAI, I might add, and certainly agentic AI, have the benefit of being provably more reliable.
In fact, we mentioned the survey. One of the things we asked in our survey was, which do you trust more? Do you trust agentic AI, GenAI, or predictive AI - or experts as well, I think we asked? And consistently they said they really trust GenAI. And that was surprising because I would have thought, quite naturally, you'd say, well, the whole point of machine learning is that you optimize based on the data you have available. So in one sense, it is ultimately as reliable as it can possibly be. If it could be made any more reliable, it would have been made more reliable. So it's almost incredible to think that people should not trust predictive AI.
KIMBERLY NEVALA: And I think it's just that, that just seductive--
CHRIS MARSHALL: It's personal.
KIMBERLY NEVALA: --thing about language. Language is a funny thing. And I don't know if we're necessarily learning all the right lessons from what GenAI has to teach us about the use of language.
CHRIS MARSHALL: I think that's right. We are language-forming creatures. We like to talk to each other. We like to hear what people say. And words mean more than perhaps they should do.
I mean, one of the challenges, obviously, with traditional AI was always its lack of explainability, its lack of transparency. I mean, if I've got a deep learning model that's looking at cracks in a boiler or something, and it tells me something, to some extent I just have to trust it almost by definition, because it's what the statistics tell me. But if you can't explain why that boiler's got a crack, maybe I'm not going to be quite so willing to fix it because it might be very expensive. It might have consequences.
And I think we don't have the same discipline and angst about dealing with GenAI that we do with traditional machine learning. Which is weird if you think about it. When it's exactly the sort of technology we should have more trust in in many ways. Because it was designed and engineered to be optimally reliable in a way that GenAI wasn't.
KIMBERLY NEVALA: Yeah. Yeah, exactly. It is actually not designed to be a truth-generating or knowledge management system. But again, I think this also ties back into why organizations, again, get a little bit wrapped around the axle when they're trying to do things or just rush into agentic AI. There's no magic formula here.
And someone said - Maximilian Vogel said this - he said, we solve the mule problems and not the unicorn problems. We solve the boring problems, and those are the ones that pay dividends. It's interesting, you also said how much more traditional forms of AI and machine learning are used - predictive algorithms and things - in traditional industries. But even the digital industries, if we look at an Amazon, it is actually powered--
CHRIS MARSHALL: Equally--
KIMBERLY NEVALA: --primarily by those types of algorithms. So I do wonder how much we project an adoption or a use of something that's just right on the edge, when in fact, the engine room is still those old techniques.
CHRIS MARSHALL: Yeah. I mean, definitely. I mean, what we certainly do see is that at least in Asia - I can't speak for the US - more money is still being spent on predictive AI than is being spent on GenAI and agentic. I mean, again, it's not the high-profile stuff. But it's, as you say, the bread-and-butter stuff that works reasonably well - the Rodney Dangerfield of the technology. But yeah.
KIMBERLY NEVALA: Do you see organizations then coming back to really thinking about composite technologies and how do we actually think about being very deliberate in architecting workflows and thinking about where we apply analytics of all shapes and sizes, if you will, and how that work gets done?
CHRIS MARSHALL: Well, I think what is happening is, for the most part agentic AI is still very enterprise-based, usually looking at a particular function, ITSM or HR or marketing, perhaps.
KIMBERLY NEVALA: Boring problems. Yep.
CHRIS MARSHALL: Yeah. I didn't say that. They're important, a bit boring, admittedly.
KIMBERLY NEVALA: They are important, but they're not exciting.
CHRIS MARSHALL: They're not revenue problems. They are cost.
KIMBERLY NEVALA: There you go.
CHRIS MARSHALL: And what has yet to happen, really, at least in most organizations, is the breakdown of monolithic, relatively simple agentic AI tools into smaller-- we always talk about swarms or fleets. Fleets is probably a better word than swarms, in my view, because I don't think we're going to have that many agents. In most organizations, we'll have 20, 30, whatever.
But if we have those individual agents that are doing more specific or domain-oriented things, clearly that means that these are going to have to cut across different departments, across functional areas, and eventually into lines of business. Obviously, when that happens, then suddenly organizations take note.
Once they start to get away from the cost side, the functional areas, and towards the lines of business, then suddenly it's part of our core value chain. If that's the case, it's how we actually make money. The rest of the stuff is hygiene factors, quite honestly. If I don't do it, somebody else will do it. And therefore, I'll be at a disadvantage.
One of the things about agentic, and certainly GenAI too, I suppose, is that there's this promise, or equivalently, the fear of missing out. The sense that if I don't do this somebody could do it and completely take away my lunch. It's not going to be a slight disadvantage I'm at relative to another company. But rather, they can completely wipe me out if I don't do this. And that's what's been driving a lot of the spending, at least in the region, for GenAI and agentic, despite frankly, the relative lack of returns on most of the projects.
I mean, the success rates have been pretty embarrassing. There's good reasons for that. And I'm not saying that means that there's no value out of AI. What I'm saying is that there's a sense of throwing spaghetti at the wall and hoping that something sticks. We're doing too many use cases, often, in many organizations. We're not focusing on the top three or four that really make a difference, that have been proven to do quite well and be very effective and where we've got templates and tools to actually help them and support them in that journey.
Instead, we're saying, everybody and his brother is saying, I've got a project we can do. And the danger with that is that you completely lose focus. You don't have the discipline. You don't build the skills. You can't scale it very well. All of these things make a difference. And I think that is one of the things that's getting in the way here.
KIMBERLY NEVALA: Yeah, and it will be, I think, interesting to see. There's a level of foundational capabilities, whether we're talking about governance, whether we're talking about just the data foundations, whether we're talking about analytics maturity and understanding.
And we're also talking, I think, about business maturity. Folks understanding which bits of the business are amenable - where having more information would lead to better decision-making - and what that information is, and where we actually can automate something.
And I know people will be throwing tomatoes when I use automation in the context of agentic AI, but we're talking about smart automation here. Where does that make sense? And what do we really have to do? How do we deconstruct the business, and which bits of it can we do that and where shouldn't we, for a lot of reasons. And it may not just be because the machines can't do it, but because we have to develop, as you said, some of this other expertise.
So as we move forward, do you think that organizations will be, or are they already, kind of looking towards realizing that those foundational elements - which, in and of themselves, are never particularly seen as fun or sexy - are going to be coming to the fore? Because in order to get that sort of leap forward, you actually have to have those things in place.
CHRIS MARSHALL: Yes. Clearly-- I mean, I think most obviously of the data piece. AI, GenAI, predictive AI, they're all conditional on having quality data, accurate data, metadata, accessible data, updated data, all these usual things.
And that only gets more important as we move towards these more advanced technologies. If you try to deploy GenAI, the cleverest models that you can find on a data platform that's inadequate, you're doomed. It's not going to work. Not going to happen.
But the irony is that nobody ever gets promoted for doing a good job of a data platform. I mean, we always-- I used to be at IBM, and we always used to joke that somebody might be promoted to work on metadata or master data management or something like that. We always used to joke-- career death. [LAUGHS]
Because it's one of those topics that is so important, and yet nobody takes it as seriously as they should. And data is just one of those foundational elements. There are others - obviously the technology platform is important, the skills, et cetera. But the data platform is usually one of the biggest things that companies struggle with, at least in my region.
KIMBERLY NEVALA: Another element where I think, again, we see this narrative that kind of goes up and down is about the importance of governance. And we certainly do see at least correlations between organizations that have good governance, that have invested, as we said, in things like responsible AI or trustworthy AI - both technically and from a business process standpoint and oversight perspective. Do you think that's going to be given its due respect, as we move forward here? Or is that just strictly wishful thinking on my part?
CHRIS MARSHALL: No, I think it is. I think it's almost as if, as I mentioned, a risk that is not managed eventually becomes a loss. It's just a matter of when. The timing is unclear.
And AI inherently, as we've said, generative AI especially, and agentic AI too, although it's not quite so clear how that's going to work out, involves risks simply because it's not necessarily trustworthy. It's as simple as that.
So if that's the case, then sooner or later you're going to get a situation where-- the famous example of the lawyer who goes into court with a couple of made-up cases from ChatGPT, and he used them as references in his court case. Of course, they were completely bogus. Or maybe it's going to be a medical situation where a misdiagnosis is the result and leads to lawsuits, what have you. And again, in the sort of litigious world that we're in, those mistakes will be seized upon very aggressively by the market, by your competitors, your customers, and the like. So I'm not even going to mention the cybersecurity aspects, which are obviously in the background of GenAI.
But still, it means that if we don't have the larger governance, risk control, compliance, security frameworks on top of what we currently do, eventually, you're going to make some mistake. It's just inevitable. If that’s the case-- I think a lot of companies tend to take the attitude, yeah, GRC is a nice thing. Yeah, we have a policy document somewhere. Some academic came in and wrote it, and it's not a big deal. We've got it. Tick the box.
The problem with that is that that's not going to be much of a help in a lawsuit, or when your operations go down, or when your customer files a class action lawsuit or whatever it is. Those situations demand a level of discipline in your risk management that you just don't have usually in most organizations, frankly.
I mean, financial services are their own special area. And a lot of the techniques, I think, for GRC are migrating slowly from financial services into other industries, particularly as lawsuits become more prevalent and common, et cetera.
So I think that story is about how we somehow separate duties - I think there's almost a separation of duties required here. Most companies at the moment are worrying about building COEs, centers of excellence, to develop the AI capabilities. That all makes good sense. And they're maybe tasking someone within that COE: I'd like you just to do a double-check and maybe come up with a risk control self-assessment or something like that.
That is not sufficient.
Why is that not sufficient? Well, because as banks have found out for forever, the people who take on the risks should never be the people who check the risks or manage the risks. That is a recipe for disaster. And that means you've got to have a team - how big the team is depends upon the organization - that is separate from the guys or the ladies who actually build the models, who build the tools, who build the platforms, who actually deploy the tools in practice. Unless you have that separation, you're never going to have independence.
And I think that's something we're moving towards. And the regulators, certainly in some parts of Asia, are doing a very good job at saying, you've got to have separation of duties for risk control and compliance for AI, in a way that I think most companies do not have.
KIMBERLY NEVALA: So I know that we've touched on a lot of areas there.
You talk to organizations a lot about this idea of smart scaling. Are there elements or principles that we haven't touched on that you think you'd like to just leave in people's ears? If you want to scale both analytics and AI, all the new stuff and the old, the old faithful and useful stuff as well, what are some of the core principles they need, or practices they need to look at, or mindset-- maybe it's a mindset-- about how to do this well?
CHRIS MARSHALL: Well, I think there's some obvious things to comment on.
I mean, AIOps is inevitably going to change. Perhaps the most important aspect of agentic AI, really, in the short term, is its impact on the IT department itself. And I think AIOps revolutionizes how IT departments are run. And that will have knock-on effects to the rest of the business. Increasingly, business workloads are now IT workloads, and IT workloads are increasingly AI workloads in many cases. So the AIOps piece becomes even more important.
And I think that hints at things like, how do we scale? It hints at hybrid cloud management. It hints at security. It hints at proactive incident management, all these good things. So I think that would be a really important takeaway, a safe takeaway, a safe bet, frankly, in terms of deploying agentic AI. I would start very much within the IT department itself. It's where you've got the skills, very often. And it's where we've already got situations where we're seeing significant ROI. It gets harder when we start to translate those AIOps into larger business kinds of operations, but IT's a big enough place to start, I think. And that would be where I would probably look to do this first and foremost.
Obviously, there's the larger context about hybrid cloud and how that's going to be working. And we can debate whether there's a move towards on-prem, or even, I think there is a move to the edge, certainly, in many of the inference calculations and even tuning calculations for AI. So that probably is pushing the nature of how we manage our operations. That's going to be different moving forward.
So I think the scaling piece is about, certainly about, hybrid cloud management. It's certainly about edge management. It's certainly about AIOps. Those things will be the things to worry about.
KIMBERLY NEVALA: Excellent. Now before we let you go, I cannot resist because, as I said, I took a walk through some of your stuff, and you had posted about making your debut - I don't know if it's a return to the stage or it was your debut - on the stage. Which I thought was fantastic and brave because I'm a big chicken and would never do it. So I'm interested, though. What, if anything, have you taken away from that experience that folks could apply?
CHRIS MARSHALL: What a good question. One thing maybe, again, this is a little personal. But I didn't really appreciate before I went back to this how much of a team effort acting is.
And it gets back to the idea of trust. You can almost guarantee that in any performance - people in the audience might not realize it - there are nearly always at least a few mistakes that happen. The chair's in the wrong place, or somebody didn't quite get the line right and, therefore, swapped in one line when another should have been there.
But what you see there is almost a bit of a microcosm of what a company is like in practice. And agentic AI struggles with this sort of thing a little bit. Because, like any sort of rule-based set of tools, it takes things too literally. And one of the arts of a good acting team - and it is a team - is that you're always listening to what everybody else is saying. And you're saying, oh, they fluffed up that line there. Maybe I should step in and say something slightly different or whatever it is.
And it's that sense of teamwork, which-- I think all of us quite easily fall into our little bubbles, our technological bubbles, and we assume that we have complete control of everything. The reality is, companies are still, and probably will always be, to some extent, teams. And teams mean we need to work together. We need to listen to what the other person says. Even if they say something stupid, we've got to be able to say something intelligent in response and vice versa. No doubt we will make mistakes. And that's the nature of being human.
Just because the technology is out there, and we have invested so much trust in it, doesn't mean it's not going to make mistakes. It will make mistakes. I guarantee it. And unless we're able to be flexible and adaptable and listen to what the hell is going on, we're not going to be able to respond effectively.
KIMBERLY NEVALA: Well, I think that is an excellent note-- we'll take it as a stage note-- to end on.
CHRIS MARSHALL: Stage note.
KIMBERLY NEVALA: So thank you so much for your time and insights, Chris. We could go on for a very long time. And maybe we'll get you back later in the year to see what we're seeing happening in the sphere. But for now, thank you so much, again. Really appreciate you coming on and having this conversation with us.
CHRIS MARSHALL: Thank you so much, Kimberly. It's been a pleasure.
KIMBERLY NEVALA: Alright, so to continue learning from thinkers, doers, and advocates such as Chris, you can find us wherever you listen to podcasts and also on YouTube.