Technology Now

What topics have to be considered while discussing AI? This week, Technology Now is returning to Davos, Switzerland, to dive deeper into the topics surrounding the AI revolution. We ask how sovereignty in AI is linked to trust and explore how sustainability both impacts, and is impacted by, sovereignty within the industry. Kirk Bresniker, chief architect of HPE Labs, tells us more.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Sam Jarrell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations. This episode is available in both video and audio formats.

About Kirk: https://www.linkedin.com/in/kirkbresniker/

Creators and Guests

Host: Michael Bird
Host: Sam Jarrell

What is Technology Now?

HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.

MICHAEL BIRD
 Hello? Hello? Hello? Hello, Sam.

SAM JARRELL
Hi, Michael. You’re still at Davos then?

MICHAEL BIRD
 I am still in Davos, although by the time this episode goes out, the World Economic Forum's annual meeting will be over.

SAM JARRELL
 Okay. So then I'm curious now that the week is just about over, what has been the most interesting thing for you then?

MICHAEL BIRD
 There've been tons of interesting things, loads of interesting conversations I've had. Uh, but I have to say I saw four legitimately famous people. I walked past Matt Damon.

SAM JARRELL
 How was that? That is pretty cool. Interesting to think of, like, Matt Damon being at the World Economic Forum.

MICHAEL BIRD
 Yeah, yeah. No, he was, uh, he was doing a talk on something, I think.

Uh, but anyway, very, very cool. Uh, right, Sam, it's, it's pretty cold. I'm not quite shivering. I'm not far off, though. So should we get on with the show?

SAM JARRELL
 Yeah. All right.

MICHAEL BIRD
I'm Michael Bird

SAM JARRELL
I'm Sam Jarrell

MICHAEL BIRD
And welcome to Technology Now from HPE.

MICHAEL BIRD
 Well, welcome back to Davos and the World Economic Forum's Annual Meeting, and, uh, like last week, this episode is available as a video episode as well as an audio one, so do make sure to check us out wherever you watch your podcasts.

SAM JARRELL
 Yes. And I bet the views from Davos are quite gorgeous when it's a little bit brighter outside.

MICHAEL BIRD
 Yeah, if you can imagine beautiful mountains behind me, uh, snow topped houses, and, uh, oh, it's just, it's just a, it's a very, very beautiful view from here.

SAM JARRELL
 I believe it. 'cause I mean, last week we got an overview of the topics being discussed from our very own president and CEO Antonio Neri, and the views from that interview were gorgeous.

So tell me, what are we talking about this week?

MICHAEL BIRD
Well, this week we are returning to a classic Technology Now topic: Artificial Intelligence.

But, we’re talking more around AI than about it, I promise.

SAM JARRELL
Because AI doesn’t exist in a vacuum.

MICHAEL BIRD
 Exactly, Sam. Exactly. And this episode, we are going to be looking at sovereignty, sustainability, and trust, all in the context of AI.

What do they mean in theory and how do they work in practice? And friend of the show, Kirk Bresniker, chief architect of HPE Labs, has the answers. Now, Sam, aside from the occasional sound of helicopters, we swapped the busy rush of Davos for a nice walk through the mountainside forest to chat all about it.

SAM JARRELL
 Oh, that sounds lovely.

MICHAEL BIRD
Well, Kirk. We're at Davos. Again

KIRK BRESNIKER
Again

MICHAEL BIRD
thank you so much for joining us on the show. I thought I'd take us for a little alpine stroll, if that's all right.

KIRK BRESNIKER
Sounds great

MICHAEL BIRD
Okay, well, let's get going now.
Last year, uh, we talked a lot about AI, and I think we also talked a lot about sovereignty and sustainability. Has much changed since last year as it pertains to sovereignty and sustainability?

KIRK BRESNIKER
I think it has in some very interesting ways. When we were talking last year, everyone was like, I'm all in on AI outcomes, uh, country, company, community. But what I wanna know is: how do I assure supply of information, of energy, and of infrastructure? Yeah, and I think in the intervening 12 months, let's just say that nothing has gotten easier in this world. And so we're all still trying to understand, and help people understand, how they will get to those AI-driven outcomes. What do they need to do? How can they secure those supply chains? Uh, not only to assure that they have access, but that the access itself is guaranteed and secure.

MICHAEL BIRD
Could we just do sort of a few quick definitions? What do we mean by sovereign data and sovereign cloud?

KIRK BRESNIKER
So, when I think of sovereign data and sovereign cloud, I think about having control, predictable control, that no matter what happens in the world, I know that I will have access to these resources, whether it's information or infrastructure or the energy that powers both of them. How can I engineer systems to guarantee access so that, no matter what happens, I will continue to have those resources available to me?
Now, when we talk about sovereign data, you know what? The best way for me to control where the data is, is to always know where it is. Yeah. And guarantee I know about the infrastructure and the personnel that are helping me manage and access that data.
So that's another aspect of sovereignty. And it might not mean a country, it might mean a company as well. How can we all understand where our proprietary, our most important assets are, and how can we, again, guarantee access and assurance of supply to that information?

MICHAEL BIRD
Okay. Can we talk sovereign AI and sovereign AI factories?

KIRK BRESNIKER
 Sure. Uh, so when we're talking about sovereign AI and sovereign AI factories, we're talking about taking that data, taking that infrastructure, taking the information.

And using it first to create a model and then to infer over the model. And so when we think about that sovereign AI, we want to be able to assure ourselves that we are in control, we're in control of every aspect of producing that model, of maintaining its integrity, maintaining its security, and then, when we infer over it, maintaining control of the outcomes that come from that.

And one of the ways that we do that is by making sure the means of production are secure. And that's the AI factory.

So can we create the system, can we  run it independently? So let's say that something happened to the wide area network. Uh, is my AI factory still gonna be up and going? Do I have to worry about a trade dispute?

Do I have to worry about anything that might happen? Whether political or um, or geologic? Can I guarantee access to those outcomes? Yeah.

MICHAEL BIRD
 Alright, well, we speak, I feel, quite a lot about sovereignty on Technology Now, but it feels like it's still a hot topic here at the World Economic Forum. Like, why is that?

KIRK BRESNIKER
 Because there is no lack of, uh, interesting conversations about the relationships between countries right now, between economies, and who is controlling these technologies, who will be afforded access to them.

And, uh, you know, it certainly is a means by which people are trying to understand, how do I guarantee outcomes? Because it's seen as a fundamental enabler of competitiveness. Uh, but it's also something where communities all around the world wanna know that AI technologies are representing them, representing their language, representing the culture that they've been incorporated in, so that they have that ongoing access to information that really reflects, uh, their values and understanding.

MICHAEL BIRD
Yeah. How do you marry up the concept of sovereignty with the tagline for this year's World Economic Forum, um, a spirit of dialogue?

KIRK BRESNIKER
So I guess for me,
There are more dialogues, more one-on-one conversations, rather than the large multi-stakeholder conversations.
The big round table. Mm-hmm. I think we're having a lot of individual conversations, and that's okay. I mean, as long as it's not a monologue. Yeah, if it actually is a dialogue, I think that's still healthy. It might be a little more complicated, something that takes a little bit more time. Uh, we could wish that there was just a big happy round table, but you know, if that's not the situation we find ourselves in, we still need to make progress on utilizing these technologies and make sure that they're, uh, equitably accessible to everyone.

MICHAEL BIRD
Yeah. Okay. Well, the sort of, the next thing I wanted to ask you about was trust. How is sovereignty linked to trust?

KIRK BRESNIKER
Well, when we think about trust, uh, I at least think about how sometimes it's an overused word. Yeah. I mean, because if I'm trusting you, that means I don't have proof, and actually, as an engineer, what I'd rather have is irrefutable proof of something.

Now, when we have sovereignty, part of what we need to understand is: how do I gain measurable confidence that the systems I've put in place will continue to serve me and my community?

So part of this is understanding how we get to that proof, and that can be things that we're used to, like proof that I have redundancy in my networks, in my data centers, that I have resiliency in how I've engineered these systems. Uh, and it can also be that I need to understand, and perhaps test and prove to myself, that no matter what happens with relationships, uh, with politics, I'm guaranteeing myself I have this kind of access. And so, once I've done that, then I can trust the systems, but it's because I've engineered that outcome.

MICHAEL BIRD
What do we mean when we talk about trust in AI? Are we talking zero trust, which we've talked about on the show quite a few times, uh, or the trust people in organizations have in artificial intelligence?

KIRK BRESNIKER
I think there's a couple of aspects. Certainly when we talk about zero trust, that's where we're talking about proof.
I just simply cannot take the chance of having to rely on an individual not to be coerced, or a system not to be infiltrated. I need to have proof, continuous attestation.

Uh, and that's what zero trust means, that every interaction is verified, every interaction is authenticated.
And, uh, when we think about AI outcomes, we do think about that entire supply chain, all that information, all that infrastructure, all coming together, powered up with energy. And I want to be able to, again, authenticate that everything that's been happening is exactly what I was hoping to happen.
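Kirk's description of zero trust, every interaction verified and every interaction authenticated, can be sketched as per-message attestation. This is a minimal illustration using Python's standard hmac module; the shared key, function names, and payload fields are invented for the example, not any real HPE mechanism (real deployments lean on certificates and hardware roots of trust rather than a single shared key).

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # illustrative only

def sign_request(payload: dict) -> dict:
    """Attach an HMAC tag so the receiver can authenticate this interaction."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_request(message: dict) -> bool:
    """Zero trust: never assume, re-verify every message before acting on it."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_request({"action": "run_inference", "model": "maintenance-v1"})
assert verify_request(msg)       # an authentic message passes
msg["payload"]["action"] = "exfiltrate"
assert not verify_request(msg)   # any tampering is detected
```

The point of the sketch is the shape, not the crypto: every hop re-checks the evidence instead of trusting that an upstream party wasn't coerced or infiltrated.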

Now we also have another aspect of trust, and that is: is any particular AI outcome fit for a particular use?

Yeah. You know, it's one thing to say, oh, it's passed all the bar exams in all 50 states, uh, it's taken all these medical school entrance exams.

Does that mean that it's the right model to, uh, run predictive maintenance on my big, expensive piece of equipment in a factory? Now that's where we need to understand how we establish fitness for use, and that's part of a dialogue, uh, with your customer, listening to them and together evaluating particular technologies for, uh, an individual use.

And so it can be pretty great at a lot of things, but when you're an enterprise and you're trying to utilize AI, you really wanna move beyond that to proving that a particular combination of technologies is going to satisfy you, it's gonna satisfy your regulators, it's gonna satisfy your team members and your community, customers and partners.

MICHAEL BIRD
Can you sort of, I dunno, retrofit trust into an existing AI model? Or is it easier to sort of create it with that sort of trust in mind?

KIRK BRESNIKER
Well, it's hard if you ask your vendor, uh, where'd you get that data?
And they say, somewhere. Uh, well, you're like, well, I guess that erodes that ability to generate that comprehensive understanding.

So where'd you get the data? How much energy went in here?
All of those things about the engineering of these solutions: if, uh, if you don't have a good set of answers, then it's hard to measure that confidence.

MICHAEL BIRD
Can we pivot and talk about sustainability in AI? Um, it feels like a huge topic, and one which a lot of consumers view as really important. How do you integrate sustainability into an AI when it's being built?

KIRK BRESNIKER
You do that by understanding and recording, keeping the notes about how much energy, and not only how much energy, but where that energy came from. Right, was this sustainable energy? Was this carbon-intensive energy?
How many liters of water did it take to answer that question? How many joules of energy? Because you're making a decision. You're expending resources in order to get an outcome. The more you know about what went in, the more you can do that value comparison. Was it worth it?
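Kirk's "keeping the notes" could, in a minimal sketch, be a per-inference resource ledger. All the field names and figures below are invented placeholders for illustration, not measured values from any real model.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceRecord:
    query: str
    energy_joules: float  # metered energy spent answering this query
    water_liters: float   # cooling water attributed to it
    carbon_free: bool     # did the energy come from a sustainable supply?

@dataclass
class ResourceLedger:
    records: list = field(default_factory=list)

    def log(self, record: InferenceRecord) -> None:
        self.records.append(record)

    def totals(self) -> dict:
        """Aggregates needed for Kirk's 'was it worth it?' comparison."""
        if not self.records:
            return {"energy_joules": 0.0, "water_liters": 0.0, "carbon_free_share": 0.0}
        return {
            "energy_joules": sum(r.energy_joules for r in self.records),
            "water_liters": sum(r.water_liters for r in self.records),
            "carbon_free_share": sum(r.carbon_free for r in self.records) / len(self.records),
        }

ledger = ResourceLedger()
ledger.log(InferenceRecord("summarise quarterly report", 1200.0, 0.02, True))
ledger.log(InferenceRecord("draft customer email", 800.0, 0.01, False))
print(ledger.totals())
```

Once the inputs are recorded per outcome, the value comparison becomes arithmetic rather than guesswork.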

MICHAEL BIRD
Yeah. And, a bit like the trust question, can sustainability be reverse engineered into an existing AI?

KIRK BRESNIKER
Well, in some ways that's what we do now, because a lot of the closed models don't publish that data. Mm-hmm. And so if you go to some of the open websites that track these, you'll see very intelligent analysis trying to reverse engineer some of those input and output equations.

Uh, some of that resource intensity. That's about the best we have. And I guess part of this is us as consumers, both as individuals and as enterprises. Maybe we start saying, you know, I'd love to use your model, but until you can point to the data that tells me these things, maybe I need to choose a different model.
That's part of that fitness, uh, question: it's not just the output of the model, but the mechanisms and the means by which it was created. Do we need to understand that and demand that as consumers?

MICHAEL BIRD
Can things be done to make, um, AI more sustainable, AI models, I guess, more sustainable?

KIRK BRESNIKER
Uh, so, you know, one of the things that we mentioned last time, uh, I went back and reviewed our conversation, we used the word quadratic, which was important, and I've used it this…

MICHAEL BIRD
Probably the one and only time on the podcast that quadratic was used.

KIRK BRESNIKER
But, uh, yeah, I've actually used it this week as well. When we make a model 10 times bigger, it uses a hundred times more resources. Wow. So we just need to be thinking about that, because right now people are making models bigger. And the question is going to be, as the models continue to grow: is one gigantic model that has everything in it the right approach?

Hmm. Or smaller models? 'Cause that also means if we make a model 10 times smaller, it uses one one-hundredth of the resources. So maybe a collection of smaller models, fine-tuned with proper provenance, is what we'll see. And when we talk about agentic systems, that's where these collections of models are assigned a particular purpose, tuned for use.

And this is some of the research we've been doing at Labs. It might be the way forward. And again, a quadratic curve is great when you're winding it down towards savings rather than up towards increasing costs.
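Kirk's rule of thumb, a model 10 times bigger uses roughly 100 times the resources, is just a quadratic cost curve, and it works in both directions. A small sketch (the baseline model size and the cost units are arbitrary assumptions, and real scaling exponents vary by workload):

```python
def resource_cost(params_b: float, base_params_b: float = 7.0, base_cost: float = 1.0) -> float:
    """Quadratic scaling law: resource cost grows with the square of model size."""
    return base_cost * (params_b / base_params_b) ** 2

# 10x bigger than the 7B baseline: 100x the baseline resources
print(resource_cost(70.0))  # 100.0

# 10x smaller: roughly one one-hundredth of the baseline resources
print(resource_cost(0.7))
```

This asymmetry is the argument for collections of small, purpose-tuned models: shrinking the model wins back resources quadratically, not linearly.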

MICHAEL BIRD
 Presumably having smaller models that sort of have a particular purpose, um, like, maybe they're not just more sustainable or more efficient, like, presumably they can do things a bit faster?

KIRK BRESNIKER
They can do things faster. And also, you know, if I wanna prove provenance, it's easier to do. Yeah. When I'm not using every single byte we've ever recorded as a species. Yeah. Uh, and some of that is probably dubious information anyway.
So yeah, all these things add into that question of, as we move from this initial exuberance to getting something that's fit for purpose, I think we will see these changes, you know.

I think we're in this initial period, we're learning lots, we're exploring a lot. Uh, we will likely be humbled at some point. Uh, hopefully it won't be terrible, but, uh, you know, we will get bloody noses and have to figure our way out. But that's how you continue to improve, and all of this will motivate people to make these systems better.

MICHAEL BIRD
And it just takes time.

KIRK BRESNIKER
It takes time and effort, but I think with the will, you can accelerate that. But hopefully, you know, we'll see complex systems operated at superhuman levels of efficiency with radical transparency. 'Cause that's what gets people to buy in to sustainable systems: when they see the evidence and can know that, you know what, I just did my part. And having that, you know, giving that data to an AI system where you get that immediate, visceral feedback, might be what it really takes to prompt collective action.

MICHAEL BIRD
Kirk, those are all my questions. Thank you so much for joining us on Technology Now. I hope you've enjoyed the walk.

KIRK BRESNIKER
It's been nice. You know, it was a little bit to get up here, but the view is more than worth it.

MICHAEL BIRD
Well, hopefully the next time we interview you, I'll find somewhere suitably as interesting and nice to look at.

KIRK BRESNIKER
Well, I mean, we've done Davos twice. We've done two times in the garage, and we've been there, uh, you know, in the briefing center, looking at all that great HPE gear. So I think it's pretty…

MICHAEL BIRD
Well, thank you so much for joining us on Technology Now.

KIRK BRESNIKER
Absolutely.

SAM JARRELL
 That was pretty interesting.

You know, it feels like sovereignty and trust and ethics just keep coming up throughout this entire event. Uh, which, I mean, makes sense, right? AI is the new hot thing, and you need to make sure that your data is secure, that you can actually trust the AI that you're using to work securely, and even whatever applications you use it for.

I really found it interesting when you guys were discussing, do we trust an AI that has passed the bar and medical exams in all 50 states, or whatever country you're in? We might trust a person who has done that, but I kind of wonder whether we'd be comfortable having AI give us, like, an official medical diagnosis. I dunno.

MICHAEL BIRD
 There's definitely a conversation around, um, having a human in the loop, like, where is having a human in the loop something that we should be doing? I think ultimately it comes down to, and I think we talk about this a lot on the show, um, playing to a particular technology's strengths, but also considering, from a human perspective, what are we as humans comfortable with?

One of the things that I found really interesting chatting to Kirk was the conversation around, um, sustainability. Can we use maybe smaller models to do things more efficiently? So rather than one massive model that is basically a general-purpose model, is it better to have smaller, specialized models and then have agents call on those models to answer particular things at different times?

Um, and I wonder if that's the direction that this technology is going. Um, because it sort of feels like some of the technology that we have in our pockets has sort of gone in that direction.
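The pattern Michael describes, agents calling on small specialized models instead of one giant general-purpose one, might be sketched as a simple dispatcher. The model names and keyword routing below are hypothetical stand-ins; real agentic systems use far more sophisticated routing than string matching.

```python
from typing import Callable

# Hypothetical small, special-purpose models (stand-ins for real fine-tuned models)
def maintenance_model(query: str) -> str:
    return "maintenance-model: schedule an inspection"

def legal_model(query: str) -> str:
    return "legal-model: flag for counsel review"

ROUTES: dict[str, Callable[[str], str]] = {
    "vibration": maintenance_model,
    "contract": legal_model,
}

def route(query: str) -> str:
    """Dispatch each query to a small specialized model instead of one giant one."""
    for keyword, model in ROUTES.items():
        if keyword in query.lower():
            return model(query)
    return "general-model: fallback answer"

print(route("Unusual vibration on pump 7"))   # handled by the maintenance model
print(route("Review this contract clause"))   # handled by the legal model
```

Each specialized model only ever sees queries in its own domain, which is also what makes Kirk's provenance argument tractable: a narrow model has a narrow, auditable training set.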

SAM JARRELL
 I agree with you. I think, uh, you know, all of these conversations around, you know, trust, sustainability, sovereignty.

Yeah. They kind of connect back to three mega trends that HPE has been seeing in the market. One of them is, like, the great VM reset: costs and things around using VMs in the public cloud, um, have made it so that people need to reexamine their operating models, um, and make sure that they're designed for speed, resiliency, and control.

The AI gold rush is essentially that AI isn't optional, it's urgent, and the risk isn't adoption, it's integrating and scaling poorly. So, like you were talking about, what are the bespoke uses of certain things? Um, you know, a lot of governments and things are chasing control, whereas enterprises are chasing ROI.

Um, and then the security and resiliency imperative being the final one. Every outage, audit, cyber attack proves it, you know, resiliency is kind of that currency of trust, and so you kind of need to make sure that security and continuity are built in, not bolted on. Um, so you have to be able to stay running 24/7 globally, whether you are a government, an enterprise, or, you know, a nonprofit. Whoever you are, if you have an outage, it doesn't just impact your business.

It impacts the trust currency that you have with your customers, or with the people who use your services. So I feel like these things are all kind of coming together in those giant mega trends.

MICHAEL BIRD
 Yeah, I, I would agree with you on that. Now, um, Sam, I wanted to ask you something, which I did ask Kirk, but I want to get your opinion on before we hear the answer.

SAM JARRELL
 Okay. What's, what's the question?

MICHAEL BIRD
 Okay. So given how interconnected the world has become, is true sovereignty possible? What do you think?

SAM JARRELL
Personally, I don't think it's gonna be 100% possible, in the same way that, you know, even today you have people, you have hacks, and you have breaches of security. It just will not be 100% possible, but that doesn't mean we shouldn't strive for it.

MICHAEL BIRD
 I think you're being very realistic there. And I think that's always the approach you take from a cybersecurity perspective.
But anyway, let's hear what Kirk had to say when I posed the same question to him.

KIRK BRESNIKER
You know, it's an interesting question, because, you know, any virtue misapplied becomes vice, and sovereignty can quickly slip into isolationism. And when we put up a silo, uh, we create opportunity for adversaries and for mistakes. Yeah. And so I think it reflects the world we're in. I think it is possible to find that balance point, uh, where you're guaranteeing access and security, guaranteeing that your community and culture is represented in AI outcomes, uh, and do it in a way that doesn't, um, slip into isolation, a silo.

SAM JARRELL
Okay, that brings us to the end of Technology Now for this week.

Thank you to our guest, Kirk Bresniker

And of course, to our listeners.

Thank you so much for joining us.

MICHAEL BIRD
If you’ve enjoyed this episode, please do let us know – rate and review us wherever you listen to episodes and if you want to get in contact with us, send us an email to technology now AT hpe.com and don’t forget to subscribe so you can listen first every week.

Technology Now is hosted by Sam Jarrell and myself, Michael Bird
This episode was produced by Harry Lampert and Izzie Clarke with production support from Alysha Kempson-Taylor, Beckie Bird, Hilary Fisk, Allison Gaito, Alissa Mitry, and Renee Edwards. Our video editor is Leon Radschinski-Gorman and our theme music was composed by Greg Hooper.

SAM JARRELL
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman and Jacqueline Green and our social media designers are Alejandra Garcia, and Ambar Maldonado.

MICHAEL BIRD
Technology Now is a Fresh Air Production for Hewlett Packard Enterprise.

(and) we’ll see you next week. Cheers!

SAM JARRELL
Bye y’all