HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.
Aubrey Lovell (00:10):
Hey, everyone, and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell.
Michael Bird (00:21):
And Michael Bird. Now, this week we are bringing you another episode from our trip to Davos for the 2025 World Economic Forum Annual Meeting. Now, in the last episode, we took a walk with HPE's president and CEO, Antonio Neri. This week we're staying a little bit more static and talking more about AI. We'll be discussing where we've come from, and where we are headed.
Aubrey Lovell (00:46):
This is one you don't want to miss. So, if you're the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is for you. And we're also making a video version of this episode, so do check out the HPE website or YouTube channel as well. And if you're listening, subscribe to Technology Now on your podcast app of choice. All right, Michael, let's get into it.
(01:10):
The AI House is a nonprofit organization based out of Davos, which brings together leading lights in the field to talk about the future of the technology, and how it can benefit the greatest number of people with the lowest barriers to entry. Now, HPE has a strong presence at the house this year, cementing its position as a leading global figure in artificial intelligence.
Michael Bird (01:30):
One of the key speakers this year was HPE Vice President, fellow and Chief Architect at Hewlett Packard Labs, Kirk Bresniker. I had the chance to catch up with Kirk at Davos for an all-encompassing chat about the past, present, and future of AI.
(01:48):
Kirk, thank you so much for joining us.
Kirk Bresniker (01:50):
Thanks, Mike.
Michael Bird (01:51):
We are here in beautiful Davos, Switzerland, at the World Economic Forum Annual Meeting. And while we're here, you are hosting the AI House.
Kirk Bresniker (02:02):
Yes, that's right.
Michael Bird (02:03):
So tell me a little bit about the AI House.
Kirk Bresniker (02:04):
So, the AI House, this is our second year at the AI House, having a space where our academic, industrial, and public sector partners can all gather. Because it's clear: everyone here wants to understand the impact of AI. How can they utilize it, what cautions do they need to take, how do they use the right technology and have confidence that they're using it the right way? And the AI House is a space where people can come, they can listen to panels, they can just grab a cappuccino and look for one of us with our AI House pins and ask a question. Because they all have burning questions. They all want to know, "How do I utilize this technology in the right way?" And then we get to have the fantastic conversations.
Michael Bird (02:49):
So in the spirit of the AI House, I'd love to take a look back at maybe the past, the present, and maybe the future of AI, as you see it. So, take us back to 2020, 2021. What did the landscape for AI look like?
Kirk Bresniker (03:05):
Well, for us at Hewlett Packard Enterprise, we were one year into understanding how we lived up to our AI ethics principles. Back in 2019, our global ethics and compliance office commissioned an exhaustive external audit of our entire company and our human rights position. And while we got praise for being not just leaders within our industry, but global leaders in things like forced labor and conflict-free minerals, the message was clear: the three things that Hewlett Packard Enterprise will have to worry about with human rights in the future are AI, AI, and AI.
Michael Bird (03:37):
Wow.
Kirk Bresniker (03:38):
AI in our product, AI in our process, and AI when people build on our goods and services. So, we came together as a global community, and in partnership between Hewlett Packard Labs and Global Ethics and Compliance, we wrote up our principles. And we were then just starting to learn how to live up to them.
Michael Bird (03:52):
So, I guess back in 2020, 2021, AI wasn't really in the public consciousness, was it? It was seen as maybe a bit of a weird thing.
Kirk Bresniker (04:01):
It was. As a matter of fact, here at Davos, we introduced what we call the C-suite toolkit for AI, which was: if you're the one person in your organization who thinks that AI will be material to your company's future, how do you have a conversation with your board members, your peers, your team members, your community, to say, "Okay, this is an interesting technology"? But then that first large language model came out in November '22, and suddenly everyone was in, because suddenly all knowledge work seemed to have the possibility of being affected, augmented, really revolutionized by large language model technology. And that sort of opened up the floodgates and said, "Okay, we all have to be part of this conversation."
Michael Bird (04:43):
Yeah. Was there any suggestion that by 2025 we'd have the sort of generative AI field and the AI landscape that we have today?
Kirk Bresniker (04:53):
Well, I think it's been incredibly challenging to predict the adoption and the continuous evolution. There are literally daily breakthroughs in these technologies, as well as people committing unprecedented resources to training these models. It's incredible. Back when we wrote our original AI ethics principles, one of our principles was that AI should be responsible. And I remember I had to argue to get some sustainability language into that principle, because back in 2019 it was like, "I could run this model on my laptop. What is the big deal?" And suddenly we're talking about not just single digits, perhaps double digits, of global energy consumption going to this technology.
(05:41):
So, it is going to materially affect our decisions on how we power this planet. And the assumption is we'll be better for it. But being an engineer, I don't want to assume, I want proof. I want to see the math. I want to know how we can measure these technologies, and measure their effectiveness.
Michael Bird (06:01):
So, grid demand is growing so fast for these AI models. Why is that?
Kirk Bresniker (06:08):
So, one of the things that you have to understand about these models is, to be nerdy about it, the consumption grows quadratically. And by that we mean, if I make a model 10 times bigger, I need a hundred times the resources in order to create it. And that's just the first step, because first I create a model, and that's energy- and computationally intensive, massively data intensive too. And I have to move the data into the training. All these things add up.
(06:35):
The second phase, though, is inference. I've created a model, now I need to utilize it, I supply it with new data. And that could be something that grows much, much faster. Inference on models is expected to grow to about five times model training. And just think about pretty soon everything we do: we've all got smartwatches on, we've all got mobile devices. When every human interaction begins to incorporate hundreds of thousands of model inferences per hour, 24 hours a day, seven days a week, and hopefully all 8 billion people will be able to utilize this technology, that is going to have a tremendous impact on the globe.
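To put rough numbers on that, here is a back-of-the-envelope sketch in Python of the two effects Kirk describes: the square law on training resources, and the sheer volume of inference. Every baseline figure is an illustrative assumption drawn from the conversation, not an HPE number.

```python
# Back-of-the-envelope sketch of the scaling described above.
# Baseline numbers are illustrative assumptions, not HPE figures.

def training_resources(scale: float, base: float = 1.0) -> float:
    """Quadratic law: a model `scale` times bigger needs scale**2 the resources."""
    return base * scale ** 2

print(training_resources(10))  # a 10x bigger model -> 100x the resources

# Inference volume, using the interview's hypothetical figures:
people = 8_000_000_000           # "all 8 billion people"
inferences_per_hour = 100_000    # "hundreds of thousands ... per hour"
hours_per_year = 24 * 365

total = people * inferences_per_hour * hours_per_year
print(f"{total:.1e} model inferences per year")  # ~7.0e+18
```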
Michael Bird (07:18):
And can our grids handle it? Can our grids keep up?
Kirk Bresniker (07:22):
Right now the answer is no. We don't have the capacity. We're going to have to make hard decisions. So, think about systems like El Capitan, which we just released and turned on for the Lawrence Livermore National Laboratory, helping advance the science mission of the Department of Energy. Those systems draw 25 to 30 megawatts. I think about it in terms of homes: that's about 18,000 average US homes. The AI data centers that people are contemplating building now are gigawatts, a thousand megawatts. That's the equivalent of 833,000 US homes.
Michael Bird (07:56):
Wow.
Kirk Bresniker (07:56):
And it's not just going to be one gigawatt data center, it could be potentially dozens. So, think about that. That's tens of millions of US homes, that is the equivalent that we're going to be utilizing for AI training and inference. That's going to be impactful. So, the quadratic holds: we make these models bigger, and the resources they demand just go up by a square law.
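As a rough check on those home equivalents, a short Python sketch, assuming an average US home draws about 1.2 kilowatts on a continuous basis (roughly 10,500 kWh per year; that baseline is our assumption, not a figure from the interview), lands close to Kirk's numbers.

```python
# Rough check of the megawatts-to-homes equivalents above.
# The ~1.2 kW continuous draw per average US home is an assumption
# for this sketch, not a figure from the interview.

AVG_HOME_KW = 1.2

def homes_equivalent(megawatts: float) -> int:
    """Number of average US homes a given continuous load could supply."""
    return round(megawatts * 1_000 / AVG_HOME_KW)

print(homes_equivalent(25))          # El Capitan-class: ~21,000 homes
print(homes_equivalent(1_000))       # a 1 GW AI data center: ~833,000 homes
print(24 * homes_equivalent(1_000))  # two dozen such sites: ~20 million homes
```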
Michael Bird (08:23):
I suppose another challenge that we are facing today is around networking. So, being blunt, can our networks cope? Can our networks keep up with the demands of AI right now?
Kirk Bresniker (08:35):
And so that's where we always want to think about every AI outcome as three supply chains coming together.
Michael Bird (08:42):
Yeah.
Kirk Bresniker (08:42):
Infrastructure, energy, but the last one is information. How do I get the data into the data center to train the model? And then how do I get the data to the model when I need to do the inferencing? And again, when we think about billions of people, hundreds of thousands of inferences on hundreds of thousands of models, understanding how we get that data into and out of the data center, and how we get the data inside the data center into and out of the chips, all of these are very material challenges that we need to understand, and we need to predict what the demand is going to be. Because we don't want to be limited by any of those supply chains: limited by energy, limited by infrastructure, limited by information. We want human ingenuity to thrive, but that means solving some really complex equations, and we want to do it ahead of time.
Michael Bird (09:36):
So Kirk, is the answer then just to move the data closer to the compute? Not having it on public cloud, but bringing it into your own environment?
Kirk Bresniker (09:45):
Certainly the answer will be that we need to be thoughtful. And we need to do the math and understand: when is it best, when is it the most sustainable, to deploy a model at scale and put it closer to the data? Do I want to put a million copies of the model in a million edge devices, or do I want to send a million streams of data back to that centralized data center? The energy consequence of data movement is actually the dominant factor. So, that calculus means understanding not only how much energy, but also the emissions: how carbon intensive is the energy that's available? Perhaps all those edge devices have a little solar panel, and they're harvesting just enough energy to capture the data, infer on it, and then send back conclusions. That could be vastly more sustainable than just streaming high-def signal intelligence back to a centralized data center. No matter how efficient that data center is, the calculus might still work out: a distributed system could be the right answer.
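A minimal sketch of that calculus follows; every per-bit and per-inference energy figure here is a hypothetical placeholder, and the point is the structure of the comparison, not the specific values.

```python
# Minimal sketch of the edge-vs-centralized energy calculus.
# Every constant is a hypothetical placeholder, not a measured value.

J_PER_BIT_NETWORK = 1e-7  # energy to move one bit to the data center (assumed)
J_PER_INFER_EDGE = 0.5    # one inference on a small edge model (assumed)
J_PER_INFER_DC = 0.05     # one inference in an efficient data center (assumed)

def centralized_joules(bits_streamed: float) -> float:
    """Stream the raw data to the data center and infer there."""
    return bits_streamed * J_PER_BIT_NETWORK + J_PER_INFER_DC

def edge_joules() -> float:
    """Infer on-device; the conclusion sent back is negligibly small."""
    return J_PER_INFER_EDGE

# One high-def video frame is ~50 Mbit, so data movement dominates:
frame_bits = 50e6
print(f"centralized: {centralized_joules(frame_bits):.2f} J")  # ~5.05 J
print(f"edge:        {edge_joules():.2f} J")                   # 0.50 J
```

With these placeholder numbers the edge wins by an order of magnitude, but flip the assumptions (tiny payloads, a very clean grid at the data center) and the centralized option can win. That is exactly the math Kirk is arguing we should do ahead of time.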
Michael Bird (10:47):
So, sort of edge computing merged into the way that your AI strategy or AI models work within your organization?
Kirk Bresniker (10:56):
That's absolutely right. And the other thing we think about, when we think about where to place the compute, where to place the model relative to the data, is the speed of light. Professor Einstein is still in full force. And so there's latency: sometimes, if I can make a decision quickly enough, it can make a profound difference in a business opportunity or a public safety challenge. So, understanding: should I be pushing the compute out to where the data is, and taking out that round-trip time from the sensor to the cloud and back to the sensor again?
Michael Bird (11:27):
So Kirk, the example I've heard a few times is self-driving cars. There's the example of somebody walking in front of your car. Should your car be sending that data to a data center, for the data center to say, "Oh yes, press the brakes," and then send that back to the car? If that data center is on the other side of the world, there's probably not enough time to react. So, that compute should be in the car.
Kirk Bresniker (11:49):
It should be in the car when you're making those kinds of latency-sensitive decisions, yes. As well as for the fact that sometimes communications fail. So, we have to be balancing that out. But the self-driving car is an interesting case too, especially when we think about the fact that when a human is driving that car, the human burns about a hundred watts of glucose. And if that person happens to be a vegan, that's a very sustainable energy source. When we think about autonomous vehicles, we think about not a hundred watts, but three to four hundred watts, maybe even a thousand watts. So, does the energy equation balance out when we replace all the functions of a human? Or is really good driver assistance maybe the right answer? Again, these are subtle questions, deep questions. They're ones where we really want to do the work, and do it ahead of time, before we go all in on a solution and then find out, "Oh, you know what? This actually isn't saving us anything."
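For a sense of scale on the latency point, here is a quick speed-of-light bound on that round trip. The fiber slowdown factor and the distances are assumptions, and real networks add queuing and routing delay on top of this optimistic floor.

```python
# Optimistic speed-of-light bound on the sensor-to-cloud round trip.
# The fiber factor and distances are assumptions; real networks are slower.

C_KM_PER_S = 300_000  # speed of light in vacuum, km/s
FIBER_FACTOR = 1.5    # light in fiber travels at roughly 2/3 of c (assumed)

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / (C_KM_PER_S / FIBER_FACTOR) * 1_000

print(round_trip_ms(20_000))  # other side of the world: ~200 ms
print(round_trip_ms(50))      # nearby edge site: ~0.5 ms

# At 100 km/h a car covers about 5.6 metres in those 200 ms,
# before any network queuing or model compute is even counted.
```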
Michael Bird (12:47):
Which leads really nicely on to the workforce question. Because I think workforce is potentially another issue. What are the challenges with workforce?
Kirk Bresniker (12:55):
So I think, personally, and this is from having conversations with my friends and my children, we all should be examining the work that we do and asking, "Does this work have the potential to be improved using an AI model?" I don't think about workforce replacement, I think about workforce augmentation and productivity increase. And I think if we want to be blunt, we'd say that no matter what your profession is, if you're an artist, if you're an engineer, if you are a leader, public or private, those professionals who embrace AI technologies will replace those who don't. So, it's a wake-up call for all of us.
(13:40):
Now, again, we need to do the math. You need to be thinking about all of what you do, and how AI is either fit or not fit for purpose. And be really sober and ask hard questions of your vendors, your partners, or your development teams: about your regulatory burden, about what you need to do in terms of robustness, privacy, security, all of these things, weighed against sustainability. But if you've asked those hard questions and it's beneficial, you should be jumping on it.
Michael Bird (14:11):
We often talk of there being a shortage of skilled AI specialists. Do you feel like that is something that as a society we're starting to overcome?
Kirk Bresniker (14:22):
Certainly it is a very attractive field of study. But you are correct. I mean, the challenge we have when it permeates everything we are doing is that we have to ask the question, "Should AI be incorporated into this business process, into this personal productivity process?" And that's very demanding. But hopefully part of this is more productive computer science, more productive and sustainable infrastructure. That is what will enable more people to participate more equitably, not only in the consumption of this technology, but hopefully on the supply side. This is a chance for them to participate in a way that they couldn't before, because more of what it takes to participate on the supply side of technology is now easier and more accessible when you augment yourself with AI.
Aubrey Lovell (15:22):
Thanks, Michael and Kirk. There are some amazing insights in there, and a wonderful conversation.
Michael Bird (15:27):
Right. Well, now it's time for Today I Learned, part of the show where we take a look at something happening in the world that we think you should know about. Aubrey, what have you got for us this week?
Aubrey Lovell (15:38):
Well, Michael, you and I love a good space story, and this one is slightly silly, so prepare yourself. So, national flag-waving is a big thing when it comes to exploring our cosmos, and every spacefaring nation wants to be first. The problem is, flags don't wave in space; they largely stay still... Until now, that is. China is planning on planting an electromagnetically charged flag on the moon as part of its 2026 Chang'e-7 mission, a flag that will use alternating currents to gently flutter on the barren, atmosphere-free lunar surface.
(16:13):
The high-tech stunt is the work of China's Deep Space Exploration Laboratory, and appears to be just that. While China's long-term aims are scouting for lunar water and eventually helping to build international bases for long-term habitation, the fluttering flag is apparently mostly just to show off China's capabilities in space. Very fascinating.
Michael Bird (16:33):
Yeah, that was pretty cool. Thank you, Aubrey.
(16:35):
Right. Well, now it is time to return to my interview with Kirk Bresniker to talk about the past, the present, and the future of AI.
(16:46):
So, it feels like the developments in AI, as you alluded to, are coming daily. I mean, it feels like we're in a period of rapid development at the moment. But presumably at some point we're going to hit some ceilings. So, what ceilings are we going to hit first? Are they technical ceilings, ethical ceilings?
Kirk Bresniker (17:05):
I think, when we look at it, the first one is just the resources. Training the leading-size large language models today is already consuming all of the high-quality, publicly available data on the internet. We've taken and we've eaten all the bytes. And then if we think about the energy part of the equation, the overall cost, the capital and operational costs: AI is voracious. And it's not just energy. It is energy, it is water, it's talent, it is semiconductor wafer starts, millimeters of area on the dies that we produce. It is consuming.
(17:48):
If we just extrapolate, dot, dot, dot, what will happen in short order? We would predict that by the end of the decade, if we just continue apace, it will cost more to train one model one time than we currently spend on global IT. Now, we're going to hit a ceiling before that. But that's also assuming two things. It assumes we continue to pursue larger and larger models, as opposed to pursuing more tailored and smaller ensembles of models. And it also assumes that we don't change either the algorithms or the fundamental technologies that we've used to achieve them. And that's where we happen to be very fortunate. CPUs, GPUs, semiconductors, they were all finally there at the right time. The data was there because, thank you, internet. So it came together, but that might just be that first step. And the next real leap forward in accessibility and equity is going to come from replacing that first generation of technologies with the next.
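To illustrate that "dot, dot, dot" extrapolation, here is a toy compounding model in Python. The starting cost, the global IT baseline, and the growth rate are all assumptions for the sketch rather than forecasts, yet the crossover still lands uncomfortably close.

```python
# Toy extrapolation of frontier training cost vs. global IT spend.
# All three numbers below are assumptions for the sketch, not forecasts.

train_cost = 1e9  # assumed cost of one frontier training run today, USD
global_it = 5e12  # assumed annual global IT spend, USD
growth = 4        # assumed multiplier on training cost per year

year = 2025
while train_cost < global_it:
    year += 1
    train_cost *= growth

print(year)  # with these assumptions, the lines cross in the early 2030s
```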
Michael Bird (18:53):
So, we've talked about the past and the present. I just want to really quickly touch on the future. One of the challenges we talked about was power. How do you see the challenges around powering AI being solved? Is it building more power plants, more sustainables, more wind turbines, solar panels, nuclear, a mixture of everything, using less power? How are we going to solve that challenge?
Kirk Bresniker (19:18):
So, I think there's certainly the question of operational efficiency. Given the resources that we have today and our march towards a more sustainable, carbon-free grid, can we use AI in order to operate those systems at greater levels of efficiency, superhuman levels of efficiency? That is where the predictive capability of AI could be instrumental. But we also want to be careful. When we think of the industry, the regulator, and the community as a three-body problem, which is hard to solve, one way to solve it is to dehumanize the population: to surveil them, to curtail their activity, to meter them, to take away their agency. And AI is a very powerful force for changing opinions.
(20:12):
So, we're trying to walk that line, to understand how we enable greater predictive capability, just-in-time, superhuman optimization of conventional technologies, without violating privacy, without removing people's agency. Beyond that, there's adding more capacity. And we have to remember, as much as we talk about this energy shifting towards AI, there are 750 million people in the world who have zero access to electricity, right? So, perhaps solving that simultaneously is something we also want to use these technologies to do.
(20:47):
But beyond that, it is understanding how to achieve those next levels of efficiency, sometimes by leaving behind the technology that got us this far: conventional computational technologies. So whether it is photonic, quantum, or neuromorphic, all of these exotic technologies, either for training in the massive data center, or at the edge, especially where we can do ultra-low latency, ultra-low power inferencing, could be, again, the key distributed technologies that allow us to do a lot more with a lot fewer resources.
Aubrey Lovell (21:28):
Thanks so much. And you can find more on the topics discussed in today's episode in the show notes.
Michael Bird (21:32):
Right. Well, we are getting towards the end of the show, which means it is time for This Week in History, a look at monumental events in the world of business and technology that have changed our lives. Aubrey, can you remind me of last week's clue?
Aubrey Lovell (21:44):
The clue last week was, it's 1795, and one Frenchman had this tasty prize in the can.
Michael Bird (21:51):
Ooh.
Aubrey Lovell (21:51):
Did you get it, Michael?
Michael Bird (21:53):
I don't think I did. Maybe something to do with canning food? I don't know. Anyway, what's the answer? What's the answer?
Aubrey Lovell (21:58):
Well, it was indeed the invention of canning food, or rather, the announcement by the government of France of a prize of 12,000 francs for inventing a method of preserving food and transporting it to its armies. And it wasn't until 1809 that a chef, Nicolas Appert, managed to perfect his method of tightly sealing food inside a canister and then heating it for a certain period. Canning in metal cans was invented in Britain one year later by Peter Durand. In both cases, the heating process killed off any microorganisms in the food, though that process wasn't fully understood for another 50 years.
Michael Bird (22:33):
Amazing. Thank you, Aubrey. Well, the clue for next week is: it's 1959, and this all-in-one was an all-star IT smash hit. Okay. Any ideas, Aubrey?
Aubrey Lovell (22:46):
I have a feeling that it could also involve space, but I'm not quite sure. Producer Sam loves some space.
Michael Bird (22:53):
I hope it's space. Anyway, that brings us to the end of Technology Now for this week.
Aubrey Lovell (22:58):
Thank you so much to our guest, HPE Vice President and Chief Architect at Hewlett Packard Labs, Kirk Bresniker, and to you, our fans. Thank you so much for joining us.
Michael Bird (23:07):
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird. And this episode was produced by Sam Datta-Paulin and Alicia Kempson-Taylor, with production support from Harry Morton, Zoe Anderson, Lincoln Van der Westhuizen, Alison Paisley, and Alyssa Mitri.
Aubrey Lovell (23:21):
Our social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino, and our social media designers are Alejandra Garcia and Ambar Maldonado. Technology Now is a Lower Street production for Hewlett Packard Enterprise. And we'll see you next week. Cheers.