In this episode we are looking at how AI is forcing us to rethink efficiency - and pushing us to do better.
As the energy usage of our IT infrastructure - especially data centers - creeps ever higher, organizations are thinking more seriously about how to make the whole process more efficient, and get more out of the tech and resources we have - potentially making AI not only more sustainable, but also cheaper.
And that’s where today’s guest comes in. Discussing the topic with us is Dr John Frey, Chief Technologist for Sustainable Transformation at Hewlett Packard Enterprise.
This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.
HPE news. Tech insights. World-class innovations. We take you straight to the source, interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.
Speaker 1:
Hey, everyone, and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell.
Speaker 2:
And Michael Bird. So in this episode, we are looking at how AI is forcing us to rethink efficiency and pushing us to do better. As the energy usage of our IT infrastructure, especially data centers, creeps ever higher, organizations are finding themselves looking for new and innovative ways to still take advantage of the opportunities of technologies like AI whilst also staying on track with their sustainability goals and even net zero targets. So that's what we'll be examining in this episode. We'll be looking at how the extraordinary resource cost of the IT revolution is forcing us to be more efficient.
Speaker 2:
We'll be looking at how we can get the most out of our AI infrastructure, and we'll be asking whether sustainability and AI can ever really coexist.
Speaker 1:
Well, as you know, if you're the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is for you. And if you haven't yet, subscribe to your podcast app of choice so you don't miss out. Okay, Michael. Let's get into it.
Speaker 2:
Let's do it.
Speaker 1:
If you've been listening to Technology Now for a while, you'll be no stranger to the sobering statistics around our AI energy usage.
Speaker 2:
Yeah. So according to the World Economic Forum, the greenhouse gas emissions of certain generative AI providers have risen incredibly quickly. In fact, they're up 50% in the last five years. That's a rise largely tied to increased energy demands in data centers, and we've linked to that report in the show notes. Meanwhile, a report by Goldman Sachs found that AI is poised to drive a 160% increase in the amount of energy required to power our data centers by 2030, resulting in data centers using 4% of the world's electricity by the end of the decade.
Speaker 2:
And we've linked to that report also in the show notes.
Speaker 1:
And it's not just electricity usage. All that power generates heat, which needs cooling, and that's leading to massive increases in water usage worldwide. It's not all doom and gloom, though. The challenge of running our AI is encouraging organizations to think more seriously about how to make the whole process more efficient and get more out of the tech and resources we have, potentially making AI not only more sustainable, but also cheaper.
Speaker 2:
And that's where today's guest comes in. Dr John Frey is the Chief Technologist for Sustainable Transformation at Hewlett Packard Enterprise. John, thank you so much for joining us. Welcome to the show.
Speaker 3:
Yeah. It's great to be back.
Speaker 2:
So, John, how do we start to think about AI as an efficient and sustainable technology?
Speaker 3:
It's a great question, and I speak at a lot of conferences around the world on the topic. And often, we see a variety of concepts mixed together in those conversations. So I think it's really helpful to think about this topic in two pieces. One is sustainable AI or efficient AI, and that's how do we make the AI solution in the first place as efficient as it can be. Then the other half is AI for sustainability.
Speaker 3:
What can we do with AI to decarbonize, or to drive positive social or environmental progress? So AI for sustainability is really those AI solutions that provide social and environmental good. In fact, you had Colin Bash on a couple of episodes ago talking about some of the things that Hewlett Packard Laboratories is doing there. But it's also things like how do we do manufacturing inspection to find defects before that product ever gets to market and has to come back? It's looking at ways to drive efficiency in renewable energy systems or other types of systems that serve to decarbonize.
Speaker 3:
It's doing things like recognizing the start of pandemics so we can develop vaccines and control that pandemic before we lose more lives. So that's AI for sustainability. We're actually using AI to do things for social and environmental good. But in our practice, we actually focus on the first piece, and that's sustainable AI. So how do we look at the AI solution in total, drive efficiencies there, and have it be as efficient as it can be?
Speaker 3:
And this is really important because many of these AI systems have been around for a number of years, and we have to remind ourselves that AI has existed since the nineteen sixties. So this is not new technology, but some of the pieces are evolving as we speak. It's how do we think about this AI solution from a power efficiency perspective and from a resource efficiency perspective, and really make sure that these AI solutions being developed to do great things actually have as small an environmental and social footprint as they can. And so that's where a lot of the work is happening. And some in technology would say, well, this is just another workload.
Speaker 3:
But as we have thought about it, and as we've worked with customers on IT efficiency over the past twenty five years, what we've found is there are actually some nuances to AI efficiency that really make it worth paying attention to.
Speaker 2:
You said, I think recently in a talk, that two thirds of HPE's carbon emissions come from customer use. So how do AI suppliers start to fix that without it having an impact on customers?
Speaker 3:
Yeah. It's a great question. Let me start by saying, for those in the audience who aren't really versed in carbon accounting, and I'm not an expert in that topic either: unlike financial accounting, where if you gave me a thousand euros, it came off your ledger, your bank, if you will, and onto my ledger, my bank, if you will, carbon accounting doesn't work that way. You actually do double counting. So for example, when HPE sells a customer an AI solution, the hardware and software that go into an AI solution, and the customer runs that solution, it's on their carbon accounting because they own it, they're using it, and they're getting beneficial use out of it.
Speaker 3:
So that's in their direct operational control. But in carbon accounting, you're also responsible for upstream implications, so what your supply chain brings in terms of emissions, and also downstream: when your customers use your products, you're responsible for those emissions as well. So in HPE's case, only about 2% of our emissions come from actually running our company, operating our own technology, and those sorts of things. About a third is upstream supply chain implications of bringing our products and solutions to market, and the biggest piece by far, about two thirds, is when our customers use our technology. So I tell you that to say, when we're thinking about that largest piece from an HPE perspective, know that there are multiple stakeholders there trying to reduce those emissions.
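The split John describes, roughly 2% own operations, a third upstream, and two thirds downstream, is simple share arithmetic over scope-style categories. A minimal sketch, using only the approximate proportions mentioned in the conversation (the unit values are illustrative, not real accounting data):

```python
# Toy sketch of a scope-style emissions breakdown. The figures are the
# rough proportions mentioned in the episode, in arbitrary units --
# not actual HPE accounting data.

def emission_shares(emissions_by_scope):
    """Return each category's share of total emissions as a percentage."""
    total = sum(emissions_by_scope.values())
    return {k: round(100 * v / total, 1) for k, v in emissions_by_scope.items()}

breakdown = {
    "own operations": 2,            # ~2% running the company itself
    "upstream supply chain": 33,    # ~a third bringing products to market
    "downstream customer use": 65,  # ~two thirds when customers use products
}

shares = emission_shares(breakdown)
# downstream customer use dominates, which is why vendor and customer
# both count (and both work to reduce) the same emissions
```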
Speaker 3:
The customer is looking to reduce them in many cases because they're their own operational emissions. For us at HPE, we're trying to reduce them as well because they're our downstream, or scope three, emissions. So we're actually coming together to try to solve this challenge. So what does it mean? It means we have to work very closely with our customers to not only give them the most efficient solutions, but then power those solutions with renewable or carbon free energy so that we can take those emissions to zero.
Speaker 3:
Because, for example, we deliver our products to customers in a balanced power-performance state, so that if they do nothing, they get the most efficient use of that product at a variety of use levels. But in reality, what we know from both our own internal data and from data from places like the Uptime Institute is that many customers then turn that balanced power-performance setting off and go to a performance-only setting, which is not as energy efficient. So focusing on this largest piece of our emissions lets us see how the customer is actually using our technology in their infrastructure and suggest ways they can do a better job at that. And we learn a lot about how to better deliver our products and solutions to our customers so that we actually get the intended outcomes.
Speaker 2:
So I believe you've got, is it, five levers for improving AI efficiency? We do. Can you just talk us through them, like, one by one? Tell us a little bit about them and how organizations can act on each of them.
Speaker 3:
We came up with five. We call them levers because although the impact of each of them may be different for each customer, all five always apply, from the public cloud to the edge. So it doesn't matter where the customer puts their workloads. These are important levers for them to focus on if they really wanna drive efficiency. So the first is data efficiency.
Speaker 3:
How do we make decisions about the data we need to collect, the usefulness of that data, the quality of the data, and even, from an AI perspective, how do we look at the training set of data, process it once, and make sure we're only including data that's relevant? So for example, say you're gonna build a large language model that's only going to respond in English, yet you bought a dataset that was a scrape of the World Wide Web. Well, the first thing you'd wanna do is take all the non-English content out of that dataset, so that your dataset is of higher quality. HPE does regular surveys of companies around the world and our customers, and what we find is customers only use about a third of the data they collect. So it's having better conversations with the user community about why are we collecting that data?
Speaker 3:
How long are we gonna keep it? When are we gonna dispose of it? If we need to keep it indefinitely for a regulatory reason or some other reason, how do we take it to a lower tier of storage like tape? So that's data efficiency. Software efficiency is really about how efficient the application and the workload are.
Speaker 3:
There are more efficient programming languages, for example; Rust and C are some of the most efficient. In the same way, when we think about software efficiency, it's asking whether the application can take full advantage of the hardware it's sitting on. So if you have an application that only needs one core of a processor, but it's running on a server with 64 cores, is that really a great match of the hardware and the application? And how do we refactor or rehost that workload on hardware that it can operate on much more efficiently and take full advantage of?
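John's one-core-on-64-cores example can be made concrete with a back-of-the-envelope right-sizing check. This is a hedged illustration with invented numbers, not a tool HPE ships:

```python
# Toy right-sizing check: what fraction of a host's cores does a
# workload actually use? Figures are illustrative, not measured.

def core_utilization(cores_needed, host_cores):
    """Fraction of the host's cores the workload can actually exercise."""
    if host_cores <= 0 or cores_needed <= 0:
        raise ValueError("core counts must be positive")
    if cores_needed > host_cores:
        raise ValueError("workload needs more cores than the host offers")
    return cores_needed / host_cores

# A single-core application left on a 64-core server:
util = core_utilization(cores_needed=1, host_cores=64)
# over 98% of that machine's compute capacity sits idle

# The same application rehosted on a 2-core instance:
rehosted = core_utilization(cores_needed=1, host_cores=2)
```

The point of the sketch is just that rehosting raises the utilization fraction dramatically, which is what the software-efficiency lever is after.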
Speaker 3:
From an AI perspective, it's even the algorithmic efficiency. In many cases, the algorithms running AI, and particularly generative AI models, do a tremendous amount of math, yet there are ways to get to the output that don't involve quite so much math. In fact, you can use AI tools to estimate your way to some amount of the answer and then do math to refine that answer at the end. Then there's equipment efficiency, which is one we've focused on for over 20 years, and that's really, if you have a piece of technology equipment, have it do the most amount of work that it can. With AI systems, that's pretty straightforward in many cases.
Speaker 3:
We're using that equipment at very high levels of utilization, but more commonly across the industry, we often find utilization levels down around 30% of the capability of the devices if we're talking about virtualized environments, and even lower if it's a single application on a single device. Energy efficiency, of course: we wanna do the most amount of work per watt of power. And this is an area where, with energy scarcity starting to occur in places around the world, we even expect we'll see some regulation, as regulators are gonna wanna make sure that if we're consuming that power, we're doing a good job of getting valuable output from it. And finally, the fifth lever is resource efficiency.
Speaker 3:
How much cooling do these solutions take? How many people does it take for the solutions to operate? And really looking at all of those other pieces, this is where many aspects of the data center infrastructure itself come into play, including energy conversion within the chain. Often, we see that the energy coming from a utility changes voltages a number of times, and every time you convert power, you lose some efficiency in the midst of that. So it's really thinking about it from that perspective.
Speaker 3:
So these five levers really provide a framework for the customer to say, am I paying attention to efficiency in all of them? And am I paying attention to the efficiency of the solution as a solution, not as individual piece parts?
Speaker 2:
That's fascinating. And a question that sort of spurred off what you were saying: do you think we'll get to a stage where people spinning up a large data center will just say, well, rather than connect to the grid, we'll just spec up a small power station to go with it at the same time?
Speaker 3:
Absolutely. Yes. I think that's where we're moving. You already see indicators. We have a customer here in the United States, for example, that realizes that wind farms produce more power than the grid needs at various portions of the day.
Speaker 3:
They put their data centers in modular shipping containers and locate them at the wind farm, in fact, to take advantage of times when the utility will actually pay them to take the capacity because the grid doesn't need it. Then all of a sudden, you can unlock some opportunities that don't always exist based on where data centers have traditionally been sited. For example, heat reuse. The number one waste product from data centers is heat. Heat has value, and we can sell that.
Speaker 3:
But to do that requires you to be within about 10 kilometers or so of that beneficial use of the heat. Well, all of a sudden, with modular data centers, you can then site the data center where you can make more beneficial use of that heat, as one example. So absolutely, yes. That challenge of trying to find the power sometimes unlocks other opportunities as well.
Speaker 1:
Thanks, Michael. Great interview with John, and obviously food for thought as always. Now it's time for Today I Learned, the part of the show where we take a look at something happening in the world we think you should know about. And, Michael, I think you're driving this one this week.
Speaker 2:
Yeah. Oh, good. Good pun. Yes. Because it is one from me this week, and we are talking about a driverless vehicle story.
Speaker 2:
Now this is an exciting one coming from Japan, which has begun construction of the first ever intercity automated highway. The road is being constructed along an existing 320 mile or 515 kilometer highway between Tokyo and Osaka, using space in the central reservation between the opposing lanes of traffic. On the three lane highway, automated pallet trucks with a capacity of approximately one ton will speed along completely autonomously. The vehicles will also be sorted, loaded, and unloaded autonomously at ports, airports, and train stations at either end by robotic forklifts and warehousing vehicles before being handed on to a human delivery driver for the final few miles. It's estimated the project will cost something in the region of $19,000,000,000 largely due to needing to cut dozens of new tunnels along the route.
Speaker 2:
It's well worth it for Japan though, which is struggling with a chronic shortage of haulage drivers. It's estimated the new automated highway will replace up to 25,000 journeys per day. Test runs of the system are due to begin in 2027 with the system fully open by the middle of the next decade.
Speaker 1:
Amazing. Thanks for that, Michael. Great progress. Alright. Well, now it's time to return to our guest, John Frey, who has been talking to Michael about how AI is driving our organizations to be more efficient.
Speaker 2:
So for any organizations out there that don't yet have a sustainable AI strategy, what would your first three steps be?
Speaker 3:
Yeah. Absolutely. Well, one is don't reinvent the wheel. HPE's got a free handbook that we developed for customers called Six Steps for Developing a Sustainable IT Strategy. Start there; it walks you step by step through how to do that.
Speaker 3:
Step two is get all the right stakeholders together in the conversation, because what we often find is that the folks who build and operate the data center infrastructure are very fixated on utility cost reduction. That's often the number one metric they're charged with. The IT team who puts the infrastructure in the data center often isn't fixated on power consumption; they're measured on keeping that infrastructure running all the time. So in some cases, they could be working at odds with one another, or at least not getting the benefit of collaboration.
Speaker 3:
Then, for many of the companies out there, you have a sustainability team trying to work towards the company's carbon reduction initiatives. So they're really looking at how we can do things to decarbonize, and how we work with our suppliers to get more efficient products in the first place and operate them more efficiently. So they need to be part of that conversation as well. And then, if you're gonna do that work, you're gonna save money, and you're gonna have a great story to tell. So bring in your PR folks, bring in the finance team, and bring all of those folks together.
Speaker 3:
So that's step two. Step three is bring in those stakeholders like HPE that have decades of experience doing this work, because it's not a cookie-cutter approach to driving these efficiencies. Every customer has different pain points and different business constraints that they have to operate in. So again, don't try to do this work yourself. Bring in your technology vendor that has a lot of technical capabilities and decades of experience here, so that you don't have to learn the lessons yourself.
Speaker 3:
You can take advantage of lessons learned by others.
Speaker 2:
Alright. Final question. Why should organizations care about driving sustainability in AI?
Speaker 3:
As we unlock those opportunities and try to do more and more with AI, again, we're gonna start hitting technological constraints, power constraints, perhaps social constraints that are going to inhibit the ability to do this work and even regulatory constraints, I predict. By managing efficiency all the way through this process, we're able to unlock the value proposition of these AI solutions. They can actually become force multipliers, and we can do that with the least amount of constraints around it if we think about these efficiency needs all the way through the process and think of efficiency at a solution level, not as an individual piece part level.
Speaker 2:
John, thank you so much for joining us. It's been really, really fascinating to talk, and you can find more information on the topics discussed in today's episode in the show notes. Right. We are getting towards the end of the show, which means it is time for This Week in History, a look at monumental events in the world of business and technology that have changed our lives. Aubrey, what was last week's clue?
Speaker 1:
Okay. So the clue from last week was it's 1984 and time for NASA to bring in the trash. Did you get it?
Speaker 2:
I thought it was something to do with, like, deorbiting a satellite or something. Anyway, what was the answer? I'm dying to know.
Speaker 1:
Very close. It was the first time NASA managed to salvage a spacecraft. So as part of mission STS-51-A, the crew of the space shuttle Discovery was tasked with retrieving the Indonesian satellite Palapa B2, which had failed to reach its proper orbit. In a daring untethered spacewalk, astronauts attached a specialized space hook to slow and tether the drifting satellite before pulling it in using the shuttle's robotic arm, and quite a lot of human wiggling, since the satellite refused to lock into the cradle on the arm. Palapa B2 was transported back to Earth before being refurbished and launched again in 1990.
Speaker 1:
Pretty amazing stuff.
Speaker 2:
Wow. That really is. Now the clue for next week is it's 1923 and this invention stopped us all in our tracks. Something train related maybe?
Speaker 1:
I think, yeah, definitely train related.
Speaker 2:
Well, that brings us to the end of Technology Now for this week. Thank you so much to our guest, Dr John Frey, Chief Technologist for Sustainable Transformation at Hewlett Packard Enterprise. And, of course, to you, thank you so much for joining us.
Speaker 1:
Technology Now is hosted by Michael Bird and myself, Aubrey Lovell, and this episode was produced by Sam Datapalen with production support from Harry Morton, Zoe Anderson, Alicia Kempson Taylor, Alison Paisley, and Alyssa Mitri. Our social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino, and our social media designers are Alejandra Garcia, Carlos Alberto Suarez, and Ambar Maldonado.
Speaker 2:
Technology Now is a Lower Street production for Hewlett Packard Enterprise, and we'll see you at the same time, same place next week. Cheers.