Technology Now

AI is huge business. But it’s also having a major effect on our carbon emissions: the storage, cooling, and processing power required to train a large language model can produce as much CO2 as driving over a million miles in a family car.

In this episode, we’re talking to HPE Chief Technologist Matt Armstrong-Barnes about the concept of sustainable AI, and how making strategic decisions early in the design process, along with proper data management, can save organisations a fortune and potentially reduce CO2 emissions.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organisations and what we can learn from it.

We'd love to hear your one-minute review of books which have changed your year! Simply record them on your smart device or computer and upload them using this Google form: https://forms.gle/pqsWwFwQtdGCKqED6

Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA

About the expert: https://uk.linkedin.com/in/mattarmstrongbarnes

Creators & Guests

Host
Aubrey Lovell
Host
Michael Bird

What is Technology Now?

HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.

Aubrey Lovell (00:10):
Hello and happy day, everyone. Welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell.

Michael Bird (00:24):
And Michael Bird. And in this episode, we are looking at an issue that's looming on the horizon for many organizations. Artificial intelligence has become a hot topic over the last couple of years, especially when it comes to large language models and generative AI. These tools are now everywhere and becoming more and more ingrained in our lives and work with every passing month.

(00:47):
But there's a cost to all this, and it's one many organizations haven't considered until now. How sustainable is our AI? So with that in mind, in this episode, we'll be looking at the true cost of AI systems, where current design trends are potentially going wrong, and what can be done about it. And we'll also be turning to you, our audience, for your questions to the expert and your recommendations on the books which have changed your year.

Aubrey Lovell (01:14):
Sounds incredible. So if you're the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is for you. And if you haven't yet, please subscribe to your podcast app of choice so you don't miss out. All right, let's get on with the show.

Michael Bird (01:32):
Okay, well, you don't need me to say that AI is huge business. The field is already worth over $200 billion and that has more than doubled in the last two years alone. And it's estimated that it'll be a $2 trillion industry by 2030. And that means a few things. Firstly, there is a huge demand for AIs and a lot of core products being developed, released, and iterated on or adapted for different use cases.

(02:01):
And secondly, it means there is a lot of data being generated or stored to train and feed these AIs, particularly the so-called large language model systems used in chatbots, modeling, analytics, and generative technologies.

Aubrey Lovell (02:16):
Processing lots of data means using lots of energy. In fact, training a large language AI model produces an estimated 300 to 600 tons of CO2, and that's the same as driving 1.5 to 3 million kilometers, or between 900,000 and 1.86 million miles, in an average family car. It's just mind-blowing, Michael.
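
As a quick back-of-the-envelope check on those figures (assuming metric tonnes, and an average family car emitting roughly 0.2 kg of CO2 per kilometre, which is the emission factor the quoted numbers imply):

\[
\frac{300{,}000\ \text{kg CO}_2}{0.2\ \text{kg/km}} = 1.5\ \text{million km} \approx 0.93\ \text{million miles}, \qquad
\frac{600{,}000\ \text{kg CO}_2}{0.2\ \text{kg/km}} = 3\ \text{million km} \approx 1.86\ \text{million miles}
\]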

Michael Bird (02:41):
That is absolutely bonkers.

Aubrey Lovell (02:44):
And those models may need retraining every couple of months or years. And when you start to consider the storage implications of that, the environmental challenge grows bigger and bigger with each step.

Michael Bird (02:54):
Wow. Yeah.

Aubrey Lovell (02:55):
A recent Bloomberg article suggested that training AIs already accounts for 12% of Google's entire annual electricity usage. That's over two terawatt-hours, enough to fully power over 200,000 homes for a year.
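
As a rough check on that homes comparison (assuming a typical household uses on the order of 10,000 kWh of electricity per year):

\[
\frac{2\ \text{TWh}}{200{,}000\ \text{homes}} = \frac{2\times10^{9}\ \text{kWh}}{2\times10^{5}\ \text{homes}} = 10{,}000\ \text{kWh per home per year}
\]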

Michael Bird (03:10):
That is a lot of power. Now, lots of companies have pledged to make their data centers carbon neutral or even carbon negative, but that doesn't change the core issue that AIs are hungry beasts that love gobbling up resources in the data center. So what can we do? Well, today we are joined by friend of the show, HPE Chief Technologist Matt Armstrong-Barnes. Matt, thank you so much for joining us.

Matt Armstrong-Barnes (03:35):
Pleasure to be here.

Michael Bird (03:36):
So first off, Matt, how does an AI become environmentally unfriendly?

Matt Armstrong-Barnes (03:42):
Well, I think the first thing to consider is, as you've already talked about, AI consumes a lot of power. What that power produces is heat, and between the power and the heat, you produce a lot of CO2. Also, AIs need a lot of data, and data needs to sit in environments that also consume power and need cooling because they generate heat. So between those two factors, you end up creating a lot of CO2. This is something that really has gone hand in hand with AI since the start of the algorithm-driven generation that we see today.

Aubrey Lovell (04:16):
How widely understood is this problem?

Matt Armstrong-Barnes (04:19):
Well, I think what we're seeing is lots of organizations are implementing AI without fully understanding the ecological implications. We are running towards AI problems without fundamentally having the right strategy in place. And what that means is organizations are just really running towards large electricity bills and pretty hefty CO2 emissions.

Michael Bird (04:41):
So is this something that tech companies have to think about right now?

Matt Armstrong-Barnes (04:46):
So anyone using AI should be using it to tackle a real business problem. And what that means is planning and strategy. And if you really put these foundational capabilities in place, you'll be considering all the implications associated with going on the AI journey.

Aubrey Lovell (05:07):
So, Matt, there's something called a data debt. How does that play into this situation?

Matt Armstrong-Barnes (05:11):
Yeah, great point. If I just explain what we mean by data or technical debt. So if you imagine your email inbox, I'm sure everyone has one, if you stay on top of it, you file your emails, you categorize them, you group them together, you save any attachments associated with them, then it takes a very small amount of time, but it's kind of a daily task. If you wait for months or years, then it takes a lot of time and a considerable amount of effort, probably more than if you'd stayed on top of it in the first place.

(05:47):
Technical and data debt are the same. So really this is about the fact that if you leave your data problems without constant management and maintenance, they become a big problem. Technical debt accrues more slowly because it's about maintenance of code. But because we are generating such a massive amount of data, and unstructured data as well, on a daily basis, if you leave your data debt, it just becomes an enormous mountain to climb incredibly quickly.

Michael Bird (06:20):
And it's a hard problem to solve. And lots of organizations are encountering this.

Matt Armstrong-Barnes (06:24):
Very much so. Very much so. They're only thinking about it now. They're thinking about it when they've already got a lot of data that is in data warehouses or in data lakes. So now we are thinking about how we can retrospectively get value from that data.

(06:41):
For organizations that have invested heavily in making sure they've got the right data strategies in place, this is massively paying off. Organizations that are only thinking about it now are looking at a big mountain to climb, and what they need are the right tools and techniques that are going to help them address those kinds of problems.

Michael Bird (06:59):
Which leads me very nicely onto my next question. What is the solution? What can organizations do?

Matt Armstrong-Barnes (07:04):
So firstly, experimenting with AI creates a lot of CO2. So if you are going to experiment, understand why you're experimenting and what you're experimenting for. Also, if you don't have an AI strategy and a data strategy, build them. It's all about making sure that you are not going to be flying over water at very high speed without really knowing where the next wave is coming from.

(07:33):
Make sure you put some planning in place to prepare you for these eventualities. And what that means is when you start to think about an AI project, you're going to be looking at both the cost and very importantly the benefits and pulling together a benefits realization strategy that makes sure that you get the most out of your investment.

Aubrey Lovell (07:57):
So, Matt, I think you touched on this a little bit, but really the golden question. What makes a sustainable AI? Does a lot of it come down to the coding? What are those factors?

Matt Armstrong-Barnes (08:07):
Definitely, coding has big implications. And one statement I want to make very early on: software engineering is not data science. They are very different disciplines in the technology space. Software engineering is all about building sustainable, manageable, maintainable systems that have long-term use. These principles have been embedded in software engineering for decades.

(08:36):
In order to build sustainable AI systems, you need to take some of those principles from software engineering and apply them into your AI projects. And what that means is you're going to build in repeatability, and these things add efficiency. They minimize the delays in your AI projects, they reduce the complexity, and they minimize the risk of errors. So these principles mean that you need to bring software engineering principles and the data science discipline together. And a lot of the time that does mean separate people who have those specific skills as part of team AI.
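
To make "repeatability" concrete, here is a minimal sketch, not HPE's tooling, of one software engineering habit applied to an AI project: fingerprint the run configuration so an identical run is reused from cache rather than retrained, and seed randomness so the run is reproducible when it does execute. The `train_fn` callable and the config layout are hypothetical.

```python
# Minimal sketch (illustrative only): make a training run repeatable and
# skip retraining when nothing about the run has changed.
import hashlib
import json
import os
import random

def config_fingerprint(config: dict) -> str:
    """Stable hash of the run configuration (data version, hyperparameters, seed)."""
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:16]

def train_or_reuse(config: dict, train_fn, artifact_dir: str = "artifacts") -> str:
    """Only train if no artifact exists for this exact configuration."""
    os.makedirs(artifact_dir, exist_ok=True)
    artifact_path = os.path.join(artifact_dir, config_fingerprint(config) + ".model")
    if os.path.exists(artifact_path):
        return artifact_path          # reuse: no extra compute, no extra CO2
    random.seed(config["seed"])       # fixed seed -> reproducible run
    train_fn(config, artifact_path)   # hypothetical training callable
    return artifact_path
```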

Michael Bird (09:18):
So if you're a software engineer with a degree in data science, then you're going to be tripping over jobs because they're going to be in high demand.

Matt Armstrong-Barnes (09:27):
Either of those two. I think when it comes to data science, you've really got a number of ways that you can become a data scientist. I'll talk about two. There are other ways as well. One is through the computing side, where you have your computer science core principles around software engineering, how you build software. Or you come at it from a mathematical perspective.

(09:50):
Either one of those two disciplines has rigor built into it about how you gain the skills. And what we can see is there's transferability of that rigor from the software engineering side into the mathematical side. But you don't just know these things. If you're a mathematician coming into data science, you need somebody with the software engineering skills to help you understand how you build sustainability and manageability, which are key components of the long-term sustainability of AI projects, into what you're trying to achieve.

Michael Bird (10:24):
You talked a little bit about data and how it comes with a sustainability cost. Can you just elaborate on that a little bit as well?

Matt Armstrong-Barnes (10:30):
Yeah, sure. Having a real in-depth understanding of the data that you're using is incredibly important because it's scarily easy to get into massive data duplication problems. Let me give you an example. I'm sure this has happened to everybody. Somebody sends you a spreadsheet, but it's not quite what you want, so you change it so that it is what you want. And, of course, you save it, but you think, "I'll tell you what. I'll just keep the original version just in case I need to go back to it." AI takes these types of problems and just makes them exponentially bigger.
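
As an illustration of how easy that kind of duplication is to spot at the file level, here is a minimal sketch, my own assumption rather than anything Matt describes, that flags byte-identical copies by content hash. The `data` directory name is hypothetical.

```python
# Minimal sketch: flag byte-identical copies of files (e.g. spreadsheet "just in case"
# copies) so duplicated data isn't stored, backed up, and fed to pipelines twice.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    by_hash: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            # reads whole files into memory; fine for a sketch, not for huge files
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # keep only hashes that appear more than once
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for digest, copies in find_duplicates("data").items():
    print(f"{len(copies)} identical copies:", *copies, sep="\n  ")
```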

Aubrey Lovell (11:09):
So, Matt, if I am a business owner and I'm doing all the things that you're talking about and I'm really focusing on integrating AI into this environment, I'm keeping an eye on my data, what else can I be doing proactively with AI?

Matt Armstrong-Barnes (11:24):
One of the key steps in the model development lifecycle is tuning your AI models. And, unfortunately, this is one of the steps of the lifecycle that produces the most CO2 and consumes the most power. However, there are lots of great tools and techniques that allow you to approach this in a very efficient way, meaning instead of producing lots of carbon, consuming lots of power, and producing lots of heat, you can do it efficiently. That still means that you get highly tuned models for your specific problem, but you do it in as efficient a way as you can, which means that you have long-term sustainability in what you're trying to achieve.
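
One family of techniques in this space, sketched roughly below with a toy scoring function (the budgets and the stand-in `evaluate` are illustrative assumptions, not Matt's specific method), is successive halving: give many hyperparameter candidates a small compute budget, then grow the budget only for the candidates that are doing well, so most of the compute, and most of the CO2, is never spent on obvious losers.

```python
# Minimal sketch of successive halving for efficient hyperparameter tuning.
import random

def successive_halving(candidates, evaluate, min_budget=1, max_budget=8):
    """evaluate(candidate, budget) -> validation score (higher is better)."""
    survivors = list(candidates)
    budget = min_budget
    while budget <= max_budget and len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[: max(1, len(scored) // 2)]  # keep the top half
        budget *= 2                                      # double budget for survivors
    return survivors[0]

# Hypothetical usage: 16 random learning rates, scored by a stand-in metric
# that rewards rates close to 1e-3.
candidates = [{"lr": 10 ** random.uniform(-5, -1)} for _ in range(16)]
best = successive_halving(candidates, evaluate=lambda c, b: -abs(c["lr"] - 1e-3) * b)
print("best candidate:", best)
```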

Michael Bird (12:12):
Now, Matt, we are almost at time. You haven't got a book to plug, but you've written a series of blog posts on the topic. Can you just tell us a little bit more about that, please?

Matt Armstrong-Barnes (12:19):
Yeah, so I'm just finishing up my blog on the pitfalls of artificial intelligence and how you can avoid them and be successful going on the AI journey. The next series, which I've already got in draft and am hoping to release over the next few weeks, is all about the things that we've talked about on the episode today: how to build artificial intelligence in a sustainable way that gives you long-term benefit and realization of your investment.

Michael Bird (12:46):
Brilliant. Thank you, Matt. And that's incredibly interesting and also very important. We'll be back with audience questions for Matt in a second, so please do not go anywhere.

Aubrey Lovell (12:57):
Next up, it's down to you, our audience. We opened the floor for you to give your recommendations on books which have changed the way you look at the world, life, and business in the last 12 months. They could be technology-based, have changed the way you work, or they could just have made you look at the world in a totally different way.

Michael Bird (13:13):
And if you want to share your recommendations, there's a link in the podcast description. Just record a voicemail on your phone and send it over.

Book Reviewer 1 (13:29):
My name's Diane [inaudible 00:13:30]. I'm a manufacturing engineer. The book that I recommend is Handmade: A Scientist's Search for Meaning Through Materials by Anna Ploszajski. She's a materials scientist who sought to understand materials better through craft making. It's just amazing to read about her quest to understand how glass works, how different materials work, through craft, because there are master craftsmen who understand materials in a way that materials scientists might not.

(13:58):
Sometimes you're siloed behind this microscope or telescope or whatever testing you're doing in a lab, where you never actually see the material in its raw form or use its properties to best effect. So, yeah, she did a really fantastic lecture at the Royal Institution about her book and about her quest, and the book itself is quite engaging. She's quite funny, she does stand-up, but it's also really, really educational. So I learned loads of cool stuff about different materials that I would never look twice at, so highly recommend.

Aubrey Lovell (14:25):
All right, thanks for that. So, Matt, have you read anything in the last year which has changed the way you look at the world? Way back in episode one, you mentioned Competing in the Age of Artificial Intelligence. So has anything else got you inspired recently?

Matt Armstrong-Barnes (14:38):
I don't know if inspired is the right word, but I'm currently going through the European Union Artificial Intelligence Act, and that's looking to put a regulatory framework around AI, and it goes into quite a lot of detail about some of the principles that you need to adopt and also how you view artificial intelligence from a risk perspective.

Michael Bird (15:00):
That sounds like quite a page turner, Matt.

Matt Armstrong-Barnes (15:03):
Riveting read, riveting read.

Aubrey Lovell (15:05):
All right. It's time for questions from the audience. You've been sending in your questions to Matt on sustainable AI, and we've pulled out a couple. So the first question is from Robert in Pasadena, who would like to know whether you think we're likely to see emissions regulations on AI in the same way some places have them for vehicles?

Matt Armstrong-Barnes (15:24):
Oh, that is an interesting question. So on my recent journey through regulatory frameworks, my bedtime reading, I've not encountered any legislation that is heading in this direction. I think the closest thing is going to be the Corporate Sustainability Reporting Directive, which is a policy operating in the EU that requires disclosure of information from organizations about their environmental, social, and governance performance on an annual basis. So I could definitely see some AI implications for how those reports are written in the future.

Aubrey Lovell (16:02):
Interesting.

Michael Bird (16:03):
Now, Shannon in Tel Aviv would like to know whether you think a long-term solution might be training an iteration of an AI to make itself more efficient.

Matt Armstrong-Barnes (16:13):
Well, that's an interesting one. We are already using AI techniques to do things like tune AIs to make them operate more effectively. And one of the big things is how you can use a simpler AI to look at large data sets and reduce them down so they can then be used to train more complex artificial intelligence models.

(16:37):
I do think there's a broader question about this in terms of generative AI. We know that generative AI can write, analyze, and understand code snippets. However, human programmers are still needed for some of the more creative aspects and definitely some of the efficiency side in the short term. In the long term, well, let's just say that we are making rapid advances in this space. I still think we've got quite a way to go.
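
To illustrate the "simpler AI reduces the data for the bigger AI" idea Matt mentions, here is a minimal sketch; the cheap scorer here is just a stand-in (document length), where in practice it might be a small classifier or quality model. The function names and the 20% default are my own illustrative choices.

```python
# Minimal sketch: a cheap scorer ranks examples, and only the top fraction is kept
# for the expensive training run, shrinking storage and training compute.
def reduce_dataset(examples, cheap_score, keep_fraction=0.2):
    """cheap_score(example) -> estimated usefulness; higher means keep."""
    ranked = sorted(examples, key=cheap_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

# Hypothetical usage: keep roughly a third of these documents, rated by a
# trivial heuristic, before any large-model training is attempted.
documents = ["short", "a much longer and more detailed document", "ok text"]
subset = reduce_dataset(documents, cheap_score=len, keep_fraction=0.34)
print(subset)
```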

Michael Bird (17:06):
Perfect. Thank you, Matt. And, again, we'll drop a couple of links in the podcast description for more on these topics.

Aubrey Lovell (17:14):
Right. Well, we're getting towards the end of the show, which, Michael, you know it's our favorite time-

Michael Bird (17:19):
Yep.

Aubrey Lovell (17:19):
...to say.

Michael Bird (17:20):
Here we go.

Aubrey Lovell (17:21):
It's time for this week in history. [inaudible 00:17:26].

Michael Bird (17:28):
This week in history. Matt, I'm so sorry you had to sit through that.

Aubrey Lovell (17:30):
We're shooting for a Grammy, Matt. How was that for you?

Matt Armstrong-Barnes (17:33):
Maybe a couple more rehearsals?

Aubrey Lovell (17:34):
Got it. Okay. Yeah. All right. Well, as you know, this week in history is a look at monumental events in the world of business and technology which have changed our lives.

Michael Bird (17:44):
Now the clue last week was as simple as QWE. It is, of course, the first commercial use of the QWERTY keyboard. Matt, without looking at your keyboard, what are the four letters on the top row after QWERTY? Q-W-E-R-T-Y?

Matt Armstrong-Barnes (17:59):
It is, he says, closing his eyes-

Michael Bird (18:01):
There's just four letters.

Matt Armstrong-Barnes (18:02):
...there's a U in there somewhere. U-I-O-P. U-I-O-P.

Michael Bird (18:06):
U-I-O-P. That is the correct answer. Now the QWERTY keyboard found its way onto the 1868 Sholes and Glidden typewriter. I'm sure we've all still got one hanging around. And the keyboard was revolutionary for two reasons. Number one, it made typing quicker by arranging letters so that the left and right hand would usually work together without one having to do all the work. And number two, it meant that key jams were much less likely, so now you know.

Aubrey Lovell (18:31):
Next week we're heading to 2005, and automatically I am triggered because I'm like, "Oh, I was in high school then." The clue is, well, you are listening to it. All right. That brings us to the end of Technology Now for this week. Next week, we'll be heading back to HPE Discover with a special look at highlights of the keynote speech from Hewlett Packard Enterprise CEO Antonio Neri, as well as the latest news and announcements. Definitely something you'll want to check in with. In the meantime, keep those suggestions for life-changing books coming using the link in the podcast description.

Michael Bird (19:04):
And until then, thank you so much to our guest, Matt Armstrong-Barnes. Thank you, Matt.

Matt Armstrong-Barnes (19:08):
Thank you.

Michael Bird (19:09):
And of course, thank you all so much for joining us. Technology Now is hosted by Aubrey Lovell and myself, Michael Bird. And this episode was produced by Sam Data [inaudible 00:19:19] and Zoe Anderson with production support from Harry Morton, Alicia Kempson, Alison Paisley, Camilla Patel, and Alex Podmore. Technology Now is a Lower Street production for Hewlett Packard Enterprise, and we'll see you next week.

Aubrey Lovell (19:31):
Cheers.