Technology Now

AI is a great tool for lifting the burden of repeatable, predictable tasks off humans. Until it isn't. The fact is, a lot of AI is biased and even unconsciously discriminatory, because it's trained on poor, skewed datasets resulting from long-standing biases in the tech industry. That means that in the real world, underrepresented groups such as women are losing out on opportunities as a result of decisions made by AI in finance, HR, healthcare, and more. So what can we do to solve this issue? We're joined by Professor Anjana Susarla from Michigan State University to find out.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

We'd love to hear your one minute review of books which have changed your year! Simply record them on your smart device or computer and upload them using this Google form: https://forms.gle/pqsWwFwQtdGCKqED6

Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA

About the expert: https://broad.msu.edu/profile/asusarla/
Anjana's book recommendation: Algorithms of Oppression: How Search Engines Reinforce Racism

Creators & Guests

Host
Aubrey Lovell
Host
Michael Bird

What is Technology Now?

HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.

Michael Bird:
Hello, hello, hello. And a very warm welcome to Technology Now, a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world around us and explore how it's changing the way organizations are using technology. We are your hosts, Michael Bird.

Aubrey Lovell:
And Aubrey Lovell. And in this episode we're exploring AI, specifically how poorly trained AI is making life harder for certain sectors of society during a cost of living crisis. We'll be looking at whether companies are taking discrimination in their AI seriously enough, and whether they truly understand the problem. We'll also be taking your questions to the expert on unconscious bias and fairness in AI. And, of course, we'll be sharing the books that are changing the way you, our audience, and some of our wonderful guests see the world.
So if you're the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is definitely for you. And if you're enjoying it, subscribe on your podcast app of choice so you don't miss out. Right. Let's get on with the show.

Michael Bird:
Okay. So the so-called cost of living crisis has been in the news everywhere for the last few months, and with inflation still rising and recessions predicted in many parts of the world, times have rarely been harder for many people. And when it comes to getting loans or credit cards or mortgages, it's getting even more challenging.
Okay. Aubrey, I know that you are also a big fan of stats.

Aubrey Lovell:
Oh yes.

Michael Bird:
So, let me give you some stats. In the last three years, 12% of people in the UK have been denied a loan. Nearly 40% of those people have also been denied a credit card. But not all credit applicants are treated equally: figures from the US show that people of color are twice as likely to have a mortgage application refused. So I'm sure you're asking: why the disparity, and what does it mean for business? Well, over 50% of banks are now using AI to process applications. Many job sites now filter applicants using AI. AI is being used for so many things we rely on in our lives that if it is trained badly, it can drastically affect our ability to get loans, credit, or access to work. And right now that clearly matters to millions of people.

Aubrey Lovell:
To talk about why all of this should matter to our organizations, we're joined by Anjana Susarla, professor of responsible AI at Michigan State University. Anjana, thank you so much for joining us.

Anjana Susarla:
Thank you for having me.

Aubrey Lovell:
Okay, let's start with the basics. Can you quickly define what you consider as bad AI?

Anjana Susarla:
Yeah, I would say, let me try to define good AI first. AI that is fair, transparent, and explainable, that safeguards data rights and privacy, and that has a human in the loop, so that if there are problems with automated decision making we can get to a person: that's good AI. Something that's not doing all of those things is bad AI.

Aubrey Lovell:
Got it. So what do you think causes AI to discriminate?

Anjana Susarla:
The first problem would be the data that is used to train the AI; that's one cause of bias. The second problem is when we use AI without really understanding how it impacts our customers, our employees, our stakeholders, et cetera. And I would say the third problem, sometimes, is that we don't have best practices in developing the AI. That's also a cause of bias.

Michael Bird:
So what can be done about it?

Anjana Susarla:
Different solutions have been suggested. One is that companies need better standards for the input data they're using to train different types of AI. For example, if you're a bank and there's been a history of, let's say, redlining, as they call it in the United States, and you're using some of that data to train the software that predicts who to give a loan to, then you end up replicating biases that existed 20 or 30 years ago. Those are the kinds of questions we have to ask ourselves.

Michael Bird:
Isn't the problem with AI that actually when you build an AI model, it's actually really hard to figure out what's going on?

Anjana Susarla:
Yes. I think that also raises the other question, which is, as I said, explainability: how easy is it for a bank to explain what inputs go into the AI? In other words, somebody built a fancy AI model and they themselves can't really explain why the model is predicting what it's predicting. And I'm here to say, this is not magic. This [inaudible 00:05:03] real people.

Michael Bird:
It doesn't have to be magic.

Anjana Susarla:
No.

Michael Bird:
It doesn't have to be a black box.

Anjana Susarla:
No.

Michael Bird:
So given that we are living through tough financial times, potentially heading into recession in many countries around the world, and that training AI is expensive, how seriously do you think organizations are currently treating this problem?

Anjana Susarla:
See, that's part of the hype around AI: that it's going to be this great tool that will bring in so many more efficiencies. So maybe we are not asking the same questions of AI that we ask of other established technologies. And we have to really ask ourselves this question: does the use of AI mean that somebody is less likely to get a loan? Does it mean differential access to housing, to education, even to the financing of loans? So we have to worry about all these differential impacts: insurance discrimination, benefit discrimination, education discrimination, hiring, as you guys mentioned earlier. All of these are harms that can be caused if you're using AI without thinking about the biases that AI perpetuates.
But I think the silver lining is that with so much data and so much artificial intelligence, we can also establish the source of the bias more easily than if it's just human beings. AI may have its blind spots, but unlike a human, it's not going to say, "No, no, I'm actually very rational." So I think that could be a benefit.
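[Editor's note: the following sketch is not from the episode. It is a minimal, illustrative example of the kind of audit Anjana describes, where decision data makes "bias at scale" measurable. The group labels, the sample data, and the 0.8 threshold (the common "four-fifths rule" used in US employment contexts) are all assumptions for illustration.]

```python
def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of True/False."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref = approval_rate(decisions_by_group[reference_group])
    return {g: approval_rate(d) / ref for g, d in decisions_by_group.items()}

# Hypothetical loan decisions, grouped by a protected attribute.
loans = {
    "group_a": [True, True, True, False, True],    # 80% approved
    "group_b": [True, False, False, False, True],  # 40% approved
}

ratios = disparate_impact(loans, "group_a")
for group, ratio in ratios.items():
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: {ratio:.2f} ({status})")
```

Here `group_b` comes out at 0.50, below the 0.8 threshold, so the audit flags it. Real audits are far more involved (sample sizes, confounders, intersectional groups), but the point stands: unlike a human decision-maker, a logged automated system can be interrogated directly.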

Michael Bird:
I want to just ask you one more question if that's okay. Do you think this comes down to governments? Is regulation, is that what's needed?

Anjana Susarla:
Well, definitely in the United Kingdom and in the European Union, there are more regulations being developed around what I would broadly call algorithmic accountability. In the United States, there has not been regulation, but the White House has introduced an AI Bill of Rights, which is actually very comprehensive, and I think it's more of a best practice. They are hoping that companies will self-regulate rather than imposing top-down regulations. Because the challenge with regulation is, of course, that these are very fast-moving industries, where a one-size-fits-all regulatory approach may not work for everybody, and we don't want to stifle innovation. At the same time, there needs to be responsible oversight of companies, and there is also pressure from social media, from consumer rights advocates, from the stock market. The hope is that companies will do the right thing.

Aubrey Lovell:
So Anjana, I know you touched on this a little bit, but just to close out bottom line, why should organizations care about this?

Anjana Susarla:
Organizations should care about it because, for one thing, you can really end up hurting your goodwill with customers if you're seen as doing something bad with AI. And we've especially seen that in the last couple of years: consumers, the younger generation, are very, very sensitive to these kinds of biases and disparities. Because AI has the potential to magnify bias. Now we are using it at a much larger scale, so "bias at scale" is what I would call it, and we have become very sensitive to those biases at scale. So I think that's why companies should care. They should care because their customers care, their employees care, and the stock market also cares.

Michael Bird:
Okay, thanks Anjana. That was really interesting. We'll come back to you in a moment, so please don't go anywhere as we've got some questions from the audience that we'd love to ask you in a little bit. And of course, for those who want to dive in even further into this topic, we'll drop some useful links in the show notes about everything that we've talked about on today's show.

Aubrey Lovell:
Next up, it's down to you, our audience. We open the floor to you to give your recommendations on books which have changed the way you look at the world, life and business in the last 12 months. They could be technology based, have changed the way you work, or they could have just made you look at the world in a totally different way.

Michael Bird:
And if you want to share your recommendations, there's a link in the podcast description. Just record a voice note on your phone and then send it over.

Joel DiTrippani:
My name's Joel, I'm the co-founder and co-CEO of Vygo, which is a tech startup helping everyone in the world get access to world-class education. And my number one book for the year was Build by Tony Fadell. I hadn't really heard of Tony Fadell before, and this book absolutely changed my life. If you are interested at all in your career or in starting a business, this is an amazing business book for you. Essentially, Tony's the guy who created the iPhone for Apple, he made so many amazing impacts in the tech industry, and then he started Nest. And this book is just all of that wisdom boiled down into a really digestible, fun read. More than anything, what I love in this book is that it's so clear how much he cares, and it felt like the entire time Tony was there with his hand on my shoulder just telling me, "It's going to be okay." And sometimes that's what we need, and that book is what I needed. So that was my number one book [inaudible 00:10:29].

Michael Bird:
Awesome. Anjana, have you read anything that you can recommend to us?

Anjana Susarla:
Yes. I think the one book that may be very interesting is called Algorithms of Oppression: How Search Engines Reinforce Racism. It's probably a very nice way to explain how, when you go and search for something on Google, you don't realize the biases in how these search algorithms are trained. Or even the biases in recommendations, because algorithms will amplify what's popular, and that means we could end up amplifying a lot of the unconscious biases in people's minds. I would start with that as good essential reading.

Michael Bird:
I'll add it to the list. Thank you.

Aubrey Lovell:
All right, well it is time for questions from the audience. So you've been sending in your questions to Professor Anjana Susarla, and we've got a couple lined up for you now. So Anjana, Cleo from London has asked, is there a concerted effort to stop biased AI from reaching the marketplace?

Anjana Susarla:
Well, I would say that there is certainly a lot of proposed legislation in the US; I think algorithmic accountability is one part of it. There's also a lot of legislation [inaudible 00:11:50] in different states on safeguarding data, privacy, and access. Because one of the things that happens with AI is that we as consumers often don't know which aspects of our lives are generating all these digital trails, or how that data is being used by marketers. We often don't have that understanding. So that's also one step in making AI more accountable.

Michael Bird:
Okay. So my question is from Zoe, who's also from London, so the UK is really representing on this episode. She's asked whether there's a way of filtering out systematic bias in science and technology.

Anjana Susarla:
Yeah, I would say that the scientific community has also become very aware of these issues. So let me give an example which is not about AI at all. I was reading this book called Invisible Women, which is also a really interesting book to read, and it describes how a lot of medical textbooks, when they describe heart attack symptoms, actually describe the symptoms as experienced by men. This was really eye-opening to me, because we think a textbook is something very objective, we think science is very objective, but these biases are there even in the way our scientific standards have evolved.
So what is the scientific community doing? They've become more aware of these gender biases, racial biases, et cetera. For example, during COVID, some pulse oximeters turned out to be less sensitive to darker skin tones, apparently. So the community has really become more aware, and they are trying to use more representative data samples. And to their credit, companies like Google also understand these problems now.

Michael Bird:
Gosh. Wow. Okay, great answer. Thank you for that. As before, we'll drop a couple of links in the podcast description for more on these topics.

Aubrey Lovell:
All right. We're getting towards the end of the show, which means it's time for week, this weekend in history.

Michael Bird:
This week in history.

Aubrey Lovell:
A look at monumental events, that was terrible, Michael, in the world of business and technology which have changed our lives.

Michael Bird:
I thought we did a fantastic job. Anyway, the clue last week was "www dot". I don't think that's the hardest challenge we've ever set. It was, of course, Tim Berners-Lee's proposal for a World Wide Web, which he submitted to CERN on the 12th of March 1989.

Aubrey Lovell:
Nice.

Michael Bird:
He wanted to develop a new way of linking and sharing information over the internet, a system that, with some refinement, would become the World Wide Web. It was eventually launched to the public in April 1993, and the rest, as they say, is history.

Aubrey Lovell:
All right, next week the clue is wiki-wiki wild, wild west. These are really not getting any harder, are they?

Michael Bird:
I love that. That's good. [inaudible 00:14:45]

Aubrey Lovell:
Although apparently this one's probably not quite what you're thinking, says Producer Sam. So he just wanted to make you sing. Thanks.

Michael Bird:
Again. Yeah, again. Right. Well, that brings us to the end of Technology Now for this week. Next week we'll be discussing the rise and fall of crypto and what it means for business. In the meantime, keep those suggestions of life-changing books coming in, using the link in the podcast description below.

Aubrey Lovell:
Until then, thank you to our guest, Professor Anjana Susarla. And to our listeners, thank you all so much for joining us. Technology Now is hosted by myself and Michael Bird. This episode was produced by Sam Dotta Pollen and Zoe Anderson, with production support from Harry Morton, Alicia Kempson, Alison Paisley, Alex Podmore, and Ed Everson. Technology Now is a Lower Street production for Hewlett Packard Enterprise. We'll see you next week.