Technology Now

The ethical questions around AI have been at the forefront of its development. Today, there is a global rush to establish legal and ethical frameworks around the technology, such as the EU AI Act, which aims to address concerns around potential bias from bad data sets or algorithms, privacy, and discrimination.

Our guest this week is Iveta Lohovska, Principal Data Scientist and AI Ambassador at HPE. We'll be discussing the practicality of placing guardrails around AI, the ethical approach needed when training models - and whether the sheer scale of AI's growth is leaving us vulnerable as a society.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, explore the technology behind it, and explain why it matters to organizations and what we can learn from it.

Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA

About the expert: https://www.linkedin.com/in/iveta-lohovska-40210362/?originalSubdomain=at

Sources and statistics cited in this episode:
2024 Global Forum on the Ethics of AI - https://www.unesco.org/en/forum-ethics-ai
European Parliament AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
The Outer Space Treaty - https://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html
International Astronomical Union - https://www.iau.org/
Dolly the Sheep cloning - https://www.nms.ac.uk/explore-our-collections/stories/natural-sciences/dolly-the-sheep/

Creators & Guests

Host
Aubrey Lovell
Host
Michael Bird

What is Technology Now?

HPE News. Tech Insights. World-Class Innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.

Aubrey Lovell (00:10):
Hello, friends, and welcome back to Technology Now. We're excited to have you here. We are a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell.

Michael Bird (00:24):
And Michael Bird. And in this episode, we are taking a look into the ethics of AI. Now, 2023 was the year that AI really exploded into the public consciousness and public usage with the advent of mass-market generative AI tools. This huge growth has outpaced regulation, with the U.S., EU, and others now pushing through legislation to try and make sure that AI is built and used fairly and safely.

Aubrey Lovell (00:51):
But where does the responsibility for ethics in artificial intelligence lie? Is it with governments, with developers, or is it with us, the users? In this episode, we're going to take a brief look at what steps are being taken to keep us safe.

Michael Bird (01:04):
And, in fact, if it is even possible to put ethical guardrails in place. So, if you're the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is for you. And of course, if you haven't yet, do make sure you subscribe on your podcast app of choice so you don't miss out. Right, Aubrey. Let's do it.

Aubrey Lovell (01:26):
In 2024, the second Global Forum on the Ethics of AI will take place in Slovenia. It will be run by UNESCO, the United Nations Educational, Scientific and Cultural Organization. It was back in 2021 that UNESCO delivered the first-ever global standard on AI ethics, which we've linked to in the show notes, and which was adopted by all 193 of its member states. But it was just a recommendation. And although EU lawmakers reached agreement in December 2023 on the world's first package of AI laws, being lawful isn't necessarily the same as being ethical. Some of the big ethical concerns in AI at the moment are potential bias from bad data sets or algorithms, privacy, and discrimination.

Michael Bird (02:13):
And these are all issues that, in her own words, keep this week's guest awake at night. Iveta Lohovska is Principal Data Scientist and AI Ambassador at HPE. She's been watching the debate over ethics in AI evolve over the last few years, and I had the chance to catch up with her at a conference at the end of last year. So, your mission statement is to democratize decision intelligence and safe, reliable AI. Breaking it down, what exactly does that mean?

Iveta Lohovska (02:41):
You can see the current advancements in AI, and there are not a lot of guardrails, and the ethics are not clearly defined. How do we build AI? How do we train AI? How do we deploy and scale AI? As a society, we have a legal and a human aspect to handle. So what we need, when we are building AI, is a holistic, cautious approach to how we develop it and what the guardrails are - similar to a doctor's code of conduct.

Michael Bird (03:10):
And what are the challenges that are keeping you awake at night at the moment?

Iveta Lohovska (03:13):
It's a very good question. Nuclear war, climate change, and AI. And for the last year, AI - and generative AI in particular - has basically been at the top of my list. It truly keeps me awake. You can see from my anxiety levels that I take it very seriously.

Michael Bird (03:27):
And so are these issues caused by bad AI design or bad interpretation of the data used within AI?

Iveta Lohovska (03:34):
AI will expose and scale any human or societal weakness. And the scale is so big, and getting so out of proportion, that my worry is that whatever vulnerabilities we have as a society - in the way we operate, in the way we interact with each other - will be used and scaled beyond what we can control. Because we don't have a common taxonomy, or agreement, or maturity as a broader society. And it shouldn't just be the elite 1% of engineers who develop it - everyone else needs to be involved in this conversation.

Michael Bird (04:09):
And how seriously are organizations and, I guess, governments taking these challenges?

Iveta Lohovska (04:15):
I think the traditional way is to try to regulate - the EU AI Act, and Biden and Harris are talking about this. Unfortunately, I think that through lobbying, and through the way GDPR and AI regulation have been handled, we might end up creating or solidifying a very elite sport for the big companies, where smaller startups and different kinds of research labs and universities will not be able to compete. So I'm a bit skeptical of how this is going to end, but I'm definitely pro open source and pro involving the different stakeholders in this conversation.

Michael Bird (04:55):
What would your step-by-step plan be to overcome some of these challenges?

Iveta Lohovska (05:00):
I would say that being a technical expert means knowing our limitations when it comes to building guardrails or fighting bias on an algorithmic level - literally the linear algebra we are building, designing, and trying to solve as the models behind AI systems, whether simplified or more complex ones. The impact and control you have on an algorithmic level is much smaller and more limited than what you have on a data-source level.

(05:31):
So my step-by-step guide would be to address the concerns - the data vulnerabilities, the biases - and to implement AI ethics at the level of the data: how the data gets curated, how it gets sampled, how it gets massaged, before moving to the algorithm. Because you have much more control over, and understanding of, what is going on before it enters any algorithm, really.
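To make that concrete, here is a minimal sketch - ours, not from the episode - of one thing "AI ethics at the level of the data" can mean in practice: auditing how well each subgroup is represented in a dataset, and computing balancing weights, before any model is trained. The "region" column, the 10% threshold, and the toy data are illustrative assumptions.

```python
# Minimal sketch of a data-level bias audit, run before any training.
# The "region" column and the 10% threshold are illustrative assumptions.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str, min_share: float = 0.10):
    """Return each subgroup's share of the data, plus any that fall below min_share."""
    shares = df[group_col].value_counts(normalize=True)
    return shares, shares[shares < min_share]

def balance_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Weight each row inversely to its group's frequency so that
    under-represented groups count equally during training."""
    counts = df[group_col].value_counts()
    return df[group_col].map(lambda g: len(df) / (len(counts) * counts[g]))

# Toy dataset: APAC makes up only 5% of rows.
df = pd.DataFrame({"region": ["EU"] * 80 + ["US"] * 15 + ["APAC"] * 5})
shares, flagged = audit_representation(df, "region")
print(flagged)                                # flags APAC (0.05 < 0.10)
df["sample_weight"] = balance_weights(df, "region")
```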

Michael Bird (05:57):
You've worked on loads and loads of AI projects. What has your guidance been to organizations on those projects?

Iveta Lohovska (06:04):
First, have a data strategy. This will avoid a lot of disappointments and failed projects further along in the process. Also, this is, in a way, technology that's getting tested and built up in front of our eyes as we experience it right now. So be cautious about how you approach the use cases in terms of IP rights, safety, ethical principles, and fairness. All of those words sound a bit theoretical, but they have actual technical implications and ways to be built in.

(06:34):
So have this in mind when approaching any AI use case, especially a generative AI use case, because we see how disruptive it is and how fast it's changing. And no one wants to be left behind. We keep using the example of how AI is going to take over the world, or our jobs, and create monopolies for specific industries - but I think it won't be AI specifically, it will be organizations and people using AI. So the best approach is to embrace it and get comfortable with it, learn new skills, and embed AI into your processes and operations, because basically there's no other choice.

Michael Bird (07:12):
So when you speak to organizations and talk through some of the things they should do, do most organizations listen to and understand the issues that you raise?

Iveta Lohovska (07:20):
Every organization, like ours, has a different culture. Part of my global role is to be in different geos, talking to different customers in different countries. So I would say there are definitely organizations that are risk-averse: they wait to see what their neighbor is doing, and how successful it is, before they adopt. And there are early adopters that want to try and break things, and fail, and move on. Industries like healthcare, finance, and insurance are much more careful about how they implement and build things compared to marketing and retail.

Michael Bird (07:52):
Are there any industries that you're talking to where you're just like, "You need to get on this right now, you need to come up with a strategy"?

Iveta Lohovska (07:57):
Yeah. I wish we could brute-force our way into healthcare and use all of this powerful technology for healthcare and life sciences, because it's truly life-changing if you use it. Unfortunately, because of regulations and many constraints, the healthcare and telco industries are a bit more cautious and careful. But I would say this is the role of governments: to encourage, support, and find ways to treat AI adoption in those two examples as a deep-tech investment. You really need to have a long-term vision, with long-term budgets and long-term support for how you're going to build this up. I think Europe is doing quite well, especially in healthcare, in the way it innovates. But there's a lot more to be done in this space.

Aubrey Lovell (08:45):
Wow. I mean, there's just so much we can talk about in terms of AI and ethics, but that was definitely an amazing start. Thanks so much. We'll be back with Iveta Lohovska in a moment, so don't go anywhere.

Michael Bird (08:58):
All right, then. It is time for Today I Learned, the part of the show where we take a look at something happening in the world that we think you should know about.

Aubrey Lovell (09:05):
And this week it's mining the moon.

Michael Bird (09:08):
Nice.

Aubrey Lovell (09:08):
Now, gathering minerals from space is a hot topic at the moment and is widely thought to be moving from science fiction towards reality. In fact, a number of private and state ventures are already sending landers to the moon to try and work out exactly what's up there that could be valuable to us down here. And that's prompted a wave of concern from astronomers, who worry that we're set to destroy some of the most scientifically valuable spots in the entire solar system. A working group set up by the International Astronomical Union is meeting UN officials in the coming weeks to start negotiations that they hope could lead to some of the first legislation around space mining. This is really interesting.

(09:48):
A spokesperson for the group highlighted that there are craters on the moon that have never seen sunlight, shrouded in shadow since the moon formed billions of years ago. They're some of the coldest and most untouched places we could hope to reach, and incredibly scientifically valuable. But all that could be gone in an instant if a rover arrives on site to prospect for minerals. At the moment, the 1967 Outer Space Treaty prevents nations from making territorial claims on celestial bodies, but says nothing about space mining and the exploitation of resources, according to the journal Science. The IAU group is hoping that will change, though, and that some of the moon's greatest wildernesses will be protected.

Michael Bird (10:32):
Wow. Thank you for that, Aubrey. Very, very cool. Right. So from the ethics of mining in space to the ethics of mining data and AI, which I think you'll agree is a very smooth segue, it's time to head back to our interview with Iveta Lohovska.

(10:48):
So we're entering a period where there's lots of legislation around AI and ethics. In your opinion, is legislation the right route to take?

Iveta Lohovska (10:56):
I'm very opinionated on that, and I think there needs to be regulation. I don't think we'll get it right the first time, so I think we just need to iterate - as we did when the AI Act was paused once we saw what happened with large language models. They weren't part of the AI regulation, so the release of the regulation was delayed just to incorporate them.

(11:18):
So being influenced less by lobbies, and more by the technologists and by the different organizations that are not specifically technical, will help nurture a proper approach. I see different approaches - people who try to regulate it, or who keep it open source. But the open-source mentality - basically, thinking from first principles about what open-source AI implies - is a good foundation to build any kind of regulation on top of.

Michael Bird (11:48):
Is there a risk that, with legislation only happening in, say, Europe or the U.S., the AI innovation will happen in countries where there's no regulation? And that Europe and the U.S. - or whichever countries are introducing AI regulation - will again be playing catch-up?

Iveta Lohovska (12:05):
We have clear evidence that it's stopping innovation. I think there are other factors, though: any AI is still made by people, and that engineering power, brain power, and talent needs to be retained and nurtured in the region or country it comes from. So I would say talent and engineering innovation are more crucial than the actual impact of regulation, but the combination of too many factors could end up in a slowdown.

Michael Bird (12:35):
Do you think organizations and governments fully understand the challenges in creating balanced, reliable AI?

Iveta Lohovska (12:42):
No, and I don't think it's possible to create a single balanced and reliable AI. It's the same principle as universal basic income: we cannot agree on what is universal, and we cannot agree on what is basic in Austria, or the U.S., or the Philippines, because we have different definitions of what people need. And it's the same with AI. So I think we'll have many different kinds of mini AI strategies and tools and platforms, and it's about finding the right way to co-share data and co-share experiences across the different industries, and how we build together. And to try, basically, to be aware of how powerful this technology is.

(13:19):
From my engineering perspective, as a data scientist, I want to invest my time and skills in the right way forward. And that is precision medicine, precision agriculture - everything that incorporates sustainable AI and AI for good. And I think everyone should just pick their area of competence and interest and try to move the needle as a society, because ultimately that is what is going to count.

Aubrey Lovell (13:46):
Thanks so much for chatting to us, Iveta. It's been great. And you can find more on the topics discussed in today's episode in our show notes.

(13:55):
Okay. We're getting towards the end of the show, which means it's time for This Week in History, a look at monumental events in the world of business and technology that have changed our lives.

Michael Bird (14:05):
Now, the clue last week was, "In 1997, this Dolly was Polly." Do you know what it is?

Aubrey Lovell (14:14):
I do know this one.

Michael Bird (14:15):
Do you?

Aubrey Lovell (14:15):
But take it away, Michael.

Michael Bird (14:16):
I thought, as a Brit, it'd be a kind of niche one. But anyway, it is the cloning of Dolly the sheep, which was announced to the world this week in 1997. And I remember - I was a child in 1997 - this was all over the children's news channels. This was a big deal in my childhood.

Aubrey Lovell (14:31):
It was. I do remember that coverage as well.

Michael Bird (14:33):
The same for you? Wow. Okay. Fair enough.

Aubrey Lovell (14:33):
Yeah, absolutely.

Michael Bird (14:33):
Well, Dolly was the work of a team in Roslin, Scotland, who took adult cells from the mammary gland of another sheep and developed Dolly from them. It was the first time a mammal had been cloned from fully adult cells rather than embryonic ones, and it was a major scientific breakthrough. Unfortunately, not all went well, and Dolly was put down after developing several health conditions at around six and a half years old - about half the age a sheep of her breed would normally live to - though no evidence was found that this was related to her being cloned. Dolly had several lambs in her life and is now on display at the National Museum of Scotland. And in case you're wondering, yes, she was named after a certain American singer.

Aubrey Lovell (15:18):
Fascinating. And next week, the clue is, "It's 1935, and beep, beep, beep. I see you." I'm not sure what that is.

Michael Bird (15:30):
Nope. I'm looking forward to finding out next week. Right. That brings us to the end of Technology Now for this week.

Aubrey Lovell (15:35):
Thank you to our guest, Iveta Lohovska. And to you, thank you so much for joining us today.

Michael Bird (15:40):
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird. And this episode was produced by Sam Datta-Paulin and Al Booth with production support from Harry Morton, Zoe Anderson, Alicia Kempson, Alison Paisley, Alyssa Mittrie, Kamila Patel, Alex Podmore and Chloe Suewell.

Aubrey Lovell (15:55):
And our fabulous social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino. And our social media designers are Alejandra Garcia, Carlos Alberto Suarez, and Ambar Maldonado.

Michael Bird (16:07):
Technology Now is a Lower Street production for Hewlett Packard Enterprise, and we'll see you next week.