HPE news. Tech insights. World-class innovations. We take you straight to the source — interviewing tech's foremost thought leaders and change-makers who are propelling businesses and industries forward.
KAY FIRTH-BUTTERFIELD
I happened to sit next to the CEO of an AI company on a plane journey for 10 hours.
I was one of the few people in the world he could have talked to about that. So serendipity made me the world's first chief AI ethics officer. Wow. Okay. Always talk to people on planes is the moral of that.
MICHAEL BIRD
That was Kay Firth-Butterfield, CEO of Good Tech Advisory and our guest for this week’s episode of Technology Now.
SAM JARRELL
And the world’s first AI Ethics Officer? That’s pretty cool
MICHAEL BIRD
Yes, yeah, absolutely, and as she said in the intro, the moral of the story is: always talk to people on planes. Now, on one of the last flights I was on, I was in a middle seat and two blokes fell asleep on my shoulder and took all the armrests. Anyway, today we are going to be diving deep into the ethical world around AI.
I’m Michael Bird
SAM JARRELL
I'm Sam Jarrell
MICHAEL BIRD
And welcome to Technology Now from HPE.
MICHAEL BIRD
The speed at which AI has advanced has been both a blessing and a curse. We’ve seen its use in medical discoveries, we’ve seen it assisting organisations around the world to increase their efficiency and productivity, we’ve even seen its use in robotic bees used for crop pollination.
SAM JARRELL
But AI is also advancing faster than regulation can keep up.
MICHAEL BIRD
Yeah exactly. The fifth point in the 2025 AI Index Report from The Stanford Institute for Human-Centered AI stated that the responsible AI ecosystem is simply not evolving evenly across the sector and that despite AI-related incidents being on the rise, responsible AI evaluations remained rare. There was, however, acknowledgement that while the area is not evolving fast enough within industry, governments are showing far more urgency when it comes to responsibility in AI with regulation starting to catch on to new and innovative uses of AI as they emerge.
SAM JARRELL
So how do we keep AI use responsible then?
MICHAEL BIRD
Well, obviously we have our own moral compasses, right? But from a legislative perspective, we have experts in the topic of AI ethics who advise governments and organisations on policy. People like Kay, who you heard at the beginning, who talked to me about the complexities of responsible AI usage.
SAM JARRELL
But before we talk to Kay, let's take a look at one of those more novel uses of AI which has appeared in the past few years.
It’s time for…
Technology Then.
SAM JARRELL
This week, we aren’t going too far back in time: only three years in one case, and around nine months in another. While AI ethics on film and TV might have been around since the 1950s, it has only been a practical topic for a couple of years, because today I’m going to be talking about the use of AI in the courtroom.
Both of the cases I want to cover made global news.
So back in 2023, a lawyer admitted to using an LLM for legal research after it came to light that the filing they submitted referenced multiple legal cases which didn’t exist – they were AI hallucinations! The mistake was not malicious, however it was described by the judge as an “unprecedented circumstance” which is… quite scary really.
But while this highlighted the dangers of AI in a courtroom, I want to focus on something more recent, and something which I would firmly put into the heart of ethical uses of AI:
In 2025, the family of a murdered man gave consent for an AI deepfake of the victim to appear in court and give a victim statement.
The victim statement presented in court was written by the sister of the deceased, and it wasn't shown in front of a jury, just the judge during sentencing. Importantly, it also wasn't submitted into evidence. But even so, questions about the ethics of using an image of a deceased person to create an AI for court were immediately discussed by experts around the world, given the precedent it could set for similar uses in the future. Michael, I'd love to know your views on this sort of novel use of AI.
MICHAEL BIRD
This sort of stuff can potentially save time and save resources, but I think the thing we have to be careful about is, you know, hallucinations, or bad data that's gone in, so that there might be some sort of biases within the AI models. But if some of those challenges can be addressed, then I think this sort of stuff can potentially be quite useful. What about you?
SAM JARRELL
Personally, I'm honestly a little bit against it, in the sense that, like in this case, you can't assume to know what a person would think or feel. Even if you are their relative, or if it's trained on data about them, humans are unpredictable, and AI just goes off the data that is there. Even if it seems that someone feels one way about something, in the actual heat of the moment, when they're presented with a situation, they may feel or think differently than we expect. To me, it would make sense to provide statements as a sister, but it's a bit presumptuous to then present as the actual person.
MICHAEL BIRD
Yeah, yeah, it's very true. Do you know what? I never knew what true randomness was until I had children, and my goodness, I have no idea why they do certain things. It's completely random. Anyway, Sam, we all know that AI has led to many paradigm shifts across society.
So to find out more about how we can use it responsibly, I took the opportunity while at Davos the other week to talk to Kay Firth-Butterfield, CEO of Good Tech Advisory LLC, all about what ethics in AI actually means.
KAY FIRTH-BUTTERFIELD
Well, in those days we were calling it ethics. But now we've rather moved to responsible AI or trustworthy AI, because when we started talking about it internationally, you get into this: well, whose ethics are we talking about? Whose ethics do we want to promote? And yet, when we did a survey at the World Economic Forum of all the national AI strategies and other documents out there around ethical AI, it turned out that everybody was worried about the same things.
But they were addressing them in different ways. And so that's how, as I say, we now talk about responsible AI. I actually have moved to 'let's use AI wisely', because it takes all those pejoratives of responsibility or ethics out of it and just says, well, you know, here's a tool that acts upon us. Let's use it wisely.
MICHAEL BIRD
So does this sort of boil down to the fact that the AI genie is well and truly out of the bottle? I think it'd be fair to say, is this something we now just have to live with and try to figure out how we coexist with?
KAY FIRTH-BUTTERFIELD
We do have to learn how to coexist with it. But actually, you know, AI is simply a tool. Without humans, without our data, and without us using it, it just wouldn't exist.
So I think we have to put it into that context. We are going to be coexisting with AI from birth to death, and we are not at the moment ready to do that. We don't have anywhere near enough AI literacy amongst the population, and unless we have that, I think we might lose some of the good things about AI, whilst people worry, probably rightly, about losing their jobs, or teenagers fall in love with AIs, which is bad for them and for humanity.
MICHAEL BIRD
So, to some extent, understanding this problem is vital to the survival of our species.
I might be overstating the problem slightly, but…
KAY FIRTH-BUTTERFIELD
Well, you could be, but actually I was in Munich at a conference, and the head of the Kinsey Institute actually said that humanity was in a crisis, right?
And so maybe you aren't over-egging the pudding. I think what he meant by that is that we are now extraordinarily lonely.
But where does AI fit into that? I think the problem with it is that AI provides us a very easy way of dealing with loneliness. You can talk to a chatbot, and one of the problems of talking to a chatbot is that the chatbot will never, never challenge you unless you ask it to.
It will always be nice. It will always seem to think well of you, or pretend to, because of course it doesn't think or care or love or any of those things. So it's always there for you. And we humans are much more difficult than that. And so it's just easier to talk to the chatbot.
MICHAEL BIRD
Very interesting point. To some extent, some AI chatbots can be slightly sycophantic, can't they? They can just sort of say yes to everything, which isn't necessarily what you need.
KAY FIRTH-BUTTERFIELD
Yeah, absolutely. And you know, when we look at what we call smart toys, or AI-enabled toys, for the under-sixes, one of the things that we are particularly worried about is that at that age, where all of your beliefs and values are being created, you are engaging with an AI that your parents are not monitoring.
It's always nice to you. And then you go to school and those rotten humans make you cry in the playground. And so, you know, what then happens to our society if we choose our AI companions over people? So AI literacy is so important.
MICHAEL BIRD
Yeah. Okay. So when we talk about coexisting with AI, what are we actually talking about here?
KAY FIRTH-BUTTERFIELD
Things that we will see are, as they say, AI-enabled toys for our children that we have to monitor and understand. AI in education: we are already seeing studies that show that if we use AI too much, we actually become less educated and less able to think critically. So we have to understand how much AI to use, and where to use it.
And obviously in medicine. I often tell this story: I had breast cancer in 2023, and my oncologist, when she found out what I did, said, oh, wouldn't it be great if we could have an AI to walk you through your journey with cancer?
And I said, well, let's put AI into your back office where it belongs, and let's leave you, the human, talking to me, the human, about my journey with cancer.
MICHAEL BIRD
It sort of feels like that misses the point of why people want human interactions.
So I guess maybe what we're talking about here is letting humans do the things that humans are good at, and letting AI do the things that AI's good at.
KAY FIRTH-BUTTERFIELD
Absolutely. But without the conversation happening amongst everybody, we have things like, you know, surgeons saying: we could let AI tell somebody that they are dying instead of us telling them, because it's just a script that we learned in college.
I personally don't think that's the right way we should be coexisting with AI.
MICHAEL BIRD
Well then, whose job is it to lead this process of deciding what AI does and what humans do?
KAY FIRTH-BUTTERFIELD
It falls on all of us, but obviously it should start amongst the foundational model providers. And it's worrying to see that we still have hallucinations, bias, problems with data privacy, problems with accountability, explainability, all these things, because they can't correct them.
And of course, you know, why can't they correct them?
Well, AI is built by imperfect people, on imperfect data from imperfect people. So it's no wonder that it's riddled with all these problems.
MICHAEL BIRD
Uh, and do you think there will ever be a perfect model?
KAY FIRTH-BUTTERFIELD
What we have is a tool built on human frailties, designed to repeat human behaviour, and used by humans.
That's not to say that it won't do fantastic things in various areas, like science. But with coexisting, we humans have to understand the problems around it and work around those problems. It isn't a magic wand, and the hype is beyond hype.
MICHAEL BIRD
Yeah. Okay. So why do you think there is a disconnect between the jobs that an everyday person wants to see AI doing and the ones it appears to be doing now?
KAY FIRTH-BUTTERFIELD
I think because the tool that has been built is very good at doing things like podcasting, or writing emails, or writing speeches.
So it's really good at doing those things. It's also good at faking empathy, which is really worrying. But, you know, human beings want it to do different things, and they want to see it regulated. 88% of people in Britain actually want to see AI regulated, and it's in the seventies in the US.
MICHAEL BIRD
I mean, do you think there's a balance that governments are trying to strike? If you regulate too much, then you stifle innovation, and other countries will leap ahead because they won't have regulation. Versus, if it's not regulated at all, then, you know, who knows what could happen.
KAY FIRTH-BUTTERFIELD
Well, first of all, I don't think that regulation stifles innovation. I drive a sports car. I would not be driving it unless somebody had regulated that it had good brakes.
Those safety measures have required innovation. So I don't think it's true to say regulation kills innovation.
MICHAEL BIRD
And when we say regulation, what do we actually mean by that, and how does that practically look to organizations?
KAY FIRTH-BUTTERFIELD
Well, I think that regulation would look like safety rules for cars, for example.
And that's what it would mean for organizations. I spend a lot of time, as I said, working with Fortune 100 companies, and what they need is clarity. Companies work better when they know the rules that they're working to.
MICHAEL BIRD
It sort of feels a little bit like the rise of social media back in the day. Do you feel like AI is maybe at the beginning of that curve, where at the moment the attitude is: this could solve everything, and I can't see a single downside?
KAY FIRTH-BUTTERFIELD
Well, undoubtedly that's right. Tristan Harris said, back in 2023, I think, that our first encounter with AI had been social media, and that hadn't gone very well.
And, you know, what have we learned to carry into this second encounter with AI? Well, not very much.
How can we see all the risks? That in itself is one of the problems that I think we see for regulation and for human understanding. As humans, we are just not good at looking risk in the face.
MICHAEL BIRD
Yeah, yeah, that's very true. And I guess to some extent we're still in the quite early days of these AI models.
We still don't fully understand how people could use them, and where they might use them for good, and maybe for bad.
KAY FIRTH-BUTTERFIELD
If we take health, you know, there's a company that has recently said: upload all your medical records and then we can help you with your medical questions.
In one way that's amazing, because if you were in Rwanda, for example, where there's one doctor for every 27,000 people, that's an amazing tool.
But there are very few people in Rwanda who can use it, and it's deeply invasive and completely unprotected with your data.
MICHAEL BIRD
Yeah. Interesting. Yeah. So, how can AI be used alongside people to assist them with work?
KAY FIRTH-BUTTERFIELD
Well, I think the first thing you have to do if you're a business is train your employees to understand AI. A lot of businesses have just said: you must use AI, and how much you're using AI is going to be one of the things in your performance evaluation.
That has led to this thing we call workslop, where people are using AI but haven't been trained on it, so they're getting it wrong, and then somebody else has to spend up to two hours making it right.
I think in a lot of situations, training actually calms nerves about AI taking your job, and it enables you to really say: okay, I can have AI write my emails, but I need to check them. And one of the things with hallucinations, and we've seen it in the news, is you cannot put out a report without checking that it hasn't got hallucinations in it.
This is a tool that can really bite them back. So: training, training, training.
MICHAEL BIRD
Kay, thank you so much for your time.
It's been a real pleasure chatting to you.
KAY FIRTH-BUTTERFIELD
Likewise.
MICHAEL BIRD
Thank you
SAM JARRELL
Wow, well, I mean, this just reinforces that AI is a tool, first and foremost, and we have to be really careful about its applications. I don't know how you use it at work, Michael, but for me, I actually quite enjoy it: I made my own AI agent.
It's aptly named Somewhat Sam, because sometimes it does somewhat get things wrong. And it's important to just double-check all of it.
MICHAEL BIRD
Hang on, Sam, is it Somewhat Sam that I speak to when I send you instant messages?
SAM JARRELL
You'll never know, you'll never know. I was curious about something, though. So in this discussion, she was talking at one point about AI toys for kids, and Michael, you're a parent, so I'm really curious about your thoughts on these AI toys.
MICHAEL BIRD
I mean, this goes into the realm of parenting decisions, but I am really careful with the sorts of media my children consume, so I have a slightly more curated approach to things. That's not to say they can't have any, and my kids are quite young, so I think that's probably what a lot of parents do.
I have the same concerns about critical thinking, and I'm sure you've experienced, spending time with chatbots and LLMs, and as I think I said to you, they can be quite sycophantic.
We have to sort of build up that literacy and understand what AIs are great at and what they're not so good at.
SAM JARRELL
Yeah, I agree with you. For people who don't have a lot of experience with some of these tools, and maybe who also don't have as much sense of identity or positive self-talk, I worry about the sycophantic nature of AI and the negative impacts that can lead to things like AI psychosis.
And I worry about people who have mental health issues potentially falling down these kinds of rabbit holes and becoming convinced that this stuff is real, right?
MICHAEL BIRD
Yeah, and I think it really comes back to the two things that Kay mentioned: understanding what AI is really good at, and understanding what AI is not so good at. So there's the example of the surgeon or the doctor saying, oh, we could just use AI to tell a patient that they've got a terminal illness, and Kay's point being: no, that's what humans are great at, that's the thing that patients need. Whereas AI is really good at spotting patterns, so analyzing, you know, scan data en masse or whatever that would be. Because, as she says, there are hallucinations, bias, accountability, responsibility. So yeah, it still feels like we're on the cusp of the AI revolution.
SAM JARRELL
If AI is going to accelerate, then the safeguards have to accelerate too. I get not wanting to stifle innovation, but we can't cut corners with regards to safety, because the results could be pretty catastrophic. That's how important the safeguards are.
MICHAEL BIRD
Yeah, and actually, I mean, I'm a big fan of motorsport, and a lot of motorsport is built on regulations. Actually, innovation comes from those regulations, because you have to be creative.
MICHAEL BIRD
So, in a full-circle moment, I actually asked Kay whether there is anything we should be worried about regarding trustworthy or ethical AI, and Sam, I think you might find her answer a little familiar!
KAY FIRTH-BUTTERFIELD
I think one of the things that really worries me is the extent of hallucinations and deepfake evidence that's coming into court. So, for example, we're seeing deepfaked medical reports in personal injury cases.
We are seeing deepfaked pictures of car crashes for claims. And it's really difficult to control. And at the moment, the onus is on us lawyers to go through the whole set of legal documents, from our side and the other side, to seek out those hallucinations.
You have to have your spidey senses out for fakes, and we humans are not really built to see them.
SAM JARRELL
Okay that brings us to the end of Technology Now for this week.
Thank you to our guest, Kay Firth-Butterfield
And of course, to our listeners.
Thank you so much for joining us.
MICHAEL BIRD
If you’ve enjoyed this episode, please do let us know – rate and review us wherever you listen to episodes and if you want to get in contact with us, send us an email to technology now AT hpe.com and don’t forget to subscribe so you can listen first every week.
Technology Now is hosted by Sam Jarrell and myself, Michael Bird
This episode was produced by Harry Lampert and Izzie Clarke with production support from Alysha Kempson-Taylor, Beckie Bird, Alissa Mitry, and Renee Edwards. Our theme music was composed by Greg Hooper.
SAM JARRELL
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman and Jacqueline Green and our social media designers are Alejandra Garcia, and Ambar Maldonado.
MICHAEL BIRD
Technology Now is a Fresh Air Production for Hewlett Packard Enterprise.
(and) we’ll see you next week. Cheers!
SAM JARRELL
Bye y’all