Futures Conversations

Professor Shannon Vallor, Director of the Centre for Technomoral Futures and author of "The AI Mirror," explores how her background shaped her approach to research on the ethics of AI. For her, AI is a mirror to society, and she argues that we need to look to humans, not machines, for how to adapt to this emergent technology.

What is Futures Conversations?

“Futures Conversations” brings you thought-provoking dialogue showcasing the intersections of disciplines, ideas, and possibilities being tackled at the University of Edinburgh's Edinburgh Futures Institute.

[Electronic beat]
[Enda:] Welcome to Futures Conversations,
the Edinburgh Futures Institute podcast
that showcases all the wonderful research
taking place
at the Edinburgh Futures
Institute.
Research at the Futures Institute
is challenge-led and interdisciplinary
addressing many of the greatest challenges
we face in the world today.
[Electronic beat]
I'm your host, Enda Delaney,
Director of Research at EFI.
And in this episode,
I'm joined by Professor Shannon Vallor,
who's Baillie Gifford Chair in the Ethics
of Data and Artificial Intelligence.
Welcome, Shannon.
Just to get our discussion going,
could you tell us a little bit
about your background, where you grew up?
What inspired you?
Who inspired you as a teacher?
What values were important to you,
and your wider social group?-
[Shannon:] So, I grew up in a suburb
of the San Francisco area in California,
in a working-class household.
My father was a civil servant at a
naval station nearby,
where he calibrated ship instruments.
My mother was initially
a stay-at-home mom.
Later, as I got older,
she went to work as a PBX operator
at a hospital, manning the phones.
No one in my family had been to college,
but my grandfather,
who probably had the greatest influence
on me growing up,
was a frustrated scientist, shall we say.
He had never been to university,
but he was a scientific glassblower.
And he worked in
the physics labs at UC Berkeley,
making cloud chambers and
things that were used for,
you know, some of the most innovative
experiments of the 20th century.
And he was fascinated with science and
always wanted me to
love science, to be a scientist.
So I was encouraged academically
from a very young age.
I was also just transfixed by science fiction,
by technology.
I was obsessed with airplanes.
Wanted to be a pilot.
Actually, it’s one of the first
ambitions that I had,
until I was told
that you had to be in the military
to be a pilot, that even to become
a civilian pilot, you normally
had to get there through the military.
And that, as a woman,
I wouldn't be allowed to fly,
which at that time
was still probably true,
although it wasn't true
for much longer after that.
So I abandoned that particular goal.
But, you know,
I had the kind of traditional
nerd kid’s ambitions of being
a pilot, an astronaut,
which, again,
I was dissuaded from for similar reasons.
I was obsessed with Star Wars.
I saw it, you know, in the theater
when I was six years old,
and it was particularly influential,
not just in terms of another fascination
with technology and, you know,
the first encounter on film with robots
and mechanical minds or beings,
but also because one of the central characters
was a strong woman, who was a leader
and not just a sort of passive victim,
as was the trope in most stories of that nature.
In my early schooling, I was very interested
in science and mathematics,
but I began to kind of turn away from
that towards the end of high school.
And I got much more interested in
the science of the human mind.
I was really interested in psychology,
kind of prefiguring my later interest in AI.
I was very interested
in how the mind works.
So I went to community college,
which in California is a kind of public,
two-year path that allows you to get
actually a very good first two years of
college education, then transfer
to a four-year university after.
And because it's designed
for the kind of student I was,
the courses are often offered at night
to allow students to work.
So I worked full time from the time
I was 17
until I went to get my PhD, actually.
So I worked full time all the way through
my undergraduate degree, transferred
to California State University,
still a psychology major at that point,
but becoming dissuaded
because the psychology department there
was really more focused on kind
of training psychologists and counselors.
They didn't have, you know,
a strong program in
cognitive science or neuroscience,
the kinds of things
that actually interested me more.
But when I got to the four year
university, the first thing I did was take
a general education requirement course
in ethics in the philosophy department.
And I didn't have any affection
for philosophy at that point.
So I took this philosophy course
because it just met a requirement
that I needed to graduate,
but I fell in love with it.
It was also a night class.
I fell in love with it the first night.
I changed my major
within two weeks of starting that course,
which is very difficult
because I was already two years through,
so I had to radically refigure
my entire schedule
and catch up on all of the requirements
in philosophy in just two years.
So it was really challenging,
but I managed to do it.
I was also very driven at that point,
though it took me a long time to realize it,
by moral questions.
And philosophy was also the discipline
where you could ask moral questions
about how we should live
and what we owe to one another.
I ended up focusing
in the philosophy of science,
so kind of combining my earlier interests
in philosophy of science
and philosophy of mind.
And then in graduate school
at Boston College, I took a course
in the philosophy of technology,
which at the time really wasn't
a common
offering in philosophy departments.
It wasn't an area of study
that was well developed at that point,
particularly in the United States or in
any English-speaking part of the world.
There were some places in Europe
that focused on philosophy of technology,
mainly in the Netherlands.
But in the U.S., it was very difficult
to explore that area.
So I was very lucky
that I happened to take that one
elective course.
And it just brought everything together.
You know, once
I ended up at Santa Clara University,
back in California, teaching,
I was teaching the philosophy of science.
And I developed a course essentially
called Science, Technology and Society,
but it was a philosophy course.
I taught a unit on the ethics
of social media in 2006,
and my students just absolutely
threw themselves into it,
and were desperate to talk at that point.
And so I realised that the ethics
of the new digital technologies,
the ethics of social media platforms,
the ethics of data
were becoming increasingly important.
And my work focused on that.
But of course,
I had always had a fascination with
AI and robotics,
so I was working on that as well
back in, you know, 2008, 2010,
and it made me very well positioned
for when AI became a commercial reality,
because I'd been working on questions
around what it would mean to have machines
that perform the tasks
that we think of today as intelligent.
So that's been
my focus really ever since.-
[Enda:] Do you think your different
paths to academia,
do you think that informs the way
in which you conduct your research, lead
your teams?
Is there-
[Shannon:] Yeah, absolutely.
One of the things that it did was,
I think, remove for me
the desire
to stay within a traditional discipline
and earn the kind of status points
that you get by following
the rules and norms of one discipline
in a very rigid way.
But because I had that wandering path,
I was never fully satisfied
with a particular disciplinary approach
or a particular set of conversations
that would only happen
within a single discipline.
So, I was always interested in bringing,
for example,
moral questions and scientific questions
together,
questions about politics
and questions about technology together.
And because those
combinations of questions
were always at the front of my mind,
it really drove me towards
wanting to explore those questions
without being hemmed in.
So I stayed within the disciplinary box
just enough,
to get tenured
and quickly promoted to full professor.
And then the minute an opportunity came
to break out of that disciplinary box,
I jumped at it.
And that was the opportunity here
at the University of Edinburgh
to come to the Edinburgh Futures Institute
and, create a new-
a new interdisciplinary research programme
in the ethics of data
and artificial intelligence.
That moment was really the fulfilment
of what I'd long sought:
the opportunity
to work on the
questions that fascinated me,
and pull in whichever kinds of expertise
were needed to answer those questions.
So to lead research groups
that had social scientists,
that had computer scientists,
that had
human-computer interaction specialists,
that had law scholars,
that had philosophers, right?
Whoever's needed on board
to answer the question,
that's who needs to be in the room.
And, um,
it’s something that's very difficult to do
within traditional academic structures.
And you have to have a lot
of institutional support and resources
to make it work.
That's what-
that's what EFI has been able to accomplish.-
[Enda:]
I have to ask you,
you moved halfway across the world,
in the middle of a global pandemic,
to take up the new chair,
the Baillie Gifford Chair
in the Ethics of Data and AI.
What was that like?-
[laughs]
[Shannon:] Well, the funny thing is that
I didn't know it was a global pandemic
until, actually,
about halfway through the trip.
So we flew from San Francisco to New York,
and it was at JFK airport
that things were getting very,
very bad, very, very quick.
And it was at that point that I realised
that we might be
heading for a very scary time.
But we were, of course, already committed.
And yeah, I arrived about, I don't know,
two or three weeks before lockdown.
So it was a bit of a frantic period
as the situation
escalated or devolved,
depending upon how you want to look at it.
And we had to
very quickly
get out of our temporary accommodation
and find a permanent place
to hole up during lockdown.
So it was a very stressful period.
But it was also in many ways for me,
kind of proof
that I'd made the right decision
because the community here, even though
after the first couple of weeks
I only saw people online, the community
here was so welcoming and supportive
from the very beginning.
And we had such a wonderful, oddly enough,
experience getting the Centre
for Technomoral Futures off the ground.
And we had our first five PhD students
selected
within a few months of my arrival,
and they came to the university
starting in September of 2020.
And from that point forward, you know,
it felt like the Centre for Technomoral Futures
was an incredibly close-knit family,
because those PhD researchers
needed each other
from the very beginning
as a source of mutual aid and support.
You know, at the time, there was,
of course, no EFI building open,
and had there been,
we wouldn't have been allowed to be in it.
So we used our online platforms
to try to create a supportive space.
We had students coming in who,
you know, were working
across the Business School
and Informatics.
We had students that were coming in
to work across, you know, philosophy
and psychology,
or education and sociology.
And so they all needed to find a way
to knit themselves together
into a group.
It was a really wonderful experience
to see how quickly that happened.
So it was a very
challenging first year, but one
that gave me
great confidence that this model would work,
and that there was something here
to build on for the long term.-
[Enda:] In a reasonably short period of time,
less than four years,
the Centre's really established itself
internationally, from a reputation
fairly much limited
to Edinburgh beforehand.
How do you account for that?-
[Shannon:] Um, a couple of things.
I mean, I think it helps that,
before coming here, you know,
I had developed a pretty wide
research community in the US and Europe.
So bringing that to Edinburgh meant that
the Centre immediately had connections
to a kind of wider ecosystem
of researchers in AI and data ethics.
But the other thing I think that's
significant is I came here because
the Centre for Technomoral Futures
could be the thing that I had always said
needed to exist.
And my colleagues in the field,
we'd all sort of been saying,
there needs to be some kind of programme
that's actually
training the next generation of AI
and data ethics researchers,
and that does so without being tied
to a particular disciplinary lens.
So there was a hunger already for the thing
that the Centre for Technomoral Futures
was created to become,
particularly as AI technologies
accelerated in their commercial,
and social impact.-
[Enda:] One of the things
I've been really interested in,
in learning about you and your work,
is the interactions that you have
with engineers; you spent time at Google,
you work with people
who have quite a different approach to AI
in terms of designing it.
How would you
characterise your interactions with people
who are involved
in designing this type of technology?-
[Shannon:] I mentioned that, you know,
my family was very supportive of
my interests, particularly with respect
to science and technology.
And even though they had
very little money, and
I still don't know how they did it,
they actually bought me
one of the early personal computers
and put it in my bedroom.
This was in the early ‘80s.
Right, I was programming games in BASIC,
the, you know, early computer language,
and learning
the possibilities of coding
and what you could do with computers.
So I was always aware that
computing and software
engineering were potentially
as creative an endeavour
as anything that I would do as a
humanist or philosopher.
That was met with an openness
by the first
machine learning and AI researchers
that I started working with,
and that, as you mentioned,
was at Google, actually.
I had interacted with people in AI
and computing before that.
But back in 2015, 2016,
those communities still hadn't
really formed
into the really large communities
you have now.
We have conferences like the FAccT
Conference and the AIES Conference
that happen every year, that are sponsored
by large computing societies
but that bring together
computer scientists, machine
learning researchers, philosophers,
lawyers, social scientists.
That's a pretty mature ecosystem
right now.
But my interactions with computer
scientists at that point were scattered:
interesting, but scattered.
I was brought into Google by Fei-Fei Li,
who at the time was the
senior research scientist in Google Cloud.
And Google had just formulated
a set of AI principles in 2018
that it had said it would use
to kind of govern the responsible
development of AI internally.
And Fei-Fei had met me
at a keynote that I gave at her lab's
conference at Stanford,
I think the year before.
She invited me to come to Google,
at first just on a temporary contract
basis, to advise them
a little bit about how to apply these
AI principles in practice.
At first, I was only there, you know,
maybe one day a week.
But very quickly,
something really remarkable happened.
The first day I arrived at Google,
I had people coming up to me
that I'd never met saying,
we are so glad you're here.
We're so excited that you're here.
And I had this expectation,
as many philosophers did
at the time, that, you know,
when you walk into a technical space
as an ethicist, you're going to be viewed
with suspicion.
You're an outsider, you're an intruder,
and you're there to tell them what not to do, right?
But there was something about Google
at that time that was so receptive
to thinking about how to do this work
responsibly,
that we had all kinds of machine
learning and AI folks
wanting to engage
with the work that we were doing,
thinking about how
AI product development could be done
responsibly from the ground up,
thinking about how you would allow
the engineering
teams to work with
the social scientists
and the ethicists
and the legal folks
to actually do something that would allow
for a trustworthy
and socially beneficial product
to actually emerge on the other side.
That experience was so positive.
I remember, very shortly after I started
going half-time at Google,
my husband commented that on the days
I would come home from Google,
I always had a smile on my face.
It wasn't a perfect experience,
and there were struggles
and there were challenges and obstacles,
and some of the things that we see
today in the tech ecosystem that prevent
responsible AI work from really happening
in industry the way it needs to, you know,
those barriers were there.
But overall,
my experience was most technologists
and most people in the AI field
really want to build something good.
They really want to build something
that helps people and doesn't hurt them.
And that should be obvious, right?
Everybody wants to use their talents
for the most part,
to make the world
a slightly better place, at least.
So you don't have to have people
who want to save the world.
You just have to have people
who want to be proud of their work.
And that's almost everybody.
So I think I learned from that experience
that working with
technical folks is
as rewarding as working
with other social scientists or humanists,
as long as you can cross
that barrier of translation where you find
a common language to talk about
what it means for technology to be good
or responsible or beneficial.
It's about reaching a shared understanding
of what that has to mean.
I found it incredibly challenging,
but incredibly rewarding
to learn to navigate those
translation gaps.
That really positive experience
that I had at Google for those two years
really is what gave me both the desire
and the skills
to work with technical experts here
at the university, to work with the people
in the School of Informatics who lead on
AI and robotics and machine learning,
to kind of figure out how to integrate
the ethical dimension
into technical practice.-
[Enda:] Just moving on to your wonderful new book,
‘The AI Mirror’,
which is beautifully written and doesn't
shy away from difficult concepts.
I think some of the concepts
in the book, be it virtue
or others, are quite difficult concepts.
The sort of core argument of the book
is that AI is a mirror of our society,
with all the issues,
fault lines, problems and challenges.
Could you tell us a little bit
more about that sort of central argument?-
[Shannon:] Sure.
And I would say, you know,
the argument is that it's a mirror,
but a partial and distorted mirror,
right?
So one of the key points
that I make early in the book
is that the mirrors that we
are building
with these current AI systems
don't reflect humanity as a whole.
They reflect a particularly privileged
subset of humanity
that have their data stored on servers,
that have their writings digitised.
You know, the people
who've historically been given the license
to write and create,
you know, which are predominantly men,
predominantly wealthy,
predominantly in the global north.
Right?
So if you look at the digital corpus
that's used to train these models,
it's overwhelmingly representative
of a fairly small subset of humanity
as a whole.
But these mirrors
then are taken to represent us all,
which is part of the danger.
The mirror metaphor is a way of providing
a better understanding of
AI than the default that you often see
in media representations of AI today,
which is to think of
AI as another kind of mind.
As a factual claim,
that's just plainly false.
But even as a metaphor,
it's a terrible metaphor.
If you use the metaphor of a mind
to try to understand what AI systems do,
you will actually end up with a pretty
misinformed
view of how AI systems are developed.
So the mirror metaphor provides
a much better basis for understanding
what AI is, what it's good at,
what it's not good at,
what it does, what it doesn't do,
what we can expect from it,
and how we can use it
in ways that are,
in the long run, compatible
with human flourishing.
The book is really focused
on the kinds of AI tools we're building
today, like the GPT and Gemini models,
the large language models
and generative AI tools
that are built on large
volumes of data about us.
If you think about the way mirrors work,
there are three fundamental components
to a mirror.
You need a reflective surface,
and the physical properties
of that reflective surface
need to be very specific:
the physics of the mirror coating
and how that glass surface is polished
determine
how the mirror performs, how it reflects.
So in AI you need an algorithmic surface
that's been prepared in a certain way,
because that is, in fact, the part
of the metaphor for the algorithm.
The algorithm is the surface,
like the mirror surface.
And the properties of the algorithm
determine how the model performs.
So that's one part.
The second thing that you need
for a mirror, of course, is light,
because the reflective surface
doesn't do anything
unless light is directed towards it.
And of course we know what the light is:
in the case of AI, it's the data.
So instead of shining wide
beams of powerful light on glass,
we shine vast
troves of human generated
data on these algorithmic surfaces.
And it's the patterns within
that data that are processed,
analyzed by the algorithmic surface
and reflected back.
And the third thing,
of course, is the image.
And that's, of course, the output.
It's what you get from the model.
So when you talk to ChatGPT, what you're
really doing is talking to a mirror.
The surface, if you will, of that
mirror is the algorithm
that is responding to what you say.
Your prompts, your questions are the light.
But of course, the surface of that
algorithmic mirror has also already
been trained on a large volume of light
that has determined
its shape and function, and that determines
how it will respond
to the data that's now coming from you
in the form of a prompt.
And then what you get back,
the answer you get,
is actually just a reflection:
a modification of the pattern
that has already been analysed in the data,
coming back at you
slightly modified,
with some kind of randomness added
to make the answer sound a little bit,
you know, novel and surprising.
But that's how generative AI works.
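A minimal sketch of that last idea, using a toy vocabulary and invented scores rather than anything from the episode: the learned pattern gives each candidate next word a score, and a bit of temperature-controlled randomness decides which one comes back in the reflection.

    # Illustrative sketch only: the vocabulary, scores and temperature are invented.
    import math, random

    def sample_next_word(scores, temperature=0.8):
        # Turn the model's raw pattern scores into probabilities (a softmax);
        # the temperature controls how much randomness is mixed in.
        scaled = {word: s / temperature for word, s in scores.items()}
        total = sum(math.exp(v) for v in scaled.values())
        probs = {word: math.exp(v) / total for word, v in scaled.items()}
        # Pick the next word at random, weighted by those probabilities.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Hypothetical scores the "mirror" might reflect back after a prompt.
    print(sample_next_word({"us": 2.1, "nothing": 1.4, "everything": 1.0}))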
And this is a really important metaphor,
because
when you look at yourself in a mirror,
you know there's no second person
on the other side of that mirror.
You know it's just a reflection.
And reflections have different properties
than the things they reflect, right?
You know that that reflection doesn't
have the warmth of your body.
Right?
You know that it doesn't have depth,
that you can't kind of press on it
and have it mold, in the way
that if you press on your actual face,
it will conform to your finger,
whereas if you touch the same spot
on the mirror, it won't.
So when you're talking to a generative
AI model,
it really helps to understand it
in that way.
It's reflections of thoughts.
It's reflections
of human voices, it's reflections of mind,
it's reflections of intelligence.
And just as the reflection of a body
is not a body and has very different
properties, a reflection of intelligence
is not intelligent
and it has very different properties.
So once we understand this metaphor,
we are inoculated, as it were,
against making a lot of the mistakes
that we're led to make
when we're manipulated by marketers
into thinking
that these tools are actually new forms
of intelligence cropping up in the world,
that we can talk to, that we can fall in
love with, that we can get good advice
from, that we can be comforted by,
that we, you know, can be understood by.
Those are all incredibly distorted ways
of describing what AI does.
AI can't understand you.
It can speak to you in the way
that a mirror image can, you know,
speak to you, right?
You can talk to yourself
in the mirror, but you know there's
nothing on the other side
understanding you. In the same way,
an AI tool is providing a kind of
modified, composite reflection
of human intelligence and understanding
rather than something
that actually has understanding.
In the book I go through
a lot of different things that mirrors do:
they reveal, they magnify,
but they also distort,
they also occlude.
And it turns out that AI mirrors
can be described in all the same ways.
We can talk about
the things that they magnify
and amplify, which can include things
like unfair bias or,
you know, disinformation.
We can talk about what they distort,
the ways that
they can kind of take reality and twist
its shape and feed it back to us
in a way that can be either entertaining
or quite harmful and dangerous.
And the ways that AI tools
can kind of occlude our vision,
so they can show us
only one side of something,
just in the way that a mirror shows you
only the front side
of what you're looking at.
It can't show you
the other angle, right?
In the same way,
AI mirrors will
show us certain perspectives,
but not others.-
[Enda:] As humans, we've had various different challenges.
You cite
lots of great figures,
including Narcissus and Descartes and
all these people, to say that, you know,
we've sort of been here before.
Obviously, the particular challenge of
the technology is a new one,
but we have been here before in terms
of understanding ourselves.
There's also,
I think, quite a critical interpretation
and a sort of
sub-commentary on the industry.
There's a recent report out from MIT,
I think, where they talk about
the fears of dominance
of a very small number of AI companies,
some of which we've mentioned already.
Do you think that's a
genuine fear, given that the resources
and the infrastructure that are needed
tend to fall into relatively few hands,
and perhaps that might be a concern
for the future as well?-
[Shannon:] Yeah.
It's a very, very serious concern.
And it's one I touch on in the book.
So the book is optimistic in the sense
that it's trying to help you understand
that AI itself is not something to fear,
is not something to feel
endangered by, is not something
that we should feel is destined
to kind of overtake human intelligence,
right?
So a lot of what I'm trying to puncture
and defuse
is this kind of marketing hype
that says AI is what's next;
humanity is over,
humanity is, you know,
destined to be replaced
and superseded by these new
superintelligent machines.
So the book is optimistic in sort of
trying to
restore a sense of human agency and power:
that we're still in the driver's seat,
and that these technologies
can only replace us if we allow it,
and if we accept something
lesser than ourselves in our place.
The dark side, though, is that
despite the fact that AI itself
does not pose a threat to us,
the organisations and powers
behind AI,
and the economic and political incentives
that are determining its current shape
and the way it's being used,
are, in fact, quite threatening to us.
So the call, as it were,
I say, is coming from inside the house,
right?
AI is not this external threat;
it's our own social,
economic and political systems
and their incentives,
which are kind of misaligned
with sustainable
human flourishing right now.
Those are shaping AI's form,
and that is the threat.
There's been a lot of focus recently,
rightly so,
on the environmental impact of AI models
and the fact that large AI companies,
which, yes, overwhelmingly control
the AI landscape
right now, are pushing countries
and regions to build
a huge number of new data
centres to power AI systems,
to an extent
that is likely going to prevent
those organisations,
and the countries that they're pushing
to build new data centres,
from meeting their climate targets
and commitments.
We cannot afford that.
Humanity cannot afford that.
And the idea
that somehow this will work out because AI
is going to solve climate change for us?
That's absurd.
AI is a tool,
and there are absolutely applications
of AI
that will help us meet
some of the challenges
associated with climate change,
but only if we don't develop AI in a way
which puts our foot on the gas of fossil fuel
extraction and use,
which is what's happening right now.
There are lots of applications of
AI that you don't need
large language models for,
that don't need training
that's incredibly
costly environmentally or economically.
You can use smaller,
more sustainable models
and smaller but higher-quality data sets
to do a lot of the work
that we need AI to do.
But these few large
AI companies have doubled
down on the strategy
of making their biggest,
most environmentally damaging models
the flagship product
that they want everybody to be using
and everybody to be buying.
And that's an incredibly dangerous
and unsustainable strategy right now.-
[Enda:] Another point that comes up in the book
on a number of occasions
is the representation of non-dominant
groups within models.
You make, I think, a very good point
about how manual correction occurs in
low-wage economies, and how the companies
are able to use that,
how different races and ethnicities are
rarely acknowledged.
How serious do you think the AI companies
are about addressing those concerns?
Do they really take these seriously?-
[Shannon:] They do, yes, most of them, primarily
because
when these models
produce outputs that are,
you know, clearly
expressive of harmful, racist
or sexist biases, for example,
that gives a
black eye to the company, right?
Both because it makes their product
look like something you don't want to use
in your business, right?
No business wants to employ
a product that's going to start
spewing racial insults or stereotypes
at their customers,
right?
So it's a business risk,
but also because it makes the companies
look like they are endorsing these harmful
or toxic viewpoints.
But the problem is
that it's baked so deep into the models
by the training data
that you just can't easily pull it out.
So you end up having these solutions
that are,
in a way, like playing
a game of Whac-a-Mole, if you're familiar,
right?
The problem crops up, and
you build a tweak to the algorithm
that will suppress that particular
kind of output, but then a different one
pops up elsewhere.
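A minimal sketch of that Whac-a-Mole dynamic, using an invented post-hoc blocklist filter rather than any company's actual mitigation: patching the filter for one known-bad output does nothing about the next variant.

    # Illustrative sketch only: the blocklist entries and outputs are invented placeholders.
    BLOCKLIST = {"bad_phrase_a", "bad_phrase_b"}  # patched after earlier complaints

    def passes_filter(output: str) -> bool:
        # Suppress any output that contains a known-bad phrase...
        return not any(bad in output.lower() for bad in BLOCKLIST)

    print(passes_filter("... bad_phrase_a ..."))  # False: this mole gets whacked
    print(passes_filter("... bad_phrase_c ..."))  # True: a new one pops up elsewhere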
So this is a
systemic feature of AI mirrors:
they're trained on large volumes
of human data that have not been curated
to remove the harmful
biases in our own history
and our own social patterns.
And these models will always pick up on
and reflect
the patterns that run consistently
through the data set.
And bias is one of those patterns,
because our history,
and certainly our social media history,
which composes a pretty significant chunk
of what's
been, you know, put into the training
data, is rife
with this kind of
bias and harmful content.
So it's what the model has learned,
and you can't easily get the model
to unlearn it.
I can't pull on one corner of my sheet
without that
actually being felt
at the other end of the sheet.
Right? It's a fabric.
And if I yank on one thread, it
kind of runs through the whole thing.
It's similar with the way that AI models
work: it's very difficult to extract
something like a racial bias without
distorting the whole sheet.
So it becomes a very difficult problem
to solve, and one that
arguably cannot be solved
without addressing the underlying bias
or changing the way that we curate
our data and train our models.
But what these companies are not
supporting
is that kind of firm
regulatory guidance. Instead, what
these companies mostly want
is to self-regulate and to invest
a little bit in mitigating the worst,
but without having to actually build
a better product from the ground up.-
[Enda:] Do you feel optimistic about the future,
taking in all these
different issues that we face
in terms of humanity?-
[Shannon:] I place myself on the optimistic
end of the spectrum only insofar
as it's still entirely in our power
to change the incentives
that are currently driving us
towards an unsustainable world,
and a planet that is simply incapable
of supporting flourishing,
intelligent life.
So I'm not
optimistic in the sense that if the status
quo continues, the chances for,
you know, sustainable
human flourishing are slim to none.
So that's the dark side.
But the positive side is
there is no kind of determined outcome.
The future is still unwritten, and
we actually know what we need to change.
It's not like we don't understand
why climate change is happening,
or what it would take to stop it,
or slow it down,
or mitigate at least its greatest harms,
and not add to the harms
that have already, in a sense,
been written into the future.
We know what needs to be done.
It's not like an asteroid coming
at the planet where you've got nothing.
We have ways to address this challenge.
What we don't have is the political will.
We've been here before where we needed
the political will to shift toward
the kind of world that people had always
said could never exist.
So I'm old enough
that I grew up in a world
where people believed
we would never see gay marriage.
In 2014 in the United States, and
shortly after in many other countries, right,
the political will shifted in
what felt like a landslide overnight.
But it wasn't overnight, right?
It was something that had been building
in the culture
and in our moral view of the world.
I think we're going to have to see the
same thing with respect to climate change
same thing with respect to climate change
and the current economic order.
And I think you already see it happening.
A lot of people are resisting it.
A lot of people are saying, look,
you know,
the status quo is the status quo.
This is how the world works.
We can't imagine
a different kind of future.
They're wrong.
We can.
We have before and we have to now.
One of the great dangers that AI does pose
is that, because it's trained
on all of the patterns of the past,
and I'm certainly
not the first to say this,
it is a fundamentally
conservative
and even regressive technology
if we use it to automate decision-making,
if we use it to predict
and write the future,
because all it will do
is take the patterns from the past
and push those into the present
and future.
A colleague of mine, the philosopher
Mark Coeckelbergh,
calls AI a time machine.
But it's a time machine that,
instead of taking you and putting you
back in the past, sort of takes the past
and brings it into the present and future.
Because if we simply use
AI to set ourselves on autopilot,
we'll basically stay on the same
unsustainable paths that we’re on.
As a roadmap into the future,
it really is
the end of humanity
if we simply take AI to be the thing
that tells us how we should live.-
[Enda:] Thank you very much, Shannon,
for talking about your fascinating
research, most recently in The AI Mirror.
If you want to find out
more about Shannon's research, check out:
www.efi.ed.ac.uk
and our social media channels.
Thank you.
[Electronic beat]