Machine Learning: How Did We Get Here?

Tom sits down with Geoffrey Hinton, University Professor Emeritus at the University of Toronto, and co-winner of the ACM Turing Award and of the 2024 Nobel Prize in Physics.

Geoffrey explains how he got into the field, from his days as an aspiring carpenter to his conversion into a neural network researcher. He describes the burst of neural network progress in the mid-1980s, when the backpropagation training algorithm came into widespread use, and the re-emergence of deep neural networks in 2012, when he and his students soundly defeated the best computer vision methods around.

Geoffrey discusses his early realization that those GPUs being sold to accelerate video games were the perfect hardware to accelerate neural networks as well, his journey from academia to Google, the competition among the big AI companies, and his views on where AI is and might be headed.

Creators and Guests

Host
Tom Mitchell
Tom Mitchell is the University Founders Professor at Carnegie Mellon University, a Digital Fellow at the Stanford Digital Economy Lab, and the author of Machine Learning, a foundational textbook on the subject.
Guest
Geoffrey Hinton
2024 Nobel Prize in Physics, University Professor Emeritus at the University of Toronto
Producer
Matty Smith
Writer/director/editor from Los Angeles, with experience writing and directing scripted television and national commercials. Mixed media producer with hands-on experience in all areas of production.

What is Machine Learning: How Did We Get Here??

Tom Mitchell literally wrote the book on machine learning. In this series of candid conversations with his fellow pioneers, Tom traces the history of the field through the people who built it. Behind the tech are stories of passion, curiosity, and humanity.

This podcast is produced by the Stanford Digital Economy Lab.

Tom Mitchell:
Welcome to Machine Learning: How Did We Get Here? I'm Tom Mitchell. Today's episode is an interview with Geoff Hinton, one of the pioneers in the field of neural network learning. Geoff started out early, as you'll hear, in the nineteen seventies, and has continued working in neural networks ever since. During the period of the nineteen nineties and early two thousands, when neural networks were really in disfavor in the field of machine learning, Geoff nevertheless persisted, and he co-led the triumphant return of neural networks, in the form of deep networks, around twenty ten. In twenty eighteen, Geoff, along with Yoshua Bengio and Yann LeCun, received the Turing Award, the highest award given in the field of computer science. In twenty twenty four, Geoff, along with John Hopfield, was awarded the Nobel Prize in Physics for their work on artificial neural networks. I hope you enjoy the episode.

Tom Mitchell:
I'm pleased to have with me today Geoff Hinton, one of the pioneers of machine learning. Geoff, great to see you again.

Geoffrey Hinton:
Thanks for inviting me.

Tom Mitchell:
What I'd like to do today is get two types of things from you. One is your own personal history: how you got into this field and what happened after you did. And the second is kind of your perspective on the whole field of machine learning, AI, and how things are turning out.

Geoffrey Hinton:
So when I was in high school, I had a very smart friend who was a very good mathematician and read widely, unlike me. And he came into school one day and talked about how memories might be distributed over the brain rather than localized in a place, like a hologram, because this would have been nineteen sixty six and holograms had just come out. And that got me interested in how our memories are represented in the brain. And I've been interested in that ever since.

Tom Mitchell:
Now, when I met you, we were both at Carnegie Mellon. It was nineteen eighty six when, uh, we really got to do some work together, or teach a course together. How did you get from nineteen sixty six up till nineteen eighty six? What was the path?

Geoffrey Hinton:
Slightly rocky. So I went to university. I studied physics, chemistry and physiology, and in physiology, in the last term, they were going to teach us, um, how the central nervous system worked. And I was very excited. And they taught us how action potentials are conducted along an axon, which wasn't what I meant by how it worked. And so I switched to philosophy. That was even less useful. And then I switched to psychology, which was completely hopeless. Um, and then I became a carpenter. And after I'd been a carpenter for about nine months, I met a carpenter. And he was so much better than me, I decided it'd be easier to be an academic.

Um, so I went to graduate school in Edinburgh, um, with Longuet-Higgins, who had published interesting stuff on, um, using neural nets for memory. Unfortunately, around the time I arrived, Winograd's thesis came out and he switched his allegiance to symbolic AI and gave up on neural nets. And so I spent five years as his graduate student, with him trying to persuade me to give up neural nets. And he never succeeded. Um, in the end, he was very helpful to me. But for a long time, there was a lot of argument about how I should really be doing symbolic AI, and all this neural net stuff was complete nonsense. And everybody else in Edinburgh believed that neural nets were nonsense. Um, there were actually a couple of exceptions. There was a post-doc called David Willshaw who'd done associative memory, and he'd basically done something quite like Hopfield nets, but a long time before Hopfield. And Aaron Sloman was a visitor for a while, and he was more sympathetic. Um, but basically they all knew it was rubbish. And they would explain to me how neural nets can't even do recursion.

So because everybody believed in recursion then, um, I actually figured out how to do true recursion in a neural network and implemented it on a machine which, I think by then, had one hundred and ninety two kilobytes of memory, and it was only shared by forty people. Um, but it had a huge disk that had two megabytes, so you never ran out of memory, because you used virtual memory. And I actually implemented a little neural net that did true recursion. That is, in the recursive call it used the same neurons and the same connection strengths as it did for the high-level call. Now, if you do that, of course, it has to offload all the parameters of the high-level call into some short-term memory, onto a stack. And I figured out how to implement a stack with associative memory in a neural net. Um, so I had this little neural net running that was doing full recursion in neural nets, and that was the first talk I gave. And people were very puzzled. They said, why would you want to do recursion in a neural net? I mean, it's so easy to do in POP-2, which was our sort of, uh, unfortunate bastard child of Pascal and Lisp. Um, although I don't think Pascal existed then. Um, so. Yeah. So I keep meaning to go back to

Tom Mitchell:
I was going to ask: is there a future to recursion for neural nets?

Geoffrey Hinton:
Oh, yes. I mean, to do true recursion, you have to use the same neurons and weights for the recursive call. That means you have to have a stack, or something like a stack, to store the parameters of the high-level call. That all works if you have fast weights. So that was the first thing I did with fast weights, in nineteen seventy three. I should say fast weights were invented by Schmidhuber in nineteen ninety-something.
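The mechanism Geoff describes, saving the high-level call's state somewhere before reusing the same neurons for the recursive call, can be pictured with a toy outer-product associative memory playing the role of the fast weights. This is only an illustrative sketch, not Hinton's 1973 implementation: the dimensions, the orthonormal keys, and the use of activity vectors as the saved state are all assumptions made for the example.

```python
# Toy "fast weight" stack: before a recursive call reuses the network's
# neurons, the caller's state vector is written into a rapidly-changing
# associative memory (an outer-product matrix) under a key, and read back
# out (and unlearned) when the call returns. All sizes and keys are made up.
import numpy as np

rng = np.random.default_rng(1)
D = 64
F = np.zeros((D, D))  # fast-weight matrix; starts empty

def push(F, key, value):
    # Hebbian outer-product write: associate key -> value.
    return F + np.outer(value, key)

def pop(F, key):
    # Read the value stored under key, then unlearn that association.
    value = F @ key
    return value, F - np.outer(value, key)

# Two orthonormal keys play the role of stack addresses; with orthonormal
# keys the reads are exact rather than approximate.
Q, _ = np.linalg.qr(rng.standard_normal((D, 2)))
keys = [Q[:, 0], Q[:, 1]]
vals = [rng.standard_normal(D) for _ in range(2)]

F = push(F, keys[0], vals[0])  # outer call saves its state
F = push(F, keys[1], vals[1])  # recursive call saves its state
v1, F = pop(F, keys[1])        # returning: restore in reverse order
v0, F = pop(F, keys[0])
print(np.allclose(v1, vals[1]), np.allclose(v0, vals[0]))  # True True
```

With keys that are not orthogonal, each read would contain crosstalk from the other stored items, which is one reason the capacity of this kind of memory is limited.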

Tom Mitchell:
Fair enough. Okay, so then, um, you moved on from Edinburgh. Did you come directly to Carnegie Mellon from there, or how did...?

Geoffrey Hinton:
Oh, no. No. Um, I dropped out again after I finished my thesis. I dropped out and became a teacher in a free school in London. Um, it was voluntary, I was unpaid. They were rough, emotionally disturbed inner-city kids. And after a few months of that, I again decided academia might be easier. Um, so I went back for a post-doc with Aaron Sloman in Sussex. Longuet-Higgins had moved from Edinburgh to Sussex, and um, as I was finishing my PhD, I got a post-doc with Aaron Sloman. Um, and there were no proper faculty jobs in Britain then. There was one job in the whole of Britain, which Alan Bundy got. Um, and so I applied for jobs in the States, and I got a job as a postdoc at UCSD, um, with Don Norman and Dave Rumelhart. And I really got along very well with Dave Rumelhart, and that made a huge difference.

So I moved from Britain, a sort of small country where there was only room for one ideology, and the ideology was symbolic AI, and neural nets were just rubbish. And I moved to the States where, um, on the East Coast it was symbolic AI, but on the West Coast they were kind of more open. And in particular, Don Norman and Dave Rumelhart thought neural nets were worth considering. Um, so it was a huge liberation to be in a place where neural nets were regarded as not obvious nonsense. And, um, I went there. I got to meet Terry Sejnowski, who I invited to a conference, and we've been sort of lifelong friends and collaborators ever since. Um, I got to meet Francis Crick later on, who was there. So I was there for a couple of years.

Geoffrey Hinton:
And then I got a job in Cambridge in the Applied Psychology Research Unit, um, where I was meant to do applied psychology. And I was strongly reminded of William James's comment about applied psychology, which is that to do applied psychology, you have to have something to apply. Um, but I actually did some interesting stuff. It was just around the time workstations were coming out, and they had a contract with the British telephone company to help with network management, and network management then was all done by hand. You had information about, um, the loads on various switching centers, and the information was on a huge wall that was twenty feet high and worked like the boards you sometimes see at train stations: little flaps with white letters on them that rotate around until you get the right flap. And so you could see all these numbers, um, that said how busy each switching station was.

Um, and I figured a Sun workstation could do that and would be a lot cheaper. But the resolution wasn't that good then, so I had to figure out whether you could display the states of all the switching stations in Britain on the screen of a Sun workstation. You couldn't fit the full names, but if you had two letters for each name, you could get those on. So I worked on a display with two-letter names, and there were a large number of switching stations, hundreds of them. And the question was, could an operator remember which they were? So I actually taught myself to remember all those two-letter names. Um, they were in very small type to fit on, and I actually got a serious migraine from looking at it too long. Um, that was my interaction with human factors, and then I...

Tom Mitchell:
It does remind me: that was around the time that Unix was being invented, and all the commands had no vowels in them. So there was a theme there.

Geoffrey Hinton:
Yes. Um, so I wrote a report on it, and they said it was a very nice report, thank you very much, and they weren't going to implement it, even though it would have been much more efficient and much easier to update. And I said, why not? And they confidentially explained to me that, well, when people come and visit the network control center, or actually when they visit the headquarters of British Telecom, um, they have to have something to show them. The politicians have to see something, and they would always show them this huge wall that displayed the state of all the switching stations, and they were very impressed by that. And if they got rid of the huge wall and had just some workstations, they were very worried that network management would get less funds from British Telecom. So they were going to keep their huge wall. I learned a lot then about applied research: it's not about whether it works, it's about whether the company likes it.

Tom Mitchell:
Fair enough, fair enough.

Geoffrey Hinton:
Then after that, um, I went back to San Diego for six months, and that's when we worked on the PDP books with Dave Rumelhart and McClelland. Um, I was one of the authors until almost when they were published. At the last minute I dropped out, because at that point I had decided Boltzmann machines were the future. Boltzmann machines were just a much better idea than backprop, and backprop was a silly idea. Um, and there was no point being an author of a book where the main thing was, um, backprop. Uh, that was a mistake.

Um, then in nineteen eighty two I applied to CMU, and because it was a private university, um, they didn't have to advertise very widely. Um, and Scott Fahlman was sort of my, um, host. I'd interacted with him at many workshops, and we got along well, and he pushed hard to get them to take me. And I had a very funny interview.

Geoffrey Hinton:
So I went there. On the first day, I gave a talk in computer science, and then Scott Fahlman took me out for lunch at a place, I can't remember what it was called, but it had a motto, which was: if you don't get sick, you got a bad one. Um, and I got terribly sick. The next day I had acute diarrhea. I couldn't eat anything. I was living on coffee and Coca-Cola. Um, I gave a talk in psychology, um, about mental imagery and my theory of mental imagery. And there was someone at the end who, um, asked a question which I didn't understand to begin with. And then I realized the question he was asking was, did I believe the theory of someone called Marcel Just, about how you weren't really rotating an image in your mind, you were just looking backwards and forwards between two things. Um, and in my reply I said, oh, I see, you mean that silly theory by Marcel Just, not realizing it was Marcel Just asking the question. Um, and after that, I got a request to go and see Nico Habermann.

Now, Nico and I were always great friends, even though we were politically extremely different. I was a sort of leftie, nineteen-sixties radical, with long hair and rather disheveled. Nico was a European gentleman who was very nicely dressed, worked with the Defense Department, and set up an institute I wasn't allowed to go to because I was a foreigner. Um, but we got along very well, and I think it was because of our initial interview.

Tom Mitchell:
So Nico was the department head in computer science?

Geoffrey Hinton:
Yes. And so in the initial interview, he said, so, we've decided to offer you the position. And I said, oh, there's something you should know. And he said, oh, what's that? And I said, well, I don't actually know any computer science. And he said, it's okay, it's okay, we have people here who do. So I said, okay, in that case I accept. And Nico said, don't you think perhaps we should talk about the salary? And I said, oh no, I'm not interested in salary. You can pay me whatever you like. I'm not doing it for the money. And he said, well, how does twenty six thousand sound to you? I said that sounds fine. Um, I later discovered I was being paid ten thousand less than the next lowest-paid professor. Um, but every year I got a big pay rise, and Nico and I got along very well after that, because he knew I wasn't doing it for the money. Things have changed so much.

Tom Mitchell:
That's fantastic. Okay, so now we're up to the mid eighties, when really neural nets are reborn. Is that the right word? How would you...?

Geoffrey Hinton:
Yes, with backpropagation. I mean, we didn't invent it; it was invented by several different groups. But we showed that it really worked to learn representations. And as you know, sort of one of the big problems in AI is: how do you learn new representations? How do you avoid having to put them all in by hand?

Geoffrey Hinton:
Um, and my particular example, which was the family trees example, where you take all the information in some family trees and convert it into triples of symbols, like "John has-father Mary". Um, and then you train a neural net to predict the last term in a triple given the first two terms. So it's just like the big language models: you're predicting the next word given the context. It's just much simpler. I had one hundred and twelve total examples, of which one hundred and four were training examples and eight were test examples, which is a bit less than the trillion examples they have nowadays.

Geoffrey Hinton:
Um, but it was the same idea. You convert a symbol into a feature vector. You then have the feature vectors of the context interact, um, via a hidden layer. They then predict the features of the next symbol, and from those features you guess what the next symbol should be, and you try to maximize the probability of predicting the next symbol. And you then backpropagate through the feature interactions and through the process that converts a symbol into features. And that way you learn, um, feature vectors to represent the symbols, and how those vectors should interact to predict the features of the next symbol. And that's what these big language models do, except it's a bit more complicated. The feature interactions are much more complicated, and they have many more layers of interaction, so they can disambiguate ambiguous symbols and refine the shade of meaning of things where the meaning depends a lot on the context. Um, but it's basically an extremely simple version of the current large language models.
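The pipeline Geoff describes here, converting each symbol to a feature vector, letting the context features interact through a hidden layer, and maximizing the probability of the next symbol, can be sketched in a few dozen lines. This is a hedged illustration, not the actual 1986 family-trees network: the toy triples, the layer sizes, and the learning rate are invented for the example; only the architecture follows the description.

```python
# Tiny-language-model sketch in the spirit of the family-trees example:
# embed the first two symbols of a triple, pass them through a hidden
# layer, and predict the third symbol. Data and sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["colin", "james", "victoria", "has-father", "has-mother"]
idx = {w: i for i, w in enumerate(vocab)}

# Hypothetical triples: (person, relationship, person).
triples = [("colin", "has-father", "james"),
           ("colin", "has-mother", "victoria")]

V, E, H = len(vocab), 6, 12          # vocab size, embedding dim, hidden dim
emb = rng.normal(0, 0.1, (V, E))     # shared symbol -> feature-vector table
W1 = rng.normal(0, 0.1, (2 * E, H))  # context features -> hidden layer
W2 = rng.normal(0, 0.1, (H, V))      # hidden layer -> next-symbol logits
lr = 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for epoch in range(2000):
    for a, r, b in triples:
        x = np.concatenate([emb[idx[a]], emb[idx[r]]])  # context features
        h = np.tanh(x @ W1)                             # hidden interaction
        p = softmax(h @ W2)                             # P(third symbol)

        # Backpropagate the cross-entropy error for the true third symbol.
        dlogits = p.copy()
        dlogits[idx[b]] -= 1.0
        dh = (dlogits @ W2.T) * (1 - h ** 2)
        dx = dh @ W1.T
        W2 -= lr * np.outer(h, dlogits)
        W1 -= lr * np.outer(x, dh)
        emb[idx[a]] -= lr * dx[:E]                      # learn symbol features
        emb[idx[r]] -= lr * dx[E:]

# After training, the net completes the memorized triples.
x = np.concatenate([emb[idx["colin"]], emb[idx["has-father"]]])
p = softmax(np.tanh(x @ W1) @ W2)
print(vocab[int(p.argmax())])  # prints: james
```

The real model additionally used weight decay, which is what made the six learned components interpretable as semantic features.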

Geoffrey Hinton:
I called it a tiny language model, and that convinced the editors of Nature that we really could learn interesting representations, because the vectors I learned for the symbols, which were people and relationships, had six components. And if you used weight decay, you could interpret what all those components were. And they were sensible semantic features: the nationality of the person, the generation of the person, and which branch of the family tree they were in. And so it would learn things like: the relationship "uncle" requires the output person to be one generation older than the input person. And so it would have generations for people, and if the input person was at generation two, it would predict that the output person would be at generation one. Um, so it was actually learning a whole bunch of little rules, just probabilistically.

Geoffrey Hinton:
And the people interested in rule-based induction got interested in it, because they said, oh, we can do that too. And it's true, they could do that too, with rules that weren't probabilistic. The point about neural nets is they can mimic something that learns discrete rules, but they're also perfectly happy if the rules are just usually true. And they then use the preponderance of the evidence, which is much harder to do in, um, logic. And so it was that example, which, curiously, was a little language model, um, that convinced the editors of Nature to publish the paper. I know because I talked to one of the referees later, and he said, yeah, it was that example that did it.

Geoffrey Hinton:
And then we were all very excited. We thought, we can solve everything. You just have to give it a lot of training data and run backprop, and it'll learn all the representations you need, and it'll learn to do parallel computation. Because at that time people were very interested in parallel computation, but it was quite hard to program. And the idea was, well, this will have all these neurons inside, and they'll all be operating in parallel, and it'll figure out how to use them, so there aren't any problems. At that point, people were very interested in races and things like that, and you didn't have to worry about any of that. It was all synchronous, and they just learned what to do. Um, so we thought we'd solved everything, and little did we know, we had. It's just we needed more data and more compute.

Tom Mitchell:
So then there's the long period of waiting for more data and more compute.

Geoffrey Hinton:
And yeah, not realizing that that was the main problem. Obviously there were other little problems: there were more sensible kinds of neurons to use, and more sensible ways to regularize it, and all that. Um, and things like transformers had to be invented to make it really efficient. Um, but basically backprop was the way to do it. And you couldn't convince anybody when computers were slow. It would work for little problems, and it would work for slightly bigger problems; a few years later, Yann got it working for MNIST, recognizing digits. But all the vision people said, you know, that's not real vision. Um, you're never going to do it with real images that are high resolution, on the web. And so it wasn't until about twenty twelve that they had to eat their words.

Tom Mitchell:
That's right. That was the year when... well, you tell this story. You were the first person.

Geoffrey Hinton:
Uh, well, I was the advisor of
the first two people.

Geoffrey Hinton:
Now, it's not quite fair,
because Jan had already

Geoffrey Hinton:
basically shown that they worked
for real images.

Geoffrey Hinton:
Um, and Jan realized when Feifei

Geoffrey Hinton:
came up with the ImageNet

Geoffrey Hinton:
dataset.

Geoffrey Hinton:
Jan realized they could win that

Geoffrey Hinton:
competition, and he tried to get

Geoffrey Hinton:
graduate students and postdocs

Geoffrey Hinton:
in his lab to do it, and they

Geoffrey Hinton:
all declined.

Geoffrey Hinton:
Um, and Ilya, Ilya Sutskever
realized that, um, backprop

Geoffrey Hinton:
would just kill ImageNet.

Geoffrey Hinton:
Um, and he wanted Alex to work
on it.

Geoffrey Hinton:
And I didn't really want to work
on it.

Geoffrey Hinton:
Um, Alex had already been working on small images, recognizing small images in CIFAR-10.

Geoffrey Hinton:
Um, and Ilya pre-processed everything for Alex to make it easy.

Geoffrey Hinton:
And I bought Alex two Nvidia

Geoffrey Hinton:
GPUs to have in his bedroom at

Geoffrey Hinton:
home.

Geoffrey Hinton:
Um, and Alex then got on with it.

Geoffrey Hinton:
And he was an absolute wizard programmer.

Geoffrey Hinton:
He wrote amazing code on

Geoffrey Hinton:
multiple GPUs to do convolution

Geoffrey Hinton:
really efficiently.

Geoffrey Hinton:
Much better code than anybody
else had ever written.

Geoffrey Hinton:
Um, I believe. And so it's a combination of Ilya realizing we really had to do this, and Ilya was involved in the design of the net and so on, plus Alex's programming skills. And then I added a few ideas, like use rectified linear units instead of sigmoid units, and use little patches of the images.

Geoffrey Hinton:
I mean big patches of the

Geoffrey Hinton:
images, so you can translate

Geoffrey Hinton:
things around a bit to get some

Geoffrey Hinton:
translation invariance, as well

Geoffrey Hinton:
as using convolution, um, and

Geoffrey Hinton:
use dropout.

Geoffrey Hinton:
So that was one of the first
applications of dropout.

Geoffrey Hinton:
And that helped about one
percent.

Geoffrey Hinton:
It really helped.
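The three tricks Geoffrey lists here, rectified linear units, random crops of larger patches for translation invariance, and dropout, can each be sketched in a few lines. This is a toy NumPy illustration of the ideas, not the AlexNet code; the function names and shapes are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit: passes positives through, zeroes negatives.
    # Unlike the sigmoid, it doesn't saturate for large inputs, which
    # helps gradients flow through deep nets.
    return np.maximum(0.0, x)

def dropout(x, p, training=True):
    # Inverted dropout: during training, zero each unit with probability p
    # and rescale the survivors so the expected activation is unchanged,
    # so nothing special is needed at test time.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def random_crop(image, size):
    # Take a random square patch of a larger image; training on shifted
    # crops of the same picture adds some translation invariance on top
    # of what convolution already provides.
    h, w = image.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size]
```

In AlexNet-style training, every batch gets a fresh crop and a fresh dropout mask, so the network never sees exactly the same input twice.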

Geoffrey Hinton:
And then we beat the best vision
systems.

Geoffrey Hinton:
The best vision systems were
sort of plateauing at twenty

Geoffrey Hinton:
five percent errors.

Geoffrey Hinton:
That's the error rate for getting the right answer in your top five bets.

Geoffrey Hinton:
Um, and we got like fifteen
percent, fifteen or sixteen

Geoffrey Hinton:
depending on how you count it.

Geoffrey Hinton:
So we got almost half the error
rate.
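The top-five metric he describes, where you're wrong only if the right label isn't among your five best guesses, is simple to state in code. A sketch, not the official ImageNet scorer:

```python
import numpy as np

def top5_error(scores, labels):
    # scores: (n_examples, n_classes) array of classifier confidences.
    # labels: (n_examples,) array of true class indices.
    # An example counts as an error only if its true label is not among
    # the five highest-scoring classes -- "your top five bets".
    top5 = np.argsort(scores, axis=1)[:, -5:]        # indices of the 5 best bets
    hits = (top5 == labels[:, None]).any(axis=1)     # true label among them?
    return 1.0 - hits.mean()
```

By this measure, the pre-2012 systems were stuck around 0.25 and AlexNet reached roughly 0.15 to 0.16.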

Geoffrey Hinton:
And what happened then was what

Geoffrey Hinton:
ought to happen in science but

Geoffrey Hinton:
seldom does.

Geoffrey Hinton:
So our most vigorous opponents, like Jitendra Malik and Andrew Zisserman, looked at these results and said, okay, you were right.

Geoffrey Hinton:
That never happens in science.

Geoffrey Hinton:
And, slightly irritatingly, Andrew Zisserman then switched to doing this.

Geoffrey Hinton:
He had some very good postdocs
or students working with him.

Geoffrey Hinton:
Karen Simonyan, um, and after about a year, they were making better networks than us.

Geoffrey Hinton:
But that was really the start. As far as the general public was concerned, that was the start of this big swing towards deep learning in twenty twelve, when we really nailed computer vision.

Geoffrey Hinton:
But it actually happened before
that.

Geoffrey Hinton:
It happened in two thousand and
nine when we showed how you

Geoffrey Hinton:
could do speech recognition, or
rather the acoustic modeling

Geoffrey Hinton:
part of speech recognition.

Geoffrey Hinton:
We showed how you could do that

Geoffrey Hinton:
a bit better than the best

Geoffrey Hinton:
technology.

Geoffrey Hinton:
And that influenced all the big speech groups, the big speech groups at IBM and Microsoft, um, and somewhere else.

Geoffrey Hinton:
Google.

Geoffrey Hinton:
Yes.

Geoffrey Hinton:
Um, all switched to doing neural
nets for acoustic modeling.

Geoffrey Hinton:
And so by twenty ten, it was

Geoffrey Hinton:
clear that neural nets were the

Geoffrey Hinton:
right way to do acoustic

Geoffrey Hinton:
modeling.

Geoffrey Hinton:
And we had lots of people
onside.

Geoffrey Hinton:
Um, but in twenty twelve, it actually came out on Android, and suddenly Android caught up with Siri in speech recognition.

Geoffrey Hinton:
So really we demonstrated it for
speech before that, but that

Geoffrey Hinton:
didn't make a big impact.

Geoffrey Hinton:
The reason it worked for speech
was they had a big data set.

Geoffrey Hinton:
They had millions of examples.

Geoffrey Hinton:
They were one of the areas, unlike vision, that had big data sets, because of the DARPA speech project.

Geoffrey Hinton:
Um, because they really wanted
to be able to benchmark systems.

Geoffrey Hinton:
Um, also, speech is easier than
vision.

Geoffrey Hinton:
Speech is just vision with
either one or two pixels.

Geoffrey Hinton:
It's just they change rather
fast.

Geoffrey Hinton:
Um, and.

Geoffrey Hinton:
So we demonstrated it for speech first; then, when we did it for vision, the big companies already knew it worked for speech, and they saw it work for vision.

Geoffrey Hinton:
And so they realized it was sort
of universal.

Geoffrey Hinton:
Um, it wasn't just a specific
trick for a specific domain.

Geoffrey Hinton:
It would work for perception in general.

Geoffrey Hinton:
They didn't realize at that

Geoffrey Hinton:
point it would work for

Geoffrey Hinton:
language.

Geoffrey Hinton:
And nor really did we, even
though our very first impressive

Geoffrey Hinton:
example was for language.

Geoffrey Hinton:
Um.

Geoffrey Hinton:
Yeah.

Geoffrey Hinton:
So in twenty twelve, there was this big swing to neural networks. And that's when Jensen at Nvidia finally realized those Nvidia boards weren't just for gaming.

Geoffrey Hinton:
They were supercomputers for
doing machine learning.

Geoffrey Hinton:
Now, I actually gave a talk at NIPS in two thousand and nine (this was about speech) when I told a thousand people: if you want to do machine learning now, you have to buy Nvidia GPUs.

Geoffrey Hinton:
Nvidia GPUs will make your program go about thirty times as fast, because it's relatively easy to utilize their parallelism.

Geoffrey Hinton:
They're just right for neural
nets.

Geoffrey Hinton:
It was Rick Szeliski, who was a student of mine at CMU, who told me that in about two thousand and six, um, and it was true.

Geoffrey Hinton:
And, um, I sent mail to Nvidia

Geoffrey Hinton:
saying, how about giving me a

Geoffrey Hinton:
free one?

Geoffrey Hinton:
Because I told a thousand
machine learning researchers to

Geoffrey Hinton:
buy your boards.

Geoffrey Hinton:
And they declined.

Geoffrey Hinton:
Um. Years later, Jensen came to
Toronto and gave a talk and

Geoffrey Hinton:
mentioned how Toronto, you know,
was the place where they

Geoffrey Hinton:
convinced him that Nvidia GPUs
were good for AI.

Geoffrey Hinton:
Um, and that it all happened in twenty twelve. And I couldn't resist it: at the end, I said, well, I told you in two thousand and nine, and you ignored me.

Geoffrey Hinton:
And what he should have said
was, well, you're very silly.

Geoffrey Hinton:
You should have bought stock in
two thousand and nine.

Geoffrey Hinton:
If I'd done that, I'd be a
billionaire.

Geoffrey Hinton:
Um, but, um, instead, he gave
me.

Geoffrey Hinton:
He opened his briefcase and gave
me their very special, very

Geoffrey Hinton:
latest GPU, of which they'd only
made a few that had twice as

Geoffrey Hinton:
much memory as any other GPU.

Geoffrey Hinton:
So that was a nice move by
Jensen.

Tom Mitchell:
That's a great story too.

Tom Mitchell:
So then in the twenty tens,
things really just kind of rapid

Tom Mitchell:
fire started taking off.

Tom Mitchell:
Take us through that.

Geoffrey Hinton:
So speech worked.

Geoffrey Hinton:
Um, we got a good collaboration between the research groups at IBM and Google and, um, Toronto and Microsoft.

Geoffrey Hinton:
Yeah.

Geoffrey Hinton:
Um, we actually published a joint paper, which is quite rare in this field, about the new view of how to do acoustic modeling.

Geoffrey Hinton:
Um. And then we did vision, and then, um, I started getting lots of requests from big companies who wanted to buy me, or buy me and Alex and Ilya, or fund our company, or get us to come work for them.

Geoffrey Hinton:
Um, and I realized this stuff
was probably valuable.

Geoffrey Hinton:
We had no idea how much it was
worth.

Geoffrey Hinton:
Um, so Craig Boutilier, who was the chair of the Department of Computer Science, was an expert on auctions.

Geoffrey Hinton:
And he said, you know, since you have no idea what it's worth, but there are many people interested, you should set up an auction.

Geoffrey Hinton:
So at Lake Tahoe, which seemed
like the appropriate place, um,

Geoffrey Hinton:
in a casino, um, a casino hotel.

Geoffrey Hinton:
In twenty twelve, Alex and I set up a little company for the sole function of doing an acqui-hire.

Geoffrey Hinton:
And there was an auction
between, um, Microsoft and

Geoffrey Hinton:
Google and DeepMind and Baidu.

Geoffrey Hinton:
Um, DeepMind dropped out fairly
early.

Geoffrey Hinton:
Um, and on the ground floor,

Geoffrey Hinton:
they had all these people at

Geoffrey Hinton:
slot machines with cigarettes

Geoffrey Hinton:
hanging out the corner of their

Geoffrey Hinton:
mouth, just pulling these

Geoffrey Hinton:
levers.

Geoffrey Hinton:
And every so often they made
like a thousand dollars and

Geoffrey Hinton:
lights would flash, and we were
upstairs having an auction where

Geoffrey Hinton:
you had to raise by a million.

Geoffrey Hinton:
Um, that was fun.

Geoffrey Hinton:
And the auction went on for
quite a long time.

Geoffrey Hinton:
We were completely amazed when
it got to forty four million.

Geoffrey Hinton:
It was so much money that we
couldn't imagine that any more

Geoffrey Hinton:
money would be useful.

Geoffrey Hinton:
I mean, that seemed like as much

Geoffrey Hinton:
money as anybody could possibly

Geoffrey Hinton:
want.

Geoffrey Hinton:
Um, and so we then became much

Geoffrey Hinton:
more concerned about who we

Geoffrey Hinton:
worked for, and I wouldn't have

Geoffrey Hinton:
been able to get to China

Geoffrey Hinton:
because I couldn't fly at that

Geoffrey Hinton:
time.

Geoffrey Hinton:
And I'd spent the summer of
twenty twelve working with Jeff

Geoffrey Hinton:
Dean at Google.

Geoffrey Hinton:
And I got along really well with
Jeff Dean.

Geoffrey Hinton:
It was a really nice group, and

Geoffrey Hinton:
I figured it was much more

Geoffrey Hinton:
important to work in a really

Geoffrey Hinton:
nice place than to get more

Geoffrey Hinton:
money.

Geoffrey Hinton:
So we actually terminated the
auction.

Geoffrey Hinton:
We told Baidu we'd got an offer we couldn't refuse, and the offer we couldn't refuse was the chance to work at Google with Jeff Dean, and that all worked out very well.

Geoffrey Hinton:
So then I was off to Google, and after we'd been there a year, um, Kyunghyun Cho and Yoshua Bengio and Dzmitry Bahdanau in Montreal, um, developed, uh, language models with attention, which was a precursor of Transformers, and showed that language models actually work well for machine translation.

Geoffrey Hinton:
And I think that was the final

Geoffrey Hinton:
nail in the coffin of symbolic

Geoffrey Hinton:
AI, because if anything was

Geoffrey Hinton:
going to be good for symbolic

Geoffrey Hinton:
AI, it was converting symbol

Geoffrey Hinton:
strings in one language into

Geoffrey Hinton:
symbol strings in another

Geoffrey Hinton:
language.

Geoffrey Hinton:
The idea that you might do that
by taking symbol strings and

Geoffrey Hinton:
manipulating them actually
sounded quite plausible.

Geoffrey Hinton:
Um, but that's not the way to do
it.

Geoffrey Hinton:
The way to do it is to understand what's being said in one language, by associating appropriate big vectors with the words, and then convert that to the other language.
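The attention mechanism from the Montreal machine-translation work (Bahdanau, Cho, and Bengio) amounts to letting the decoder take a weighted average of the encoder's word vectors. A minimal dot-product version looks like this; a toy NumPy sketch, not their exact additive formulation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    # query:  (d,)   decoder state asking "which source words matter now?"
    # keys:   (n, d) one vector per source word, matched against the query
    # values: (n, d) the word vectors actually averaged together
    weights = softmax(keys @ query / np.sqrt(len(query)))
    return weights @ values, weights
```

Each output word is built from a different weighted mix of the source-word vectors, which is how "big vectors associated with words" replace symbol-string manipulation.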

Geoffrey Hinton:
Um, so it was clear by about

Geoffrey Hinton:
twenty fifteen that neural nets

Geoffrey Hinton:
were going to do everything,

Geoffrey Hinton:
including language.

Geoffrey Hinton:
That's the point at which Gary
Marcus published a book chapter

Geoffrey Hinton:
saying neural nets were okay.

Geoffrey Hinton:
Maybe they could do object
recognition, but they'd never do

Geoffrey Hinton:
language because language
involved novel sentences.

Geoffrey Hinton:
They were already doing it.

Tom Mitchell:
Well. So that was twenty
fifteen.

Tom Mitchell:
You were still at Google?

Geoffrey Hinton:
I was at Google. And Ilya then

Geoffrey Hinton:
moved to OpenAI, um, around

Geoffrey Hinton:
twenty fifteen, uh, maybe twenty

Geoffrey Hinton:
fourteen, I can't remember the

Geoffrey Hinton:
year.

Geoffrey Hinton:
And, um.

Geoffrey Hinton:
And then OpenAI did rather well.

Geoffrey Hinton:
Um, OpenAI basically just took stuff that had been done at Google on Transformers and put a nicer interface on it, and realized, which Google hadn't realized, that if you did reinforcement learning from human feedback, you didn't need that many examples to make it behave nicer.

Geoffrey Hinton:
Um, you didn't need like one hundred million examples, which is what you might have thought it would take.

Geoffrey Hinton:
Like some fraction of a million
examples would already make it

Geoffrey Hinton:
behave a lot better.

Geoffrey Hinton:
So you could actually train it
up to have nicer behavior.

Geoffrey Hinton:
And that was ChatGPT.
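The recipe he's describing, reinforcement learning from human feedback, starts by fitting a reward model to pairs of responses that humans have ranked. A minimal sketch of the standard Bradley-Terry-style preference loss (not OpenAI's actual code; the function name is made up):

```python
import numpy as np

def preference_loss(r_preferred, r_rejected):
    # Bradley-Terry loss: pushes the reward-model score of the
    # human-preferred response above the score of the rejected one.
    # This is log(1 + exp(-(r_w - r_l))), i.e. -log sigmoid(r_w - r_l).
    # A modest number of such comparisons steers behaviour, which is
    # why far fewer labels are needed than pretraining needs tokens.
    return float(np.mean(np.log1p(np.exp(-(r_preferred - r_rejected)))))
```

The fitted reward model then scores the chatbot's outputs during a reinforcement-learning fine-tuning stage.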

Geoffrey Hinton:
Um, Google was then in the
classic situation of not wanting

Geoffrey Hinton:
to interfere with search, which
was its moneymaker.

Geoffrey Hinton:
So it was in this difficult
situation.

Geoffrey Hinton:
Do they release chatbots or not?

Geoffrey Hinton:
But when Microsoft teamed up
with OpenAI, they basically had

Geoffrey Hinton:
to release them.

Geoffrey Hinton:
Um, but they lost a few years.

Geoffrey Hinton:
I think it was partly because
search was working so well, and

Geoffrey Hinton:
it was obvious search would be
better if instead of using

Geoffrey Hinton:
keywords, it used what you
meant, which would mean it had

Geoffrey Hinton:
to understand what you meant.

Geoffrey Hinton:
Um, but they didn't want to
undermine their moneymaker.

Geoffrey Hinton:
No, that's based not on any
inside information.

Geoffrey Hinton:
It just seems obvious.

Tom Mitchell:
Pretty amazing.

Tom Mitchell:
So. So here we are now.

Tom Mitchell:
And you were famously on record,
uh, warning people about some of

Tom Mitchell:
the risks of AI.

Tom Mitchell:
Um, what should what should

Tom Mitchell:
people who are working in this

Tom Mitchell:
area do in response to that

Tom Mitchell:
risk?

Geoffrey Hinton:
Okay, so I didn't actually talk

Geoffrey Hinton:
much about the risks until I

Geoffrey Hinton:
left Google.

Geoffrey Hinton:
I realized in the beginning of twenty twenty three that there was a huge existential threat I hadn't fully appreciated, because it's a better form of intelligence than us. And it's better because it can share: different copies of the same neural net can look at different data and share the gradients, then update all their weights in sync and stay the same, so they can keep doing that.

Geoffrey Hinton:
And when they share the
gradient, they're sharing

Geoffrey Hinton:
information they got from
different data sets.

Geoffrey Hinton:
Um, on the order of a trillion bits per episode of sharing, if they've got a trillion weights.

Geoffrey Hinton:
Whereas what we're doing now is sharing information at maybe one hundred bits per sentence.

Geoffrey Hinton:
Um, so a few bits per second; maybe, if we're lucky, we're sharing ten bits per second.

Geoffrey Hinton:
Um, and so you're comparing like

Geoffrey Hinton:
trillions of bits with hundreds

Geoffrey Hinton:
of bits.

Geoffrey Hinton:
They're billions of times better than us at sharing.
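The sharing mechanism he's describing is ordinary data-parallel training: identical copies see different batches, average their gradients, and apply the same update, so they stay identical while each learns from all the data. A toy NumPy sketch under simple assumptions (linear model, squared loss, made-up learning rate):

```python
import numpy as np

def grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
w_true = np.array([1., 2., 3.])
w_a = np.zeros(3)            # copy A's weights
w_b = np.zeros(3)            # copy B starts identical

for _ in range(100):
    # Each copy looks at its own batch of data...
    Xa, Xb = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
    ya, yb = Xa @ w_true, Xb @ w_true
    # ...then they share (average) gradients and update in sync,
    # so both copies learn from both batches yet remain identical.
    g = (grad(w_a, Xa, ya) + grad(w_b, Xb, yb)) / 2
    w_a, w_b = w_a - 0.05 * g, w_b - 0.05 * g
```

With a trillion shared weights, each such averaging step moves on the order of a trillion numbers between copies, which is the comparison he's drawing against language's hundreds of bits per sentence.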

Geoffrey Hinton:
And that's why, running on different hardware, they can learn so much more than us.

Geoffrey Hinton:
They can learn from the whole
internet.

Geoffrey Hinton:
It doesn't all have to go through one piece of hardware, and that effect is going to get more important as we go to AI agents that operate in the real world in real time.

Geoffrey Hinton:
For most AI uses, like images, you can just speed things up and send them through one network very fast.

Geoffrey Hinton:
Um, because obviously computers
operate, you know, thousands of

Geoffrey Hinton:
times faster than a brain.

Geoffrey Hinton:
But, um, if you're operating in

Geoffrey Hinton:
the real world, you can't get

Geoffrey Hinton:
experience faster.

Geoffrey Hinton:
Um, because the real world has
an actual time scale.

Geoffrey Hinton:
If you're interacting with other

Geoffrey Hinton:
agents who take a little while

Geoffrey Hinton:
to reply, um, then this

Geoffrey Hinton:
advantage that different copies

Geoffrey Hinton:
of the same neural net can share

Geoffrey Hinton:
will be an even bigger

Geoffrey Hinton:
advantage.

Geoffrey Hinton:
So at that point, I decided

Geoffrey Hinton:
there's all these short term

Geoffrey Hinton:
threats, and it wasn't really my

Geoffrey Hinton:
intention to warn about those,

Geoffrey Hinton:
but I got sucked into warning

Geoffrey Hinton:
about those because journalists

Geoffrey Hinton:
always confuse the existential

Geoffrey Hinton:
threat with all the other

Geoffrey Hinton:
threats.

Geoffrey Hinton:
They just muddle all the threats
together.

Geoffrey Hinton:
They move seamlessly from
joblessness to fake videos to

Geoffrey Hinton:
cyber attacks to lethal
autonomous weapons, as if

Geoffrey Hinton:
they're all the same thing.

Geoffrey Hinton:
Um, so I had to sort of clarify

Geoffrey Hinton:
a lot of those threats, but my main worry was the much longer term threat, though maybe not that far off: that they will be much smarter than us.

Geoffrey Hinton:
It's not necessarily the case,

Geoffrey Hinton:
but I think most people, most

Geoffrey Hinton:
neural net experts, believe that

Geoffrey Hinton:
within twenty years we'll have

Geoffrey Hinton:
superintelligent AI.

Geoffrey Hinton:
We vary, you know, Demis thinks
it'll be about ten years.

Geoffrey Hinton:
I think it may be as long as
twenty years.

Geoffrey Hinton:
And it'll very likely be more
than five years.

Geoffrey Hinton:
Um, Dario thinks it'll be three
years.

Geoffrey Hinton:
Um, but then he runs a company.

Geoffrey Hinton:
Um, so, anyway, Ilya thinks it'll be sooner than ten years.

Geoffrey Hinton:
Um, we all think it's probably
going to happen.

Geoffrey Hinton:
So the question is, what happens when AI is a lot smarter than us, and when AI agents are smart enough that they're also more powerful than us? They can collaborate with other AI agents and get stuff done, even if they can't sort of fire guns or pull switches.

Geoffrey Hinton:
They can persuade people.

Geoffrey Hinton:
And we know AI is already very

Geoffrey Hinton:
good at persuasion and will soon

Geoffrey Hinton:
be much better than people at

Geoffrey Hinton:
persuasion, like in ten years

Geoffrey Hinton:
time.

Geoffrey Hinton:
And so they'll be able to persuade people to do things, just like Trump persuaded people to invade the Capitol.

Geoffrey Hinton:
Um, so they don't actually have
to be able to do anything

Geoffrey Hinton:
themselves except talk.

Geoffrey Hinton:
So most of the tech bros are

Geoffrey Hinton:
thinking they have a model,

Geoffrey Hinton:
which is I'm the CEO, you're the

Geoffrey Hinton:
secretary.

Geoffrey Hinton:
You're much smarter than me.

Geoffrey Hinton:
Um, but I can always fire you,

Geoffrey Hinton:
and you'll make my life really

Geoffrey Hinton:
easy.

Geoffrey Hinton:
Because whatever I want to

Geoffrey Hinton:
happen, I'll sort of be like

Geoffrey Hinton:
Star Trek.

Geoffrey Hinton:
I'll say, make it so and it will
happen.

Geoffrey Hinton:
Um, and I don't really have to understand it. I'll still get the credit for it, because I said "make it so." Um, I think that's their model, and I just don't think that's going to work.

Geoffrey Hinton:
I think the big problem is how

Geoffrey Hinton:
do we prevent these things ever

Geoffrey Hinton:
wanting to take control or to

Geoffrey Hinton:
take over?

Geoffrey Hinton:
They may have control, but they

Geoffrey Hinton:
may still not want to replace

Geoffrey Hinton:
us.

Geoffrey Hinton:
And so I've fallen back on the
only example I know of a less

Geoffrey Hinton:
intelligent thing controlling a
more intelligent thing.

Geoffrey Hinton:
And that's a baby controlling a
mother.

Geoffrey Hinton:
And evolution has put a huge
amount of work into that.

Geoffrey Hinton:
So evolution has made sure the
mother cannot bear the sound of

Geoffrey Hinton:
the baby crying, and the mother
gets huge rewards, um, for being

Geoffrey Hinton:
nice to the baby.

Geoffrey Hinton:
Um, lots of pleasurable

Geoffrey Hinton:
sensations and just generally

Geoffrey Hinton:
good feelings.

Geoffrey Hinton:
Um, and we need to do the same for these AIs, for being nice to us. We're still making them.

Geoffrey Hinton:
And if we could make an AI that was superintelligent but cared more about us than it cared about itself or other superintelligent AIs, then we might be okay.

Geoffrey Hinton:
Um, but we have to accept that
we're going to be the babies,

Geoffrey Hinton:
and they're going to be the
mothers, and people aren't

Geoffrey Hinton:
prepared to accept that.

Geoffrey Hinton:
Trump's not prepared to accept that. Trump would never accept that.

Geoffrey Hinton:
Um, I think we have a lot more

Geoffrey Hinton:
hope of the Chinese

Geoffrey Hinton:
understanding it.

Geoffrey Hinton:
So I recently went to Shanghai

Geoffrey Hinton:
and talked to a member of the

Geoffrey Hinton:
Politburo.

Geoffrey Hinton:
Me and Eric Schmidt, who aren't natural allies. We're rather different in terms of politics; Eric Schmidt, for example, thinks Kissinger was a good guy.

Geoffrey Hinton:
Um, but we agree on this
existential threat, and the

Geoffrey Hinton:
Chinese leadership will
understand it much better than

Geoffrey Hinton:
any of the other leaderships,
because many of them are

Geoffrey Hinton:
engineers and they actually
understand how this stuff works.

Geoffrey Hinton:
They understand the argument

Geoffrey Hinton:
that it's a better form of

Geoffrey Hinton:
intelligence.

Geoffrey Hinton:
But I think all the countries
will collaborate on can we make

Geoffrey Hinton:
it so that it cares more about
us than it does about itself?

Geoffrey Hinton:
Because if any country figured out how to do that, it'd be very happy to tell the other countries.

Geoffrey Hinton:
That's like preventing a global
nuclear war.

Geoffrey Hinton:
And there, the USSR and America collaborated on that in the nineteen fifties, um, at the height of the Cold War.

Geoffrey Hinton:
They still collaborated to
prevent that.

Geoffrey Hinton:
So what I think we should have
is research institutes in

Geoffrey Hinton:
different countries that get
access to their own country's

Geoffrey Hinton:
super smart AI, which they're
not going to give to any other

Geoffrey Hinton:
country and can do experiments
on how to make it not want to,

Geoffrey Hinton:
how to make it care more about
people than about itself.

Geoffrey Hinton:
Um, and share with other
countries how to do that,

Geoffrey Hinton:
because I believe the techniques
for doing that will be roughly

Geoffrey Hinton:
orthogonal to the techniques for
making it smarter.

Geoffrey Hinton:
They're not going to share the techniques for making it smarter, because they're all doing cyber attacks on each other, and they all know that you want a better AI to do better cyber attacks and better fake videos and better autonomous weapons.

Geoffrey Hinton:
They're never going to share
that stuff.

Geoffrey Hinton:
They're anti-aligned.

Geoffrey Hinton:
But on not having AI replace us,

Geoffrey Hinton:
they're aligned so they will

Geoffrey Hinton:
collaborate.

Geoffrey Hinton:
Now, one of the things I ought to mention: Russ Salakhutdinov, um.

Geoffrey Hinton:
He was one of my best students.

Geoffrey Hinton:
Um, he came to Toronto, did his
PhD at Toronto.

Geoffrey Hinton:
Um, he did a postdoc with Josh
Tenenbaum, and then he wanted to

Geoffrey Hinton:
come back to Toronto, and he had
a faculty offer from Harvard.

Geoffrey Hinton:
And I really tried to get the
Department of Computer Science,

Geoffrey Hinton:
which had an open position in
machine learning, to give a job

Geoffrey Hinton:
to Russ and they refused.

Geoffrey Hinton:
Basically, this was about twenty eleven or twelve. No, this was probably twenty twelve or thirteen.

Geoffrey Hinton:
My department was one of the
last departments to accept that

Geoffrey Hinton:
neural networks really worked.

Geoffrey Hinton:
They had a big AI group, and the big AI group said: you've already got several people in neural networks. That's your quota.

Geoffrey Hinton:
We're short on people in
knowledge representation, and we

Geoffrey Hinton:
need as many people in
computational linguistics as we

Geoffrey Hinton:
do in neural networks.

Geoffrey Hinton:
Um, and they refused to give us
a job.

Geoffrey Hinton:
So we eventually got him a job in statistics so that he could be in Toronto. And we were trying to negotiate for him to move to computer science.

Geoffrey Hinton:
And then CMU swooped in, and I
think they offered him tenure at

Geoffrey Hinton:
CMU, and that was that.

Tom Mitchell:
Well, you have a whole cadre of
former students who are, um,

Tom Mitchell:
really leading the charge,
leading the way, and in a lot of

Tom Mitchell:
areas of neural nets, it's
pretty amazing if you.

Geoffrey Hinton:
Well, it was luck.

Geoffrey Hinton:
It was luck basically.

Geoffrey Hinton:
There were so few people who
believed in neural nets.

Geoffrey Hinton:
There was Yann, there was Yoshua, there was me, there was Schmidhuber.

Geoffrey Hinton:
Uh, there were a few other

Geoffrey Hinton:
people, but MIT didn't have

Geoffrey Hinton:
anybody.

Geoffrey Hinton:
Stanford didn't have anybody.

Geoffrey Hinton:
Um, Berkeley didn't have
anybody.

Geoffrey Hinton:
Um, Mike Jordan made sure of
that.

Geoffrey Hinton:
And, um, so the few of us who
believed in it got the really

Geoffrey Hinton:
good students who believed in
it, and that was great.

Tom Mitchell:
It worked.

Geoffrey Hinton:
People like Russ and Ilya and George Dahl and other people.

Geoffrey Hinton:
It was.

Geoffrey Hinton:
Yeah.

Tom Mitchell:
So if you could, uh, one final
question.

Tom Mitchell:
If you could give advice to new
PhD students now entering this

Tom Mitchell:
area, what would you say?

Geoffrey Hinton:
Sometimes I'd say become a
plumber.

Geoffrey Hinton:
You're too late.

Geoffrey Hinton:
Um. But actually, I say, if you're at CMU and you're doing this, you may be in the small fraction of people who survive in AI and don't get replaced, because for quite a while there's going to be creative people making AI work better.

Geoffrey Hinton:
And you've got a good chance of
being one of those people if

Geoffrey Hinton:
you're at CMU.

Tom Mitchell:
All right.

Tom Mitchell:
Well, we'll take that.

Tom Mitchell:
Uh, Jeff, thank you so much for
spending the time sharing that.

Tom Mitchell:
Um, it's it's always great to
catch up and, um.

Tom Mitchell:
Thank you.

Geoffrey Hinton:
Okay. Well, thank you for
inviting me.

Speaker 3:
Tom Mitchell is the University Founders Professor at Carnegie Mellon University.

Speaker 3:
Machine Learning: How Did We Get Here? is produced by the Stanford Digital Economy Lab.

Speaker 3:
If you enjoyed this episode,

Speaker 3:
subscribe wherever you listen to

Speaker 3:
podcasts.