Epoch AI is a non-profit research institute investigating the future of artificial intelligence. We examine the driving forces behind AI and forecast its economic and societal impact. In this podcast, our team shares insights from our research and discusses the evolving landscape of AI.
As long as the AI needs your supervision because it makes lots of mistakes, then the bottleneck is the human. I think Acemoglu has an excessive optimism about two aspects of this. We are in a game-theoretic situation between China and the US; I don't think the possibility of slowing things down exists. The second is the "we". He says we can direct technology, but who is "we" here? Is "we" China? Is it the US? Is it firms? Is it workers? Is it lawyers? Is it truck drivers? Who is "we"? You have this superstar effect: a very good AI programmer with lots of AIs can have enormous leverage and can reach a very large market. Every single thing tells you the GDPR has been bad for EU business, and now we're adding the EU AI Act. Part of the risk is that you try to control the technology and you end up without technology.
Hi, I'm Anson, a researcher at Epoch AI. Today I'm joined by my co-host Andrei Potlogea, who is an assistant professor at the University of Edinburgh, and by Luis Garicano, who is a professor at LSE studying economics. Luis, thanks for coming on the podcast.
>> It's my pleasure. It's really great to be here.
>> So I'd like to start with explosive growth, very briefly. One thing that we briefly discussed on Twitter was whether or not we're likely to see a massive acceleration in gross world product growth rates. And one point that I think is somewhat underrated by economists is that if we look at the last 200 years, growth has been roughly exponential; but if we look much longer throughout history, it seems like growth has been accelerating. So shouldn't we think that accelerations in growth aren't that implausible after all?
>> The probability that we get a very large acceleration of growth exists; I am not going to dismiss that. And you guys were arguing that that was potentially the case with your task-based model. My view was that there were several things that are likely to make that take a long time, or slow it down. The first obstacle I was pointing out is that in R&D, for example, it's very clear: you can develop as many new ideas as you want for proteins, for biotech, for solutions to biological problems, but if you don't manage to get them approved by the FDA, you don't have a medicine; and then you have to get doctors to use it and people to learn it. So there are a lot of bottlenecks that slow things down. That was my first objection.
The second was that people in Silicon Valley, who are only observing the very best application of the technology, which is coding, and who are extrapolating simply from which tasks we have and how many tasks we are performing, run the risk of overestimating how easy it is for organizations and institutions to accommodate this technology.
>> Just a quick question. I think we're kind of on the same page that sustained explosive growth is perhaps not that plausible. But what about an explosive growth spurt, a shorter-run thing, where you have, I don't know, five or ten years of much faster growth than we've recently experienced? Just because we start from an initial condition where AI seems to be good at exactly lots of things that humans are bad at, so you start with this high-productivity sector being initially relatively large. Could we have that?
>> I think so. I am an optimist on AI, in spite of our disagreement on that. Unlike people like Acemoglu and others, whose models don't predict large growth spurts even in the longer run, not just over 10 years, I do think we will have them. I think the good way to see it is field by field: does the AI become autonomous? The key distinction is between autonomous and non-autonomous AI. As long as the AI needs your supervision because it makes lots of mistakes, the bottleneck is the human, and the human is not improving much. Yes, the AI is helping the human do it a little bit faster, but the human is bottlenecked by their own time. So with the AI it's: okay, I'm a better lawyer, I'm doing my tasks better. But that's just an incremental difference. The moment you get the AI lawyer, the moment the AI becomes autonomous, there you get a jump, a discrete jump. So we could easily have a situation where we see very small steps where the AI is helping us and we're doing a little bit better. Think of the Brynjolfsson customer support chatbots: there the chatbot is helping the juniors be better customer support agents. It suggests answers, the juniors use them, but it's still the junior doing it. The paper was published in 2025, but the experiment is from a little bit before, and we know now that customer support is precisely one of the areas where it's likely, and in fact we have already seen it in some of the data, that the humans can be removed early from the production function, because at the end of the day there is a set of questions that are relatively repeated and common, and then you can do a lot of the customer service fast, reliably, etc. And you could always have a layer, like in my knowledge-hierarchies type of work, where basically the routine tasks are done by some agents and the exceptions are done by experts. That's kind of how stuff is produced: the high-value tasks are done by the high-level consultant, and the entry-level analyst does the routine jobs. You could still have that layer of people who get big leverage if all of these more junior tasks get replaced, and you get that big spurt that you're expecting. So I would think that it could easily be that we are all thinking, oh, nothing's happening, nothing's happening, nothing's happening, and then boom, something happens in one particular profession.
>> Something major, like the one type of spurt that you're mentioning.
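One very rough way to formalize the supervision-bottleneck point above (the notation is ours, offered only as a sketch, not a model from the conversation):

```latex
% Non-autonomous AI: output is capped by the supervising human's hours h,
% however good the assistance a(t) gets, so progress is incremental.
% Autonomous AI: output scales with the number n of AI instances,
% independent of h -- the discrete jump described above.
\[
  Y_{\text{assisted}} = h \cdot a(t),
  \qquad
  Y_{\text{autonomous}} = n \cdot q, \quad n \gg h
\]
```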
>> Yeah. So I guess we're all working in this kind of long-run macro way of thinking about the effects of AI, but what about the short-run macro of it? What would we expect to happen to things like unemployment and inflation?
>> I think the short-run macro is the problematic one. Let's hold in our minds this experiment: we have two sectors, say sectors A and B, and sector A basically gets produced for free, so the price in sector A goes to zero. The short-run effect is that you need to reallocate the labor and the capital to sector B. Now, the first thing that is clear, and that I think we will all agree on, is that welfare is going to improve. Say, for example, that sector A is medical services and legal services, and this is autonomous AI, so medical and legal services have zero price. First, a huge increase in consumer surplus. Fantastic, right? All my illnesses I can diagnose myself; I can handle all my legal problems. I need to buy a house? The AI does it, you sign it, it all goes on the blockchain, all automatic. Perfect. Okay, so fine, consumer surplus goes up. But what happens to GDP, and what happens to employment? Let's talk about the short run. In this crazy example (but sector A can be anything), you need a neurosurgeon to become somebody in sector B, maybe a plumber, just to make the extreme case clear to our listeners. Then you have somebody whose very specific human capital has been completely depreciated, who was used to earning several hundred thousand, and who now has to start working in a new sector where, I mean, I don't think any of his human capital is going to be very valuable. The machines, all the things that were complementary with the lawyer or the doctor, are useless; we need to depreciate them, we need to redeploy them. We have an increase in supply in sector B, and we have an increase in demand; in the short run, only the increase in demand, because the supply is still reassigning itself, and it's really hard to get these machines to be useful there. So in the short run I would imagine that prices in sector B are going to go up, but in the long run I don't know. I wouldn't talk about this as inflation. This is a change in relative prices in sector B. We could even have deflation if all of these people are unemployed, etc. But when it's a price shock, I am reluctant to talk about inflation. It's really just a price shock: all of those skills and all of that capital are worth nothing, and people in this new sector have to accommodate this extra demand and this extra labor and capital. That is how I would see the situation. Obviously, the problem with the short run of my scenario is that the very short run completely contradicts it: the lawyers will get the bar association to say it's illegal to sell your house without a lawyer signing, and the doctors will get the medical association to do the same.
>> One of the intuitions I have, and that I struggle to reconcile in my head, is this: you have this situation where in sector A productivity has gone nuts and the price is almost zero, but wouldn't we actually be worried that in the short run we'd have a recession? I mean, all these people would be worried about their jobs and would stop spending. So there's this demand-side thing happening in the short run. How do we reconcile those?
>> That's why I said deflation, if you want to call that price shock deflation. In this first sector there is a lot of consumer surplus, but in terms of actual GDP: we have the price in sector A times the quantity in sector A, plus the price in sector B times the quantity in sector B. The price in sector A is zero by assumption. So that part of GDP has fallen off a cliff, and that capital and labor are unemployed.
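A minimal sketch of this accounting point, with notation added for illustration:

```latex
% Two-sector GDP accounting as described above. Y is measured GDP;
% p_A, q_A are the price and quantity of the automated sector, and
% p_B, q_B those of the remaining sector.
\[
  Y = p_A q_A + p_B q_B
\]
% If autonomous AI drives p_A to zero while q_A explodes, the first term
% vanishes from measured GDP even though consumer surplus in sector A is
% enormous: welfare rises, but measured GDP need not.
\[
  \lim_{p_A \to 0} Y = p_B q_B
\]
```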
So yes, I think the short-run effect, until you get this reallocation, is a big increase in welfare. Probably a lot of people are very happy: you are in Ghana and you don't have access to good medical services in some rural village, and suddenly you can just get an AI doctor; that's great. But that increase in welfare doesn't necessarily translate into a GDP increase, and definitely those people who have to be reassigned could be in long-term unemployment, a lot of them, because many of them, depending on what their old skills were, might find it very hard to readjust to the new world.
>> One thing that I also wonder about, and part of what we're getting at with the signing thing, is the distributional consequences of these potential shocks. And here I sense a little bit of a tension, both when I read the news about the entry-level job market and what's happening in it, and when I read papers worrying about deskilling. On one level, we expect AI to be bad for entry-level workers and less skilled workers, at least for less skilled workers within skilled professions. On another level, we're worrying about deskilling. So will AI be good for less skilled workers within skilled professions, or bad for them? How do we think about that question?
>> It's a great question, and one that is really being played out right now. I joked at the NBER conference on AI at Stanford a few weeks back about Brynjolfsson versus Brynjolfsson.
>> That sounds particularly problematic.
>> Yeah, but I think we can reconcile it. There is a Stanford economist who has had two really important papers. One is the one I was referring to before, in the Quarterly Journal of Economics earlier this year, on chatbot assistance, AI assistance to customer service support agents. And indeed he finds big increases in the productivity of the most junior ones, because basically you get into the job and you already have a tool that allows you to solve most of the problems. They actually also get trained faster; they seem to learn faster, because when you turn the tool off, they seem to have picked up stuff. So on all dimensions they provide more quality, the clients are happier, etc., and it's the more junior of them who are helped.
And there is also another interesting field experiment, with software developers, that goes in the same direction. It finds a gigantic increase in productivity, maybe 20-something percent, from August this year. It says: look, we gave these tools to three companies, and we saw the software developers increase productivity a lot, particularly the junior ones. So that's one side; that says, okay, it's not deskilling. But then, when we look at the aggregate data, two very recent papers find something very different already. Now, this is not in the big macro data; the Fed economists haven't found it there, and these are not shocks as big as we would have expected back in '22. But let me tell you the two findings. One is the early-September paper by Likenberg and a co-author, Hosini. The paper is called seniority-based technological change, and basically what it finds, using something like 62 million workers, so really very significant, is that in the AI-exposed occupations you don't see anything bad happen to senior employment; you see it growing. But you see junior employment really dropping. And the way it's dropping is through hiring: it seems like a lot of firms are simply not hiring junior employees. The logic behind it seems clear to me. If you talk to a McKinsey partner, which I have done on exactly this question, a person recruiting for them, he was telling me things like: deep research does the job that the junior researcher could do; the PowerPoint slides, you can produce them automatically quite well; a lot of the junior tasks can be done by the software. And so you get this replacement of juniors whom you don't hire anymore. And we'll probably talk later about some work I've done on this, on the missing training ladder. So these junior jobs are gone, and so you're hiring less; you're not firing people. That's why I say this is subtle. This is the seniority-based technological change. The Erik Brynjolfsson paper, from August this year, is the canaries in the coal mine. Basically it finds something similar: for workers between 22 and 25 years old, so again, looking narrowly and carefully, comparing AI-exposed versus not-AI-exposed professions, we again see pretty clear and pretty robust drops in aggregate data. Now, how do we reconcile this? I would reconcile it with the following two ideas.
One is the idea that I was arguing before: you get, oh, I'm a better customer support agent, I'm a better customer support agent, I'm a better customer support agent... oops, I don't have a job, because the AI has been helping me become better until the moment the AI is sufficiently good that I am not needed anymore. That is one idea: autonomy. We start with non-autonomous AI that enhances and complements your skills. Ide and Talamas have a recent Journal of Political Economy paper, I think actually in this month's issue of the JPE, where they contrast autonomous and non-autonomous AI at different levels of the skill distribution. Part of the argument is that autonomous AI is going to pin down the wage distribution: it replaces people at a given skill point and produces an enormous supply shock at that point, and everybody below that point has to compete with the AI, has to earn less than what the AI charges or what the AI is worth. So the moment it becomes autonomous, things change. That's one way to reconcile it: autonomous versus non-autonomous. And the other way to reconcile it is, of course, the level of the AI, which is very related to autonomy. As the AI advances, I think we're going to see the complementarity in some of these lower-end jobs become substitutability. Now, this does not necessarily yet affect the higher-end jobs. I think if you're on the higher end, your leverage increases; the knowledge hierarchy becomes more productive. You have this superstar effect: we see these salaries for AI engineers who have been offered 100 million and things like that, like football players. When Messi is watched in the World Cup final or the Champions League final, he's watched by 500 million people. So being a little bit better a player gives you huge market size, because many people are going to want to pay a little bit more, and multiplied by 500 million people, whatever that little bit more is, it's big. That gives you superstar effects. Sherwin Rosen, who was a very important labor economist, made this point about situations where there is limited substitution between quality and quantity: I cannot substitute 20 players for Messi, I cannot substitute 100 players for Messi; there are 11 on the field and that's it, so no number of lesser players is going to replace Messi. And when you have markets with joint consumption, where one person can reach a lot of people because we can all consume the same football game, then you get the superstar effects, and these superstar effects are affecting the top of the wage distribution. A very good AI developer, with lots of actual AIs, LLMs, being deployed by him or by her, can have enormous leverage and can reach a very large market, so the extra skill they can add is really very valuable. So at the top of the distribution we could see this bifurcation: on the bottom, substitutability; on the top, complementarity. And of course, as the supervisory threshold, the threshold below which the AI can do things on its own, goes up, the sector that is actually getting the superstar gains will become smaller.
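A toy illustration of Rosen's superstar arithmetic described above; all numbers are hypothetical:

```python
# A tiny quality premium, priced over a jointly consumed audience,
# becomes a very large payoff. Numbers are made up for illustration.

audience = 500_000_000       # people consuming the same performance
premium_per_viewer = 0.01    # extra value of a slightly better star, in dollars

value_of_small_edge = audience * premium_per_viewer
print(f"Value of being 'a little bit better': ${value_of_small_edge:,.0f}")
# -> $5,000,000. Because quality cannot be replaced by quantity (no number
# of average players adds up to one Messi), the best performer captures it.
```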
>> So one thing I'm curious about: if I'm an entry-level worker and I want to figure out how I can get into one of these jobs and learn the skills I need to be valuable in it, there's sort of a strange situation, right? If I can get to the point of becoming an expert, if I can learn the skills to be an expert, then that's great. But there's a period in between where I would normally do these routine tasks, and right now I'm not able to do them as often, because the AIs are doing them for me. So when is it worth it for a company to hire me if I'm an entry-level worker?
>> Yes, it's a question I've been thinking about with Luis Rayo, my co-author from Kellogg. I like to think of this as an AI Becker problem. Let me explain. Gary Becker was a famous economist who developed the theory of human capital, and he made this distinction between general and specific training by companies. He said: look, a company can always give you firm-specific training, because they're going to appropriate it. But are they going to give you general training? General training can only be given if the company can recover its cost afterwards, but once you're trained, you can just walk out and take all the benefits from the training with you. So he would ask: how is this going to work? Either there's a market failure, because we don't get enough training in the economy, or somehow the workers pay for the training. With Luis Rayo, we wrote an analysis that appeared in the American Economic Review. We basically say: look, the way these contracts work is that there is a master and an apprentice, and the master slows down the training so as to extract all the surplus from the apprentice while handing out little nuggets of training. I'm giving you just enough that you want to stay, because you want to become an expert, but not so much that I train you very fast and you walk out. That's the solution we proposed. Now, in that solution, the AI, as you are hinting, is going to create a problem, which is that it basically devalues the currency with which the apprentice is paying. The apprentice is paying not in dollars but in menial tasks. Okay, you're a lawyer and you're working for Cravath, and it really is not worth your time to spend all your time reviewing all these contracts; it's boring as hell, but okay, you're learning something in return. Or you're at McKinsey and you're the smartest person in your class, or at an investment bank and you're the smartest person in your generation, and there you are doing silly spreadsheets that many other people could do. But that menial work is the way you pay for getting this training. Now, if the AI can do the basic research at McKinsey, can do the contract review at Cravath or whatever law firm this is, can do the basic accounting at an accounting firm, or the basic programming, then how do you pay for your training? So our argument is that the AI devalues the currency with which you pay, and as a result makes the firm, or the expert, reluctant to take the worker on in the first place. Before, they would think: I'll take this worker, it's going to be a pain and so on, but I'm going to get paid for my training through their work. Now it's so cheap to do that work with an AI that the worker's labor has lost its value as payment. So in the paper we build a very simple model in which this exchange is happening, and we show that there are two basic things going on, and the ratio between them is what is crucial. One is the substitution aspect of the AI, which is devaluing the currency with which the worker is paying: as the AI gets better, the worker has less to add to the production function of the partner, the more expert person. But at the same time, the fully trained worker is worth more, which means the apprenticeship may still be worth it. So the basic result we have is that there is a key ratio: how much the AI complements the expert, a fully trained expert with AI, how much has that value gone up, relative to how much the AI replaces the untrained person. If the value of the expert-with-AI is going up a lot, then even though the untrained person is not worth much, you can extract so much from that future value that the contract still exists. So basically, that ratio determines whether you are going to want to employ that worker or not, and whether to train them.
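A hedged sketch of that key ratio (the notation is ours, not necessarily the paper's):

```latex
% Let c be how much the AI raises the value of a fully trained expert
% (complementarity), and s how much the AI devalues the untrained
% apprentice's menial work (substitution).
\[
  R \;=\; \frac{c}{s}
  \;=\; \frac{\text{complementarity to the trained expert}}{\text{substitution of the untrained worker}}
\]
% The claim: when R is large enough, the surplus extractable from the
% future expert still covers carrying an apprentice whose "currency"
% (menial work) the AI has devalued, so the training contract survives.
```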
In the absence of that, the training ladder disappears, and we have a big societal market failure. Imagine: all of this tacit knowledge, all of this training that happens on the job, it's not in any manual, right? If it were in a manual, it would be taught in law school. It's about how you deal with the client, how you are really precise with the contract; it's hundreds of things that are hard to describe. Tacit knowledge is the idea that there is a lot that we know that we cannot describe. And if the worker is not acquiring this tacit knowledge, because this transfer of knowledge directly from the master, he or she being the one who has the knowledge, is not taking place, then the economy has a problem in the longer run. To the extent that the AI is not perfect, we won't have those experts who can supervise the AI in 10 or 15 years, and then we have a hole in our growth model: growth depends on human capital, and suddenly this whole pipeline of intermediate people acquiring skills has disappeared. That's actually, I think, a potentially big consequence of AI, a problem that AI could cause: eliminating those lower ranks on the training ladder. And as I was arguing before, with the canaries-in-the-coal-mine and the seniority-based technological change papers, there is a lot of anecdotal evidence from companies that these very junior employees are not really being hired, and in those two papers, from August and September, there starts to be systematic evidence that this could be happening.
>> What do we know about the value of this ratio? Do we have any empirical evidence?
>> No. It's a theory paper, and we are suggesting that people should look into this empirically; we are inviting people to analyze it. I think we are seeing both effects. We're seeing senior people really complemented and made more productive; look at those 100 million checks we were referring to at the big AI companies. The senior AI experts, the AI engineers, are getting big paychecks which would be unimaginable without AI, so they're being complemented. And in our own jobs we can already see that productivity is increasing with AI. But we are also seeing substitution. So the question is how big the ratio is in different professions, and the larger the ratio, the more the training ladders will remain.
>> One thing I'm a little worried about with trying to estimate this: if we had tried to do this exercise of estimating the ratio three years ago, the models were so different, and so much worse, that the ratio might have been pretty different. And I'm worried that if we try to do it today, three years from now it's going to look similarly irrelevant.
>> I think you're right, but this is true for all of AI, right? It's also true for all the micro models that try to estimate how much compute is transformed into advances. We have some general patterns and some general scaling laws, but we don't really know how far we can extrapolate. We are in a period of massive technological change, and the good news is that it's massive, and the bad news is that we have to peer into the future basically in the dark, with just a little bit of light. You guys at Epoch are trying to help people see further into the future, and we are all trying to use the best tools that we have. But the truth of the matter is, if this is as revolutionary as we expect, the future could give us big surprises.
>> Yes, I do agree with that. How much does this model depend on the tasks that are hard for humans also being the tasks that are hard for the AIs, as opposed to some kind of different skill distribution for the AIs, which seems to be the case? It's kind of like Moravec's paradox in AI: the things that are easy for the humans are hard for the AI, and vice versa.
>> I think the Moravec paradox is a huge discovery for all of us. We rediscover it every day, right? Things that we find impossible to do, the computer does perfectly, and then we end up spending time fixing some stupid mistake the AI is unable to fix. So it does go the opposite way, in some sense, as you are suggesting: we are indeed starting from a situation where the AI is, little by little, replacing things that the lower-skill worker can do. So yes, your point is well taken. I think the reason the model makes sense in this context is that the AI makes mistakes, and I like to call this cutoff the supervision threshold: you need to be smarter than the AI in order to be able to correct the AI. Think of a kid who is now going to school, and ChatGPT can write the essay much better than they can. So with ChatGPT, they just produce the essay and hand it in. They can't see where the mistakes are, or where things are actually not perfect. So they are never going to arrive at the supervision threshold; they're never going to arrive at the point where they are able to read the essay and see the mistakes, because they basically spent all their school years never doing it themselves. If you have a young kid (my kids are already past this), this is going to be an issue, right? I have a friend who is a high school teacher of English, and he tells me: how do I make these kids want to write and read? They quickly run Hamlet through the chatbot in the morning, take the key questions that have to be answered in class, BS their way through the answers, and they don't read anything. So I think the reason we frame it this way is that we are in a context, in a law firm, in a consulting firm, etc., where as you acquire seniority, you acquire the ability to add value above what the AI can do. To the extent that it's the opposite, that the AI does all the difficult tasks and anybody can correct the output, this would be a different world indeed. I think companies will have to think of training in different ways. Maybe they hire fewer workers, but the ones they have, they train by going over the AI output and reviewing it, so that there is still a way to keep improving without going through all those routine tasks that, at the end of the day, don't have any value anymore.
>> So in response to this AI Becker problem, could there be more equity-type arrangements involving human capital, where firms have some sort of exposure to the human capital they help create?
>> The human capital is inherently with the person, and you have a big moral hazard problem, right? Once somebody has invested in you, you are going to decide how much you work, and you could decide not to work, because you are not getting the upside; the company is getting the upside. So it has been historically very hard to find market solutions to this. Similarly with loans: there are loans for MBAs, for certain high-end things, but again, it's hard to see how you secure the loan against the human capital. You cannot secure it with the human being, because slavery is forbidden and you cannot pledge yourself as collateral. So human capital transactions, I don't say they are impossible, because they exist, but often these loans are government programs; in the US there are a lot of government guarantees, and in the UK there are government guarantees. I think equity has proven really hard. Maybe with football players, right? With football players you get the upside: you train a football player, you sell him to another team, etc. It's an equity-like arrangement, but it's about the only context where the firm that trained the player (I don't know if it happens in US professional sports) is able to get a transfer fee for having trained that person. It's a very unusual context. I would say equity is hard; debt is more promising, but even debt is tricky, because of moral hazard and repossession and all that.
>> Going back to the bigger picture a bit, on AI and training: do we have a sense at the moment of whether AI is making the training of humans easier or harder? On some level, you were mentioning that there are all these AI-powered learning tools that you could tailor to the student, assuming regulation allows you, and that could be helpful. But on the other hand, I'm an instructor myself and I can't get my students to read anything; I can get them to read the AI summary of the AI summary of something, and that seems bad. Is there any evidence?
>> I haven't seen evidence. I think we are all observing exactly what you're observing: students have been using AI for cheating. Let me tell you what I do with AI, because my view on education is summarized by what I do in my two classes. I teach the microeconomics class in the first year of the master's. My view is that if you want to be able to think in the future, you need some basic models and some basic facts and some basic tools, and that is not going to change; otherwise you cannot think. We are always trying to triangulate: is 400 billion big or small? Is that a big valuation? You need to have something in your brain to think with. So at the basic level, I want them to use the old blue books, write the problem sets, write the exam. Honestly, I tell the students: these are the basics you need in order to operate in life. There, I think AI is our enemy, because the student thinks, okay, the AI can do the problem set automatically, so why would I go through the problem set? And then you get to June, you have the exam, and you're like: oh, what is this exam about? So there it is an enemy. But indeed there are tools, and I try to tell the students: you can ask Claude for help, you can ask it to explain a concept you don't understand; you do it two ways, you do it three ways, until you learn it. On the other side, let me tell you what I do in my second-year class. That class could be called "what I learned in politics that I didn't know before as an economist". The students start from a policy. What is the policy that you're looking at? So one group is looking at the Galapagos; they have a huge water problem, the water runs out all the time and there are only a few hours of water a week. Okay, that's the economic policy. Now you want to look at the politics. What is the political economy? Who are the interest groups? Who is in favor, who is against? Then you want to talk about the narratives: how do you discuss this in public? How do you give a speech? What is the message that you give? What do you want people to hear? People don't hear what you say; they hear something else. What are the preconceptions? And then you want to talk about implementation: how are you going to implement your solution? Well, in this class I tell students that AI use is obligatory. For all of these things, the analysis, the politics, the narrative, they need to build models, they need to understand the data, they need to actually figure out stuff that three years ago would have been unthinkable; they couldn't have been doing it. So my view on how AI is working in education is: we need to make sure that they are learning the basics, and that is going to be a struggle, I agree with you. But at the same time, we need to be able to get our students to do enormously more than they could have done before. If you're teaching a macro international class like you do, and the students can actually build a trade model of the Ukraine sanctions, can actually change the elasticities of substitution, they can do things that before would have required an amount of computing and programming that only a PhD student could manage. So I think the way training works has to change radically: using the AI tools to learn, and using the AI tools to get much further. But at some basic level, we need to be able to persuade the students (that's the difficulty) that they need to learn the basics. Maybe your papers will be written by an AI, but if you don't learn to write, you're not going to learn to think. I know that argument is difficult to make, but if I had a seven-year-old child like you have, I would try to hammer home that argument somehow.
>> Maybe for this part, what we're going to have to do is homework in the classroom, right? Like, okay, you're going to be writing here.
>> Just notebooks, and adults in the room.
>> Maybe we have two hours, you know, in the library of the school from 2 to 4, which is homework time: no phones, no computers, and you have to do the homework for this basics part. And then we also need to use the AI. I believe in both; I don't think it's either-or.
>> This one's fascinating. Should this make us a little bit pessimistic? My sense is that there was this more optimistic line of thinking, which I would associate with Acemoglu: we have options, there's this directed technical change, we can choose to develop technologies that keep them complementary with human labor, and then we won't have so many problems. Whereas here it sounds like something almost inherent is happening: as the AI gets more advanced, it becomes a substitute. So we don't have a choice; we either accept advanced AI and accept substitution, or we don't accept advanced AI. "Advanced AI and no substitution" might not be on the menu.
>> I think that's my view, indeed. I think Acemoglu has a bit of an excessive optimism about two aspects of this. One is how much we can control this runaway train. We are in a game-theoretic situation between China and the US, and there is interaction between them: if the US decides not to develop, then China is going to develop anyway. So I don't think the possibility of slowing things down exists. Actually, I said two points, but I'm going to make three. One is the interaction part. The second is the "we". He says we can direct technology, but who is "we" here? Is "we" China? Is it the US? Is it firms? Is it workers? Is it lawyers? Is it truck drivers? Who is "we"? All of those people have very different interests. Is it the people in the AI industry, which is now generating a big part of the growth in the US? Does the US not want to have this growth? So the "we" is always kind of hidden away a little bit, and I find that strange for somebody who is as super-sophisticated about political economy as he is. He knows better than me; he's written a whole book and lots of papers about institutions and how they mediate this "we".
The third thing is that the risk of trying to interfere is many unintended consequences. I want to tell you about Europe, because that's what I know well: apart from being an economist, I spent a few years as a politician, in the European Parliament, and Europe has made a very Acemoglu-style effort. In fact, let me tell you: the letter that Acemoglu and Elon Musk and, I think, many others signed, the Future of Life Institute letter, was February or March of '23, something along those lines. That letter actually came in the middle of the elaboration of the EU AI Act. The drafts of the act were finished in November of 2022, but then the two drafts had to be reconciled, and the act was passed, I think, in the spring of '23. In between, just as they were finishing, came the introduction of ChatGPT, which, if you remember, was November of '22. And that was the moment of the existential-risk panic: everybody was like, oh, we're going to get turned into paper clips and humans won't exist anymore. So they wrote this letter, and there was a moment of panic in Europe. The person who actually wrote the law at the Commission has given an interview to a Swiss newspaper; I wrote about it on my Silicon Continent blog, in a post called, roughly, why the EU AI Act is so difficult to kill. Basically, he argues, and I quote him, that it was a bad moment for that letter, because Europe really decided: okay, this is too risky, let's put guardrails all over the place. And the consequence for Europe is that, as you were hinting, a lot of the productivity gains that we could be getting from AI are not possible to get. So let me give you an example.
example. The AI act is built on on four
risk categories. So there is forbidden
uses which includes detecting emotions
that's not allowed or government
controlled kind of surveillance and
point system social
scoring systems that's forbidden but
emotion detection is also forbidden.
Second, social scoring by the
governments. Okay, public transport.
Second, high high-risk uses which
involved energy infrastructure decisions
that that the legislature says AI
shouldn't be taken without a lot of
steps. Now, those decisions
include education
and health. So in education
you would very much want for example
your students in Edinburgh or you would
want them to to take an AI quiz and to
help you see how they're doing that can
kind of you can probably eventually
courses are going to it's poss going to
be possible for them to do the problem
sets in a customized way so they can
jump a step etc. they access these
things are high risk. And so the fact
that they're high risk means that when
you train the system, you have to make
sure that all the data are correct, that
all the data is free of errors to the
extent possible, that it's unbiased, and
that you have the relevant data. Now,
data error-free training data doesn't
exist. The training corpus right now is
the internet. I mean errors must be all
over the place. Somehow for some bizarre
reason that I don't know if anybody
understands
after all of this is aggregated all the
errors get washed out like in a low
large numbers kind of effect right so
the it kind of works but the act the
training data has to be unbiased and
free of of data now you need to keep
detail locks on these high-risk
applications you need to keep your
records you need to keep documentation
of everything of the system for 10 years
you need to prove accuracy and security.
You need the conformity assessment and
you need to register with the EU
authorities. Now there are 55 EU IA act
authorities that are going to be that
will do that and these authorities are
supposed to have personnel that is
highly qualified in AI highly qualified
in data protection etc etc. You're an
entrepreneur. You're starting your
company, your little startup with
education. You have to do all this plus
GDPR, the general data protection
regulation. Business know it because of
the cookies. It's a pain, right? I I
mean, you know, in economics, people
often disagree about things. I can tell
you there's been like 15 papers on the
GDPR and all of them find less venture
capital investment, less startups,
higher compliant cost. Every single
thing tells you the GDPR has been bad
for EU business and now we're adding the
EUX in terms of for for startups. So
part of the risk is you try to control
the technology and you end up without
technology which is kind of the world
where Europe has a risk of finding
itself. We don't have foundation models.
We have great researchers. We have a lot
of a huge savings pool right for many
reasons that we could go you if you guys
care to to to go into that. We have the
researchers we have the ideas.
Businesses don't scale in Europe. We
don't have foundation models. We I mean
basically we don't have a competitive I
mean I think there are like I don't know
something like two foundation models in
Europe compared to
50 in the US. I mean it's like the
numbers are are really really very
disproportionate and we have very little
AI implementation. So that's that's a
>> I'm curious what you would say to a person who says: no, I actually think these risks are really serious. Even if we don't go all the way to immediately turning all humans into paper clips, they think that if we have a ton of AI systems deployed throughout the economy, and they're not optimizing for the things that humans care most about, then you could slowly, gradually shift things off the rails. And so maybe they would say: well, the EU AI Act reserves the most serious category, systemic risk, for GPAI systems, general-purpose AI systems. These are the models that require over 10^25 training FLOP, and maybe meet a bunch of other requirements. So really this applies only to the most capital-rich companies; for most other people this particular provision doesn't matter so much, and it's just this particular group of actors that need to be subject to additional scrutiny. What would you say?
>> The systemic risk category is maybe a different story; I was talking about the system more broadly. There is indeed a systemic risk category, as you are pointing out. I think it's 10 to the 24 FLOP, but it's 24 or 25; we can check.
>> I think it's 25. It's based on GPT-4.
>> Okay. So indeed, GPT-4 is above, Llama is above. So we already have...
>> The previous generation of systems is already above.
>> The previous generation is already above, and yes, they have to be subject to adversarial tests, indeed; you have to prove that they are safe. To me, these kinds of existential-risk issues in these very, very large systems, they do deserve additional scrutiny.
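A hedged illustration of the threshold being discussed; the GPT-4 figure below is a rough public estimate (an assumption here), used only to show why the previous generation already clears the bar:

```python
# EU AI Act systemic-risk trigger for general-purpose AI models:
# training compute above 10^25 FLOP.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

gpt4_training_flop = 2e25  # rough public estimate, not an official figure

if gpt4_training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
    print("GPAI with systemic risk: extra obligations apply "
          "(adversarial testing, incident reporting, etc.)")
```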
>> So in that context, are you more or less optimistic about the EU? When you think about AI, do you become more or less optimistic about the European Union?
>> I am desperately worried. I think we are in a situation where these kinds of effects we were discussing before, big productivity growth, big welfare gains for many citizens, who could do their driving without getting killed in a car crash, who could do their contracts, who could get legal advice to do smarter things than they would have done negotiating with their landlord, many, many of these things are potentially not going to happen, because they will not be allowed. I think productivity growth will suffer, growth will suffer, and the welfare gains will not happen. And Europe has a demographic problem and a high-debt problem, so Europe needs growth more than many other places, to pay its bills. Europe is in a very tricky situation. Look at what's happening in France, which on the one hand is not growing and on the other hand has this big debt, explicit debt and implicit debt from its pension liability. Europe desperately needs growth, and I fear that the European Union has overregulated itself and is not going to get that growth.
>> I'm curious how much of what you say also applies to the UK, because, for example, the UK does have a frontier lab, which is Google DeepMind. So how much of this applies to the UK?
>> Let me just start by saying that I don't like Brexit and I don't think Brexit was a good idea. Brexit was bad for the UK, but it was also bad for Europe, because the UK was the force that was pushing Europe in a more free-market, open-minded direction; the UK was the motor of the single market project, which was about making sure that Europe had an integrated market. And so once the UK left, we had this divergence. The UK has a very pro-AI posture. It hasn't diverged in other areas, environmental areas, etc., where it is still applying the EU rules, but in this area it's taking a more positive posture. I mean, I'm a professor here, at the London School of Economics, because I actually believe in the UK. I do think the UK has a very bright future; it's just that the governments are not making decisions at the speed necessary to profit from it. But if you think of AI: you have capital, you have nuclei of talent, Oxford and Cambridge, which are at the cutting edge, you have DeepMind, and you have all these other labs around it. The UK could be Silicon Valley; I don't see why that would be impossible. Maybe the risk-taking mentality is the one thing that is missing; it's not quite there.
>> So, thinking a bit about the AI value chain: you were saying that there is this infrastructure layer, this lab layer, and this implementation layer. How do you think about where the value will go? How will the value be distributed across those layers, and how do the prospects of different parts of the world depend on which layer gets the value?
>> That's a great question. I've been arguing that Europe could try to get value from AI not from the lower layers but from implementation. If we manage to make the other layers competitive and interoperable, then we could get value on implementation. Let me split it bit by bit. On the hardware layer, it's clear that China and the US are capturing the value. If the hardware layer is where the value is, that's clearly going to benefit the US. And it looks like the learning curves are very steep. Look at Intel: it had a competitive advantage in PC hardware for, I don't know, four or five decades, until just this last generation, when they got hammered; but they had decades. It's very hard: the learning curves are very steep, you need to keep it clean, you need to print it carefully, you need to design very complicated things. It's very hard to enter; I don't think there's really an entry possibility. So a lot of the value capture will go to the hardware layer, and I think the evidence for that is pretty strong regardless of what happens upstream; profits will go there. In cloud computing, I think there could be big switching costs in moving your data from one cloud to another. Europe is really trying to avoid that, really trying to make sure that the data is yours and you can move it. But the cloud players can add features to make sure that you want to stay, and that if you move, you lose some value. So there could be quite a bit of switching cost. I think we need to make sure that in cloud computing the data is encrypted and remains on servers that are located geographically here in Europe, so that not all the value goes back to the US. But both geopolitically and economically, the risk is clear that on the cloud layer, too, the CLOUD Act of the US will have extraterritorial reach, because those are American companies.
On the LLM layer, the foundation model layer, it seems to me that what we are observing is very strong competition, and that it is very hard to obtain a competitive advantage. What we see is that, all the time, one company gets some feature, we all love it for three months, and then we suddenly start trying another one because it just got a slightly better feature. I'm basically fluctuating between Gemini, Claude, and OpenAI; I'm switching between all of those. It seems very hard to get an advantage. Also, there is a big open-architecture possibility, open or semi-open: Llama was trying it, Mistral is building on it, all those weights are out in the open, so we're going to be able to have at least some applications that are more energy-efficient or smaller, that can run on older systems and can actually exploit the fact that these are open architectures. So I would think that that layer remains quite competitive, with one caveat, which is the introduction of switching costs through memory. If the system starts to remember you and starts to know how you are, then switching systems is going to be costly. I think we should do our best to make sure that that data is yours and is very easy to port. Portability is crucial. Think of the example of social media. In social media, there's no portability: the data of my graph and of everything about me belongs to Meta, or to Twitter (I will never say X; it's my one principled objection). And if you're in disagreement with them, you start again from zero. Okay, I have, what, a few thousand followers on Twitter; if I want to abandon them and start somewhere else, that's my problem, and if not, I stay there. Now imagine a world where I send a message and everybody who likes me can follow me from any platform, where it's completely interoperable: market power would change radically, right? And I think the regulation you were talking about should do its best to make sure that this interoperability exists, so that we don't fall into the same trap we fell into with social media, and so that Europe can appropriate quite a bit of the value on some of those verticals. Now, how do you do that and avoid extraction by all those upstream players we have been talking about, from hardware to infrastructure to the LLMs? Well, we have to move fast, which we're not doing, and we have to keep those markets competitive, through interoperability and all these other demands: that data can be moved, that the clouds are not proprietary, etc. I think it's possible, but it's tricky, because the truth of the matter is, if you don't have the hardware, everything else flows downstream.
>> So one of the key points here seems to be that you think the EU should be using the levers it has to move as much of the value as possible to the implementation layer, because that's the layer where Europe is strong.
>> Yes. I've been pushing a second-mover strategy, a smart second-mover strategy for Europe, which is a strategy that basically has Europe, let me say it for clarity, free-riding on this gigantic investment boom in LLM development and data center development that is already taking place. Okay, we take it for granted; we're not going to try to imitate it, because we're too far behind. Let's use all our scarce resources in securing autonomy, encrypting the data, having the data centers locally based, but mainly in developing a strong implementation layer.
>> And in that context, would you worry that Europe would have some of the same problems it has had with regulating the tech giants in the past? Because I'm guessing this becomes a geopolitical game pretty quickly.
>> It has become a geopolitical game.
That's the problem that the US
government is really throwing its way
behind these these big giants and it's
going to be very hard for Europe to
insist in kind of level playing field
and interoperability etc. And we are
seeing it now.
Of course, there was a digital tax. There was the OECD Pillar Two, which was going to harmonize aspects of corporate taxation; Trump has said that's off the table. It's going to be very difficult to do certain things that rely on mutual acceptance. The US is going to throw its power around, and we're going to have to basically swallow it. Take the Turnberry 'agreement', quote unquote, between Donald Trump and von der Leyen this summer. There was a trade dispute.
In every trade dispute until now, the way it works is: okay, you put on tariffs, and I reply with the same. Here, Europe comes out of the room saying: huge victory. He's putting on all these tariffs, and we're not doing anything. Sorry, what's the victory? No, no, they're not going to do anything else. Where does it say that Donald Trump is not going to do anything else? No, no, they've promised not to do anything else with our cars. Of course, there is no promise. So we accept the tariffs, we don't do anything in retaliation, and on top of that we didn't really get any commitment from the US not to take further actions. The truth of the matter is that geopolitically we are very dependent, and the Ukraine war, which would take us in other directions, is part of the reason: we need the US defensive umbrella, and we are going to struggle a lot to get that defensive umbrella to continue.
>> Yeah, given that, I'm curious how you think about economic security. A lot of the rationale for this smart second mover strategy is that it's much harder, say, to build out huge amounts of energy infrastructure and data centers. But a very common theme in these discussions about data centers is that we want some kind of sovereign compute: if this is so important to the economy, then we want to make sure that we have our own data centers in the EU, so that if people need to use AI, the data centers are there. How do you think about that?
>> I don't think public investment in this is going to be the big solution. The EU has two programs, the AI factories and the AI gigafactories; the gigafactories program has five big AI data centers. But at the level of investment now being put into these one-gigawatt-plus centers, which are really extremely costly, we are going to have one of those, I think, in Portugal. It's private sector investment, a partnership: one company that Nvidia has invested in, a UK company, and a data center company, maybe.
So this is going to be one data center that is local, and we're going to have more local infrastructure. Basically it's Portugal, Spain, and the northern countries, because of energy issues, that are getting the big data center investments. In Spain, too, by the Ebro River (I don't know how the Ebro is said in English; the Iberus, the big river that comes through the north below the Pyrenees, taking all the Pyrenees water), there are going to be two big investments. So we will have some quote-unquote 'sovereign' data centers, but these are not truly sovereign, because in some sense the ones in Spain are basically Google and Amazon: they're Azure and Amazon Web Services investments. But if they're local, we get some control. And ideally, eventually, there will be some local European companies doing this.
I don't think public investment is the solution, because the numbers we're talking about are hundreds of billions of build-out per year, up to a trillion by 2030. These are really very large numbers, and public investment is not at that level. All of these companies are spending on R&D: Amazon, Microsoft, Apple, all of them. Amazon alone spends more on R&D than any government in Europe; just one company. So it's not going to be possible to keep up through public investment. The private sector has to want to do it, and for the private sector to want to do it, regulation is crucial, both in terms of permitting and in terms of all these regulatory obstacles we seem to be throwing all over the place.
>> So, on net, from this sort of geopolitical game: I think a lot of people in Europe are upset by the relatively aggressive stance that the US government is taking on a number of these issues. On net, is this good or bad for Europe? On one level, I guess aggressive US government action means we're less likely to be able to move value to the layer that's convenient for Europe. But maybe this sort of aggressive action also makes it less likely that we'll be too risk-averse, right? Because our instinct is to stop a lot of things that the tech giants don't want us to stop and that the US government might not want us to stop. So, will the US government save us from ourselves?
>> That unintended consequence would be welcome, or at least to some extent welcome.
We had a year of wake-up calls. Wake-up call number one: Trump gets elected. Wake-up call number two: the sofa scene where Vance and Trump ambushed Zelensky, the Ukrainian president. And every time it's: this is a wake-up call for Europe, we cannot trust our old ally, the US, we need to act together. And then we go back to sleep. Every time, we have the wake-up call and then go back to sleep; the wake-up calls don't seem to be waking us up at all.
So to some extent, what has happened this year should have unleashed a wave of: okay, we're going to invest in AI and destroy this legislation. One post that I wrote on this, which I mentioned before, on the Silicon Continent blog, asked exactly why it is so difficult to undo this thing. Europe doesn't have a very easy error-correction mechanism. The same European Commission that produced this explosion of legislation, the Green Deal and the digital legislation, over the five years between 2019 and 2024, is now tasked, under the same president, Ursula von der Leyen, with undoing it. Oh, we went too far; let's undo it. Well, you know, the rapporteurs, the people who wrote the legislation in Parliament and in the Council, the people in the Commission who pushed it: all three institutions, the governments in the Council, the Parliament, and the European government, which is the Commission, are going to be tasked with undoing a lot of rules that they themselves pushed and that they sold as big victories when they passed them.
So now to say, 'Oh, you know what? We thought the act was great, but now that we realize it's going to slow us down, and Trump is going to be a risk, let's undo it', that is very hard to make happen. The coalition that runs Europe involves the center right, the center, the center left, and the Greens. Those are basically the same parties that passed the first legislation, and they are the same ones that now have to undo it, and there are many differences inside that coalition as to what can happen. The very first piece of legislation that should have been removed, which had to do with excessive corporate responsibility reporting and paperwork, was guaranteed to pass; everybody thought it was going to pass, and then Parliament turned it down, because a lot of people were invested in the existence of that legislation. So I hope Trump, or the US, partly saves us from ourselves in some of these aspects, but I am not very hopeful.
>> So one direction I was also hoping to bring back into the discussion is the macro-finance angle. There's been quite a bit of discussion about the potential impact of AI on things like interest rates. How do we think about that in the context of fiscal sustainability and macro-financial stability? These are hot topics in general, and hot topics in the European Union in particular. Any thoughts on that?
>> Yes. I wrote a post that I titled 'R without G', about how the European Union could get the high interest rates and not the growth. Let me unpack this a little, first in general rather than for Europe, and then apply it to Europe. There was a very recent paper by Oka and some co-authors presented this summer; people can get a link, and maybe we can post the links to the papers that I mention.
That paper provides a very simple demand and supply framework and applies it to AI. Basically, they talk about the price of assets as the result of a demand and supply equation: when there is a lot of demand, prices go up. The tricky thing, which everybody in our audience has to remember, is that when asset prices go up, interest rates go down; those two things move in opposite directions.
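As a quick aside for readers, that inverse relationship is just textbook bond arithmetic, nothing specific to the paper. A minimal sketch, with hypothetical numbers:

```python
# Minimal sketch (hypothetical numbers, textbook bond math): for a zero-coupon
# bond with face value F maturing in T years, price P and annual yield r
# satisfy P = F / (1 + r)**T. When asset demand bids the price up, the
# implied interest rate mechanically falls.
def implied_yield(price: float, face: float = 100.0, years: float = 10.0) -> float:
    """Annualized yield implied by the price of a zero-coupon bond."""
    return (face / price) ** (1.0 / years) - 1.0

for price in (55.0, 70.0, 85.0):  # rising demand pushes the price up...
    print(f"price {price:5.1f} -> yield {implied_yield(price):.2%}")  # ...and the yield down
```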
They argue that over the last 40 years, demand has greatly outstripped supply, so prices have gone up and interest rates have come down: a very long secular decline in interest rates. In their calculation, asset demand has multiplied by four. That is a big increase in asset demand, driven by slow growth and by demographic change: people need assets for when they retire, they are old, and they need safe assets in particular. All of that has led to a very big drop in interest rates. It's been a godsend for everybody who was in debt, particularly countries that were in trouble; they could issue debt for free.
But they argue that AI is going to change this and that AI is going to increase interest rates. First, because of the impact of what we have been discussing, higher productivity growth: that means the supply of assets is going to increase. Firms are going to have to raise equity, and that adds to the supply.
>> Raise equity to pay for the AI investments, for all the AI labs, for all of this?
>> Well, all of that. And at the same time,
the demand might go down, because younger workers think: oh, wow, the economy is growing a lot, so I don't really need assets, because the economy is going to grow so much. So they say this is going to lead to a drop in prices and an increase in interest rates. That's their argument.
So: a drop in asset prices and, in parallel, a rise in rates. Basically, their view is that we will have a bigger G, higher growth rates, and a bigger R, but the growth rates will be higher than the R. So, no problems for fiscal
sustainability. Remember that sustainability depends on R minus G: R is how much you have to pay when you issue debt, and G is how fast the pie with which you pay is growing. If R goes up a lot (oh my goodness, now I have to pay 6%) but my growth doesn't go up, then each time I have to pay more and more, and I get squeezed. If my growth rate is rising a lot, then I can afford what I used to pay on the debt. So they say: the growth rate probably rises a lot, and it rises by more than R, so overall it's sustainable.
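To make the R-minus-G arithmetic concrete, here is a minimal sketch with numbers of my own choosing, not the paper's model: with debt-to-GDP ratio b, interest rate r, growth rate g, and primary surplus s (as a share of GDP), next year's ratio is b(1+r)/(1+g) minus s.

```python
# Minimal sketch of debt-to-GDP dynamics under r vs. g (illustrative numbers):
# b' = b * (1 + r) / (1 + g) - s
def debt_ratio_after(b0: float, r: float, g: float, s: float, years: int) -> float:
    """Debt/GDP ratio after `years`, starting from b0."""
    b = b0
    for _ in range(years):
        b = b * (1 + r) / (1 + g) - s
    return b

# Hypothetical: start from a France-like 116% of GDP with a zero primary balance.
for r, g in [(0.06, 0.08), (0.06, 0.01)]:  # rates with growth vs. rates without growth
    print(f"r={r:.0%}, g={g:.0%}: debt/GDP after 20 years = "
          f"{debt_ratio_after(1.16, r, g, 0.0, 20):.0%}")
```

With the same 6% rate, the ratio melts to roughly 80% of GDP when growth runs at 8%, and balloons past 300% when growth stays at 1%, which is exactly the 'R without G' worry described next.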
about Europe
is that you are going to have the bad
part of having to pay higher rates
without having the good part of having
higher growth rates. If you're putting
obstacles in the way of how you are
adopting the AI, the taxi drivers oppose
self-driving cars, the legal the legal
profession opposes
AI in the law and the doctors don't want
AI and you get this human bottlenecks
everywhere, then you're not going to
have increases in growth rates, but you
will have to pay the global higher
global interest rates that everybody is
facing because of the AI revolution and
the higher productivity of capital that
that comes with that and the higher
investment boom and all that. So as a
result what you could have is that you
make much worse the debt sustainability
problems which block our welfare state
in the European Union. So for me I mean
For me: we have countries that have not only high explicit debt, almost 120% of GDP in France, 116 or so, but also high implicit pension debt, three or four times GDP, probably more in some countries. All of this has to be financed with the G, and you have to pay the increasing R on it. If you don't get the higher G and you do get the higher R, you're going to be in big trouble in terms of sustainability.
You asked before whether Trump would be somebody who could wake up Europe. This is another reason to wake up. We have a demographic problem, and this is not just continental Europe, it's also the UK, and we need more growth. We need to take a much more aggressive pro-growth stance. Much more aggressive.
>> So, in terms of this overall problem, and that piece in particular: I'm a little surprised that in the economics profession there seems to be, if not a consensus, a very strong majority view that AI will lead to an increase in interest rates. But couldn't you make an equally strong case that it could lead to a decline in interest rates? I mean, precautionary saving; you know, I get this vibe...
>> Because I'm a bit scared.
>> Right, I'm very scared, right. I think in the Valley people are having this discussion: you know, I need to do well in the next five years, otherwise I'll be a serf forever, or something like that. Couldn't people just save because they want some exposure to the companies that will own the economy? That's kind of a first-order thing: they really want to buy assets now, because their human capital will depreciate.
>> So I think it's not impossible that we have other forces; you're right that the world now seems very uncertain. It's also possible that inequality grows very much, and that's going to push in the opposite direction as well, because the rich save more; they don't consume as much. At some point, Elon Musk is not going to consume his one-trillion-dollar package, if that happens. So yes, there are a couple of forces, precautionary saving is one and the increase in inequality is the other, that could push in the other direction. I would side with the consensus of the economics profession, but you're right that there is a question mark over it.
As a first-order approximation, the slowdown in growth over all of these years led to this drop in R. An acceleration in growth, which is what we think is going to happen, will as a first-order effect, I think, increase the return on capital and lead to an increase in R.
>> So empirically, those two things go together, as opposed to...
>> I would expect that, but, as we said, we are peering into the unknown, and we all have to be modest and humble. Mhm.
>> And the other thing that surprised me a little bit was that you were tying this increase in R to problematic implications for Europe in particular. The reason I was a bit surprised by that is that I think of Europe as a continent of creditors, right? We run huge net surpluses with the rest of the world. So it would seem to me: okay, we're creditors, R will go up, so
>> we are exposed to these gains.
>> We will get richer. So in some sense we will get richer: our governments will have more problems, but we will get richer, as long as the government finds ways to tax it.
>> Let me unpack that; that's a great point. So it's true that we are net savers, and that means that, as a continent, we should get some exposure to the good side of AI.
>> Even if it happens in the rest of the world, right?
>> Yes. Enrico Letta was writing a report about how Europe is doing badly, and he came up with this expression: European savers are exporting their savings into American companies that are employing the European workers and entrepreneurs who cannot make it here. And a lot of this is happening; you are down on the West Coast and you see all these Europeans running around, and Indians, and all these other nationalities. So it's true that the savings should capture some of this additional R; we should be benefiting, we should be on the good side.
Now, the distributional impact is a bit tricky, because who is going to benefit from those savings? For example, Holland has big pension funds with big exposure to interest rates, but in places like Spain and France the state essentially runs the whole pension system on a pay-as-you-go basis, so the overwhelming majority of the population has zero financial wealth. They have housing, which could also go up, so the ones who are long housing will probably benefit from this run-up.
>> But they're not exposed to financial assets. Only the very top, I would say 3, 4, 5% of the population, will have significant exposure to these financial assets. So the distributional issues are not obvious, but you're right that there is a net saver income effect that is positive; income effect meaning, as you put it, that Europeans are wealthier when R goes up.
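Back-of-the-envelope, with purely hypothetical numbers of my own rather than figures from the conversation, the net saver income effect is just net foreign assets times the rate rise:

```python
# Hypothetical illustration of the net-saver income effect: a region holding
# positive net foreign assets earns more on them when world rates rise.
net_foreign_assets = 0.60  # hypothetical net external assets, as a share of GDP
rate_rise = 0.02           # hypothetical 2-percentage-point rise in R
print(f"extra annual income: {net_foreign_assets * rate_rise:.1%} of GDP")  # 1.2% of GDP
```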
>> One thing you were hinting at when discussing the macro-finance question was demographics, and I must say that I worry about demographics quite a lot, in the European context and in the global context as well. So should demographic change shift our view of the trade-off between the benefits and the risks of AI? I think a lot of people are in the mindset that, at least in the rich West, things are pretty good as they are, so we can afford to continue with things as they are and be quite risk-averse when we write down AI regulation and do things like that. But aren't we actually on a burning platform? Aren't things going to get worse unless something turns up, like AI?
>> You're very right, and my colleague, co-author, and friend Jesús Fernández-Villaverde has been sounding the alarm about where demography is heading: total fertility rates are plummeting, not just in the developed world, as we expected, but actually in the developing world. Colombia, Tunisia, Turkey: they're seeing collapsing rates, which is very strange. As people put it, they're going to grow old before they get rich; normally the opposite happens, countries get rich and then they start aging.
So the demographic collapse is really, really problematic, and it's true that you will need a care economy. In the positive scenario you were describing, where AI does many of the tasks that we think of as human, care is something that maybe can be helped by AI, replacing some humans in those professions, which are going to be very hard to staff given the enormous share of the population that will be old. That would be useful. I remember discussing it with Joshua Gans, and I said: oh, people are not going to want a robot to care for them. He said, 'Are you kidding? I would much rather have a robot take care of my needs, like cleaning or showering me or whatever it is, than a human.' I thought: oh, actually, maybe that makes sense; the robot is gentle, maybe you can do it. So he was arguing that robots will potentially have big value as carers, that people will want them. I don't know; we'll have to see.
But if not in caring, then in many other things. And again, there is evidence that people are doing therapy with AIs, so maybe there is more range there. But if not in therapy, then in many other things: we need the growth, and we need the labor that we're not going to get, given this lack of fertility. So all of that says: hey, let's have a more AI-positive posture. Absolutely. Of course, it could be that AI leads people to want to have AI companions, and I don't know if that
>> makes the fertility crisis worse.
>> Makes the crisis worse, yes. But okay, that's a consumption choice, and we cannot predict how that will play out. It does seem, though, like people like to have AI friends; I think that is happening.
>> So one of the things I had in mind when I asked the question was stuff like R&D. We think of R&D as being done by relatively young people, although I appreciate that's changing. So it's not so much AI helping us with the care economy, although that's important as well. In semi-endogenous growth models, we end up needing population growth to get any growth at all, and I guess with fertility declining, that's problematic.
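For reference, a standard Jones (1995)-style formalization of that point, my gloss rather than a model anyone wrote down in the conversation:

```latex
% Semi-endogenous idea production (Jones 1995), one common formalization:
\[
  \dot{A} = \delta\, L_A^{\lambda} A^{\phi}, \qquad \phi < 1,
\]
% where $A$ is the stock of ideas and $L_A$ the number of researchers.
% On a balanced growth path with research labor growing at rate $n$,
\[
  g_A = \frac{\lambda\, n}{1 - \phi},
\]
% so long-run growth in ideas falls to zero as population growth $n \to 0$,
% unless AI substitutes for human researchers in $L_A$.
```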
So at the very least you need to be able to shift more humans into R&D, and...
>> And not only humans but AI as well. Some of the work by Philippe Aghion, the recent Nobel Prize winner, with Ben Jones and Chad Jones, argues that, to the extent that AI is just capital, it is not going to make a big difference; where it really makes a big impact is in R&D. To the extent that AI can accelerate the production of ideas, AI can really accelerate growth.
That is, I think, the scenario where you will see the big growth acceleration, taking into account all the caveats I gave about regulatory approval and so on. But I agree with you: generating ideas is really the driver of growth. If we don't have the scientists, we had better have AI generating ideas, or we need to move more people into scientific production. I am pretty optimistic about how AI will help in the production of ideas.
Somebody like Terence Tao already writes: okay, I could solve a problem thanks to AI. He was making an interesting argument: that AI was helping him collaborate with many collaborators. Basically he says: look, you always have small teams of mathematicians, because you need to trust each other; you don't know whether one key step in the proof was well done. Now, with AI, we can do the little bits of the proofs, decentralize the work, check each other's work, and somehow have bigger teams. Mathematicians are saying that AI is already helping them, to prove a proposition or not quite that. I don't think there's an AI theorem yet, but I think there are some results already. So, between combinatorics and the protein-folding Nobel Prize, we do see some impact of AI in accelerating research, which could be crucial indeed, given our demographics. We need to have that research sector somehow produce.
>> All right, I think that's a great place to end. Thank you, Luis and Andre, for coming on the podcast.
>> Thank you. Thanks, Andre. It was a lot of fun. I appreciate it.