Epoch After Hours

What will AI progress look like over the next 15 years? Informed by current trends, Epoch AI researchers Jaime Sevilla and Yafah Edelman argue that the default expectation should be wild. They discuss whether AI will solve the Riemann Hypothesis in 5 years, what AI agents will be able to do in 2030, and what happens if we have 100,000 self-improving robots. They also explore what might make progress much faster or slower than they expect. 

0:00:00 - Preview
0:00:41 - Intro: Does 5× compute scaling continue?
0:08:15 - Largest training run in 2030 & what does it imply?
0:12:44 - Impact on Software Engineering & other cognitive tasks
0:23:27 - Economic impacts near the end of the decade
0:31:34 - 2030 bifurcation: Slow down or take off?
0:35:49 - Physical vs cognitive automation
0:44:37 - Timelines and impact of full cognitive automation
1:02:37 - Returns to intelligence
1:08:51 - Three cruxes after 2035 (Robots, technology & intelligence)
1:16:28 - What happens in 2040?  
1:23:16 - Recap: Three eras of forecasting
1:37:42 - Closing remarks

For full transcripts of all Epoch After Hours episodes, visit: https://epoch.ai/epoch-after-hours

-- Credits --
Participants: Jaime Sevilla & Yafah Edelman
Design: Robert Sandler
Podcast Production & Editing: Caroline Falkman Olsson & Anson Ho
 
Special thanks to The Producer’s Loft for their support with recording and editing this episode — https://theproducersloft.com/

What is Epoch After Hours?

Epoch AI is a non-profit research institute investigating the future of artificial intelligence. We examine the driving forces behind AI and forecast its economic and societal impact. In this podcast, our team shares insights from our research and discusses the evolving landscape of AI.

Preview
Yafah: [00:00:00] I would not be that surprised to see the Riemann hypothesis solved by AI in the next five years. So far it looks like diffusion of AI is going extraordinarily fast. The roadblocks that people expect to exist don't seem to exist in practice so far.

Jaime: [00:00:12] Now, we have this concrete thing on record that economists are going to point to and say "These people are insane, they say we can get to 10% growth a year by 2035". And then, the AI people, they're going to look at us and they're gonna be like "Oh, these people are insane. They're only projecting 10% growth by 2035."

Yafah: [00:00:30] I would like to note that by 2040, we are at the point where my forecasting fails. It goes bananas. The next stage after this might look very intense.

Intro: Does 5× compute scaling continue?
Jaime: [00:00:41] Hello everyone. Welcome to Epoch After Hours. I'm Jaime Sevilla, the director of Epoch AI. And today I am joined by Yafah Edelman, our organization's new head of data and analysis. How are you doing today, Yafah?

Yafah: [00:00:56] I'm very excited to look at this and discuss what the trends we've been monitoring for years imply about the future and where we think things are going. We've done a lot of studying the past and analyzing what's going on, and I think we've developed a lot of thoughts on where things are headed. I'm excited to work them out with you.

Jaime: [00:01:17] The way I'd describe the exercise we're doing here is projecting the lines we've been collecting into the future: doing the straight-line approach to forecasting on AI with a greater degree of speculation than we have dared before, and seeing where those lines take us. All right. Let's talk about how long we can keep up the current regime. Right now this rate of scaling is very, very fast. We're talking about scaling like five times a year. How long can you keep this up?

Yafah: [00:01:48] So, you know, I expect to see a slowing down from 5x in the next, I would say, two years. I think that we're probably not at 5x anymore. A little bit slower, I think, just because the data centers of today are taking a long time to build.

Jaime: [00:02:07] Wait, wait, that's a pretty bold claim here. Even taking into account the stated plans for Stargate and Abilene, do you still think that we're not going to be able to keep up 5x per year in the next two years?

Yafah: [00:02:20] Yeah, I think so. It's possible. So there's a question here about what's driving it. If they wanted to keep up, if they were willing to use all of Stargate once it was built to do a very long training run, they could. They can totally keep up; the capability is there. My expectation, though, is that we've already seen some of the longest training runs in terms of duration, a few months, and that growth in duration is a big part of what's been driving the scaling of training compute. I think R&D time and everything has become a big enough deal that you won't see duration increase further. And once you remove that from the equation, you're at 3x, or 2.5x or so, in terms of just how much compute you have.
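
To make the decomposition concrete, here is a minimal sketch in Python. The split between hardware growth and duration growth is an assumption chosen to be roughly consistent with the figures discussed here, not Epoch data.

```python
# Toy decomposition: training compute growth = hardware growth x duration growth.
# All numbers are illustrative assumptions, not measured values.
total_growth = 5.0      # ~5x/yr historical growth in frontier training compute
duration_growth = 1.8   # assumed yearly growth in training-run duration

hardware_growth = total_growth / duration_growth
print(f"Implied hardware-side growth: {hardware_growth:.2f}x/yr")  # ~2.8x

# If run durations plateau (duration_growth -> 1.0), total compute growth
# falls back toward the hardware-side rate, in the 2.5x-3x range.
```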

Jaime: [00:03:00] Can we go over the numbers here and make sure that I understand? One thing I will push against: I do think that there is a very explicit intention here of "we're building these giant clusters to train a bigass model". Like the Memphis cluster that was built by xAI: it has been pretty much, in its entirety, used for training Grok. That is my modal belief.

Yafah: [00:03:29] Grok 3 in particular we know about. We think it was trained on about 80,000 GPUs, and the cluster at the time had somewhere between 100,000 and 200,000 GPUs, so a decent amount more, and it definitely had a lot more not too long afterward.

Jaime: [00:03:51] When did Grok 3 come out again?

Yafah: [00:03:53] Earlier this year.

Jaime: [00:03:53] Okay.

Yafah: [00:03:54] Yeah. Grok 3 came out earlier this year. Grok 4 also came out. It's possible Grok 4 was trained on more GPUs. Important to note, though: we think the amount of additional compute that went into Grok 4 is less than what went into Grok 3 in the first place. Grok 4 was Grok 3 plus some RL on top, additional reinforcement learning. It seems very possible that if they trained it on more GPUs, it was for a much shorter time. It seems also possible that they trained it on fewer GPUs for longer, or for a similar amount of time.

Jaime: [00:04:31] I see. Okay. Yeah. I think that moves me a bit in your direction of "okay, we might already be slowing down". On the training duration part, do you want to expand on why we expect things to not get any longer?

Yafah: [00:04:45] Yeah. There are two reasons here. One of them is that algorithmic progress and compute improve over time. So the later you start your training run, the faster it will go and the faster you'll be able to make progress. That provides an incentive to use your compute for other things earlier on and wait a little bit to train. That's part of it. Another part that I think is very important is that the R&D time on applications, on algorithmic [progress], on post-training, and on getting products out is very fast, and you want to get things out very fast. I think this is a big enough deal that you don't want to wait six months before you even start fine-tuning your model, getting it ready for product, figuring out how it works. Just getting it to the point where it's very useful for consumers takes a lot of work even after the pre-training phase. So, yeah, I think three to six months seems reasonable in terms of duration. You could, of course, continue training for longer, but there are diminishing returns to that.

Jaime: [00:05:58] Right. So this would be one of the reasons to expect the 5x per year to come down a little, but not that much, right? Over the last five years or so, training durations for frontier runs have grown around 30% per year, if I remember correctly.

Yafah: [00:06:15] I think we got a bit higher than that.

Jaime: [00:06:17] Okay.

Yafah: [00:06:18] I think right now, if you take out growth in training durations, we end up with somewhere between a 2.5x and 3x increase in training compute each year. Which is still pretty substantial. And there is room to go faster if they want to, if they want to meet these deadlines. If they want to continue scaling models at the same pace, they could switch from using parts of clusters, which I think is relatively common right now, to using the entirety of a cluster. Grok 4 and Grok 3 could have been trained on a lot more GPUs; they could have used a lot more compute. I'm talking about these two because we know more about xAI right now, at least a bit more about xAI's compute than about what's going on at other companies. Although we have some idea of what's going on with OpenAI now, thanks to Stargate and some research we've done into that. But they could have pushed it and tried to keep up at 5x, and for at least a couple of years that is feasible.

Jaime: [00:07:34] But then like it slows down.

Yafah: [00:07:36] It slows down. I also don't think they're going to push it. I think we're probably going to see a slowdown. Now, this is complicated by the fact that we haven't actually seen... you should probably add another year or so onto my estimate, just because the time between a training run starting and a product actually coming out can be quite long. We believe that OpenAI has clusters right now that are far larger than the ones we believe they've trained with. But probably later this year or so, they'll have larger models coming out. Yeah.

Largest training run in 2030 & what does it imply?
Jaime: [00:08:15] Let's put numbers onto this. It is the end of the decade, what is the largest training run we have done?

Jaime: [00:08:26] Do you get to 1e29?

Yafah: [00:08:28] Do we get to 1e29? I think so.

Jaime: [00:08:30] For context here, 1e29 FLOP would be a training run that uses a thousand times more compute than current training runs. It would be like the gap between GPT-2 and GPT-4, which was roughly 10,000 times more compute, as a sense of scale. What do you get with that? What kind of AI do you get with that?
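
As a rough check on these orders of magnitude, here's a quick back-of-the-envelope in Python. The GPT-2, GPT-4, and current-frontier FLOP figures are rough public estimates, used here only for scale.

```python
import math

# Rough training-compute estimates (illustrative, order-of-magnitude only)
gpt2_flop = 1.5e21      # GPT-2, rough public estimate
gpt4_flop = 2e25        # GPT-4, rough public estimate
frontier_flop = 1e26    # assumed scale of current frontier runs
run_2030_flop = 1e29    # the hypothetical end-of-decade run

print(f"GPT-2 -> GPT-4 gap:   {gpt4_flop / gpt2_flop:.0e}x")          # ~1e4
print(f"Frontier -> 2030 run: {run_2030_flop / frontier_flop:.0e}x")  # ~1e3

# Years of scaling needed to cover a 1,000x gap at different growth rates:
for rate in (5.0, 2.5):
    years = math.log(1000) / math.log(rate)
    print(f"1,000x at {rate}x/yr takes ~{years:.1f} years")  # ~4.3 vs ~7.5
```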

Yafah: [00:08:52] What kind of AI do you get with that? This is a big question. One major thing I expect you to get is fairly competent agents: agents where simple tasks, computer-use tasks, are very consistently doable, very cheaply and very fast. I expect a lot fewer simple failures of reasoning and confusion. And this alone is going to be a huge deal. You also probably see some other advances. We see some ability to do reasoning that is sufficient to probably start making some novel discoveries in math, physics, other fields. Probably especially in fields that get less attention from humans, that have a few expert humans looking at them as opposed to a lot, just because of the amount of competition that exists. By the end of the decade I would not be surprised if you just don't have coders, if people are just not writing code.

Jaime: [00:09:57] No, I will be very surprised if you don't have [people writing code].

Yafah: [00:09:59] People might still be telling the AI what to do: designing algorithms, designing overall systems. But in terms of writing code, I mean, if you want to write code, that's easier. Sure. Go ahead. But, it's already sort of at the point where I barely write code. I do a lot of data analysis stuff, and I'll just have ChatGPT write all my code. I expect this to extend to larger tasks.

Jaime: [00:10:21] Okay, fair. There's this very big gap between automating the code writing and automating everything that an engineer does in the day-to-day job.

Yafah: [00:10:28] Yeah, I'm not going to say that all the engineers are going to be out of the job by the end of the decade. But getting to the point where AI can write all the code totally seems doable. Getting to the point where the code it writes won't have bugs, or it will be able to find and diagnose the bugs itself, will probably be doable. And agents that are able to do much more complicated extended tasks by the end of the decade certainly seems very feasible. Yeah.

Jaime: [00:11:02] Yeah. What else? I do expect them to be really good at solving and advancing scientific problems in math, physics, and other STEM fields where there is objective truth, so to speak. It seems pretty plausible to me that you'll be able to tell them, "Okay, I want to find a matrix with these characteristics," and it will just go off on its own, think for a couple of days, and then get back to you having cracked a hard problem that a human mathematician would take potentially weeks to solve. To make a bold prediction here, my modal story by the end of the decade: there has been one famous problem in either math or physics that an AI has basically solved. It has required the input of some smart mathematicians and humans to orient it in the right direction, but the actual bulk of the work was done by an AI.

Yafah: [00:11:58] Yeah. This definitely seems likely.

Jaime: [00:12:02] Which is crazy, right? Like the Riemann hypothesis solved by AI. Maybe not that specific one, but still something.

Yafah: [00:12:08] I would not be that surprised to see the Riemann hypothesis solved by AI in the next five years. That definitely seems very feasible. I mean, maybe it isn't; I'm not a mathematician. But math and a lot of these checkable, rigorous fields that we currently think of as involving one person thinking very hard, maybe not interacting with the world a huge amount: those sorts of things, it turns out, AI is particularly good at. Seeing AI make huge strides in math definitely seems super possible.

Impact on Software Engineering & other cognitive tasks
Jaime: [00:12:44] Give me some color. Where are the impactful applications of AI going to be?

Yafah: [00:12:49] Where are the impactful applications of AI going to be? Yeah, this is a good question.

Jaime: [00:12:56] You automate call centers, this is one that comes to mind.

Yafah: [00:13:00] Yeah. I mean, that's an easy one. That's happening today. It seems plausible you see a lot higher-quality technical products and programming; things that ordinarily require more dev time might just be a lot easier with AI, or automatic. Let's see. Do you have ones in mind here?

Jaime: [00:13:31] I mean, I have takes, in particular on how AI coders affect the world. It does seem likely that it helps you code up quick things: say you're organizing an event for the weekend, and you just let the AI build a fancy app for inviting people. That seems very easy; that's already happening today. I do still think that the bulk of commercial software is going to be developed under the management of experienced software engineers who are checking that things are mostly correct, that there aren't big bugs. People care a lot about security in this context. You don't want an unmaintainable slop of code that only the AI understands; that seems like something people are going to be quite averse to.

Yafah: [00:14:26] Oh, I think I disagree with you on this. Or, I think that the quality is going to be higher than human quality on the actual code.

Jaime: [00:14:35] I think it's going to be higher. But people are gonna be nervous about not understanding it.

Yafah: [00:14:39] I just think that they're not going to be nervous with their wallets. It's not going to be obvious which companies out there are really doing different practices. And if, in fact, the companies using AI are offering products that are similar or even better and charging a lot less, it just seems really likely that people will use those products.

Jaime: [00:15:11] Do I believe this? I could definitely see most code being written by AI. I do still expect most code to be reviewed by software engineers.

Yafah: [00:15:27] I think this depends. It might depend on the extent to which wages for software engineers react to this; they might go down a bunch, which makes review more feasible. And also on how much appetite there is for new software. If there's a lot of appetite for new software, that puts a powerful incentive on not hiring software engineers. It seems like we're already getting near the point where software engineers just aren't good enough to help by reviewing code.

Yafah: [00:16:06] I think it's just going to be an obviously pointless exercise for a lot of people. They'll have a few instances where they're like, "I found a mistake," and they'll look into it further and be like, "No, this was not a mistake, the AI was right." This will happen a bunch of times. I don't know, maybe reviewing will be so easy that they'll do it anyway, but it sounds really dispiriting to spend a bunch of time reviewing code that's much better than yours and won't have any bugs. If you assign someone to do this, they'll just say, "Yeah, it's good. I looked at it. Sure." Because they know it is. If AI can write good code on the first try and replace coders, I expect this to happen very quickly.

Yafah: [00:16:46] In general, diffusion seems to me like it's going to be extremely fast. Everyone already has ChatGPT on their phones. It'll just be front and center in the product. It'll be very easy to use, and a lot of the things that currently make it harder to use will get solved. It will be very obvious to a lot of people very soon after any given feature is released, maybe within a year or two, though possibly this will be measured in months: if an AI is able to do things, it will just be doing them, especially on the large scale of doing all of the coding, and even doing all of the coding without review. The roadblocks that people expect to exist don't seem to exist in practice so far.

Yafah: [00:17:33] We did a great piece on this. So far, it looks like diffusion of AI is going extraordinarily fast. ChatGPT was released in very late 2022, and it's now used by some large percent of the US population. They make quite a lot of money just from subscriptions, and it's only going to grow from there. The adoption of additional AI features often won't involve some separate app starting up. It'll just involve a feature getting added to ChatGPT, and they'll put out advertisements, and suddenly everyone will be aware of it, just like the next day.

Jaime: [00:18:17] And AI can also help you adopt it faster. If you don't have the right scaffold to just plug it into your work, maybe you just ask the AI to build that scaffold for you.

Yafah: [00:18:29] Yeah. I mean, it will help you adapt it to your work. If a lot of people aren't adopting a feature they could adopt, because the last time they tried it it didn't work, which I think is a real problem right now, it's not actually difficult for OpenAI to say: "Hey, ChatGPT, for everybody's accounts, review what they've been doing. If they've been doing anything you now have the capability of doing, let them know and give them a link." The problems of diffusion and integration all seem very quickly addressable. The harder problems, and the slowdowns that I think are very likely, mostly come on the infrastructure and training side in particular.

Jaime: [00:19:16] So, let's talk more about this AI we have. I think this AI is going to be really good at many entry-level jobs. It's an AI that you can use to automate call centers, absolutely. That you can use to draft first versions of contracts, absolutely. Write a lot of code. I do still think that people are going to have some hang-ups: "oh, I still want to have a human review what the AI is outputting." And this, you know, is not so much a diffusion thing.

Yafah: [00:19:51] I think they'll pay the human, and the human will have the AI do it and say it was good. And you won't be able to tell the difference. Even if maybe you'll still pay a human, the human won't need to do anything, and they won't, in practice, do a lot of work if they don't have to.

Jaime: [00:20:10] Well, then it's this kind of weird equilibrium in which you're hiring these people to just use the AI when you could just have used AI directly.

Yafah: [00:20:18] Yeah. I expect people to realize this is going on, also. And to just use the AI directly, or someone will launch a service which looks very similar to the other service and charges much less. And you'll find that, and you won't be able to tell the difference between the two services. The other, the one that charges much less, might be advertising a lot more. Or there are additional options to make this happen. There are some companies out there which will buy up existing companies that are in this state where AI could do all the work, and people are using AI to do all the work, but they're still paying people's salaries and they'll fire all the people; or reduce the number of people or the quality of the people they're hiring because they don't think it's necessary.

Jaime: [00:21:02] But they still hire people. What are they hiring these engineers for, in your world?

Yafah: [00:21:06] What are they hiring engineers for? I think there are a lot of things going on that aren't coding. Figuring out what products do, figuring out how to architect things in a way that's easy to maintain, to some extent. Figuring out how to test things, figuring out what sort of products make sense, what sort of systems make sense. Even at the level of not a final consumer product, but "what sort of things should we expose in our internal database?" or something. This seems pretty plausible. You still have human engineers who are doing this, who are thinking about what sort of things are feasible, deciding on things to test, having the AIs test them. All of these things seem plausible. When I talk to engineers, people who are programmers, a lot of the ones I know don't spend their time programming, especially the more senior ones. They spend their time testing things, talking to other teams, editing config files, is one thing I've heard. Things that are about figuring out how to tell the technology what to do, more than building the technology.

Jaime: [00:22:15] Okay, so, end of the decade, we have this AI – has been trained on a lot of compute. We're using it, and it can automate all coding. It can automate many of these tasks, which are kind of like low context, that an entry-level person could do. And most people, what they're doing and what their job is, at least in office contexts, are managing AI to produce the thing.

Yafah: [00:22:42] Yeah, this seems plausible to me. I mean, I don't think they're going to spend their time managing AI. I think they'll spend their time talking to other people, doing other things. The managing the AI bit will not take up too much of their time a lot of the time. Something I've noticed in AI use even now is that if I have to spend a lot of time talking to AI, then it probably means it can't quite do the thing I want it to do.

Jaime: [00:23:03] Yeah, where the management, to be clear, also involves a part where I tell the AI something and it just gives me a mockup for the website. And I look it over and say, "Oh, I want you to change bla bla".

Yafah: [00:23:16] That sort of thing, definitely, yeah. Okay, I think I've changed my mind. Managing AI in some variant of this, where you're interacting with it, does seem very possible. Yeah.

Jaime: [00:23:27] Okay.

Economic impacts near the end of the decade
Yafah: [00:23:28] What sort of economic impacts do you think we'll be seeing in terms of revenue or GDP growth in 2030 or beyond?

Jaime: [00:23:36] Here's the thing. Revenues right now are on the order of tens of billions of dollars for AI companies. They have been growing very fast, on the order of 3x a year. Tripling your revenues every year is incredibly hard to maintain, but it's not crazy to me that you could double your revenues every year from here until 2030. And that already gets you really big. If you have AI companies producing multiple hundreds of billions a year, that already leads you toward doubling the rates of economic growth that we see right now, which in modern economies fluctuate around something like 2% a year and have been stable like that over the last 70 years.
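
As a sanity check on that compounding claim, here is a minimal sketch. The ~$20B starting figure for 2025 is an assumption standing in for "tens of billions"; the doubling is the scenario Jaime describes, not a forecast.

```python
# Compounding "double every year" scenario (assumed $20B/yr starting point).
revenue = 20e9  # assumed AI industry revenue in 2025, dollars per year
for year in range(2025, 2031):
    print(f"{year}: ${revenue / 1e9:,.0f}B/yr")
    revenue *= 2  # doubling every year

# By 2030 this lands around $640B/yr: "multiple hundreds of billions a year".
```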

Yafah: [00:24:21] Right now, we see NVIDIA making 100 billion dollars a year already, and we're likely going to see much more than that. Not to mention the efforts to build the data centers to host these chips, or the investments by TSMC to build fabs to produce chips. And all of this is at the cutting edge of AI in a way the consumer products might not be. That is, we've already found this stuff is useful; now you have to start building for the next step, the next stage. And so you might see trillions at this point. It definitely seems plausible that you'll see trillions in yearly spending on things like compute, GPUs, fab investments, infrastructure, and data centers: projects that are already very, very large and could just keep getting larger. And this might show up in the economy even more than actual applications of AI by 2030, just because there's a decent amount of lag between when you invest and when you actually get the results. So the investment runs a little bit ahead of the current day.

Jaime: [00:25:40] Okay, so, end of the decade, we have AI that's producing hundreds of billions of dollars of value a year. I do argue that it's enough to already see growth rates accelerate and maybe see them go from 2% a year growth rates to 4% a year growth rates, which in my book is already quite massive.

Yafah: [00:26:01] I think it's plausible that GPU spending shows up in GDP; I'm not actually sure, but I think it's plausible. The secondary effects from the amount of infrastructure spending at this point are even more obvious in terms of the economics: hundreds of billions of dollars, trillions of dollars, substantial fractions of the US's GDP being invested in infrastructure. Assuming scaling is still continuing, which seems very likely, you'll see people building more fabs. The scale of planning and data center size will increase dramatically. Even if you see some slowdowns, the infrastructure spending seems likely to have effects on what shows up in the economy. The way I think about it is this: investors have a very reasonable uncertainty about what order of magnitude of money should be spent on AI. Should it be billions, hundreds of billions, trillions, tens of trillions? It's very unclear. They're uncertain about this, and they don't want to overshoot by several orders of magnitude; if you do, you've lost a very, very large amount of money. There have been other technologies in the past which scaled up and which investors had similar uncertainty about, like the internet with the dot-com bubble, and railways in the UK, where there was this very quick scale-up and lots of investment.

Yafah: [00:27:37] One way to handle this, in a way that doesn't have you massively overspend, is to take whatever results exist in the current day and project that things will continue at the current speed for another year or two of exponential growth. So you'll predict that NVIDIA might make between 3x and 10x more revenue than it does right now, and you'll invest based off that. And if you're investing based off that right now, you don't need a huge amount more fabs. On the other hand, this stops being the case if you're at the point where NVIDIA is making trillions of dollars, and hundreds of billions of dollars in revenue are going to companies like OpenAI. Then you're going to have to build more fabs, even if you forecast out only a few years, or a few generations of the technology. There's a complication here: at that point, the number of years increases, because the time between when you can invest and when you get the next generation is longer, since you have to do more complicated and larger infrastructure projects.

Jaime: [00:28:41] I think this gets at the core of why our economic modelling has been so aggressive. These kinds of effects are hard to build into an economic model that is built on the premise that scaling is going to continue.

Yafah: [00:28:54] What kind of effects are hard to build into this?

Jaime: [00:28:56] These effects where people are reluctant to invest and take a huge bet from the beginning, because they're nervous that maybe this thing is going to completely collapse.

Yafah: [00:29:08] With our prior work, GATE, this is my single biggest disagreement: its investment model. It assumes people are willing to invest in this way where they're evaluating probabilities and uncertainties. If I'm modelling the future empirically, I think what we've seen is better matched by assuming that, as long as scaling continues, people will be willing to invest some multiplier of the current revenue of AI. So if AI is making 100 billion, they'll be willing to invest a trillion, or something of the sort; if it's making 10 billion, they'll be willing to invest 100 billion. Something of this sort, with some details. Sometimes it's not revenue; to some extent, results can be compelling. Although I think that as we move into the stage where it's becoming, I don't want to say a more mature technology, but a technology that's having financial impact...

Jaime: [00:30:09] Let me introduce you to Yafah Math on investing. Yafah Math on investing tells us: the amount of investment that's going to flow into a field in a year is about 10x the amount of revenue that field is generating. And every 10x scale-up in AI infrastructure build-out is going to take one year longer. Is that a good summary?

Yafah: [00:30:30] Yeah. Every time you want to scale to the next 10x, it will take an extra year. So the first time it will take one year, the second time it'll take two years, and so on and so forth. These numbers are very approximate. But yeah, this is my model of how things work.
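
Here is a small sketch of "Yafah Math" as just stated. The rules are the ones from the conversation (investment runs at roughly 10x current revenue; the n-th 10x build-out takes n years); the starting revenue, starting year, and the assumption that each build-out delivers the next 10x of revenue are illustrative add-ons.

```python
# "Yafah Math" toy model: investment ~ 10x revenue; the n-th 10x scale-up
# takes n years. All values are illustrative assumptions.
revenue = 100e9   # assumed AI revenue at the start, dollars per year
year = 2026       # assumed starting year, purely for illustration

for step in range(1, 5):
    investment = 10 * revenue
    print(f"{year}: revenue ${revenue / 1e9:,.0f}B -> invest "
          f"${investment / 1e9:,.0f}B over a {step}-year build-out")
    year += step      # the n-th 10x takes n years
    revenue *= 10     # assume the build-out delivers the next 10x of revenue
```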

Jaime: [00:30:50] Yafah Math folks. You heard it first here.

Jaime: [00:30:52] Okay. So let's get back to our story so far. For the next five years, we expect that scaling continues. People are investing more. Revenues grow alongside that because you get to automate more and more tasks. And by the end of the decade this gets you to people having launched training runs that are a thousand times larger than what we have now. It leads you to revenues in the multiple hundreds of billions of dollars, which already have a measurable effect on economic growth rates.

Yafah: [00:31:28] Yeah. That seems right. I think I'm on board for this.

2030 bifurcation: Slow down or take off?
Jaime: [00:31:34] I think for me, here is where the story bifurcates, and I hesitate a bit on saying which way it goes. There is a world in which things just radically slow down, and it takes a very long time to keep growing. There's another in which you just keep automating more and more, and that accelerates growth rates, and those accelerated growth rates give you production that you can put back into AI.

Yafah: [00:32:00] I think there are two stages here. There's the infrastructure scaling-up phase, where it's obvious at every point in time that you want to scale up. There's enough money for it. You have some slowdown due to the infrastructure taking longer as it gets larger. It's unclear how long that lasts; to some extent, 2030 might even be aggressive for how long this lasts, and what it looks like at the end is unclear. But in our modal case, we're talking about this continuing till 2030, making a substantial but not very dramatic impact on GDP.

Yafah: [00:32:00] Then there's the second stage, where you potentially start seeing this explosive economic growth. And this feeds back into investment. Even though it's slow, even though the amount of investment is limited to one generation ahead, at this point you might not even be able to invest 10x more, because you might not have 10x more than that. My overall thought is that it's very reasonable, and there are numerous very good arguments, that if you get to the point where AI can automate all of the jobs, you get explosive growth. That seems totally right.

Jaime: [00:33:15] Let's be precise: what's explosive economic growth in this context?

Yafah: [00:33:21] At a minimum, 30% GDP growth per year, although our models often give much higher. I think if you automate all the jobs, absolutely you're getting 30% economic growth per year. For many years, or at least for five consecutive years, you get, on average, 30% economic growth. Quite possibly much faster. If you can automate all of the tasks, you'll get this explosive economic growth. And right now we can only automate a fairly small percentage of tasks. In terms of economically useful tasks, it's very hard to put a number on this, but probably more than 0.1% and less than 10%.

Jaime: [00:34:01] My rule of thumb here is: if you want to increase growth rates by an extra percentage point in a year, then in that year you need to automate about 1% of the tasks that people are doing. That seems to me approximately correct, and I do believe that by the end of the decade we're going to be living through a fast enough period of growth that this is going to be true.
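
A minimal sketch of this rule of thumb, under the stated assumptions (a 2% baseline growth rate, plus one percentage point of extra growth per 1% of tasks automated in a given year; the automation path below is hypothetical):

```python
# Jaime's rule of thumb: +1 percentage point of growth per 1% of tasks
# automated that year, on top of a ~2% baseline. The path is hypothetical.
baseline = 0.02
automated_per_year = {2027: 0.005, 2028: 0.01, 2029: 0.02, 2030: 0.03}

for year, share in automated_per_year.items():
    growth = baseline + share
    print(f"{year}: automate {share:.1%} of tasks -> ~{growth:.1%} GDP growth")
```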

Yafah: [00:34:31] I'm not sure if I... I have more uncertainty about this rule of thumb than I think you do. It doesn't seem unreasonable to me. But the thing I have uncertainty about is: right now some very small percentage of tasks is being automated, maybe less than 0.1%, I don't know. And if you get to 100%, you get explosive growth very fast. At some point in between, you start seeing less-than-explosive but still fairly substantial effects. I have a lot of uncertainty on where that point is. You seem to think you get substantial growth effects at a few percent being automated a year. This doesn't seem unreasonable to me. I could be persuaded that the number is much higher, that the amount you need before this happens is much higher, especially if you want to maintain growth. But fundamentally...

Jaime: [00:35:23] Yeah. To be clear, if you want to maintain growth.

Yafah: [00:35:26] You have to keep automating.

Jaime: [00:35:27] You have to keep automating.

Yafah: [00:35:28] Yeah. And if there's even a slight speed-up in this, then not super long after it, you'll see all of them. The question is...

Jaime: [00:35:40] This part, that not long after that you will see all of them automated, I will actually disagree with you on.

Yafah: [00:35:47] Not super long after, you know, a decade or something.

Physical vs cognitive automation
Jaime: [00:35:49] Yeah. Okay. A decade does seem like enough time. But here's the subtlety I want to point to: automating cognitive tasks, I feel, is going to be way easier than automating tasks that require a physical input. And this is a big hurdle that we're going to have to face somewhere in the next decade.

Yafah: [00:36:10] What's my thought on this? I think I'm definitely much more skeptical of this claim than you are, or at least pretty skeptical. There are a few things going on with automating physical tasks, and maybe you can point to which ones you think are going to be particularly difficult. There's the difficulty of making robots that can do tasks. There's the cost of producing those robots. And then there's the actual intelligence necessary to do those tasks.

Jaime: [00:36:41] I think the former ones are going to be the bottleneck; the latter, not so much. The cognitive part is like: yep, you're going to have the software, and in principle, if you have the right data for it, you're going to be able to automate it. Still, in order to match the productive capacity of billions of humans, you're going to need billions of robots out there. And those take some time to manufacture.

Yafah: [00:37:06] It's not clear to me how much time those will take to manufacture when you're at the point where this will make you a trillion dollars. It's also not clear to me how obvious it will be, and how tight the cycle time on this will be. But right now, it certainly seems to be the case that humanoid robots that are worse than humans cost an amount of money where the interest on them would substantially exceed the minimum wage. So even if you had the software problem solved right now, it doesn't seem like these robots would be implemented on a massive scale until they become much cheaper. There are a few things I think are important to point out here. One is that even if you aren't at the point where you're actually replacing people with robots, you could be at the point where you're replacing skilled labor with unskilled labor: people holding up their phones, and it's telling them, or highlighting, exactly what they should do. Or maybe they're wearing a headset, although that feels less concrete.

Jaime: [00:38:12] Maybe Mark Zuckerberg was right.

Yafah: [00:38:13] Yeah, maybe Mark Zuckerberg was right. We'll see augmented reality. You could also just imagine people holding up their phones, and it uses the camera and does an animation showing them exactly what to do; or monitors their hand movements with a body cam and tells them if they're doing something wrong. There are a lot of things you can imagine, of varying degrees of complexity, to turn potentially highly skilled blue-collar work, or pink-collar work, into unskilled labor in this way. Even if you don't have the robot bodies, you'll have a lot more people who can work on things. It's unclear how much this unlocks, but it seems very possible to me that it unlocks a much, much larger workforce. If you have AI monitoring people, things like quality control are a lot easier: you have AI seeing everything at once. If you've solved all of these software problems, humans make decent robots in a lot of ways. They might not enjoy it, and the wages you'll have to pay them will vary. I assume for a lot of people this sounds like a very unpleasant way to live. On the other hand, there are a lot of people who don't have access to jobs, much less jobs that pay pretty well. And so there are probably going to be a decent number, or a large portion, of people who are willing to take these sorts of jobs.

Yafah: [00:39:44] And the amount of education and access to resources people will need for these things to be outsourced to them will go down by a lot. This will allow, I think, for a lot more manufacturing output in a lot of places, just by being able to utilize human workforces at a much higher level. It will also drive some combination of outsourcing and falling wages, unless something is done policy-wise to prevent this. This also happens for white-collar jobs, even earlier, and for knowledge-worker jobs, where you still need a human to do it, you need a human to check over it, but there's not actually a reason to hire the human who's good at it. You could just hire the human who says they'll do it but really uses AI, and the quality of work is the same and you can't really distinguish. You ask your friends what their experience with this person was like, and they say, "yeah, it was good," because the fact that the person's not actually skilled at their job doesn't really matter. And this means you'll see falling wages, because anyone will be able to do it and they'll just undercut each other. It's unclear how long this will take. But the de-skilling of jobs probably happens in a lot of areas before you see full automation of jobs, especially in blue-collar jobs, I expect. This will have a large amount of economic effects. It's unclear to me how much you unlock by de-skilling blue-collar jobs.

Jaime: [00:41:14] Yeah. To me, this doesn't seem as big of a deal as eventually you have AI that does the thing, and you are able to scale that up to a very large degree.

Yafah: [00:41:24] One thing that's unclear to me in particular is how much cheaper and easier to scale up manufacturing gets. Let's say you can build your robots. How much cheaper and easier does this become if you suddenly have a billion, or several billion, people who are capable of doing high-skilled technical jobs because of AI oversight? It might be that this substantially increases the ability to produce something like robots, and that you see a substantial increase in manufacturing output from this, which then potentially leads to robots, because you hit limits on the number of people. But it's very unclear to me how far you can get on de-skilling human labor alone. And it seems very possible to me that you can get to the point where robots become much cheaper and more numerous.

Jaime: [00:42:19] My stance on this is that I don't believe it has that large an effect, though I do think it makes enough of an effect to be substantial. Potentially, I could see you growing your production of robots by 50% through reallocating human labor. I don't think you get much more than that, because of two things. The first is that it's just hard to reallocate all of that labor to this work; it's hard to coordinate. The second is that you're still going to be bottlenecked by the capital you have. You only have so many machines. Even if you suddenly have a surplus of skilled operators, you still have the same machines; you have to build more machines. Eventually, this is the kind of thing where the economy kicks into gear and you get to build all of the machines and you continue growing. But I don't think this is going to be the primary effect that dominates.

Yafah: [00:43:17] It's not clear to me how long it takes, if you've solved the software problems, everyone understands you've solved them, you've demonstrated this, and it's just a matter of selling cheaper machines. Also, of course, at this point, possibly the main effect is actually going to be from automated R&D, according to me; I'm not sure if you agree with this. AIs that are able to optimize and design cheaper versions of these machines seem to have a potentially very substantial impact at this point.

Jaime: [00:43:49] I don't think I believe that. I think that the primary effect is still going to be driven by: you build a new version of the machine, you see how operators use it, and then you realize these things that you can do to make it better. And you iterate on that. And this takes a bit.

Yafah: [00:44:06] At this point, you don't have to iterate on one version. You can have 100. Possibly this will look like startups or something; you could have a lot of people try a lot of ideas at once. There aren't very many limiters on that.

Jaime: [00:44:18] Yeah, fair enough.

Yafah: [00:44:19] And it seems very plausible to me that that makes a huge difference. Just the scale at which you can iterate, the scale at which you can try new things, might be very substantial.

Timelines and impact of full cognitive automation
Jaime: [00:44:37] Let's maybe get back to the bigger picture of what we're trying to figure out here. At which point do you expect we're going to have AI that can basically automate all cognitive tasks and do them as cheaply as a human?

Yafah: [00:44:50] When I'm asked this question, I typically say 2035. I do want to note that my 90% confidence interval on this looks something like 2027 to 2050, or 2045. I don't want people to mistake this for definitive; I take much faster and much slower, even some very slow, timelines pretty seriously. But when I'm asked for this, I say 2035.

Jaime: [00:45:23] I did a little bit of thinking on this, not much, and I arrived at a modal point of 2034.

Yafah: [00:45:31] Yeah. Okay, so we're pretty much in agreement here.

Jaime: [00:45:33] The way I arrived at 2034 or so is very crude, but I'm thinking: okay, by the end of the decade we're training models with a thousand times more compute. That's going to be a gap as large as the gap between GPT-2 and GPT-4. And I think about what you'd expect that model to be able to do, and I expect that model to be really impressive. Again, I do think it'll be solving open problems in physics and mathematics; and I do actually believe, in the modal world, in it automating all the code that's being written by humans, just the code-writing part. That seems correct.

Jaime: [00:46:13] I do also expect there's going to be this long tail of cognitive tasks that are pretty hard to automate, where humans have been particularly well optimized: things like fine motor control, keeping track of the strategic situation, keeping coherence and agentic-ness. These seem to me like they might require a little bit more. And is that little bit more another 10,000x gap in compute? No, I don't think so. Maybe it's half of that. Then, given that we expect infrastructure build-out to slow down, maybe by 2034 we will have models trained on 100 times more compute than the end-of-decade models.

Yafah: [00:47:02] I think I'm less certain than you about the specific tasks. Fine motor control, definitely. The other cognitive ones I'm maybe more skeptical of, just from the fact that, in the modal world, I don't expect AI to be able to do all cognitive tasks until 2035.

Yafah: [00:47:23] I think I did a similar amount of effort to get this number as you did.

Jaime: [00:47:28] I wouldn't want people to over-update on the specific things I said. For me, the more important thing is that there are going to be some cognitive tasks... I wouldn't be able to say which ones...

Yafah: [00:47:39] Which take a little bit longer than the others. Yeah, absolutely.

Jaime: [00:47:42] Yeah, that seems about right.

Jaime: [00:47:44] So let's talk about what this buys you. In this modal world, where things just scale up and everything goes kind of according to plan, if you get AI that can do all cognitive tasks by 2034, 2035, what does that get you in terms of impact on the world and what AI can do? For me, I think this has a large impact. I think this can already get you to 10% growth a year, in a way that can be sustained over a few years. That doesn't seem that insane to me. If you want to go further than that, I actually start to believe that you're going to need the robots, and the robots might take a little bit longer to get built.

Yafah: [00:48:33] Yeah, I think I'm less confident than you are on this. In particular, there are a lot of things that cognitive labor might unlock, like making robots much cheaper. It's very unclear to me how much can be done by just running a large number of experiments in parallel. If experiments give a few bits of information each time, you could just run a million experiments at a time, and that's the same as doing iterations six or seven times faster, however fast you want to go. And if AI is making hundreds of billions of dollars, the thing I just described might only cost you tens of billions, maybe 100 billion. It's a huge investment, but it's not 3x your current revenue. And if you're pretty convinced it would work, it seems like a profitable thing to do.
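
The "six or seven times faster" figure can be sanity-checked with a rough information-theoretic sketch. The assumptions are loose: each adaptive experiment yields about three bits, and one exhaustive batch of N parallel experiments can resolve about log2(N) bits of an adaptive search, which is a generous best case.

```python
import math

# Loose information-theoretic sketch of parallel experimentation.
bits_per_experiment = 3          # assumed bits gained per adaptive experiment
parallel_experiments = 1_000_000 # experiments run in one parallel batch

# Best case: one batch of N experiments resolves ~log2(N) bits of an
# adaptive search (like running every branch of a binary search at once).
bits_per_batch = math.log2(parallel_experiments)   # ~19.9 bits
speedup = bits_per_batch / bits_per_experiment     # ~6.6

print(f"One batch of 1M experiments ~ {bits_per_batch:.0f} bits")
print(f"Equivalent to ~{speedup:.1f} sequential rounds per batch")
```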

Jaime: [00:49:20] But it's still hard. Even if you've only automated cognitive tasks, if you don't yet have the robots: we did some estimates of how many tasks are "remote only," so to speak, and we got about 30%. 30% of the time people in the US spend working today is collectively spent on remote-only tasks that could be automated with just brains, without robots. Which is a fair amount, but it's not that much. It's something that can get you 30% growth in total, automated progressively over, say, a decade; maybe that gets you up to 10% growth a year. But I don't think it can get you to 30% growth a year.
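
Combining this 30% figure with the earlier rule of thumb gives a quick back-of-the-envelope. The ten-year rollout is an assumption, and the gap up to 10%/yr would have to come from multipliers beyond the raw task share, such as cheaper cognitive labor and faster R&D.

```python
# Back-of-the-envelope: remote-only tasks and the 1%-of-tasks -> +1pp rule.
remote_share = 0.30   # assumed share of US work time that is remote-only
rollout_years = 10    # assumed automation rollout period

extra_growth = remote_share / rollout_years
print(f"~{extra_growth:.1%} extra growth per year from the raw task share")
# ~3.0%/yr on top of baseline; reaching ~10%/yr needs further multipliers.
```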

Yafah: [00:50:08] We don't really live in a world where doubling the number of top-1% researchers gives you very, very dramatic gains, to the extent that it might be possible to do that. We put resources into training better researchers, but we're not willing to give up 10% of our economy to double the top 1%, which I might expect us to do if it were that effective.

Jaime: [00:50:33] I think the key thing we should be discussing here is whether we should expect, by default, AI to be different in kind and have different scaling properties here. If we expect that for humans, even putting twice as much labor into R&D gets you only very modest returns, which I do totally believe, why will AI be different?

Yafah: [00:50:55] Yeah. A lot of the answer here is that you're plausibly putting in a lot more than 2x; you're plausibly putting in 100x or 1000x, just because of inference. A lot more R&D. We currently see this: once AI can automate a task and it gets automated, especially cognitive tasks, the amount of it done increases dramatically. And even if this is not something you can keep increasing each year, a one-time 1000x in the number of researchers seems to me very plausible. And depending on how you follow the numbers, you might actually get a very, very dramatic impact from this. Even if you don't get explosive economic growth, this could still easily give you many years added to human lifespans. This could give you wild technologies that just aren't of the explosive-economic-growth type. Dario Amodei talks about basically every cancer getting solved, basically every disease getting solved, a doubling of human lifespan, things like this. Even without the insane economic growth, you might start seeing things like that, which are a very, very large deal. Also, even without explosive economic growth, the amount of economic growth will still seem really fast and intense to everyone. Even at 10%, that's just a lot.

Jaime: [00:52:26] I definitely do not have a rigorous argument against it. Perhaps an intuition we can give is that there seems to be a limit to how smart a choice you can make when you're designing an experiment. Eventually, when you have a million scientists working really hard on coming up with the optimal next design to try, this bottoms out. You eventually arrive at an idea that other people can look at and say: "oh yeah, this seems like a good idea." I don't expect it to be super brilliant ideas that are hard to understand.

Yafah: [00:53:02] So there are a few models here. One is that you can't really increase the number of experiments you're doing by a lot, and marginal scientists improve the quality of the experiments. Another model is that deciding what experiment to do is maybe not actually that hard. I think each is true to some extent; these are extreme models. But you can see AI increasing just the sheer number of experiments you're trying. The world I imagine is not one where a million people work on one experiment, but one where a million AIs are doing a million experiments, and we're able to deploy capital at a much larger scale.

Yafah: [00:53:46] Arguments against this are of the form: then why aren't we training more scientists right now, if it would be so effective? And this is a very valid point, but plausibly there are much higher investment costs in the people running the experiments; all of the labor involved in experiments is hard.

Yafah: [00:54:09] For lab experiments, you need people who are trained lab technicians and such, and if you're de-skilling those jobs, that's also a very huge deal. It seems very plausible that just automating cognitive labor drops the cost enough to let you substantially increase the number of experiments you're doing. More in software than in some other areas, but there are a lot of other places where it seems plausible too. Also, in a world where you're getting much wealthier and you've suddenly discovered a lot of labor, there are just a lot more people who are unlocked as potential workers. I would be pretty surprised if this doesn't speed up technology substantially; whether it's enough to give you explosive economic growth on its own is less clear to me.

Yafah: [00:54:59] You could see a 1000x in cognitive labor on scientific tasks very quickly. And this could very well look like an increase in scientific progress and general R&D that looks like skipping ahead a decade or two.

Jaime: [00:55:20] Wait wait wait. Say that again. What gets you to skip a decade or two?

Yafah: [00:55:25] If you can add 1000x the number of researchers you currently have, you get 1000x...

Jaime: [00:55:30] In a year, you get one decade increase. That seems wild to me.

Yafah: [00:55:36] Do I want to walk back this claim? Maybe my actual take is this: it will vary a lot by field. There will be some fields where it's slower and some where it's faster. The fields where this has a huge impact will have an outsized impact on the world; they will be the most noticed and the most impactful ones. And there will be some where it's less than a decade, and maybe the median, or the average, is less than a decade. It's three years, whatever. But there might be enough instances where it looks more like a decade.

Yafah: [00:56:06] I'm not sure, actually; this is much more gut-based than numbers-based, to some extent. I think we both basically agree on a substantial jump ahead. Three years on medical science is pretty large, especially because I think you keep seeing this increase in speed as the AIs get to do more experiments. And maybe you see those three years in six months. From the perspective of someone watching this happen, it will feel truly wild, I think.

Jaime: [00:56:45] I don't think it will feel truly wild. The way I like thinking about this is in terms of economic growth. If you have a world that's moving 3x faster, both on the scientific side and on the economic side, that's a growth rate of something like 6 to 10% a year, which seems like something you could totally get with AI. I think in my modal world in 2034, we get to this 10% growth a year, which is wild.

Yafah: [00:57:18] You're moving three times faster. Since 1995, it's been 30 years. So in the next ten years, you would get the same level of technological change that you got from 1995 to now.

Jaime: [00:57:31] Yeah, you get the smartphone, you get these fancy new vaccines that we have. You get all of this AI progress.

Yafah: [00:57:41] Significant advances in the treatment of HIV that substantially changed how it impacts the world. I mean, maybe we'll get used to it and we'll think: "oh, this is just the speed technology increases at". But at first, this will look like things are moving very fast.
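The arithmetic here is simple, taking the 3x figure at face value: frontier growth of roughly $g \approx 2\text{–}3\%$ per year tripled gives $3g \approx 6\text{–}9\%$, in line with the 6 to 10% range above; and ten years at triple speed delivers $3 \times 10 = 30$ years' worth of baseline change, hence 1995-to-now compressed into a decade.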

Jaime: [00:57:58] Yeah. And I don't want to overly litigate this speed because here the point I'm making is about the speed that you get at the beginning and without any robot automation.

Yafah: [00:58:08] We think that without robot automation you get an amount of speed-up which looks like a lot. It's not the extremely wild outcome that we'll talk about in a bit. Also, I'm not sure how long you keep this up for a lot of technologies. I think you probably get a boost and then it slows down a bit as the impact of additional thought diminishes. At this point it's unclear; maybe you do keep it up for a while. I mean, there's a large amount of uncertainty around all of this.

Yafah: [00:58:37] Also, I think just in terms of impacts on the world I expect the impact on the labor market and on people's jobs and livelihoods to have secondary effects. Even at this point...

Jaime: [00:58:50] Explain that.

Yafah: [00:58:51] I expect high unemployment and incredibly high deskilling: a lot of jobs that you might make a lot of money from, you either don't make money from anymore or they don't exist. When we had a substantial spike in unemployment during Covid, we saw a several-trillion-dollar stimulus package passed in a bipartisan effort, almost overnight. The ability of the United States government, and the American people in general, to respond to labor changes and to these sorts of situations is impressive and, honestly, quite rapid. And the incentive to do so and the desire to do so are, I think, very much there.

Yafah: [00:59:41] I think that you will see very large societal reactions to AI. This is before 2030, when this becomes a dominating issue for elections. I expect all parts of AI to get a lot more attention just because they're all associated, whether it be existential risk, or automation in particular, or the infrastructure costs, or the effects on the environment. I expect all of this to get a lot more attention, a lot of policy to happen, and for this to affect everything a lot. In terms of what I expect: if I think about what the big AI news is in 2030, I'm going to say the big AI news is that now everyone is talking about AI a lot. They're talking about the significant technological strides that have been made, and also, probably, a significant crime spike just from unemployment; it just happens. There's been a lot of turmoil, a lot of chaos, probably the government, and people, attempting really dramatic things to deal with this. And this will be one of the most salient things about the world. And one of the most...

Jaime: [01:01:12] Seems about right.

Yafah: [01:01:13] Yeah. Definitely by 2035, probably by 2030 in our modal case, I think you start seeing this. One thing that's unclear is how fast you get the unemployment effects. This has to do with the extent to which different tasks are correlated on a given job. If they turn out to be pretty correlated, jobs go from no AI to fully automated very fast, because any given task is going to go from no AI to automated very fast, and it seems pretty likely that you see a lot of jobs automated very fast. Even if they're not correlated, you'll still see a lot of jobs automated pretty fast, just slightly slower. And the secondary effects of this sort of thing are going to be extremely substantial. And maybe one of the...

Jaime: [01:02:05] It's a big deal. It's going to be a big social thing.

Yafah: [01:02:08] Yeah, it's going to be very hard for a lot of people, and hopefully, if the economy is doing well and technology is doing well, there are things that can be done to actually make this a good world for people and make sure people benefit both in the short term and the long term. I'm optimistic that this will be possible, though it seems very likely it's going to be a painful time for a lot of people regardless.

Jaime: [01:02:35] Yeah. God, yeah. God, yeah.

Returns to intelligence
Jaime: [01:02:37] Okay, let's go back to a summary of where we are with our timeline and where we are standing: We have described this world in which AI keeps being scaled up over the next 4–5 years. We reach a lot of automation, there's a lot of unemployment because of that, and this already has measurable economic effects. AI keeps moving; infrastructure keeps being scaled up at a slower rate after that. So that in our modal world by 2035, we have AI that's really, really competent, that can basically automate all cognitive tasks that a human can do. Clearly this is already enough to get you to 10% per year growth, which is a 3–5x increase over the rate of growth that we're seeing today in the US. This is already a really big deal. If you want to go beyond that, for me the robot question starts becoming super salient. When do you start building robots in this timeline? For me, it's a really important question.

Yafah: [01:03:46] I think what is maybe more important to me is the degree to which returns to intelligence continue. I think this is a bigger question to me. Maybe all of these best guesses we have apply to humans right now, but do they apply if humans can get way smarter? I really don't know.

Yafah: [01:04:13] I think it's very reasonable for someone to say: "Yeah, sure. Maybe the top 1% of humans isn't a huge deal now, but imagine the top 0.000001% of humans." You can imagine that you can effectively push someone a standard deviation smarter, and they have maybe a substantially larger impact. And you're not limited in how many such people you can get, because they're AIs. I think this is my uncertainty and question, more so than the speed at which you can get robots.

Jaime: [01:04:45] What do you think about this: What happens if you replace the human population with Von Neumann? If you Von Neumann the world. I think this has an effect; I do think that you get a speed-up from that. And this compounds with the effect of suddenly having access to much more labor. But I do still think that, in my modal world, which is very uncertain, this is the kind of thing that helps you move maybe twice as fast, not something that helps you move 100 times as fast.

Yafah: [01:05:22] I think I agree with you on this. The extent to which this feeds back into itself and lets you scale up faster is a big question for me here. Our current thoughts on algorithmic progress are the main question of what happens here.

Jaime: [01:05:40] In my modal world, algorithmic progress is super tightly correlated with compute.

Yafah: [01:05:45] Yes, I think we both agree on this. I have a lot of uncertainty. It's a very important uncertainty. But yeah, in my modal world, or the world we're discussing here, algorithmic progress is a result of being able to run a lot of experiments...

Jaime: [01:06:00] And specifically experiments at large scale.

Yafah: [01:06:03] At sufficiently large scale, possibly growing scale, and possibly quite a lot of experiments, running them fast so you can iterate. There's a lot of support for this. If you read explanations from the top people in ML (machine learning) as to how they do what they do, or as to why something works, it definitely seems very plausible that this is the case, and that even making a bunch more AI researchers who are smart doesn't actually have a huge impact compared to being able to run experiments. At which point, this means that when your growth in compute has slowed down to 1/10 of its current speed, algorithmic progress will also slow down to 1/10 of its current speed. So you see this sort of magnification of the impact of compute, and importantly, you overall see a lot slower growth.
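A minimal way to write down this magnification, as a toy model rather than anything estimated here: let effective compute be physical compute times an algorithmic multiplier, $C_{\text{eff}} = C \cdot A$. If algorithmic progress is driven by experiments at scale, then roughly $\frac{d \log A}{dt} = k \cdot \frac{d \log C}{dt}$ for some coupling $k > 0$, so $\frac{d \log C_{\text{eff}}}{dt} = (1 + k) \cdot \frac{d \log C}{dt}$. Algorithms amplify compute growth by a factor $(1+k)$, but they also inherit its slowdown: cut compute growth to a tenth, and effective-compute growth falls to a tenth as well.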

Yafah: [01:06:55] If this isn't true, I will note, algorithmic progress takes over as the most important thing by far, because compute scaling has slowed down. This has a lot of implications for who in the world you expect to have a lot of compute or power: it just comes down to this bunch of researchers, and being able to invest a lot of money isn't as important. But I think we both agree that in the modal world, it seems really likely that the vast majority of algorithmic progress is experiment-related, tied to having a lot more...

Jaime: [01:07:21] Yeah, well, I mean even in this world, algorithmic progress is still very tightly correlated with like, inference compute.

Yafah: [01:07:28] Yeah. Right now it's definitely... If you talk to people about why they don't do some research about something, it's often that they're compute-bottlenecked. It's a common complaint by academic researchers.

Jaime: [01:07:42] And I'm getting at the point of... In this world where algorithmic progress is not tightly correlated with the scale of the experiments, you still need this large workforce of AI scientists to do algorithmic progress for you.

Yafah: [01:07:58] Sorry, because?

Jaime: [01:08:00] Because AI scientists take compute to run. If you have a thousand times more compute than other factions, you're going to be able to translate that into a lot more algorithmic progress.

Yafah: [01:08:13] I do think that in the world in which you don't actually need experimental compute, or it doesn't grow that fast, or you don't need much: I think once you can automate AI scientists, you get wild superintelligence basically the day afterward. This is the world where AI 2027's ending looks basically correct. Even if it's the case that compute is still important, within a year of that you have a world which looks wildly sci-fi compared to the present. And I don't want to predict anything at that point.

Three cruxes after 2035 (Robots, technology & intelligence)
Jaime: [01:08:51] What I identify as the three key things that we will want to understand in our modal world, to see how fast things are going to be moving around or after 2035: (1) How quickly are we manufacturing robots? (2) What are the returns to scale from further scientific labor? (3) What are the returns to intelligence, for an AI that's substantially smarter than humans?

Yafah: [01:09:22] Yep. I think those are the three too. If we're talking about the modal world, where we're not accounting for politics and responses, this is what happens. It's useful to understand what happens if you don't do interventions. People are going to do interventions, but it's useful to have a baseline so people can understand what they're intervening in favor of or against. Absolutely, those are the main three things I would focus on as important questions to answer.

Jaime: [01:09:54] If I had to venture a guess about 2035 and beyond? I'm more on the side of skepticism about these very fast returns to scale in science and returns to intelligence. I will freely admit that a lot of this is rooted in guesswork; it's hard to make guesses about, but I will still say it. And then the robot question, for me, is a question of foresight: how quickly investors become convinced that this is going to be the next big thing, and start investing early enough to already have the robots when AI is good enough to use them.

Yafah: [01:10:40] I think my take here is that it seems very plausible that by the time this is a major question, the revenue from AI will be so substantial that some AI company will not even need to spend a very large fraction of its money on this. If they think it's plausible, it'll just be a small budgetary item compared to what building them in bulk will be. But say it's a $100 billion investment, and say that to be efficient you need $100,000 per robot body, which seems pretty likely. What does this mean? With $100 billion, you get a million robots. That's not as many robots as you'd like. It's a fair point.

Jaime: [01:11:28] Robots are expensive.

Yafah: [01:11:29] Even if they're $100,000. Right now, my impression is it's $100,000 to $1 million for a humanoid robot that's not as good as a human. Yeah, you do need to get that down. When we're talking about AI, it's very easy to get used to things 10x-ing or 3x-ing or improving at this very fast pace; this is how we're used to thinking about it. I've said before that if something's not improving at at least 35% of the speed of Moore's Law per year, then why do I care about it? It's not moving. Most things don't move at this speed. AI is unusual. And expecting that something that might cost $1 million now will cost less than $100,000 in not that many years is a big ask. It requires the R&D being done by AI to have a more substantial effect than we might expect. And it does seem reasonable that you need to lower the cost more than I would have expected to make robots better than human labor at these things.

Jaime: [01:12:52] And that's a big deal. I think this can substantially slow you down. One intuition pump here is to look at how many cars we make a year: worldwide we make something like 100 million cars a year. Obviously robots at this point are much more useful than a car; you're going to have a much stronger incentive to make them. But the point, to me: it makes a big difference when you realize that this is going to be the case and start preparing for it. If you just want to go from now to producing 100 million robots a year, this might be equivalent to cars.

Yafah: [01:13:29] Cars don't sound like a great comparison here, just because the number of cars you make is limited by the number of people who want them, and once you get to that point... and the features and everything.

Yafah: [01:13:39] It is the case that going from not making much of something to making even 100,000, or tens of millions, is pretty hard. But let's say you're taking it seriously: your AI is a huge deal, and you spend $10 trillion on robots at $100,000 each. Then you're getting to the point where that's 100 million robots. That's a big deal.

Jaime: [01:14:09] $10 trillion in a year, even in 2035, with AI being like this big of a deal is still a big...

Yafah: [01:14:16] I mean, the thing that this really looks like is: you spend $10 billion, then $100 billion, then $1 trillion, as fast as you can. Yeah, spending $10 trillion at the point we're currently discussing is still a big deal, but it might not be by then. You test it out with $100 billion first, which is not that much, and you start making a bunch of money. It's also sort of unclear how many robots you need before they automate and lower the cost of doing all of this.
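The ladder here is simple division, assuming the flat $100,000 per robot floated above: $N = B / c$, so $\$10^{11} / \$10^{5} = 10^{6}$ robots for $100 billion, $10^{7}$ for $1 trillion, and $10^{8}$, i.e. 100 million robots, for $10 trillion. If each 10x step in spending takes about a year to justify and execute, the ramp from the first $100 billion to $10 trillion takes roughly two to three years, which is where the three-year figure that comes up shortly comes from.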

Jaime: [01:14:45] Yeah. Well, I think you need... One intuition here is if you want the robots to have this measurable effect on how fast things are going, you need to have at least as much robot labor as human labor.

Yafah: [01:15:00] I'm specifically talking about the speed at which robots are being built: at least as much robot labor as human labor involved in the construction. And it's sort of unclear to me how much human labor that actually is... It's not obvious that you need more than a million, or more than 100,000.

Jaime: [01:15:18] ...100,000 humans working. Can you say the statement fully?

Yafah: [01:15:22] That at the point of 100,000 robots, you might be able to make robots substantially faster by having the robots make themselves. If this is the case, you spend $100 billion on this, you see it works, and it's very easy for you to then spend $1 trillion on this, and then $10 trillion. Now you're at the point where it's 100 million robots. It's a big deal.

Jaime: [01:15:49] This still has taken you, like three years to get there or something.

Yafah: [01:15:52] This still has taken three years. I definitely think three years from the point at which you first see 10,000 robots is totally reasonable. There are also a lot of details here: the robots are not currently as good as humans at things, even if you're willing to spend more than $100,000, so there's a lot that has to go right. But three years after the robots, you're at a point where maybe traditional economics is no longer a good way of modelling this.

Jaime: [01:16:28] Correct. So in our modal timeline, to put this all together: 2035, you get full automation of cognitive labor and people start building robots. Then it takes three years for this operation to fully pan out, and by 2038 you have this army of robot workers, numbering potentially in the hundreds of millions to billions. Is that the kind of world we're imagining?

What happens in 2040?
Yafah: [01:16:56] This is by 2038? And then I would like to note by 2040, we are at the point where my forecasting fails. It goes bananas. Things are wildly sci-fi, I don't know what's happening. Don't assume that just because it takes three years to get to a third of the labor force or there's 100 million or a large number of robots... The next stage after this might look very intense.

Jaime: [01:17:22] That seems correct. What comes after the world goes bananas? How quickly do you go from 2038, when you have all of these billions of robots you've built, to building a Dyson sphere and capturing all the energy output of the sun? My claim is that...

Yafah: [01:17:43] Five years. Less?

Jaime: [01:17:44] No, I don't think so. I think more, actually. I think this is very hard to think about; in my modal world it's probably more than that. One way that I think about this is: roughly, right now, if you wanted to capture the energy output of the sun, that would be like 14 orders of magnitude greater than the energy output of the Earth right now.

Yafah: [01:18:11] Okay.

Jaime: [01:18:12] If you wanted to get there, and we were naively growing energy output as we're doing now, at roughly 2–3% per year, it would take you a thousand years to get there. And can you go ten times faster? Absolutely. We have already established that you can go ten times faster than that.

Jaime: [01:18:33] Can you go 100 times faster than that? Can you get your growth rates 100 times faster? I don't know. In my modal world you hit something along the way; there's something there that slows you down.
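For reference, the arithmetic behind these figures, taking the stated 14 orders of magnitude and a growth rate $g$ at face value: the time to grow $10^{14}$-fold is $t = 14 \ln 10 / \ln(1+g)$. At $g = 2.5\%$ that is $t \approx 32.2 / 0.0247 \approx 1{,}300$ years; ten times faster ($g = 25\%$) gives $t \approx 145$ years; a hundred times faster ($g = 250\%$) gives $t \approx 26$ years.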

Yafah: [01:18:52] I don't know; any roadblocks you might hit on the way sound like the sort of thing that could be solved. I think my actual take here goes back to something we were talking about a bit earlier, where the question is: how much does superintelligence get you? Because the prior "you're actually not able to scale that fast" arguments fail again at this point, and suddenly you're able to scale compute very fast. Right now you have this scale-up that slowly slows down as infrastructure becomes a problem; then you eventually solve enough problems that you unlock these robots, and infrastructure stops being a problem. Or at least the speed at which you can scale suddenly becomes this more intense exponential again, one that isn't as linked to prior economic growth. You spend four years doing AI iteration and then it does something with nanomachines, I don't know. I'm just like: man, you can get a lot of progress. Maybe this is wrong. I think your argument is a reasonable baseline as well.

Yafah: [01:20:10] I don't know. At this point, we'll have gone from AI contributing 0%, to 1%, to 10%, to 100% of GDP, basically something like this, and each of these stages is only taking a few years. It doesn't seem unreasonable to say that AI ends up contributing 1,000%. It's just that at this point, if you draw all these points out, the curve doesn't look exponential. It looks faster than that.
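One standard toy model for "faster than exponential", as a gloss on this rather than anything derived in the conversation: let output grow at a rate that itself rises with output, $dY/dt = k Y^{1+\varepsilon}$ with $\varepsilon > 0$. The solution, $Y(t) = Y_0 \left(1 - \varepsilon k Y_0^{\varepsilon} t\right)^{-1/\varepsilon}$, reaches infinity at the finite time $t^{*} = 1/(\varepsilon k Y_0^{\varepsilon})$: each successive 10x takes less time than the last, which is exactly the shrinking-gaps pattern of the 1%, 10%, 100%, 1,000% stages.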

Jaime: [01:20:41] So here, what informs my intuition is this idea that we have gone through 10 orders of magnitude or so of the economy in human history. And at some point in the last century, growth rates in fact stopped accelerating in frontier economies. We hit a roadblock; economists debate exactly what it was. But it is undeniable that growth rates stabilized for at least a period. I think that you could get a gap like that, and that makes it so that it takes you at least 2,000 years to get there.

Yafah: [01:21:18] I don't know. My vague impression is something like this: 10,000 years ago, human economies start growing; eventually, after thousands of years, you get the industrial revolution and they grow even faster; and after a few hundred years of that, it slows down briefly. But it's just a brief patch of slowing down.

Yafah: [01:21:41] If you want to draw this sort of line, which I don't know how much you can, it looks like the 20th century is a little bit of a speed bump, or not even that, but a little bit...

Jaime: [01:21:58] I don't understand what you mean by speed bump.

Yafah: [01:21:59] I'm saying that it hasn't actually been that long of a slowdown, given that AI takes over and suddenly we go back to speeding up. The gaps between the increases in speed seem to be shrinking a little bit, so it seems totally reasonable to say any further gaps will also shrink... The overall arc of human history, in this argument, looks very much super-exponential, with a little bit of noise in somewhat recent history.

Jaime: [01:22:36] I think eventually you get there. Growth rates keep growing until you hit some physical limits.

Yafah: [01:22:41] Yeah.

Jaime: [01:22:42] Although then it's kind of unclear where those physical limits lie. I do think that you hit some bottlenecks around there: maybe energy, maybe available space to build, in some kind of way...

Yafah: [01:22:55] What's our gigawatts of energy use for AI in 2040? What's our modal case?

Jaime: [01:23:01] I'm not gonna give a number to that. I'm not gonna give a number to that.

Yafah: [01:23:05] I did this. I got, I don't know, yottawatts maybe. Who knows? Petawatts.

Jaime: [01:23:11] Yafahwatts.

Recap: Three eras of forecasting
Jaime: [01:23:16] Yafah, I want to move towards summarizing what we have said here. Over this conversation, we have tried to develop what I would call the default Epoch model. Maybe not call it the Epoch model; I'm not sure the rest of our colleagues will necessarily agree with it. But it is our best guess of what happens to AI if current trends continue as they have so far.

Yafah: [01:23:43] And we think a lot of these trends by default do continue, unless something interferes.

Jaime: [01:23:51] To me it does seem like we have identified these three periods of AI development. There is from here until the end of the decade or so, where you roughly can continue the current pace of crazy scaling that's being enabled by redirecting investment from the world into this new technology; continuing the rapid build-out of AI infrastructure that allows you to train bigger models, deploy them at large scale, and automate more and more tasks over time. And to me, this is already enough to get you to a world in which AI is very economically impactful, on the order of already adding a couple of percentage points of growth per year to the US. A world in which we break the spell of stagnating growth rates thanks to this new technology, which is insane to me.

Yafah: [01:24:47] Even at this point, I think it will be very obvious to everyone that AI is the most important invention that's happened in their lifetimes. At some point, it will stop seeming like all of our conversations are about AI, because there will be so much AI that it will just be specific subsets. It'll become this very huge part of everyone's lives in this world. It's hard...

Jaime: [01:25:16] It's hard to overstate it. It does seem that then, by the end of the decade, still not every cognitive ability has been automated; there's still a long tail of cognitive abilities that resist automation by AI. But maybe they're not that far away. We also get to the point where it becomes much harder to scale. Not enough to completely stop scaling: it just becomes so ubiquitously useful that people want to keep scaling. But every extra order of magnitude requires more and more time for planning, for infrastructure build-out, more investment that needs to be justified. And that overall slows things down, but doesn't stop them completely. And then, roughly, our modal story is that by 2035, every cognitive task that humans do, essentially, AI can do just as cheaply. Maybe they do not do everything, because we do not have enough robots to automate all physical tasks. But they become this economic "force majeure", which speeds up economic growth significantly. Here I'm envisioning a world in which AI companies are generating trillions of dollars of revenue, where growth rates worldwide are getting to 10% or greater. And that doesn't seem to me that insane. I think this is something that could perfectly well happen in this modal world.

Yafah: [01:26:46] I think it's pretty reasonable to expect this at this point, which is wild and feels too much. It does seem pretty reasonable. I do think this is our modal world. I think there's a lot of very reasonable considerations which point to things going much faster.

Jaime: [01:27:09] That's right. And then, to determine exactly how fast things are moving here and how fast they're going to be developing over the next years, we identified these three factors where we need a better understanding. We need to understand (1) how quickly you can build robots and deploy them to do useful work. We need to understand (2) what the returns to scale are, especially when it comes down to developing new technologies. And (3) what the returns to intelligence are, as you keep building AI that becomes progressively more sophisticated and outsmarts humans not only by a small margin but by a wide margin.

Yafah: [01:27:45] I think with these three factors, we would basically have put our modal worldview together. We have guesses for them, but on all three of these we are pretty uncertain, and don't really feel entirely comfortable with our estimates, especially compared to a lot of our other work. And they span a very wide variety of possibilities.

Jaime: [01:28:19] That's right. And regardless of that, we still have this confidence that eventually you build the robots and eventually you get to the point where AI has taken over the economy, essentially. And that puts you into this new economic regime of hyperbolic growth, where it's just very hard to make guesses about how fast it goes, how long you can keep that regime going for, what the final scale of the economy is that you get ten years after that point, for example.

Yafah: [01:28:51] I think that, depending on the answers to some of the hard questions, it might happen before robots are even relevant. But we definitely think that, given all of these things, you eventually get to this point.

Jaime: [01:29:10] Yeah. How I would summarize...

Yafah: [01:29:11] In our modal world. Sorry, in our modal world.

Jaime: [01:29:13] In our modal world, of course.

Yafah: [01:29:14] There's also the possibility that you see a sharper drop-off in scale, which might push out timelines pretty far. Or you might have things happen faster, earlier. Yeah.

Jaime: [01:29:28] How I would summarize this whole modal viewpoint that we've been laying out here is: these three eras. From now until the end of this decade, the next five years, this scaling era where you keep things moving fast and you see this rapid growth. Then this commoditization era, in which AI becomes this "force majeure economica" that gets us to at least 10% growth a year by 2035, and potentially we get to automate every conceivable cognitive task. And then from 2035 to 2040 or 2045, at some point there, you get this transition into this new economic regime where AI is the economy, it's driving everything. We shift into fully hyperbolic growth. And then our models break down. I want to be very transparent about it: our models break down in this world.

Yafah: [01:30:25] Yeah. I usually forecast out to the point at which things become bananas, which is the term I use in my head, at least. In this modal world that we're discussing, yeah, that happens around then.

Jaime: [01:30:42] That's right. And I mean, to me this is, to a degree, something hard to believe, because when I stop to think about the consequences, it's very crazy. But when I try to go through where the fault is, I'm not sure I can find it. Everything that we have said about how the world develops does seem like it could happen. Obviously, this exact story is very, very unlikely.

Yafah: [01:31:11] I think there are a lot of different choices here where we're unsure and we need to make a decision: we feel like we can pick a modal outcome, or we can guess, but it could go either way. One thing in particular that I think about in regards to this: if you just made some of these choices differently, a lot of the time it would lead to a much faster world, and a lot of the time to a much slower world, which I think is a good thing. This world that we just discussed is one where I both strongly feel that it's too fast, and strongly feel that it's too slow.

Jaime: [01:31:54] An interesting epistemic position.

Yafah: [01:31:55] Which I think is a good sign, to be clear. If I think about it one way: this world sure has a lot of research being done without it having a giant effect on anything, and that's sort of bizarre. Science is running several times faster and there are a lot more scientists; it feels off, and it feels like it should go a lot slower than that. It feels like maybe you hit the end of scaling and it just stops, or scaling continues but at a similar speed. And...

Jaime: [01:32:26] Or people are reluctant to scale. Maybe they go: "Well, we're not seeing that many economic returns out of this. Maybe we won't want to go 10x bigger."

Yafah: [01:32:36] Yeah. This is also a possibility. Then there's a lot of things... Sorry. Which one were we doing? Things that made it go slower or faster? I thought we were just doing things that...

Jaime: [01:32:51] We were doing things that will make the trajectory go off...

Yafah: [01:32:55] Yeah, it can go a lot slower, especially given that the world we're describing is radically different from ours in terms of the amount of growth and the events. On some level it feels like, even with the current trends, we haven't seen those trends lead to an entire world that's changed this way. I am sympathetic to people who say: "You're just drawing a straight line through a few points and extrapolating it out wildly far." On the other hand, I think we have enough data to say that it looks like all these straight lines exist; all these trends are real. And the impact of these trends is very substantial if you work them out. We are creating intelligence, and there's good reason to believe this is different from a lot of other technologies of the past. So far, current events seem to hold up with this interpretation. This world makes me happy in terms of feeling neither way too fast nor way too slow to me.

Jaime: [01:34:11] Maybe. One thing that I will add is this distinction between modals and medians. I think that this is my modal: this is the exercise of me just repeatedly asking, "okay, what is the next most likely thing to happen?" I do think that my median is slower than that. Actually, if I were going to place my median, I would put it substantially farther out than this, potentially a decade later. I can see more ways in which this gets slowed down than ways it gets sped up. I do see a very compelling way it gets sped up, which we have discussed, through returns to intelligence. But I do think that my AI timelines are such that my modal is substantially ahead of my median.

Yafah: [01:34:56] I'm not sure my modal is quite as far ahead of my median, although I do think that if you start going a bit slower than this, you do end up having some more dramatic slowdowns. All the faster worlds are a lot more salient to me, and when making decisions they feel a lot more important. And I don't think this is unreasonable: if there's a decent chance of those faster worlds, then in terms of things I can have an impact on now, worlds in which you get this hyperbolic growth in 2030 or sooner are just some of the most impactful things to work on and consider. I'm hopeful that we'll get more focus on thinking through those outcomes as AI scales. I don't expect those outcomes; I think they're definitely less than 50%. But I spend a decent amount of time thinking about the faster worlds, and I think that this is, on some level, correct: those faster worlds are more important to think about. But it's also useful to have a baseline to be able to look at in two years. It's useful to be able to look and say: "Did AI go a lot faster than we expected? For the two years after that, is it looking like the modal world? Is it looking like AI did in fact go much faster?"

Yafah: [01:36:25] A thing that's very useful about having this sort of world to think about: in two years, AI is going to look wildly more advanced and important. In very many worlds, including our modal ones and fast ones and many slower ones, and I'm sure your median, AI looks way bigger and more impressive and is doing way more sophisticated things in two years. It's useful then to be able to say: is this on trend, or even faster than we expected, or slower, or the same? Because even in slow worlds, AI looks like this. And it's useful to have set out some sort of standard to judge this against, because there's going to be a temptation to update towards the faster worlds just based off it being very cool. There will absolutely be some results that look faster than others and are surprising, some things that are really surprisingly advanced, and it's useful to have these metrics and this discussion of what the benchmark is. What can we measure against to check if this is actually on track?

Closing remarks: The two sides of insanity
Jaime: [01:37:42] One thing I'm very much looking forward to is now we have this concrete thing on record that economists are going to point to and say: "These people are insane. They say we can get to 10% growth a year by 2035."

Jaime: [01:37:56] And then the AI people, they're going to look at us and they're going to be like: "These people are insane. They're only projecting 10% growth by 2035."

Yafah: [01:38:04] Yeah, I agree. I think that there will be some people who... Who am I more sympathetic to here? I'm definitely more sympathetic to the economist side of this, even if I think the other side is more important to think about. My actual take here is that I'm much more sympathetic to "it's insane" in the sense that it's insane that it goes this fast. But I think it's more important to work on, and more salient to me, the potential for it to be even faster.

Jaime: [01:38:43] All right. Any closing statement, Yafah? Something you want people to remember or take away; the single thing you want people to remember?

Yafah: [01:38:50] That the default world, where trends continue and nothing very exciting happens, is completely wild.

Jaime: [01:39:00] I'm into that. To me, this is a world which feels quite plausible, and this is still a world in which AI can replace all cognitive tasks by 2035, and pretty much all human labor by 2040. This is very insane.