Exploring the Frontiers of Technology and AI
Ejaaz:
[0:03] I can't believe I'm saying this, but over the last 30 days, over $1 trillion has been committed to building out 50 to 100 gigawatts of power in pursuit of AGI.
Ejaaz:
[0:16] xAI, OpenAI, Meta, Microsoft, and Google have each committed hundreds of billions of dollars to building out the next generation of data centers that are going to power the most powerful and transformative technology we've ever seen. Now, I'm just going to call it: I think Elon wins this one. Despite being over five years late to the race, in the last two years he's built one of the best frontier AI models the world has ever seen and assembled some of the best engineers and builders to scale the compute and data to train these models. Not only that, Elon's leaving nothing to chance. He wants to own the entire table. Tesla, which started off as an electric car company, now produces Megapacks and Megablocks that can power cities. He's building robots. It's never going to end. Josh, I want to remind the audience that in order to build the best AI models, you need trillions of dollars' worth of compute to reach what we perceive as AGI. I think xAI is going to win it. Do you?
Josh:
[1:16] I think they are so far, currently, just based on, again, the rate of acceleration. We always talk about this. They're moving the fastest. They have the most cracked team of engineers working on this. They have the most vertical integration.
Ejaaz:
[1:28] Come on.
Josh:
[1:29] I mean, Colossus 2, it's going to be pretty amazing. The thing that I've really come to grips with recently is that this is the largest capital expenditure project any...
Ejaaz:
[1:37] One has ever.
Josh:
[1:38] Done before, and everyone is doing it independently. This is like the Manhattan Project at a scale we could never imagine, except instead of one entity working on this, there are several of them: the largest companies in the world spending all of their additional money, and even going into debt, to pay for this. And there's a strong case to be made that this could be the last thing we ever spend money on. The transformer kind of created this way of converting electricity into intelligence, and there's a very strong case to be made that the more energy we can get, the more we should just funnel it into generating intelligence. And that's exactly what's happening with these companies. I mean, they're spending a huge amount of money. It seems like xAI is in the lead. And I think you have some fun things to share about the Colossus 2 build-out, which is the new build-out for xAI.
Josh:
[2:22] Can you please walk us through what's going on here?
Ejaaz:
[2:24] So just to remind the listeners, compute is the most important factor in determining how good your model is going to be. And I want to start with xAI and Elon Musk's strategy. They recently announced something known as Colossus 2, which, as this tweet says, is going to be recognized as the world's first gigawatt-scale AI training data center. Simple translation: it's going to be the biggest data center for training AI models that we've ever seen. And I think Elon has invested over $10 billion already into building this thing out, with the goal of scaling it to 10 and eventually 100 gigawatts, which is enough to power many, many towns.
Josh:
[3:04] And even a terawatt in there,
Ejaaz:
[3:05] I see. I was almost scared to say that, because I was like, can we do this? But, you know, trillions are being spent at this point. To give some context for the listeners: for Colossus 1, the team deployed 100,000 GPUs, which is what you need to train these AI models, and it took them 122 days. With this new Colossus 2, which is much, much bigger than the original, it took them 19 days to build out the first iteration. Two digits. That's insane. So Elon's ability to scale these data centers is just actually insane. And I want to point out this snippet from SemiAnalysis, who do analysis on infrastructure races, saying xAI built in six months what
Ejaaz:
[3:48] took 15 months for Oracle, Crusoe, and OpenAI to build. But I think, Josh, Elon has another secret strategy for winning this war. Can you tell me about it?
Josh:
[3:59] Yeah, well, I mean, the Elon strategy, and the reason why all of these things work so well, is that he is amazing at manufacturing and production. And I think a lot of people overlook how difficult it is to actually make things in the physical world. It's really difficult to manufacture things. And he learned this over the decades through Tesla, through SpaceX, through actually creating physical hardware, in a space in which a lot of these companies really only create digital goods or handheld products. A lot of this has to do with vertical integration and the unfair advantage of having this suite of companies that complement each other. One of them, like we're seeing on screen now, is Tesla and the advent of the Megablock, which not only allows you to create a microgrid by having a transformer connected to a bunch of batteries, but also lets you smooth the electricity from the grid to the power systems that are training the models. A big problem when you're training large models is this thing called power jitter: every time the GPUs spin up, they require a lot of energy, and they spin down very quickly. Those jitters are very difficult for the grid to smooth out, so these Megapacks do that. They are also working on another effort where they're going to create their own in-house chips, starting with AI5 and then eventually AI6.
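The smoothing idea being described can be sketched in a few lines. This is a toy model with made-up numbers, not Tesla's actual control system: the battery covers the gap between the spiky GPU load and a constant draw from the grid.

```python
# Toy model of battery smoothing for GPU "power jitter" (illustrative
# numbers only, not Tesla's actual control loop). The battery supplies
# or absorbs the difference between the spiky GPU load and a flat grid draw.

def battery_flows(gpu_load_mw, grid_draw_mw):
    """Positive = battery discharging, negative = battery charging."""
    return [load - grid_draw_mw for load in gpu_load_mw]

# GPUs spike to 900 MW during a training step, then fall to 300 MW between steps.
gpu_load = [900, 300, 900, 300, 900, 300]
grid_draw = sum(gpu_load) / len(gpu_load)  # grid supplies the flat average

flows = battery_flows(gpu_load, grid_draw)
print(grid_draw)  # the grid sees a constant 600.0 MW
print(flows)      # the battery eats the +/-300 MW swings
```

In reality the batteries also have to handle much faster transients and finite storage, but the shape of the idea is the same: the grid sees the average, the battery eats the jitter.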
Josh:
[5:10] And the assumption is that eventually these chips will be powerful enough that they could just embed them in their own data centers and actually use their own in-house chips to train this. So not only are they really great at the vertical integration, but they're just really hardcore at creating these products. And I mean, like you said earlier, Ejaaz, the speed at which they're able to deploy this stuff is just faster than everyone else. Whether it's a testament to the actual work culture, or to them doing it all themselves versus a company like OpenAI, who's outsourcing a lot of the actual infrastructure to Oracle and a lot of the construction to a third-party
Josh:
[5:44] company, xAI is doing a lot of this in-house. And I think that's going a long way in actually making them move so much faster than everyone else. Ejaaz, how old do you think xAI is? Like, they haven't been around that long.
Ejaaz:
[5:56] Has to be, what, like three years? They're quite new.
Josh:
[5:59] It's like two and a half years. It's less than three years, and OpenAI has been around for nine. That is a long time. Granted, a lot of the earlier years were spent just doing research and preparing the world for the world of AI, but xAI moves so fast, and I think all these things converge together to their advantage. And as we move faster and further along this path to AGI, that rate of acceleration on an exponential curve starts to really make a big difference. And the more I think about this, these outrageously large numbers, the trillions of dollars. I mean, this is the biggest capital expenditure ever made in history. It's like seven or eight Manhattan Projects all going at once from all the largest companies in the world. And it kind of makes sense. It seems like this is the last thing you'll need to spend money on.
Ejaaz:
[6:42] Yeah, it actually reminds me of something Zuck said in an interview last week, which was: I am willing to lose hundreds of billions of dollars on a massive mistake if it means I have a 10 to 20% chance of building out superintelligence. The point is, it's such a high-stakes game that everyone's playing right now that they can't afford to lose. Let's say AGI is achieved in five years' time. Let's say that's your projected estimate, and it's actually achieved in three years' time. That two-year lag is going to cost you your entire company. And that was the point he was making. You're absolutely right. This is the most important race ever.
Josh:
[7:21] Yeah. Things move so fast along the exponential curve. If you are off by a couple of months on that vertical line, you are just in trouble. And we're going to talk about Meta and Zuck and OpenAI and all the other crazy news, but first we do have to make mention of our sponsor, Keygen. Keygen is building the world's largest verified distribution protocol, aka Verify. What is Verify, you may ask? It focuses on ensuring only real, high-quality users participate in digital platforms, addressing issues like fake accounts, bots, and fraudulent activity. How does this work? Well, the protocol uses advanced biometrics and fraud-prevention technologies to block fake users. The system is essential for AI and consumer applications alike, for developers who require high-quality, fraud-free data and engagement to train models and expand their products globally. They're already used by over 200 clients across AI, gaming, DeFi, and consumer apps, and have reported for them over a 99.8% reduction in fake accounts. So if you need to train your AI on real data, check out Keygen. We'll have a link in the show notes. Thank you, Keygen. Ejaaz, back to you. Let's talk about OpenAI, because this seems like a pretty big thing that's happening in the world of data and training that they just announced this week.
Ejaaz:
[8:29] So we have xAI and Elon, which, as you pointed out, is the newcomer to this race but is somehow leading the front. But they have some history with Sam Altman, who is, of course, the founder of OpenAI. They go way back. They actually founded OpenAI together, right? But then a rivalry ensued when the two of them kind of broke up, because they didn't align on the same types of business interests. Sam wanted to take it from a non-profit to a for-profit. He wanted to start building AI in a certain type of way that Elon didn't agree with. And so Elon broke off and founded xAI. And that's kind of how those two companies have progressed since. But Sam has been equally, if not more, aggressive in building out data centers to train the best AI models. If you remember, Josh, earlier this year, actually the start of this year, they announced their Stargate project, which is basically the name they're giving to the different data centers they're building to train these models. These Stargate data centers exist all over the world.
Ejaaz:
[9:29] Geographically, I think they have the highest concentration in the US. I think they've planned for five to ten of these Stargate sites to be in the US, but they're building one in the United Arab Emirates, as well as multiple in Europe, just to have this presence and support for OpenAI users anywhere in the world. So the Stargate project initially announced it would invest $500 billion over the next four years, as part of a larger AI infrastructure push committed in conjunction with the American government as well. But then they had some banger announcements over the last week, Josh. The first of two being their deal with Oracle, which is a $300 billion compute deal, where Oracle will supply the GPUs and the compute to help train OpenAI's models, with the commitment spread over five years. This in turn resulted in Oracle stock surging 42%, which led to a lot of critics saying: okay, so they don't have the compute, they announce this deal, Oracle stock jumps 42%, and they use the money to buy NVIDIA GPUs, and it results in this kind of cycle. But the point is, OpenAI commits to buy $60 billion of compute per year, $300 billion total over the next five years, which results in 4.5 gigawatts of power, roughly the output of two Hoover Dams. Enough for about 4 million homes, Josh.
Josh:
[10:58] Yeah. Okay, so I went through the math. I actually queried Grok about how much energy one gigawatt is, just for reference, because we always talk about, okay, it's about 834,000 homes, but it's kind of outrageous to think about exactly how much that is. With solar panels, each gigawatt is equal to approximately 3.3 million standard residential solar panels, so multiply that by five and you're over 16 million solar panels. Each gigawatt is also equivalent to 400 large wind turbines, so multiply that by five and you're at 2,000 wind turbines. The numbers here are staggeringly large. This will become a significant percentage of the total energy output. And it's probably a trend that continues: the greater the percentage of energy we throw towards solving AGI, the more efficient everything else gets. So it makes sense that these numbers are just going to keep growing larger and larger. And I mean, think about a terawatt, which you mentioned earlier in the post. That's what these companies are aiming for. Where they get that energy, I don't know, but I think the downstream effects of this war are really cool. If a company is able to conjure up one terawatt of energy, the assumption is that we've gotten some sort of technological breakthrough, some downstream effect that will not only allow these data centers to get better, but also means our cost of electricity goes down a lot. And the world always gets better when the cost per kilowatt decreases. We see this across the board: there are no energy-poor countries that are rich.
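For what it's worth, the equivalences quoted here are easy to sanity-check. The per-gigawatt figures below are the rough ones cited in the conversation (via Grok), not engineering references:

```python
# Quick check of the energy equivalences quoted in the conversation.
# These per-gigawatt figures are the episode's rough numbers, not
# an engineering source.
HOMES_PER_GW = 834_000       # homes powered per gigawatt
PANELS_PER_GW = 3_300_000    # standard residential solar panels per gigawatt
TURBINES_PER_GW = 400        # large wind turbines per gigawatt

deal_gw = 4.5  # the OpenAI/Oracle build-out ("multiply by five" rounds up)

print(round(deal_gw * HOMES_PER_GW))     # 3753000 homes
print(round(deal_gw * PANELS_PER_GW))    # 14850000 solar panels
print(round(deal_gw * TURBINES_PER_GW))  # 1800 turbines
```

Note the turbine figure in particular: at 400 turbines per gigawatt, 4.5 to 5 gigawatts is on the order of 2,000 turbines, still a lot of steel, but thousands rather than millions.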
Ejaaz:
[12:27] But Josh, some of the math isn't mathing, dude. Look at this: OpenAI will spend $60 billion a year, six times its current revenue. So I have to imagine that OpenAI is structuring some sort of deal, and we're going to talk about one with NVIDIA soon, where they're kind of raising money in the future. Or they're agreeing with SoftBank and Oracle that, okay, you give me this amount of GPUs, and by that time we'll have raised the money to pay for it. And we're kind of seeing this with a follow-on announcement they made after the Oracle announcement, which is that they're going to build five new Stargate sites in cooperation with Oracle and SoftBank. Masayoshi-san is putting up a ton of money to support this as well.
Ejaaz:
[13:09] And the details are pretty interesting and, of course, include a lot of large numbers: $400 billion pledged over the next three years, with locations in Ohio, Texas, and New Mexico, plus an unnamed Midwestern site. So again, we're concentrating very much in the U.S. It's funny, AI compute is pretty much becoming a war chest, literally, between nations, and whoever has the most power basically wins, which is, I guess, what's instigating a lot of the USA-versus-China rivalry. And a statement from Sam I found really interesting is, he goes: AI can only fulfill its promise if we build the compute to power it. If we are limited by compute, we'll have to choose what to prioritize. No one wants to make that choice. So let's go build. And that's his reasoning and justification for spending this amount of money.
Josh:
[13:52] And it's funny, Sam has mentioned twice publicly now, once in a blog post and once in an interview with NVIDIA CEO Jensen Huang about the deal we're about to mention, that they may have to choose between education for all or curing diseases. And I think that's a very interesting way of approaching the sales pitch: like, hi guys, unless we get more electricity, you're going to have to make this very difficult choice. And I don't love that. I'm starting to see the tactics that some of these CEOs are using in terms of messaging. Dario of Anthropic recently came out this week talking down open source. Sam Altman is now making this critical, emotional appeal to the public. So it's interesting to see how they deliver this and try to get the messaging across, so that it becomes appropriate to raise and deploy this much energy, this much power and AI compute.
Ejaaz:
[14:39] Yeah, I mean, I think the frustration he's probably facing is that people have been promising AGI, including Sam. And notice how he didn't mention AGI once in that letter you're referencing. They haven't really been able to deliver it at the consumer level. They're achieving it with coding. They're achieving it with math and a bunch of other nerdy, very niche things. But they haven't brought it to the wider audience in a way that gets the masses to start believing in it. And I think he's trying to supplement that with a very strong and purposeful vision. And he kind of gets the story, or the point, across, right? When he says: we either cure diseases or we have the best education for children, which one do you want? We don't want to have to choose. It makes you kind of think. But yes, moving on to the most recent announcement, which I actually think is the craziest. NVIDIA is investing $100 billion in OpenAI. $100 billion.
Ejaaz:
[15:30] $100 billion. This is after already committing, what was it, like $5 billion in one of their series, whichever letter they're on. This partnership will supply 10 gigawatts of GPUs to fuel OpenAI's data center growth. That is a staggering amount of GPUs. I think this is strategic. I think Sam wants some sort of guarantee that Jensen and NVIDIA are going to deliver OpenAI GPUs and not falter on their promise, because Jensen's supplying xAI, Jensen's supplying Meta, Jensen's supplying Google as well, right, to an extent. So there are these trust issues forming between the competitors, with so much reliance on Jensen. Obviously, Jensen's just sitting back, laughing and enjoying all of this,
Ejaaz:
[16:19] but I think this is a strategic alignment. Do you have any comments?
Josh:
[16:23] I mean, it's like, 10 gigawatts here, 10 gigawatts there, and suddenly you're talking about some serious power, some serious infrastructure, some serious money. Where is this all coming from? I don't know. It's very clear that there is a reliance on a single man, being Jensen Huang, that all of this is built on top of. So, I mean, it's another big deal, and we will see how it plays out. We have yet to see anyone really eclipse more than a gigawatt, so to go 10x, a full order of magnitude, in the physical realm, where you actually have to build physical infrastructure: okay, this is great, go do it. Let's see what happens when you go do it, because so far no one's been able to figure this out yet.
Ejaaz:
[17:01] Now, if you're listening to this episode and this is starting to sound like a bit of a pyramid scheme or a Ponzi scheme, you wouldn't be alone. A lot of people are looking at all these recent announcements, connecting the dots, and realizing: hang on a second. If OpenAI invests $100 billion in Oracle to buy cloud computing services, and then Oracle invests $100 billion in NVIDIA to buy the graphics cards, but then NVIDIA just announced that they're reinvesting $100 billion in OpenAI to build AI systems, isn't it just the same money we're talking about? Money which doesn't even exist, which hasn't even been raised, which hasn't even officially been committed just yet, while all of their stock prices are massively soaring. So which comes first, the chicken or the egg? It's just a funny point that I think is worth mentioning. Josh and I are equally excited about this growth, and we think AGI and investing in compute are super important, but we're also not trying to put the cart before the horse, and we're admitting that this might feel a little bubbly. It might actually be a little bit bubbly, but that's the risk that we, or the investors, are willing to take to build this
Josh:
[18:12] Out yeah you know what ej is i don't i don't care for the big numbers anymore they mean nothing to me a hundred billion dollars means nothing to me go build the infrastructure go launch it go show me half a million coherently training gpus like that's what i'm interested in now because everyone has a big deal everyone has a gazillion gigawatts incoming go build the damn data center please and i mean like you mentioned earlier it seems like xai is very much on their way but they're not the only ones we have no more companies started working on this. What have we got next? Microsoft.
Ejaaz:
[18:40] No.
Josh:
[18:40] Another giant.
Ejaaz:
[18:41] We have a tiny company called Microsoft. So Satya, last week, announced this thing called Fairwater, which is basically Microsoft's new data center. They hadn't really entered the race yet, I want to point that out. They were working on a lot of this on-site at some of their offices, or slightly adjacent to their offices, but now they're fully committed to building out their own AI supercluster.
Ejaaz:
[19:06] He goes: if intelligence is the log of compute, it starts with a lot of compute. And that's why we're scaling our GPU fleet faster than anyone else. He goes on to explain how Microsoft had basically already built out 10 gigawatts of capacity, which is pretty big, but he wants to go much, much harder than that, targeting another 10 gigawatts. I'm noticing, Josh, that all of these companies are touting 10 gigawatts. It's the new gigawatt now, dude. And if I remember, prior to that it was 500 megawatts. So we've come a long way. We are going up by orders of magnitude almost every quarter at this point. Again, seems kind of bubbly, but this is basically Microsoft's attempt at building out a supercluster to train their own models. Now, what I find interesting here, Josh, is that Microsoft was very closely aligned with OpenAI.
Ejaaz:
[19:59] In fact, they were the premier cloud and compute provider for OpenAI up until, I think, about a month ago, when they sort of went through a bit of a subtle breakup, where Microsoft is now supplying more of Anthropic's models to their Copilot users, and Google's Gemini models. And OpenAI has announced that
Ejaaz:
[20:20] after 2030, they're basically not going to be relying on Microsoft as much. So I see this as a clear signal that Microsoft is going to be going out on their own and maybe even training their own model.
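Satya's "intelligence is the log of compute" line is worth unpacking with a toy example. The scaling function and units below are purely illustrative, not anything Microsoft has published:

```python
import math

# Toy illustration of "intelligence is the log of compute": if capability
# scales with log10 of compute, each equal step up in capability costs
# roughly 10x more compute. The function and units are hypothetical.

def capability(compute_gw):
    return math.log10(compute_gw)

for gw in [1, 10, 100]:
    print(f"{gw:>3} GW -> capability {capability(gw):.1f}")
```

Which is one way to read why every lab suddenly wants 10 gigawatts: under log scaling, each visible jump in capability costs an order of magnitude more compute than the last one.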
Josh:
[20:31] Yeah, this is very much the AI Game of Thrones. This is how it goes. Everyone is out for themselves. You have deals when it's convenient. You destroy those deals when it's not convenient. And the only person that matters in this, the king of them all, is Jensen and those GPUs, because without those GPUs, none of this happens. So you start to see how these dynamics play out, where, I mean, Microsoft was using OpenAI to the extent that it was valuable. The second there became resistance and it lost value: see you later, we're moving on to the next big one. And I think that's the nature of what we'll continue to see as we go through. But there's more. Intel is in the news. What does Intel do? I don't think I've touched Intel products since they stopped making the chips for the MacBooks. So what are they doing here, please?
Ejaaz:
[21:14] Okay. So we've come a long way. And by a long way, I don't mean in a good way. Intel's stock price, and the general quality of the chips they were producing, declined pretty massively over the last three years, I would say, to the point where, I'm sorry to call it this, but it is what it is, they needed a government bailout. Josh, did you know that the US government currently owns 10% of Intel?
Josh:
[21:39] Yes. In fact, we mentioned it on an episode a week or two ago, which was a huge deal. Like the government is now investing in private markets and they own a company.
Ejaaz:
[21:46] I've never seen nationalization of a core technology this aggressive, probably since the Industrial Revolution, as with the US government today. And yeah, they own 10% of Intel, and actually Jensen Huang's NVIDIA invested $5 billion into Intel as well. And the breaking news this week is that Intel has asked Apple to make a similar investment in them. Why I found this funny, Josh, isn't to do with Intel at all, but with Apple. Because we're giving a breakdown of the top companies and their strategies towards AI, and I'd be remiss if we didn't talk about our friends. I wasn't going to mention Apple at all, because they have zero strategy, zero data centers, zero gigawatts. But I wanted to throw them a bone. If they do end up going forward with this investment, I do think it will end up being a really good strategic move for them. As you mentioned, they used a bunch of Intel chips for their MacBooks, and potentially they might end up using a chip that Intel designs for whatever Apple AI product. Maybe that's me being too optimistic, but I can see it in the future.
Ejaaz:
[22:58] Intel also signed a really close partnership with NVIDIA recently, which agrees that the two of them are going to build out very specific GPU architectures and GPUs that work cohesively across their product suites. So I think, all in all, Apple has a chance to take a shot here.
Josh:
[23:17] I mean, Apple has their own silicon that they make, and they've been adding these neural cores into it to do the AI training. To me, my interpretation of this news is that Intel is becoming a matter of national security, where we need to onshore our chip manufacturing, and the government is getting involved and saying: you cannot die, we need you. Jensen, go help your friends. Tim, go help your friends. We need them to live. And that's kind of what we're seeing here. I'm not sure Apple has much of a use for them. I was looking at an iPhone teardown recently: they have the Apple chips, they have Qualcomm processors for cellular, and that's kind of it. So this might just be a, hey guys, you've got to go help them, and we'll spare you on tariffs and things like that. That's my take, at least. So I'm kind of bullish on Intel, just through all the subsidy that I presume they will be getting going forward.
Ejaaz:
[24:00] What was the name of that ex-OpenAI researcher? Leopold Aschenbrenner?
Josh:
[24:07] Oh yeah, dude made a killing, yeah.
Ejaaz:
[24:08] Yes. He left OpenAI and he started a fund, I think he raised a billion dollars, to invest in companies in accordance with this massive thesis that he wrote on how AI and AGI were going to pan out. And one of the biggest investments he made with that fund was in Intel. This was before anyone was talking about Intel, and he is now up 120% since. Bravo. Some of the best investor returns ever.
Josh:
[24:37] Ejaaz, I have one last question for you. Where's Anthropic?
Josh:
[24:42] They seem to be a big company, yeah, right? Like, Anthropic, the ones who make Claude, the ones whose code model led the frontier for a long time. They've been kind of quiet.
Ejaaz:
[24:51] Oh, you mean the one that's fudding AI every day despite being an AI company? Sorry, Dario. I've just got to call it out, right? Okay, so I'll tell you exactly where they are through the words of Elon Musk himself, in a tweet that I have pulled up here: winning was never in the set of possible outcomes for Anthropic. An absolutely savage sentence. Anthropic hasn't really got any compute providers. They haven't made as much effort or investment in data centers, and they're running out of funding. They're running out of backing. Josh, I have a take that you may or may not agree with. I don't know how hot this is, okay? I think your favorite company, Apple, acquires Anthropic.
Josh:
[25:36] That's a hot take. That'd be kind of crazy. I'm not sure what Apple would do with Anthropic.
Ejaaz:
[25:42] No? Well, they need a model that they can own themselves, but...
Josh:
[25:47] They need a small model. They need a model that can run on iPhones, a model that can improve my MacBook. And I'm not sure Anthropic is worth that, because Apple's making pretty good local models, and to be honest, it's not that hard. In fact, open-source models in the next year are probably going to be good enough that they can just grab one, make it custom for their workflow, and then it's good. I don't think Apple's going to be making bleeding-edge LLMs.
Ejaaz:
[26:12] Okay, so fine. Then maybe they're just two loser companies. Because Anthropic, for some reason, doesn't have the funding.
Josh:
[26:19] Yeah, the way I see the AI race currently is there are three players: it's Google, it's OpenAI, and it's xAI. Everybody not listed there is not really interesting and will probably have some sort of eventual demise, or just will not exist at the frontier. That includes Microsoft, that includes Perplexity, that includes Anthropic, that includes anybody else. It's really challenging, and as these numbers get bigger, as the infrastructure grows, it's only going to get harder. This is a tremendous scale that we've never tried before, and it's really going to be difficult. So to even have a seat at the table now is very impressive. It's gonna be really interesting to see, just over the next six months, how all these build-outs go. Because if you are late on a build-out, if you cannot actually complete this
Josh:
[27:04] new 10 gigawatts of compute, you are cooked. That's it. The frontier has moved on without you.
Ejaaz:
[27:09] Well, speaking of being cooked, I want to take a second to talk about Europe. What we've just discussed are all companies in America. USA, USA, USA. And you might be wondering, well, what's the rest of the world doing here? How are our friends across the sea? The answer is: not very well, but they're celebrating incredibly tiny wins. And I mean 4,000-GPUs tiny. We just spoke about a bunch of data centers being invested in by separate companies, to the tune of hundreds of billions of dollars, for millions of GPUs. Germany is celebrating setting up 4,000 GPUs to help them become digitally sovereign.
Josh:
[27:53] 4,000. I used to do reselling, where I would buy the hot new NVIDIA GPUs when they first came out, using bots, and resell them. And I feel like I accumulated a comparable amount to the entire country. That's very disturbing. 4,000 GPUs, when elsewhere we're talking about a million coherent GPUs. So I'm happy for you, but you have some work to do, my European friends. Sorry, they're just going to keep over-regulating.
Ejaaz:
[28:19] That's the biggest gripe I've had with Europe in general. Whatever fancy new technology gets created, they just over-regulate the hell out of it, so much so that founders want to leave the country, or the region, and come over and build in America. I don't blame them. But there is a foreign adversary, Josh.
Ejaaz:
[28:39] It presents itself as a real threat to the US winning this compute race. In fact, I wouldn't say it's America versus America. I would frame it as very much America versus China. There are two real things I want to highlight here, because there's a lot we could cover, and it probably deserves an entire episode to itself. Point number one is that China has quietly amassed over 3,300 gigawatts, that's according to Grok and OpenAI, of energy supply over the last decade. This was before AI became a mainstream use for this energy; they just built out the infrastructure. And as we know, China has been the infrastructure and manufacturing king for a while now. They are the kings of building out cheap tech that can scale: cost-efficient, energy-efficient, all of the efficiencies, right? And they're now funneling all this energy towards training some of the best AI models. But they had one missing piece, Josh. They didn't have the best hardware, chips, GPUs specifically. So they relied on our friends at NVIDIA, on Jensen Huang.
Ejaaz:
[29:55] And this tweet highlights the second major concern. In 2022, the US banned high-end NVIDIA GPU exports to China, hoping to slow down China's AI development. So China invested in homegrown AI chip production and caught up in three years. Now the story has flipped: China is banning its tech companies from buying NVIDIA AI chips. I dug into this, Josh. In fact, I covered it in our most recent newsletter, released last week; you should definitely go check it out. It compared the chips that Alibaba and Huawei have created, and they're not quite at the level of NVIDIA's latest GPUs, but they match the generation just before it. They have caught up so unfathomably quickly that they present a real threat.
Josh:
[30:49] Yeah, the thing about China, and we talked about this in a lot of previous episodes with DeepSeek and other open-source labs like that, is they are so good at being resourceful. They can turn a hard resource constraint into their biggest advantage, optimizing within the confines, so the second they unlock additional resources, those optimizations compound into explosive growth. A lot of these Chinese AI labs have been accustomed to tremendous constraints on how they can train their models, but they've stayed at the frontier because they've optimized the software so much. So now that China is starting to build its own optimized GPUs in-house, that's a really big deal. Their GPUs haven't been that good in the past, partly because they don't have the capex that NVIDIA has, but they're ramping that up. They clearly have the software side; if they get the hardware side too, that becomes a serious thing to look out for. NVIDIA's monopoly suddenly starts to look a little shaky.
Ejaaz:
[31:58] I mean, at the same time.
Josh:
[31:59] China is moving really quick. The silver lining of China going in-house is that there's more NVIDIA supply left for the U.S., and because there's more NVIDIA in the U.S., all of these projects we have grow much quicker. So I think it's good for China and good for the U.S. We just need to make sure it doesn't turn bad for the U.S., and that we continue to move quickly, because, my God, China is moving very fast. If we don't keep up this hardcore rate of acceleration, we are also going to be in trouble. Like we mentioned earlier, the curve moves very quickly, and if we end up one cycle behind, if we get left in the dust, that's a huge difference that we're going to notice.
Ejaaz:
[32:36] It's funny, the way I picture America versus China: America and the West throw tons of money at the problem. China doesn't have as much money, so they need to be more resourceful, as you point out. And it's not just chips, and it's not just energy, right? Over 50% of the world's top AI researchers are educated in or reside in China. So they have the talent density, they have the manufacturing and scaling capability,
Ejaaz:
[33:03] they have the energy, and now they might have the chips. I wouldn't be surprised to see some of the best AI models of 2026 come out of China, but we'll wait and see.
Josh:
[33:12] And we'll probably see some signals, right? The first signal is China banning external GPUs: they now feel confident enough in their own GPUs to force everyone to go in-house. The next signal we'll probably get, and the one where we should start raising red flags, is when they stop making their models open source. They've been open-sourcing all their models because that's how you get iterative development and access to more developers. Once those models start to become closed source, once they're running on their own in-house Chinese GPUs, that's when things get a little scary. It means they have caught up, that we're neck and neck, and that they're confident enough to keep everything behind closed doors. And this is a very high-stakes race. This is probably the new Cold War, right? You never really know what the downstream effects of reaching AGI will be until we get there, but you can assume it's probably pretty important and pretty impactful.
Ejaaz:
[34:00] So that is the breakdown of all the top AI companies and their scaling efforts. You might notice that we didn't mention Google. That's because Google keeps a lot of stuff under wraps. They buy a hell of a lot of NVIDIA chips and GPUs, but they also have their own in-house TPUs, which they use to train these models.
Ejaaz:
[34:19] So once we dig up more information, we'll definitely talk about that on the show. But Josh, I want to round this episode off with one final question, and it isn't one you'll be unfamiliar with: are we in an AI capex bubble? It's funny, because I mentioned this in the last episode, but so much has happened since then that I need to ask you again. Are we crazy, or is this valid?
Josh:
[34:41] I think I probably have the same answer as I did last episode, which is: on a long time scale, compute will be the only and last thing we need to spend money on. On a short-to-medium time scale, I don't know how much money is left to throw at this before it starts to run dry, before you can no longer prove to investors that it's worth it, that it can generate the revenues required to offset the costs. Directionally, this is absolutely not a bubble. This is the real thing. This will be the only thing we spend money on forever; the conversion of energy to intelligence is tremendous. But along the way, I don't know, we're moving quick. And like we said, 10 gigawatts is very hard. That's an order of magnitude bigger than where we are. If they can build it, then we probably extend the length of this bubble, maybe never even see it pop. But in the world where we start running into walls on these order-of-magnitude gains every couple of months to a year, that's when things can get a little shaky. The numbers are getting big quickly. What about you, do you have any strong feelings either way?
Ejaaz:
[35:45] I think if you're not spending most of your time, money, and effort on training the best AI models, a large part of which is simply spending money on compute infrastructure, you're going to lose. You're not even trying to play in the first place. So I think it's valid. Will it all end up as a winning move? I think there will be a lot of failure, if I'm being honest with you. I think we'll find out in hindsight that a bunch of this money was misspent. And we didn't even talk about the potential wildcard of a completely new AI architecture appearing that drives the cost of training a frontier model down even further. That's not quite wishful thinking, more me with my tinfoil hat on. But I think right now it's the right move, yeah.
Josh:
[36:34] That sounds about right. And again, we'll see. We'll be here right on the frontier, covering it all, keeping everyone up to date. I am very excited to learn more about Google. I am so optimistic on Google, but they are very private in how they handle everything. And you know what? To be fair, Anthropic is fairly private too. So there's a lot still left to be known, but there are only so many ways you can hide a 10-gigawatt data center. You can see that from space. So people will find this out. We will get the answers we need and we'll report
Josh:
[36:59] them right back here on the show for everyone to hear. But yeah, I guess that's it. Ejaaz, any parting thoughts before we wrap here?
Ejaaz:
[37:06] Nope, that's it. I hope this was a useful overview of the pure effort and money going into building out these data centers and why they're being built. I think it's the most important race to watch, and we're going to be covering it very closely as we produce more episodes of the show. Separately, we've been getting so much feedback from you all. We ask for feedback at the end of every episode thinking, oh, maybe we'll get a few more comments. Our YouTube comments and our DMs are piled up, and it's some of the most useful feedback we've gotten: everything from the graphics we're using, to the way we sign off these episodes, to the topics we discuss, to things we might have missed or gotten wrong. So I appreciate all the people calling us out, and I appreciate all the people cheering us on. If you haven't liked and subscribed to any of our episodes, please like and subscribe. And if you have anyone that you think…
Music:
[38:05] Music