TBPN

This is our full interview with Tae Kim, recorded live on TBPN.

We discuss why he believes fears around Nvidia and AI infrastructure are overblown despite recent market pullbacks, unpack how exploding inference demand from coding agents and enterprise adoption is driving a sustained compute shortage that Nvidia is uniquely positioned to capture after locking up key supply, and debate what this next wave of AI means for everything from GPU scarcity and chip strategy to token demand, vertical agents, and whether the current boom is the early innings of a multi-year expansion or the setup for a future compute glut.

Sign up for TBPN’s daily newsletter at TBPN.com

TBPN.com is made possible by:
Ramp - https://Ramp.com
AppLovin - https://axon.ai
Cognition - https://cognition.ai
Console - https://console.com
CrowdStrike - https://crowdstrike.com
ElevenLabs - https://elevenlabs.io
Figma - https://figma.com
Fin - https://fin.ai
Gemini - https://gemini.google.com
Graphite - https://graphite.com
Gusto - https://gusto.com/tbpn
Labelbox - https://labelbox.com
Lambda - https://lambda.ai
Linear - https://linear.app
MongoDB - https://mongodb.com
NYSE - https://nyse.com
Okta - https://www.okta.com
Phantom - https://phantom.com/cash
Plaid - https://plaid.com
Public - https://public.com
Railway - https://railway.com
Restream - https://restream.io
Sentry - https://sentry.io
Shopify - https://shopify.com
Turbopuffer - https://turbopuffer.com
Vanta - https://vanta.com
Vibe - https://vibe.co
Cisco - https://www.ciscoaisummit.com/ai-virtual-summit.html


Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.

Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.

Speaker 1:

Hey, Tae. How are

Speaker 2:

you doing? Thank you so

Speaker 1:

much guys again.

Speaker 2:

For taking the time to go and chat with us.

Speaker 3:

And and Congratulations on Yeah. The launch. Launch of your business.

Speaker 1:

Yes. Thank you. I mean, it's been really gratifying. Yeah. That first day, you never know who's gonna show up.

Speaker 2:

Totally.

Speaker 1:

I was thinking maybe 15 subscribers or 20 subscribers, but, like, hundreds of people showed up. Amazing. Tons of billionaires and tech founders. It's insanely gratifying.

Speaker 2:

Yeah. It's great. So is it over for Nvidia? They're down 21% since the fifty-two week high, we just read.

Speaker 2:

Is it doom and gloom? Is it over?

Speaker 1:

No. I mean, I think I was on last December, and Yeah. The stocks, the semis and chips, have gone up, and now they're back down to where they were in December. Yeah. The chip sector's flat on the year.

Speaker 1:

Nvidia's down 10%. And it reminds me a lot of a year ago. Yeah. Do you guys remember? Everyone's freaking out about DeepSeek, the Yeah.

Speaker 1:

Super efficient models were gonna destroy AI compute. There'd be a huge compute glut, and then everyone freaked out about Trump's tariff wars, Liberation Day. And this year seems very similar to that. Almost like Groundhog Day. Mhmm.

Speaker 1:

We have fears over AI CapEx. People think that it might be the peak, And then we have the Iraq war. And one of these things is

Speaker 3:

Iran.

Speaker 1:

Oil up here. Iran.

Speaker 3:

Yeah. Yeah. Easy to get them mixed up. Yes. They happen.

Speaker 1:

Feels like the same same thing over. Yeah. But

Speaker 3:

I'm sorry to distract. We wanted to throw we wanted to

Speaker 2:

We wanna show respect.

Speaker 3:

We wanted to show

Speaker 2:

respect to a real podcaster. I

Speaker 1:

mean, it's very similar to Iraq, but that's that's I love it.

Speaker 2:

These These are great.

Speaker 1:

But at $100 oil, this stuff is unsustainable and it'll probably Okay.

Speaker 2:

So because when I I like the DeepSeek analogy, and I feel like the market half digested the agentic coding narrative and the Suttrini article, whether you thought it went too far or was too hypothetical, like, clearly, markets did react and a lot of names sold off. But in a world where you believe that narrative, you would think that Nvidia would be going up, but you're saying that there are other factors at play that are sort of tamping down the excitement in the market broadly.

Speaker 1:

I mean, there's no doubt. Just like tariffs a year ago, Nvidia had a 30% drawdown Yep. When their business was actually flying Yeah. The actual fundamentals of the business. I think the same thing is happening here with the Iran war.

Speaker 1:

Mhmm. Things will eventually subside. Oil can't be $100 for forever, and Trump will probably backpedal in the next few weeks Mhmm. Ahead of the Trump

Speaker 2:

So let's recap a few of the key stories around Nvidia. We just came off of GTC, and there's a lot going on at the company. I mean, it's a huge company. Maybe it'd be good to start with just next generation chips, changes to strategy, what people are actually buying. Maybe that means Grace CPU stand-alone sales or the development with the Groq partnership.

Speaker 2:

What's sticking out just on the actual AI product side to you that you're most excited about?

Speaker 1:

Well, inference demand is exploding, driven by the AI agents Sure. And the coding assistants. Yeah. I met with Ian Buck. I met with dozens of engineers at Meta, Google, Nvidia, and all of them are seeing crazy inference demand and AI compute shortages.

Speaker 1:

So across the board, people are in crazy clamoring need for AI.

Speaker 2:

And we're I mean, we're yeah. You're seeing that from talking to engineering leaders at big tech companies, but we're also seeing it from vibe coders who are just on X and Twitter and talking about how they're hitting rate limits on their subscriptions, and they have multiple plans. And they actually shift around from one model provider to another just to make sure that they're getting the tokens they need to build whatever they're building.

Speaker 1:

And you see the tweets. Like, people are, like Yeah. Building bots to pick up any kind of B200 GPU that

Speaker 3:

Oh, yeah.

Speaker 1:

They're waiting, like, weeks and months or whatever.

Speaker 3:

Like sneaker bots, but for Neo Clouds. That's crazy.

Speaker 1:

Exactly. I can't believe that. And the great thing is Jensen, you know, he's very prescient. He probably saw this demand months away. He locked up all the supply agreements for memory, CoWoS, you know, connectors ahead of time.

Speaker 1:

He saw this inference demand. And, to take advantage of this coding assistant boom, it's almost like a gold rush. You see OpenAI pivoting toward it. Anthropic obviously is thriving on it. Yeah.

Speaker 1:

Billions of ARR every few weeks. Yeah. And Jensen's acquired Groq. Yeah. Acquired the assets of Groq and the people of Groq.

Speaker 1:

And the combination of integrating Groq's technology together with Vera Rubin lets Nvidia serve this tremendous wave of compute demand economically. And Ian Buck talked about it, Jensen talked about it. So Nvidia is positioned perfectly to thrive on this coding agent wave that we're seeing right now.

Speaker 2:

On the Groq deal, Jensen did a fantastic interview with Ben Thompson and was sort of asked the same question two years in a row about ASICs, the threat of ASICs, the idea that the GPU, like, general architectures can truly satisfy 100% of demand. It feels like there's a shift in Nvidia's strategy there. Do you see that? It feels like the right move, but do you see it as a shift in the philosophy of the company or the strategy? Or is this just something where the gears have been turning for a long time, and this is maybe just an unveiling of a strategy that makes a lot of sense and has made a lot of sense for a while?

Speaker 1:

I think what Jensen does, he sees where the market is shifting and where the economic value is. With Mellanox, he did this in 2019. He saw

Speaker 2:

2018? World Bye.

Speaker 1:

Shifting to, it's a networking chip, but he saw the world shifting to, like, these 10,000, a 100,000 GPU clusters, and that Mellanox was needed for that. In the same manner, he saw AI agents and the inference behind that taking off. And he said, oh, this Groq thing will work perfectly with Vera Rubin. It doesn't replace everything. He's talked about maybe 25% of the inference demand being what Groq would work on. Yeah.

Speaker 1:

But them working together, where 75% of the inference is Vera Rubin and 25% is the Groq low-latency stuff, that's like the perfect combination to take advantage of this. And the other thing is, like, we're just in this great liftoff of AI innovation. Yeah. We've talked about Anthropic and Mythos, the blog

Speaker 2:

Yeah.

Speaker 1:

Blog post that leaked out. So we're gonna have this, you know, step up function. They they told Fortune that it's gonna be a huge step up change. Yeah. OpenAI is coming out with their model soon.

Speaker 1:

And then when I went to GTC, the biggest takeaway I had was this session between Jeff Dean and Bill Dally, the chief scientists of Google and Nvidia. And it's online. I highly recommend people watch it. And Jeff Dean talked about context window innovations, where they could focus on the 10,000 documents that work well with your request and query. So we're gonna have this context window innovation.

Speaker 1:

Both chief scientists talked about stacking memory right on top of the GPU or TPU, and that's going to be a huge innovation in the coming months or years. And then Jeff Dean talked about synthetic data for audio and video. There's this huge runway, data is not over, and they're gonna be able to take advantage of all this data that people don't realize yet. So you have all these vectors where AI models can just keep getting better and better.

Speaker 2:

Yeah. How are you processing the idea that Nvidia will be investing in an open source frontier lab capability? That feels like potentially competitive with some customers. Nvidia has, like, never really been in that market before. But at the same time, I've been the biggest, like, supporter of open source American AI models.

Speaker 2:

I loved when Meta was doing it. I want more of it. I loved when OpenAI open-sourced GPT-OSS. It feels really, really important, really great, but it does feel like a strategic shift. How did you process that announcement?

Speaker 1:

It's not acute. I think it's, like, $25 billion over the next few years, which doesn't really compete with what OpenAI and Anthropic are doing. Yeah. I guess you're right. These smaller models are gonna be helpful for people running these smaller use cases.

Speaker 1:

So GPUs, as long as they're utilized, even locally or in the cloud, Nvidia benefits. Yeah. And we saw the top people at Qwen Yeah. Left, and we don't know where they went. Qwen is an amazing model.

Speaker 1:

It's kinda like what people thought DeepSeek should be Yeah. And it works well locally. Yep. If Qwen kinda subsides because all the Wait.

Speaker 3:

What's your theory on where what's your theory on where they all went? Another Chinese lab? Or

Speaker 1:

I asked all the engineers when I went to GTC. No one really knew. But people are trying to say Nvidia should actually hire them. Yeah. Because the more capable the open source models Nvidia doesn't care if you're using GPUs to run open source or not.

Speaker 1:

They just want, you know, more AI adoption across

Speaker 2:

the board. Nvidia probably has more levers to pull if it turns into a negotiation with China. Like, we're tracking the Manus story with Meta, and there isn't that much that Meta can give to China in exchange if there's like, hey, like, look the other way on this particular deal. Like, let this one flow through. We'll trade this.

Speaker 2:

Meta's not really doing any business there, but Nvidia, of course, is going to be selling Blackwells at some point in the near future. And there's probably some level of pricing. You know, it can be part of a larger discussion, which makes a lot of sense.

Speaker 1:

And one thing that kind of went under the radar, Jensen literally said at GTC, they got license approvals on both the US and China side. So we're gonna see billions of dollars of H200 orders.

Speaker 2:

Okay. So yeah. I mean, it seems like it seems like there's a path on the demand side that's very, very clear. You've mapped it out a few times. It's a huge number.

Speaker 2:

It's already massive revenues, just incredible growth. But what is the supply side looking like? Because it feels like TSMC is not ramping CapEx nearly fast enough over the next few years. And if we see another 10x increase in compute demand, we could be really constrained on the leading edge fab side. So how do you think Nvidia is gonna process that?

Speaker 1:

Well, Nvidia is in the driver's seat because Jensen goes there five, six times a year, and he's best friends at TSMC and speaks at their employee day. So they are getting a higher allocation of wafers and all that stuff. So Nvidia will benefit. But I agree with you that industry wide Yeah. Like, Google is dying to get more TPU wafers.

Speaker 1:

Sure. All the hyperscalers that have ASICs are trying to get more wafer capacity. Yeah. So there is gonna be an AI compute shortage in the years to come, just like you said. Yeah.

Speaker 1:

And Nvidia just benefits because, you know, they're the biggest dog in the house, and they can prepay tens of billions of dollars to get the allocations they need.

Speaker 2:

Yeah. I mean, maybe there's some offtake in ASICs that can potentially be fabbed somewhere else at some point. I know that a lot of the ASIC companies wind up fabbing at TSMC, but it feels like if you're already doing some sort of rearchitecture, maybe there's a way you can squeeze something a little bit out of, you know, an Intel deal or something else. I'm not exactly sure.

Speaker 1:

But Samsung and Intel are

Speaker 2:

their only Samsung and Intel. Yeah.

Speaker 1:

Fabs that could possibly do it. Yeah. That's the bull case on Intel. Yes. Yeah.

Speaker 1:

Is that is that at some point

Speaker 2:

The labs and Google, like, across TPU, extra GPU capacity, Nvidia. Like, there's just so many buyers of fab capacity now that you could imagine everyone coming to the table potentially in Washington, D.C., or Mar-a-Lago, since the U.S.

Speaker 2:

government owns a slice now, and everyone's saying, okay, let's hold hands and jump across this and say that if the supply comes online, we will buy it at this price, because we have really, really solid use cases that will justify the investment for us and for Intel. So that would be a really, really good case. But again, even if the money is there, how long does it take to get to, you know, good production numbers?

Speaker 1:

I mean, I suspect, like, Apple and Nvidia are considering either Intel or Samsung for their lower end stuff. Yeah. Whether it be, like, a mid-range iPhone or, on the Nvidia side, definitely their consumer gaming GPUs. They may go back to Samsung and maybe even Intel.

Speaker 2:

Yeah. I have one more, but

Speaker 3:

you Go go for it.

Speaker 2:

I wanted to know how you're processing the ARM CPU announcement. It's an interesting dynamic because they're sort of frenemies with Nvidia now. They're partnering in many ways to break the x86 monopoly because they both are selling ARM CPUs, but then they're also competing. And so I'm wondering how you think that plays out, what that means for Nvidia and just the rest of the semiconductor supply chain.

Speaker 1:

I think ARM's CPU opportunity is longer term, you know. Even they said 2030, 2031.

Speaker 2:

Yeah.

Speaker 1:

It's a longer term opportunity. I don't really expect the major hyperscalers like Amazon to switch to ARM's, you know, product offering. They have their own. And same with Nvidia, they have their own ARM CPU that they're gonna incorporate and sell. So it's not that big of a deal. I don't think Amazon or Nvidia are really worried that ARM is gonna take any big share.

Speaker 1:

It's probably gonna be on the margin for companies that can't develop their own ARM CPU, the mid-tier hyperscalers or enterprises that use these things. But I think the ARM thing is very important because it kinda confirms the biggest underlying thing that's not really consensus yet, which is this massive CPU shortage that we're seeing. Yeah. Just over the last few months, we've had Dell, AMD, the Intel CFO talking about three-to-five-year locked-in supply contracts from hyperscalers. Mhmm.

Speaker 1:

So this is a major trend that's gonna go over the next few years. And the reason why is AI agents need more CPUs. The ARM CEO talked about four times more CPU cores versus last year's kind of AI infrastructure model. So we're gonna see this massive demand for CPUs that people aren't really understanding yet, because AI agents, the whole thing, requires orchestration, tool calls, database queries, web searches, and that's all handled by the CPU. Yeah.
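
The split being described here can be sketched as a toy agent loop. Everything below is a hypothetical placeholder, not any real agent framework: the model call stands in for the GPU-bound step, and all the routing, tool execution, and context assembly around it is the CPU-side work.

```python
# Toy agent loop. call_model stands in for GPU-bound inference;
# everything else (control flow, tool calls, context assembly) is CPU work.
# All names and behavior here are hypothetical placeholders.

def call_model(prompt):
    # Stand-in for a GPU-bound inference call: decide whether to use a tool.
    if prompt.startswith("results"):
        return {"action": "final", "answer": "answered from tool results"}
    return {"action": "tool", "tool": "web_search", "args": prompt}

def run_tool(tool, args):
    # Stand-in for CPU-bound work: web searches, database queries, parsing.
    return f"results for {args!r} via {tool}"

def agent(prompt, max_steps=5):
    context = prompt
    for _ in range(max_steps):
        decision = call_model(context)        # GPU-bound step
        if decision["action"] == "final":     # CPU: control flow
            return decision["answer"]
        # CPU: tool execution plus assembling context for the next model call
        context = run_tool(decision["tool"], decision["args"])
    return "step limit reached"

print(agent("what's the weather in Taipei?"))
```

Even in this toy version, one answer costs two model calls plus a tool call and the glue between them, which is the CPU overhead the ARM comments point at.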

Speaker 3:

Give me your bull and bear case for TeraFab.

Speaker 1:

TeraFab, I'm not that optimistic. Okay. I mean, it's so hard to

Speaker 3:

Give me the do your absolute best to give me the bull case.

Speaker 1:

Because TSMC is so short that, you know, Elon needs to find capacity. But even then, are they gonna buy, like, semi cap equipment from ASML and AMAT? Like, there's just no capacity there, so I'm not optimistic on that. And this is stuff that takes decades. Chip fabs are almost like cooking; it's not something where you can just follow a manual.

Speaker 1:

It's almost like cooking, where it takes a lot of trial and error accumulated over decades, at TSMC, even Intel. So it's not something you could just jump right in and do. Yeah.

Speaker 2:

It it

Speaker 1:

unless It's somewhat

Speaker 2:

goes back to the yeah. It somewhat goes back to the xAI debate about, like, do they need AI researchers, or should everyone be an AI engineer? Like, are we in a research period or a you know, the Ilya Sutskever age of research versus the Elon Musk age of engineering? Where are we in semiconductor production? It feels very engineering, like an engineering process, but what we've seen from ASML and TSMC is that it does feel like there's a little bit of research and artistry to it, and the cooking knowledge holds.

Speaker 1:

Yeah. I've been doing a lot of research in this space, and it's a lot of trial and error and Yeah. Almost like cooking a recipe.

Speaker 2:

And it also feels like, at least with xAI, if all the researchers are in San Francisco, you can sort of just, like, walk across to the coffee shop and poach someone. But if the best semiconductor engineers or technicians are in Taiwan and they see it as a national urgency to bring, you know, stability to the country both economically and geopolitically, then you have a very different calculation. It's like, oh, yeah, I could make five times as much if I left my home country, but it'd feel like abandoning it. That's a very different calculation.

Speaker 2:

And everything that I've heard about the culture at TSMC is that the folks who work there are extremely dedicated beyond the economics. They are true missionaries, not necessarily mercenaries. And so it does feel like it's even harder to do, like, a talent raid in the leading edge fab world than even the AI world, which is extremely competitive and still has tons of missionaries. But fabs

Speaker 3:

I guess another question I have is, would you expect xAI slash SpaceX at any point to basically just open up a shop as, like, a Neo Cloud? Because probably one of the least compelling aspects of the TeraFab pitch was him just saying, we need all of this compute. We need to do this because we're gonna be so chip constrained. We're gonna be so supply constrained. But there was no explanation of

Speaker 2:

Where the demand was coming from.

Speaker 3:

Where the demand was gonna come from? Is it gonna come from

Speaker 2:

Training Tesla models, Optimus, or Grok or

Speaker 3:

Yeah. It was just very unclear.

Speaker 2:

It was

Speaker 1:

a lot.

Speaker 3:

But even the question right now is, should xAI be kind of renting GPUs? I don't know. Don't know.

Speaker 2:

Renting out GPUs. Renting out. Because the biggest win has been Colossus two. Yeah. Colossus two, which was built very fast.

Speaker 1:

I think Elon's pitch with the SpaceX IPO, and we'll see it in the coming months, is the AI compute. There's gonna be so much demand over the next five, ten years that you're gonna have to use the SpaceX satellites that have GPUs in them, to Yeah.

Speaker 2:

To serve that, today. And maybe I mean, even though Tesla's been vertically integrated to the point of being a consumer product, SpaceX has not. It's been a railroad, and there is a world where you fab the chips, you put them on satellites, on Starlinks in space, and then you let other companies do whatever they want with those GPUs.

Speaker 1:

Think about what Elon did with Starlink. I mean, that's a telecom infrastructure play, and this would be an AI computing play. Yeah. Yeah. Yeah.

Speaker 1:

Fits that model.

Speaker 2:

There's a world there.

Speaker 1:

I'm not gonna bet against Elon. It might

Speaker 2:

just take long.

Speaker 3:

Yeah. Yeah. What about what's going on with helium? What are you tracking there? There's chatter about helium shortages potentially.

Speaker 1:

Jensen has talked about this. This is a risk, but there's probably, like, six to nine months of inventory in the channel. Bernstein has talked about how it's not a risk in the short term. So if this Iran stuff lasts Yeah. You know, two, three, four, five months, then it becomes a problem.

Speaker 1:

Okay. But if it, you know, gets solved, or the Strait of Hormuz opens up with a toll or whatever final negotiation they come up with over the next few weeks, I don't think it's gonna be a problem.

Speaker 2:

Yeah. I do think that, like, for most of these materials, there are extra deposits. They're just not economical to mine. I don't think that all the helium exists in the Middle East that

Speaker 1:

Very similar to the rare earths thing. Yeah. Just like you said.

Speaker 2:

Yeah. Where, you know, in a supply constraint scenario, it becomes more economical to mine American helium.

Speaker 1:

Let me put it this way. If helium becomes the issue, we're gonna have bigger problems on our hands.

Speaker 2:

Okay.

Speaker 1:

I mean, there's gonna be world starvation.

Speaker 2:

Let's not. Let's hope not. Bad.

Speaker 1:

That'll be the least of our problems, if helium becomes the problem.

Speaker 2:

Take me through depreciation gate. How did you process that, and where do we stand now with the fear that GPUs will depreciate precipitously and H100s will be worthless in

Speaker 1:

six to

Speaker 2:

twelve months.

Speaker 1:

It's totally not a problem right now. Like, CoreWeave has talked about, these things are lasting five to six years

Speaker 3:

Mhmm.

Speaker 1:

And they're getting, like, almost 95% of the pricing. Mhmm. So it could potentially be a problem if this is a bubble. I don't think it's a bubble. Yeah. But if this is a bubble two, three years from now and there's a compute glut, then Yeah.

Speaker 1:

The stocks go down because there's a compute glut. But as of now, it's the opposite. Like Mhmm. All the GPU rental capacity, even for stuff that's six years old, is still sold out, and the AI compute demand outpacing supply is so large that this is not an issue right now.
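
For scale on this debate, here is a back-of-envelope straight-line depreciation sketch. The $30,000 price and the useful lives are illustrative assumptions, not reported figures; the point is just how far apart the "worthless in a year" and "lasting five to six years" framings land on the books.

```python
# Hypothetical straight-line depreciation of a GPU to zero residual value.
# The $30,000 all-in price and the useful lives are illustrative assumptions.

def book_value(price, useful_life_years, age_years):
    """Remaining book value under straight-line depreciation."""
    remaining = max(useful_life_years - age_years, 0)
    return price * remaining / useful_life_years

price = 30_000
for life_years in (1, 3, 6):
    print(life_years, book_value(price, life_years, age_years=1))
```

A one-year-old GPU is worth zero under a one-year life, $20,000 under a three-year life, and $25,000 under a six-year life, so the accounting assumption alone swings the answer enormously.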

Speaker 3:

Do you have any theories on where the next step change in token demand could come from? Because right now, we're seeing it in code gen, and there's a lot of optimism around these types of workflows being applied to other forms of work. But we were talking about this on Friday. Like, if AI can just one-shot beautiful financial models, it won't necessarily even make a real dent in token demand, at least compared to code gen, because no company needs to just constantly be, you know, generating models at the rate that, let's say, Gary Tan generates code. And so I've kind of been trying to wrap my head around where these incremental use cases I

Speaker 1:

actually think code gen is still just early innings.

Speaker 3:

Yeah. Yeah. And I I don't disagree with that.

Speaker 1:

People will have ten, twenty agents, and they're kind of overseeing them. But then we have this other stuff where these models, the Mythos and OpenAI ones, they're just gonna get better, where you could automate all these work process flows. Companies are gonna use them for every single vertical: customer service, research, simulating chip design, drug discovery where they can verify what drug molecules can do. So we're just getting started at this stuff. You're gonna see vertical AI agents in every single category.

Speaker 1:

And I think Logan's coming on. He wrote this great post on X Yeah. That says this AI agent wave is gonna kind of attack this $6 trillion knowledge economy. Right? It's not just about programming anymore.

Speaker 3:

They're coming for us. Yes.

Speaker 1:

I don't think say, I'm actually

Speaker 3:

They're attacking the Key Context economy and the TBPN economy.

Speaker 1:

No. I think it's like a calculator, a spreadsheet. You know, thirty, forty, fifty years ago, we had, like, fifty accountants doing the spreadsheets manually. Right? And after the spreadsheet came, it didn't get rid of all knowledge work.

Speaker 1:

It just enabled people to think at a higher level and get more done. And Yeah. I'm very optimistic about that.

Speaker 2:

I mean, one way that you 10x token demand around a financial model without 10x-ing the number of financial models that you're building is having the agent go and collect 10 times as much data. And so there's a lot of situations where, I mean, you look at, like, hedge funds that want to understand the price of Walmart stock. There are hedge funds that will task satellites to take pictures of Walmart parking lots, estimate the number of people on a day-by-day basis that are going into the Walmart to shop, use that as a proxy to project revenue, then flow that through to cash flow, and then flow that through to the DCF and the actual valuation of the company. And if you think about all the different financial models and all the different businesses where you could go and say, well, for this company, I wanna know the price of Squarespace. Let me go to every single website that's powered by Squarespace and estimate the revenue that they're bringing in and their willingness to pay for their hosting service, something like that.

Speaker 2:

And all of a sudden, like, it's just one spreadsheet. It's just one number at the end of the day, but it's like a thousand times more work went into it.
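
The satellite-to-valuation chain being described can be sketched as a toy model. Every number below is made up purely for illustration, not an estimate for Walmart or any real company; real models would have far more structure at each step.

```python
# Toy version of the foot-traffic -> revenue -> DCF chain.
# All inputs are hypothetical illustration values.

def project_revenue(daily_visitors, avg_basket, days=365):
    """Turn a foot-traffic estimate into an annual revenue proxy."""
    return daily_visitors * avg_basket * days

def dcf_value(free_cash_flow, growth=0.03, discount=0.09, years=10):
    """Ten explicit years of growing cash flow plus a Gordon-growth terminal value."""
    value = 0.0
    fcf = free_cash_flow
    for t in range(1, years + 1):
        fcf *= 1 + growth
        value += fcf / (1 + discount) ** t
    # Terminal value at the end of the explicit period, discounted to today
    terminal = fcf * (1 + growth) / (discount - growth)
    return value + terminal / (1 + discount) ** years

revenue = project_revenue(daily_visitors=1_200_000, avg_basket=55.0)
fcf = revenue * 0.04  # assume a 4% free-cash-flow margin
print(f"revenue proxy: ${revenue / 1e9:.1f}B, DCF value: ${dcf_value(fcf) / 1e9:.1f}B")
```

The output is still "just one number," but the token-hungry part is upstream: the agent work that produces the `daily_visitors` estimate in the first place.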

Speaker 1:

Let me give you this great example. Every year, I do this same store sales roundup for these fast casual companies

Speaker 2:

Sure.

Speaker 1:

Like Chipotle, Cava, and I put out this tweet. It goes viral. A year ago, when I did it, I would have to manually go to every IR website for these six fast casual restaurants. Yeah. It would take me, like, an hour or two.

Speaker 1:

Yeah. I would try to use a chatbot, and they would get it wrong. Sure. I did it, like, a few weeks ago, and all the chatbots got it perfect. So it just saved me two, three hours of tedious manual labor.

Speaker 1:

So that that's only gonna get better and better. Like Yeah.

Speaker 2:

Yeah. It's only gonna take you one like, this year is the year that you do it with multiple chatbots and you fact-check it yourself, and then forever, it's gonna be just one prompt.

Speaker 1:

And they got it right. Like, a year ago, they wouldn't get it right. But now, in one, two minutes, I put in, give me the same store sales for these six restaurants. Yeah. I put it in Gemini.

Speaker 1:

I put it in ChatGPT, just to make sure they're right, and they're right.

Speaker 2:

So Yeah.

Speaker 1:

All the tedious labor, all the manual labor, all the data entry that, you know, all of us are used to, that stuff is going away, and we can think higher level. So I could look at the same store sales and say, oh, the economy is at risk or whatever. But all the grunt work, all the tedious work is gonna be taken care of by these AI agents.

Speaker 2:

I agree completely. I agree completely.

Speaker 3:

We got a lot more sound effects since the last time you joined. Last question for me. What's your outlook on Meta? It feels like the broader market right now has zero faith in Meta to actually

Speaker 2:

put Yeah.

Speaker 3:

All their AI investments to use.

Speaker 1:

I have this history with Meta, which is that, you know, every time it starts falling apart, I say it looks cheap. Yeah. And then it goes down another 30%. But nothing has changed. Like, no one's gonna replace Meta's digital ad position.

Speaker 1:

I mean, like, I would even say in the AI world, they're even better positioned, because Google might lose digital ad share to AI chatbots, their search position, going forward. So, like, no one's gonna replace Instagram. No one's gonna replace Facebook. Billions of people are still going to use those social media apps. And Yeah.

Speaker 1:

You know, every six to twelve months, everyone goes through this bear Meta cycle, but their pure competitive position really hasn't changed. And you saw what happened to Sora. Right? Like, you know, everyone's all excited about Sora, and Yeah. And that got Totally.

Speaker 1:

That shot.

Speaker 2:

Yeah. And there's just this world where even if, like, the AI spending is like a side quest, it's like, really, they just pulled forward, like, three or four years of CapEx, and they will use that for their other products. It's probably even less wasteful than Reality Labs spend, which might take even longer to realize the cash flows from. Like, they can recoup it: okay, we built this massive data center. We did this training run.

Speaker 2:

We didn't get to the frontier. We're not getting a lot of, like, Gen AI usage, but we can apply it to our ads platform and tools and Reels recommendations and a million other things just in years 2028, 2029. And, yeah, we're a little bit ahead of schedule.

Speaker 1:

For ad engine monetization.

Speaker 2:

100%. Yeah. The general model.

Speaker 1:

Reality Labs Yep. He may have wasted seventy to eighty billion dollars. He may waste a hundred billion dollars on these frontier AI models.

Speaker 2:

But the business is good.

Speaker 1:

Core ad engine, core business, that money-making engine is not gonna be affected by this.

Speaker 2:

Yeah. Well, thank you so much for taking the time to come hang out. Always a great time, Tae. Yeah. Go subscribe to Key Context on Substack.

Speaker 2:

Follow Tae Kim on social media. First Adopter.

Speaker 3:

Join the many billionaires that were the first adopters.

Speaker 2:

Yes. Yes. You'll be in good company. And thank you so much. We'll talk to you soon.

Speaker 2:

Have a great week.

Speaker 3:

Great to

Speaker 1:

see you guys. Cheers. Yeah.