TBPN

Diet TBPN delivers the best of today’s TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after.

Described by The New York Times as “Silicon Valley’s newest obsession,” the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

Speaker 1:

Absolutely wild podcast between Nvidia's Jensen Huang and Dwarkesh Patel. So many clips, so much debate. We're gonna kick it off with the question of whether or not Nvidia is a car. Let me start off

Speaker 2:

with this

Speaker 1:

video from SemiAnalysis, because it is a treat.

Speaker 2:

Wait. And pause for a second. I don't You know why I was asking you? Yeah. Like, what app you use to edit videos?

Speaker 2:

Oh, yeah. It was to make this exact video. Yeah. And then I saw this and I was like, oh, I don't need to make it.

Speaker 3:

I don't you're not talking to somebody who woke up a loser. And that loser attitude, that loser premise makes no sense to me. We are not we're not a car. We are not a car.

Speaker 1:

Such a great edit. It's absolutely such a good vibe. And it's so funny, because I love that this is Tokyo Drift. Oh, good. It's funny because it's this new type of vibe video where the SemiAnalysis team, obviously, recontextualized the two clips and added the beat to get the video going. But that final edit with the cars sort of morphing together, like, there's a match cut.

Speaker 1:

These are match cuts between one stick shift move and another, or one car drifting and then another car drifting. That takes a really long time. Somebody clearly made this video just because they're an enthusiast of Fast and Furious. And then the SemiAnalysis team was able to quickly recontextualize that to be about Jensen's answer on NVIDIA's moat, the CUDA moat, and what's happening with NVIDIA. So let's go through this question, because it's been humming for a while, and it sort of bubbled up most recently because there was a whole bunch of news that Mythos might have been trained on Trainium, then maybe TPU, then maybe Blackwell, and it was sort of a mix.

Speaker 1:

And it just feels like more and more of the AI labs are capable of making other chipsets work. In the early days, it was all about NVIDIA, and now it feels like the incentives are really high to figure other options out, and that creates a different competitive dynamic. So we can run through my thoughts on this, and then we can go into the geopolitical considerations as well, because that was another fascinating segment of the interview. So Jensen, the CEO of NVIDIA, spent over ninety minutes, almost two hours when you include the ads, in the ring with Dwarkesh Patel. It was electric.

Speaker 1:

And the key question, at least for the start, was whether or not NVIDIA is a car. And I'm only half joking about that. It sort of was the key question. NVIDIA has been the market leader for years. During the gaming boom, NVIDIA was the gold standard for rendering PC graphics.

Speaker 1:

There was always decent competition from AMD. But once the AI boom kicked off, the CUDA ecosystem significantly sped up development of AI systems and training of AI models. And for those who aren't familiar, CUDA is a programming model that enables GPUs to accelerate demanding workloads by parallelizing computation. So instead of needing to work with the very low-level instruction sets, if you are a CUDA kernel engineer, you can access all the power of the GPUs very efficiently while staying up in the more mathematical AI research, more standard Python and C++ programming paradigms. You don't need to dip down too low.
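Since the CUDA programming model comes up repeatedly here, a minimal sketch of the idea may help, written in plain Python rather than real CUDA: the model is built around "kernels," small per-element functions launched across thousands of GPU threads at once. Everything below is illustrative, not NVIDIA's API; the `saxpy` names are the classic textbook example, and a thread pool stands in for the GPU's grid of hardware threads.

```python
# Rough analogy only, not NVIDIA's actual CUDA API: a data-parallel
# "kernel" computes each output element independently, so the same tiny
# function can run across thousands of threads. On a GPU the grid of
# threads is hardware; here a thread pool stands in for it.
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    # One logical thread owns one index i, much like a CUDA thread
    # computing out[i] = a * x[i] + y[i] for its own thread ID.
    out[i] = a * x[i] + y[i]

def saxpy(a, x, y):
    """High-level entry point: callers stay in plain Python and never
    touch the low-level launch details, which is the CUDA pitch."""
    out = [0.0] * len(x)
    with ThreadPoolExecutor() as pool:
        # Stand-in for a kernel launch: one task per element.
        list(pool.map(lambda i: saxpy_kernel(i, a, x, y, out), range(len(x))))
    return out
```

On real hardware the same shape shows up as a `__global__` CUDA C++ function launched with `kernel<<<blocks, threads>>>(...)`, each thread deriving its index from its block and thread IDs; the point of the analogy is just that the researcher writes the per-element math and the platform handles the parallelism.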

Speaker 1:

But it's getting easier to dip down lower, and that's what we're seeing. So that created the CUDA moat, because developers were way more productive, and the biggest bottleneck to progress was allowing AI researchers to quickly test ideas and scale their experiments up to whole fleets of GPUs and eventually entire data centers. At the time, researchers really liked CUDA and really liked NVIDIA, and they did not want to have to spend hours and hours figuring out other hardware systems. They just wanted to run their tests and see if the model was getting better and continue to scale. But recently, the biggest cost center for AI labs flipped from researcher time, more or less, to compute capacity, and this creates a much larger economic incentive to figure out a way to drive down the cost of chips used to train AI models.

Speaker 1:

I wrote about this back on Tuesday, October 22. So I said: not every link in the supply chain of AI can be completely commoditized. NVIDIA has an insane amount of power, having ramped full-year revenue over the last three years from $27 billion to $60 billion to $130 billion. Absolutely insane top-line revenue ramp at that scale, and that's why Jensen is so confident about how dominant this business is. It's the world's biggest company for a reason. It has been growing spectacularly at immense scale.

Speaker 1:

And not only did they grow the top line, but net profit margin grew insanely. So it grew from 16% to

Speaker 4:

50%. Yeah,

Speaker 2:

you do that when you have pricing power and massive leverage, because in this case demand is massively outstripping supply, plus developer love, and kind of just

Speaker 1:

Yeah. Like, I believe the forecast net margin was 70-something percent. We were hearing about 80% potentially. And so NVIDIA, you know, the plan is still to make an incredible amount of profit off of these chips, because they are incredibly valuable. But all the hyperscalers and the AI labs are sort of incentivized now to form a bit of an anti-NVIDIA alliance to commoditize the accelerator market and drive down those margins at least a little bit.

Speaker 1:

And so today, the AI chip market is starting to look much less monopolistic. AI coding agents can make it easier to write software that works on non-CUDA chip stacks, and the teams behind competing chips have plenty of resources and economic incentives to bring performance in line with NVIDIA. Even if it's going to be a big hassle, even if you're going to just spin up a team to get AMD or TPU working, it's going to be worth it, because you're talking about billions and billions of dollars spent on chips. For the past few years, NVIDIA

Speaker 2:

Yeah, another example of an instance where an AI lab had so much urgency that they were willing to spend whatever it took was Meta rebuilding Yeah. MSL. Yeah, exactly. They were willing to outspend pretty much any other lab on talent because they didn't have time to develop homegrown talent or go through a normal recruiting process, especially considering a lot of those engineers were happy doing what they were doing.

Speaker 1:

Yeah, yeah, yeah, exactly. Yeah, the incentives flip when you get to this scale. Or the incentives just get so big, you can build a whole team for a specific thing, solve any problem. And so for the past few years, NVIDIA has sort of looked like SpaceX's launch program. It's an incredible technology with very few viable alternatives.

Speaker 1:

And so that creates great margins, as we've seen with SpaceX's launch capacity; they control something like 90% of the launch market. And the products have not degraded, quite the opposite actually: Blackwell and Vera Rubin are incredibly powerful chips. They're clearly on the leading edge there, and they have an incredible amount of supply chain guarantees from TSMC and across memory and all the other different pieces of the supply chain. Like, NVIDIA is ready to make more chips. But increased competition makes the category look a little bit more like the car market than the rocket launch market. And so that, I think, is where Jensen is pushing back and saying, no, there's a lot more that we bring to the table with our customers; don't think of us like a car. This is not the difference between a Ford and a Toyota.

Speaker 1:

They all sort of get you to the same place. And you can swap one out. You can be driving a BMW one day, go to the dealer, turn it in for a Mercedes, and you're going to have a pretty

Speaker 2:

Yeah, or the other example that I was using with the team earlier was this idea of, like, if you're a delivery company like FedEx and you have a lot of Ford vans, and then Hyundai comes to you and says, hey, you've been spending like $50,000 per Ford van, but would you consider a Hyundai? Yep. We'll sell it to you for $35,000. It's just as good. And it could be, like, mildly inconvenient for the company, because they're kind of used to using Ford. They maybe have, like, an internal team that does some maintenance.

Speaker 2:

But when you start looking at that cost differential, it can start to get pretty interesting to say, like, hey, why don't we try out some Hyundais? Yeah. Let's, like Yeah. Move over some of our routes to Hyundais and, like, see how that goes. And they try it and they're like, hey, this actually works pretty well.

Speaker 2:

Yep. Maybe there's, like, you know, higher maintenance, but it actually, like, maths out. Yeah. And, like, we're gonna actually start adding more Hyundais to our overall fleet. Yep.

Speaker 2:

And so Jensen's point on the podcast is that, like, we're not selling cars. Right? This is not, like, something you can kind of swap out. Yeah. And Dwarkesh was obviously pushing and pushing. And

Speaker 1:

for a lot of companies, swapping out NVIDIA for a TPU would be very difficult due to some of the workloads that Jensen focuses on. He says, we're not a tensor processing unit. We're an accelerator. There's a whole bunch of scientific computing workloads that work particularly well with NVIDIA. The problem that

Speaker 2:

And of

Speaker 4:

course, Dwarkesh

Speaker 2:

was just saying, yeah, well, it's perfectly fine to have, like, a specialized chip for specialized workflows, because the biggest companies in the world, the biggest buyers here, have, like, a single type of workload Yep. That they're trying to do Yep. And that's why they're using your competitors.

Speaker 1:

Yeah. Because the AI build-out does not seem to be slowing down. As long as power can continue to be brought online and data centers can continue to find towns that will approve them, demand for chips will presumably grow. But every chip designer and AI lab has to be praying that those net margins come down. How quickly will it happen? Very early to tell.

Speaker 1:

The market did not react negatively to this back and forth, although the price of NVIDIA has been basically flat since last August. And I think that's why Jensen is trying to sort of reset the narrative, because, you know, there's been a lot of movement up and down. They've been sort of flat. There's a desire to sort of reset and recontextualize Yeah.

Speaker 2:

Inference demand is scaling Yeah. In a very significant way. Yep. And the NVIDIA stock price is relatively flat.

Speaker 1:

Yeah. So since August, there have been so many different moments of fear: of the AI bubble, of the products not finding product-market fit, that revenue might stagnate. Like, we've seen a ton of bullish signals for AI demand broadly. Demand is there. And so if you're a supplier, you should also be going up.

Speaker 1:

But there's been this overhang of what will happen to margins and market structure. And so that is what people are going back and forth on. Tyler, did you have any other thoughts on this?

Speaker 5:

Personally, I I think I I probably am more on the Dwarkesh side Mhmm. Where, like yeah. I mean, at some point, like, these margins are so high. Yeah. There's so much opportunity here.

Speaker 5:

We've seen that, like, actually, if you're a big lab, you know, it's a lot of resources

Speaker 1:

Yeah.

Speaker 5:

But you actually can just train the model on a different architecture. You can serve them on a different architecture. Yeah. Like, you actually can figure these things out. As models get better, you can go lower and lower on the stack.

Speaker 5:

Yeah. You can, you know, you can write the kernels

Speaker 1:

Yeah.

Speaker 5:

Semi autonomously. As things get faster and faster. And and it's like, I'm probably much more on the Dwarkesh side.

Speaker 1:

Yeah. It'll be interesting to see how Groq fits into this, the new LPU, integration between different pieces of the puzzle. And then also, Dwarkesh makes this point that the supply agreements that NVIDIA has might be a bit of a moat for the next few years while TSMC lead times are so constrained.

Speaker 5:

Yeah. But even then, like, you know, Jensen was saying that kind of supply constraints, this is like a two to three year problem. After that, like, you can just solve these things. Like, I don't know how to think about this because, you know, to me it seems like, yeah, so much of the value of Nvidia is just, like, they have such an incredible relationship with TSMC. Yeah.

Speaker 5:

But if and and it's so valuable because of how constrained it is. Yeah. But if that kind of constraint is is maybe gonna, you know, go away to some extent.

Speaker 1:

According to TSMC?

Speaker 5:

Yeah. That's what he says. Like, they're gonna increase, you know, they can build new fabs and Sure. Whatever. Yeah.

Speaker 2:

I mean, the big the big the big takeaway from the conversation is, like, you have one person who seems incredibly AGI pilled Yeah. Which is Dwarkesh. Yeah. And then you have Jensen who doesn't seem AGI pilled in that sense at all. Yeah.

Speaker 2:

Yeah. Right? Dwarkesh asked about, he was kind of getting at the idea of, like, will you be able to just, like, prompt your way to NVIDIA chips? Yeah. Basically sell software.

Speaker 1:

Yeah.

Speaker 2:

Yeah. And Jensen was basically like, no, I don't I don't think that'll happen. And then

Speaker 4:

and then when you get

Speaker 2:

to the whole geopolitical conversation too, again, it's like Dwarkesh is like, you're selling nukes, and Jensen is like, in my view, I'm selling computers. And that was like the big rift. It was like these two kind of totally conflicting worldviews, and it made for some very

Speaker 1:

Pay per view.

Speaker 2:

It was a pay per view. It should have been a pay

Speaker 1:

per view. Sriram Krishnan summed it up pretty well. He said, every person's reaction to the Jensen plus Dwarkesh podcast can be extrapolated directly from whether they believe in the frontier labs achieving short timelines for AGI/ASI. If you believe in the labs achieving RSI and then AGI/ASI, for some definition of all three, in the next few years, you're probably sympathetic to the frame Dwarkesh adopts.

Speaker 1:

If not, you're probably more sympathetic to the arguments from Jensen. And so we can go into the export controls next and talk about that. Metacritic Capital sort of summed up a little bit of Jensen's rhetoric and how he wasn't conceding a lot of things. Dean Ball said, Dwarkesh-Jensen reveals how inconsistent and unbattle-tested AI acceleration talking points are, especially when they're filtered through the prisms of corporate comms and mass politics. Strategically coherent accelerationism is possible.

Speaker 1:

He says, I try, but not currently prevalent. And Metacritic Capital says, the problem is that Jensen doesn't concede anything. Compute spending going to the moon, a trillion dollars of revenue in sight. Models keep getting better. No unemployment.

Speaker 1:

Software coders are good. Other Western accelerators are bad. Chinese competitors are good. NVIDIA makes token costs decline 90% per year, but Chinese computer scientists are capable of making all the necessary algorithmic improvements. He also can't be AGI-pilled enough, because at the end of the day, he is an intellectual property company in the business of sending a file to TSMC.

Speaker 1:

I think it's part of Taiwanese culture to want to be loyal to all clients and not have favorite winners. He doesn't want to betray his software customers. He has antitrust concerns.

Speaker 2:

Yeah. He was making the bull case for software, which was that AI agents are gonna use tools. Yeah. So he's like, there's gonna be more users of software than ever.

Speaker 1:

Which is something I'm, like, somewhat sympathetic to. But yeah, it definitely

Speaker 2:

It's still very easy to Yeah. Take the counter.

Speaker 1:

The flip side of that. Yeah. Well, let's play the distilled recap from Dwarkesh Patel of the back and forth with Jensen on export controls. It's about four minutes, and we'll watch this and then discuss.

Speaker 4:

If Chinese companies and Chinese labs and the Chinese government had access to the AI chips to train a model like Claude Mythos, with these cyber-offensive capabilities, and run millions of instances of it with more compute, the question is: is that a threat to American companies, to American national security?

Speaker 3:

First of all, Mythos was trained on fairly mundane capacity, and a fairly mundane amount of it, by an extraordinary company. And so the amount of capacity and the type of compute that it was trained on is abundantly available in China. And so you just have to first realize that chips exist in China. They manufacture 60% of the world's mainstream chips, maybe more. It's a very large industry for them.

Speaker 3:

They have some of the world's greatest computer scientists, as you know. Most of the AI researchers in all of these AI labs, most of them are Chinese. They have 50% of the world's AI researchers. And so the question is, if you're concerned about them, considering all the assets they already have. They have an abundance of energy.

Speaker 3:

They have plenty of chips. They got most of the AI researchers. If you're worried about them, what is the best way to create a safe world? Well, victimizing them, turning them into an enemy, likely isn't the best answer. They are an adversary.

Speaker 3:

We want the United States to win. Having a dialogue, and having research dialogue, is probably the safest thing to do. This is an area that is glaringly missing because of our current attitude about China as an adversary. It is essential that our AI researchers and their AI researchers are actually talking. It is essential that we try to both agree on what not to use the AI for.

Speaker 3:

With respect to China, of course we want the United States to have as much computing as possible. We're limited by energy, but, you know, we've got a lot of people working on that, and we have to not make energy a bottleneck for our country. But what we also want is to make sure that all the AI developers in the world are developing on the American tech stack and making the contributions, the advancements of AI, especially when it's open source, available to the American ecosystem. It would be extremely foolish to create two ecosystems: the open source ecosystem that runs only on the Chinese, a foreign, tech stack, and a closed ecosystem that runs on the American tech stack. I think that would be a horrible outcome for the United States.

Speaker 4:

I mean, I think the concern, going back to the flop difference in the hacking, is: yes, they have compute, but there are some estimates that because they're at seven nanometer, because they don't have EUV due to chip-making export controls, the amount of flops they're able to actually produce is like one tenth the amount of flops that the US has. And so with that, could they eventually train a model like Mythos? Yes. But the point is that because we have more flops, American labs are able to get to these levels of capability first. And because Anthropic got to it first, they say, okay, we're going to hold on to it for a month while all these American companies, we give them access to it, they're going to patch up all their vulnerabilities, and now we release it.

Speaker 4:

Furthermore, even if they train a model like this, the ability to deploy it at scale matters. You know, if you had a cyber hacker, it's much more dangerous if they have a million of them versus a thousand of them, so that inference compute really matters a lot. In fact, the fact that they have so many AI researchers who are so good is the thing that makes it so scary, because what makes those engineers and researchers more productive is compute.

Speaker 3:

We should always be first and we should always have more. But in order for the outcome you described to be true, you have to take it to the extremes. They have to have no compute. And if they have some compute, the question is how much is needed. The amount of compute they have in China is enormous.

Speaker 3:

I mean, you're talking about the country. It's the second largest computing market in the world. If they want to deploy and aggregate their compute, they've got plenty of compute to aggregate.

Speaker 1:

Very, very tense back and forth. Dean Ball says it's a shame Jensen mostly fails here, because the monoculture on export controls is bad. If you're a young AI policy researcher trying to make a name for yourself, it's almost impossible to be taken seriously unless you are pro export controls. Monocultures are usually bad. And I am sympathetic to Dwarkesh's points there for sure, especially on the inference side: even if models exist in both worlds, having a whole bunch of, you know, good-guy compute that can go and patch bugs while the number of attackers is much smaller.

Speaker 1:

It's just a matter of, you know, how many resources you have on each side. That's a great point. The only thing that they are, like, talking around is Taiwan as a particular turning point, and how their various positions flow through to Taiwan policy. China's stance on Taiwan is something that has always puzzled me, and I wish that both of them had articulated their sort of philosophy on actually war-gaming out what export controls do to the likelihood of a Taiwan intervention or blockade or anything like that. I don't exactly know.

Speaker 1:

I've been trying to, like, work through it, but I don't have a complete thesis. But we've been debating it back and forth all day. I don't know if you have a strong take on any of this, Jordi.

Speaker 2:

I appreciate Matt Zeitlin's point. He's like, I kind of appreciate that Jensen Huang seems relatively normal about non-business stuff compared to other tech founder CEO types. But then when it comes to NVIDIA's actual operations, he's a complete sicko.

Speaker 1:

Yeah. Yeah. I saw a couple of takes around this, that he had some very, very strong points if you're deeper in the supply chain. I couldn't really assess how he did on that. But there were definitely people that were in Jensen's camp.

Speaker 1:

It was divided, which I think is why this went so viral. Jensen Huang and Dwarkesh today. Most combative interview he's done in a while. The biggest regret not funding

Speaker 2:

Has there been a more combative interview ever with Jensen? It seems unlikely.

Speaker 1:

Don't think so. There are some funny details in here. Apparently, the Larry and Elon begged Jensen for GPUs at dinner story: that never happened. We absolutely had dinner.

Speaker 1:

At no time did they beg for GPUs, which is funny. I wonder how that story got started. Dwarkesh posted "tomorrow," and it's him and Jensen standing next to each other. And Volkov said, hey, Dwarkesh, was this picture taken before or after the pod?

Speaker 1:

Because it does feel like it was a tense situation. Although, to both of their credit, it felt like Jensen loved being in the arena, getting asked hard questions, like, working through this. There's this back and forth where Jensen's pushing back and Dwarkesh says, oh, I can drop it. And he says, you don't need to drop it. I'm enjoying this.

Speaker 1:

Like, let's hash this out. And I thought that was very diplomatic and just good overall. Well, Intel is up on the news, up 4% today, 10% over the past five days, almost at all-time highs. I think we're very close to the 2000 peak for Intel, which is also around where they were trading in 2021. $67 a share, a $330 billion company.

Speaker 1:

Clearly, with all of this backdrop, and just the idea of more chips and maybe the CUDA ecosystem being something that you can work around, can an American fab run by Intel produce a chip that's viable for an AI lab? It feels like increasingly yes. That's certainly the argument that's being put forth by Dwarkesh. And it would be very exciting. I think everyone would support an Intel resurgence.

Speaker 1:

There's some news around TeraFab potentially getting involved. But first, let's start with the scoop from Grace Kay over at Business Insider. She says: Scoop: Cursor plans to use xAI's infrastructure to train its Composer 2.5 code

Speaker 2:

golden scoop, Tyler.

Speaker 1:

According to people familiar with the matter, Cursor will use tens of thousands of xAI's GPUs, they said. And we got a scoop for Grace Kay.

Speaker 2:

This one's going to Grace.

Speaker 1:

Grace

Speaker 2:

This giant Congratulations.

Speaker 1:

You win the golden scoop.

Speaker 2:

The golden scoop. Congratulations. Interesting to see something we talked about probably midway through last year. xAI has, you know, shown a tremendous ability on the kind of infrastructure Yeah. Data center side, spinning up a huge amount of compute very, very quickly, ahead of any timeline that any reasonable party would have probably expected. And demand hasn't exactly followed in the way that they would have liked. Yeah.

Speaker 2:

And so opening that up to a company like Cursor who has all the demand. Yeah. And what they really need is their own model.

Speaker 1:

Yeah. It was also interesting, because I don't know if it was thrown out as a potential project for other companies. I feel like MSL mentioned it at some point, maybe OpenAI. There was some talk of, like, okay, if you're marshaling all this compute and you wind up with too much, like, what do you do then? And the idea of becoming a cloud provider, if you have a data center and everything

Speaker 2:

works

Speaker 1:

Yeah.

Speaker 2:

That's been the big question of, like, you know, with everything that SpaceX is doing, and now TeraFab Yep. They're gonna be creating all this Yep. Capacity. Yep. Where's the demand for that capacity gonna come from?

Speaker 1:

Right?

Speaker 2:

Yep. And so you could imagine a world in the future where SpaceX has a bunch of space data centers, they open up that capacity to a bunch of companies other than

Speaker 1:

Yep.

Speaker 2:

Just Elon Inc Yeah.

Speaker 1:

Businesses. So Grace says the setup effectively turns xAI into a kind of cloud provider: by renting some of its GPUs to other companies, xAI could start generating revenue from its massive infrastructure while still developing its own AI models. The arrangement could help the company offset the costs of building and operating data centers while also deepening ties with a startup that has access to valuable coding data. And so there could be some sort of trade deal going on.

Speaker 1:

Ed Ludlow at Bloomberg has a report on the TeraFab. Musk's team is actively requesting price quotes and delivery timelines for a wide range of chip-making equipment: photomasks, substrates, etchers, deposition, cleaning, and testing tools, according to sources. Elon Musk's lieutenants have reached out to chip industry suppliers for his envisioned TeraFab project. Remember, he was pictured with Lip-Bu Tan from Intel, I think, last week. Early steps in an audacious and likely arduous attempt to break into the production of cutting-edge chips.

Speaker 1:

That is a very, very tall order, but maybe there's never been a better time to break into the cutting-edge chip market, given that, I mean, you sort of need to reinvent CUDA, but it's becoming easier, potentially. Well, there are two big releases from Anthropic and OpenAI today. Claude announced Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision, they say.

Speaker 1:

Very good score on SWE-bench Pro: 64.3%. Excited to test it. They say Opus 4.7 has substantially better vision. It can see images at three times the resolution and produce higher-quality interfaces, slides, and docs as a result.

Speaker 2:

Yeah. The most notable thing here is you have a model card that shows a model that's not publicly accessible.

Speaker 1:

Yeah. Yeah. They share the Mythos benchmarking.

Speaker 2:

Mythos, lurking in the distance. Opus 4.7. Just sitting there. But of course, unless you're one of the select few, you won't be getting access to it, at least yet.

Speaker 1:

Well, OpenAI announced Codex for almost everything. It can use apps on your Mac, connect to more of your tools, create images, learn from previous actions, remember how you like to work, and take on ongoing and repeatable tasks. With computer use on macOS, Codex can now use any app by seeing, clicking, and typing with its own cursor. It runs in the background without taking over your computer, working on tasks like front-end iteration, app testing, or any workflow that doesn't expose an API. You can now generate and iterate on images with GPT Image 1.5 and Codex to create front-end designs, mockups, game assets, and more without leaving your workflow.

Speaker 1:

Usage is included in your ChatGPT account. No API needed. Automations can now run in the same thread. Lots of updates here. And Tebow says, Codex just got a lot more powerful: computer use, in-app browser, image generation, editing, 90-plus new plugins to connect everything, multi-terminal SSH, lots and lots of stuff.

Speaker 1:

So go give it a test. Go take it for a spin. Openai.com, of course. And you can download it for macOS. The West creates the Internet.

Speaker 1:

Try nailing that Jell-O to the wall. CCP nails it. The West creates LLMs. All cut. All right.

Speaker 1:

Try nailing this. And the CCP picks up a hammer. There's a Oh. piece in the Wall Street Journal opinion section: AI is bound to subvert communism.

Speaker 1:

This is a very contrarian take, because with the Internet, at least, there was the perception of decentralization, permissionlessness, anonymity, a lot of things that felt very democratic. AI is very centralizing by default. This is the Thiel take of, like, AI is communist and crypto is libertarian. And so this is a pretty wild thing to argue, but, you know, read the opinion piece and see what they say. The nailing Jell-O to a wall line, I believe that's from the Clinton administration, the idea that the Internet would spread so widely that the Chinese Communist Party would not be able to control the population.

Speaker 1:

Everyone would be coordinating. It would be sort of like an Arab Spring type moment. But of course, the firewalls went up, the surveillance happened, nothing really changed, and the Communist Party seemed stronger than ever. But this does sort of undergird a lot of what Dwarkesh has been saying about the risk of China having strong AI and stronger control over the population. It's always hard to get a read on exactly how things are rolling out in China.

Speaker 1:

There are some people that seem to like it over there. Of course, Dwarkesh took a whole trip to China and made it back okay. So it's not all doom and gloom. It is a tricky thing to argue, but we'll see. There's a ton of breaking news.

Speaker 1:

The big one

Speaker 2:

is Reed Hastings. Reed Hastings is stepping off the board of Netflix, and the stock is down tremendously. But this is good.

Speaker 1:

He's not stepping off the board. He's stepping off the board in June. He announced that he's stepping off the But, yes. I mean, that's what

Speaker 2:

I'm saying. No. But but but

Speaker 1:

Why is

Speaker 4:

this again?

Speaker 2:

It's good because it's good for Reed specifically

Speaker 1:

Oh, yeah.

Speaker 2:

Because it shows that people have confidence

Speaker 1:

Oh, sure.

Speaker 2:

In his leadership. I would expect Netflix to make a quick recovery.

Speaker 1:

But, yeah. It is fantastic. Percent in the last month, down eight and a half percent overnight after hours. But we'll see where the stock settles.

Speaker 2:

Alternative is a nightmare for Reed because if he announced this Yeah. Announced this and the stock popped Yeah. 20% Yeah. He was, you know, handicapping Totally, totally. The

Speaker 1:

Yeah. And I mean, the flip side is that Ted Sarandos, it seems like he put on a master class over the last six months with the Paramount negotiations, not getting over his skis. The shareholders wound up really liking how that all penciled out. And so it seems like the company is in good hands. And all of the different strengths that Netflix has continue to show across advertising and subscriptions.

Speaker 1:

And the big headline with Netflix is that they've kept their content budget essentially flat or slightly growing, while they've grown subscriptions and revenue and top line very precipitously and very consistently, even at a time where they haven't needed to invest exponentially more money in content. Obviously, they spend a fortune on it, but it's not growing as fast as their revenue is growing, so their profits are growing, which is good news. Hastings' departure marks the end of an era for Netflix, which under his leadership transformed from a DVD-by-mail business to a juggernaut in subscription video streaming and disrupted Hollywood. "My real contribution at Netflix wasn't a single decision," Hastings said in a statement. "It was a focus on member joy, building a culture that others could inherit and improve, and building a company that could be both beloved by members and wildly successful for generations to come."

Speaker 1:

Well, we wish him the best on his next chapter, whatever he winds up doing. What an absolute run. And lots more stories to talk about, but we will be back with you on Monday at 11AM.

Speaker 2:

It's been an honor and a privilege.

Speaker 1:

We'll see you soon.

Speaker 2:

Be with you here today.

Speaker 1:

Leave us five stars

Speaker 2:

on Fantastic.

Speaker 1:

Apple Podcasts. It's

Speaker 2:

your day.

Speaker 1:

Sign up for our newsletter at tbpn.com. Goodbye.

Speaker 5:

Throwing flash bang.

Speaker 1:

Throwing flash bang.