TBPN

  • (00:19) - Papperger Pushes Rheinmetall to Top
  • (13:35) - 𝕏 Timeline Reactions
  • (16:11) - Anduril Says Failures are Part of Development
  • (20:21) - 🔴 CODE RED 🔴
  • (38:35) - 𝕏 Timeline Reactions
  • (56:20) - Stratechery: Gemini Spurs New AI Infra Race
  • (01:30:37) - 𝕏 Timeline Reactions
  • (01:42:23) - Matt Garman, CEO of Amazon Web Services (AWS), discussed several key announcements at AWS re:Invent 2025, including the introduction of frontier agents designed to enhance software development, operations, and security through autonomous capabilities. He also highlighted the launch of Nova 2, AWS's latest Frontier AI models, and Nova Forge, a tool enabling customers to integrate their own data into pre-training checkpoints to create customized models. Additionally, Garman announced the general availability of Trainium 3, AWS's new chip aimed at accelerating training and inference processes for customers.
  • (01:55:08) - Tae Kim, a senior writer for Barron's and author of "The Nvidia Way," discusses Nvidia's unique corporate culture under CEO Jensen Huang, emphasizing its blunt communication style, agility in decision-making, and meritocratic approach to talent recruitment. He highlights how these factors have contributed to Nvidia's sustained success and ability to outmaneuver competitors. Kim also addresses the company's strategies in navigating challenges such as competition from Google's TPUs and geopolitical issues affecting sales in China.
  • (02:23:05) - Tarek Mansour, co-founder and CEO of Kalshi, a leading prediction market platform, discusses the company's recent $1 billion Series E funding round, which elevated its valuation to $11 billion. He highlights the mainstream adoption of prediction markets, attributing this shift to factors such as declining trust in traditional media, the legalization of such markets, and their integration into daily activities like sports viewing. Mansour also addresses Kalshi's strategic partnerships with platforms like Robinhood and Coinbase, emphasizing their role in driving user engagement and expanding the platform's reach.
  • (02:42:16) - Matt Mullenweg, co-founder of WordPress and CEO of Automattic, discusses the recent live release of WordPress 6.9, highlighting its development by over 900 contributors worldwide. He emphasizes the importance of freedom in technology, advocating for open-source licenses as a "bill of rights for software." Mullenweg also introduces Beeper, a service that consolidates various messaging platforms into a single interface, aiming to enhance user experience across different networks.
  • (02:53:06) - Jason Fried, co-founder and CEO of 37signals, is renowned for developing web-based productivity tools like Basecamp and HEY. In the conversation, he discusses the launch of Fizzy, a new Kanban-style project management tool designed to be simple, colorful, and open-source, aiming to bring vibrancy and ease of use to the software industry. Fried emphasizes that Fizzy is built for their own needs, reflecting their philosophy of creating products they personally find useful, and offers it at a straightforward price of $20 per month with unlimited users and usage.

TBPN.com is made possible by: 
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com 
Numeral - https://www.numeralhq.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
Polymarket - https://polymarket.com/
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM - 2 PM PST, Monday - Friday. Available on X, Apple, Spotify, and YouTube.

Speaker 1:

You're watching TBPN.

Speaker 2:

Today is Tuesday, 12/02/2025. We are live from the TBPN Ultradome, the temple of technology, the fortress of finance, the capital of capital. Ramp.com, baby. Time is money. Save both.

Speaker 2:

Easy-to-use corporate cards, bill pay, accounting, a whole lot more all in one place.

Speaker 1:

That's right.

Speaker 2:

Why is no one talking about Armin Papperger? He's the CEO of Rheinmetall. Okay. And they've been on an absolute tear. We're, of course, gonna get to Code Red.

Speaker 2:

We're gonna talk about OpenAI. But we talk about OpenAI every day, basically. And I thought it'd be interesting to, meet the CEO behind the world's fastest growing defense company. It's on the cover of the business section of The Wall Street Journal. Okay.

Speaker 2:

When I think high growth defense companies, I usually think Anduril, or, you know, Saronic, or there's so many other companies that are growing very fast in defense tech. Rheinmetall has been on an absolute tear. They're now basically the same size as Lockheed Martin and General Dynamics. And it was a small company just a few years ago. So Rheinmetall, they make, you can see the gun that they make in that picture.

Speaker 2:

They make They

Speaker 1:

make big guns.

Speaker 2:

Massive cannons. They make artillery shells. They've been very important to the Ukraine war. So in the last three years, they've been on an absolute tear. They've gone from roughly $5 billion market cap three years ago to $80 billion in market cap.

Speaker 2:

We gotta ring the gong. We gotta warm up the gong. Ring the gong. $80 billion market cap. They've been on a tear, but there's been basically three key drivers to the growth, to the story.

Speaker 2:

We'll tell the story in three acts as briefly as we can. And while we do, we will say thank you to Gemini 3 Pro. Three-act story about Rheinmetall, three acts. Gemini 3: Google's most intelligent model yet, state-of-the-art reasoning, next-level vibe coding, and deep multimodal understanding.

Speaker 2:

So first, they had a head start. This company, they actually started over a century ago. 1889. Can you believe that? Very, very old.

Speaker 2:

So, they spend their first twenty five years basically just stacking up ammo for the German Empire. This obviously comes to a head in 1914 when World War one breaks out. And at the time, the company was one of the largest arms manufacturers. Like, they were they were pretty pretty big after twenty five years of just stockpiling ammo growing, growing, growing as a defense company. World War World War one breaks out.

Speaker 2:

But then after the war, they gotta pivot. They gotta pivot because the Treaty of Versailles forces them to switch to non military products. Say, hey. You gotta build cars. Make some trains.

Speaker 2:

They get fixated on trains. Trains. And also typewriters.

Speaker 1:

Not the first group to get fixated on trains. Happens to the best

Speaker 2:

of them. But they but they have a good run. They stay in business. They keep making trains, locomotives, particularly. You know, they're making big stuff.

Speaker 2:

And then twenty years later, it's the mid-thirties, 1935 or around there. They're starting to get back into weapons and ammo production. They can't stay away. Uh-oh.

Speaker 2:

Are they rearming the Wehrmacht? And World War Two, obviously, it's massive for production. They're printing. They're making lots of weapons.

Speaker 2:

But by the end of the war, their facilities have basically been destroyed by air raids; they need to rebuild the company from scratch. So after the second war, they get banned from making weapons again until 1950.

Speaker 1:

Keeps happening.

Speaker 2:

And so they have to go back to making typewriters. They keep getting relegated to typewriters. You guys, no more guns.

Speaker 1:

That's enough.

Speaker 2:

You have to make some typewriters. And so they get back into defense tech in the fifties, sixties. The German armed forces gets reestablished in 1956. And by 1979, Rheinmetall is making the 120-millimeter guns that go on Leopard tanks, which you've probably seen in that image, roughly. And so there's lots of M&A, lots of diversification over the next few decades.

Speaker 2:

They expand into automotive and electronics, and that kind of brings us to the second act of the story, which is the Ukraine war. So Russia invaded Ukraine on 02/24/2022, about three years ago. Rheinmetall was around $5.5 billion market cap then. And three days later, Olaf Scholz, the chancellor of Germany, gives what's known as the Zeitenwende speech, which literally translates to turning point. So he says, this is a turning point.

Speaker 2:

Europe has been invaded. We now have a foreign army on European soil. Even though Ukraine's not part of NATO, it feels like, you know, Russia is expanding. If they keep if they just keep going in the same direction, they're eventually gonna be in our hometown. So we gotta do something about it.

Speaker 2:

And what does he propose? He doesn't just say, hey. This is a big deal. He says, no. We're actually going to invest $100 billion, like, off balance sheet from some fund into defense tech.

Speaker 2:

We're going to spend more money. And then, of course, there's a whole bunch of other initiatives that happen. There's the Trump negotiations around how much Europe should pay as a portion of GDP on, on defense. But, basically, it's this major turning point where Europe goes from spending, you know, sustainment levels. Okay.

Speaker 2:

We're gonna spend this much every year to Yeah. We are going to double or triple or, you know, exponentially grow our spending. And it's all gonna be net new. So you can go and fight for it. Yeah.

Speaker 2:

And that's what Rheinmetall does. And so revenue has grown

Speaker 1:

sort of born out of that era.

Speaker 2:

Helsing is like the newer version of Rheinmetall. Rheinmetall is like the old roll up. It's been around for over one hundred years. Helsing, I think,

Speaker 1:

started Helsing was 2021. Mhmm. Was most recently in the news because they raised, I think, $600 million from

Speaker 2:

Daniel Ek. Right?

Speaker 3:

Ek.

Speaker 1:

Yeah. Sparked controversy, of course. A lot of people in the Spotify world, world of music Mhmm. Just think that defense tech is defaulting.

Speaker 2:

Oh, I didn't realize that. There was actually backlash. Totally. Didn't see that.

Speaker 1:

Yeah. You know, if you're This

Speaker 2:

is you're a makes sense.

Speaker 1:

If you're an artist and you, you know, believe in peace at all costs, you're gonna probably be against that. But

Speaker 2:

Well, it depends. Maybe if you're, you know, P.O.D. Yeah. Or you're some other, you know, musician that was played during the war on terror, you could be very pro the Helsing investment. Just depends. But, yes, I understand, overall.

Speaker 2:

So, revenue's grown 50% since 2022, and they are now guiding for sales. I think they do maybe around, like, €10 billion. I was kinda going back and forth on euros and USD, but they're guiding for sales of $58 billion and an operating margin of more than 20% by 2030. So they have, like, almost AI-growth-level numbers. It feels very similar, where there's a structural change in the way their business is gonna work.

Speaker 2:

Same thing as Eli Lilly. Same story. There's a couple of these stocks where there is now sort of a megatrend, and they are in position to capture a ton of value as long as they can execute. The big question is, you know, what winds up happening? But the third leg of the stool, the third important piece in this story, is the current CEO, the man no one was talking about until today, Armin Papperger.

Speaker 2:

He's been called a white haired Goliath. I love that. CNN randomly threw that

Speaker 1:

a picture?

Speaker 2:

Just randomly threw that in.

Speaker 1:

There he is.

Speaker 2:

There's some other photos. And last year, he was targeted in an assassination plot by the Russians. What? So CNN reported that Russia had made a series of plans to assassinate several defense industry executives all across Europe who were supporting Ukraine's war effort. And they also were planning to set fires in different places; there was an IKEA that got lit on fire.

Speaker 2:

There were a number of different attacks. But, fortunately, American intelligence discovered the plot and informed Germany in time to stop the attack. And now the white haired Goliath

Speaker 1:

is Ryan in chat says, this feels like a paid ad. I can assure you it's not. John woke up this morning. We were at the gym, and he's like, why is no one talking about Rheinmetall?

Speaker 4:

Yeah.

Speaker 1:

And decided to write about it in the in the in the newsletter today.

Speaker 2:

That's funny. No. He's on the cover

Speaker 1:

of the Very interesting.

Speaker 2:

Wall Street Journal. And so they stopped the attack. And so Russia clearly sees Armin Papperger as critical to the European defense ecosystem. But, separately, there is a debate over, like, where the business goes over the next few years. Because on the one hand, like, the NATO inventory requirements are growing a lot.

Speaker 2:

That's gonna drive a lot of net new demand for military equipment purchases, and the market's been historically undersupplied. But on the flip side, Rheinmetall may or may not be able to absorb as much of the demand as they're planning to. They have lots of integration to do between all their different acquisitions. And also with a potential end to the Ukraine war.

Speaker 1:

Feels like the stock would just immediately trade down on news of a peace deal.

Speaker 2:

And it has even on rumors of a peace deal. Exactly.

Speaker 1:

Yeah. It's down 15% over the last month.

Speaker 2:

Yeah. But they're scaling up. The Wall Street Journal says earlier this year, Armin Papperger opened a new factory that will allow his company to produce more of an essential caliber of artillery shell than the entire US defense industry combined. Surrounded that day by dignitaries, including the head of the North Atlantic Treaty Organization, NATO, the Rheinmetall CEO is riding a wave of post-Cold War military spending that is reshaping the global arms trade. Rheinmetall is now the world's fastest growing large defense company and a key player in Europe's quest to rearm. His home country, Germany, is shedding its postwar reticence on military spending to lead the charge.

Speaker 2:

To capitalize, Papperger has pushed the once-obscure gun barrel maker into almost every part of the battlefield, from satellites to warships. And that's what people are kind of saying about, oh, there's a lot of acquisitions. There's a lot of new projects. There's a lot of new deals.

Speaker 2:

Like, you know, the gun barrels, they've been doing that for a hundred and thirty six years. Satellites, they're kinda newer to it. Do they have the lineage? Do they have the experience? Can they stick the landing on those contracts?

Speaker 2:

The the money is certainly there, but is the expertise there? That's the big question. So his goal is to create a go to defense company with the heft and breadth to rival the American giants that have dominated the industry since World War two. And if he's writing any software, he's gotta get on graphite dot dev code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster.

Speaker 2:

Rheinmetall's stock is up 15x since Russia's full-scale invasion of Ukraine in 2022, giving it a market cap of $80 billion, roughly on par with US rivals. And when he took over the job, he started this job in 2013. So he's been CEO of Rheinmetall for twelve years. The company was worth $1.6 billion. Overnight success.

Speaker 2:

Overnight success is right. Sort of like what happened with Lisa Su. You know, she's 10x'd that stock. I mean, the funny thing is that Jensen's also 10x'd NVIDIA in that time. But the Lisa Su story is a little bit more impressive because AMD was really, like, down in the dumps, and she has turned that company around fantastically.

Speaker 2:

Yeah. But back to Rheinmetall. This month, Rheinmetall set out ambitions to quintuple sales by the end of the decade, to the equivalent of roughly $58 billion. Papperger, reflecting on his long tenure at the company, told investors that seeing such figures was like a wonder world. It's a wonder world.

Speaker 2:

I love when an executive is speaking a different language, and it just doesn't quite translate. Like, is that what we say?

Speaker 1:

Kind of get the gist.

Speaker 2:

I get the gist. He's happy. Yeah. I'm happy for him. You know?

Speaker 2:

Good job.

Speaker 1:

Wait. We had a we had a buddy of ours who Yeah. Actually, I'm just gonna I'm gonna name I'm gonna name I'm gonna name him. Was gonna keep him anonymous, but it's just too funny.

Speaker 2:

Two point six is proposing the TBPN x Standard Oil x Rheinmetall collab.

Speaker 1:

Which is, we were texting with Sean Frank and Connor McDonald at The Ridge yesterday about how their Black Friday, Cyber Monday went. And they shared a bit on it. And Sean ends it and says, bro, the future is beautiful, and I am so happy to be alive. Which is, like, an incredible reaction to a successful Black Friday.

Speaker 2:

I'm so glad it went well for them. Mean, the Wall Street Journal did report that on a busy Cyber Monday outage at Shopify halts transactions. Shopify experienced an outage on Cyber Monday that interrupted transactions for some merchants. It sounded like Ridge was not affected.

Speaker 1:

So the real issue was that the admin panel went down, which freaked a lot of people out because you're not able to log in and, like, see what's happening.

Speaker 2:

Yeah. Of course. And and also, like, even if even if everything's working as as standard as expected, like, that's the day you're just refreshing the admin panel all day. Like, because you're just like, how much money am I making? Like, this is really critical.

Speaker 2:

Right?

Speaker 1:

Yeah. So, yeah, there were there were some some reports that a that a handful of merchants had actually had features on their site go down.

Speaker 2:

Yeah.

Speaker 1:

I didn't see any

Speaker 2:

I didn't see a ton of people saying, like, I lost my Cyber Monday. But, obviously, the the the good folks at Shopify will obviously be working extra hard to resolve any of this and and provide a a proper postmortem. Overall, it does seem like Cyber Mondays and Black Friday, Black Friday, Cyber Monday broadly, just went very well. Like, it just it just seems like consumer confidence was up, revenue was up, spending was up.

Speaker 1:

Harley had a post. He said total global Black Friday, Cyber Monday sales by Shopify merchants over the last five years: 2021 was $6.3 billion. 2022 was $7.5 billion. 2023 was $9.3 billion.

Speaker 1:

2024 was $11.5 billion. And then 2025, $14.6 billion. So combination of execution at the company level and execution at the Shopify level. And then, obviously, the market plays a big role as

Speaker 2:

well. Yeah. It it it really did seem like like things are just broadly going well or at least okay. I think everyone's sort of like nervous with with crypto up and down. And is there an AI bubble?

Speaker 2:

How big of a bubble? What will happen?

Speaker 1:

Somewhat of a code red going on.

Speaker 2:

It has been a code red. One part

Speaker 1:

of the world. Before we jump into that story

Speaker 2:

Profound, get your brand mentioned in ChatGPT. Reach millions of consumers who use AI to discover new products and brands.

Speaker 1:

I was going to say, Anduril was covered in The Wall Street Journal. Oh. So The Wall Street Journal has been doing quite a lot of defense tech coverage. Anduril, they had been talking about their approach of not using government funding for testing purposes.

Speaker 2:

Yes, Which

Speaker 1:

historically, a company would get a contract and then they would work to actually make it. And so the government was effectively funding R&D. Anduril has a more traditional venture-style model where they raise VC dollars. They spend that money to test and develop products. And so they gave a quote that was like, we do fail a lot.

Speaker 1:

But the We do fail. A lot. The extra context that was necessary was that that's not happening on the taxpayer's dime. Yeah. And it's part of their approach of doing rapid iteration and sort of like going according to plan.

Speaker 1:

Of course, that was taken out of context and turned into a headline that was, we do fail a lot. And not so dissimilar from what has happened to OpenAI in the last twenty four hours, where it sounds like an internal meeting was leaked. We were wondering, like, what is Well,

Speaker 2:

we can yeah. Let's actually read a little bit more on the Wall Street Journal, Anduril thing. Yeah. Because I think it's interesting, and I wanna go into some of the response, like how they responded to this. First, let me tell you about Cognition, the team behind the AI software engineer, Devin.

Speaker 2:

Crush your backlog with your personal AI engineering team. So Wall Street Journal came out with this story. We do fail dot dot dot a lot. It's a very funny quote. I actually think it's an awesome quote.

Speaker 2:

I will get into it. I think that they should put on T shirts and hats. Like, I think it's actually a very

Speaker 1:

their next campaign.

Speaker 2:

I think it's a very key cultural moment. I think it's like a don't-work-at-Anduril type moment. It makes a ton of sense in terms of, like, the culture of, like, fail fast. This is not new in Silicon Valley, and yet it's still being reframed as new. I'm surprised that there's, like, alpha here still.

Speaker 2:

But Yeah. Let's see. Palmer Luckey says, the valid reasons for slowness are bureaucratic BS and cowardly executives who cater to snide analysts and public market outlets like WSJ that have nothing to say about years late programs and everything to say about a fire that covered 0.0002% of our test site. I'm not even exaggerating. That's the real number.

Speaker 2:

It is exactly what anyone would expect from testing a system that violently blasts lithium powered drones out of the sky. This is what weapons development should look like. Heck. Camp Pendleton has over 200 fires per year on their training range, and that is with fully mature weapon systems. Going on and on about this for paragraphs is so Yeah.

Speaker 1:

Almost getting getting close to a fire every single weekday.

Speaker 2:

Yeah. They obtained satellite imagery that reveals the damage to the grass on the weapons test range. The other the other examples of this story are similarly absurd.

Speaker 1:

John, you're telling me that the grass was damaged at the explosives testing ground?

Speaker 2:

Yeah. It it is You're telling me something that funny the

Speaker 1:

way You're telling me there was an explosion at a weapons testing facility?

Speaker 2:

Yeah. It was wild. The, the other examples in this story are similarly, silly absurd. Oh, no. An engine sucked in a piece of FOD.

Speaker 2:

Stop the presses. Autonomous boat behaves exactly as designed and stops moving when it receives a faulty command, and are all hit by a pattern of setbacks. It's just so pathetic, the type of thing that can only be written and taken seriously by people who have no idea how hardware development actually works. And, of course, a few other folks in the in the ecosystem chimed in. Mainly, there's a good post here from Blake Scholl, founder of Boom Supersonic.

Speaker 2:

He says, If you plan to pass every development test, you'll move slowly and expensively. It's optimal to fail many dev tests. Selective quote outtake into headline suggests a hatchet job, not an honest report on an attempt to do things differently and better. Yeah. I I still just think the, the we do fail a lot is just it's so ripe for a billboard campaign, a t shirt, a hat, or something because it if if you like, the whole thing with Silicon Valley is that you should fail 99 times

Speaker 1:

Get up

Speaker 2:

once. Because if you succeed once and fail 99 times, it's a million times better. It's infinitely better than zero zero failures, zero successes. Like like, you will take a ton of failure for one success. And that's the whole that's the whole ethos.

Speaker 1:

That's the American ethos.

Speaker 2:

That's the yeah. It's the American ethos. It's the technology ethos. It's it it it there's a lot there. Anyway, back to, oh, actually, right.

Speaker 2:

We can wrap up with, you can go read the Wall Street Journal report on Armin Papperger if you want. We gotta figure out how to pronounce his name. He did have one fun line in here, which was, what did he say? He said something like, referring to, so now he's, you know, basically the same value as Lockheed Martin and General Dynamics. And he said, on the US companies, he said, they come to me; ten years ago, it was a different story.

Speaker 2:

It was a different story. And so he's he's just flexing the fact that like he's now big enough that he he deter he, he requires like, you can go visit him because he's like made it.

Speaker 1:

Always a good sign.

Speaker 2:

He yeah. He's taking a little victory lap. And there's some other funny things in here, but you can go read that. Let me tell you about Linear. Meet the system for modern software development.

Speaker 2:

Linear streamlines work across the entire development cycle from road map to release. So let's head over to red alert territory.

Speaker 1:

Gavin Baker, responding to the reporting, says, October, $1.4 trillion in spending commitments. November, rough vibes. And December, code red. Life comes at you fast. Code red.

Speaker 1:

It it certainly has felt it certainly has felt fast

Speaker 2:

Yes.

Speaker 1:

Ever since that fateful podcast.

Speaker 2:

Yes. That was a crazy turning point.

Speaker 1:

Although, there was plenty of conversation, you know, prior to that around

Speaker 2:

Yeah.

Speaker 1:

Around what the trajectory of OpenAI would actually look like.

Speaker 2:

Yeah. It's hard to actually understand the full nuance here. Like, somebody in the replies, a rational analyst, at InsaneAnalyst. What a crazy handle.

Speaker 2:

Says, debt obligations come at you fast. And it's like, that's not really what's happening here. Like, the code red leak that The Information reported, it was clearly, like, some sort of all-hands where Sam Altman was, you know, holding a town hall with the rest of the OpenAI team. And he's kind of just saying, like, lock in.

Speaker 2:

That's what he should have said. Never say code red. You gotta say lock in, brothers. Lock

Speaker 1:

in. Don't say rough vibes.

Speaker 2:

Don't say rough vibes.

Speaker 1:

Code red.

Speaker 2:

Say lock in. Say we're taking that hill. We're storming their fortress. We will grind the Google Gemini team into paste, and we will crush our enemies. We will see them driven

Speaker 1:

before us. Hospitals learned this lesson.

Speaker 2:

Yes. They used

Speaker 1:

to say code red. Yes. That meant there was a fire Yes. In the hospital and that you would probably wanna figure out a way to get out. Yes.

Speaker 1:

Yes. They started is it code blue?

Speaker 2:

Yeah. Now they will say code blue. So if you hear code blue in a hospital, you'll be worried. You need to worry.

Speaker 2:

Worry. But maybe maybe okay. Steelman. Steelman here. Maybe Sam Altman was using code red in the hospital sense.

Speaker 2:

He didn't say code blue. If he had said code blue, we should be really worried. But he said code red. So he's saying it's not that bad.

Speaker 1:

But don't you think they just retired it? I think they just retired it.

Speaker 2:

Yeah. So he's saying, I'm using the retired phrase. I'm not saying code blue. If I was saying code blue

Speaker 1:

code brown, which is a hazardous spill Okay. Which Gemini 3 spilled on It's very hazardous. We got a code brown.

Speaker 2:

Yeah. We got a code brown. That's a crazy is that real, or is that some, like, meme joke?

Speaker 1:

No. This is no. No. I'm reading the hospital emergency codes.

Speaker 2:

Okay. Okay. Well, anyway, let me tell you about Restream. One livestream, 30 plus destinations. If you wanna multistream, go to restream.com.

Speaker 2:

Cha ching. No. I I think I think if you're if you're if you're a CEO who's under incredible scrutiny, like you're Sam Altman, and you have beat reporters at this point who are texting your employees every single day, hey. What's going on? What's on the ground?

Speaker 2:

Give me a quote. What happened?

Speaker 1:

Yeah. So to give people context, a beat reporter might reach out to, they will actually adopt the strategy of just trying to wear someone down Mhmm. Where they will send hundreds of messages Mhmm. To individual people on the team, just over and over and over, relentless Yep. Like email, cell phone

Speaker 2:

Yep.

Speaker 1:

Instagram DM, LinkedIn, just like constantly constantly constantly flooding, hoping that at some point this person just says, like, fine. Like, I'll I'll I'll Well, name with you.

Speaker 2:

Beat reporter comes from them trying to beat you down. That's the whole point.

Speaker 1:

Is that true?

Speaker 2:

That's where it comes from.

Speaker 1:

No way. You're you're you're messing with me? Yeah. I'm messing with you. Okay.

Speaker 1:

Okay. Okay.

Speaker 2:

I have no idea. But I like the idea of it. It's like, they just try and beat

Speaker 1:

down new employees. It's like that

Speaker 2:

They try and beat

Speaker 1:

you down.

Speaker 2:

I got a beat reporter on my team.

Speaker 1:

I was like

Speaker 2:

On my tail. Yeah. Yeah. No. Got it.

Speaker 1:

I mean, it's certainly what it's what it's become.

Speaker 2:

There is a little bit of it. No. No. I think there's beat reporting. There's gumshoe reporting.

Speaker 2:

Gumshoe reporting is where you report, and you're you're actually walking around the town so much that you get gum on your shoes. That's the idea. It's like you're on the ground reporting. You're walking around the city. You're get you're talking to people.

Speaker 2:

And then I think, like, a beat cop and beat reporting is like you're on a beat like it's a drumbeat. Like, every day, you report on the same thing. And so it's about consistency. It's not it's not what are you laughing at now?

Speaker 1:

Ryan, in the chat, if you work at an AI startup and you aren't drinking Mountain Dew Code Red every day, you aren't gonna make it. What if Sam was talking? What if he was just

Speaker 2:

saying You just said you break in.

Speaker 1:

We gotta lock in. I bought us a bunch of Code Red.

Speaker 2:

Yes. I want you all drinking it every

Speaker 1:

day. Yes. It's time to really focus.

Speaker 2:

Yes.

Speaker 1:

And and, of course, that snippet got pulled up. I just wanna know what what what person on the on the OpenAI team thinks it's in their best interest to be in a meeting like that and then just go share inflammatory quotes on said meeting?

Speaker 2:

Just leave. Just just just go make 10 times as much money in a different lab. You know? If you if you don't like your employer, just bounce and make more money. Like, why are you why are you sitting there leaking and and and just dragging your company down?

Speaker 2:

Don't you have stock options? Yeah.

Speaker 1:

That's what I'm so I'm so confused.

Speaker 2:

It's a mole. There's a mole. There's someone inside the organization who's working against them or something. I don't know. Seems rough.

Speaker 2:

Anyway, there is some praise for OpenAI on the timeline, which we should get to, from none other than Blake Robbins. Blake says, OpenAI is operating on a different level. Play that sound cue, Jordy. The amount they have shipped in the past few weeks and months is incredible. Feels like we are witnessing a generational run.

Speaker 2:

This was on October 6.

Speaker 1:

K. This was on October 6. Sora was, I think, number one in the charts at that point. Yes. It's now 21.

Speaker 1:

Yes. Pulse got some excitement early on. Yep. But, I think people are a little bit not feeling as excited. Yep.

Speaker 1:

Atlas launched. Yep. And then, it's hard to really gauge what what adoption has been like. I've I I know some people that that love it.

Speaker 2:

Yep.

Speaker 1:

But

Speaker 2:

So Eric Seufert on October 6 quote tweeted Blake Robbins and kind of summed it up. I think he said, indeed, impressive. But the scattershot nature raises questions about the company's discipline and ability to support these disparate initiatives. Is OpenAI a frontier research lab, a social network operator, a commerce engine, a hardware company? Because it's hard to do all of that well.

Speaker 2:

And then Eric goes back and finds his old

Speaker 1:

And they're trying to they're they're still, like, very much care about competing in code gen. Right? Yeah. Wasn't even listed there. Really important.

Speaker 1:

And so if you go back to the BG2 interview, or just the BG interview Mhmm. Sam's answer to the question of how are you gonna support the $1.4 trillion of commitments was, we're automating science Yes. And we're making, like, consumer electronics. Yes. And the reason that that, to me, was kind of like a concerning answer because

Speaker 2:

Yeah.

Speaker 1:

Google has been doing those things for years.

Speaker 2:

Yeah. But they've earned the right because they have twenty five years

Speaker 1:

funding it with massive cash flow.

Speaker 2:

Yeah. Hundreds of billions of dollars in revenue and and so much cash just to go around. And, like, it's always been this, like, academic lab and this sort of, like, environment where they do side projects. But they've just they I think before they started any of that, they had firmly established themselves as, like, the go to search engine. And they

Speaker 1:

They were funding that with cash, so they were funding these initiatives with cash flow.

Speaker 2:

I believe so.

Speaker 1:

And even though they've been doing it for this long Yep. It's not like Sundar's going out there and saying, guys, we're actually gonna do it. Yep. We're gonna do an extra $100 billion Yep. Next year because we're automating science.

Speaker 1:

Okay. And we're we're we're doing this Yeah. This new consumer electronic device.

Speaker 2:

Yeah. No. No. It it is it is crazy. Let's continue.

Speaker 2:

First, let me tell you about Privy. Privy makes it easy to build on crypto rails. Securely spin up white-label wallets, sign transactions, integrate on-chain infrastructure, all through one simple API. I have a plan, and this comes from the chat, of course. If Sam Altman really wants to set the record straight, everyone's saying Code Red.

Speaker 2:

Oh, Code Red. It's so bad. He needs to come out with a statement. We're gonna Baja Blast Gemini out of the App Store. If he says our plan is to Baja Blast Gemini and Anthropic into the minor leagues of AI research, I think he just wins completely.

Speaker 2:

What do you think, Tim?

Speaker 1:

I I I think

Speaker 4:

people are they're underestimating the the possibility that code red it was actually red was past tense of read. Oh. They're talking about the code that was read by the model.

Speaker 2:

Oh, yes. Yes.

Speaker 4:

Yes. Was the code red Yes.

Speaker 2:

Scenario? It might have been r e You're e

Speaker 4:

talking about the next agent model that

Speaker 2:

was doing code gen. Yeah. I I have read the code, and and we're ready for the next pretraining run. I listened to Mark Chen on Ashley Vance's Core Memory podcast. It's very good.

Speaker 2:

You should go listen. Also, Ashley has a new YouTube channel for the Core Memory podcast. So if you wanna find it, head over there. And it was interesting. Mark Chen, I really like the way he runs that organization.

Speaker 2:

I liked a lot of things he had to say. He had some funny funny takes, some funny anecdotes. Basically, just saying, you know, he's extremely competitive. He doesn't wanna lose. He's he's he's, you know, going all out right now.

Speaker 2:

And that one of the ways he's dealt with the talent wars is to just go to everyone on his team and say, hey. I'm not gonna match dollar for dollar with Meta. Like, if you want to make 10 times as much money, yeah, you're free to leave. Like, you can just go. But we are on a mission here.

Speaker 2:

We're a team, and we think what we're building is so big that in the long term, we will be better. We will be bigger. And he also clarified, interestingly, that although there was a big raid and a lot of people from OpenAI did go to Meta, he was saying, like, there's been a lot of poaching that's happened from OpenAI generally. Like, whenever someone starts a new lab, they always go to OpenAI. They're like, we need at least one OpenAI guy to know how they do it.

Speaker 2:

Right? Makes makes a lot of sense. He also said he didn't lose a single direct report. I don't know exactly how many direct reports he has, but he was saying that he didn't lose a single direct report. So maybe that's like he's trying to say, okay.

Speaker 2:

There were people that were His loyal lieutenants. Yeah. Yeah. His his his lieutenants stuck around. It was sort of interesting.

Speaker 2:

But he did say that he also sort of echoed Sholto and said that he believes that pre-training, there's still low hanging fruit there, that OpenAI will be doing new pretraining runs, that they have seen that scaling is holding, that there's no plateau.

Speaker 1:

They also said they have models internally that outperform Gemini on benchmarks. Yes. And obviously, he caveated that by saying benchmarks aren't the only thing that matter. Yeah. So I do think I mean, it's it's worth sharing

Speaker 2:

Let's play more of Let's actually play this clip Okay. From Ashley Vance here. It says OpenAI has seen Gemini 3 and is both moved and not. We sat down with OpenAI's research chief, Mark Chen. Is

Speaker 5:

this Yeah. Yeah. So to speak to Gemini three specifically, you know, it's a pretty good model. And I think one thing we do is try to build consensus. You know, the benchmarks only tell you so much.

Speaker 5:

And just looking purely at the benchmarks, you know, we actually felt quite confident. You know, we have models internally that perform at the level of Gemini three, and we're pretty confident that we will release them soon, and we can release successor models that are even better. But, yeah, again, kind of the benchmarks only tell you so much. And I, you know, I I think everyone probes the the models in their own way. There there is this math problem I like to give the models.

Speaker 5:

This

Speaker 2:

is funny.

Speaker 5:

I I think so far none of them has quite cracked it, even thinking models.

Speaker 6:

Just tell us?

Speaker 2:

So,

Speaker 5:

yeah. I'll wait for that.

Speaker 2:

Is this is this like a secret math problem?

Speaker 4:

Oh, no. No. No.

Speaker 5:

Well, if I nod to here, maybe it gets trained

Speaker 2:

on it. But It's gonna get so saturated.

Speaker 5:

To speak to Gemini three specifically, you know, it's a pretty good model. And I think

Speaker 2:

I think this is looping.

Speaker 5:

One thing we Yeah. Looped.

Speaker 2:

Yep. Having having a secret math problem that you give every model to to assess it is is is pretty elite. I keep reflecting on like So let's read what Prinz is saying here. So new interview with Mark Chen from OpenAI. Ashley Vance, the interviewer, has apparently been spending a lot of time at OpenAI, including sitting in on meetings.

Speaker 2:

He seems to be writing a book, and he seems to think that OpenAI has made some huge advance in pre training. Pre training seems like this area where it seems like you've figured something out. You're excited about it. You think this is gonna be a major advance. Mark doesn't spill the beans, though.

Speaker 2:

He says, we think there's a lot of room in pretraining. A lot of people say scaling is dead. We don't think so at all. Big question about what that means. Is that scaling RL?

Speaker 2:

Is that scaling dollars in? Is it oh, yeah. If you invest $100 trillion, you can give it one more IQ point. It's like, yeah, that would be an example of, like, scaling holding, but, like, no one's gonna make that trade-off. Yeah.

Speaker 2:

No one no one is gonna be like, yeah. I'm down. Totally. Spend the $100 trillion.

Speaker 1:

Okay. So what what Sam said in the, internal Slack memo

Speaker 2:

Oh, it a Slack memo? Yeah. Okay.

Speaker 1:

Because he was directing more employees to focus on improving features of ChatGPT, such as personalizing the chatbot Mhmm. For more than 800 million people. And again, we've seen them, like, launch more functionality around this. I think the theory is that this could, like, make the product really, really sticky. Mhmm.

Speaker 1:

Whether or not that's true generally is still unclear. Certainly, people have been very loyal to 4o. Altman also said, this is in The Information's piece.

Speaker 2:

Did he mention Baja Blast?

Speaker 1:

He hasn't he hasn't specifically said Baja Blast,

Speaker 2:

but This is the thing.

Speaker 1:

I think he's kind of alluding to it.

Speaker 2:

He's warming up to talking about Baja Blast.

Speaker 1:

Other key priorities covered by the Code Red include ImageGen, the image generating AI that allows users to create a variety of photos. You had included in your newsletter last week that you've been going over to Gemini specifically for Nano Banana. Yes. So I wonder if this

Speaker 2:

is Yeah. We were debating broader trend. Does this does this actually matter? I think it does. I think that the image generation functionality like, fundamentally, what LLMs are doing all what these chatbots are doing is they're they're basically instantiating full web pages.

Speaker 2:

They should be able to instantiate anything that you could possibly land on, whether it's a video, an image, a blog post with images embedded, an audio format. Like, it should be able to not just understand everything and give you the answer, but it should be able to contextualize that answer in any format. And so I do think being able to generate images in a top-shelf, top-tier way matters. The big question we were talking about with Tyler was, should they say, hey, we're just gonna use Nano Banana, which is like a crazy thing.

Speaker 2:

But, you know, there is a world where they say, like, hey. Yeah. Like, we're not gonna focus on that. We're actually gonna just vend in Nano Banana, but we are going to be the front door, the aggregator. And we're just gonna be the actual runway in

Speaker 1:

the background. Right?

Speaker 2:

Yeah. Yeah. Yeah. Hand it off to a different team, potentially. I I don't know.

Speaker 2:

It it like that's probably a little bit too close to home. But Ben Thompson has had this this this claim for a while that potentially OpenAI has has a strong hold on the consumer market to the point where if they swapped out the underlying model, they would still accrue tons of the value because people don't really know what model is which. Like, I think the average user doesn't do it. But first, Tyler has a

Speaker 4:

Yeah. I mean, I I think that especially makes sense in the context of images and video because they're just so expensive. Yeah. Like, I think a Nano Banana Pro image is like I think it's like 10¢.

Speaker 2:

No way.

Speaker 4:

It's really or okay. That might be per, like, a thousand or something. But it's still it's still they're really expensive. Yeah. Videos are even more expensive.

Speaker 4:

Videos are like really really expensive. Oh. So it it I think it makes more sense in that scenario because you would imagine that it's just like so expensive to to vend it yourself. It's like you're spending so much resources on that.

Speaker 1:

Yeah. We

Speaker 2:

have to look at this. I I believe this is Nano Banana. Let me see if I can find this. This Nano Banana Pro image that let me see if we can pull this up. It's it's from John Gregorchuk.

Speaker 2:

It says architects are cooked. AI is coming for you. Prepare accordingly. Have you seen this, Jordy?

Speaker 1:

I did see that.

Speaker 2:

You did see this one? Yeah. Did you look at the image closely?

Speaker 1:

No. Okay.

Speaker 2:

So At

Speaker 1:

all? Is it is it It's

Speaker 2:

one of the funniest images I've ever seen. So basically, this image

Speaker 1:

It's like a it has a walkway with like a 40 foot drop to the ground.

Speaker 2:

I mean, it's not quite that bad, but it's it's close.

Speaker 1:

Yeah. I just don't buy the theory that architects are cooked just because you can generate a floor plan or designs for a home, because the actual process is you're dealing with the city, basically. Right? And you're trying to get things permitted. Yeah.

Speaker 1:

It's not like the problem is just making pretty designs. Right?

Speaker 2:

It's the classic let me see. I'm trying to put this in the chat. It's the classic, is the radiologist's job just to look at images and detect cancer? No. It's way more than that.

Speaker 2:

Okay. So this is the image. And I was actually crying laughing because the tagline is architects are cooked. AI is coming for you. Prepare accordingly.

Speaker 2:

And you see this, and it's like this AI generated image, and it looks like remarkable.

Speaker 6:

Like

Speaker 1:

Looks like a floor plan.

Speaker 2:

It looks like a floor plan. It looks amazing. Like, it looks like okay. Yeah. That's like all the lines are straight.

Speaker 2:

We used to be in the era of, like, any any text would be typoed, there would just be crazy lines everywhere. But you zoom in, and it's, like, one of the funniest layouts ever because you realize that it's just it's just one massive room with with with, like, three or four. Okay. So first off okay. So you come in through the two car garage, then there's a powder room.

Speaker 2:

So

Speaker 1:

it's The mudroom.

Speaker 2:

So first off, there's this mudroom and laundry with two bathtubs in it. Scroll up to the right. Okay. Just go yeah. Right there.

Speaker 2:

So why do you have two bathtubs next to your coat closet?

Speaker 1:

Right? In the mudroom.

Speaker 2:

In the mudroom. And then and also, like, you can't go normally, you come out of garage, you go straight to the mudroom. But here, you have to go into the main area, which is the gallery hall, and then you go from there into the and so scroll to the left a little bit so we can see the

Speaker 1:

What is the coat?

Speaker 2:

Why is it what is it? You go

Speaker 1:

to the bath. There's

Speaker 2:

a powder room, and then there's the coat bath with two toilets. And why are there two toilets next to each other? Remember we were touring that facility and it had two bathrooms right next to each other with no line next to it or

Speaker 1:

Yeah. Yeah. We were in in the in in the the crazy office that that had the machine. One of the bathrooms just had, it was like a it was like meant to be a private bathroom. Yeah.

Speaker 1:

And it just had two toilets there. We were like, what? Why the two toilets?

Speaker 2:

Yeah. So it's like so so you come in through your main foyer, then there's a master bathroom, then there's a coat bathroom with two more toilets. And then there's a huge walk in closet with which isn't even directly attached to anything else. So you have to, like, go through this corridor to get to the rest. And so this master suite has three toilets, but then it gets better.

Speaker 2:

It gets better. So go over to the top right hand side This is so look at Bedroom Number 2. It's just like Woah. Off the center. Then Bedroom Number 3 is there.

Speaker 2:

Then there's a Jack and Jill bath, then scroll down.

Speaker 1:

But there's nothing Three three sinks?

Speaker 2:

Three sinks. Three sinks. No toilets. And then there's another bed bathroom. And then there's a third bathroom with a toilet.

Speaker 2:

This is With a ziplock sinks.

Speaker 1:

This is you might not like it, John, but this is this is is architecture at its best.

Speaker 2:

Yeah. You have you have five you have five five sinks next to your two bedrooms, which and then also bedroom two doesn't have a doesn't have a bed.

Speaker 1:

Anything. It just connects to the it opens into the gourmet kitchen.

Speaker 2:

Gourmet kitchen. But then if you scroll down if you scroll down, you can see that there's, like, this huge walk guest suite. What is the huge walk guest suite? And then you have this, like, massive dining room that just makes no sense. And then down at the bottom to to to kick it off, there's, of course, like, the great room that's directly tied into the kitchen where there's just the most open floor plan you can possibly imagine.

Speaker 2:

Then And if you scroll down, you'll see that there's, like, just these windows that, like like, all of a sudden

Speaker 1:

Trevor in the chat says bathroom scaling law.

Speaker 2:

Like, why are why are all of a sudden the doors, like, vertical instead of this is supposed to be a top down image, and now I'm looking at these doors, and they're, like, presenting

Speaker 1:

What did the comments say? Did the comments say, like, hey, buddy. Why'd why'd you put, you know, three sinks in that one bathroom?

Speaker 2:

Well, everyone everyone gets that it's a joke.

Speaker 1:

Oh, it was meant to be it was meant to be a joke. Yeah. Yeah. Yeah. Okay.

Speaker 2:

Okay. Yeah. Yeah. This John guy, like, totally thinks it's so funny, and he's just, like, joking around. And so everyone's just saying, nightmare fuel.

Speaker 2:

Like, this is crazy. And John's making the same jokes. It's super convenient off the open floor plan. No kitchen toilet? Like, you know, people like, then people just joking about all the different stuff.

Speaker 2:

And I don't know. I mean, you know, is is is AI going to help with, you know, architectural design? Of course. Is it is Nano Banana gonna randomly one shot, like, the perfect floor plan? No.

Speaker 2:

Also, no. But, you know, of course, there's there's stuff that's that's the funniest image. It's so funny. So funny. Anyway, I was I was actually dying laughing at at this thing.

Speaker 1:

Okay. Anyway Back back to the Code Red.

Speaker 2:

Vanta. Automate compliance and security, AI that powers everything from evidence collection and continuous monitoring to security reviews and vendor risk.

Speaker 1:

Yes. Deedy Das is adding fuel to the fire.

Speaker 2:

Fuel to fire.

Speaker 1:

He says, this is why OpenAI is in code red. In the two weeks since the Gemini launch, ChatGPT unique daily active users, a seven-day average, are down 6%. He is sharing, to be clear, web traffic data.

Speaker 2:

These these traffic sources are so rough. I just feel like people use apps. Like, like, the web traffic is probably a good proxy. It's probably a decent proxy. But even then, I I just I don't know how how high intent those users are.

Speaker 2:

Because it's like, do you think you're being tracked by SimilarWeb that effectively? Like, I would hope that I don't have that much spyware on my Chrome browser that Oh,

Speaker 1:

I'm sure

Speaker 2:

knows exactly where I am. Or maybe it does, but I would think that, you know, OpenAI and Gemini and Google would be like, yeah. We're not we're not letting you put a pixel on our site.

Speaker 1:

How is it? Do you know do do

Speaker 2:

We should have we should have someone from SimilarWeb on the show explain it to us. Like

Speaker 1:

Tell us your sources. Yeah. How do Tell us everything.

Speaker 2:

How do you actually calculate all this stuff? Because I mean, you could just poll people. You could just ask a million people. Hey. What are you using?

Speaker 2:

Right? I don't think that's how this works. But the wonder if the

Speaker 1:

any of the the Chrome extensions sell your data. I'm sure I'm sure Yeah. A number of them

Speaker 2:

Yeah. Yeah. I have this I have this Chrome extension installed right now. TBPN timeline viewer. It was vibe coded by someone sitting over there.

Speaker 2:

He doesn't even know what programming language it was written in.

Speaker 4:

This is not true. Blatest.

Speaker 2:

Did I ever tell this story on the show?

Speaker 1:

I don't know.

Speaker 2:

So Tyler doesn't wanna tell it. So Tyler gives us this, we use a Chrome plugin to to, like, track the show when we're sharing posts between us. And, this Chrome plugin, he'd, like, vibe coded it, and he sends it over, and I unpack it to install it. And I'm like, why are there, like, node modules here? Like, like, that's usually for, like, Node.

Speaker 2:

js, JavaScript on the back end. And he's just like, what are you talking about? And I was like, you don't know that you're using Node.js?

Speaker 1:

It's a good extension, sir.

Speaker 2:

Because I think it was just

Speaker 4:

so I trust Claude. I trust Claude to make the right decision.

Speaker 2:

You don't even specify what programming language it uses, which is, like, pretty sick. It's actually extremely bullish for Claude and Claude Code. It's really good. Anyway, part of the code red, of course, is that OpenAI's Sora app has fallen out of the top 20 most downloaded apps in the United States on both the App Store and Google Play, and so things are falling. I actually opened up Sora today.

Speaker 2:

I looked at it, and there was some cool stuff happening. This is a little bit of a hot take. Like, it was not there was still a lot of slop, which I would define as like the, you know, it's a POV video of a bus driver with a bunch of cats on the bus, and it's, like, cute and funny or, like, you know, it's a chipmunk water skiing, like that type of stuff.

Speaker 1:

Why you were late for the gym today?

Speaker 2:

No. But I I was sneaking a peek at Sora while I was driving. If I was like, if I die and crash because I'm looking at slop, this would be extremely depressing. But

Speaker 1:

At a stop sign?

Speaker 2:

Yeah. I'm stopped. But the there there was there was one cool one, which was, like, more like pixel art, actually. And it was interesting because you remember the OpenAI Super Bowl ad?

Speaker 1:

Like Yes.

Speaker 2:

If you if you prompt Sora to make that type of content, it actually is really cool, and you can remix it in a very interesting way. And so, Sam had taken somebody else had done, like, a bunch of geometric shapes pulsing to, like, electronic music. And then Sam was able to take and say, make it orchestral music and make them pastel colors. And he was able to, like, remix off of that. And that felt like, okay.

Speaker 2:

Maybe we're getting into Suno territory. Very odd that Suno and Sora are so close in names. I don't know how that happened. Maybe they should team up or something.

Speaker 1:

Wouldn't be the first time OpenAI has named something.

Speaker 2:

Well, who who oh, yeah.

Speaker 1:

Similar. Yeah.

Speaker 2:

IO. I don't know. But, I don't know. I I I was I was I was seeing, like I I don't think it's fully over. You know?

Speaker 2:

I I think it's, like, it's in a it might be in just a trough of disillusionment. You don't know. This could be this could be a trough.

Speaker 1:

In the in the trough.

Speaker 2:

The trough is in the trough. It's entirely possible. But, clearly, the vibes are rough, and people are taking shots. Terminally online engineer says, just put the ads in the chatbot, little bro. Because Sam Altman says OpenAI is making a very aggressive infrastructure bet with new partnerships.

Speaker 1:

But to to be fair, this clip was from

Speaker 2:

Long time ago.

Speaker 1:

An interview that was, like, at least a couple months ago.

Speaker 2:

Yes. Yes. Yes. And also, you can do both. What is interesting is that it's it's maybe

Speaker 1:

It sounds like they're delaying

Speaker 2:

It sounds like they're

Speaker 1:

delaying ads.

Speaker 2:

Ads, which feels odd, because I'm maybe the only person that's really excited about ads in ChatGPT. I think it's a good thing for the business. I think it makes a ton of sense, and I was excited to see where that rolls out. I hope that they don't delay it. I think that's where they should be running.

Speaker 2:

But, if that if they really are losing ground to to Google and Gemini's and, like, the Gemini app so quickly, I'm shocked because it feels like the Gemini three news, like, the launch went well. People were excited about the model. The model card looked good. The benchmarks looked good. But you still have to be pretty tuned in to understand the nuances of the model one way or another.

Speaker 2:

Like, it's just not like the big model smell and the vibes. Like, your average AI user doesn't care if the model responds with, it's not just this, it's that. Like, most people, clearly, that's why it wound up getting RL'd into the model. Most people are like, wow.

Speaker 2:

Contrastive parallelism. This is epic. I love it. Thank you. Like, this is really

Speaker 1:

Contrastive parallelism.

Speaker 2:

Antithetical parallelism. Like, I've never this is like a big big big phrase, big word. Like, this is amazing. And so I I'm I'm shocked that there would be such a I'm not I'm not shocked by, like, a vibe shift in on X and in Teapot with regard to how people have been skeptical of the OpenAI financing. And so they've been looking for a a a crack to show.

Speaker 2:

And Gemini coming out and and and leapfrogging a little bit, even if it's just on some obscure benchmark that the end user might not even care about. I was really interested in I I understand that, like, X would jump on that narrative, but I'm surprised to see, if it's true, this idea that, like, there's actually some sort of consumer shift. And, I mean, it seems like with the red alert comments, like, maybe maybe it is maybe it is.

Speaker 1:

So you think they have to explain the funding gap at this point? Or can we all just agree that maybe maybe everyone got a little too excited?

Speaker 2:

Yeah. I don't know. I don't know. I I feel like everyone's sort of repriced everything already with the Oracle round tripping and just this idea that, you know, some of the equity investments, like, are circular, but it's basically just like a discount on their purchases. And, you know, these things probably aren't as binding as as we think.

Speaker 2:

And so I feel like the the OpenAI is gonna blow up the economy narrative. I feel like that was really oversold and is is much it it it should be fading in my opinion, but I don't know. Buco Capital bloke has been digging into the funding hole.

Speaker 1:

Apparently, ChatGPT is also down right now. I just tested it. It's not down for me, but the chat says it's down.

Speaker 2:

Well, would

Speaker 1:

And x is saying it's down.

Speaker 2:

Oh, really? Oh, wow. Time to Baja blast those servers back online, brother. It's time to rock. We need a pump up speech that doesn't include any negative phrases that can be taken out of context.

Speaker 2:

We need we need to be Baja blasting. We have to Baja blast. We have to Baja blast our way to the top of the App Store. Sora team, I need you to Baja blast. Bill,

Speaker 1:

it's time to

Speaker 2:

It's time to Baja blast to the top of the App Store. You have to Baja blast to the top of the App Store. And we're gonna have to Baja blast some funding into this company, because, apparently, there's a $270,000,000,000 funding hole here. This is from a podcast between Ranjan, who writes Margins, and Alex Kantrowitz at Big Technology. They did a podcast together.

Speaker 2:

And here's the quote from Buco Capital Bloke. He says, squaring the total, it leaves OpenAI in a $477,000,000,000 funding hole. The math doesn't work. Maybe OpenAI should release to the world, here's how the math can work, because I haven't seen anyone state how this can actually work. And so even if you get there, OpenAI does fall $207,000,000,000 short of the money.

Speaker 2:

It needs to continue funding its commitments, right? So, in 2030, OpenAI's free cash flow will be about $287,000,000,000. That's, like, insane. This feels silly to me, because if you're in a situation where you have $287,000,000,000 of free cash flow, like, you can't raise more debt on that? Like, I feel like math tends to work out when you go from a nonprofit to a $300,000,000,000 cash flow a year in ten years.

Speaker 2:

Like, it just the everything just forms in front of you. Like, you're like, yes. You are building the bridge as you're driving, but, like, that tends to happen when you're on that much of a tear. The bigger question is, can they actually free cash flow $287,000,000,000 in 2030?

Speaker 1:

So Amazon's free cash flow for 2024 was $38,000,000,000. And let's see what Google's was. Yeah, it's like $72,000,000,000. So they're saying they're gonna do three times more than Google and Amazon combined.
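
For a rough sense of scale, a back of the envelope sketch using the figures quoted above (the 2030 number is the projection being questioned, not a reported figure):

```ts
// Back-of-the-envelope comparison of the free cash flow figures cited above.
// These are the numbers as quoted on the show, not independently verified.
const projectedOpenAiFcf2030 = 287e9; // claimed ~$287B free cash flow in 2030
const amazonFcf2024 = 38e9;           // ~$38B, as cited
const googleFcf2024 = 72e9;           // ~$72B, as cited

const vsGoogleAndAmazonCombined =
  projectedOpenAiFcf2030 / (amazonFcf2024 + googleFcf2024);

console.log(vsGoogleAndAmazonCombined.toFixed(1));
// ~2.6x the 2024 free cash flow of Google and Amazon combined,
// or roughly 4x Google's and about 7.5x Amazon's on their own.
```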

Speaker 2:

So this is the HSBC report is modeling 386,000,000,000 in annual enterprise AI revenue by 2030. Enterprise AI revenue. That's these are just huge numbers. It's it's almost not worth analyzing. I still think the biggest thing is just understanding how significant, how tied up are these contracts.

Speaker 2:

Well, let me tell you about Fal, the generative media platform for developers: develop fine-tuned models with serverless GPUs and on-demand clusters. So what else is going on? We should read through Ben Thompson's latest piece, because he's provided a lot more context on Google, NVIDIA, and OpenAI with a post called Google, NVIDIA, and OpenAI.

Speaker 4:

Would you

Speaker 1:

look at that?

Speaker 2:

And we thank Ben Thompson for always having an even keel. Highly recommend subscribing to Stratechery. It's a fantastic publication if you're not subscribed already. And he's a former guest of the show. So let's read through his latest Monday piece.

Speaker 2:

He says, a common explanation as to why Star Wars was such a hit and continues to resonate nearly half a century on from its release with everyone except Jordy Hayes who hasn't seen it, because he hasn't seen any movies.

Speaker 1:

I've seen Star Wars, John.

Speaker 2:

How many Star Warses have you seen?

Speaker 1:

I have to have seen all of them except some of the more

Speaker 2:

there's been some recent All of them except some of them.

Speaker 1:

Well, no. The the the more the more recent ones are the like, it hasn't there have been, like, a new Star Wars in the last

Speaker 2:

How many how many Star Wars are

Speaker 1:

there, Jordy? There's there was is there six?

Speaker 2:

Six? There's six Star Warses? That's how many movies they've made?

Speaker 1:

Six, like, Star Wars.

Speaker 2:

There's six. There's six.

Speaker 1:

Yeah. I'm gonna I'm gonna hasn't there been, like,

Speaker 2:

six in Everyone calls it the septilogy. Yeah. There's there's six.

Speaker 4:

Wait. Aren't there are six.

Speaker 2:

There's nine. There's three trilogies. There's the there's the original trilogy

Speaker 1:

Okay.

Speaker 2:

Prequel trilogy, and then the sequel trilogy. And then there's also two spin offs.

Speaker 1:

Okay. So I didn't watch I didn't watch any of, like, the the the, like, new ones. You didn't but but you watched the prequel Skywalker.

Speaker 4:

You just watched the ones George Lucas directed.

Speaker 2:

Yeah. Okay. Okay. So so so he's a Lucas head.

Speaker 1:

I'm a yeah. Exactly.

Speaker 2:

Okay. So you've seen A New Hope. You've seen The Empire Strikes Back.

Speaker 1:

Real Star Wars.

Speaker 2:

You've seen Return of the Jedi, and then you've seen Phantom Menace and Revenge of the Sith and Return of the something. I actually don't know that much about Star Wars. But, anyway, you should know enough to follow along with this analogy from Ben Thompson. He says, you have Luke, bored on Tatooine, called to adventure by a mysterious message borne by R2-D2, that he initially refuses, the refusal of the call. This is the classic hero's journey.

Speaker 2:

So he refuses the call. A mentor in Obi Wan Kenobi leads him to the threshold of leaving Tatooine and faces tests while finding new enemies and allies. He enters the cave, the Death Star, escapes after the ordeal of Obi Wan's death. Spoiler alert, Ben, what are you doing, brother? What if somebody hasn't seen it and they don't know that Obi Wan dies?

Speaker 2:

It's crazy. Oh. And carries the battle station plans to the rebels while preparing for the road back to the Death Star. He trusts the Force in his final test and, and returns transformed. And when you zoom out to the original trilogy, it's simply an expanded version of this of the story.

Speaker 2:

This time, however, the ordeal is the entire second movie, The Empire Strikes Back. The heroes of the AI story over the last three years have been two companies, OpenAI and NVIDIA. The first is a startup catapulted, with the release of ChatGPT, into being the next great consumer tech company. The other was best known as a gaming chip company characterized by boom and bust cycles, driven by their visionary and endlessly optimistic founder, transformed into the most essential infrastructure provider for the AI revolution. Over the last few weeks, however, both have entered the cave. They're in the cave.

Speaker 2:

There's the cave of disillusionment, and they are facing their greatest ordeal. The Google empire is very much striking back. And I believe, didn't Anjney, over at a16z, coin that, like, Empire Strikes Back?

Speaker 1:

Formerly of a16z. Oh, he's independent.

Speaker 2:

Oh, he's independent.

Speaker 1:

Yeah. Oh, I had no idea.

Speaker 4:

Which So is this what, Sam meant when he tweeted the picture of the Death Star? Because I don't I feel like we never really figured out what he meant by that. That was before GPT five, I think.

Speaker 1:

Yes. I think he wanted the 2025 vague post of the year award.

Speaker 4:

It's so vague that even after the release

Speaker 2:

No. No.

Speaker 1:

No. No.

Speaker 2:

I I I think this is it. I think this is It's it's Google's the empire, and and he's launching the thing

Speaker 4:

that will take it down. Because, like, OpenAI was, like, clearly in the lead of the models. Like, until 2.5. I think 2.5 was the best of them all.

Speaker 2:

Cash flow. You know? Who has more soldiers? Who has more researchers? Who has more TPUs?

Speaker 2:

Right? Like, they they you know, it it'd be fair to characterize it'd be fair to characterize Google as the empire the whole time.

Speaker 6:

Yeah. I mean,

Speaker 4:

Since, I guess, the founding of OpenAI, Google was definitely the Death Star. Interesting. Isn't that kind of the origin story of OpenAI?

Speaker 2:

Yeah. Yeah. Yeah. We it was. They were they were worried about that.

Speaker 2:

Anyway, I enjoy the Star Wars based analogies almost as much as I enjoy numeral.com. Compliance. Handled. Numeral worries about sales tax and VAT compliance so you can focus on growth. So Google strikes back.

Speaker 2:

The first Google blow was Gemini three, which scored better than OpenAI's state of the art model on a host of benchmarks, even if actual real world usage was a bit more uneven. Gemini three's biggest advantage is its sheer size and the vast amount of compute that went into creating it. This is notable because OpenAI has had difficulty creating the next generation of models beyond the GPT-4 level of size and complexity. What has carried the company is a genuine breakthrough in reasoning that produces better results in many cases, but at the cost of time and money. Woah.

Speaker 2:

Time and money.

Speaker 1:

Throwing a ramp.com

Speaker 2:

ad right in the middle of the Stratechery article. I love it. Gemini three's success seemed like good news for NVIDIA, who I listed, Ben listed, as a winner from the release. Quote, this is maybe the most interesting one. NVIDIA, who reports earnings later today, is on one hand a loser, because the best model in the world was not trained on their chips, proving once and for all that it is possible to be competitive without paying NVIDIA's premiums.

Speaker 2:

On the other hand, there are two reasons for NVIDIA's optimism. The first is that everyone needs to respond to Gemini, and they need to respond now, not at some future date

Speaker 1:

Did you

Speaker 2:

when their chips are good enough.

Speaker 1:

Did you know that Sundar apparently, like, people were claiming that he had used the phrase Code Red back in 2022, at the ChatGPT launch?

Speaker 2:

Yes.

Speaker 1:

Yes. And so, I'm remembering that now, but I missed it. Sundar came out and said he didn't use that exact term, but there was reporting that he did.

Speaker 1:

And I heard

Speaker 2:

A rumor that he also said that he wanted to Baja blast Sam Altman

Speaker 1:

Out of the atmosphere.

Speaker 2:

Of San Francisco, out of the atmosphere with a Death Star laser. Yeah. No.

Speaker 1:

It is crazy. I wanna try to find more historical examples.

Speaker 2:

Of Baja blasting, folks. We got a Baja blast. You got a Baja blast sometimes. So Google started its work on TPUs a decade ago. Everyone else is better off sticking with NVIDIA.

Speaker 2:

At least if they wanna catch up. Secondly, and relatedly, Gemini reaffirms that the most important factor in catching up or moving ahead is more compute. This analysis, however, missed one important point. What if Google sold its TPUs as an alternative to NVIDIA? We're gonna talk to Tae Kim, author of The Nvidia Way, about that.

Speaker 1:

He's gonna tell all.

Speaker 2:

He's gonna tell all. He's breaking his silence. So, that's exactly what the search giant is doing. First, with a deal with Anthropic, then a rumored deal with Meta, and third, with the second wave of neo clouds, many of which started as crypto miners and are leveraging their access to power to move into AI. So a lot of those Neo Clouds, they they found a bunch of power, and they don't really have the right chips yet, or maybe they're upgrading their chips.

Speaker 2:

They might be in a new cycle, and so TPU could be at the top of the menu for them. Suddenly, it is NVIDIA that is in the crosshairs, with fresh questions about their long term growth, particularly at their sky high margins, if there were, in fact, a legitimate competitor to their chips. This does, needless to say, raise the pressure on OpenAI's next pretraining run on NVIDIA's Blackwell chips: the base model still matters, and OpenAI needs a better one, and NVIDIA needs evidence that it can be created on their chips. What is interesting to consider is which company is more at risk from Google and why.

Speaker 2:

On one hand, NVIDIA is making tons of money, and if Blackwell is good, Vera Rubin promises to be even better. Moreover, while Meta might be a natural Google partner, the other hyperscalers are not. They're not gonna be selling. You you know, is we're gonna have the CEO of Amazon Web Services, Matt Garman, on the show in just

Speaker 1:

And AWS announced a new chip.

Speaker 2:

And I don't think AWS is gonna be buying TPU anytime soon, but we will be asking him that question exactly. And I wanna get to the bottom of it. So OpenAI, meanwhile, is losing more money than ever and is spread thinner than ever even as the startup agrees to buy ever more compute with revenue that doesn't exist yet.

Speaker 1:

That's the Not mincing words.

Speaker 2:

And yet, despite all that, and while still being quite bullish on NVIDIA, I still like OpenAI's chances more. Woah. Oh, Ben Thompson likes OpenAI's chances. Indeed, if anything, my biggest concern is that I seem to like OpenAI's chances better than OpenAI itself. Woah.

Speaker 2:

NVIDIA's moats. Wait. He wrote this before he wrote this before the red alert? Interesting. He's really got a crystal ball over there.

Speaker 2:

So, NVIDIA's moats. If you go back a year or two, you might make the case that NVIDIA had three moats relative to TPUs: superior performance, significantly more flexibility due to GPUs being more general purpose than TPUs, and CUDA and the associated developer ecosystem surrounding it. OpenAI, meanwhile, had the best model, extensive usage of their API, and the massive number of consumers using ChatGPT. The question then is what happens if the first differentiator for each company goes away?

Speaker 2:

That, in a nutshell, is the question that's been raised over the last two weeks. Does NVIDIA preserve its advantages if TPUs are as good as GPUs? And is OpenAI viable in the long run if they don't have the unquestioned best model? So NVIDIA's flexibility advantage is a real thing. It's not an accident that the fungibility of GPUs across workloads was focused on as a justification for increased capital expenditures by both Microsoft and Meta.

Speaker 2:

TPUs are more specialized at the hardware level and more difficult to program for at the software level. To that end, to the extent that customers care about flexibility, then NVIDIA remains an obvious choice. The interesting thing about the flexibility point is: isn't SSI a big TPU buyer? I feel like SSI was maybe going big on TPU, and I think of SSI as very much like, we're going to experiment. We need maximum flexibility.

Speaker 2:

By default, I would assume that they're a heavy consumer of GPU, because they want as much flexibility as possible. But maybe the nature of Ilya's research is flexibility that is still afforded within the TPU ecosystem.

Speaker 1:

They're using TPUs through Google Cloud? Yeah. But there's been no

Speaker 2:

Oh, yeah. They're not buying them. They're renting them. But still, that just means

Speaker 1:

more flexibility because you can just turn it on or off.

Speaker 2:

Yeah. I'm talking about the actual, like, the TPU is an ASIC. It has, like, literally, like, fewer features than the GPU. Like, a gaming GPU and, like, the TPU doesn't, I think it doesn't support, like, FP4, right, or something like that. There's some type of math that is harder to do on a TPU because it's making trade offs.

Speaker 2:

Right? Yeah. And and so even though I don't understand it fully, I understand that that, you know, TPUs are more specialized at the hardware level. And so if you were to be in, like, the era the era of research, maybe you would want something that's less specialized because you'd be like, I'm going back to exploring all sorts of different types of math that aren't

Speaker 1:

necessarily Yeah. Again, if you're if you're buying TPUs through the cloud or you're effectively just buying cloud services Yeah. You do have more flexibility because you can say, hey, we're we wanna use more of this or we wanna use less. You're not, like, buying a bunch of servers and

Speaker 2:

Yeah. And Yeah. Yeah.

Speaker 1:

Chips that

Speaker 2:

that you On the scale on the scale thing, that makes perfect sense.
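
To make the precision point concrete, here is some generic bytes per parameter arithmetic. The specific claim about TPUs and FP4 above is the hosts' guess, and this sketch is not a statement about any particular chip; it just shows why support for lower-precision formats matters:

```ts
// Rough memory-footprint arithmetic for serving a model at different precisions.
// Generic bytes-per-parameter math, not tied to any specific GPU or TPU generation.
const params = 70e9; // assumption: a 70B-parameter model

const bytesPerParam: Record<string, number> = {
  bf16: 2,  // 16-bit floating point
  fp8: 1,   // 8-bit floating point
  fp4: 0.5, // 4-bit floating point
};

for (const [format, bytes] of Object.entries(bytesPerParam)) {
  const gb = (params * bytes) / 1e9;
  console.log(`${format}: ~${gb} GB of weights`);
}
// bf16: ~140 GB, fp8: ~70 GB, fp4: ~35 GB.
// If a given piece of hardware (or its software stack) can't run a format,
// you give up that memory and bandwidth saving, which is the kind of
// trade-off the discussion above is gesturing at.
```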

Speaker 4:

Yeah. I think also, like, historically, TPUs have definitely been more restrictive, just because the software was just not as good Mhmm. Or it was closed source or whatever. And then, you know, yesterday, Dylan Patel was talking about how Google is slowly trying to open source more and more stuff for

Speaker 2:

the TPU.

Speaker 4:

So you would imagine that in the future, it should be, you know, much easier to use TPUs generally.

Speaker 1:

Yep. Sean in the chat says, flexibility in terms of, hey, I have this new architecture. Do I need to write kernel code from scratch, or is there a nice CUDA module I can just use? Right? Flexibility is engineering work from NVIDIA.

Speaker 2:

Yeah. Yeah. No. That that that's a really good point. So CUDA, meanwhile, has been a critical source of NVIDIA lock in, both because of the low level access it gives developers, but also because there is a developer network effect.

Speaker 2:

Dylan Patel was talking about this. You're just more likely to be able to hire low level engineers if your stack is on NVIDIA. The challenge for NVIDIA, however, is that the big company effect could play out with CUDA in the opposite way to the flexibility argument. While big companies like the hyperscalers have the diversity of workloads to benefit from the flexibility of GPUs, they also have the wherewithal to build an alternative software stack. That they did not do so for such a long time is a function of it simply not being worth the time and trouble. When capital expenditure plans reach the hundreds of billions of dollars, however, what is worth the time and trouble changes.

Speaker 2:

A useful analogy here is the rise of AMD in the data center. That rise has not occurred in on premises installations or the government, which are still dominated by Intel. Rather, large hyperscalers found it worth their time and effort to rewrite extremely low level software to be truly agnostic between AMD and Intel, allowing the former's lead in performance to win the battle. And so AMD: better performance, better efficiency per dollar, but didn't have the best software. And now, because there's so much on the line, the spending amount is so high, companies will go and work around all the bugs, develop new software that allows them to take advantage of AMD's better performance.

Speaker 2:

In this case, the challenge NVIDIA faces is that its market is a relatively small number of highly concentrated customers with the resources, mostly as yet unutilized, to break down the CUDA wall, as they already did in terms of Intel's differentiation. It's clear that NVIDIA has been concerned about this for a long time. This is from NVIDIA Waves and Moats, which he wrote at the absolute top of the NVIDIA hype cycle after the 2024 introduction of Blackwell. This takes this article full circle. This is from the previous Ben Thompson article in Stratechery.

Speaker 2:

It says, in the before times, i.e., before the release of ChatGPT, NVIDIA was building quite the free software moat around its GPUs. The challenge is that it wasn't entirely clear who was going to use all of that software. Today, meanwhile, the use cases for those GPUs are very clear, and those use cases are happening at a much higher level than CUDA frameworks, i.e., on top of models. That, combined with the massive incentives towards finding cheaper alternatives to NVIDIA, means both the pressure to and the possibility of escaping CUDA is higher than it ever has been, even if it is still distant for low level work, particularly when it comes to training.

Speaker 2:

NVIDIA has already started responding. I think that one way to understand DGX Cloud is that it is NVIDIA's attempt to capture the same market that is still buying Intel server chips in a world where AMD chips are better, because they have already standardized on them. NIMs are another attempt to build lock in. In the meantime, though, it remains noteworthy that NVIDIA appears not to be taking as much margin with Blackwell as many have expected. The question as to whether they will have to give back more in future generations will depend on not just their chip performance, but also on redigging a software moat increasingly threatened by the very wave that made GTC such a spectacle.

Speaker 2:

So Blackwell margins are doing just fine, I should note, he's back to the original article, the Monday article, as they should be in a world where everyone is starved for compute. Indeed, that may make this entire debate somewhat pointless. Implicit in the assumption that TPUs might take share from GPUs is that for one to win, the other must lose. The real decision maker may be TSMC, which makes both chips and is positioned to be the real brake on the AI bubble. Interesting. So, ChatGPT and Moat Resiliency.

Speaker 2:

That's

Speaker 1:

I can read through these ones. ChatGPT, in contrast to NVIDIA, sells into two much larger markets. The first is developers using their API, and according to OpenAI anyways, this market is much stickier and reticent to change, which makes sense. Developers using a particular model's API are seeking to make a good product. And while everyone talks about the importance of avoiding lock in, most companies are going to see more gains from building on

Speaker 2:

Never avoid lock in.

Speaker 1:

Expanding Always lock

Speaker 2:

in.

Speaker 1:

From what they already have. Always Baja blast. And for a lot of companies, that is OpenAI. One caveat I would add here: we were talking to a founder yesterday, off the show, who was saying that immediately, as soon as Gemini three launched, he spent like twelve hours just moving over to Gemini from OpenAI. So depending on the product, I don't know that the API is always gonna be super sticky.

Speaker 1:

Yep. I'd say winning business one app at a time will be a lot harder for Google than simply making a spreadsheet presentation to the top of a company about upfront costs and total cost of ownership. Still, API costs will matter. And here, Google almost certainly has a structural advantage. The biggest market of all, however, is consumer, Google's bread and butter.

Speaker 1:

What makes Google so dominant in search, impervious to both competition and regulation, is that billions of consumers choose to use Google every day, multiple times a day, in fact. Yes, Google helps this process along with its payments to its friends, but that's downstream from its control of demand, not the driver. What is paradoxical to many about this reality is that the seeming fragility of Google's position, competition

Speaker 4:

really is

Speaker 1:

a click away. It is, in fact, its source of strength. And then there's a excerpt from

Speaker 2:

Yeah, we can skip this one and continue at the bottom. The CEO of a hyperscaler can issue a decree to work around CUDA. An app developer can decide that Google's cost structure is worth the pain of changing the model undergirding their app. Changing the habits of 800,000,000 people who use ChatGPT every week, however, is a battle that can only be fought individual by individual. This is ChatGPT's true difference from NVIDIA in their fight against Google.

Speaker 2:

Yeah. And so this, I think, is the most important takeaway: Ben Thompson created aggregation theory, this idea that it's so important to aggregate demand in the modern Internet world. It's potentially the only thing you can do. You can't really monopolize supply.

Speaker 2:

It's very hard to monopolize supply, but monopolizing demand is something that happens. Yeah. And and and the the strength of habits is significant. Like, we're watching this stuff every single day so we can take the time to, okay, yeah, we should test out this other, you know, model. We should daily drive this app.

Speaker 2:

But for a lot of people, if they have an app that's installed and they've been using it for a year, they're never changing. Yep. Even if the model's slightly better over there, they're just not even gonna hear about it, because they're just like, this is the thing that I use to plan my vacations or

Speaker 1:

this And the thing the thing that I've heard come up multiple times is people that when Gemini three launched, they switched to Gemini three on desktop Yep. But they stayed using ChatGPT on mobile.

Speaker 2:

Yep. And, I mean, to be completely transparent, like, the Gemini mobile app is really, really struggling to stay connected. There's something in the, when you fire off a prompt, it doesn't, like, save it locally and cache that and then send it off, inference it, and then come back. Like, unless you keep the app open, it will just give you, like, a server disconnected error. Like, I've gotten, like, dozens of these.

Speaker 2:

And that's gonna be a real, like, I think it should be something that they should be able to fix in a weekend. But Yes. You know, hopefully, it's soon. But for a lot of people
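
A minimal sketch of the save locally, then send, then retry pattern being described. The names and the endpoint are hypothetical; this is illustrative, not Gemini's actual client code:

```ts
// Fire-and-forget prompt queue: persist the prompt before any network call,
// then retry later instead of surfacing a disconnect error to the user.
// All names here are hypothetical.
type PendingPrompt = { id: string; text: string; createdAt: number };

const queue: PendingPrompt[] = []; // stand-in for on-device storage

async function sendPrompt(text: string): Promise<void> {
  const prompt: PendingPrompt = {
    id: crypto.randomUUID(),
    text,
    createdAt: Date.now(),
  };
  queue.push(prompt); // save locally first
  await flushQueue();
}

async function flushQueue(): Promise<void> {
  for (const prompt of [...queue]) {
    try {
      // hypothetical backend endpoint; the point is the retry structure
      const res = await fetch("https://example.invalid/inference", {
        method: "POST",
        body: JSON.stringify({ text: prompt.text }),
      });
      if (!res.ok) throw new Error(`server responded ${res.status}`);
      queue.splice(queue.indexOf(prompt), 1); // only drop it once delivered
    } catch {
      // leave the prompt queued; retry on next app foreground or on a timer
      break;
    }
  }
}
```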

Speaker 1:

Logan.

Speaker 2:

Yeah. Well, they'll they'll get it.

Speaker 1:

They'll get it.

Speaker 2:

But, back to the moat map and advertising. This is, I think, a broader point. The naive approach to moats focuses on the cost of switching. In fact, however, the more important correlation to the strength of a moat is the number of unique purchasers or users. The strength of the moat is increased by the number of buyers.

Speaker 2:

Okay. So that you can see where this is going with like, Nvidia has five buyers Yeah. And ChatGPT has a billion buyers essentially.

Speaker 1:

And once it has advertisements.

Speaker 2:

Once it has advertisements. Yeah. Advertisers might be even more. Right? Yep.

Speaker 2:

So, this is certainly one of the simpler charts I've ever made, because it's literally just one line. But it's not the first in the moat genre. So he talks about the moat map. I argued that you could map large tech companies across two spectrums: the degree of supplier differentiation, from Facebook, where the suppliers are completely commoditized, just your friends on Facebook, to Microsoft and Apple, where the suppliers are somewhat more controlled.

Speaker 2:

Yeah. There's the

Speaker 1:

What a chart.

Speaker 2:

The more unique buyers of your product you have, the stronger your moat, because you have to convince each one of them. And then second, the extent to which a company's network effects were externalized. Internalized network effects are Facebook again, and then externalized is Microsoft. And so putting this together gave the moat map: network effects versus supplier differentiation. If we scroll down there, you can see them. What you see in the upper right are platforms.

Speaker 2:

The lower left are aggregators. Platforms like the App Store enable differentiated suppliers, which allows them to profitably take a cut of purchases driven by those differentiated suppliers. Aggregators, meanwhile, have totally commoditized their suppliers but have done so in the service of maximizing attention, which they can monetize through advertising. It's the bottom left that I'm describing with the simplistic graph above. The way to commoditize suppliers and internalize network effects is by having a huge number of unique users and, by extension, the best way to monetize that user base and to achieve a massive user base in the first place is through advertising.

Speaker 2:

It's so obvious the bottom left is where ChatGPT sits.

Speaker 1:

I wonder I wonder what he thinks then about, about them potentially kind of

Speaker 2:

Delaying ads?

Speaker 1:

Away and delaying ads.

Speaker 2:

Probably punching the air. Him and Eric Seufert are probably, no. And I'm right there with them. I completely agree. Boo.

Speaker 2:

Launch the ads product. Launch the ads product. Get it out. Come on. Don't delay that.

Speaker 2:

That's the most important thing. So at one point, it didn't seem possible to commoditize content more than Google or Facebook did, but that's exactly what LLMs do. The answers are a statistical synthesis of all the knowledge the model makers can get their hands on and are completely unique to every individual. At the same time, every individual user's usage should, at least in theory, make the model better over time. It follows then that ChatGPT should obviously have an advertising model.

Speaker 2:

This isn't just a function of needing to make money. Advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback, capturing purchase signals not from affiliate links but from personalized ads, would create a, would create a richer understanding of individual users, enabling better responses. And as an added bonus and one that is very pertinent to this article, it would dramatically deepen OpenAI's moat. Yeah.

Speaker 2:

I I keep going back to, this idea that OpenAI needs personalized ads like like Instagram. Like, that's what Sam said when he was interviewed multiple times on his ad strategy. He was like Yeah. You know, like, ads can be bad, but these these Instagram ads are pretty good. You know?

Speaker 2:

He's he's and people are, oh, he's like he's backtracking. It's like, no.

Speaker 6:

That's

Speaker 2:

fine. Like, do the proper business model. Like, please implement the correct business model. I'm happy about that. But the interesting thing is that Instagram does not necessarily serve you ads when you search for something.

Speaker 2:

Like, if you're if you're on a video for, like, a Ferrari, like, you don't just immediately get an ad as your next thing for, like, Ferrari of Hollywood, like, or Ferrari of Beverly Hills. Like, no. You get an ad for the toaster that you were about to check out on.

Speaker 1:

I actually do. Like, half half half of the ads I get on Meta are local dealerships in LA.

Speaker 2:

Yes. But importantly, not when you're tied to search. It's not tied to search. And so ChatGPT can do the same thing, where they can clearly show you something that you are about to check out. You're shopping for Christmas for this thing.

Speaker 2:

You're searching for the Roman Empire. Let's show you the ad for the thing that you're shopping that you're shopping for, like, right next to it. It's fine. And so I I think that can work very well. Anyway, let's go to Google's advantages.

Speaker 2:

It's not a question that Google can win the fight for consumer attention. The company has a clear lead in image and video generation, which is one of the, of the reasons why I wrote about the YouTube tip of the Google spear. I mean, Google's advantage in data is insane. Like, YouTube. So massive, that's got to be just the compounding I mean, we're uploading another three hours of video to YouTube today.

Speaker 2:

You're welcome.

Speaker 1:

This one's for you, Sundar.

Speaker 2:

This one's for you, Demis. But the flip side is, like, they also see the entire Internet, because of the way the Google bot scrapes. Like, the Google search in Gemini is such a killer feature. Like, it's such a killer feature if they can keep that on and they can actually surface that. Like, and the AI search results are obviously gonna get good. They're gonna figure out how to surface it.

Speaker 2:

I think. I'm still pretty optimistic. But let's see what Ben Thompson has to say, and let's also tell you about Attio, the AI native CRM. Attio builds, scales, and grows your company to the next level. So, Google is obviously capable of monetizing users.

Speaker 2:

Even if they haven't turned on ads in Gemini yet. He also points out, as Eric Seufert did in a recent Stratechery interview, two of the best collabing, we love to see it, that Google started monetizing search less than two years after its public launch. It is search revenue, far more than venture capital money, that has undergirded all of Google's innovation over the years, and it is what makes them such a behemoth today. In that light, OpenAI's refusal to launch and iterate on an ads product for ChatGPT, now three years old, is a dereliction of business duty.

Speaker 2:

He's calling him out Woah. Particularly as the company signs deals for over $1,000,000,000,000 of compute. What are you doing? Sam, get the ads out. Put the ads in in the chatbot.

Speaker 2:

We love it.

Speaker 1:

Just put the ads

Speaker 2:

in the chatbot. And go you gotta Baja Blast some ads into that app. You got to. I want ads in ChatGPT, please. On the on the flip side, it means Google has the resources to take on ChatGPT's consumer lead with a World War one style war of attrition.

Speaker 2:

Rheinmetall callback. OpenAI's lead should be unassailable, but the company's insistence on monetizing solely via subscriptions, with a degraded user experience for most users and price elasticity challenges in terms of revenue maximization, is very much opening the door to a company that actually cares about making money. To put it another way, the long term threat to NVIDIA from TPUs is margin dilution. The challenge of physical products is that you do actually have to charge people who buy them, which invites potentially unfavorable

Speaker 1:

So, yeah. Both Gemini and ChatGPT will have ads eventually. Yes. Right? Yes.

Speaker 1:

You can bet on that.

Speaker 2:

Yes.

Speaker 1:

Who will ramp ad revenue faster? It's hard not to bet on Gemini even with a smaller user base because they have the ad network. They have all the they have all the customer relationships already. You can they can just say like, hey, here's a pop up. Do you want it?

Speaker 1:

Here's Yes. Here's $10,000 of free ad credits. Yeah. Try it out. Right?

Speaker 1:

Yeah. It's like native. It will already be in AdWords. The example that I use is like how Zuck was able to take Instagram, which had a lot of users, plug it into the Meta Ads Yep. Platform and just scale revenue like crazy.

Speaker 1:

Then do it again with with Reels. And so, yeah, very very clear that they both will have ads. And again, if Gemini can really ramp that quickly, they could again, I do feel like we're moving towards a world where every consumer will be able to get the best LLM for free, right? I don't necessarily believe that every American will be paying for an LLM in five years. And so if Google can get there first and then keep Gemini on the frontier and deliver the best, free, fastest text and image model, that's going to be very, very difficult to compete with.

Speaker 1:

And so again, getting to ads faster feels like it makes more sense.

Speaker 2:

I like that point. I have a rebuttal. First, I'm gonna tell you about Figma. Think bigger, build faster. Figma helps design and development teams build great products together.

Speaker 2:

So getting to ads first is an advantage. That's your take. I like it, but there is a little bit of a risk with launching ads first, because you could forever be branded as, you're the ads one. And we saw this when Aravind from Perplexity came on the show, and he mentioned this idea of ads in LLM queries. And we all agreed on the discussion.

Speaker 2:

We all agreed that ads were going to come to AI tools, because that is the way to get the most people using them and make intelligence free. And you, yeah, got in a, you know, a debate with Mark Cuban over this, for example. And it seemed very logical, but there is the fact that the first major chat app to put ads in their app is going to be a massive news cycle. It's just going to be, like, national news.

Speaker 1:

Sam Adman.

Speaker 2:

Exactly. It'll be OpenAI has ads now, or it'll be Gemini has ads now. And so you don't if you like, it's much easier to be the second mover there because it's gonna be less of a news cycle. And so you kinda do wanna there is a there is a little bit of advantage to being the second mover there. Right?

Speaker 2:

Because you're you're you're going to get sort of branded as like, oh, that's the ad supported one. And the other one can add ads, and people will like, oh, yeah. Like, I guess but that's, like, standard. Yeah. There's gonna be like a backlash.

Speaker 2:

And people will be like, oh, no. I don't like this company. Blah blah blah blah blah. Like

Speaker 1:

Yeah. I mean, the harder the harder thing is just how do you do it. Right? Like, I do feel like it's different. You're going to an LLM.

Speaker 1:

Mhmm. People are get going to an LLM for advice and recommendations. That's different than going to Google and searching and seeing ads at the top. I mean,

Speaker 2:

there's plenty of surface area.

Speaker 1:

I No. No. I'm not I'm not saying there's no surface area, but I'm just saying, like, the right way to do ads in LLM Yeah. Is not clear yet.

Speaker 2:

Yeah. Yeah. I mean, they will need to do some experimentation. But, I mean, just starting with, you know, like, Google already has retargeting information. I I actually went to, Gemini with a question, and it clearly knew everything about me.

Speaker 2:

And it said, you're the host of TBPN, and you've started you founded these companies. And it had it it already knew that probably just because I authenticated with something else. I don't know. But Yeah. It it should know, okay.

Speaker 2:

Hey. We could retarget you with this. Let's put this in. And and the same thing with, with OpenAI. Like, with ChatGPT, there's plenty of spaces where it's like you're waiting for it to, to give you the answer.

Speaker 2:

Okay. Hey. We're generating you the image. Why not show me some other images, some ads, right there? There's tons of surface area.

Speaker 2:

I I agree. There there will be a whole bunch of iterations on, like, what the ideal ad looks like. But, yeah, you you can clearly get out. So let's go back to Ben Thompson, close this out. Says, the reason to be more optimistic about OpenAI is that an advertising model flips this on its head because users don't pay.

Speaker 2:

There's no ceiling on how much you can make from them, which by extension means that the bigger you get, the better your margins have the potential to be, and thus the total size of... Again, however, the problem is that the advertising model doesn't exist yet. So he started this article recounting the hero's journey, in part to make the easy leap to The Empire Strikes Back. However, there is a personal angle as well. The hero of this site has been aggregation theory and the belief that controlling demand trumps everything else. Google was my ultimate protagonist.

Speaker 2:

Moreover, I do believe in the innovation and velocity that comes from a founder led company like NVIDIA. And I do still worry about Google's bureaucracy and disruption potential making the company less nimble and aggressive than OpenAI. More than anything, though, I believe in the market power and defensibility of 800,000,000 users, which is why I think ChatGPT still has a meaningful moat. At the same time, I understand why the market is freaking out about Google. Their structural advantages in everything from monetization to data to infrastructure to R&D are so substantial that you understand why OpenAI's founding was motivated by the fear of Google winning AI.

Speaker 2:

It's very easy to imagine an outcome where Google's inputs simply matter more than anything else, which is to say one of my most important theories is being put to the ultimate test, which perhaps is why I'm so frustrated at OpenAI's avoidance of advertising. Google is now my antagonist. Google has already done this once. Search was the ultimate example of a company winning an open market with nothing more than a better product. Aggregators win new markets by being better.

Speaker 2:

The open question now is whether one that has already reached scale can be dethroned by the overwhelming application of resources, especially when its inherent advantages are diminished by refusing to adopt an aggregator's optimal business model. I'm nervous and excited to see how far aggregation theory really goes. Fascinating.

Speaker 1:

It's his baby.

Speaker 2:

Yeah. It's a it is yeah. I I I agree. It is the correct it is the correct framing. It'll just be very interesting to see.

Speaker 2:

I I I really wonder, who's gonna who's gonna take the leap first? Who is going to, who's going to jump and, and put ads in in in the in the app first? It it feels like Google should do it. It feels like Google will will be able to do it.

Speaker 1:

Yeah. And nobody's gonna be like, oh. What? Google is putting ads in a product? Yeah.

Speaker 1:

It won't be that surprising.

Speaker 2:

So they should probably move faster.

Speaker 1:

We have some breaking news.

Speaker 2:

What's the breaking news?

Speaker 1:

Jason Fried is joining the show at 2PM. I can't wait. Surprise guest.

Speaker 2:

Surprise guest.

Speaker 1:

He's launching Fizzy today.

Speaker 2:

Yes.

Speaker 1:

Kanban as it should be, not as it has been. We'll wait to talk about this till he joins in an hour and twenty minutes. So, a little surprise guest appearance from a legend.

Speaker 2:

He's calling out his competitors directly. I love when the founders do that. Founder mode. Founder mode. Yeah.

Speaker 6:

Very true.

Speaker 1:

Should we talk about John Giannandrea leaving the company?

Speaker 2:

He's out. He's out. I quit.

Speaker 1:

I quit.

Speaker 2:

You like that one. I think that's probably been used before. It's some headline, but

Speaker 1:

It's Amar Subramanya.

Speaker 2:

So, of course, Mark Gurman has the scoop, I I believe.

Speaker 1:

The Germinator.

Speaker 2:

Germinator's at it again. He says Apple AI chief John Giannandrea is leaving the company. Amar Subramanya from Microsoft has joined to lead AI under Craig Federighi. And so, we should dig in a little bit to this history. So, swyx has a little bit of a deep dive here.

Speaker 2:

He says, Amar brings a wealth of experience to Apple. He's quoting here: having most recently served as CVP of AI at Microsoft, and previously spent sixteen years at Google, where he was head of engineering for Google Gemini. Wait. Oh, I guess that was at the end of that, because Gemini is not sixteen years old. This is burying the lede.

Speaker 2:

He joined Microsoft AI four months ago. Wow. What a crazy turn

Speaker 1:

LinkedIn says six months ago, but but who's who's counting?

Speaker 2:

That's pretty fast. And so Yeah.

Speaker 1:

But this makes this makes sense considering Apple is partnering with Gemini and not a lot of people are gonna be in a better position to help integrate that into Siri Sure. Sure. Than Amar.

Speaker 2:

Yeah. I mean, I don't know. Maybe there's something to, you know, just having a taste of all the different big tech companies. Oh, yeah. I've I've been at Microsoft.

Speaker 2:

I know how they work. I've been at Google. I know how that works. I'm I'm ready to ready to rock over here. Gurman does need to come on, back on ASAP.

Speaker 2:

I agree, Raghav. He was fantastic.

Speaker 1:

We'll get him in person. We'll get

Speaker 2:

him in

Speaker 1:

person. Hopefully before

Speaker 2:

So Gurman continues. He says, strange hire for a number of reasons, but it's hard to argue that the Apple job is a bad one. Anything is an important improvement at this point, so the bar is as low as it comes, easy to lay up on the resume. So that'll be fun to see. I'm personally just excited to actually test drive what Gemini, how it works in Siri, how seamless that is.

Speaker 2:

Because if it really is just press the button, get Gemini, and it's linked up properly, and it doesn't have timeouts, and it gets back to you pretty quickly, like, that's gonna be a pretty powerful experience. That's definitely gonna cut down on ChatGPT app usage for iPhone users, I would imagine. Underrated threat. I would think so. Like, there are so many moments where

Speaker 1:

People are counting out Apple.

Speaker 2:

It's Yeah. Ultimately I don't even know that I don't even know that Apple will benefit massively from this. It's not like they're gonna sell twice as many iPhones. They're already so big. It's not like they're gonna charge for it.

Speaker 1:

I don't think it's necessarily, like, especially bullish for Apple. Yeah. It's it's an underrated threat for OpenAI.

Speaker 2:

I would think There's

Speaker 1:

a lot of queries that all hit ChatGPT on mobile that are not even super economic, but just a lot of my usage around, hey, just trying to learn about something or or research a product Exactly. Etcetera. Yeah. And if that's just, like, again Yeah. One tap and you're in there.

Speaker 2:

Yeah. I mean, yeah, the the original promise of Siri was, you know, not just, hey, what's the weather today? But really asking anything. And Gemini clearly solves that for 99% of knowledge retrieval queries. I I would be I I think I'm gonna be using that a lot unless they really botch it.

Speaker 2:

And I don't know how they're gonna botch it.

Speaker 4:

But, yeah, we Yeah. I mean, I I think the

Speaker 2:

But if anything's possible, you know?

Speaker 1:

Apple's like, hold my beer.

Speaker 2:

They could be like, for privacy reasons, every time you press the button, you have to e-sign. And it's like, why are we doing that? Yeah.

Speaker 4:

Like, the original, like, Siri kind of vision was like this very conversational AI. Right? Yeah. But I I don't think Gemini has a real time voice model yet. Like, I'm pretty sure OpenAI is the only one that has that.

Speaker 2:

Really? I don't think it matters at all.

Speaker 4:

Really?

Speaker 2:

Yeah. I I really

Speaker 4:

think That that form factor feels like that would be the best thing to be on. Yeah. You press the button on your iPhone And then it's on. Then it's just on. Yeah.

Speaker 2:

Yeah. I mean, I wouldn't be surprised if they can, like, get that model, that version of the model, out, because it's really just, like, distilled, a little bit faster. It's not some, like, uncanny breakthrough that the Gemini team will never be able to crack. Right? So they just have to build that.

Speaker 2:

But, honestly, like, I don't know that that's certainly not how I how how I would use it for most things. Most things, I would say, okay. Like like, I want I've I've one question. Get me an answer within a reasonable amount of time, and maybe read it off to me or produce, like, an article that's, you know, a pretty readable article summarizing the answer to my question. And then, yeah, maybe maybe there is, like, a back and forth, but I don't know.

Speaker 2:

We'll see. Oh, you're you're getting you're getting truth zoned in the chat. Gemini does have a real time voice feature. Gemini Live. Yeah.

Speaker 2:

I think it's on the app. I've I've

Speaker 1:

used it.

Speaker 4:

Try that. I I don't have

Speaker 1:

an app.

Speaker 3:

So Large journalistic force headed towards you. Stand by.

Speaker 1:

Tyler, Anthropic is acquiring Bun. What do you have to say?

Speaker 4:

Yeah. I mean, this is definitely in line with their, like, you know, focus on dev stuff.

Speaker 2:

Okay. What what what is Bun?

Speaker 4:

Bun is a it's like a I don't know, it's like a bundler for JavaScript. It's like a very dev

Speaker 2:

It has dramatically improved the JavaScript and TypeScript developer experience. They're gonna make Claude Code even better. I like that Gabriel from OpenAI here is saying OMG in the chat, which is, like, a pretty crazy thing to say. But I appreciate it. So, Claude is one of the world's smartest and most capable AI models for developers, startups, and enterprises.

Speaker 2:

Claude Code represents a new era of agentic coding, fundamentally changing how teams build software. In November, Claude Code achieved a significant milestone just six months after becoming available to the public. That's crazy. It's only been six months. Wow.

Speaker 2:

It reached a $1,000,000,000 revenue run rate. We were always struggling to understand what that meant. Right?

Speaker 4:

Yeah. Well, so there are two ways to pay for Claude Code. There's either with your Claude subscription, where you get, like, Claude Pro or Claude Max

Speaker 2:

Mhmm.

Speaker 4:

And there's a certain amount of tokens you can use. Then there's also, you can just directly wire up API calls, essentially, to Claude Code. Yep. And then you're being charged, like, directly based on usage. So that's probably what that revenue is from.

Speaker 4:

Yeah. And then, yeah, I also have thought about this thing where, like, oh, you can break down the number of tokens from the subscription. So it's like, for your $20 subscription, if three fourths of your tokens are on Claude Code, then three fourths of your $20 counts as Claude Code revenue.
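
As a sketch of the attribution idea Tyler is describing (the three fourths split is his hypothetical, not a disclosed methodology):

```ts
// Hypothetical revenue attribution for a flat subscription, per the idea above:
// if three fourths of a subscriber's tokens go through Claude Code, count
// three fourths of their subscription fee as Claude Code revenue.
const monthlyFee = 20;             // e.g. a $20/month plan
const claudeCodeTokenShare = 0.75; // assumed share of tokens used via Claude Code

const attributedToClaudeCode = monthlyFee * claudeCodeTokenShare; // $15
const attributedToChatUsage = monthlyFee - attributedToClaudeCode; // $5

console.log({ attributedToClaudeCode, attributedToChatUsage });
// API traffic billed per token would count toward the run rate directly,
// with no attribution step needed.
```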

Speaker 2:

Okay. Yeah. Yeah. That makes sense.

Speaker 4:

I'm not sure exactly.

Speaker 2:

Yeah. I mean, I'm sure that they can account for it. So it's founded by Jarred Sumner in 2021. Bun is dramatically faster than the leading competition. They say it's a breakthrough JavaScript runtime.

Speaker 2:

Does it compete with V8? I'm very interested in, like... it's Node.js. It competes with Node. The thing that Tyler needs to learn from.
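
For a sense of what competing with Node means in practice, a minimal sketch. Bun ships its own runtime (built on JavaScriptCore rather than V8), plus a bundler, test runner, and package manager in one binary; the handler below is just illustrative:

```ts
// server.ts: run with `bun run server.ts`.
// Bun.serve starts an HTTP server using Bun's built-in runtime APIs,
// which is the sense in which Bun is a Node alternative rather than
// just a bundler.
const server = Bun.serve({
  port: 3000,
  fetch(req) {
    return new Response(`Hello from Bun at ${new URL(req.url).pathname}`);
  },
});

console.log(`Listening on http://localhost:${server.port}`);
```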

Speaker 1:

What was that what was that company that OpenAI acquired earlier this year for, 1,000,000,000?

Speaker 2:

I know the one you're talking about. Isn't that analytics or something?

Speaker 1:

Yeah. But, again, I I remember at the time people were like, oh, like, OpenAI's competitors are not gonna be happy about this app acquisition. So that comment from Gabriel

Speaker 2:

Well, congratulations to the Bun team. Congratulations to Anthropic and everyone on the Claude Code team. Very excited that you're getting to work together for your massive deal. Speaking of other massive deals

Speaker 1:

Speaking of size.

Speaker 2:

Alfred Lin.

Speaker 1:

Hit the get that gong ready, John. Were building that.

Speaker 2:

What did he do?

Speaker 7:

Break it down.

Speaker 1:

Alfred Lin comes in on the board of DoorDash, buys $100,000,000 of DoorDash.

Speaker 2:

Calling it Linsanity. He's not done stewarding DoorDash. He's he's continuing to steward the company with a $100,000,000 buy.

Speaker 1:

And, of course, sends the stock up almost 6% on that. Pretty, pretty... Oh, he ripped. Excited about it.

Speaker 2:

He ripped. He ripped. What is this? Another $4,600,000 to donate to shrimp welfare?

Speaker 4:

Okay. So, yeah. Basically, the story is, Anthropic, they were doing some, like, research about smart contracts. And so they had Claude Code try to figure out, like, you know, issues in smart contracts. And then I'm not sure exactly where the, like, money came from.

Speaker 4:

Maybe it was for, like, bounties. But there's some way in which Claude Code basically generated, like, $4,600,000 in, like, cash from finding these exploits. So then they just

Speaker 2:

Did it actually generate real money or is this, like, the the hypothetical

Speaker 1:

This is simulated testing. Testing.

Speaker 2:

Okay. Simulate So

Speaker 1:

so yeah.

Speaker 4:

So it probably means that they could have like basically stolen $4,000,000 from people, but they don't wanna do that.

Speaker 2:

Maybe they should have if they really wanna get up the

Speaker 1:

In other Anthropic news, David Sacks says he's still waiting on Dario's support after the New York Times piece was published. Sam Altman, of course, came in and said, David Sacks really understands AI and cares about The US leading in innovation. I'm grateful we have him. Of course, Dario and Sacks are not the biggest fans of each other, so I don't expect that one coming through anytime soon. While we wait for our first guest, Matt Garman, let's pull up this clip from Huberman Lab, if we can play this.

Speaker 1:

Rob Moore is highlighting Dr. Jeffrey telling Huberman that LED lighting in buildings is a public health crisis that could be on par with the use of asbestos. Many building contractors slash designers are coming to him worried they're going to be sued and asking how to start fixing the issue. So let's pull this up when we have a second.

Speaker 3:

For light. Because I am very concerned about the amount of short wavelength light that people are exposed to nowadays, especially kids. The group of us

Speaker 8:

that are shuffling around, some of them are saying this is an issue on the same level as asbestos. This is a public health issue, and it's big. LEDs came in and people won the Nobel Prize for this very rightly at the time, because they save a lot of energy. The LED has got a big blue spike in it, although we tend not to see that. And that is even true of warm LEDs, and there is no red.

Speaker 8:

The light found in LEDs, when we use them certainly we use them on the retina looking at mice, we can watch the mitochondria gently go downhill. They're far less responsive. Their membrane potentials are coming down. The mitochondria are not breathing very well. Watch that in real time.

Speaker 8:

Under LED lighting. Under LED lighting at the same energy levels that we would find in a domestic or in a commercial environment.

Speaker 1:

This is why I want to rig the studio with Incandescent? Incandescent

Speaker 2:

We're going back to candles.

Speaker 1:

CandleMax?

Speaker 2:

Let's do candlelight.

Speaker 9:

How about

Speaker 1:

a hearth?

Speaker 2:

This is the way

Speaker 1:

If we put a hearth so we have lights above our heads Yes. Sure, or LEDs killing us slowly Candles. Softly. If we put like somewhat a bonfire Mhmm. Right above us.

Speaker 2:

That'd be the way.

Speaker 1:

And then, we just when when the wood kind of burns out Yep. The show's over. We just go until the

Speaker 2:

That would be good. I like that. Let me tell you about Turbopuffer, serverless vector and full text search, built from first principles on object storage: fast, 10x cheaper, and extremely scalable. Oh, yes. Matt Garman

Speaker 1:

is at re:Invent.

Speaker 2:

Amazing. So Fantastic. Well, we are joined by the CEO of Amazon Web Services, Matt Garman. Thank you so much for taking the time to come and chat with us. How are you doing?

Speaker 7:

Yeah. Hi. Thanks, guys, for having me.

Speaker 2:

Please take us through, some of the high level announcements. Obviously, it's, it's reinvent. Very exciting. Congratulations on all the progress. Would love to know, what's at the top of your mind?

Speaker 2:

What's been at the top of your presentations over the course of the event?

Speaker 7:

Yeah. We had a couple of really exciting announcements today. A couple I'd highlight. First, we introduced this idea of frontier agents. Yeah.

Speaker 7:

These are agents both in Kiro for software development as well as in operations and security. And these frontier agents are meant to accomplish much, much more than customers were ever able to do in the past: we have these autonomous agents that can help customers really turbocharge their software environment. So super excited about that. We had some announcements around Nova, which is our frontier AI models. We announced Nova 2, our new set of models.

Speaker 7:

And one of the things I'm, in particular, really excited about is Nova Forge, which allows customers to actually bring their own data to pre training checkpoints

Speaker 2:

Oh.

Speaker 7:

Mix in their data with Amazon data, finish training the model, and at the end of it, have a custom model that deeply understands their own enterprise data and is just for them. Yeah. So that's another thing that I'm excited about. And then the third thing is we announced a new chip, Trainium 3

Speaker 6:

Mhmm.

Speaker 7:

To really turbocharge the next generation of training and inference for our customers. And so quite excited to to get that, and that went GA today as well.

Speaker 2:

That's very exciting. I let let's go back and start with the first one. Let's talk about, coding agents and the your own proprietary models. How are you thinking about positioning those to potential buyers? Are you do you like the benchmarks these days?

Speaker 2:

Do you think that, we're sort of, like, post all the benchmarks, or do you think those are still useful tools for a buyer who's making a decision? Is it about integration? Is it about cost? How are you positioning them?

Speaker 7:

Yeah. When you think about software development, it's it's not about pure benchmarks. It's really about what is gonna allow you to get the most amount of work done. And when we think about our offering, which is called Kiro

Speaker 2:

Yeah.

Speaker 7:

It's really focused on in a enterprise or or environment where somebody is doing high velocity development. Mhmm. They actually need more structure. People love vibe coding and it's exciting Yeah. But you can actually get down a path where you get stuck.

Speaker 7:

Yeah. And you'll often find actually that you spend just as much time trying to get back to where you were before

Speaker 2:

Yep.

Speaker 7:

As if you just coded it from the beginning. We have this idea of specs that gives you structure to what you're trying to build. And so you can have agents go and start to build around those specs together with you and your team.

Speaker 3:

Mhmm.

Speaker 7:

And it gives you the structure that allows you to go really fast, can undo if you need to, can make sure that you're hitting your design requirements, and it really allows you and the agents to operate in conjunction with each other and move really, really fast. And we're starting to build these much more capable agents that can go and actually do long running tasks for you on your behalf, but all of it kind of ties into this structure. And we view that as a way to deliver real development that's gonna be meaningful on a large code base with large teams and enterprises where they have existing things, not just single individual people sitting there doing vibe coding, which, you know, you can do vibe coding on Kiro as well, by the way. We think that that's just not sufficient for

Speaker 2:

what Sure. Sure.

Speaker 1:

That makes sense.

Speaker 7:

Development's gonna need.

Speaker 2:

Yeah. And talk to me about what it actually looks like to set an agent off and say, hey, I got a task for you, come back to me in a few days, which it sounds like is where we're going. We've been tracking the METR benchmark, and it seems like we've been seeing doublings there.

Speaker 2:

But, again, a lot of those have been the benchmark has been how long would it take a human to do this task? The actual agent might have done it faster. And so you you it's not necessarily that you're actually letting something cook over the weekend. What's the experience been like, and and what have people been reporting about, these long running agents?

Speaker 7:

Yeah. I think the first and actually most important thing is thinking about how you actually kind of have a mind change on how you think about software development.

Speaker 2:

Where

Speaker 7:

you think about not about do this task, get it back, look at it, do this task Mhmm. But how are you thinking about directing a lot of agents to go out there and do lots of different things and and let those run for long periods of time where they can kind of have amorphous tasks? Like, instead of go write me this function, like, try to go solve this problem for me. And then I'll come back and and and then you but you can but if you send out two or three or 10 or 20 or 50 of those things, then your job as a software developer and as a product leader is actually much more around coordinating those when they come back, troubleshooting, make sure that you're directing them, course correcting, etcetera. And so I'm excited about that.

Speaker 7:

We've already seen these processes go off and work for multiple hours at a time on particularly, like, really hard, tricky, amorphous tasks, and we think those things are gonna continue and become more the norm of how software developer teams change what they accomplish. Yeah. We think Kiro is gonna be the engine that's gonna drive a lot of that.
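(A rough, hypothetical sketch of the fan-out-and-triage workflow being described; run_agent is a placeholder for whatever agent runtime you'd actually use, not an AWS or Kiro API.)

```python
# Hypothetical sketch of "send out many long-running agents, then triage what
# comes back". run_agent() is a stand-in, not a real AWS or Kiro API.
import asyncio

async def run_agent(task: str) -> dict:
    # In practice this would kick off a long-running agent and await its result,
    # potentially hours later. Here we just simulate the delay.
    await asyncio.sleep(1)
    return {"task": task, "status": "needs_review", "summary": f"Proposed changes for: {task}"}

async def main() -> None:
    tasks = [
        "Migrate the billing service off the deprecated SDK",
        "Track down the flaky integration test in CI",
        "Draft a spec for the new audit-log feature",
    ]
    # Dispatch everything in parallel; the developer's job shifts to reviewing,
    # course-correcting, and redirecting the agents as results come back.
    results = await asyncio.gather(*(run_agent(t) for t in tasks))
    for result in results:
        print(f"[{result['status']}] {result['task']}: {result['summary']}")

asyncio.run(main())
```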

Speaker 2:

Yeah. Yesterday, we were talking to, Vincent from Prime Intellect, and they do some of this, like, fine tuning on smaller models. And he has this thesis, I think, that you share that, a lot of businesses will need to take a a a pretrain and then and then bring their own data, fine tune it, not just because it's important from performance and output, but also from cost. But I'm interested in understanding, how you think the market will shape out. Do you see implementation partners and, like, consulting firms coming in and doing that?

Speaker 2:

I was asking him, like Yeah. You know, there's a lot of tech startups that are gonna be able to do that. They're gonna understand I need to build an RL environment around my app. But for larger legacy companies, they might not understand. So how are they going to wind up using that tool in particular?

Speaker 7:

I I think they will. And and actually, I just wanna highlight one piece there where some of what we announced today Yeah. Is a little bit different. We announced this idea. It's it's an open training model

Speaker 2:

Okay.

Speaker 7:

With Nova. And so the difference so what you just said is people take a pre-trained model and they'll do RL after the fact and they'll try to Sure. Do some fine tuning Yeah. Which is great, but there are actually limits to what that does. In fact, if you do too much post training, oftentimes those models will forget what they learned at the beginning.

Speaker 7:

They'll start to lose some of their reasoning and their core intelligence. Yeah. I mean, this is an unsolved problem, except when you go and insert your data in the pre-training phase. And so what we do with Nova is we expose checkpoints. You can take a 60% trained or an 80% trained pre-trained model.

Speaker 7:

Mhmm. Insert your data into that pre training phase, mix it in. We then expose actually Amazon training data to you via an API that you can then mix it together. And so it's like you said, here's my all my corpus of corporate data. Here's everything that I need to know about my industry.

Speaker 7:

We then mix that in and then finish pre training the model so that you get a pre trained model that totally understands your company and your data. Yeah. And then you can go do fine tuning. You can go do reinforcement learning gyms. Yeah.

Speaker 7:

After that, you can shrink them down and distill them. You do all those things, but on a pre trained model that deeply understands what your company does.
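(A hypothetical pseudocode sketch of the sequence Garman describes, from partial pre-training checkpoint to fine-tuning and distillation; the client and method names below are placeholders, not the actual Nova Forge API.)

```python
# Hypothetical pseudocode for the workflow described above: resume pre-training
# from a partial checkpoint with customer data mixed in, then post-train.
# None of these names are real AWS or Nova Forge APIs; they only sketch the idea.

def build_custom_model(forge_client, corpus_uri: str):
    # 1. Start from a partially trained pre-training checkpoint (e.g. 60% or 80% through).
    checkpoint = forge_client.get_pretraining_checkpoint(model="nova-2", progress=0.8)

    # 2. Mix the customer's corpus with the provider's pre-training data and finish
    #    pre-training, so domain knowledge lands in pre-training rather than being
    #    bolted on afterwards (which risks degrading reasoning).
    custom_base = forge_client.continue_pretraining(
        checkpoint=checkpoint,
        customer_data=corpus_uri,
        mix_ratio=0.2,  # illustrative: fraction of each batch drawn from customer data
    )

    # 3. Post-train as usual on top of the customized base: fine-tune, run RL, distill.
    tuned = forge_client.fine_tune(custom_base, task_data=corpus_uri)
    return forge_client.distill(tuned, target_size="small")
```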

Speaker 2:

And is that called mid training now? Is that the right buzzword for that?

Speaker 7:

It's not. Mid training is a different thing.

Speaker 2:

Different thing. Okay.

Speaker 7:

It's the first time that anyone's ever exposed this idea Okay. To deliver pre-training checkpoints where we can mix in your data. No one's ever done this before.

Speaker 2:

Very cool.

Speaker 7:

It's first time.

Speaker 2:

Great. Yeah. Well, then, yeah, on on market structure, do you think it's self serve enough that, you know, large corporations will do it? Or or do you need, like, a do you need an AI lab? Do you need an AI scientist?

Speaker 2:

Do you need someone to who can, you know, write TensorFlow or PyTorch or something to implement this? Or is it something where, you know, just a normal software engineer at a large company could go and pull this off the shelf and implement it?

Speaker 7:

Yeah. We'll we'll see. I think we're gonna keep working on the tools today. Mhmm. I do think that for some enterprises, they'll wanna have some consulting Yeah.

Speaker 7:

Folks that help them with this. I think we'll have some people where you have some experts that come and teach how to do this. And I think we'll quickly get the tools to a point where, you know, it's not somewhere where a nontechnical person is gonna go do this, for sure. But it may be a software developer that tends to be a little bit more on the AI or ML side, that we hope is gonna be able to go do this Yeah. Without having to have a whole bunch of expertise about how to go pre-train a frontier model.

Speaker 2:

Yeah. On the cost side, obviously, you're working you announced a new chip. I imagine that there's, you know, the emergence of some synergies across the models that you're developing, the software you're deploying, the cloud, and then also the chips. How are you positioning the, like, the Trainium ecosystem? Is this something that you're you're planning on really doubling down on across the entire stack, or do you wanna be more chip agnostic?

Speaker 2:

Are we gonna see you buying TPUs in the future?

Speaker 7:

No. We definitely well, a couple of things there Sure. There's a lot to unpack. The first is we're very excited about Trainium and think it has enormous potential, and we absolutely think there's a benefit to optimizing every single layer of that stack where we have the best cost performance Mhmm. That we can deliver with Trainium.

Speaker 7:

We have optimized models for you to use Okay. And applications and agents at the top of that that we

Speaker 6:

talked about.

Speaker 7:

So we think that whole optimization of that stack is gonna be critically important. And of course, we're gonna support choice for our customers as well. And so we'll continue to offer GPUs from NVIDIA as an example, and we have a very tight partnership there. Yeah. But we do think and we're quite excited about what Trainium 3 is gonna offer for customers. And I do think that we're gonna see an explosion of that ecosystem as more and more people get access to those chips and are able to take advantage of the pretty significant cost performance benefits that you can get from running on Trainium.

Speaker 2:

How are you thinking about open source the open source ecosystem that you need to build around Trainium? That's the big discussion with the TPU right now, the question of you know, Google has some amazing folks. They have some amazing software folks, it seems like. They don't necessarily need to open source everything. And so a lot of people are waiting to see how much the industry, you know, builds open source alternatives independently, versus how much Google just gives away.

Speaker 2:

What's your thought process on building a an open source ecosystem or even just giving developers access to closed source software to run efficiently on Trainium?

Speaker 7:

Yeah. No. We're all in favor of having an open set of software to run on Trainium. In fact, we have our Neuron SDK, which is open source today and allows everyone to contribute to that. We think that the more we can collaborate on that software ecosystem, the easier it is for people to use the chips.

Speaker 7:

And we, of course, support, the broad set of whether it's PyTorch or or other kind of open frameworks as well.

Speaker 2:

Mhmm.

Speaker 7:

So we we collaborate across the industry on that and and are big advocates of, contributing to and, and supporting that, open ecosystem.

Speaker 1:

Jordy? Love to get, your insight on just, like, general constraints for for AWS as a business, what you guys are doing on the power side. Is that is that a real constraint? Anything that you can share there?

Speaker 7:

Yeah. You know, like, it's as we're scaling incredibly rapidly, we've you know, we recently announced that we've added 3.8 gigawatts of data center capacity in the last year alone, which is just an insane amount of data center capacity.

Speaker 2:

Thank you.

Speaker 7:

Oh, you're welcome. And so it's ramping incredibly fast and it is a constraint. You know, we have more demand than we have supply today for AI. Sure. And as we ramp up, there are supply chain constraints.

Speaker 7:

We think about chip constraints. We think about networking constraints. We think about power constraints. We think about data centers, etcetera. And so we're working really hard to try to remove every single one of those.

Speaker 7:

And when with an industry that's growing as rapidly as the AI one is, there's always gonna be some constraint. And we work really hard to keep removing blockers every every time so we can keep growing fast.

Speaker 1:

Makes sense.

Speaker 2:

Well, we have a hard stop. So thank you so much for taking the time on such a busy day to come chat with us. We'd love to have you back on the show and go way deeper. But thanks so much, and congratulations on all the massive releases. We're excited to dig in deeper and keep chatting about them.

Speaker 2:

But have a great rest of your day. We'll talk to you soon.

Speaker 1:

Thanks for

Speaker 7:

having me on. Yep.

Speaker 1:

Goodbye. Coming on.

Speaker 2:

Cheers. Let me tell you about public.com, investing for those that take it seriously. They got multi asset investing, and they're trusted by millions. We have Tae Kim, author of The Nvidia Way and a Barron's senior writer, joining the show. Tae Kim, the author of The Nvidia Way.

Speaker 1:

Tae Kim on the beat.

Speaker 2:

For joining the show. And I'm sorry it took us so long. We've exchanged posts on on x many times, and, we wanted to have you on the show earlier, but I'm so glad

Speaker 1:

We gotta ask right away.

Speaker 2:

Did because it's the perfect time to talk to you.

Speaker 1:

Do you have roommates? Not. No roommates.

Speaker 6:

Guys, longtime listener, first time caller. I'm so excited to be on this.

Speaker 2:

I'm so excited to have you here. Reminder, everyone, go buy the book. Seriously, Christmas is coming. I can't imagine a better gift for every one

Speaker 3:

year old?

Speaker 2:

For a four year old.

Speaker 1:

For even yeah.

Speaker 2:

Know what my son's getting. He's getting the NVIDIA way. Jensen Huang. Teach them young. Copies.

Speaker 2:

Multiple copies. No. Seriously. Get get 10 copies. Give them a 10

Speaker 6:

A lot of teenagers have have read it. Like, it's amazing. And they reach out, and, it's an inspirational entrepreneurial book that a lot of parents are giving to the kids. So Yeah. Definitely.

Speaker 1:

I love your, your headset, by the way.

Speaker 2:

The headset's elite.

Speaker 1:

We're actually developing our own TBPN Locked in. Headset because this is just like this is the I this is the ideal ideal setup.

Speaker 6:

I was telling my friend, I don't care if I look like a dork. My hearing is going. You look like dork. Like, it I get locked in when I have the headset. Could hear everything.

Speaker 1:

No. And it's wired. The worst for the worst is AirPods. Yeah. AirPods have a tiny lag.

Speaker 1:

Yeah. If you're doing Zoom calls, it just it it ruins it. But we feel like we feel like you're right here at the table with

Speaker 2:

them. It's not code red over there. He's Baja blasting. Yeah. You know he's Baja blasting.

Speaker 2:

Anyway, let's let's we we we I mean, we have some time. Let let's run through I wanna know a little bit more about your perception of Jensen, your perception of NVIDIA, and just set the table for us, we know how what is his management style? How does he he has all the direct reports. He reads everyone's, like, to do lists every day. They have tons of employees.

Speaker 1:

They never

Speaker 2:

fire people. Like, what makes NVIDIA's culture unique? Set the table for us so then we can go into the opportunities and challenges with that framework in mind.

Speaker 6:

So so the first thing I found out about their culture is it's very blunt.

Speaker 3:

Mhmm.

Speaker 6:

Like, I think in most companies and you guys have done start ups, but I don't know if you work for large corporations Yeah. Bureaucracy builds up process. It gets ossified. Yeah. NVIDIA is the complete opposite.

Speaker 6:

Like, if things are not going well, he'll chew you out in front of the whole company.

Speaker 2:

Gotcha.

Speaker 6:

And that kind of blunt mentality, I think, you know, sparks better performance because you don't wanna be embarrassed in front of Jensen's in front of the whole company. Yeah. But also, it it just sparks an agility. Like, when I talk to people at Intel or Google, like, the biggest problem they have is meeting paralysis, and you need to get sign offs from, like, five different executives. Mhmm.

Speaker 6:

At NVIDIA, like, you have a meeting, Jensen makes the call, he seeks out the right information

Speaker 2:

Mhmm.

Speaker 6:

And you move. So there's this agility at NVIDIA. The the other thing is just a meritocracy. Even from the beginning, like, thirty years ago, Jensen's always asking, who's the smartest person that you work with? Who should I try to recruit?

Speaker 6:

And, from the beginning, like, Dwight Dirks, who's one of the top people right now, he he recruited him because he talked to this other guy, and he said, oh, Dwight's really smart. Like, I I really enjoyed working with him. And he just almost ruthless in a sense. He just goes after him and then recruits and bring people aboard. So this meritocracy, agility, speed, and just getting rid of the internal politics, think, really separates NVIDIA.

Speaker 1:

Mhmm. How has most of the team becoming millionaires affected the culture, if at all?

Speaker 6:

I think they had that thing in in the first ten, fifteen years, like people started getting sports cars and putting them in the parking lot. But a lot of people cars.

Speaker 2:

Let's go. That's the best news I've ever heard. That's fantastic.

Speaker 1:

But Sounds like it's had an incredible impact on

Speaker 2:

the culture. Job's finished. Okay. You you can do know. You can end you you don't need to respond.

Speaker 2:

You don't need to say

Speaker 6:

anything more.

Speaker 1:

I think

Speaker 6:

winning culture breeds winning, and people wanna stay with winners. Yeah. Right?

Speaker 2:

Yeah. Wanna be on the track then.

Speaker 6:

A lot of people I meet in terms of colleagues, they work for a chip company for five years, and the chip doesn't work out.

Speaker 2:

Yeah.

Speaker 6:

And you just wasted five, seven years of your life. So you wanna stay with a winning company, and NVIDIA has been winning for thirty years. So it's kinda like winning begets winning. You get the talent, and then the talent stays. Like, there's so many top executives at NVIDIA that stayed there for twenty five, thirty years.

Speaker 1:

And There's also there's there's there's some benefit of peep when if if if somebody doesn't have a scarcity mindset. Right? And they're just playing to win. Like, they're just like they're they're they're they're no longer thinking like, oh, if we can just get to that next milestone and get this secondary sale, and if I if I can participate, and if I can invest my two years and sell into the next Mhmm. Tender offer, then I'll you know, they're just like, we're good.

Speaker 1:

All that matters is just being as elite as we possibly can be and just and and doing it, for the love of the game, basically.

Speaker 6:

And and a lot of these people that, like, they're they made it. They could retire. They could have retired ten, twenty years ago, but they stay and work still work eighty hours, a hundred hours a week because that's what's expected. Like, even the marketing people at NVIDIA are working eighty, eighty five hours a week.

Speaker 2:

Oh.

Speaker 6:

And that that kind of mentality, I I think, is different, at the companies. Mhmm.

Speaker 2:

Okay. Let's shift into the competitive dynamic. I mean, NVIDIA has been I was revisiting the performance of the Mag seven since the dawn of ChatGPT. It's been three years exactly. NVIDIA, by far the winner, up 10x on market cap.

Speaker 2:

The next closest company, I think, maybe four x by comparison. And so, the clear AI winner in the public markets, the most obvious AI trade that just completely ripped. Now, you know, there's this whole narrative of, like, how strong is their moat? What is what does the TPU mean? Is the TPU gonna be significantly competitive?

Speaker 2:

Is there gonna be margin compression? How have you been processing this new narrative that, NVIDIA might face serious competitive threats because they're so on top of the world. Everyone owes them so much money that people are saying, I gotta get a discount from somewhere, and I'll go to Google maybe.

Speaker 6:

I wanna talk about this 10 x move. Like Please. It it hasn't been, like, straight up to the right. There's always been Yeah. Every three, six months, there's always a reason to sell NVIDIA.

Speaker 6:

Like, the H100 problem, the transition to Blackwell

Speaker 2:

Yeah.

Speaker 6:

China, ASIC, Broadcom competition. So this this stuff has been happening this entire 10 x move up, and the media loves to latch on to the latest thing to worry about. Right?

Speaker 2:

Yeah.

Speaker 6:

We had DeepSeek earlier this year, where the entire media establishment was saying it's over for the AI trade, AI models have become so efficient, when Yep. It was actually the opposite, because of the reasoning models there was an exponential demand for compute. Totally. So I find it amusing, like, the whole world kinda discovered that Google had a really good chip in the TPU to

Speaker 2:

Which they've been working on for a decade too. Yeah. They

Speaker 6:

they've had it for ten years. Yes. Right? They've offered it to clients Yeah. Since 2018.

Speaker 6:

This is nothing new.

Speaker 2:

Yes.

Speaker 6:

The Ironwood specs, you know, which I always take with a grain of salt with specs. Even with NVIDIA, they talk about a 25x improvement with Blackwell. It's more like

Speaker 1:

I'm really just focused on the name. Ironwood goes pretty hard.

Speaker 2:

It's pretty good. Yeah. They they they got some good names over there in Google.

Speaker 6:

Ironwood came out, all the specs came out, in April. Yeah. Like, this stuff isn't new. Yeah. And another thing I wanna say is about TPUs, their chips: Morgan Stanley estimates there was a huge decline in TPU shipments in 2025.

Speaker 6:

And at Google, at Google Cloud, NVIDIA GPUs took more share than TPUs this year. It's like, no one talks about this. Right? And now everyone's going, Ironwood is gonna take over the world and NVIDIA's in trouble.

Speaker 2:

Is that just because we're I

Speaker 1:

mean, I think it was a Gemini three it was a no. Was a Gemini three thing. People are like, Gemini three is the best model in the world Yeah. On by by the

Speaker 6:

best model in the world for a whopping six days, and ChatGPT is still number one on the App Store.

Speaker 2:

Let's let's talk about mean, six days. It was unseated by Claude, which was also Anthropic Yeah. Which is also TPU potentially in the future.

Speaker 6:

Yes. Right. And ChatGPT the head of Microsoft AI cloud said that OpenAI is training their next models on the GB300 NVL72. That just went live in October. Actually, earlier today, the NVIDIA CFO said it's gonna take six months, so I was a little disappointed in that.

Speaker 2:

Wait. Six months for the training run?

Speaker 6:

Yeah. Well, she said the first models on Blackwell Okay. Like, on the, like, super clusters are gonna take six months.

Speaker 2:

So the singularity fuck, it's gonna be another six months.

Speaker 6:

AI is gonna get there. The the Claude and Gemini Yeah. Benchmark gains with, pre training.

Speaker 2:

Yep.

Speaker 6:

That's the most bullish thing for the whole AI industry. Right? AI adoption the scaling laws are intact. Everything's gonna work out, and OpenAI is gonna get there when they build their next training run on the next model. So going back to TPUs, thank goodness.

Speaker 6:

A shout out to SemiAnalysis. They do the best channel work in the industry. Everyone freaked out on Friday. Right? They read that SemiAnalysis piece.

Speaker 6:

No. Oh, no. Yes. Total cost of ownership.

Speaker 2:

TPU v seven.

Speaker 6:

They're gonna destroy the but, like, people that actually know the industry, it was flaming bullish for NVIDIA.

Speaker 1:

Yes.

Speaker 6:

Like, it just was, like, so obvious in my face, because Dylan and the SemiAnalysis team said, wait a minute. The next TPU v8 is not gonna be that great. They lost a ton of people, and the step function up in performance is not gonna be that great. So you know what's gonna be great? NVIDIA's Vera Rubin, which comes out at the end of next year.

Speaker 6:

Mhmm. So no company is

Speaker 1:

People saying it's the Rick Rubin of chips.

Speaker 2:

Are they related? Yeah.

Speaker 6:

So anyway, Vera Rubin is gonna be dramatically better at the end of next year. And even with, you know, Ironwood, which just became generally available and they're ramping right now Mhmm. No one's gonna switch over; for one, it's a huge endeavor to move workloads Yep. Off CUDA and NVIDIA GPUs and put them on. There's always problems when you put them on a new chip.

Speaker 2:

Yes.

Speaker 6:

And and let's talk about TPU customers. Right? Everyone freaked out that Meta might spend a few billion dollars in 2027. Mhmm. That sounds like a lot.

Speaker 6:

Right? Mhmm. That's less than 1% of NVIDIA's, expected revenue.

Speaker 2:

Sure.

Speaker 6:

It it nothing. Yeah. And Ben Thompson was very smart and astute. He's like, who's gonna buy the TPU? Who are the biggest buyers of AI chips?

Speaker 1:

Yep.

Speaker 6:

They're the hyperscalers. Right?

Speaker 2:

Yep.

Speaker 6:

So maybe Meta will put a portion of their workloads on TPUs that's 1% of NVIDIA's revenue.

Speaker 9:

Mhmm.

Speaker 6:

Is Amazon gonna buy TPUs?

Speaker 1:

Well, John just asked the CEO of AWS if he was gonna buy TPUs. He dodged that question.

Speaker 2:

Say yes. No.

Speaker 6:

I mean, there's no way in hell they're gonna buy TPUs. They have their own Trainium. They're not gonna support their number one, like, one of the number

Speaker 1:

one archrival.

Speaker 6:

Yeah. They're not doing that. So is Microsoft gonna buy TPUs?

Speaker 2:

Been like, yes. I'm gonna buy one so I can, like, study it.

Speaker 6:

Microsoft's not gonna buy TPUs. They're the number two player in cloud computing, and they're not. Are the neo clouds gonna buy TPUs? Now you're gonna say, yes, Google has got some neo clouds.

Speaker 2:

Yes. You know

Speaker 6:

what happened with those neo clouds? They're financially backstopping those neo clouds. Mhmm. So Google is financially giving money and backstopping the debt for those neo clouds. So so there's a handful of small neo clouds, but is CoreWeave gonna buy TPUs?

Speaker 6:

Probably not. Right? Mhmm. Who are the other customers of AI chips? Enterprises, companies, sovereign AI.

Speaker 2:

Yeah. Anybody who wants to run, like, fine tuned model, some small model, something like that.

Speaker 6:

Yeah. Nineties, nineties. Those. So without a bite. Yeah.

Speaker 6:

So, like, if you just go down on a first principles basis and look at the customers of AI chips Yes. Like, they're gonna stick with NVIDIA. The millions of developers know CUDA. Yes. You really need like, Dylan talked about this you really need, like, top notch, sophisticated software engineers that can work with TPUs and learn JAX and all that stuff.

Speaker 6:

So most people aren't don't have those crackerjack engineers. Right? So they're gonna stick with NVIDIA because everyone's used to NVIDIA. NVIDIA is backwards compatible and forwards compatible. So, like, twenty, thirty years of this stuff.

Speaker 2:

Mhmm.

Speaker 6:

And if you buy NVIDIA Colette, the CFO, talked about this this morning you can use it for training, and you can use it for inference. It's all on the same architecture, and it's gonna work. Like, I talked to an AI startup CEO a few months ago. He tried AWS Trainium.

Speaker 6:

Oh, it looks a lot cheaper. Total cost of ownership. But then it crashed. There were bugs. The reliability they they couldn't figure out what happened, and there's like they just threw up their hands.

Speaker 6:

I give up. Like, no one is gonna, like if you have reliability problems, bugs, crashes, the best thing about NVIDIA is all that stuff has been ironed out over the last ten, fifteen years. If you have a problem, you can figure it out because

Speaker 1:

Yeah. It's like giving an F1 driver a car that is unreliable and saying, hey, go race. Go race. Have fun out there.

Speaker 1:

And then it's like, you know, d And the specs immediately. Right?

Speaker 6:

Specs look awesome. It seems great. But then when you actually build your business on it, you put the future of your business onto something. The number one thing, it's not price. It's like, it better fucking work.

Speaker 6:

It better work.

Speaker 2:

But with NVIDIA, it works. Yeah. But react to this idea that Dylan Patel was joking about, that the TPU is a stalking horse. So this idea that Sam Altman is already saving 30% on his NVIDIA purchases effectively, because just the threat of going to a TPU is enough to get NVIDIA to make an investment or slightly discount in one way or another.

Speaker 6:

I I don't think that's reality, and I don't think that math actually works Okay. Because I think he's confusing the AMD deal where AMD gave, you know, free warrants to OpenAI. Mhmm. Right? First of all, the deal's not done.

Speaker 6:

Sure. It's a letter of intent.

Speaker 2:

None of these deals are done. They're not done.

Speaker 6:

AMD's done. They signed the agreement where they're giving away a percentage of their company through these free warrants. The NVIDIA deal hasn't been signed yet, and they actually said in October that it might not happen.

Speaker 1:

Sure. No. Yeah. So Doesn't doesn't OpenAI have to buy AMD chips in order to get the warrants?

Speaker 6:

Yes. Yes. Yes.

Speaker 1:

And so it still it still could be that they don't actually end up going through with the purchase and then they wouldn't get their That's your point.

Speaker 6:

Yeah. Yeah. So so the whole, like let's do a side step here with the circularity and all that

Speaker 2:

stuff. Yeah.

Speaker 6:

All this stuff, it's like one gigawatt at a time. There's milestones on OpenAI. There's milestones on AMD, technical milestones. They have to achieve certain targets. So all this talk about, you know, everyone loves the the big number that adds up five years of CapEx.

Speaker 2:

A lot. Like,

Speaker 6:

It could get the leverage could be up or down, depending on how things happen each year of the way. So, you know, it might not be that big number if OpenAI doesn't come out with an amazing model or AMD isn't able to hit the milestones they set for their next MI450 chip. Right?

Speaker 2:

Mhmm.

Speaker 6:

So, like, it you know, don't worry about five years. Like, take it one year at a time. Right now, demand is off the charts. Now going back to the NVIDIA 30% discount Yeah. Like, that's not how equity investments work.

Speaker 6:

If NVIDIA does invest $10,000,000,000 at a time, up to $100,000,000,000 Mhmm. Say that NVIDIA gets ownership of the company. It's not like a freebie. Right? You're giving away ownership of your company.

Speaker 6:

So it's not really a discount. You're getting you're getting, ownership of the company. So I I don't really believe in this 30% discount thing because, NVIDIA, Jensen will say they're they're investing to accelerate OpenAI, and, they they they they're looking forward to, you know, OpenAI going Yeah.

Speaker 2:

I mean, it's definitely creative. It's definitely a new structure. I'm just trying to I would steel man it in that, like, I'm an entrepreneur and somebody comes to me and they're like, I'm gonna invest $100,000,000,000 in your company over a series of milestones, and you're also gonna buy something from me. I'm like, yeah, I'm taking some dilution, but, realistically, like, this is way less of a headache.

Speaker 2:

Like, where else was I gonna get a $100,000,000,000 from in in in OpenAI's case? Like, it's a great it's a great source of funding that, yes, it will be diluted, but the whole structure is all diluted all the time because of all the different ownerships.

Speaker 6:

Except for this kind of sentiment thing that we had the last few weeks Yeah. OpenAI hasn't had any problem in raising money from venture Yeah.

Speaker 2:

Yeah. Yeah. It's true.

Speaker 6:

Yeah. Yeah. It's not just it's not like NVIDIA is the only source of funding for OpenAI. Like, everyone wants in. Yeah.

Speaker 6:

The revenue run rate, it's like 5,000,000,000 to 20,000,000,000 at the end of this year.

Speaker 2:

What was your take on the Code Red? What what do you think about the Code Red?

Speaker 6:

So I saw that you showed the Ashlee Vance

Speaker 2:

Yeah. Interview. Mark Chen really interesting.

Speaker 6:

Mark Chen was talking about how they kinda focused a little too much on reasoning and their pre-training muscle wasn't there. Reasoning we could talk about this later. Reasoning is, like, the biggest kinda accelerant of AI demand in the past year. So Sure. I think it's actually really good.

Speaker 6:

And supposedly, Ilya was, you know, doing the research for reasoning. Yeah. Reasoning is awesome. Right? But they they kind of focused on reasoning the the past year with o one and o three.

Speaker 6:

Sure. And now they're like, okay, we have to go back to pre training. Mhmm. So OpenAI knows that pre training still works, because Gemini 3 had great pre training results and Claude Opus 4.5 did.

Speaker 6:

So now they're gonna do the pre training. Mhmm. So they have their focus on one thing, and now they're gonna do the other thing and make their their model much better. I I do agree that OpenAI has been a little too maybe diluted. Like, they're doing apps.

Speaker 2:

Sure.

Speaker 6:

They're doing hardware. They want to compete in AI infrastructure against Microsoft and Oracle. They want to compete in AI chips against NVIDIA. Like, I I thought it was really interesting, like, Satya repeatedly said, he wouldn't name who he's talking to, but it's like, I think it's important that we realize this is not a zero sum game, and this could be a win win partnership. That was during the Anthropic Nvidia deal with Microsoft.

Speaker 6:

Right?

Speaker 2:

Sure.

Speaker 6:

He said that a couple times, and I think the the person that he's talking to is Sam Altman.

Speaker 2:

Mhmm.

Speaker 6:

Right? Yep. Makes sense. NVIDIA and Microsoft made OpenAI successful. They were the partners.

Speaker 6:

Like, why are you competing with your the partners that brought you to the dance?

Speaker 2:

Mhmm.

Speaker 6:

Right? Let's let's go back, focus on making the best AI model in the world, and, don't compete with NVIDIA and Microsoft. Maybe maybe five years from now, but, like, it seems a little aggressive to compete with them, right now.

Speaker 2:

Let's talk about China. There's been a ton of debate over NVIDIA selling chips to China, legacy chips, older chips. We've gone back and forth on it so many times. What's your current thinking about the the the the best policy for NVIDIA exporting chips to China generally?

Speaker 6:

I think the best thing is to keep it one or two generations behind Mhmm. The current state of the art. Yeah. Like, this is a really nuanced policy that people you know, everyone's either hawkish or dovish.

Speaker 2:

Totally.

Speaker 6:

Whatever. The best policy is to keep I don't wanna use the word that Howard Lutnick used that got China very upset and forced China to, like, tell its companies not to buy them Sure. But the best policy is to keep China still on the NVIDIA stack.

Speaker 6:

So NVIDIA gets $50,000,000,000 of revenue per year

Speaker 2:

Mhmm.

Speaker 6:

That can help r and d and fund r and d and make the chips even better.

Speaker 2:

Like Yeah.

Speaker 6:

NVIDIA and The US already won. Right? Yeah. They had 95% market share. Like, why are we gonna give $50,000,000,000 of oxygen to Huawei and all these other Chinese AI chip companies, so that now Chinese companies that need to buy AI chips are gonna buy Chinese AI chips?

Speaker 6:

Like, why not keep China on the NVIDIA tech stack one or two generations behind? Don't give them the best stuff, but maybe one generation behind. I think that'll be the best compromise

Speaker 2:

Mhmm.

Speaker 6:

For for for both sides. But I don't know.

Speaker 1:

What do you think what do you yeah. What where do you place a likelihood that, that the Chinese market has opened up again at at some point in the next twelve months?

Speaker 6:

Maybe fifty-fifty. I mean, it sounds like a cop out. Like, I was much more positive six months ago, but Mhmm. You know, this has been just so crazy. First, the Trump administration banned the H20.

Speaker 6:

Yeah. Then they didn't ban it. They said it was fine. But then China was like, no, you hurt our feelings. We're not gonna let companies buy the

Speaker 2:

h 20. Yep. And then they banned it.

Speaker 6:

And then Yeah. And then maybe Trump is gonna let NVIDIA sell the H200. Yep. Or a China-specific version of Blackwell that's kinda, like, hobbled a little bit. Mhmm.

Speaker 6:

Who knows? Like, NVIDIA needs to convince the Trump administration and then China to buy the chips. The worst part of it is China was willing to buy the H20, and with all the geopolitics and hurt feelings, you know, that ship has sailed. So I don't know what's gonna happen. But I do think the ideal situation is NVIDIA could sell one generation behind, make $50,000,000,000 a year, and keep the competition from Chinese AI startups out of the way.

Speaker 1:

Yeah. And even understanding that at some point in the future, China's buying effectively zero chips from NVIDIA, but it would be five, ten years in the future? It's like you have to you have to assume that, like,

Speaker 6:

it's a

Speaker 1:

Go for it.

Speaker 2:

Yeah. Go.

Speaker 6:

The amazing thing is, sorry, it's 0%. Right? And NVIDIA's revenue accelerated. For the first time in two years, NVIDIA's revenue accelerated in this latest quarter. And this is, like, not talked about enough.

Speaker 6:

This is the first quarter that the NVL72 AI server has been available in volume. And then revenue just skyrocketed without China, which is incredible. Right? And that's why I'm so bullish over the next few years or next few quarters, let's say that. Because this product cycle is gonna last at least three, four quarters.

Speaker 6:

The key tell is the revenue acceleration, first time in two years. Mhmm. And the other key tell is that the networking segment for NVIDIA was up 162% year over year. Mhmm. And typically, a lot of these data centers and these neo clouds buy the networking stuff six months ahead of time.

Speaker 6:

Mhmm. So six months from now, like, the GPU numbers for Blackwell and the NVL72 server are just gonna be bonkers. It's gonna be off the charts. And people don't talk about these NVL72 AI servers. They're $3 to $4,000,000.

Speaker 6:

Right? They're 72 GPUs, 144 dies, one and a half tons, 5,000 cables, and the prior version was eight GPUs. So these AI servers, I call it the iPhone 3G moment. Do you guys remember the iPhone 3G?

Speaker 2:

Like This is big for the Christmas shoppers out there. If you if you want a gift that's a step up from the NVIDIA way, I recommend bundle something. Picking up an NVL 72.

Speaker 1:

Yeah. It's only

Speaker 6:

$4,000,000.

Speaker 2:

Yeah. But for the right 12 year old in your life or or the potentially I think Tyler

Speaker 1:

It won't fit under the tree, but it could be fun

Speaker 2:

keeping the It's like it's like a Lexus. You put it in the in the in the driveway with the bow on top. Yeah. Yeah. Yeah.

Speaker 2:

The way it does. The fork

Speaker 1:

you drive the forklift come drop off.

Speaker 2:

Exactly. So here's the fork.

Speaker 3:

I got

Speaker 2:

you an NVL 72. Enjoy.

Speaker 6:

And then we have the reasoning model thing with exponential compute, and, like, companies are actually seeing huge gains: Cursor, 40% productivity gains. Yeah. C.H. Robinson, 40% more shipments. A mortgage company, 80% reduction in paperwork Yeah. Processing costs.

Speaker 6:

This is, like, the next year because of AI reasoning, because of the NVL72, that's why Amazon and Microsoft, every quarter this year, raised their CapEx. And everyone's like, they're gonna cut their CapEx. They're gonna no. Every single quarter, they raised their CapEx. That's because they're seeing the demand.

Speaker 6:

And that's why Amazon and Microsoft are gonna double their data center capacity over the next few years. Mean, that that that's crazy. Right?

Speaker 2:

Yeah.

Speaker 6:

In September, there were more data centers leased than in the entirety of 2024. This is, like, an exponential step function up, and people aren't talking about it. They want to talk about TPUs destroying NVIDIA.

Speaker 1:

About What what about some of the kind of demand guarantees that have been happening? Is that a is that a concern at all? Do you think about it much? Is it

Speaker 6:

Not really. I mean, demand guarantees like you're talking about NVIDIA and CoreWeave. It's like, when that happens, analysts every quarter are on the conference call, like, did you use that, you know, demand guarantee? Like, it's not happening yet. Like, CoreWeave's five year old GPUs that everyone says are useless are 100% utilized.

Speaker 6:

Right? An H100 massive cluster, before it expires probably a three year contract they got, like, 95% of the pricing. This is, like, unheard of. And the reason is there's overwhelming AI demand, and there's not enough capacity. Overwhelming AI demand, not enough capacity.

Speaker 6:

And people just don't care about what's happening in the real market. This is real life facts, evidence, numbers. NVIDIA going from 56% revenue growth to 62% revenue growth on $57,000,000,000 of quarterly revenue. I mean, these are bonkers numbers. We talk about the stock price being up 10x.

Speaker 2:

That's Their

Speaker 6:

revenue is up 10 x in, like, two years. This is like beyond history, the last thirty years of following technology.

Speaker 1:

I love you're the you're the only person without NVIDIA fatigue. You're just like, you're not bullish.

Speaker 2:

David Goggins of

Speaker 1:

yeah. I

Speaker 6:

I mean

Speaker 1:

They can't keep running. They can't keep running.

Speaker 6:

This is not me just, like, making stuff up. This is, like, the numbers are there right in front

Speaker 2:

of everyone. Good points. You make good points. I I like it a lot. We'll have to have you back on the show soon.

Speaker 2:

This is a

Speaker 1:

lot Yeah. Let's do it. Let's let's make this a regular thing.

Speaker 2:

Yeah. Super fun to

Speaker 1:

to have you on finally.

Speaker 2:

Thanks so much

Speaker 1:

for Thanks for all your reporting time.

Speaker 2:

The book is The Nvidia Way. Get it wherever books are sold. Get 10 copies. Give a copy to everyone in your life.

Speaker 2:

Also, give the gift of Fin.ai, the number one AI agent for customer service. Automate your most complex customer service queries on every channel.

Speaker 1:

Up next.

Speaker 2:

Oh, yeah. We we can just go straight into our next guest.

Speaker 1:

Let's bring in

Speaker 2:

We have Tarek from Kalshi with some massive news. Tarek, great to see you. How are you doing? From the office floor. Stream.

Speaker 10:

Good to see you. Hey, guys. Thanks for having me. Very excited to be here.

Speaker 2:

You are locked in. Look at that backdrop. Fantastic. Please, introduce yourself. You've been introduced.

Speaker 2:

Give give us the update. What's the news? Let's ring the gong.

Speaker 10:

Well, we just raised our Series E. We just raised a billion dollars at an $11,000,000,000

Speaker 3:

valuation.

Speaker 1:

Great great wind up. Great wind up.

Speaker 10:

Honestly, I was waiting for the gong. Congratulations. Fucking sick moment. How's it going, guys?

Speaker 1:

Yeah. Great to have you on. I don't I was thinking over the I don't know if anybody had a crazier Thanksgiving holiday than you. It was there was a lot going on last week. So nice to come out of that with a big announcement.

Speaker 1:

But yeah, maybe kind of just update us on I think everybody's has been following the prediction market wars. The the the more important story, I think, is just how Some people are calling

Speaker 2:

it bloodbath, actually.

Speaker 1:

Yeah. I mean, it's been a battle Too real. On the timeline. But, yeah, I think, like, what's happening in the background is, like, this explosion of this, you know Mhmm. New asset class that, you know, again, I think in your announcement earlier, you were saying a few years ago nobody really cared at all.

Speaker 1:

And now, you know, you and the industry broadly have millions of users. So it's pretty unprecedented. But yeah, what's been the latest on your mind?

Speaker 10:

I mean, I think the the thing that's happening right now is prediction markets, I think, have gone mainstream.

Speaker 2:

I

Speaker 10:

I think every inch of evidence is pointing towards that, and I think that the the thing that we're seeing is there's sort of one of these rare shifts in consumer behavior that you you don't see often. Like, they they don't happen. Changing the behavior of a customer, the habits of a customer is a rare thing, and it's unique. And when you see it, you have to really go after it with all your might. And it's, you know, there's a number of things that have to align for that to happen, and I think they're aligning for prediction markets.

Speaker 10:

I think it's happening, and I think one factor is the fact that people are not really trusting the legacy media and legacy sources of information, and they go to prediction markets to get smarter. The other one is that they're legal now. You know, Kalshi took on this battle over years to legalize this entire market and set it up as a legitimate financial asset so that anyone can participate. And three, I think we sort of caught wildfire this year. I mean, I think we're seeing people there's literally this phenomenon where you cannot watch a sports game without looking at the Kalshi odds live and the Kalshi charts.

Speaker 10:

You cannot debate about a topic, about the future, without, you know, telling somebody to put a position on Kalshi in the app. So it's a huge announcement. We're very excited about it. And honestly, it really feels like we're just scratching the surface of what prediction markets are gonna be.

Speaker 2:

One thing I noticed when I'm watching a sports game is there's sometimes integration with Kalshi, sometimes with a competitor. What's actually going on?

Speaker 1:

You know, legacy sports book.

Speaker 2:

Yeah. What what is actually going on? I feel like a lot of people who are just passively observing the timeline are seeing a lot of, like, announcements and partnerships with

Speaker 1:

The partnership economy.

Speaker 2:

The the part and people are joking about it. Like, what's actually going on? What's at stake with some of these partnerships? What have you done, and what does it actually mean? Because it feels like if you do a partnership with a specific league, that doesn't necessarily mean that I can't get odds on that event somewhere else.

Speaker 2:

So what what is actually going on with the partnership economy?

Speaker 10:

I mean, I'll tell you kind of our approach to this. So so we are building you know, the the our focus is building on a business. It's very metrics driven,

Speaker 9:

you know,

Speaker 10:

and sort of for context, we're doing a billion and a half of volume a week now. Wow. And, you know, we're the market leader by a meaningful margin. I think depending on sort of how you measure it, we're at something around 80% to 90% market share now. And I think any partnership we do, we bucket them in a bunch of categories, but they're all focused on, like, actually driving legitimate volume and legitimate use cases into their products.

Speaker 10:

So our partnerships with platforms like Robinhood I mean, Coinbase leaked, it's coming in December and PrizePicks, Webull are kind of in that bucket. Then we have a series of partnerships coming around news, one of them leaked this morning in the New York Times article, but they're also very

Speaker 1:

One sentence, 10 leaks.

Speaker 10:

Everything leaks these days, and we're just like, nothing is news anymore, it's all leaks. But the point is we're focused on things that drive legitimate use to the product and then drive legitimate utility to the partner. Whether it's a broker obviously, this could be a big revenue line for them. And if it's a news network, it's a complement to their reporting Mhmm. That actually makes their reporting more accurate, and reporters love truth and prediction markets bring truth, so Got it. You could see the synergies and how they fit.

Speaker 2:

Okay. Yeah. Yeah. Yeah. That makes a lot of sense.

Speaker 2:

Yeah.

Speaker 1:

What yeah. I I think the some of the some of the big news out of last week is that Robinhood is is entering and kind of potentially trying to verticalize the product experience on their side. What can you say about the I guess, like, how you see the structure of the market evolving? You guys are an exchange. Robinhood is a brokerage.

Speaker 1:

Sounds like they're trying to actually build an underlying exchange themselves. Mhmm. How much should how much should sort of observers of the industry look to how the how stock trading and stock markets, stock exchanges have evolved versus prediction markets? Like, what does this market kind of look like in five years, ten years as much as you can kind of pull out a crystal ball for us?

Speaker 10:

I mean, maybe the basics is like and you've seen this a little bit in AI. Right? After you see the success we've had, it's it's basically indicative of like, okay, there's a massive market opportunity ahead of

Speaker 2:

us.

Speaker 10:

And when that happens, I think you're inevitably gonna see a ton of competition. And generally in those markets, this sort of massive surge of competition whether it's brokers, or some of the sportsbooks like DraftKings and FanDuel coming in is just usually a sign that there's a lot of good things to come for that market. Right? It's a sign that, like, you have big companies reprioritizing their entire roadmaps to go all in after this, and that's a positive for us. Like, we're the market leader in a market that, you know, everybody is starting to believe is going to be ginormous.

Speaker 10:

In terms of the specific question on market structure, I mean, you know, we obviously have the exchange. We also have our direct products. In some ways, ours is competitive with some of our partners, and I think, you know, the same way that we're working with a lot of different brokers, over time some brokers are gonna sort of diversify and work with a number of different exchanges. And that's how these sorts of market structures evolve over time. And the only thing that matters, the thing that stands out similar to any other market is product and product velocity: are you putting out products faster than anybody else, and are you putting out products better than everybody else? And I think Kalshi has had a pretty incredible track record of setting the pace in the industry.

Speaker 10:

At least if you look at the last year, we've set the pace in the industry and everyone's following. And I and I feel pretty good about us continuing to do so in next twenty four months.

Speaker 2:

How do you think about the market structure? I think everyone's wondering, like, obviously, this is a new market. It's unlocking entirely new sorts of asset classes, and it's obviously big. Everyone's excited about the numbers, but is this a natural monopoly? Is this a duopoly?

Speaker 2:

Like, how many winners will there be? How do you even think about the market structure? Is there some return to scale?

Speaker 10:

It's interesting. Like, I kinda, like, don't think much about that. Like, investors do this thing where they're sort of gonna rationalize all of it in five years, and everybody's gonna be super smart about how they all figure it out. But look, I think that it's a very nascent thing, right? It has some similarities to rideshare, it has some similarities to the DraftKings FanDuel era when that happened, it has some similarities to the online brokerage industry, and it has some similarities to financial exchanges like CME.

Speaker 10:

Where does it fall? Probably somewhere in between all of these sorts of buckets Mhmm. And probably not exactly the same as any of these other buckets. And I think you'll see more of that: with enough scale in financial services, but really in any industry, everyone gets into everyone's territory. And so the only thing that matters, again, is sort of which companies are going to rise above the others in terms of product velocity and product quality.

Speaker 10:

Yeah. And that's just what we're narrowly focused on.

Speaker 2:

There's a question from the chat. Can you explain how external market making works on Kalshi?

Speaker 10:

That's been, for some reason, a hot topic recently, but, you know, market makers are a part of any financial market. You kinda need them to basically have liquidity in markets, and actually, Kalshi and prediction markets have less customer-to-market-maker flow than traditional markets. If you look at options, for example, the vast majority is, you know, Jordi to a market maker like Citadel, whereas on Kalshi, the vast majority is, you know, Jordi versus John, and then some of it goes to market makers. And it's an open, transparent order book where everyone's competing on price.
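
To make the order-book point concrete, here is a toy sketch of price-time-priority matching on a single binary contract, the general mechanism behind an open, transparent book where customers and market makers compete on price. The data model and names are illustrative assumptions only, not Kalshi's actual matching engine or API.

```python
# Toy price-time-priority order book for one binary contract (prices in
# cents, 1-99). Illustrative only; not Kalshi's actual engine or API.
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str    # "buy" or "sell" (YES contracts)
    price: int   # limit price in cents
    qty: int

class Book:
    def __init__(self):
        self.bids = []   # resting buy orders, best (highest) price first
        self.asks = []   # resting sell orders, best (lowest) price first

    def submit(self, order: Order):
        fills = []
        rest_side, opposite = (self.bids, self.asks) if order.side == "buy" else (self.asks, self.bids)
        # Cross against the opposite side while prices overlap.
        while order.qty and opposite:
            best = opposite[0]
            crosses = best.price <= order.price if order.side == "buy" else best.price >= order.price
            if not crosses:
                break
            traded = min(order.qty, best.qty)
            fills.append((order.trader, best.trader, best.price, traded))
            order.qty -= traded
            best.qty -= traded
            if best.qty == 0:
                opposite.pop(0)
        # Any remainder rests at its limit price; stable sort keeps time priority.
        if order.qty:
            rest_side.append(order)
            rest_side.sort(key=lambda o: -o.price if order.side == "buy" else o.price)
        return fills

book = Book()
book.submit(Order("market_maker", "sell", 63, 100))  # MM quotes YES at 63c
book.submit(Order("jordi", "sell", 62, 50))          # a customer improves the price
print(book.submit(Order("john", "buy", 65, 80)))     # buyer lifts the best offers first
```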

Speaker 3:

Mhmm.

Speaker 10:

And we actually have a separate company called Kalshi Trading that trades on the exchange, but they're a very small percentage of any liquid markets. Really, their function has been for new markets, ones that are a little bit weird.

Speaker 2:

Liquid markets. That makes sense because if it's some really, really niche thing, who's gonna put in the first $500? Exactly. Take the risk.

Speaker 10:

And they're not very profitable. It's actually, we really, they're really focused on providing a good customer experience so that we bootstrap markets Yeah. Rather than being any meaningful part of the business model today. And if we took it out, it'd actually be a worse experience, so I think removing it would definitely not be positive for the ecosystem. It's a bit like Uber, you know, with the interests that got adversely impacted.

Speaker 9:

Sure.

Speaker 10:

The taxis, they were coming out with all these reasons, right? Like, you know, all these kind of random reasons, but I don't think there's much truth to it.

Speaker 1:

Yeah. So I'm sure you can't comment on any specific lawsuit. There's a number

Speaker 3:

I can.

Speaker 1:

Of them. But I think there are quite a lot of prediction market experts that have looked at some recent lawsuits against prediction markets and said, they clearly don't understand how this works. Can you comment at all on some misunderstandings broadly?

Speaker 10:

Yeah, so what Kalshi has done is first regulate prediction markets as a financial instrument under this agency called the CFTC. People have been hearing more about the CFTC recently because it also regulates crypto. And that's one of the main financial agencies. There's the SEC that does stocks and the CFTC that does commodities. And then we did the same thing with elections, and now we did the same thing with sports.

Speaker 10:

And the way that it works is, like financial markets, those are regulated at the federal level. And so the law around these markets is just federal. They kind of report to a federal government and federal regulator, not a state government and state regulator. And there's a bunch of reasons for that, but it's kind of how the Constitution was formed, which is some stuff makes sense at the federal level and some stuff is more local and makes sense at the state level. And we are one of these things that falls under the federal level, and federal law preempts state law.

Speaker 10:

So if you are okay on the federal side, state law doesn't really apply to that exchange. And that's why we have one regulator, which is the CFTC, our federal regulator. And again, I think it's normal when something so disruptive happens to an industry, the people that are adversely impacted are going to come after it and come up with all sorts of arguments for why it shouldn't exist or why, you know, Airbnb was terrible and all these different things. But at the end, the thing that drives it long term is: is this a great product, and are consumers loving it and using it? And the answer is yes in this case.

Speaker 2:

Mhmm.

Speaker 1:

Off of the success of Kalshi and Polymarket, there's been a bunch of net new prediction market startups created. Is there a possibility that this market, like, ends up having these sort of niche, maybe more vertical marketplaces? Or do you think that the platforms with the greatest and deepest liquidity will ultimately just absorb those submarkets?

Speaker 10:

It depends on how narrowly we define prediction markets versus broadly. Like, I really think of prediction markets as kind of just a next-gen expansion of financial markets to touch anything. Kalshi means everything in Arabic, and really, if markets kind of progressively grew over time, what Kalshi did is just kind of widen dramatically the set of things they could touch. So I could see some, you know, startups innovating on specific verticals over time and doing reasonably well, but there is real concentration of liquidity and concentration of volume that happens in these types of markets that is hard, I would say, to battle with. And so at least from that aspect, I think the cards are probably mostly shuffled already.

Speaker 1:

Mhmm. That makes sense.

Speaker 2:

Last question. There was a viral clip of you talking about Donald Trump Jr. Do you have anything more to share on his involvement? Because I was watching that, and I was like, yeah, it's kind of like, hey.

Speaker 2:

Where are we going with this thing? It seems like politicians have a deep insight on how campaigns use these prediction markets, but can you share anything more about his involvement in the company?

Speaker 10:

Yeah. I mean, first of all, that clip is a clip, and you know how these clips are taken. You know

Speaker 1:

Who needs context?

Speaker 10:

Yeah, we don't need context. But anyways, one of the main products that took us mainstream was the election market, and that brings a lot of attention from politicians on both sides of the aisle, and you saw it: Trump at the time was using his prediction market odds during the election, and actually Mamdani more recently was using his Kalshi odds pretty consistently during his election. So in some ways you're gonna see a lot more of this. Prediction markets are gonna touch financial markets, they're gonna touch the news, and they're gonna touch the political process, because they bring more truth to all of the above, all of these categories, and in some ways it's good that we get more and more, I would say, politicians involved and engaged with these markets. The one thing I'll say about this, and again, it's in the same bucket as the other things that we discussed, where there are industry dissidents that are against prediction markets and find all these different reasons for why prediction markets might be bad: the thing that happened, and it's not this administration necessarily, even though this administration is pro-innovation, is we won that lawsuit on the election market, which has really redefined the landscape, the boundaries of what a financial market is, and that lawsuit was won at a court of appeals in DC with a very progressive panel.

Speaker 10:

It was a panel of Democratic judges where we won three to zero. So people wanna make it out to be a partisan issue, even though I don't think truth needs to be a partisan issue. It's just, you know, these markets work, people love them, and they generate a lot of insight, and I think that will win the day at the end of the day.

Speaker 1:

Last question from my side. How does the CFTC view it when a market participant has some type of alpha or nonpublic information Yeah. And they're betting on a market, you know, based on that information? From my view, as somebody who, like, gets data from, you know, we work with Polymarket. We use Polymarket data on the show.

Speaker 1:

If somebody has inside information and they're trading on that information, it actually makes the markets more accurate. So in some ways, as a user who's just viewing markets, I want people that have inside information on global events to be trading so that the markets actually better reflect reality. But what is the CFTC's view on that type of activity? Because things get thrown around all the time, insider trading this or that, but I don't actually know what the actual Yeah. Law

Speaker 10:

That's actually a great question. It's a point of debate in this land, but I think there are some distinctions. So Kalshi is a regulated exchange, so with everything we do, in some ways, a lot of the laws and the rules are very similar to what you would expect at the New York Stock Exchange and some of the traditional financial markets. The question of insider trading is interesting because what you just said could also apply to the stock market. Right?

Speaker 10:

Like, if you want to accurately price the stock, maybe we should let insider trading happen.

Speaker 2:

Sure.

Speaker 10:

And the reason why it's actually not allowed is because it makes the game unfair. It makes the market unfair, and if the market is unfair, liquidity dries up. People just stop participating. Yep. Right?

Speaker 10:

And that's why you have to have reasonable rules of the road where people can reasonably expect to be treated fairly in this marketplace, where there's no kind of asymmetric or structural advantage for one participant versus the other. And we take a similar approach here. So if you actually have insider information, which is information that you're not supposed to reveal to the public, you're not supposed to trade on it, because trading on it is a way to reveal it to the public. And so that makes a more balanced, more fair marketplace, and I think we're very focused on that. But it's a very interesting question.

Speaker 10:

It's one the industry is battling with, but we take a hard stance on insider trading.

Speaker 1:

Yeah, because if somebody goes and they vote in a local election, and they see, like, okay, I talked to somebody there, and they said they were voting this way, I talked to another person, they also said they were voting this way. And then somebody trades on that information. Like, how do you define that type of activity? Right? It's like anybody could go down to the polls or the voting center and just, like, ask the same question, right?

Speaker 1:

So anyways.

Speaker 10:

Well, I was gonna say it's the same as the stock market. Right? If you go and sit in front of Walmart and count everybody that's going in and out during the day and forecast their sales from that, that's actually fair game. Now, if you call your cousin at Walmart and ask them for information they have internally that they're not supposed to reveal to the public, that's insider trading. I think we have a very similar line here.

Speaker 1:

Yeah. Yeah. Yeah. That makes that makes sense. Mhmm.

Speaker 1:

Very cool. Well, helpful. And, yeah, congrats to the whole team. It's a pretty massive milestone.

Speaker 2:

Huge.

Speaker 1:

And, yeah. Great. Great getting the update.

Speaker 2:

Thanks so much for taking the

Speaker 10:

time, guys.

Speaker 4:

Thanks for having me.

Speaker 2:

We'll talk to you soon.

Speaker 1:

Talk soon.

Speaker 2:

Have a good one. Adquick.com. Out of home advertising made easy and measurable. Plan, buy, and measure out of home with precision. Our next guest Precision.

Speaker 2:

Matt Mullenweg from Automattic. He is in the Restream waiting room. Let's bring him in. They are set up. Parent company of WordPress.com, Tumblr.

Speaker 9:

I think are gonna remain

Speaker 2:

Welcome to the show. Matt, how are you doing? I think we have you on a hot mic. We might have you on a hot mic. Hopefully not.

Speaker 2:

Welcome to the show.

Speaker 9:

About to come on the screen? Yes. Alright. Well, a little drum roll. You'll

Speaker 2:

go about this.

Speaker 9:

Experience TBPN. Let's time.

Speaker 1:

So, Matt, Our audience

Speaker 2:

We're streaming on them. They're streaming on us. Are you doing?

Speaker 1:

Each other.

Speaker 2:

Oh, we're clap too. Fantastic. How are you doing? Good to meet you.

Speaker 6:

Howdy. Howdy.

Speaker 9:

Thank you so much for, I know this is a little nontraditional, so we're, like, two hours into our big annual address, the State of the Word. It's kind of like our State of the Union speech. But thank you so much for allowing us to connect. A lot of folks in the room have never heard of or seen TBPN before. So this will bring a lot of new folks into your world, and I'm excited for some of your world to learn about WordPress.

Speaker 2:

Yeah. Give me the state of the word. And then also, I want your personal word of the year. We've been debating what the word of the year should

Speaker 9:

be over here.

Speaker 2:

Oh.

Speaker 9:

So state of the word, and I'll say the state of the word is strong.

Speaker 2:

Okay. There we go. That's good. Let's hit the gong.

Speaker 1:

We're hitting the gong for that one.

Speaker 2:

The strong state of the word.

Speaker 9:

We actually just did a live release of WordPress 6.9. Okay. So WordPress does major releases three times per year. We were able to do it right here on stage. We had a little button that we pushed.

Speaker 9:

We got

Speaker 2:

it all the time. I love it.

Speaker 9:

And that was pretty fun. Don't worry, we didn't just ship it again. You know, one of the things about WordPress is it's not just built by one company, it's a community. WordPress 6.9 had over 900 contributors from all over the world. Different countries, different languages, different companies all coming together.

Speaker 9:

And so, that was pretty exciting. My word of the year, and actually a theme we were just talking about: I'm going to choose freedom. Freedom.

Speaker 1:

Powerful.

Speaker 9:

As technology, like, starts to influence more and more of our lives, you know, how we travel, who we date, the things we learn, the news we're exposed to, you know, the sort of freedoms that are embedded in an open source license matter. I like to refer to open source licenses sort of like a bill of rights for software. It gives you inalienable rights that no company or person can take away from you. And that freedom and agency, I think, is really, really important and something that I think, you know, as technologists or builders, we should try to embed into everything that we do.

Speaker 2:

Give us an update on Beeper. I was super fascinated by that product. I love walled gardens. I also love tearing down the walls of gardens. It seems like a good shot across the bow of the iMessage walled garden.

Speaker 2:

How's the progress going there? Are you using the service personally daily? Are we gonna see a lot of growth there?

Speaker 9:

Well, obviously, I'm using it daily. So, I would think of it not as, like, replacing a walled garden, but more like allowing your gardens to come together. Mhmm. So, I'm sure you're like me. I have friends on lots of different networks.

Speaker 9:

Mhmm. And some of them always love to use WhatsApp. And some of them always love to use, you know, Instagram or LinkedIn DMs sometimes. I even get some interesting stuff there. And I hate it when I miss these messages.

Speaker 9:

Yeah. You know, because, you know, checking all the different apps, sometimes in the notifications, I might miss something. So, think of it not unlike how email clients, you know, can bring in lots of different email accounts. Beeper takes all the different networks where your friends already are and maps them together. Now, the plus and minus is that it's not gonna replace the networks.

Speaker 9:

Like, I still keep all the different sort of specialized messaging apps because, for example, if someone sends you an Instagram story, when you click on that, you're gonna want to load Instagram. So I think of it as complementary and hopefully even increasing the usage in a very small way right now. It's pretty nascent. But in the future, think of it as sort of a different interface. So you might still have the dedicated apps, but then you have this all-in-one inbox where you can sort of manage everything, tag people, have folders.

Speaker 9:

And there's cool features like scheduled messaging across all platforms, or even just, like, weird heuristics that are pretty simple to do. But, like, don't just show me unread, show me all the people I've messaged that haven't messaged me back yet.
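
The "who hasn't messaged me back" heuristic really is simple to sketch. Below is a minimal illustration over a unified inbox; the Message shape, network names, and contacts are made-up assumptions for the example, not Beeper's actual data model.

```python
# Toy "awaiting reply" filter over a unified inbox. The schema and names
# are hypothetical; any aggregator with per-thread message history would do.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    network: str      # "whatsapp", "imessage", "instagram", ...
    contact: str      # the other party in the thread
    outgoing: bool    # True if I sent it
    sent_at: datetime

def awaiting_reply(messages):
    """Return threads whose most recent message is one I sent."""
    latest = {}
    for m in sorted(messages, key=lambda m: m.sent_at):
        latest[(m.network, m.contact)] = m   # last message wins per thread
    return [m for m in latest.values() if m.outgoing]

msgs = [
    Message("whatsapp", "alice", True, datetime(2025, 12, 1, 9, 0)),
    Message("whatsapp", "alice", False, datetime(2025, 12, 1, 9, 5)),   # alice replied
    Message("instagram", "bob", True, datetime(2025, 12, 2, 14, 0)),    # bob hasn't
]
for m in awaiting_reply(msgs):
    print(f"{m.contact} on {m.network} since {m.sent_at:%b %d}")
```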

Speaker 2:

Oh, yeah. Sure. Sure. We've talked to some young hackers, some startups who are building, you know, sort of Beeper competitors, and their whole value prop is, like, we figured out a way to get it into the iMessage ecosystem. Do you think we need a new regulation there or some sort of law change or some result to actually open up iMessage, or do you think that, with enough tricky hacking, it can be done?

Speaker 9:

Well, technically, it's it's not hard. Well, it is hard.

Speaker 2:

It's hard, but it's

Speaker 3:

It's very

Speaker 9:

possible to reverse engineer these networks. Yeah. However, as we saw with sort of a previous iteration of beeper

Speaker 2:

Yeah.

Speaker 9:

If the network really, really doesn't want you to do that it's probably not good to pick a fight with a trillion dollar company.

Speaker 2:

Yeah.

Speaker 9:

So, perhaps these things might happen through open source or something. As a commercial company, I think ultimately you have to be somewhat respectful and try to complement these networks. Mhmm. So, how Beeper works today is we don't support iMessage on mobile or Android. In theory, we could, but Apple has indicated that's something they don't want.

Speaker 9:

We do support it on macOS clients. We have a way to integrate with iMessage using some APIs that are available on macOS. So, on macOS, we can bring in your iMessage.
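
For context, one common community approach to reading iMessage on a Mac (not necessarily what Beeper does) is to query the local Messages database the OS keeps at ~/Library/Messages/chat.db; the terminal running it needs Full Disk Access. A minimal, read-only sketch:

```python
# Read recent iMessages from the local Messages SQLite database on macOS.
# Illustrative of the general approach only; not Beeper's implementation.
import sqlite3
from pathlib import Path

db = Path.home() / "Library" / "Messages" / "chat.db"
conn = sqlite3.connect(f"file:{db}?mode=ro", uri=True)  # open read-only

rows = conn.execute(
    """
    SELECT handle.id, message.is_from_me, message.text
    FROM message
    JOIN handle ON message.handle_id = handle.ROWID
    WHERE message.text IS NOT NULL
    ORDER BY message.date DESC
    LIMIT 10
    """
).fetchall()

for contact, from_me, text in rows:
    prefix = "me ->" if from_me else "<-"
    print(f"{prefix} {contact}: {text}")
```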

Speaker 2:

Got

Speaker 9:

it. But again, I'm building this for long term. Yeah. And we are a commercial company as well. Sure.

Speaker 9:

So, you know, we want to work with the networks. And, you know, perhaps there can be regulations like the European DMA or things that can encourage interoperability. But ultimately, I think that the sort of people who run these networks have to see a longer term benefit for them. And for things like, you know, some of the other networks I mentioned that Beeper works with, I think their business model and everything, the increased usage is really useful for them. I think for today, Apple's business model, particularly in the US, kind of the lock-in effect to their device business, which is of course where they make a lot of money from iMessage, probably indicates that, unless forced to, I doubt they will adopt iMessage interoperability.

Speaker 9:

But who knows? Sort of like they used Lightning for a while and eventually got USB C, and all of our lives got better. Who knows what will happen in the future?

Speaker 1:

Talk about links on the Internet. I feel like we're at a point in time where social media platforms are trying to keep users in their own applications so that they can monetize them to the fullest extent. Meanwhile, you have LLMs, which are ultimately doing a lot of the same thing. They're taking content from all over the Internet, trying to keep users in the individual applications. Feels like WordPress in many ways is making moves to kind of, like, almost fight back against that.

Speaker 1:

I might have that incorrect. But I feel like it's important. If you're running a business independently online, it's great to have people on your own website so you can develop a deep relationship with them. But what is your view on that? We're very much anchored around X as a business.

Speaker 1:

Obviously, X has had issues with links or, you know, chosen to demote them in the algorithm over the last couple years. But give us kind of the state of the union on links.

Speaker 9:

That's a broad one. Well, I will say X is actually a great example. And I've talked to Nikita about this. So now they've shifted some of the balancing of links, and they have this really nice sort of in-app browser. So you've probably noticed that now.

Speaker 9:

That when you load a link, you actually still have the ability to, like, like and reblog and everything. And I think that's kind of the future. So, I do think that you can have things that are complementary, 'cause so much of the, like, great content and everything is more on this open web. It doesn't have to be fully embedded in the app. But that is sort of a technological change.

Speaker 9:

So I would say, I actually point to X as someplace where I think things are going in the right direction. Although, I do agree that the time when links got really deboosted and everyone had to put them as, like, a reply was kind of weird and sucked. Yeah. So, for WordPress publishers, you know, we support so many different types of websites. And different types of websites, I think, might have different motivations.

Speaker 9:

So, for example, a popular plugin for WordPress is called WooCommerce. It's an e-commerce plugin. About 8.9% of all websites in the world are now running this e-commerce plugin. You can think of it like an open source Shopify. And if you're selling something, as a merchant, you just want to sell the product.

Speaker 9:

You might not necessarily care that someone comes to your website to buy it. Yeah. So, some of the new things that are happening in partnership with OpenAI and others, where we're allowing products to actually be browsed and bought inside of the LLM, are pretty exciting. I also think that the incentives of these open source chatbots in particular are very complementary to the open web. So, for example, if you're on Amazon, Amazon really wants you to stay, or eBay, or Etsy, or something like that.

Speaker 9:

They want you to stay in their marketplace, on their system. But when you think of how Google works, and sort of the growth of Google in the open web, you know, they have their search pages, but they also would link out. And that was Yeah. A whole part of their business model and how they grew. We're seeing that with the chatbots as well.

Speaker 9:

And in fact, something I talked about a little bit earlier is that the traffic from bots, both from them crawling, but also user-initiated actions, is exploding. Mhmm. And has already surpassed sort of human traffic, and it'll be interesting to see where that goes in the future. So, you know, there's never been a better time, I think, to invest in having a domain, but also to invest in publishing. And, you know, just like you might have a direct relationship, like, for example, I suppose I could get, you know, ChatGPT to summarize today's TBPN episode.

Speaker 9:

But it's more exciting to watch it. Yeah. I think that creators developing a direct relationship and brand is gonna be part of the future as well.

Speaker 1:

Very, very cool. Well, there's so many more things that I wanna ask. Yeah. But I know you're in the midst of of your own presentation. So thank you for tuning in.

Speaker 1:

Come back on soon. And thank you for having us. The view is spectacular as well.

Speaker 9:

It's a pleasure. I'd love to come down and hang out when I'm in LA next. Thank you so much.

Speaker 2:

We'll talk to you soon.

Speaker 1:

Great chatting.

Speaker 2:

A good rest of your day.

Speaker 1:

See you. Bye. That was the first..com.

Speaker 2:

Exceptional sleep without exception. Fall asleep faster, sleep deeper, wake up energized. That's our sponsor. We have

Speaker 1:

Ninety four last night, John. Smoked you again. Lost my phone. Lost your phone. Well, we have Jason Fried in the Restream

Speaker 3:

waiting room.

Speaker 2:

Let's bring him in. The TBPN show. Jason, how are you doing? Good to see you.

Speaker 3:

Good. You?

Speaker 2:

Congratulations. Massive news today. Break it down for us. What's up?

Speaker 3:

Was there big news

Speaker 1:

today?

Speaker 3:

I I missed the news.

Speaker 2:

What was the news?

Speaker 1:

Oh, you're news.

Speaker 2:

You're you're just calling out

Speaker 1:

You are the news.

Speaker 2:

Trello. He's naming names. A lot of people don't do that. A lot of people say, oh, the competitors, the best-in-class solutions, the Gartner hype cycle. No.

Speaker 2:

You called them out. You you put them

Speaker 6:

on the map.

Speaker 3:

We had some fun. Yes. We launched a new product today called Fizzy, which is kind of a fresh take on Kanban, an old idea. Obviously, it's been around for a long time. But we know we have a different spin on things, a different take on things, and felt like it was time to do something new and kind of bring it back to the basics and also add some fun and color and vibrancy, which is missing in the software industry.

Speaker 3:

I feel like the people might be colorful in a sense, but the products are very much the same. And so we wanted to do something different, and that was what we did today.

Speaker 2:

Why a new name? Why not, you know, a new tab in an existing product?

Speaker 3:

Right. Well, Basecamp, which is our biggest product, has Kanban in it. We call it Card Table there.

Speaker 8:

Yeah. But, you know, the thing is is

Speaker 3:

that Basecamp is very popular, but it's, you know, let's say there's 100,000 accounts. Right? 100,000 companies use it. It's a small number in the end. And there's a lot of people who can use something like Fizzy that are not gonna use Basecamp.

Speaker 3:

Basecamp is a much bigger system. It's for bigger projects. Mhmm. And there's a lot of small things that people need to do and organize and track. And so building a small standalone thing just feels like it makes more sense, frankly, for for this kind of thing.

Speaker 2:

So, I mean, do you have an idea of, like, who the target market is? Startups, individuals? Like, could you use this to plan your Thanksgiving dinner?

Speaker 3:

Yeah. I mean, the target market is me and us. Basically, we build things for ourselves.

Speaker 2:

Sure.

Speaker 3:

I don't think about who we're making things for because we're making things for us, always. And the idea is that, you know, oh, actually, let me just say this. I find the best products in the world are made by the person who's making them for themselves. That's been my experience, like Yeah. Enthusiast products.

Speaker 2:

Cool.

Speaker 3:

And then other people find them and other people discover them and you find out that you're like other people and other people are like you and they kind of dig it, you know? Yeah. And so

Speaker 1:

I said to someone this morning that I feel like TBPN is that way, where sometimes when I'm driving home, I wanna watch TBPN, but I'm like, we just made it. I just lived it. I should probably, you know, watch something else. So I never watch the show myself. So most call

Speaker 2:

me and say, hey, we

Speaker 1:

can just make we can do a podcast on the fly.

Speaker 2:

Let me just talk about tech news more. I'd love to know about the actual process for building the product. Yeah. Who is staffed on the team? How many people?

Speaker 2:

What time period? When did you start? Do you have a designer, developer? Is it all just, what's the prompt? I imagine you just used one prompt for this?

Speaker 3:

One prompt was

Speaker 2:

all prompt.

Speaker 1:

All you needed.

Speaker 3:

Yeah. So, you know, what's interesting is we actually also open sourced this. So this is fully open source. It's a SaaS product and fully open source. So you can run it yourself for free.

Speaker 2:

Yeah.

Speaker 3:

Which means you can go into GitHub actually and look back at the very first commit about eighteen months ago and see everything we did along the way. All the changes we made, all the dead ends, all the starts and stops, exactly who was involved on our team over time. And it's changed. So typically, we have one or two designers on something. Then there's other people who chime in here and there, who jump in here and there, different programmers jump in at different times.

Speaker 3:

But it's fully documented, which is very rare. You'll almost never ever see this in commercial software, basically, almost never. Sometimes, but almost never. Especially going back to day one. What ends up happening is you can do this thing where, basically, on launch day, you can clear the log.

Speaker 3:

And then from that point on, people can see what you're doing. But we opened it up from day one about eighteen months ago. So it's actually all in there. The team size in total, probably about six people worked on it here and there over eighteen months. But for the most part, it's usually two or three people working on something at a time.

Speaker 2:

How do you think about pricing these

Speaker 1:

Yeah. I feel like, in 37signals fashion, pricing will be opinionated. So I'm excited to hear how you guys approached this one.

Speaker 3:

You know, we don't really, well, we have a price, but I don't know if it's the right price. We never do. It's $20 a month, unlimited users, unlimited usage. One price. No chart, no table, no contact us.

Speaker 3:

Just a price tag. Like, you went

Speaker 2:

in and bought

Speaker 3:

a pair of jeans or peanut butter, it'd be like, how much is it? Talk to

Speaker 1:

the sales rep. They're gonna look you up and down. They're gonna say, well, how much how much should this person pay?

Speaker 3:

Right. What watch are you wearing? All the things. Right? So it's $20, but we give you a thousand cards for free.

Speaker 3:

So there's no time limit on the trial. You get a thousand cards for free. And a card is like, you know, a to-do item or something.

Speaker 2:

Sure.

Speaker 3:

If you never use them up, it's free forever.

Speaker 2:

Okay.

Speaker 3:

And you can also run it for free if you wanna run it yourself.

Speaker 2:

Open source. Yeah.

Speaker 3:

Yeah. So we're basically just serving as a host. If you want to just turn it on, sign up, and be going, we'll host it for $20.

Speaker 2:

Yeah.

Speaker 3:

Currently, look, this is an introductory price. We could change the price six months from now. If we do, we'll let people lock in where they were. We're not gonna change prices on them. But we might raise it.

Speaker 3:

I don't even know what we'll do. But we wanted to pick a number that was fair. The other thing I wanted to do is I wanted to price this more like an accessory. This is not the only tool. Like, the software industry is interesting because it thinks that whatever it makes is the only thing anyone ever needs.

Speaker 3:

Right? The thing is is people need a lot of different things. And so Fizzy is not gonna be the only thing you have. It might be one of the many things you might use. And so we kind of price it that way.

Speaker 3:

It's like an accessory. $20 a month, kind of a no-brainer, unlimited users Mhmm. Cancel any time, no upfront anything, and it just feels like that's the right place to start. We'll see where we end up, but that feels good for now.

Speaker 2:

If I pay you to host it, where is it hosted?

Speaker 3:

China. No. So it's hosted, we have a few different data centers. So it's

Speaker 1:

not well,

Speaker 3:

it's in our own, you know.

Speaker 2:

What I'm getting at is, like, it would be easy to just throw this on AWS, but you're the one company that doesn't just do that. Right?

Speaker 3:

That's right. So we have a data center in Chicago. We have one in Amsterdam. We have one in North Carolina. Yeah.

Speaker 3:

So we're in a few different spots, and it's all on our hardware in other people's data centers. We rent space in data centers. Yeah. That said, again, if you just don't trust us or don't want us to do it, you can put it on your own stuff, including, like, a simple droplet, like a DigitalOcean something. Whatever you can find that will host something basic will work for this as well.

Speaker 2:

I mean, you actually can host it on Alibaba Cloud if you want. It's open source. That's the whole point of open source. I could put

Speaker 3:

it on. Someone does.

Speaker 1:

There's an AI company that recently had a Code Red. Oh, yeah. Have you ever had a Code Red ever once?

Speaker 3:

Not like that. Not like a competitive-pressure Code Red, like, let's make sure we kinda focus on this competitor. But we've, like, screwed up

Speaker 2:

Yeah.

Speaker 3:

And have all hands on deck to fix something. I mean, there was a moment, I think. Did you

Speaker 1:

learn, did you ever get overly fixated on a competitor and sort of, like, learn from that? Because that's, like, YC law, right? Like, don't overly focus on competitors. Like Yeah.

Speaker 1:

You're probably not gonna die as a company because of your competitor. You die because of, I think they say, like, indigestion or something like that.

Speaker 3:

Right. Most wounds are self inflicted.

Speaker 1:

Yeah. Exactly. But sometimes, to actually have a lesson be fully ingrained, you have to learn it the hard way. I'm curious if that was

Speaker 3:

I think there was one time, way back when. We used to have a product, we sell a new product now called Campfire, but way back in 2006, we launched Campfire, which is a real-time group chat. Yeah. And back then, we could not shove this down people's throats. Like, nobody understood group chat for a business.

Speaker 3:

It just was very, very hard to sell and to move, and it was a very small product for us. And then Slack came out, and I saw it. I remember, oh, shit. Like, yeah.

Speaker 2:

That's

Speaker 3:

They nailed it. Like, we we just yeah. It was

Speaker 2:

crazy because them nailing it, it was IRC. Like, I used IRC back in the day, and the hashtag channels, everything, like, all the primitives had been, like, battle-tested in IRC.

Speaker 1:

The other thing is, Slack doesn't feel like that, I'm sure you have opinions on Slack, but you guys probably could see that and be like, oh, the design was opinionated Mhmm. And, you know, fun.

Speaker 3:

It felt fun. Yeah. Slack felt fun. I mean, IRC, of course, was is very geeky and whatever. But, yeah, the fundamentals were there.

Speaker 3:

But Slack had a wonderful onboarding experience. It felt fun. They had great integrations. They just kinda, like, totally leapfrogged us in that world. And that was, like, fine, but it was the first time I felt that sort of nervousness in my stomach.

Speaker 3:

Now, I didn't feel it against our business, because Basecamp is a very different kind of product and it was fine. But it was Campfire specifically, because I was frustrated. I was trying to figure out how to make it better. And then I saw them come out and was like, oh, shit. Like, yeah.

Speaker 3:

That. That. Yeah. That's how you do it. So that was one time, but I just don't think there's any reason to focus on competitors.

Speaker 3:

I just don't, you can't control them. You don't know what they're gonna do. You don't know if they're gonna be around in three months or three years. You don't have the same economics as they do. Yeah.

Speaker 3:

So it doesn't really make sense. Like, for example, I'll take HEY, our email service, hey.com. We have 40-some-odd thousand paying customers for HEY. Right? Which is, if you were Gmail, it'd be an absolute abject failure to only have 40,000 paying

Speaker 1:

customers. They would've shut it down years ago.

Speaker 3:

In seconds. Right? But for us, it's a multimillion dollar business because we have 60 people here. So for us, it's a great business. So like, I can't go, well, Gmail is killing us.

Speaker 3:

They're not killing us. They're doing their thing. We're doing our thing. So I think you've gotta, in my opinion, the only person you actually compete with is your own economics. Like, that's not a person, but the only thing you compete with is your own economics.

Speaker 3:

Yeah. If you can make it work, if you can make it viable, you're

Speaker 1:

fine. You can't even get it.

Speaker 3:

Your cost. You compete with your cost.

Speaker 1:

Competing with your cost.

Speaker 2:

Yeah. Every business needs an AI notetaker. What are your opinions on AI notetakers? If they join the call, are you admitting them or are you letting them stay?

Speaker 1:

I'm pretty harsh. I always let them sit out in the cold. I never let them in.

Speaker 3:

We don't have meetings. So I couldn't even invite one in if I wanted to. We just don't do that. But I will say I have been in a few calls recently that other people have set up, and there's been, like, an AI transcript, and it has been quite handy. Yeah.

Speaker 3:

It's really pretty impressive when it works really well. Strangely, Apple can't seem to get voicemail transcriptions to work at all.

Speaker 2:

I mean, Apple is just struggling with all the basics on transcription. Even just talking to your phone, like, Whisper works. Yeah. It works in the ChatGPT app. It works everywhere else.

Speaker 2:

Apple just has not implemented it properly, and it's not crazy AI god. Like, it's literally just: take the words that I'm saying and write them down verbatim. And that's a huge benefit because, if you're in a business call, sometimes I just wanna search the actual transcript. I don't even need you to summarize it or put action items or go do things for me. Not agentic.

Speaker 2:

None of that. Just actually write down exactly what I said so that, you know, when I say AWS or whatever, I can go search for when that happened in the transcript. And a lot of companies just haven't been able to implement that. It's been weird.
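
The "just give me a searchable verbatim transcript" workflow is easy to sketch. The example below assumes the open-source openai-whisper package and a local audio file (both assumptions for illustration); any transcriber that returns timestamped segments would work the same way.

```python
# Minimal sketch: transcribe a recording, then search the transcript for a
# keyword and print where it occurred. Assumes `pip install openai-whisper`.
import whisper

model = whisper.load_model("base")             # small, CPU-friendly model
result = model.transcribe("call_recording.m4a")

keyword = "AWS"
for seg in result["segments"]:                 # each segment has start/end/text
    if keyword.lower() in seg["text"].lower():
        mins, secs = divmod(int(seg["start"]), 60)
        print(f"[{mins:02d}:{secs:02d}] {seg['text'].strip()}")
```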

Speaker 3:

I agree. I think, frankly, that is one of the best use cases of, I don't even know. It's not even AI, though. It's just great transcription software. It's very, very handy.

Speaker 3:

And I think, like, this is the thing. Like, transcription software has been around for a long time. It's gotten better and better and better, but it's not, like, AI.

Speaker 6:

No. No.

Speaker 3:

You know? But it it

Speaker 2:

is. In other ways, it has been AI for twenty years. It's been the original AI in many ways. You know? It's, like, been: throw a bunch of data at it and try and estimate what things are. Even, like, OCR, these similar things, they're not AGI.

Speaker 2:

They're AI in the sense that they're narrow. It's the recommendation algorithm on YouTube or TikTok or Netflix, or, you know, this specific thing: you take a picture of a receipt. Does it understand the text in there even if it's kind of a dark photo? Yes. That's specific narrow AI, and that's great.

Speaker 2:

But we need to actually get those things working on our phones. Agree. Agree. Yep.

Speaker 3:

We left AI out of Fizzy, by the way. We made a conscious effort. We actually had some in for a while and pulled it out, and had it back in and pulled it out. Yeah. Yeah.

Speaker 3:

I'm just like, I wanna remove stupid from this. I don't wanna add intelligence. I'm gonna remove stupid Yeah. From the software. So it's just so straightforward Yeah.

Speaker 3:

That it just works and you don't even feel like, god, I wish I had AI for this or for that.

Speaker 2:

Yeah.

Speaker 3:

So v one, no AI. We'll see what happens down the road. Again, it's open source.

Speaker 2:

People can, the really interesting thing with Fizzy is that there is a world where you can just actually sit back and do nothing on AI. And if AI is real and valuable to your users, they will get it stuffed down their throats via their OS, via their browser, because Atlas is gonna be trying to jump on, Perplexity Comet is gonna be trying puppeteering their Fizzy, and the rest of the system that they're using, whether it's their phone or their laptop or their desktop, is gonna bring the AI to bear with computer use. And so you might never have to build it.

Speaker 3:

Yeah. I think so. In fact, this is actually interesting really quick. Recently, OpenAI added a Basecamp connector to ChatGPT. Mhmm.

Speaker 3:

And we didn't even do anything. Oh. So they did all the work. Yeah. And they just sent us an email saying, hey, we're launching this Basecamp connector, like, in a few weeks.

Speaker 3:

Yeah. Great. I'm like, this is fantastic. We don't have an MCP server. Nothing.

Speaker 3:

They just did it. Wow. And so I just think more to your point, I think more and more of that's gonna happen. Totally. Which is it's gonna be available in the OS or someone else is gonna do it or whatever.

Speaker 3:

And to spend all this time to build it into the product specifically, I just don't feel like it's the best use of the initial effort. An initial v one should be focused on the product itself and not the other things that it could possibly do. Again, later on, maybe there's stuff that comes in. Maybe people, via the open source version, submit some PRs that have some AI stuff. We'll see where it goes, but we didn't need it for v one.

Speaker 2:

Yeah. Yeah. This is always a question of where the AI lives. Like, do you need to go and pre train your own model to answer questions? Or if you set up a good knowledge base, will you just get sucked into the next pre train automatically?

Speaker 2:

And you can just go to ChatGPT and ask about you, and you'll be there. Anyway, Jordi, I

Speaker 1:

wanna keep hanging out for an hour, but we do have to wrap the show because we're going to look at a Okay. Studio. We have one last question from

Speaker 2:

David Senra.

Speaker 1:

Yeah. One last question from our mutual friend David Senra. Could we get a wrist check?

Speaker 2:

What what watch here? What are

Speaker 1:

you rocking on launch day?

Speaker 3:

I might be the only person who coordinates their watch with their software. It's Really? It's possible.

Speaker 2:

Okay.

Speaker 3:

So today, I'm just wearing a, I'll take it off because I don't know how to quite hold it up otherwise.

Speaker 4:

Yeah. Please.

Speaker 3:

This is just a vintage Heuer from 1974, which is a birth year watch. Let's see. Hang on.

Speaker 1:

Woah. Oh, birth year watch.

Speaker 3:

Hang on. On.

Speaker 2:

Hang on.

Speaker 1:

I love the Yeah. Hang on. Let me see. The focus is hard, but Oh. There you go.

Speaker 1:

There you go.

Speaker 3:

There go. There we go. I love

Speaker 1:

the I love the orange. Thank you.

Speaker 3:

It's colorful. Fizzy is colorful. Fizzy is full of color. It's the most colorful watch I own for the most colorful product we've ever made.

Speaker 1:

I knew I knew you were gonna match. I knew I knew it was gonna be intentional.

Speaker 2:

This is so good.

Speaker 3:

It's a little sad. A little bit. It's a little bit sad.

Speaker 1:

It's fun.

Speaker 2:

It's joy. This is joy.

Speaker 3:

This is amazing. I don't have, like, a, what? Like a green, why were you guys wearing a green jacket a few days ago? What was that about?

Speaker 2:

It was Shopify. Black Friday, we were celebrating commerce online, and so we wore, Oh,

Speaker 3:

just a green suit.

Speaker 1:

No. It's like Shopify color.

Speaker 2:

Signature color is green.

Speaker 3:

Oh, Shopify? I didn't even know that.

Speaker 2:

Yeah. Shopify is green.

Speaker 9:

Yeah. Yeah. Okay.

Speaker 6:

Is a

Speaker 2:

little confusing because we use a dark green in our brand theme, and so it actually paired up pretty nicely. We also have yellow suits for when there's big Ramp news. We will wear solid yellow suits. Yeah.

Speaker 2:

You've

Speaker 3:

probably seen

Speaker 2:

those. Those are fun.

Speaker 3:

I've seen that.

Speaker 1:

Hard to miss. Well, Jason, open invite to the studio. We'd love to hang out for, like, a full hour.

Speaker 2:

Good job. Just loving it. Everyone's having a

Speaker 1:

good Everybody

Speaker 3:

Let's do it sometime. I'd love to. I think I got an email about about that. So we'll figure that out.

Speaker 2:

Amazing. Awesome.

Speaker 3:

Appreciate it.

Speaker 1:

Alright. Thanks for being on today. See you. On the line.

Speaker 2:

Talk to you soon.

Speaker 1:

Very exciting.

Speaker 3:

Thank you.

Speaker 1:

Talk to See you. Bye.

Speaker 2:

Getbezel.com. Shop over 26,500 luxury watches. You're not gonna believe it, but this was actually the next ad read up. Fully authenticated in-house by Bezel's team of experts. And we gotta close out the show, so I'm gonna tell you about wander.com. Book a Wander with inspiring views.

Speaker 2:

Hotel-grade amenities, dreamy beds, top-tier cleaning, twenty-four-seven concierge service. There are so many more posts that I wanna get to. But there's a lot.

Speaker 1:

Get to them.

Speaker 2:

There's a new Arena Mag out. You gotta go to Arena Mag. Check it out. We are featured in Arena Mag issue zero zero six, the Three Martini Lunch. We had Julia on the show, of course, to talk about it, but now it's in print.

Speaker 2:

There's a lot else going on.

Speaker 1:

And we will be back tomorrow. Yes. Sorry to cut

Speaker 4:

it off.

Speaker 2:

Thank goodness.

Speaker 1:

Lot of fun.

Speaker 2:

I would be in a very bad place if we weren't podcasting tomorrow, but fortunately, we are. So we'll see you tomorrow. Leave us five stars on Apple Podcasts. Thank you for hanging out. Goodbye.

Speaker 1:

Have a great evening.