TBPN

  • (00:16) - Alby Churven is a teenage entrepreneur from Sydney who, by age 14, has already founded Finkle, a gamified learning platform aimed at teaching teens coding, entrepreneurship, AI, and real-world skills. He began coding when he was six years old, and previously built Roblox games and a youth-oriented soccer brand before pitching Finkle to Y Combinator (Winter 2026). Alby’s vision blends youthful creativity with a mission to rethink education — and his journey has drawn global attention for ambition and boldness.
  • (07:22) - Three Years Since the Launch of ChatGPT
  • (13:06) - Gemini Surges
  • (20:17) - David Sacked by NYT
  • (39:54) - 𝕏 Timeline Reactions
  • (01:01:19) - Dylan Patel, Founder and Chief Analyst at SemiAnalysis, discusses Google's strategy to sell Tensor Processing Units (TPUs) externally, highlighting the challenges posed by their non-standard design and the need for broader software support. He emphasizes the importance of open-source software in expanding TPU adoption and notes that while Google's internal software stack is robust, making it accessible to external customers is crucial. Patel also touches on the competitive dynamics between Google and Nvidia, particularly regarding hardware performance, software ecosystems, and market positioning.
  • (01:33:48) - Ro Khanna, a Democratic U.S. Representative from California's 17th congressional district, is known for his advocacy on technology, economic equity, and transparency. In the conversation, he discusses his legislative efforts, including the bipartisan Epstein Files Transparency Act, which mandates the release of all Justice Department files related to Jeffrey Epstein, aiming to hold powerful individuals accountable and restore public trust. Khanna also addresses the impact of artificial intelligence on employment, emphasizing the need for policies that enhance human capabilities rather than replace workers, and highlights the importance of balancing technological advancement with job preservation to maintain social cohesion.
  • (02:11:19) - Jonathan Swerdlin, co-founder and CEO of Function Health, is dedicated to empowering individuals to proactively manage their health through comprehensive lab testing and advanced imaging services. In the conversation, he discusses Function's mission to provide affordable access to over 160 lab tests and full-body MRI scans, enabling early detection of potential health issues. Swerdlin emphasizes the importance of utilizing technology to make personalized health data accessible, aiming to help people live longer, healthier lives.
  • (02:27:59) - Thrive Announces Partnership with OpenAI
  • (02:29:55) - Cristóbal Valenzuela, CEO and co-founder of Runway, discusses the release of Gen-4.5, the company's latest AI video generation model. Gen-4.5 achieves unprecedented visual fidelity and creative control, producing cinematic and highly realistic outputs while providing precise control over every aspect of generation. Valenzuela highlights that Gen-4.5 has surpassed competitors like Google's Veo 3 and OpenAI's Sora 2 Pro, securing the top position on the Artificial Analysis Text to Video benchmark.
  • (02:46:41) - Vincent Weisser, CEO of Prime Intellect, discusses the recent release of Intellect 3, a 100-billion parameter model developed through scaled reinforcement learning and post-training, achieving state-of-the-art performance at a smaller scale. He highlights the creation of an open environment where contributors worldwide can develop reinforcement learning environments, enhancing the model's capabilities across various tasks. Weisser emphasizes the trend of open-source models matching closed models' performance and the potential for businesses to fine-tune models for specific applications, leading to better performance and cost efficiency.
  • (03:01:01) - Ben Hylak, co-founder and CTO of Raindrop—a company providing monitoring solutions for AI agents—discusses the challenges of silent failures in AI systems and the importance of real-time monitoring to detect and address these issues. He highlights how Raindrop's platform processes millions of events daily, enabling engineering teams to identify complex problems like tool call failures and user frustration. Additionally, Hylak shares that Raindrop recently secured $15 million in seed funding led by Lightspeed Venture Partners to further develop their monitoring infrastructure.
  • (03:17:02) - 𝕏 Timeline Reactions

TBPN.com is made possible by: 
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com 
Numeral - https://www.numeralhq.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
Polymarket - https://polymarket.com
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PST, Monday through Friday. Available on X, Apple, Spotify, and YouTube.

Speaker 1:

You're watching TBPN.

Speaker 2:

Today is Monday, 12/01/2025. We are live from the TBPN UltraDome, the temple of technology, the fortress of finance, the capital of capital. Ramp: time is money, save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more all in one place. We have a special guest.

Speaker 1:

Special guest today opening the show with us: Albie from the Land Down Under. You probably saw him go viral recently. Yeah.

Speaker 1:

But why don't you introduce yourself?

Speaker 3:

Yeah. So I just I actually arrived in LA on Saturday.

Speaker 2:

Welcome.

Speaker 3:

Yeah. I'm from Sydney, well, about an hour and a half from Sydney. And, yeah, I've been building something called Finkel, which is basically Duolingo for life skills.

Speaker 2:

Mhmm.

Speaker 3:

And I just applied to YC as well with that post on X.

Speaker 1:

How many views did you get on the application video?

Speaker 3:

I think it got like 7,800,000.

Speaker 4:

So yeah.

Speaker 1:

Wow. Let's hit the gong for Albie. Pretty viral. Well done.

Speaker 5:

Well done.

Speaker 2:

So give me an example of a life skill that you can learn with your app.

Speaker 3:

Yeah. I guess, like, entrepreneurship, especially Mhmm. Startups and stuff. Because, like, in Australia, I don't know about in The US, but school is very entry level. It's not hands on.

Speaker 3:

Feel like it's very just not preparing us for life. Like, you do need it if you want to be a doctor or a lawyer or something, but some kids don't want to do that. And, like, yes, you do have commerce and computer science and stuff, which I am doing as electives.

Speaker 2:

True.

Speaker 3:

But they're not hands on, and they're very, like, outdated and, like, textbook heavy.

Speaker 6:

Mhmm. Yeah.

Speaker 3:

So I feel like actually learning life skills that can that you can apply now, especially, like, with AI and everything. Like, if you don't know how to use AI now, you're sort of gonna be left behind. So

Speaker 1:

Very exciting. What are you what are you hoping to get out of your trip? You're on summer holiday Yeah.

Speaker 2:

Right now?

Speaker 3:

Well, my exams just finished before I came. So there's still like two weeks left of school.

Speaker 2:

But How do you think you did?

Speaker 3:

I think I got I got like a B in science and like

Speaker 1:

There you go. There you go.

Speaker 3:

A B in math.

Speaker 1:

Focus on the game. On the game. Yeah.

Speaker 2:

Really, just room to grow. But

Speaker 3:

Yeah. I guess what I'm trying to get out of it is just to, like, meet as many people as possible. Right. Make as many connections as possible, because a trip like this probably won't happen again for a while. Yeah.

Speaker 3:

That's sort of my goal.

Speaker 2:

What's the status of the YC application? You've submitted it?

Speaker 3:

Yeah. It

Speaker 2:

Have you heard back yet?

Speaker 3:

No. It hasn't It's still, like

Speaker 4:

I'll need

Speaker 2:

to We

Speaker 1:

got a recommendation. Yeah. We can we gotta we gotta If you're

Speaker 2:

a YC alum watching this, please go leave a recommendation.

Speaker 3:

Yeah. Yeah.

Speaker 2:

But congratulations. Thanks so much for coming by. What is the stage of development of the actual application, the product itself? Are you live?

Speaker 3:

Can people

Speaker 2:

go download it?

Speaker 3:

It's a demo right now. We're getting, like, beta testers. Okay. But the beta should be launching soon, probably by the end of this year.

Speaker 2:

Mhmm. Did you have a wait list? Are you doing email capture yet?

Speaker 3:

Yeah. Like, wait list beta testers. We've got like a couple hundred, but

Speaker 1:

yeah. Very cool. Incredible. Well, congratulations on all the attention. I'm sure you'll convert it into a lot of opportunity and have a great trip.

Speaker 1:

Yes. Great to have you

Speaker 2:

by the way. And good luck with the YC application.

Speaker 1:

Thanks so much. And looking looking sharp in the suit.

Speaker 2:

Looking sharp in the

Speaker 6:

suit. Okay.

Speaker 2:

Amazing. Have a good rest of your day.

Speaker 3:

Thank you. Cheers. Thanks for

Speaker 2:

stopping by. Before we move on to the rest of the show, let me tell you about Restream. One live stream, 30 plus destinations. If you wanna multistream, go to restream.com. And, it's been three years since ChatGPT launched.

Speaker 2:

I wanted to reflect a little bit. Everything changed, or maybe nothing changed, or maybe some amount of change in between everything and nothing. You're more in the nothing-changed camp. I sort of agree with you. I was sort of reflecting on, like, okay.

Speaker 2:

Thanksgiving's happened. It was Thanksgiving over the weekend. You know? How different is my world? Like, there's not a humanoid robot that's cooking for me.

Speaker 2:

And, also, even if we had a humanoid robot, I think Thanksgiving would be the day we let the robot sit in the closet. Because... Let him cook. We enjoy... no, no, let us cook. We enjoy cooking.

Speaker 2:

Cooking is a fun family experience. And so of all the things

Speaker 1:

Let us

Speaker 2:

Thanksgiving is like the track day of cooking. Like, even if you have a robot that does it, you still wanna do it on Thanksgiving. You don't wanna cook on a random Tuesday.

Speaker 5:

When you're

Speaker 2:

busy, you got lunch, you know, all this other stuff. Thanksgiving is the Nürburgring.

Speaker 1:

And I was doing some dishes after Thanksgiving, and I felt like it was a good way to kinda... like, it felt like walking off the pie. Yeah. In a little... Totally. Way. I wasn't walking very far. Yeah.

Speaker 1:

Yeah. Yeah. Yeah. Back and forth.

Speaker 2:

And so yeah. So that hasn't really changed that much for me. I was reflecting more on the agentic commerce thing. It feels like ChatGPT and OpenAI, they really are pushing to make revenue from agentic commerce, like, in this holiday season. And incredible speed of execution.

Speaker 2:

Like, clearly, it's a big opportunity if you can figure out how to, you know, run ads, commerce, convert, take a cut of that. That's big. My experience actually demoing it, it was kind of interesting. Like, the actual product in ChatGPT is pretty good, but you can see that the walled gardens are already going up. So, one place that I like to go to for reviews of products, specifically around the holidays, is the Wirecutter.

Speaker 2:

Now the Wirecutter, their whole twist was they wouldn't rate each product. What they would do is they would pick a category, and then they would just tell you what their best product was in that category, sort of like a cluster of vacuums. So they would give you the platinum-tier vacuum and then a budget pick. And so I've always liked the Wirecutter. Think they do a very rigorous job.

Speaker 2:

They were acquired by The New York Times. The New York Times is currently in a lawsuit with OpenAI. And so if you go to ChatGPT and say, hey

Speaker 1:

And I think they're about to be in a lawsuit with David Sacks.

Speaker 2:

Maybe. Maybe, which we will talk about on the show in a little bit. But if you go... so I went to ChatGPT, and I was like, hey. Okay. Pull a deep research report.

Speaker 2:

Just pull everything from the Wirecutter and tell me every category and every product that's top ranked. Because then I can just scan it really quickly and be like, oh, yeah. I didn't even remember that category existed. That would be a great gift. I'll get it, and I'll go through the Wirecutter link.

Speaker 2:

I'm fine with that. I'm paying ChatGPT. I'm happy to go and use their affiliate link on the Wirecutter. That's how the Wirecutter monetizes. But it couldn't do it.

Speaker 2:

It couldn't do it. It said, hey. We can't touch the Wirecutter. Like, it's off limits. You gotta head over there yourself.

Speaker 2:

Go pop open a Chrome tab, brother, if you wanna head over there. Like, that's on you. Interesting. Or maybe an Atlas tab. I don't know.

Speaker 2:

But so that had not really changed that much for me. But the one thing that did really change on Thanksgiving was the discourse. Like, the AI narrative has fully arrived to just family and friends.

Speaker 1:

You mean fam in in the in the home?

Speaker 2:

Yes. Yes. In people that don't work in technology, that don't their job is

Speaker 7:

not tied

Speaker 1:

to tech thing. Trough.

Speaker 2:

Not that. More talking about is it a bubble? Where do you think all this stuff goes? The stuff that, you know, we've been talking about

Speaker 1:

about living in

Speaker 5:

a bubble?

Speaker 1:

You think the average family in America is

Speaker 2:

I saw multiple newsletters where the whole conceit of the newsletter going into the holidays was how to talk to your family about the AI bubble and how to talk to your family about AI generally. And I think it's real, because if you've been watching your 401(k) over the last year, you've seen a massive spike and then a recent sell-off. And if you've turned on any news or opened up any newspaper, you've been hearing about $1,000,000,000,000. And you're like, what? A trillion dollars?

Speaker 2:

That ChatGPT app? They need a trillion dollars to make that thing work. Chat. Right? Chat?

Speaker 2:

And so it is a really big narrative. And so I wanted to reflect on, like, what has actually changed over the last three years, and specifically in the Mag seven. The Mag seven has been on an absolute tear. Just over the last three years, the value as a whole has basically tripled. It was a little under $8,000,000,000,000.

Speaker 2:

Now it's over $21,000,000,000,000. That's a lot of value created in the last three years. NVIDIA was second to last in the Mag seven when ChatGPT launched. It was worth just $420,000,000,000, something around there. Today, the stock is up over 10x, basically.

Speaker 2:

It's at $4,360,000,000,000.

Speaker 1:

And up today. And up today. Despite all the chaos

Speaker 2:

Dylan Patel was trying so hard to bring that stock down, but he couldn't do it. He's coming on the show at noon. We're gonna confront him about his bear posting and whether or not the market

Speaker 1:

is... Funny enough, Broadcom is down today. Okay. Why is that? The maker of the TPU.

Speaker 2:

Oh, yeah. Oh, I mean, a lot of these things, it's like it's already been priced in. I mean, even when you read that SemiAnalysis piece, you know, a lot of it's like, we've been writing about this for months. People have already put this trade on, etcetera, etcetera. But I do think that the NVIDIA 10x that's happened has really created some crazy zealots and just an entire industrial complex, because there are so many people who heard AI.

Speaker 2:

They tried the ChatGPT thing, and they were like, this is big. How do I get in on this? I can't buy OpenAI. OpenAI is running away with it. Oh, they need NVIDIA chips.

Speaker 2:

That's the logical next step. They went into NVIDIA, and they got a 10x. And they could have gotten a 10x on, like, a million dollars, $10,000,000. Like, there's no amount of money, because it was already a $420,000,000,000 company. So you could put your entire retirement savings in it.

Speaker 2:

No problem. Complete liquidity. Right? It's not, oh, you gotta get some SPV. It was really easy.

Speaker 1:

Siqi Chen from Runway was saying that back, I think it was 2020, 2021, he said he put an uncomfortable amount of his net worth... Yeah. Into NVIDIA. Yep. And, obviously... Yeah.

Speaker 2:

And Near, same story. Right?

Speaker 1:

Still underappreciated. Near? The NVIDIA ten-year fund. All it does is buy NVIDIA. Yeah.

Speaker 1:

Just by investing in it, you can't possibly sell. Mhmm. God's chosen company. Yes. That's what the, I think, title of the fund was.

Speaker 2:

Oh, really? Yeah. Yeah. That's hilarious. And so, I mean, yeah, there's been a ton of zealots.

Speaker 2:

We're gonna talk to Dylan Patel at noon about some of the zealots that have been attacking him. Previously, the world's largest company in November 2022 was Apple. And at the time, they had a sizable lead over Microsoft, Amazon, and Google. Now that gap has closed a bit as the hyperscalers have grown more over the last three years on the back of the AI boom. And it's interesting.

Speaker 2:

I mean, you can sort the Mag seven by market cap. And today, you get the following ranking: Tesla, then Meta, then Amazon, Microsoft, Alphabet, Apple, and then NVIDIA at the top. And the big question, I think, that's on everyone's mind and kind of underpins the horse race that we cover every day on this show is, what will that ranking look like in the next three years? Is NVIDIA really a monopoly?

Speaker 2:

Is it is it impervious to attacks from the from, you know, different suppliers?

Speaker 1:

What does Broadcom have to do to get into the Mag seven? I don't know. Sitting at 10 on the market cap. Yeah. Companiesmarketcap.com, which we are not affiliated with Yeah.

Speaker 1:

Which is just a fantastic website. Fantastic. Broadcom is sitting at number six above Meta currently.

Speaker 2:

I don't know. I mean, I think several years in the $1,000,000,000,000 club, like, just being undeniable at that scale. There's also just, like, a bit of branding. Like, some of the companies that made it into the Mag seven... I feel like the Mag seven leaned understandable, like, not that deep in the supply chain. Even NVIDIA was the deepest.

Speaker 2:

NVIDIA had the least of, like, a consumer brand, but still a lot of people use the gaming graphics cards. Broadcom is really tricky because there's no consumer angle whatsoever. Consumers can buy a Tesla. They can use Meta products. They can buy on Amazon, have a Microsoft, you know, operating system.

Speaker 2:

They can use Google. They can have an iPhone. Yep. And they can have an NVIDIA gaming graphics card.

Speaker 1:

The top right now. Yeah. Tesla's sitting at 10. TSMC at nine. Eight is Saudi Aramco.

Speaker 1:

Seven, Meta. And then six is

Speaker 2:

Broadcom. I also think you have to be an American company to be in this, like, Mag seven or whatever the hot ranking is, like FANG. FANG never included oil companies, never included international companies. Because if you go there, then you could be like, oh, well, let's include, like, the Chinese tobacco company that's worth a trillion dollars or something like that. Like, there are some crazy, like, foreign-owned companies that, if they were independent, might be worth a trillion dollars because they just have so much of these assets.

Speaker 2:

Yeah. Exactly. But it doesn't really count because it's just sitting there out in the ether. Well, let me tell you about Gemini 3 Pro, Google's most intelligent model yet: state-of-the-art reasoning, next-level vibe coding, and deep multimodal understanding. And speaking of that, Bucco Capital Bloke has a post here: Gemini app downloads are catching up to ChatGPT, and Gemini users now spend more time in the app than ChatGPT users.

Speaker 2:

People are going back and forth on, can Gemini catch up? You know, the model, clearly very good. The big bombshell in the SemiAnalysis piece over the weekend was this idea, which I think has been bandied about before, that OpenAI has not done a proper pretrain since GPT-4o, and the 4.5 pretrain kinda got mothballed. But there was this question about, like, is pretraining dead? Seems like the Google folks said, no.

Speaker 2:

It's not. And then they went and did a pretrain, and Gemini 3 outperformed. Anthropic also pretrained.

Speaker 5:

Yeah. I mean yeah.

Speaker 2:

Pretrained pilled?

Speaker 1:

I mean, we asked Sholto about this, and he said, oh, yeah, we're still bullish on scaling. Yeah. And I think, actually, Sholto kind of, like, in the subtext said, like, the reason Opus 4.5 was good is not because it was a new pretrain. It's because it was RL.

Speaker 1:

That's what I've read.

Speaker 2:

That was your reading? Yeah. I feel like there's still juice in the lemon of pretraining, but it's not scale. Like, we only have one Internet. Ilya was correct about that.

Speaker 2:

It's not scaling the size of the pretrain, which is what happened with 4.5, with GPT-4.5. That was just bigger, I guess. But it does seem like there are little optimizations that you can do on the pretraining side. But I don't know. We'll have to dig into it.

Speaker 2:

But I think the thing that no one is debating is the fact that Gemini 3 as a model, with Nano Banana Pro, with Veo 3, is just... like, the actual foundational intelligence is plenty good to be dominant in the consumer AI category. The question is, can you actually get people to install the app, use it? Can they enjoy it? Do they not churn and go back to ChatGPT? I've been going back and forth, left and right, between one app and the other.

Speaker 2:

I was getting a ton of disconnect errors with the Gemini app even though the model's great, and there's some really cool features.

Speaker 1:

Yeah. They need to catch up on the product side.

Speaker 2:

Exactly. Yeah. The yeah. The product side. And so a lot of people are saying like, oh, Gemini team should just, the app team should just go and, you know, copy ChatGPT's homework and and, you know, copy all these little features.

Speaker 2:

I put out a post that the folks over at the Gemini team actually, you know, did turn into bug reports and I think are working on. But it really does seem like it's a sprint to actually create an app that is as sticky as ChatGPT, because ChatGPT, the app, is fantastic and very, very well designed. And so the... Yeah.

Speaker 1:

And there's some reporting from SimilarWeb, which is what the FT is using... Mhmm. To track average user minutes.

Speaker 2:

I always find those hard to I mean, must be like Nielsen ratings where they're like polling people or something because Yeah.

Speaker 1:

And I don't know.

Speaker 2:

Don't know. In OpenAI. Like, you can't get a pixel into the Gemini app.

Speaker 1:

And are are they counting user minutes if a tab is open, but I'm not actually

Speaker 2:

in? And is this just desktop? Because that's, like, completely separate from mobile use.

Speaker 1:

Desktop and mobile web, which, again, I don't... No one's using mobile web.

Speaker 2:

I don't know. I wouldn't read too much into this data specifically. I would much more look at, like, what are the structural advantages that we know exist? And, I mean, with Gemini, one of them is, to that point about the Wirecutter: you know where the Wirecutter shows up?

Speaker 2:

Google search results. You know what company has one bot for scraping everything? Google. So the Googlebot identifies as one entity. So you can either say, I'm allowing Google or not.

Speaker 2:

And it's a tall order to be like, yeah, I don't wanna be in Google results. And so a lot of companies are saying, yeah, I'm good with showing up in Google results, but that also shows up in AI search results. And there are things that companies can do to say, hey.

Speaker 2:

Don't put me in the Gemini, you know, like, training dataset necessarily. But in terms of just actually showing up, you've seen it in the Gemini app. It says, using Google Search. And so if I go to Gemini and I say, hey, head over to the Wirecutter.

Speaker 2:

Find me the best vacuum cleaner. Google probably can do that.
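To make the crawler point above concrete, here is a minimal sketch of how a publisher can admit Google's single crawler identity while shutting out an AI lab's bot via robots.txt. The robots.txt contents, the GPTBot user agent, and the example URL are illustrative assumptions, not details from the episode or any real site's policy.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical publisher policy: allow Google's single crawler identity,
# block an AI lab's crawler. Illustrative only, not any real site's file.
ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /
"""

def can_fetch(user_agent: str, url: str) -> bool:
    """Return True if the given crawler may fetch the URL under this policy."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    url = "https://example-publisher.com/reviews/best-vacuum"
    print("Googlebot allowed:", can_fetch("Googlebot", url))  # expected: True
    print("GPTBot allowed:", can_fetch("GPTBot", url))        # expected: False
```

Under a policy like this, a site keeps its place in Google results, and by extension in whatever Google Search grounding a Gemini answer leans on, while staying off limits to the blocked crawler, which is the asymmetry the hosts are describing.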

Speaker 1:

Gemini can

Speaker 2:

probably do that. Yeah. Whereas OpenAI is in a fight with The New York Times. Whereas Google and The New York Times, like, they might not love each other, but they definitely have, like, an uncomfortable truce. Right?

Speaker 1:

A funny Gemini integration that I've used is that you land in a Hangout, and you just say, who is

Speaker 2:

this person? Really? You can actually do that?

Speaker 1:

It is. It pulls up a sidebar. You can just ask, like, who who am I meeting with right now? And it'll give you like

Speaker 2:

It's clearly. Who am I meeting with? What should I say to you? What should I say

Speaker 1:

to What should I ask them?

Speaker 2:

What is my name? What are what what do they wanna know about me? What should I tell them about me? Okay.

Speaker 1:

So Gemini was able to pull Wirecutter recommendations. Okay. Yeah. I don't know.

Speaker 2:

Is interesting.

Speaker 1:

I feel like I feel like

Speaker 2:

Yeah.

Speaker 1:

I wonder... yeah, I wonder if Wirecutter is actually benefiting from this in any way yet. I mean, for...

Speaker 2:

Because Google hasn't... Gemini hasn't rolled out the agentic commerce stuff that would actually, like, scrape out the referral token. And so if I'm in Gemini and I'm saying, I'm gonna do some agentic shopping or whatever, I say, pull me the best vacuum cleaner from the Wirecutter, it goes over and does that. And then I land on the Wirecutter, and then I click that link, that should give the Wirecutter the credit. Now if I, as a follow-up prompt, go in Gemini and say, okay. Great.

Speaker 2:

The Wirecutter told me the best vacuum cleaner, from James Dyson of course, is the Dyson. Yep. Find me the Amazon link. Well, Gemini's probably not giving the Wirecutter the attribution at that point. It might even be taking its own attribution.

Speaker 2:

I don't know exactly how it's functioning right now, but I would imagine that that link does not get reinstantiated as the Wirecutter affiliate link. And so we could see... I mean, these are all, like, going to be pretty existential questions for the SEO crowd, anyone who's monetizing off of SEO. We saw some screenshot that apparently site traffic to Vox properties is down 50%. And I don't know how much of that is just the shift to social media versus the shift to... Yeah.
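For readers who want the attribution mechanics spelled out, here is a minimal sketch of the general pattern being described: the referring publisher's credit typically rides along in the product link itself as a query parameter, so a link that gets re-resolved from scratch, rather than clicked through, simply never carries it. The "tag" parameter name, the store domain, and the product ID are assumptions for illustration, not a description of how Gemini, Amazon, or the Wirecutter actually handle links.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical referred link: the "tag" query parameter is how the referring
# review site would get credit for the purchase. Illustrative values only.
REFERRED_LINK = "https://www.example-store.com/dp/VACUUM123?tag=reviewsite-20"

def attribution_tag(url: str):
    """Return the affiliate tag carried in the URL's query string, if any."""
    return parse_qs(urlparse(url).query).get("tag", [None])[0]

def resolve_without_tag(url: str) -> str:
    """Rebuild the URL with the tag stripped, as if the link were looked up
    from scratch rather than clicked through from the review site."""
    parts = urlparse(url)
    query = {k: v for k, v in parse_qs(parts.query).items() if k != "tag"}
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

if __name__ == "__main__":
    print(attribution_tag(REFERRED_LINK))                       # reviewsite-20
    print(attribution_tag(resolve_without_tag(REFERRED_LINK)))  # None
```

The point of the sketch is just that attribution is a property of the specific URL the buyer lands on; if an assistant constructs its own product link in a follow-up step, the publisher's parameter, and therefore the credit, is gone.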

Speaker 1:

How much is it their business strategy just being like, hey. We wanna do more video. Yeah. And that'll be distributed off our site

Speaker 2:

for the most part. I think a lot of people generally... they consume more and more content on social media platforms. They go from YouTube to their RSS player to audiobooks to Twitter to Instagram, and they kinda bounce around from one to the other. And then every once in a while, they will go in and actually land on a particular site. Like you can.

Speaker 2:

If you go to tbpn.com, you can get our newsletter in your inbox every morning. And you can also sign up for Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. Well, speaking of The New York Times, David Sacks is going to war with The New York Times.

Speaker 2:

He says, inside the NYT's hoax factory. He calls it a hoax factory because The New York Times posted a piece about David Sacks; the headline was, Silicon Valley's man in the White House is benefiting himself and his friends. And Ryan Mac was going back and forth with Shaun Maguire... yeah, Shaun Maguire. Ryan Mac says, today has been a good example of what X has become: complaints from a subset of wealthy tech folks about a story that circulate more widely than the actual story itself. Musk bought the platform to control the message, and he and his friends are getting just that. And Shaun Maguire says, you don't get to run this headline, then write an article that doesn't validate the claim, and then get away with playing the victim.

Speaker 2:

We see through the ruse. And so David Sacks has responded in full to the NYT's hoax factory. He says, five months ago, five New York Times reporters were dispatched to create a story about my supposed conflicts of interest working as the White House AI and crypto czar. Through a series of fact checks, they revealed their accusations, which we debunked in detail. Not surprisingly, the published article included only bits and pieces of our responses.

Speaker 2:

Their accusations ranged from a fabricated dinner with a leading tech CEO to nonexistent promises of access to the president to baseless claims of influencing defense contracts. Every time we would prove an accusation false, NYT pivoted to the next allegation. This is why the story has dragged on for five months. Today, they evidently just threw up their hands and published this nothingburger. Anyone who reads the story carefully can see that they strung together a bunch of anecdotes that don't support the headline.

Speaker 2:

And, of course, that was the whole point. At no point in their constant goalpost shifting was NYT willing to update the premise of their story to accept that I have no conflicts of interest to uncover. No conflicts of interest. As it became clear that NYT wasn't interested in writing a fair story, I hired the law firm Clare Locke, which specializes in defamation law.

Speaker 2:

I'm attaching Clare Locke's letter to the NYT so readers have full context on our interactions with NYT reporters over the past several months. Once you read the letter, it becomes very clear how NYT willfully mischaracterized or ignored the facts to support their bogus narrative.

Speaker 1:

So Will says, hiring Clare Locke for this is sick. Cruise missile to blow up a straw hut.

Speaker 2:

He's a big fan of litigation. He loves litigation. Well, people have been supportive of this broadly in tech. Let's go through some of the reaction. Sam Altman says, David Sacks really understands AI and cares about the US leading in innovation.

Speaker 2:

I'm grateful we have him. Brian Armstrong.

Speaker 1:

Yeah. Here's my takeaway.

Speaker 2:

Yeah.

Speaker 1:

If you believe that AI and crypto are are industries that we should support in The United States, then you want to have a czar focused on those things that generally feels positively about those things and wants to create the best possible environment for those industries to thrive in The US. I think that there's actually a debate on both fronts. Like, there's people on the left that think AI and crypto are just default bad. They want less of them. And there's people on the right that believe that, too.

Speaker 1:

But I think that ultimately, there's arguments for why the US should lead in stablecoins, which is part of why the GENIUS Act is important. And a lot of the AI Action Plan, there's going to be debates on individual points in that. But in general, I think creating an environment in the US where we can continue to lead in AI is important. So I think there wasn't... I didn't see any sort of, like, smoking gun in any of this stuff. There were some allegations around the All In...

Speaker 2:

I don't think they smoke very much at all. I think it's mostly tequila drinking.

Speaker 1:

That's true. They do. All in tequila.

Speaker 2:

Although J Cal does tote the gun regularly.

Speaker 1:

Oh, yeah.

Speaker 2:

So maybe that's the smoking gun.

Speaker 1:

He's a Texan.

Speaker 2:

Yeah. No. I didn't see anything very specific. I mean, it's All In. Like, they are super connected.

Speaker 2:

If you partner with them in some ways, like, you would expect to get more of a read on where they're spending time in DC, what they're seeing. It seems like there are clear lines on what you can share, like, what turns you into a lobbying firm. And I think they've stayed out of becoming a lobbying firm, and so they have clear rules on that. Yeah. I think Boz distilled it pretty well. Before we read his post, let me tell you about Attio, the AI-native CRM.

Speaker 2:

Attio builds, scales, and grows your company to the next level. Boz said, I don't know David Sacks, but I want more expertise in government. Experts tend to have made money in their area of expertise, have friends in their area of expertise. If people can't have history or friends in a field before leading it, then our leaders won't know anything. And I thought this was a good distillation of, like, the core debate about, like, should you have someone who has never participated in an industry overseeing it?

Speaker 2:

Or should you like, someone who's purely academic, purely outside of it?

Speaker 1:

And I believe there's some readers... Yeah. And probably people at The New York Times that would like somebody that hasn't participated in either industry to be running a role like that, and who is just blanket against both industries and would just sort of hold them back.

Speaker 2:

So the reaction is interesting. In the comments, I mean, first, the top comment is somebody, like, beefing with Boz over how he ran the Quest store. It's, like, clearly a VR aficionado who has, like, an axe to grind over niche VR policies. But the second post is what I wanna get to, because it actually addresses the core claim here. And Alex says, the construct you're thinking of is called a council.

Speaker 2:

It's been used for a long time to allow the elected, with limited knowledge on a domain, to get a consensus of options from a range of experts. This minimizes conflicts and prevents kleptocracy. But, like, isn't that what a czar is? I thought Sacks was a counsel. Like, he's not an elected official.

Speaker 2:

Like, the elected official is Donald Trump, the president. And, like, there's a variety of folks there. And then Sacks is, like, appointed to this czar role that is just to give his, like... he doesn't have the ability to just, like, create legislation out of thin air. Right? Like, he is very much a...

Speaker 1:

I was trying to look up the history of czars. Right?

Speaker 2:

It is weird. Is it like, have we always had czars? I know there was a whole thing about border

Speaker 1:

czar. Bernard Baruch was appointed by President Woodrow Wilson to head the War Industries Board in 1918. The press dubbed him the industry czar because he had sweeping powers to coordinate wartime production. During World War II, President Franklin D. Roosevelt appointed several czars to manage the massive wartime economy, including a shipping czar and a synthetic rubber czar. These roles were

Speaker 2:

Synthetic rubber czar?

Speaker 1:

One of the most iconic

Speaker 2:

Stoked for that.

Speaker 1:

People don't talk about our ongoing need for a synthetic rubber czar. No. These roles were essential because existing government bureaucracies were too slow to handle the urgent demands of total war. During the Nixon era, the modern concept of the czar, a policy specialist with a specific portfolio, solidified. During the 1973 oil crisis, Nixon appointed William Simon as the energy czar to manage fuel shortages.

Speaker 1:

He also had a drug czar during the beginnings of the war on drugs. So anyways, again, I think unless you're just blanket against these industries, it's hard to argue that you want somebody that doesn't have any expertise in said industries.

Speaker 2:

Yeah. Some of these claims here... here's one. It's sort of hard to track. So he says... this is from The New York Times, from the actual article, from the screenshot: Free of those restrictions, Mr. Sacks flew to the Middle East in May and struck a deal to send 500,000 American AI chips, mostly from NVIDIA, to the UAE, the United Arab Emirates.

Speaker 2:

The large number alarmed some White House officials, who feared that China, an ally of the Emirates, would gain access to the technology, these people said. But the deal was a win for NVIDIA. Analysts estimated that it could make as much as $200,000,000,000 from the chip sales. And so, like... we've covered the debate around export controls and where should NVIDIA be able to sell things. But it's never been an open-and-shut case in my mind.

Speaker 2:

It's never been like, oh, it's so obvious that The UAE is completely off the table.

Speaker 5:

Yeah.

Speaker 2:

I don't know.

Speaker 1:

And also... I mean, it was also just, like, painting the friendship between Sacks and Jensen as something that felt wrong, which was a little bit rough considering it's the most valuable company in the world. Yeah. One of the most important AI companies, potentially the most important AI company if you just go by weight in the various indexes. Yeah.

Speaker 1:

So

Speaker 2:

I don't yeah. I don't know. I mean, it's like it's clear that he doesn't have NVIDIA bags directly. Like, that's completely debunked. So so you have to do these, like, 25 different steps to get to some sort of conflict.

Speaker 2:

It's a lot of

Speaker 1:

like, you know I read this, and I think like this is if you're the average New York Times subscriber

Speaker 2:

Yeah.

Speaker 1:

This is probably that you were they were probably like very excited by this story. Right?

Speaker 2:

Yeah. I think a lot of people are definitely just riled up by the All In Podcaster.

Speaker 1:

Charlie in the chat says, All In Pod about to be an all-timer after this article. Do you think it's possible that David and Jason coordinated to get this hit piece done to grow All In even further? They're at such an insane scale. That was a great thing.

Speaker 2:

Yeah. Jason said a bunch of... I mean, Jason made a lot of good arguments about this, but one thing was, he was like, we would be smaller if we... what was it? He was like, we would be bigger if we didn't talk about politics. And that seems crazy to me. I feel like politics is, like, the ultimate TAM expander in the history of podcasting and media broadly.

Speaker 1:

The audience for political content is, like, 10 times

Speaker 2:

less than times 10 I do believe that Jason loves talking about tech. And, like, I think he's I think he's

Speaker 1:

He's an OG.

Speaker 2:

He's an OG. He said that multiple times. But I would be shocked if politics was not a TAM expander for podcasts broadly. And then the other thing is that he said they lost money on the All In events. I don't know how that's possible.

Speaker 2:

Like, those events, obviously, they're, like, big budgets, but, you know, I would imagine that, like, the sponsors and the ticket sales... they're not cheap tickets. Right? I would imagine that they'd be making money off that. I certainly hope so. I mean, they've been running this thing for five years.

Speaker 2:

It's incredibly valuable in the ecosystem. They should be able to capture some value there.

Speaker 1:

Maybe they set up their own data center to sort of manage. Right? They're just

Speaker 2:

they're just... they're like, yeah, we decided to bring... We're

Speaker 1:

gonna make it back.

Speaker 2:

Podcast production on prem, and we we ordered

Speaker 1:

A lot of content.

Speaker 2:

A hundred thousand Blackwells.

Speaker 1:

A lot of CapEx.

Speaker 2:

Blue Owl really has us by the balls. It's rough. Martin Shkreli here says, the Sacks piece illustrates the exact problem with The New York Times. Voters specifically want this type of person, not a bureaucrat who has never worked a real job, Lina Khan, K Street. Yeah. And

Speaker 1:

That's... So the issue, and the reason I think this article was written, is that New York Times subscribers specifically want this type of article.

Speaker 6:

Mhmm.

Speaker 2:

Yeah. Whiskey titans going back and forth here. Did you miss the entire part of the article? This isn't a, quote, we can't have businessmen in government. This is a we can't have the government officials who host government summits and sell access to the president for $1,000,000 via their podcast business.

Speaker 2:

And Martin Shkreli says, I doubt it was Sacks who wanted to sell $1,000,000 passes. And Whiskey Titan says, I agree with you. I'm sure it wasn't, but letting Jason run rampant until Susie Wiles steps in isn't a great look. I happen to think Sacks is doing fine in this particular role, but I also understand the general public feeling, like, there's a lot of graft. The New York Times isn't the right conduit for that argument, though. And they're going back and forth.

Speaker 2:

The timeline truly is in turmoil over this. Dan Primack had a good take. He had a whole breakdown of this, which I think was interesting. He said... let's kick this off. But first, let me tell you about Fal.

Speaker 2:

Build and deploy AI video and image models. They're trusted by millions to power generative media at scale. So Dan Primack said, lots of people are sending me the New York Times story on David Sacks. Outside of the All In sponsorship proposal, which feels oblivious at best, corrupt at worst, I'm not seeing much in there that's new, at least to those who've been following. Dan Primack says, as an aside, it's true that Sacks slash Craft still have a ton of AI investments. Thing is, all tech investments at this point are AI investments.

Speaker 2:

It's kinda like Internet investments at this point. If you invest in tech startups, you de facto invest in AI startups. And Jason says, we lost money on the event. The NYT knew this and deliberately published false information. And Dan Primack says, they included the statement that you lost money on it.

Speaker 2:

What did they print that was false? That we're somehow making money on this or some gain. And Dan Primack says, just reread it. Just reread it. Doesn't claim that All In made money.

Speaker 2:

Said you tried to generate revenue via $1,000,000 sponsorships, including for a VIP reception that didn't end up happening, but adds that it doesn't know what sponsors ultimately paid, and included the statement that you lost money. Am I missing something? And Jason says, Mr. Sacks has raised the profile of his weekly podcast All In through his government role and expanded its business. Confused. I thought you were talking specifically about the White House AI summit pieces, Dan Primack.

Speaker 2:

Talking in general, don't know how you would not quantify. Sacks' role in the White House def raised All In's profile, at least among normies. As for role in biz expansion, guess you could stake your claim there. I completely disagree with this. I feel like the All In podcast put the White House on the map.

Speaker 2:

I feel like a lot of people were like they found out about the White House and about the US government.

Speaker 1:

Which house?

Speaker 2:

Because... exactly. Exactly. Because of the All In Podcast. They were listening to the All In Podcast, and they were like, wait. Wait.

Speaker 2:

Wait. You're telling me

Speaker 1:

There's people in Washington DC.

Speaker 2:

And they run this whole country.

Speaker 1:

They create

Speaker 3:

of the rules.

Speaker 1:

Yeah. They create sort of laws and framework for how our country should operate Yes. Which which industries we want to Yeah. You know, support and grow.

Speaker 2:

You're telling me. You're telling me that there's a group of people, and one of my besties is in it? This is amazing. I gotta learn more about this. I gotta figure out what a bill is.

Speaker 2:

I gotta figure out how a bill turns into a law.

Speaker 1:

Chat, what is a bill?

Speaker 2:

Jason says, if anything, going deep into politics has been a net negative for All In. At least in my opinion, we would be growing faster and wouldn't have lost some percentage of our left-leaning audience if we'd stuck to tech, markets, science, VC, etcetera. That's an interesting take. I still think that politics made it so important. It made it so big.

Speaker 1:

Well, yeah. And it made it made the content polarizing. Yeah. Which but I think that polarizing in media is good. You actually get more attention.

Speaker 1:

Not necessarily good from all points of view Yeah. But good from a pure just like reach.

Speaker 2:

I mean, yeah. I was looking at, I think, the ratings or, like, the amount of viewers. Like, CNBC is bigger than Bloomberg by, like, a pretty significant margin, because Bloomberg's, like, extra wonky, and CNBC is a little bit... I mean, it's, like, literally called consumer business news. Like, that's what the C stands for, I believe. And then you have Fox, which is even more. Like, Fox News is political, and it's much bigger ratings

Speaker 4:

Yeah.

Speaker 2:

Than CNBC or Bloomberg. And then ESPN is, like, by far the biggest. Yeah. Because it's like sports. Everyone loves sports.

Speaker 2:

Yeah. And so, like, maybe that's the final that's the final form. They should go full poker and then full sports. Should become SportsCenter competitor.

Speaker 1:

I could see it. That might be the way. AB says, I only learned about Trump because Chamath endorsed him.

Speaker 2:

Yes. Exactly. I had never heard of this guy.

Speaker 1:

Who? Who? The vodka and social media entrepreneur? He's running for president?

Speaker 2:

Okay. So Dan Primack is weighing in again, concluding it. He says, the New York Times story was mostly a nothingburger, at least for those familiar with the situation. As for hoax, the story itself as published isn't being disputed. Obviously, The New York Times had info questions that Sacks' lawyers answered, and disproven info wasn't included.

Speaker 2:

That's how journalism works. The real complaint seems to be about the headline, quote, Silicon Valley's man in the White House is benefiting himself and his friends. I get the complaint, but it's really a matter of interpretation, not true slash false slash hoax. Imagine if you had a friend and they went to the White House and they didn't try and benefit you. You'd feel like... You might

Speaker 1:

not be friends with them.

Speaker 2:

You might not be friends with them anymore. Sacks and the Trump White House are pursuing a let-them-cook AI policy. I like that. That they believe will help us win the AI race and that the rewards outweigh the risks. Others disagree.

Speaker 2:

Yeah. This is so true. It's like there there is no, like, oh, like, we now know the correct way to win the AI war. Like, we we we know that there's a correct way. It's very obvious.

Speaker 2:

It's like, no. Everyone's debating this constantly, even inside of tech, and Sacks has one view that I think has actually played out pretty well, considering that he's been anti-doomer, anti-fast-takeoff, more industrial capacity, more, you know, opportunity to grow GDP. You know, there are some elements of his takes that are a little bit more, like, TBD, like what actually happens to jobs over the long term, how does it manifest in GDP growth over the long term. But so far, I think he's been correct. And I think that's what Dan Primack is saying here.

Speaker 2:

He says, only time will tell if Sacks is correct. What we know for sure, though, is that his deregulatory policies should help VC funds: his, those run by his friends, those run by strangers, etcetera. Thus, the headline is defensible, albeit pushing an agenda. And that's the timeline in turmoil, folks. Let me tell you about Graphite.

Speaker 2:

Dev code review for the age of AI. Graphite helps teams on GitHub ship higher-quality software faster. I can't read this name... AI Amblicus. This Ilya interview will be compulsory viewing for any future student trying to understand what misallocation of capital looks like in real life.

Speaker 2:

See, I I completely disagree with this take. People were going back and forth on this. We talked about this a little bit over the, the over the holidays. But, FleetingInBits says, can you say more? Just that he doesn't have any business direction or something else.

Speaker 2:

And, and the original poster says, these are my intuitions. But for what it's worth, on the micro level, he just seems to drift in a sea of possibility and not the See kind of

Speaker 1:

Yeah. I originally read this as... the misallocation of capital that I've seen is, like, the tenth, eleventh, twelfth foundation model lab that has, like, a $100,000,000 to $1,000,000,000... Mhmm. That is just, like, kind of iterating on what Ilya already worked on... Mhmm. Already developed. Right?

Speaker 1:

Doesn't necessarily, like, if they just do if they just create a a model Yeah. That's, like, not state of the art. Like, I don't know that there's gonna be incredible value in that. Meanwhile, I'm like, okay, you take the guy that that that whose work led to ChatGPT and you give him a few billion dollars and let him, you know, continue to iterate. Yep.

Speaker 1:

And and he's not just firing a single multi billion dollar cannon and hoping he hits a target. It's like this incremental research that I think is still one of the best shots at developing the next paradigm, whatever comes after LLM. Yep. So I read this. I think that your reading of this was right.

Speaker 1:

But I initially read it the other way. And I was like, yeah, I do think this is somewhat bearish on the incremental large language model lab?

Speaker 2:

Yeah. I don't know. I mean, I can I can kind of steel man both? Like, we're gonna have Julian on the show in when is he coming on? At or or we're having Vincent from Prime Intellect come on the show.

Speaker 2:

And, I was talking to him. We'll we'll get more information from him. He's gonna be on at, 01:40. But, Vincent was explaining that that more and more, more and more companies and different business processes, they do need specific training runs. They do need the skill sets of a foundation model lab, but they're not but there's a lot of, business to be done that's not purely AGI seeking, not purely paradigm shifting.

Speaker 2:

So I do think that there's there's some value there if the business can be run well, which is a big if, but there is a there is a path where a thinking machines or or one of these companies is going to go and do specific reinforcement learning, specific model development for, a specific company and task. That can work out. It's a very different business than searching for the next paradigm doing science. And maybe you shouldn't even call it a lab because you're not really even trying to do foundational science necessarily. You're more productizing.

Speaker 1:

Company.

Speaker 2:

Yeah. It's a business. Business. Which is great. We love that.

Speaker 2:

What's what's interesting about Ilya is that when we talked about this, like, it is a venture style bet. Like, let the scientist go experiment. Maybe it will work out. It's extremely high risk, probably a zero. But if it works, it's huge.

Speaker 2:

Right? So the expected value is still high. What's crazy is that we're doing a venture-style bet at growth scale, and it's just a massive amount of capital for something where, I think, the consensus here is that either he solves it and it's incredibly valuable and leapfrogs everything and is just amazing, or you get lost in the sea of research and ideas and you never really produce anything. So I love the high-risk bets. I just understand why people are saying, woah.

Speaker 2:

Woah. That that scale, that's a lot of money. That's a lot of money. But that but that has been happening internally at Google for a long time. They probably burned a lot of money on you know, research projects.

Speaker 2:

Hasn't been that big of a deal because they had the engine for it. And if the if the investors are, significantly diversified, they should be fine.

Speaker 1:

Yep.

Speaker 2:

Anyway, what else is in the timeline today? Fin.ai, the AI that handles your customer support, the number one AI agent for customer service. We did get a good meme.

Speaker 1:

We got a couple good memes Cody says, when my wife asks what we should eat for dinner, but says no to my first two suggestions.

Speaker 2:

We are back to the research. I like it. And then, when she asks what I want for dinner from Bezlord, the answer to that question will reveal itself. I think there will be lots of possible answers. Very true.

Speaker 2:

It's a great great new meme template. I like it. When my husband asks how many Amazon packages are still on the way, the answer to that question will reveal itself. I think there will be lots of possible answers. But I think that's actually true.

Speaker 2:

If he creates some new AI, like, there's a bunch of different ways to monetize it. We know this is a fact. But, of course, Ilya is now joining the ranks of Yann LeCun and Rich Sutton and Andrej Karpathy, of sort of industry legends that are, more or less, saying that scaling is over and LLMs are dead. You know, on the other side, Sholto is saying scaling is maybe not over. So we'll see.

Speaker 1:

This is... What? This post is... yeah. This post is great. Scaling is over and LLMs are a dead end. Aw.

Speaker 1:

You're sweet. Scaling is over and LLMs are a dead end. Hello?

Speaker 2:

Human resources? I love his meme template because it's like yeah. Jan Lakoun has been saying the same thing.

Speaker 1:

He said... Yann says, for the record, my current BMI is twenty-four.

Speaker 2:

This guy rocks.

Speaker 1:

He's very funny. I thought he would drop the the meta tag on X by now. But I guess he's still

Speaker 2:

Oh, he's still repping them. Didn't he leave? Reportedly, he's like...

Speaker 1:

Reporting to leave.

Speaker 2:

Okay. He's, like, on his way out, more or less. Another $1,000,000,000 to SSI. There's a bunch of this in the SSI bucket. Let me tell you about Profound.

Speaker 2:

Get your brand mentioned in ChatGPT. Reach millions of consumers who use AI to discover new products and brands. Of course, we are having Dylan Patel on the show in twelve minutes, and we should do a little bit of a run-through of the drama on the timeline. The timeline was in turmoil. Lots of people, very, you know, upset with SemiAnalysis' latest post.

Speaker 1:

dare you How dare you take

Speaker 2:

a swing at NVIDIA. They took a swing at the king, which was the name of their article. They said, TPU v seven: Google takes a swing at the king. The king is, of course, NVIDIA.

Speaker 2:

And they are asking, is this potentially the end of the CUDA moat? They're talking about Anthropic's one gigawatt TPU purchase; the more TPUs Meta, SSI, xAI, OpenAI, and Anthropic buy, the more GPU CapEx you save; the next generation TPU v eight; and they're going into what the battle between the TPU and the next generation GPU out of NVIDIA will look like. And this upsets some people. There's a lot of folks who are long NVIDIA. Either they have invested in NVIDIA, they made a lot of money in NVIDIA, or their whole business is tied to NVIDIA, or AMD even.

Speaker 2:

And so Or they bought

Speaker 1:

the local top a month or

Speaker 2:

Potentially. There's a whole bunch of reasons. You could also just disagree with this, and you could just think that, you know, SemiAnalysis, their takeaways are wrong. But I think it's a thought provoking article. I think there's a lot

Speaker 1:

of data in here. So thorough.

Speaker 2:

They're extremely thorough. And I think that they do leave you with a lot of new information that you can, you know, do with what you want. And I think, in general, the response to this article was very positive, but there were some folks who were very upset by it and went and went all over the place.

Speaker 1:

And on accounts that put a noun and then Capital as their name. Yes. And suddenly, they're an expert on everything.

Speaker 2:

Yes. Yes. Yeah. It was it was a little odd seeing the credentialism come out from the anons because, like, I I don't think we should get in the two can play that game, camp. It's it's a little bit rough.

Speaker 2:

But there's a little bit of interesting stuff in here. I wanna read through some of this. Let's kick it off with the opening of the SemiAnalysis article. The two best models in the world, Anthropic's Claude Opus 4.5 and Google's Gemini three, have the majority of their training and inference infrastructure on Google TPUs and Amazon's Trainium. Now Google is selling TPUs physically to multiple firms.

Speaker 2:

Is this the end of NVIDIA dominance? The dawn of the AI era is here, and it's crucial to understand that the cost structure of AI driven software deviates considerably from traditional software. Chip microarchitecture and system architecture play a vital role in the development and scalability of these innovative new forms of software. The hardware infrastructure on which AI software runs has a notably larger impact on CapEx and OpEx, and subsequently the gross margins, in contrast to earlier generations of software where developer costs were relatively larger. Consequently, it is even more crucial to devote considerable attention to optimizing your AI infrastructure to be able to deploy software.

Speaker 2:

Firms that have an advantage in infrastructure will also have an advantage in the ability to deploy and scale applications with AI. And as I say, we've long believed that the TPU is among the world's best systems for AI training and inference, neck and neck with the king of the jungle, NVIDIA. Two and a half years ago, we wrote about TPU supremacy, and this thesis has proven to be very correct. The TPU's results speak for themselves. Gemini three is one of the best models in the world.

Speaker 2:

And, there's a very funny bit in here. I need to find it. Saving. Oh, yeah. Here.

Speaker 2:

So, this is a very spicy line in here. He says, OpenAI hasn't even deployed TPUs yet, and they've already saved 30% on their entire lab wide NVIDIA fleet. This demonstrates how the perf per TCO advantage of TPUs is so strong that you already get the gains from adopting TPUs even before turning one on. And so, basically, what he's explaining is that because of the competitive dynamic between NVIDIA and Google with the TPU now, you can use the TPU as a stalking horse. Yeah.

Speaker 2:

And say, hey, if you don't cut your prices, NVIDIA, we know that you have really high margins.

Speaker 1:

Or not even cut prices, but encourage an investment.

Speaker 2:

Exactly. And so that's what their And

Speaker 1:

NVIDIA would rather invest back into your business Yes. Instead of cutting prices.

Speaker 2:

Yes. And so he says, we think the more realistic explanation is that NVIDIA aims to protect its dominant position at the foundation labs by offering equity investment rather than cutting prices, which would lower gross margins and cause widespread investor panic. Below, we outline the OpenAI and Anthropic arrangements to show how frontier labs can lower GPU total cost of ownership by buying or threatening to buy TPUs. And so OpenAI and NVIDIA: you know, it was $22,000,000,000 per gigawatt, plus the rest of the system, so it's a $34,000,000,000 per gigawatt expense to NVIDIA.

Speaker 2:

But NVIDIA is effectively doing an equity rebate of $10,000,000,000 per gigawatt in investment. And so how that works out is a 29% partner discount. Anthropic has similar math but a little bit higher, a 44% partner discount, because Microsoft is paying for a piece of it. And so it's an interesting thesis, and it's unclear exactly, like, well, you know, the claim is that investors will panic if NVIDIA actually just lowered gross margins. Well, if you say the quiet part out loud like this, and you do the math to show that there is basically a discount, that margins might be coming down because of competitive dynamics, does that wind up resulting in investor panic? I mean, it certainly didn't today.
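To make that arithmetic concrete, here is a minimal sketch of the implied-discount math as described above. The per-gigawatt figures are just the rough numbers quoted in the conversation, not actual contract terms, and the helper name is ours.

```python
# Hedged sketch of the equity-rebate arithmetic described above.
# Figures are the rough per-gigawatt numbers quoted in the conversation,
# not actual contract terms.

def implied_discount(system_cost_per_gw: float, equity_rebate_per_gw: float) -> float:
    """Effective discount when an equity investment offsets part of the spend."""
    return equity_rebate_per_gw / system_cost_per_gw

# OpenAI example: ~$34B/GW all-in system cost, ~$10B/GW NVIDIA investment.
print(f"Implied partner discount: {implied_discount(34e9, 10e9):.0%}")  # ~29%
```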

Speaker 2:

Isn't NVIDIA up today? Right? Yeah. NVIDIA is up 1%, adding a casual, you know, what, 10,000,000,000,000 or $100,000,000,000 or something? $110,000,000,000 quadrillion.

Speaker 2:

Yeah, gigajillion dollars.

Speaker 1:

Yeah, I just mean, again, we said this earlier on the show, but Broadcom is down almost 4% today, which I would have expected it to be the other direction given that to actually buy TPUs physically, you need to go through Broadcom.

Speaker 2:

Yeah. Yeah. So a lot of people are going back and forth on, you know, can SemiAnalysis be trusted? Because they're writing about, you know, NVIDIA, and Dylan I think some people didn't understand that he was joking.

Speaker 2:

Zephyr here has a post. Dylan is being tongue in cheek, but he's not wrong. NVIDIA was extremely dominant for the last three years as we saw in the stock. It's up 10x over the last three years. New competitors will cause a reduction in market share and margin compression, but TAM is big, so revenue profits won't go down.

Speaker 2:

75% gross margin is just unsustainable. Hyperscalers will also use the cheap TPUs threat to extract better deals from Jensen, priority access for Rubin and Feynman, or discounts on GPUs. Jensen called Altman and initiated the $10,000,000,000 deal after he saw The Information article about OpenAI testing TPUs. And so this is in reaction to that point about OpenAI hasn't even deployed TPUs yet, and they've already saved 30%.

Speaker 1:

There's a a decent post here from just another pod guy.

Speaker 2:

Mhmm.

Speaker 1:

They say, Dylan's speed running through all the learnings of sell side research: industry capture, pissing off IR execs, gatekeeping info based on client tier, difficulty scaling beyond single star analysts, distorted MSM representation of your notes, eventually spending too much time marketing versus researching, amazing biz content though. Obviously, Dylan would push back on

Speaker 2:

He did.

Speaker 1:

On a lot of this stuff. If you actually read through the entire article Yeah. Nothing in this article should actually be that surprising, because so much of the article is just referencing old SemiAnalysis research. Some of which they did, you know, sort of before the paywall, some of which they did under the paywall. Yep.

Speaker 1:

But it felt like kind of a culmination of everything that they've been saying for a really long time. And I think part of the surprise here is just how much faster this conversation has come to a head than people may have expected. At least surface level on the timeline, I think people felt like the TPU threat was maybe a 2026, 2027 conversation, versus it being part of these buying discussions and negotiations right now.

Speaker 2:

Yeah. Yeah. The other buried lede in the article was, of course, about pre training. So there's a snippet in here. OpenAI's leading researchers have not completed a successful full scale pretraining run that was broadly for a new frontier model since GPT-4o in May 2024.

Speaker 2:

And, you know, it's so interesting that, like, if this was wrong, you would imagine that there would be a whole bunch of reaction from OpenAI people or, like, proxies or surrogates. Right? People quote tweeting and being like, that's just not true. Wow. Something else is cooked.

Speaker 2:

But the fact is that I haven't seen anyone respond to this and say, like, oh, this is wrong, like, we actually did. Not that that's the north star for what the business is. Like, the business's job is to create profits.

Speaker 1:

Right?

Speaker 2:

It's not to, you know, complete successful full scale pre training runs. That's not the goal. That's just something that they might do in service of making a better model, making a better product. But ultimately, it's whatever the customers want. Yeah.

Speaker 2:

And if the customers are happy with a GPT-4o-level base pretrain and a bunch of reasoning on top, that's fine. So, what else is in the back and forth? I mean, really, it does make me happy that we didn't go deeper into ranking people, because it does feel like when you create a list of tiers and rank a bunch of people, you're just creating a big bucket of enemies down at the bottom, like, people who want you dead because you ranked them low. But I'm sure we'll get into the discussion of ClusterMax and how people are interpreting ClusterMax, because there is a whole bunch of ways to read it. Like, one way to read it is, which stock should you buy?

Speaker 2:

Right? But, like, that's not necessarily the read. The other the other other read is, like, which product is the best to work with as a customer. Yeah. But it's like, what customer are you?

Speaker 4:

Yeah.

Speaker 2:

There are some that are in the lower tiers that are fantastic for very specific use cases. Like, this is the nature of every business. Like, one of the neo clouds that was particularly upset with Dylan is in a very niche market. But if you're in that niche market, it's probably a great product. It's probably great for you if you satisfy this specific list of criteria and you don't need these features.

Speaker 2:

You're probably fine then. But it's a it's a lot of fun. People are going back and forth. They're also debating whether or not Dylan is is independent given that he lives with Sholto from Anthropic and

Speaker 1:

We gotta ask him why why he has roommates. Not even I'm not even concerned about a conflict.

Speaker 2:

It's roommate gate.

Speaker 1:

Yeah. It's roommate gate. But

Speaker 2:

What about this other one? That's a tinfoil hat post from Juukan. My theory is that Meta deliberately leaked the story to The Information about acquiring Google's TPUs. For Meta, it's a classic risk free power play. The moment Jensen Huang catches wind of Meta using Google silicon, NVIDIA is likely to rush in with an investment. They might even be negotiating as we speak.

Speaker 2:

This allows Meta to secure capital and shift from burning their own cash to potentially getting discounts or effectively buying NVIDIA chips with NVIDIA's own money. Plus, if they actually do secure Google TPUs, they solve their compute shortage. It covers all bases. I wonder when other hyperscalers will catch on to this magic wand. All you have to do is hint at using TPUs and Okay,

Speaker 1:

but the issue is how many red flags would be waving if Jensen was like, yeah, we're investing $20,000,000,000 in Meta. We're very, very excited about Meta and owning a piece of

Speaker 2:

Yeah. That seems very very odd.

Speaker 1:

So so he's in a position where I don't know what kind of leverage Jensen has in those conversations with Meta because he doesn't want to discount. And it's not like OpenAI where you can just announce an investment or an Anthropic, etcetera. So how does any type of rebate actually happen is the question.

Speaker 2:

Yeah. Well, before we bring in our next guest, let me tell you about TurboPuffer: serverless vector and full text search, built from first principles on object storage, fast, 10x cheaper, and extremely scalable. Let's read through some more TPU stuff to set the table. So Clive Chan says, I keep seeing stuff about TPUs. Has anything materially new happened?

Speaker 2:

There's no evidence Google has ever trained Gemini on non TPU hardware going back to pre GPT models like BERT. TPUs predate NVIDIA's own tensor cores. Anthropic and character and SSI and midjourney have long used TPUs. I'd be surprised if Meta weren't looking at them. NVIDIA's moat has never been deep for the big labs.

Speaker 2:

See OpenAI deciding it could do better than CUDA and investing in Triton instead, regularly edging out cuDNN on benchmarks. There's nothing magical or structural about any of this, just good engineers doing good work. TPUs are not that much more efficient than GPUs, and small performance per watt differences are dwarfed by whether Meta has the right kernels and systems engineering talent to pull it off. Both NVIDIA's and Google's moats are small, and we are still at the point where individual good engineers can flip the entire balance. Why was this not priced in?

Speaker 2:

This is all super old public info. I have a feeling that Clive Chan, who, I guess, was at Tesla and then OpenAI, is a little bit of, like, first time in the public markets, first time realizing that the people who trade this stuff are not necessarily, like, on the super inside of the labs, actually understanding Yeah. the decisions that are being made inside the labs. Like, it's a completely separate ecosystem. And that's why organizations like SemiAnalysis exist.

Speaker 2:

And I believe we have Dylan Patel from SemiAnalysis in the Restream waiting room. Let's bring him in. Dylan, how are you doing?

Speaker 1:

What's up, man? Fantastic.

Speaker 6:

How about yourself? You know, I saw I saw the meme image that you guys, put out there for me, so I had to wear

Speaker 1:

it through tank.

Speaker 6:

Show you. You're getting finished. Let's go.

Speaker 1:

Let's go. Oh. Dude, we need a bigger bigger screen for that bicep. We'll we'll work on it.

Speaker 2:

Let's go.

Speaker 1:

Where in the world are you?

Speaker 6:

I'm in Florida. I was spending Thanksgiving with my family here. I'm trying to chill out a little bit. It's nice to have the family pamper me a little bit because I broke my foot a couple of weeks ago.

Speaker 2:

Sorry to hear that. How'd you break your foot?

Speaker 1:

Tripped over a TPU.

Speaker 6:

Family reunion, playing football in Texas. We're as American as we can get.

Speaker 2:

There you go. There you go. Well, we were just running through a little bit of the TPU article. Can you actually set the table for me on, like, what do you think is new about it versus what SemiAnalysis has already been saying, and this is more just, like, tying everything in a bow?

Speaker 1:

Yeah. Half of the article is just referencing recent

Speaker 2:

this for two years. We've been saying this for one year.

Speaker 1:

And even referencing Google's own content about the TPU dating back

Speaker 2:

Yeah.

Speaker 1:

Even even further. So

Speaker 6:

Yeah. I would say I would say the majority of this piece was if you're if you're a client, it's already been pretty much all published. Mhmm. But it hasn't been tied together. It hasn't had a narrative around it.

Speaker 6:

Right? Because when we think about, like, what we put out as on the paid side versus what we put out on the newsletter. Right? Mhmm. Our clients sort of get, you know, what changed?

Speaker 6:

What happened? Here's the numbers. That's about it. Right? We don't explain the technology that much because our clients are sophisticated.

Speaker 6:

Right? They're either in the industry or and or they're finance pros who don't give a shit about the technical stuff. And so it's it's either of those two. Right? And and so we're we're just explaining, here's what's happening.

Speaker 6:

Here's the change. Here's the numbers. Right? So for months, we've been saying Google's selling TPUs. For months, we've been saying, hey.

Speaker 6:

Here's TPU v seven versus Blackwell. We've even put out updates on here's what we think TPU v eight is versus what we think Rubin is. Yeah. And so, generally, it was making it into a narrative and explaining the technology and the corporate, I would say, politics or dynamism around it. Right?

Speaker 6:

So that's Yeah.

Speaker 2:

You know,

Speaker 6:

I think there have been bits and pieces put out by other folks. Right? I think The Information has done great reporting on some of the stuff after we did, but in the public space, you know, as an example. Right? So other people have put out bits and pieces surrounding this, but they haven't put out the full picture.

Speaker 6:

So so as far as, like, what's new, it depends on where you sit in the stack. But but, you know, Anthropic and and and Meta and folks like that have been talking to Google about buying TPUs for many months. Right?

Speaker 4:

Yeah.

Speaker 6:

Yep. Whereas people externally, you know, last week when Gemini three was launched, or two weeks ago, people were just learning that TPUs are training Google's models. Right?

Speaker 4:

So it's

Speaker 6:

it's wherever you sit in that information spectrum. Right?

Speaker 2:

Yeah. Totally. So, on that information spectrum, the finance bros, they can probably just, like, if they read into this, oh, bullish Google or bearish NVIDIA or whatever, they can kind of trade in and out as they please. But on the more technical side, are people using SemiAnalysis research to understand, like, okay, I'm a Neo Cloud.

Speaker 2:

What do I wanna rack for next year? Maybe I need to be putting in a TPU order. Is that is that how people interpret your research? Like, what happens on the technical side of the house?

Speaker 6:

Yeah. So as far as some of the paid stuff we do, we have one model called the TCO model, right, which is calculating the TCO and performance of all these different hardwares, building up the entire cluster cost, you know, breaking it out into, like, a dozen plus different things, whether it's storage or networking, and breaking down the cost of everything. So there we put out research on TPUs, because as soon as Neo Clouds started getting offered, hey, you wanna buy TPUs? Yeah.

Speaker 6:

We're like, okay, we need our own ground up model. So when you're negotiating a big contract, what you do is called a should-cost. Right? You go and calculate what it costs for the company versus what it costs for me to deploy.

Speaker 6:

And then you, like, think about, like, oh, what's the margins they have? What is ridiculous to offer them? What is not? Right? Yep.

Speaker 6:

Because everyone always wants to know, like, hey, what margin are they making off of me? Can I push that down a little bit? What is ridiculous to demand in a negotiation versus what's not? So we've already been working, you know, through this TCO model.

Speaker 6:

We've put out four different updates on the TCO of TPUs Mhmm. v seven and v eight, because there are neo clouds out there, as well as labs who are purchasing TPUs, that are using that to understand what the cost is. Now, you know, Anthropic, I will say, just already knew and figured it out because they've hired so many Google people, but other labs are also looking at it. Right? Yeah.
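For readers unfamiliar with the should-cost idea Dylan describes, here is a hedged toy sketch: build a deployment's cost up from components, then compare it to a quoted price to back out the vendor's implied margin. The component categories echo the ones he lists, but every number is an illustrative placeholder, not a figure from the SemiAnalysis TCO model.

```python
# Toy "should-cost" sketch: sum estimated component costs for a deployment,
# then compare against a vendor's quoted price to estimate the implied margin.
# All numbers are illustrative placeholders, not SemiAnalysis TCO-model figures.

component_costs = {
    "accelerators": 600_000,      # chips/boards for one rack, hypothetical
    "memory": 150_000,
    "networking": 120_000,
    "liquid_cooling": 40_000,
    "storage": 30_000,
    "power_and_facility": 60_000,
}

should_cost = sum(component_costs.values())   # what it "should" cost to build
quoted_price = 1_400_000                      # hypothetical vendor quote

implied_margin = 1 - should_cost / quoted_price
print(f"Should-cost: ${should_cost:,}")
print(f"Implied vendor margin on the quote: {implied_margin:.0%}")
```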

Speaker 6:

And so, you know, when you say, hey, on the cost side of things, the technical side of things, right, there's a lot of network engineers out there now who have never deployed Google hardware that are now like, okay, I need to figure out how to do this. Right?

Speaker 6:

Like, you know, so there's people who have DM'd me that are like, oh, you know, as you know, we've been thinking about deploying Neo Clouds, but your material on this is technically better and teaches me more than Google's own material. Right? So it's like, this is helpful to people on multiple fronts.

Speaker 2:

Yeah. What what what about the software side? Google's built their own internal stack to compete with CUDA. How much of that are they going to actually give to their customers who are buying TPU? Because that feels like you it it feels like potentially you could over rotate on, oh, well, Gemini three is really good, but why is it good?

Speaker 2:

Is it just because of the hardware, or is it also Google's incredible prowess, multi data center training, all this fancy stuff they have that they won't be giving you when they sell you the TPU?

Speaker 6:

Yeah. So that's the interesting thing: some of the software will remain closed source, but you can still use it.

Speaker 4:

Okay.

Speaker 6:

Right? And then some of the software, they are trying to open source aggressively. And then some of the software, they're just never gonna give out there anywhere. Right? So it sits in three kind of buckets.

Speaker 6:

Right? The interesting, I guess, newer thing that we did in the piece was we looked across all this different open source AI software. Right? Whether it's PyTorch, whether it's vLLM, you know, all these different open source libraries. And we counted up how many Google commits there were.

Speaker 6:

Right? And you can see there's a chart in the article where the number of commits that Google's doing on TPUs has exploded over the last handful of months.
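As a rough illustration of the kind of commit counting he's describing, here is a hedged sketch that tallies commits by author email domain in a local clone of an open-source repo such as vLLM or PyTorch. It's our own toy version, not the methodology behind the chart in the article, which may filter by TPU-related paths or specific authors.

```python
# Toy sketch: count commits by author email domain in a local git clone
# (e.g. of vLLM or PyTorch). Illustrative only; the actual chart may use
# a different methodology (e.g. only TPU-related files or authors).

import subprocess
from collections import Counter

def commits_by_domain(repo_path: str, since: str = "2024-01-01") -> Counter:
    """Tally commit author email domains since a given date."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    domains = Counter()
    for email in log.splitlines():
        if "@" in email:
            domains[email.split("@")[-1].lower()] += 1
    return domains

if __name__ == "__main__":
    domains = commits_by_domain(".")
    print("google.com commits:", domains.get("google.com", 0))
    print("total commits:", sum(domains.values()))
```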

Speaker 5:

Mhmm.

Speaker 6:

Right? As they've decided to shift their strategy, sell TPUs externally, they also recognize software has to be open for this. Right?

Speaker 1:

Yeah.

Speaker 6:

You know, only the gigabrains at, like, Anthropic can figure out how to do everything themselves. Yeah. Right? It's the people outside of the Anthropic types that need a bunch of open source software that builds on top of it. Right?

Speaker 6:

Mhmm. And what's interesting is when you look at, like, hey, NVIDIA you know, the biggest argument that NVIDIA doesn't really make for GPUs, but they should, is that, you know, about 40% of the software that's open sourced is actually just from China, right, on CUDA, and that's the CUDA moat. Right? It's like 40% of the software is just, like, open source stuff, whether it's people committing to vLLM or PyTorch or all these other libraries. Right?

Speaker 6:

ByteDance open sourcing stuff, DeepSeek open sourcing stuff. Sure. And Google, you know, they don't have people open sourcing you know, Anthropic's not gonna open source software. So Google needs to catch up not just by, hey, here's all the software we have internally.

Speaker 6:

Let's open source it. They also need the ecosystem to build a ton of software on top of TPUs. And so that's the that's the real big challenge there. And and there's an element of software there that NVIDIA is happy to open source, and customers of NVIDIA are happy to open source that Google will never open source because it's it's you know, Google Cloud is selling the TPU. Gemini is the one actually using it and developing a lot of the software, and these two groups are not always gonna be aligned.

Speaker 2:

Yeah. Isn't that, like I mean, what are the other kind of, just problems with, Google becoming an actual, like, seller of TPU? It feels like, there's obviously an opportunity because NVIDIA has high margins. There's demand. It's a great chip.

Speaker 2:

But culturally, structurally, like, Google tries a lot of different things. They have a lot of advantages, but occasionally, like, they fall flat on their face with just, they can't even get an RSS reader out or something like that. So, like, are there other risks to the TPU not really finding its footing for reasons that aren't just the laws of physics?

Speaker 6:

Yeah. So so the biggest challenge I see with them is it's everything is nonstandard. Right? Google for years, they they developed liquid cooling first. Right?

Speaker 2:

Sure.

Speaker 6:

For AI computing. Mhmm. They deployed rack scale architectures. Right? Everyone's talking about GB200 rack scale architecture.

Speaker 6:

Google did it first, with TPUs. Right? Mhmm. But when they did all of this stuff, they didn't give a crap about, hey. You know, this has to go in fifty, hundred, a thousand different people's data centers.

Speaker 6:

Right? This has to go in my data centers that I designed myself. So everything is super vertical. The entire liquid cooling supply chain is super vertical, entire the the racks aren't even the standard width. Right?

Speaker 6:

So when I look at, like, a data center, it's like the doors, the loading bays, because the Google racks are so much wider, like, three times as wide, it might not even fit into the data center, like, physically, through

Speaker 2:

the doors. Okay.

Speaker 6:

So there's, like, all sorts of random like, I wouldn't say random. It's Google from first principles design stuff.

Speaker 2:

Totally. Yeah. Yeah. Yeah. But if you're a Neo Cloud and you're like, the hot thing's gonna be TPUs next year or the year after, and I wanna be able to sell into that market, it's not just flip a switch, drop in, replace with TPUs.

Speaker 2:

You have to maybe build a whole new building. Like, it might be that significant.

Speaker 6:

Right. Or, like, knock down some walls. And then, like, you know, I need to go get liquid cooling not from Dell and Supermicro and HPE who I've Yep. who serviced me already. I need to go get it from some random supplier who's only ever sold to Google.

Speaker 6:

Interesting. Usually, they're sitting across the table from, like, some gigabrain engineer who has a team of 20 people working on liquid cooling instead of, like, you know, my one guy who does liquid cooling Yep. Procurement and negotiations and, like Sure. Also does procurement of, like, network stuff.

Speaker 2:

Yep. Yep.

Speaker 1:

There was a tinfoil hat theory floating around that Meta leaked their TPU interest to try to gain some sort of leverage over maybe some negotiations with NVIDIA. I don't know if you see any possibility in that. But how do you think those conversations are going? Jensen doesn't wanna discount and compress his margins, but at the same time, he can't do this kind of equity rebate thing. If he took a big position in Meta

Speaker 2:

He'd be very suspicious. Very concerned. I totally get the OpenAI investment. That seems like it makes much more sense than saying, hey, we're going long Meta, you know, as a $4,000,000,000,000 company.

Speaker 6:

Yeah. At the end of the day, right, like, TPUs have, like, a set of maybe 10 customers. Right? Because you have to be super sophisticated.

Speaker 2:

Yeah.

Speaker 6:

And so what really is challenging here is, you know, Meta looks at the numbers. You know? It's like, okay, OpenAI: I'm getting 30% off because they're investing in me. Obviously, they get equity, but they're investing in me, and I get 30% off on these GPUs as a result.

Speaker 6:

Right? Meta, you can't do that. So Meta, I don't think that they're just negotiating. Right? Like, you know, are they just negotiating with NVIDIA when they buy AMD?

Speaker 6:

No. They have the engineers. Yeah. Right? They're developing all the software.

Speaker 6:

They were actually deploying Llama 405B exclusively on AMD for a number of months. Right? For inference. Right? So when you look across, hey.

Speaker 6:

Is Meta just, like, playing around trying to negotiate? It's like, no. No. No. Like, they're they're looking out for what is best.

Speaker 6:

Right? And Meta is power constrained, and TPUs are currently way more power efficient. Meta is compute constrained. And TPUs are potentially higher performance per watt and higher performance per dollar. Right?

Speaker 6:

At least that's what we believe it is for TPU v seven. So they'd be dumb not to look at it. Right? And they have the time, they have the people, they have the team. Now NVIDIA at the same time has to play the game of chicken.

Speaker 6:

Right? Yeah. Sure. They could discount the pricing somewhat. And because what's funny is NVIDIA is more vertically integrated than Google is when selling hardware.

Speaker 6:

Right? Google has to pay Broadcom who pays TSMC, whereas NVIDIA gets to pay TSMC directly. Right? There's this vertical integration challenge where NVIDIA could drop the price a little bit, and they'll be fine, but they don't want to. Right?

Speaker 6:

You know, the the whole point is you charge the highest price possible. And then the last thing is they've got this, like you know, they've got this view about antitrust. Right? You you don't want to cut deals for specific customers because that looks bad. Right?

Speaker 6:

Instead, you want you know, right now, Dell pays the same price for a GPU as Gigabyte, as Meta. Now on the networking hardware, there's different pricing because there's a lot more competition, and NVIDIA can cut a lot more there. But on the GPUs themselves, NVIDIA's pricing is very fair. Right? Fair in the sense that they're making a shitload of money off of everyone.

Speaker 6:

You know?

Speaker 1:

Yeah. Talk about the kind of leverage Jensen has around Rubin allocations as some of these customers start to at least consider TPUs.

Speaker 6:

Yeah. So as far as, like, next year's TPU deployments, it's pretty set in stone for the vast majority of the volume. Right? Anthropic's got a bunch, and then there's some sprinkled elsewhere. But as we go into 2028 where Google can actually ramp, you know, the flip side is Rubin is also ramping.

Speaker 6:

And at least based on our research looking throughout the supply chain, you know, over a year ago when OpenAI started their chip team, they poached, like, 15 Google people overnight. Right? In one week, like, someone I knew at Google was like, yeah, I'm joining OpenAI. And then I texted, like, another three people I know, and they're like, oh, yeah.

Speaker 6:

I'm also joining OpenAI. I'm like, what the fuck? So a lot of Google's best TPU engineers have left. Right? They also have a ton left.

Speaker 6:

And so what that's done is, you know, chip timelines are so long, that didn't affect TPU v seven. That's affecting TPU v eight. At the same time, Google is trying to diversify their supply chain, get it from not just Broadcom but also MediaTek. And so Google's got a real challenge on TPU v eight in that it's good.

Speaker 6:

It's an improvement. But then when you go look at what NVIDIA is doing with Rubin, Rubin is so much better, because NVIDIA is just pedal to the floor, paranoid as fuck. We have to be the best, and we have to be way, way, way better than everything, because how much better I am than everyone else is my margin. Right? And so at least currently, we think NVIDIA is gonna be so much better that they'll be fine and they'll be able to maintain margins.

Speaker 6:

Right? Now things can happen. Rubin can be delayed or TPUs can be delayed, and the position looks better or worse. Right? Mhmm.

Speaker 6:

There's a lot of unknowns to go through. But as far as what Jensen's leverage is: look, I'm gonna make the best hardware, plus my software advantages, and I'll be able to continue to be dominant and dominate the market. Right? There are curveballs that could come, which is like, oh, Google software, they could open source enough software that actually their software ecosystem is not far behind NVIDIA's.

Speaker 6:

Maybe they don't want to. Right? Or, hey. They could, you know, they could execute everything, and NVIDIA has a three, six month delay. Now all of a sudden, they're a lot more competitive.

Speaker 6:

Right? And so all these things are still open questions, but NVIDIA can play the allocation game as well, of course. Right? Yeah. Hey.

Speaker 6:

I'm gonna give all of the GPUs initially to companies that probably could buy TPUs, but that ends up being all the AI labs and hyperscalers. Right? Mhmm. At least, you know, like Meta, right, and ByteDance, that would actually be willing to buy TPUs. And then you end up with this, like, weird situation where, okay.

Speaker 6:

Well, that's, like, 75% of the GPU market anyways when I look at the the AI labs through the Neo Clouds. Right? When there's, you know, Nebius and Iris Energy and all these other, you know, CoreWeave and all these folks are are deploying for OpenAI anyways. Right? You know, this this sort of ends up being like, well, sure.

Speaker 6:

I could stiff, like, some people in the allocation, but at the end of the day, everyone who is a potential customer for GPUs is sophisticated enough to be where they were gonna be on the beginning of the allocation anyways. Right?

Speaker 1:

That makes sense.

Speaker 2:

How are you framing ClusterMax these days? Is it for customers who want to buy services from Neo Clouds? Is that the primary goal of ClusterMax? Because I feel like some people look at it, and they're like, this is a buy rating, this is a sell rating on the stock.

Speaker 6:

So the funniest thing is, like, ClusterMax v one, the title of it was ClusterMax: How to Rent a GPU. Okay. Because we discussed all of that. And then in ClusterMax v one, I believe we put Iris Energy at underperform. Mhmm.

Speaker 6:

Right? At the same time, the research side of the business, we explicitly were like, dude, they've got these data centers. It doesn't matter if they suck at running GPUs. Sure. They've got these data centers.

Speaker 6:

They've got this power. If you just value them on a watts per, you know, how much money they could make, it's a long. Yeah. At the same time, it's like, Jordan. Right?

Speaker 6:

Jordan, he's running ClusterMax. He's like, Iris kinda sucks. And it was other people on the technical team before him. You know, it's like Jeremy, who's running the data center side, and I think he's been on TBPN, is like, dude, Iris Energy is a stock. Right?

Speaker 1:

So it's like it's it's

Speaker 6:

it's kind of like, you know, it's what the technical side of the house does versus what the research side of the house does. Yes. They talk to each other. Right? Jeremy did ask the team, like, hey.

Speaker 6:

What do you think of Iris Energy? I think it's a long. And the team working on ClusterMax is like, I don't know. Like, you know Yeah. it's a bad cloud. And it's like, that doesn't matter. So ClusterMax has nothing to do with the stock.

Speaker 6:

Right?

Speaker 2:

Yeah.

Speaker 6:

Yeah. Now, obviously, there's gonna be some correlation between how good the stock is versus, you know, who's gonna wanna rent from them. Yeah. But at the end of the day, right, the sole purpose of ClusterMax, what we explicitly say in there, is it's for people renting anywhere from, like, you know, hundreds of GPUs to, you know, right below the AI lab scale. Right?

Speaker 6:

The AI lab scale, there's different considerations. But in that range, tens of thousands of GPUs all the way down to hundreds of GPUs, that's who we're targeting. Plus, we're saying we're giving a bunch of feedback for people to make the cloud ecosystem better. Yeah. The unsung hero between cluster max v one and v two is that we moved the bar up.

Speaker 6:

Right? You know, what it required to be in gold, like, was was much more. What it required to be in silver was much more because everyone improved so much. Right? And as as we continue to, like, increase the requirements, make it harder and harder

Speaker 1:

to You gotta keep moving the goalposts. Right.

Speaker 6:

People keep improving the ecosystem. And, actually, you know, this is the funny thing. It's like, ClusterMax is evil. But when we look at the quotes, and we've got hundreds of quotes on clustermax.ai, all these companies are like, dude, I love this. This one specific bug that this Neo Cloud had, they fixed it as soon as you wrote about

Speaker 4:

it. Yeah.

Speaker 6:

Right? Or, like, hey. Help me understand the reliability. Help me understand this or that. People are like, love ClusterMax.

Speaker 6:

And, you know, altruistically, like, I think we're generating billions of dollars in value just from, hey, like, all these clouds are more efficient, and there are fewer failures, and it's easier to get your workload running on any random GPU cloud, and the market is more efficient. Now I'm not making any money off of that. How am I making money off of ClusterMax? I'll be very clear.

Speaker 6:

Yeah. It's people who hire us to do diligence. Right? So people who wanna acquire a Neo Cloud, people who wanna sign a massive, massive deal that's not just thousands of GPUs but tens of thousands of GPUs. And then lastly, it's people who wanna, you know, invest in a Neo Cloud.

Speaker 6:

Those are the three areas where we're making money off of, quote, unquote, ClusterMax, but not really. We're not selling ratings. We're not Sure. You know, in fact, a customer will do a consulting project with us or wanna buy some research from us, and I'll explicitly put it in our shared Slack or I'll send an email to the CEO like, dude, just so you know, the people working on this are not the people who are doing the ClusterMax rating.

Speaker 6:

Right? Yeah. You know, the people who are doing, you know, the research on, like, these data centers are there, and this is the power ramp, or here's the accelerators, or here's the TCO, that's not the people doing ClusterMax. Right?

Speaker 6:

And I don't care about, you know, whether you buy it or not. You know, at the end of the day, Google and Amazon and Microsoft are way bigger than, you know, Fluidstack and, like, you know, those kinds of companies. Right? And yet, some of those are ranked in silver and some of those are ranked in platinum and gold. And that's because of what technically matters, not, hey.

Speaker 6:

You know, obviously, when we talk about who buys our research, the biggest companies in the world are gonna pay me more than the midsized companies in the world.

Speaker 2:

Okay. Question from the chat.

Speaker 6:

And the price is discriminated based on that.

Speaker 2:

Would you change the rating of a Neo Cloud if Sholto promised to do the dishes for two weeks straight?

Speaker 6:

You know, there was an argument that I saw, someone who's like, who does the chores? And it's like, brother, we live together by choice. You know, we pay someone to come once a week. If you cook something, you do your own dishes, but, like, you know, frankly, we're working so much, and I think, like, you know, I think Dwarkesh has ordered pizza from the same spot three nights in a row before. Right?

Speaker 6:

Like, it's it's

Speaker 1:

it's Wait. The question is, is being an adult man with roommates underrated?

Speaker 6:

So I hadn't lived with people in years. And then when I moved to SF So

Speaker 1:

you came back this year. This is crazy.

Speaker 2:

Okay.

Speaker 6:

I moved to SF this year. You know, I'm like, oh, you know, I should live with friends just so it's more fun. And the first house kinda fell apart, so I moved into this house with these guys. Yeah. And we've been talking about it for months.

Speaker 6:

I love it. Right? It's like, look, you know, if you think about, oh, what if we all rented our own places that were good, and then we pooled that budget together, we have a nice place.

Speaker 2:

Yeah.

Speaker 6:

Right? And then in that place, we have plenty of space for ourselves. Yeah. We pay for someone to come and clean once a week. Right?

Speaker 6:

So at the end of the day, what is the negative here? It's, like, well, we're living with our friends.

Speaker 1:

Just guys being dudes.

Speaker 6:

To wear, like,

Speaker 2:

It's underrated. If you do bunk beds, you have more room for activities.

Speaker 1:

Exactly.

Speaker 2:

Anyway no. No. Sorry. Sorry. Actual question from the chat.

Speaker 2:

When is TPU going on InferenceMAX? We gotta know.

Speaker 6:

So we're working on it. Right? We're working with Google technical folks. You know, funnily enough, actually, we triggered a security warning for this Google engineer. Kimbo went to a JAX conference.

Speaker 6:

Right? JAX is

Speaker 2:

Yeah.

Speaker 6:

It's like PyTorch, but for TPUs. To put it most simply, it's Google's own internal thing. Right? Mhmm. That people do use externally.

Speaker 6:

He went to this JAX conference. A Google engineer presented something. He's like, can I get the slides? They send it to him, and then a Google security alert, like, locks him out of his computer because he sent us, like, some technical information. And, like, for three days, the guy can't work, and he's freaking the fuck out.

Speaker 4:

And I'm

Speaker 2:

like No.

Speaker 6:

I I emailed Jeff Dean. I'm like, bro, this is like, do not fire this guy. He sent me stuff that you presented at a public conference. He's like, oh, okay. Yeah.

Speaker 6:

Yeah. I'll get that fixed. But, anyways, like, we're working with them, we're trying to, you know, implement it. We have access to some TPUs. Mhmm.

Speaker 6:

The software stack is different. Right? Yeah. You know, just

Speaker 2:

Just one second. Do you have to rewrite or reimplement InferenceMAX, like, code that

Speaker 4:

actually runs.

Speaker 6:

I won't say it's as much work as, like, completely redoing InferenceMAX, but there's a ton of work. Right? Okay.

Speaker 6:

So we're moving as fast as we can. Yeah. Yeah. Internal target is this year.

Speaker 4:

Okay.

Speaker 2:

You know? Well, then do it. Then the obvious question is, like, I feel like InferenceMAX is my north star for relative TCO in AMD versus NVIDIA land. There was a bar chart of TCO for TPU versus NVIDIA. It looked like TPU was doing really well on that chart.

Speaker 2:

The bars were very low. Where did those numbers come from? Do you have confidence in those numbers, or do you think the numbers will change once you actually get TPU on InferenceMAX?

Speaker 6:

Yeah. So InferenceMAX shows performance per TCO.

Speaker 4:

Right?

Speaker 2:

Okay.

Speaker 6:

You know, it's great. Like, you know, guess what? Like, you know, the TCO of, like, a Raspberry Pi is incredible. It's, like, $5.

Speaker 6:

Right? Sure. You know, versus versus a GPU is $50,000. Okay. Performance divided by TCO is what matters.

Speaker 6:

So that bar chart is saying, look, TPUs are cheaper, at least on quoted specs. You know, now let's make some assumptions around utilization. And in the article, we explicitly said, look, we don't know what the utilization is.

Speaker 6:

Mhmm. It's gonna change customer to customer. Mhmm. Here's a range. Worst case, it's like a little bit worse than GPUs.

Speaker 6:

Best case, it's way better than GPUs. Right? Yep. And so InferenceMAX will tell us what the actual performance is in inference. Mhmm.
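A tiny sketch of the "performance divided by TCO" framing with the utilization range he mentions; every number is a placeholder we made up, not an InferenceMAX result or a TCO-model output.

```python
# Sketch of perf-per-TCO with an assumed utilization range, as described above.
# Every number here is a made-up placeholder, not a measured result.

def perf_per_tco(tokens_per_sec: float, utilization: float, tco_per_hour: float) -> float:
    """Useful throughput delivered per dollar-hour of total cost of ownership."""
    return tokens_per_sec * utilization / tco_per_hour

gpu = perf_per_tco(tokens_per_sec=10_000, utilization=0.80, tco_per_hour=6.0)

# TPU: assume a lower TCO but an unknown achievable utilization, so report a range.
tpu_low = perf_per_tco(tokens_per_sec=9_000, utilization=0.50, tco_per_hour=4.0)
tpu_high = perf_per_tco(tokens_per_sec=9_000, utilization=0.90, tco_per_hour=4.0)

print(f"GPU perf/TCO: {gpu:,.0f}")
print(f"TPU perf/TCO range: {tpu_low:,.0f} to {tpu_high:,.0f}")
```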

Speaker 6:

Because we don't know yet. Right? Currently, the open source software for TPUs is not good enough for us to just take the open source software and say that's the performance. Right? Because that's obviously, like, not real.

Speaker 6:

Right? Anyone who, like, is actually buying TPUs is gonna spend engineering hours to work on it. And so we're trying to work with Google to get a real performance number that is achievable by people, you know, and will be upstreamed into the open source software, because this is an in progress thing. Right? No one cares what TPU v seven can do today.

Speaker 6:

It's about what it does in six months. And so, you know, obviously, we don't wanna be you know, today, TPUs, if you're using vLLM, are worse on performance per TCO than GPUs, without a doubt. But the target is moving very fast. And, you know, there's a ton of low hanging fruit for us to implement before we actually put a number out there. Right?

Speaker 6:

And so where does Google sit there? We'll see. I I personally believe the TCO side of things, the total cost of ownership is based on what we know on supply chain. Right? Mhmm.

Speaker 6:

How much do how much do the chips cost? How much do the racks cost? How much does the liquid cooling cost? How much does the memory cost? How much do the cables cost?

Speaker 6:

Etcetera, etcetera, etcetera. Right? That's based on our estimates up and down. So I think the TCO side of things, we're pretty confident. It's the it's the performance side of things where we don't know.

Speaker 6:

Right? There is a wide range, and that's what we sort of tried to state in the article. Right? Performance is a wide range.

Speaker 1:

Can you explain more about Google and Broadcom's relationship? Max Hodak from Neuralink and Science was asking on the timeline last week, why have Broadcom as a middleman? Couldn't Google do the design and place the orders with TSMC themselves? What's your read on that relationship and how durable it is?

Speaker 6:

Yeah. So when you think about chip design, there's a few different stages. Right? There's defining the architecture, and then there's actually, like, implementing that architecture onto a process technology. There's laying out that architecture into gates on the chip, and then there's, like, the whole supply chain side of things.

Speaker 6:

Right? Negotiating contracts, giving allocations, etcetera.

Speaker 2:

That takes, like, eighteen months. Right? Isn't that, like, an eighteen month process, basically?

Speaker 6:

Yeah. Eighteen months or more. Right? Yeah. I would say, actually, NVIDIA is on the faster side and Google's on the slower side, just because, you know, NVIDIA's been doing it for longer.

Speaker 6:

They have a bigger team. Right? And they they you know? But at the same time, Intel has the biggest chip design team and they move even slower than that. Right?

Speaker 2:

Mhmm.

Speaker 6:

They take, like, four years. At least that's what they did a year or two ago. We'll see what the new CEO can do to, you know, reorg it. Right? But as far as Google, you know, when they first started the TPU, it was very few people, and they relied heavily, heavily, heavily on Broadcom to do everything.

Speaker 6:

Right? They just defined the top level architecture, and Broadcom did everything I said below. Right?

Speaker 2:

Got

Speaker 6:

it. Negotiating with the supply chain, figuring out how to lay out the gates, everything. Right? As time has moved forward, Google has taken on more and more of this. Right?

Speaker 6:

Now, you know, they've talked a lot about AlphaChip, where they use AI to help floorplan the chip. Right? Once you have the architecture, how do I physically lay it out onto the chip? Right? They've done more and more there.

Speaker 6:

They haven't taken over everything yet, but that's sort of the point. But Broadcom has this, like, super big advantage. Right? NVIDIA, they acquired Mellanox, you know, call it five, six, seven years ago. Huge acquisition.

Speaker 6:

Who's the biggest networking company in the world? Broadcom. Right? Broadcom is the biggest networking company in the world. And, you know, when you talk about AI, it's the architecture of the actual processing elements.

Speaker 6:

It's memory, which you're buying from, you know you know, the memory companies. Right? Hynix and Samsung and Micron. And then it's networking. Right?

Speaker 6:

When you try and boil it down to the most simple things in software. Right? The networking side of things is so important. And the, let's say, technical competence of everyone around the world besides Broadcom and NVIDIA in networking is so low or rather it's just not as good as them. They're actually good, but it's like Broadcom and NVIDIA are just so good.

Speaker 6:

And Broadcom is better than NVIDIA in many ways at networking, so, you know, when you think about what Google is doing, yes, they're defining what the network topology is. But when you're talking about the physical network SerDes, you know, how packets get transferred, all these different things, Broadcom has heavy, heavy influence there. So to this day, right, Broadcom is still charging margins like they did three, four years ago, even though Google has taken up more and more of the work. But at the same time, Google can't leave until they figure out how to do the networking and supply chain themselves or with a partner. And so what they're doing on TPU v eight, which is potentially a distraction that's slowing down their execution, is they're working with MediaTek.

Speaker 6:

Right? MediaTek at times has helped Cisco with their network chips. Mhmm. MediaTek has a lot of work on some of this networking stuff. They're nowhere close to Broadcom, right, on revenue.

Speaker 6:

Right, and that's one metric. On technical competence, you know, that's another metric. I think MediaTek is good. Right? But, like, they're just nowhere close to Broadcom.

Speaker 6:

So now Google is having to work with, you know, I don't wanna say subpar vendors, but inferior vendors to Broadcom.

Speaker 1:

And that's just to increase their margin on TPU v eight.

Speaker 6:

I would even say their angle when they started this project was never, we're gonna sell TPUs externally. It was, dude, we're paying, you know, a 3x markup to Broadcom, and half the cost of this chip is memory. Like, what the fuck are we doing? Right? You know?

Speaker 6:

At the same time, it's like, well, sure, physically the cost for the networking is not that much, but the value the networking brings is, you know, sort of Broadcom's. And then Broadcom's also doing the, like, game theory of, well, you can't really leave us, so we're gonna charge you what we think is fair, or what we think we can charge. And Google's like, oh, no.

Speaker 6:

We're stuck to you. Right? So MediaTek is taking way, way, way less margin. They're not passing the memory through them. Right?

Speaker 6:

And so, you know, this this ends up being like, hey. That's a huge advantage for them. Flip side is like, well, they've they've they've got to engineer all this work that Broadcom was doing instead of working on a way better architecture. They've gotta work with a worse vendor. Right?

Speaker 6:

Objectively worse, although MediaTek, like I said, is very good, to try and implement TPUs more directly with TSMC, with less Broadcom sort of in the middle.

Speaker 1:

Very helpful.

Speaker 6:

And Google, you know, because it's risky, is going down both paths. Right? They're continuing to work with Broadcom on TPU v eight, and then on a separate TPU v eight project, they're working with MediaTek. Right? Because they can't risk, you know, fine, whatever, 30 points of margin, 40 points of margin, 50 points of margin.

Speaker 6:

I can't risk the TPU being late because ads runs on that. Gemini runs on that.

Speaker 1:

Yep. Can you give any takes on NVIDIA's $2,000,000,000 investment in Synopsys that got announced this morning? I don't know if you saw it. I'm assuming you did.

Speaker 6:

Yeah. So in a time where, you know, let's say the two biggest chipmakers, Broadcom and NVIDIA, are making more money than ever, and everyone else in the supply chain and all the hyperscalers are trying to design more and more chips, everyone's sort of working on that, you've got the EDA vendors at the lowest valuations that they've had. They're still very expensive, but the lowest valuations they've had on an earnings multiple basis for a long time.

Speaker 6:

And this is on the eve of, hey, like, objectively, are there gonna be more chip designs or fewer chip designs in five years? A lot, lot more. Now the flip side is AI chip design is coming. There's 20 plus companies doing AI chip design.

Speaker 6:

We've got a really long article coming on that soon that'll explain the landscape, but AI chip design is gonna shake up everything.

Speaker 2:

And so the question

Speaker 1:

is, like this is AI doing chip design. Correct?

Speaker 6:

AI helping chip design, whether it's for AI chips or for, like, power chips.

Speaker 2:

Okay. Yeah. Got it.

Speaker 4:

And so so the question

Speaker 6:

is, like, you know, NVIDIA has a lot of tools internally. Right? The thing about EDA is that there's three companies that own 95% of the revenue. But at the same time, Google and NVIDIA and Broadcom and all these guys also design a lot of their tooling internally, although they are massive customers of all three vendors. Right?

Speaker 6:

So it's kind of like an oligopoly where the customers also contribute a lot. And so NVIDIA's whole goal here is, how do I get every EDA flow working on GPUs? Because today a lot of it is running on FPGAs, a lot of it's running on CPUs. And chip design is gonna get a lot more AI influenced. How do I get everything working on GPUs in terms of, like, the operation of it, even if it's helping people design not GPUs?

Speaker 4:

Mhmm.

Speaker 6:

Right? Yeah. And and I don't have enough engineers to work on all the software. They've open sourced a lot of software. Right?

Speaker 6:

Like, cuLitho, it's software for lithography. Right? And they've got all this software up and down the chain, all the way from lithography to laying out chips and all these other things. They just wanna make it all run on GPUs. And so that's what their goal here is.

Speaker 6:

Right? And now they've given Synopsys a huge, huge they're buying into Synopsys at the lowest valuations that Synopsys has ever had, with all this cash that they were gonna give away in dividends or buybacks anyways. And they're getting Synopsys to now make GPUs first class. Right? And so I think this is a win-win for Synopsys and NVIDIA.

Speaker 2:

Well, we could go way longer, but I know what's on your calendar. You gotta hit the gym. Thank you so much for coming by and chatting with us. This is really

Speaker 1:

There you go.

Speaker 6:

There you go. Helpful.

Speaker 2:

Have a great rest of your day. Enjoy the holidays with your family. Great catching up. We'll talk to you soon.

Speaker 1:

Cheers.

Speaker 6:

Goodbye. See you guys.

Speaker 2:

See you. Let me tell you about public.com, investing for those who take it seriously. They've got multi asset investing, and they're trusted by millions. We have Ro Khanna in the Restream waiting room. Let's bring him into the TBPN Ultradome.

Speaker 2:

Ro, good to meet you. Welcome to the show. How are you doing?

Speaker 4:

I'm doing well. You guys have become quite the celebrities in my district. Everyone is tuning in to your podcast.

Speaker 2:

I'm glad to hear it. I'm glad to hear it, and we're happy to have you on the show. Thank you so much for taking the time. What was that?

Speaker 4:

Are the All-In guys jealous, or do they respect you?

Speaker 2:

I think that they have left us. They're in the stratosphere. They have left, you know, the scraps. We're picking up the scraps compared to them.

Speaker 4:

I think, like any great Silicon Valley startup, you're nipping at their heels.

Speaker 2:

Well, it's funny. We had a New York Times piece about us. It was very nice, you know, just like, here's what TBPN is doing. Just kind of an explainer piece. David Sacks, of course, got a little bit more of the investigative journalism treatment, got five reporters.

Speaker 2:

We only got one. And so I think that tells you about the relative importance of the shows. But, anyway, I'm sure we'll get into that. I would love for you just to kind of set

Speaker 1:

Normally, when a guest joins and they're wearing a suit, we say thank you. But I think this is, I'm assuming, one of your daily drivers. So, yeah.

Speaker 4:

Well, you know, I'm now back in my district. The only way I'd lose my seat is if I started to show up in a suit.

Speaker 1:

But in DC, he's

Speaker 4:

with the uniform. Yeah. That's the uniform in DC.

Speaker 2:

But I was hoping you could sort of take us through a little bit of the prehistory, since it's your first time on the show. Just explain how you wound up in this position, a little bit of your backstory. And then, obviously, there's so many hot topics that I wanna talk about in artificial intelligence and tech broadly, and I want your opinion on everything that's going on. But I'd love to kick it off with a little bit of, like, how you wound up in Congress.

Speaker 4:

Sure. Well, I am the son of immigrants. My parents came from India in the late nineteen sixties. My grandfather spent four years in jail alongside Gandhi as part of the Indian independence movement, and that really inspired my love of public service. When I came out to Silicon Valley, I had a professor, Larry Lessig, who said, if you care about policy, go out to Silicon Valley.

Speaker 4:

That's where the interesting things are happening. That's where the big things are happening. So I went out, and I ran when I was 27 against the Iraq war, and I got killed. I got crushed, seventy-one to nineteen. But I came to the attention of folks as someone willing to stand up against the war.

Speaker 4:

And then I worked as a tech lawyer. I supported President Obama. I got to go work for President Obama. And then I wrote a book in 2012 about what we needed to do to build new manufacturing across this country, what we needed to do to really have the modern economy in different parts of the country.

Speaker 1:

You're one of the first, the beginning of the American dynamism movement, would you say?

Speaker 4:

Yeah. I was gonna say President Trump stole all my ideas in terms of manufacturing, but, you know, he'd

Speaker 1:

Well, that's good. That's good, though, then. Right? I just

Speaker 4:

No. No. Look. I support the American dynamism movement. I'm a fan of sort of what Marc Andreessen wrote in a Wall Street Journal op-ed about, like, how do we not make masks in America?

Speaker 4:

How do we not make basic things in America? When my parents came to this country in the sixties, we were the place to be. We were humming. We were brimming with confidence. Kennedy said go to the moon.

Speaker 4:

And, you know, my first book was about why manufacturing still matters. I think it was a colossal mistake to let China eat our lunch on so many key industries, especially now with rare earth metals and magnets. I mean, we should have a Manhattan Project to do that in the United States, or with New Zealand, Australia, Chile. But, you know, so after my time with the Obama administration, after I wrote this book, I said, technology is gonna shape so much of the future of this country. I have a vision of how we can make sure that it helps everyone in my district and around the country, and maybe I have something to offer to Congress.

Speaker 4:

So I ran against an incumbent again, lost again. California is a machine dominated state. It's very hard to break in, and I persisted and won on my third try.

Speaker 1:

There you go. Third time's a charm.

Speaker 2:

And so for this year, in 2025, how would you frame, you know, your top priorities? There's this weird disconnect that we've been tracking on, like, how relevant is AI? It's so dominant in tech, and yet, if you talk to somebody at Apple, they'll be like, we didn't wanna focus on AI this year at all. We wanted to focus on battery life because that's what helped us sell phones, and AI was actually not a driver of iPhone sales, for example. It's a deeply pervasive discussion point, and it's widely used, but also widely hated.

Speaker 2:

It's such a unique technology. But, just in terms of political priorities, what's been on the top of the stack for you this year?

Speaker 4:

I wanna answer your question on AI. But, obviously, in the last few months, what's been highest priority is getting these Epstein files released. I mean, Thomas Massie and I passed the Epstein Files Transparency Act. It was my bill, passed 427 to 1, 100 to zero in the Senate, and Donald Trump signed it.

Speaker 4:

Most urgently, it's about justice for these underage girls, over a thousand victims, who were raped at Epstein's island. But it's also about this kind of idea of elite impunity, that these rich and powerful people, I call them the Epstein class, don't play by the rules which you and I have to play by, and people are tired of it. And it also is a story of how in the world you get some things done in Washington.

Speaker 4:

How did a Bay Area progressive congressperson end up getting Donald Trump to sign his bill and getting 427 people in the House to vote for it and 100 senators? So that has been the immediate priority. But what I say to folks

Speaker 2:

So are you optimistic that the American people will ever get, like, a truly cohesive narrative on the Epstein story, or will it be our generation's JFK assassination?

Speaker 4:

I'm confident we're gonna get far more than we've had so far. The release is now mandated by law, December 19 or December 20. I think more names are gonna fall. You've already had some high-profile names fall because of their affiliation with Epstein, covering up for him or

Speaker 2:

Sure.

Speaker 4:

Or being inappropriate. There are gonna be other names that come out. Now do I think that it's gonna satisfy everyone? No. There's always gonna be some sense that we didn't get full justice.

Speaker 4:

Yeah. But it's gonna be much better for these women who were denied justice for decades, which was not partisan. I mean, they were shafted by a justice system that didn't work. And there are a lot of rich and powerful people who got away with it.

Speaker 4:

But look, what I tell people is that AI is gonna matter even more than anything. And to your point about Apple, it's not AI literally, just AI as Grok or ChatGPT or a technology that detects patterns and can predict the future based on patterns. It's more that AI has become a symbol for a technology revolution that people know is changing everything about their way of life and the economy, and where they feel like they don't have control, that they don't have a full say in what that's gonna mean. They don't have a full say in what that's gonna mean for their kids in terms of having good-paying jobs.

Speaker 2:

Mhmm.

Speaker 4:

And they're unsure if their kids are gonna have as good a life as their parents had. They don't know what that's gonna mean culturally for them as citizens. Are they gonna have the same sense, or are they just gonna be manipulated by algorithms? And they don't know what that means culturally as their kids are on phones in school and and becoming sort of creatures with machines. And so this whole concept of how technology is going to be something that empowers people and that people feel comfortable about as opposed to fearful of is the challenge in my view of our time.

Speaker 4:

And, you know, I've gotten attacked by some people in Silicon Valley saying, oh, he's kind of a Luddite. And I was like, no, I'm not a Luddite. Of course, I believe AI can do a lot of great things in medicine, in coming up with new treatments for disease and lowering costs.

Speaker 4:

But I don't think we can be oblivious to people's concerns about keeping jobs and keeping social cohesion and making sure their kids are gonna have a good economic future. And so I've tried to be thoughtful about how we adopt AI, how we adopt technology in a way that keeps the American dream alive and benefits folks.

Speaker 2:

And, I mean, that's such a wide remit, how we adopt AI technology, because you can see it implemented from a chatbot that, you know, some random person uses, or kids using AI, all the way down to deep in the bowels of some enterprise software product that no human was ever interacting with to begin with, and then it's just streamlined a little bit with some AI dropped in the middle of some big system. How are you thinking about creating some sort of taxonomy around AI? Do you see a divide between generative AI and more traditional machine learning workloads? Do you see a divide between consumer and B2B applications, self-driving versus what happens in a chatbot?

Speaker 2:

Like, how are you thinking about actually breaking apart that problem? Because there's so much there when we say AI.

Speaker 4:

I would say that the key distinction is is AI going to enhance human capability or eliminate human beings? Mhmm. That is the the distinction. And that we need to figure out as a society how we get more AI that is enhancing human beings as opposed to just eliminating them. Mhmm.

Speaker 4:

Let me share two thoughts on this, both from people who influenced me. Steve Jobs described a computer as a bicycle for the mind. He didn't say computers would eliminate the mind. He just said it would make the mind go faster and better.

Speaker 4:

Yeah. Right? And my view is, how does AI do that? And then Daron

Speaker 4:

Acemoglu, who won the Nobel Prize, at MIT

Speaker 2:

The college.

Speaker 4:

has this idea of total factor productivity. Let me

Speaker 4:

try to explain it simply. If you just had AI replacing human beings, and those human beings then becoming not productive, not only would you have frustration in our society. Right? I mean, who wants to just get a check without contributing?

Speaker 4:

People have pride. But you also wouldn't actually maximize total production, because you have all these people who could be doing things who are not productive, who are not able to earn a living and spend money. And so what he says is that there are some savings for consumers and for shareholders if a technology just eliminates labor. But the best technologies, like electricity, like automobiles, don't just eliminate people. What they actually do is increase workers' ability to produce.

Speaker 4:

There are technologies that increase human capability, and so you have the benefit of the synthesis of the technology and the worker. And that is actually what transforms lives, and he calls it total factor productivity. And so my ideas around this have been, how do we do that? How do we make sure we just don't eliminate 4,000,000 commercial drivers? How do we make sure that the adoption of things is actually making us more productive, and that it's being done with respect to workers and capability?
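For anyone who wants the distinction in symbols, a standard textbook Cobb-Douglas sketch (the usual growth-accounting notation, an illustration rather than Acemoglu's exact formulation) separates labor-augmenting technology from labor-replacing automation:

```latex
% Output Y from capital K, labor L, and total factor productivity A
% (standard Cobb-Douglas form; illustrative notation only).
%   Labor-augmenting technology scales effective labor, L -> \lambda L with \lambda > 1,
%   so output rises while workers stay employed and productive.
%   Purely labor-replacing automation shrinks L instead, so any output gain
%   has to come from A or K alone.
\[
  Y = A \, K^{\alpha} \, (\lambda L)^{1-\alpha}, \qquad 0 < \alpha < 1, \quad \lambda \ge 1 .
\]
```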

Speaker 1:

Okay. But let's make it more specific, because I agree at a high level with a lot of that. Let's talk about, like, a specific role or job, like truck driving. You've generally come out against, or have concerns around, AI-based job displacement with long-haul trucking and truck drivers.

Speaker 1:

On the other side of that, if I'm running a trucking company and I want to deliver the best possible service for my customers, it's possible that AI would be able to support that. What kind of policy do you think is right? You want some guardrails around the industry, around how AI should be used in trucking? I'd love to understand more.

Speaker 4:

Yeah. I would say have a human in the loop. And so what does that mean? When I'm on a plane, a lot of it is automated, but we still have a pilot there. And I'm glad we have a pilot.

Speaker 4:

I wouldn't want to just fly in an automated plane. And so does this mean that a truck driver's job may become more appealing? Because right now, as you know, we actually have a shortage of truck drivers, more demand than supply. But if they have an assist from a technology that maybe allows them to rest more, that's less taxing. They're there for the edge cases if something is possibly going wrong.

Speaker 4:

They're there to deal with maintenance. They're there to make sure that you have loading and unloading happening. We can reimagine what the role of a truck driver is going to be. And we can certainly have a temporary view for the next five years that you should have the driver there now. That doesn't mean that at some point there may not be jobs or certain parts of things that don't require a driver.

Speaker 4:

But it doesn't seem unreasonable for five years to say, we want a driver in the loop, and let's rethink the types of jobs those will be. And if we need the government to be helping invest in the development of this technology, fine. But do it in a way that's gonna be complementary with drivers.

Speaker 2:

Yeah. This is kind of happening already with Waymo, where there is a human in the loop, but, you know, the ratio of teleoperators to cars on the road is potentially higher than one to one right now, according to some reports. But over time, I think the Waymo team expects there to be fewer and fewer humans in the loop. The question is how fast does that happen? And you're sort of proposing, maybe try and make that as gradual a process as possible.

Speaker 2:

Because, I mean, you go back to, like, the elevator operator used to be a human. Now we use buttons, and no one's really missing those jobs. They phased out over time. I think the main thing is everyone is concerned about rapid job displacement. If I told you your grandson can't be a truck driver, you'd say, oh, you know, he'll find a different job. But if it's, like, every truck driver out of the job next year, that's obviously much more disruptive to the US economy.

Speaker 2:

Is that how you think about it in terms of just timelines more than strict rules forever?

Speaker 4:

I think that's thoughtful. There's a famous economist who once said, in a time of transition, jobs for the father, not for the son. And by that, he meant, look, we've gotta make sure that people in their thirties, forties, fifties, sixties have jobs. That doesn't mean that that's exactly what their kids are gonna do or their grandkids are gonna do.

Speaker 4:

For a lot of this human-in-the-loop legislation, we're talking about five years. We're not talking about fifteen years. And we're talking about roles evolving. Right? I mean, it may be that as these roles evolve, there's less of a need to hire folks down the line.

Speaker 4:

And you have a natural transition of folks. But you're taking people who are workers and making sure that they're productive and they have a good life. Let me explain why I think this matters. Phone operators, which people often give as an example. And Alexis, who's at Chicago, had a great point about this.

Speaker 4:

That was 2% of the workforce. Commercial drivers are 10% of

Speaker 2:

the workforce.

Speaker 1:

Wow.

Speaker 4:

You already have an anger in the country from so many people displaced by globalization, displaced by the concentration of wealth in some areas. And you really wanna throw into this mix rapid, mass job displacement? And then what? Just compensate them and have people stay at home and just get a check? Like, is that the society that we think is gonna be productive? Or do we rather figure out how they have some role and some say, and the transition be managed in a way that considers their interests as well?

Speaker 4:

And I get that it's a good-natured debate, and people will say, okay, Khanna, you're adding some cost to the issue. And if all you cared about was shareholder profits and minimizing consumer costs as your only holy grail, and you didn't care about jobs and you didn't care about communities, then people have a legitimate critique of me. But I would argue that that was the mentality during globalization, and it's what's led to so much of the polarization, not just of our politics in the United States, but in the Western world; it led to things like Brexit, led to anti-immigrant sentiment. And maybe we should consider jobs and communities not as dispositive, but as a factor, just like we consider consumer costs and shareholder profits.

Speaker 2:

Yeah. What's your look back on how the Uber story played out? Because that was a weird moment where there was a big pushback from the taxicab drivers. Those jobs still exist, but they're just way less profitable because the medallion system has kind of been undone. But if you wanna make money driving someone, you can; you're just making less money.

Speaker 2:

Like, do you think that we should have handled that differently if we could run back the clock, or do you think it happened slowly enough that it was actually okay and delivered enough value to the consumer? Because Uber is one of those weird examples where the amount of, you know, taxicab-like activity, ride-sharing activity, 10x'd, and more people take these rides than ever did in the taxi era. And yet it did have a remarkable impact on the market structure of that industry.

Speaker 4:

Well, I'd be hypocritical to say I'm against Uber. I take Ubers all the time.

Speaker 2:

Right? I'm the same way.

Speaker 4:

I'd get exposed the next time I get into an Uber. Yeah. But I'll say this. We should have done more for the medallion owners. I cried, actually, in New York.

Speaker 4:

This is before Zohran Mamdani became Zohran Mamdani. When he was an assemblymember, he was really focused on a lot of these taxi drivers who had lost their medallion value and were underwater. And what could we do to compensate them? And I'd actually reached out to Jamie Dimon, who, to his credit, tried to do something through JPMorgan, and it ended up not working out. But, you know, we should have, as a government, done more to help those folks who had medallions, who lost all their value.

Speaker 4:

And that's an example of something where we could have been more proactive. And then there's a huge debate about Uber drivers and whether they're getting enough value and have enough say over their lives. I would argue that we need that. And I'd argue we need national health insurance. This is the biggest area where, if you're not gonna be employed as a traditional employee, it would really help if people didn't have to buy health care on an exchange that has soaring premiums.

Speaker 4:

So there are better things we need to be doing to help that Uber driver. But am I glad that there is a technology like Uber? Yes, I am. I think it has created jobs, and it has made life easier for many people.

Speaker 1:

You brought up Mamdani. He made a post, I think it was yesterday or the day before, that a bunch of Silicon Valley types were agreeing with, which you don't see very often. It was basically around SMB deregulation, making it easier to get a small business off the ground. Should that be a more important conversation in every state and region?

Speaker 1:

I feel like, growing up in California, I've seen so many businesses, like, try to get off the ground, and you end up seeing a finished restaurant that just has its door closed because they're waiting on some permit or something like that. And it's obviously hard enough to start a restaurant. And it seems like, oftentimes, local governments can get in the way. Do you think that needs to be a bigger part of the conversation, given that starting a business is a great way to insulate yourself from at least some job displacement risk with AI?

Speaker 4:

Yes, it does. And look, Zohran became famous in part with his halal video, where he was basically saying it takes too much regulation to have a halal stall and we need to streamline that. And so I believe, yes, we need to make it easier for people to start a small business, to be their own business owner. That's not just making the permitting easier. It's also making sure people have access to capital.

Speaker 4:

A lot of times, that's a barrier. But I'll tell you one thing that I think is often a blind spot for folks in my district. I love small businesses. I love entrepreneurs. I think that there's a lot of people who wanna build wealth.

Speaker 2:

Completely agree. Sorry. Completely agree. We love small businesses here too.

Speaker 4:

But here's the but. Okay. Most Americans are not gonna go just start a small business. Like, this idea that every person in Bucks County, Pennsylvania, where I grew up, or in western Pennsylvania should start a startup or build a business. Like, my dad never did that.

Speaker 4:

He had a middle class life. He worked for the same company for thirty years. And there are a lot of people who just want a decent job. And they just want a job that can support a family. And there's nothing wrong with that if they wanna be in manufacturing or they wanna be a nurse or they wanna be a childcare provider.

Speaker 4:

And so sometimes our rhetoric becomes, like, why can't everyone become an entrepreneur? It's like, why can't everyone become a politician? Now maybe an entrepreneur is a better life. But, like, a lot of people just don't wanna do that.

Speaker 4:

And they still wanna have the American dream.

Speaker 2:

Yeah.

Speaker 4:

And so all I'm saying is, let's think about how to help small business owners, but let's also think about the 4,000,000 people who are drivers and, like, what their life is gonna look like. And it's important to have that balance.

Speaker 1:

Mhmm. Yep.

Speaker 2:

Give me some lessons from the recent trip to China. I'm fascinated by how they're dealing with AI. Are they doing anything right? Are they moving even faster? Do they have a solution to the job displacement problems?

Speaker 2:

Is there anything good or maybe risky that you found going out there? What were your takeaways?

Speaker 4:

Three takeaways. One, one-third of the AI talent is in China. What does that mean? That means it would be totally counterproductive to ban Chinese students from coming to the United States, or Chinese entrepreneurs from coming to the United States.

Speaker 4:

Yep. We wanna have that talent come to the United States because we still have a better ecosystem for capital and for investment. Second, we need to make sure that we're developing the talent in AI here in the United States, investing in STEM, and making sure that we're encouraging local development of that. And third, and this is the most important: guess how high youth unemployment is in China.

Speaker 2:

Technically, 20%? It's really high. You

Speaker 4:

guys are you guys are too smart

Speaker 2:

on this stuff.

Speaker 6:

It's really high.

Speaker 4:

Twenty percent. 20%.

Speaker 2:

That's crazy. How is that possible? I feel like, can't they just go build more bridges and create more jobs? I thought it was a command-and-control economy. I don't know.

Speaker 2:

It's always

Speaker 1:

They have enough empty high-rises, I think.

Speaker 2:

Maybe. I don't know. Yeah. What what was what was your takeaway from it?

Speaker 4:

As I describe it to people, you can't build dating apps in China. Right? Like, so, you know, the people, at

Speaker 2:

least, can't. Why? Is it banned? Right.

Speaker 4:

Yeah. I mean, it's such a directed economy. They want everyone to, like, make stuff, manufacture stuff.

Speaker 2:

Okay.

Speaker 4:

Not do things that they would consider frivolous. No sports apps, music apps, all the cultural stuff that we do that improves consumer life or thinks about consumer needs.

Speaker 4:

And, you know, so you're someone who gets this fancy education in college, and then they're like, okay, go work at a factory. And just like we've undervalued people who wanna work at factories in America, where we should be having more trade schools and more respect for factory workers, they've undervalued people who don't wanna work at a factory. And the reality is, like, you should have both choices.

Speaker 4:

So these people, they're there, and they don't wanna go necessarily to build a bridge or necessarily to build the next robotics factory. And it was hilarious, because I would talk to the premier, Li Qiang, or others, and they'd say, well, it's a voluntary unemployment problem. These are just folks who should be going and doing these jobs. But what if in America we said, okay.

Speaker 4:

You know, as someone said to one of the newsroom folks when they were being laid off: go become an electrician. Well, that's as offensive as telling a steelworker to become a coder. Like, you know, people do things, and they wanna do what they aspire to do. And China is a command-directed economy that has overvalued manufacturing and doesn't have that diversity. We do.

Speaker 4:

Our problem has been the opposite: we undervalued making things. We undervalued the trades. And so what we need is sort of a balance for America, to have manufacturing, but also this incredible ecosystem of the service economy, which can employ people where China can't. And that's ultimately why I bet on America. One more point: I'm also sick of this argument that we should just go be like China.

Speaker 4:

That they're gonna eat our lunch. Really? You know, the Chinese model is a crony communism. Like, okay, Xi Jinping gets rich, and a bunch of people who are running these companies get rich.

Speaker 4:

And the rest? Then you have 20% unemployment, and you have consumer welfare declining. And look at how most people live. They don't live in nice houses with, you know, two cars. So, like, I don't want China as a model.

Speaker 4:

And I'm not going to compromise every American having economic security just because we're chasing China. China is not the model. America needs to be more like America, how we built America in the forties, fifties, and sixties.

Speaker 1:

I completely agree. Quickly, I want your takes on a couple things. Housing affordability: I think a lot of people agree right now that housing affordability is sort of, like, upstream of a lot of the problems that we're facing as a country.

Speaker 1:

What's your current stance on how we can improve affordability at the local level and at the federal level?

Speaker 4:

I'm a YIMBY. I'm an abundance guy on housing. We gotta build far more housing in California.

Speaker 2:

Mhmm.

Speaker 4:

You know, I don't endorse people who are sort of zero-housing people in my district. We gotta realize that aesthetics matter, but economic equality of opportunity matters more. And you can't have $5,000,000,000,000 companies in my district and expect to live like it's still the Valley of Heart's Delight. Like, if you got that many companies, you gotta have housing near transit and dense housing to make sure that people can live there, and that it's not just a place where wealthy people can live while the working and middle class is getting shafted.

Speaker 4:

We also need to stop private equity from buying up single-family homes. People say, oh, this is a red herring. No, it's not a red herring. In some places, they have bought up too many single-family homes.

Speaker 4:

So pro-building, pro-streamlining, making it easier to build, having zoning reform, and stopping private equity from buying up these single-family homes.

Speaker 1:

Will we make progress, at least in California, on those issues in the next ten years?

Speaker 4:

Yes. Because I think people realize we didn't make enough progress over the last ten years, that this is a failure of California policy. And whoever's elected the next governor, I can't imagine it won't be on an abundance agenda when it comes to housing. And it's not gonna be, okay, let me do it in the last year of an eight-year term, try to do something then.

Speaker 4:

It's gonna be day one: how do we start to do things that are gonna build more housing? And so I think it's been a wake-up call for California.

Speaker 1:

That makes sense. Yeah. Any quick comments on the current state versus federal AI regulation debate? We didn't get to touch on that earlier. And you had some comments recently on SB 1047, the bill in California.

Speaker 1:

But what's your updated view on on where regulation should be happening?

Speaker 4:

Well, look. Ultimately, we need a federal regulatory framework. But the way you get good federal legislation is having legislation in the states. That's federalism. And I don't understand how you would have a moratorium on state legislation when federal legislation right now looks bleak.

Speaker 4:

The prospects of it are bleak. It is such an unpopular position, even among Republicans. So my view is, build a consensus that you can have thoughtful regulations at the federal level and work on that. Don't stop states from regulating. And this idea that, okay,

Speaker 4:

you're gonna stop all the growth. I mean, my district is $18,000,000,000,000 of value. We've got five companies over a trillion dollars. East of the Mississippi, there's not a single trillion-dollar company.

Speaker 2:

You know? California's undefeated. It's so good.

Speaker 4:

You talk to folks in, like, Bucks County, Pennsylvania where I grew up, and they're like, come on.

Speaker 2:

Yeah.

Speaker 4:

Come on. They're producing more wealth than ever before. Like, what we wanna know is, how are our kids gonna fit into this? And I just think, I wish more tech leaders got this. You know, sometimes Jensen Huang gets it; he's talking about this.

Speaker 4:

Like, yeah. I mean, how do we create economic development opportunities in places that have been left out? How do we make sure that everyone comes along on the AI revolution? I just think it's in tech companies' interest to embrace this, in a similar way as the economic royalists embraced the New Deal eventually. I mean, you can't have a capitalism that is only working for some, with large chunks of the country suspicious and left out.

Speaker 2:

Yeah. I just worry that we don't know the shape of what we're regulating yet. Like, the unintended consequences of social media took five, ten years to develop. I mean, two years ago, we were reflecting on this. People were worried about AI killing everyone and creating the Terminator.

Speaker 2:

And then what wound up happening, well, it wasn't really political misinformation. It was much more people chatting with it for a really long time and going crazy, you know, maybe overbuilding, maybe risk in the debt markets. Like, the risks were very hard to predict. There were risks, but it wasn't exactly what we thought. And so I'm a little bit hesitant, like, you know, maybe there should be regulations, but when will we be confident that we know how to regulate it?

Speaker 2:

Is now the right time? Do we have clarity? Because a lot of this stuff stands on, you know, what we already have: fair use. We already have copyright protections, and so a lot of it can be enforced through the courts, I would imagine. But, of course, if new problems come up, they need to be resolved, and that's the way we resolve them in a democratic society.

Speaker 4:

I think that's fair. The places I focus on are jobs and American citizenship.

Speaker 2:

I agree with you on the jobs part, but it just feels like, on jobs, we haven't seen a collapse. And even people building the AI technology are like, this is gonna put everyone out of a job, and that's good. And then the people that hate the technology are saying, it's gonna put everyone out of a job, and that's bad. And it's kind of crazy, because they all agree that the jobs are going away, and yet what do you get when you actually look at the jobs figures? It seems like we still have jobs.

Speaker 2:

Like, it seems like we actually can't delegate to the AI, and I can't just say, hey, you know, trucker, I want the AI to handle this one. It's just the technology is not there yet. And will it be a year? Will it be five years, ten years, a hundred years?

Speaker 2:

There's a whole bunch of incentives to say it's coming right now, and it's hard to get a read on. And predicting when things will happen, you know, fortunes are won and lost on that alone.

Speaker 4:

Totally agree with you. John Maynard Keynes said we'd all be working fifteen-hour work weeks, and he was

Speaker 2:

Off. Exactly.

Speaker 4:

He knew more about economics than any of us. So, you know, it's hard to predict. But I think what we can do is, when you look at Daron Acemoglu, he says, why don't we have a neutral tax code, so we're not incentivizing, through depreciation, investment in technology and automation over hiring people? I mean, there are things we can do so that we prioritize having people in the loop.

Speaker 4:

And then there are things we can do in our social media environment that protect us as citizens and kids. Two things: like, let's eliminate bots. Right? Elon Musk talked about doing this on X, and there's still a ton of bots. But a lot of the bots that use AI are, in my view, hurting our democracy.

Speaker 4:

And then let's protect kids from some of the harms on social media.

Speaker 2:

Yeah. Yeah.

Speaker 4:

You know? So, I guess, you know, I love sparring with folks, and I appreciate sort of the criticism I've gotten from some of the tech folks for the tweets on AI and drivers. But I guess what I would hope for tech people listening to this is, don't resist every form of regulation and sort of dismiss people's anxieties. Instead, be part of how we get smart regulations and how we answer people's concerns.

Speaker 4:

Because if 70% of the American people believe the American dream is dead and have a concern about AI, like, the answer to that, for anyone who's been in a relationship, is not to dismiss it and say they're dumb. It's to say, okay, how do I address that anxiety so that we can move forward? And I guess my hope would be that there'll be more tech leaders like that. Victor Peng is one, a former leader at AMD.

Speaker 4:

I mean, there's some people who are thinking in that way, and I think it's in Silicon Valley's interest to have that kind of view.

Speaker 2:

No. That makes sense. I think

Speaker 1:

You really freaked people out with the, there should be a tax on mass job displacement.

Speaker 2:

Well, there is a tax on the profits. Right? Like, we tax profits.

Speaker 1:

So Yeah.

Speaker 2:

I mean, there's a question of, like, maybe we adjust that, but these are all dials that already exist. We're just discussing how we turn them, I would imagine. Yeah.

Speaker 1:

I don't know. Well, thank you.

Speaker 4:

And a lot of times, you know, this is one thing that's different for me than other politicians: I toss ideas out there. If I think there's good pushback, then I adjust my views. And I'm a politician who's like this: I just talk like I'd talk to someone over a drink at a bar.

Speaker 4:

You know, everyone else is, like, so scripted. Oh, we can't put out an idea because, you know, maybe it'll come back two years later on Face the Nation. I just don't think that's what our politics should be. It's like, put your ideas out there. Like, what human being doesn't have some ideas that are dumb?

Speaker 4:

Like, maybe Einstein didn't, or something. Most of us, yeah, we put out good ideas, we put out bad ideas.

Speaker 1:

I love it. I love that approach, and it certainly sparks a conversation.

Speaker 2:

And it certainly fits with what we've done here today. This was really fun. We really appreciate you coming on the show and just, like, going all over the place and talking through all this stuff. It's fascinating. I'm learning a ton, and we really appreciate you taking the time to come talk to

Speaker 1:

us. Thank you so much for coming on.

Speaker 4:

Well, you guys are doing great. Seriously, you're elevating the conversation in

Speaker 2:

Yeah.

Speaker 4:

Silicon Valley, and it's an honor to be on. And I look forward to meeting you.

Speaker 2:

We'd love to have you back on the show and go way deeper on all of these. And I'm sure by the next time you're on, all the data points will be different, and we'll be staring at new problems, and they will require new solutions and new discussions. And so thank you so much for taking

Speaker 1:

the time to come talk to us. I appreciate your approach.

Speaker 4:

Thank you.

Speaker 2:

Have a great day. We'll talk to you soon. Bye. Before we bring in our next guest, let me tell you about numeral.com. Let Numeral worry about sales tax and VAT: compliance handled so you can focus on growth.

Speaker 2:

Our next guest is

Speaker 5:

Where do I have

Speaker 1:

Ah, Jonathan.

Speaker 2:

Jonathan Swerdlin, from Function. Hey. Sorry to keep you waiting. We were in, you know, a political quagmire.

Speaker 2:

We were in the swamp.

Speaker 1:

We're in the swamp.

Speaker 2:

We went to

Speaker 6:

the swamp.

Speaker 2:

We don't normally go to the swamp. Normally, we talk about Series Bs. We talk about large Series Bs. He did get us going, though. He was telling us how much value has been created in his district.

Speaker 2:

It's in the trillions. It's in the tens of trillions. And we were just, you know, foaming at the mouth about the market caps, and then we said a bunch of other stuff. But thank you so much for coming on the show. For those who aren't familiar, introduce yourself.

Speaker 2:

Introduce the business. Tell us what's going on.

Speaker 9:

Absolutely. Great to be here. Now you're climbing out of the swamp. We're gonna talk about something a little less swampy.

Speaker 6:

Thank you.

Speaker 9:

We talk about health. It's great to see you guys.

Speaker 1:

Well, I mean, health is, like, honestly more political than politics.

Speaker 2:

It can't be. It can't be.

Speaker 1:

But but this conversation won't be.

Speaker 9:

Yeah. It's funny. We actually say that biology is bipartisan, though. And, like, this idea that everybody can agree that nobody likes to suffer. Yeah.

Speaker 9:

You know? And everybody can agree that preventable death shouldn't happen. So it comes down to, of course, the nuance of how you get there, which can become political, because who's gonna pay for it. Right?

Speaker 2:

Yeah. Yeah. That. But also just

Speaker 1:

Well, that, or, well, this diet

Speaker 2:

This diet, it's right wing. That diet... Oh, working out? That's a right-wing thing.

Speaker 2:

Or, like, no. This is left wing. And, you know, different ingredients became politically charged over the last

Speaker 6:

few years.

Speaker 1:

My powder is better than your powder.

Speaker 2:

For sure. And sometimes there's political influencers on the right and the left who actually have the same supplier, and then they put different branding on top of it, and they sell that. That's a fascinating rabbit hole to go down. But, anyway, we're not here to sell supplements. Let's talk about the business.

Speaker 2:

What what Exactly.

Speaker 9:

You know, it's funny. It's like, I'm not left wing. I'm not right wing. I'm the whole bird. Otherwise, you fly around in circles.

Speaker 9:

It's kind of the idea with

Speaker 6:

all this. I like it.

Speaker 2:

I like it. The whole bird. That's great.

Speaker 1:

The whole bird. The whole turkey.

Speaker 2:

So, yeah, take us through the shape of the business these days. What's the value prop to consumers? What's the progress been? How big is the company? Kind of set the table for us.

Speaker 9:

Okay. So the simple value prop is: get on top of your health. It's time you own your health. So what does that start with? It starts with a new platform.

Speaker 9:

It's $1 per day to join, and the platform includes twice-a-year comprehensive lab testing at over 2,200 locations, any Quest Diagnostics around the country. You go and you test everything: heart, hormones, liver, kidney, thyroid, cancer signals, you name it. Up and down, all of that data goes into a platform, into an app that explains to you what's actually happening inside your body. And these are the things that you would not get in a physical. This is, like, a true, true deep look.

Speaker 9:

And what Function has created is this entirely new standard for your health: that every year for the rest of your life, you know that you're well on top of whatever's happening inside your body. You're seeing how it's changing over time. You're making sure that you're getting well ahead of disease, and you're doing everything you can to feel your best. So that's the value proposition, and that's what Function delivers right now. We started with lab testing.

Speaker 2:

Yeah.

Speaker 9:

Because that's, like, the most impactful data. 70% of medical decisions are based on lab testing. And recently, we acquired a company. You might have heard about this.

Speaker 9:

It's called Ezra. And Ezra is an imaging business. And what has been amazing for us, we've gotten FDA-cleared AI that has reduced the time it takes for somebody to get an MRI. Okay?

Speaker 1:

Mhmm.

Speaker 9:

So why does that matter? One, nobody wants to be sitting in an MRI machine, typically.

Speaker 2:

Yeah.

Speaker 9:

Two, it also massively reduces the cost, and it picks up the efficiency. And so what we've actually done is we've introduced lab testing, became one of the largest, most powerful lab tests in the country, and then we went into imaging. On the imaging side, we're bringing down the cost. And what you're seeing actually emerge is this new standard for health.

Speaker 9:

We took the most impactful parts of the health system for capturing your data, and we packaged it up into something that's really simple to understand and really affordable for many, many people.

Speaker 1:

Talk about how MRIs were used historically. Are these things that get done when you have acute pain or you have an issue, and then you're doing it? Like...

Speaker 2:

Tear your ACL.

Speaker 1:

Yeah. This feels like flipping it and using it as, like, preventative care. Is that the right read?

Speaker 9:

That's the right read. And not just preventive. I would just say responsible, because this idea of preventive is great, but it's also about what might be happening right now that you don't even know about. Right? And so the word preventive and the word early are a little tricky for me.

Speaker 9:

Because the word early, it's like, why is it early detection? Why don't we just call it detection? Can't we just get rid of the word early? What MRI does, traditionally, is it allows somebody to look inside the body. But to do that, it's been really, really expensive.

Speaker 9:

You get an MRI when you, you know, tear your ACL, something like that. Like, you basically have to spend thousands of dollars to look inside your body. And that's the way insurance is set up, and that's the way MRI is set up. But what MRI can do is look at every single organ and look for tumors that are point two centimeters, two millimeters. It can look for stroke risk, aneurysm risk, endometriosis, hernias, tears, everything.

Speaker 9:

So if you actually wanna understand what's happening inside the body, an MRI is an incredible way to do it. But it's been so arcane and so difficult. It's never actually been architected and set up to look at the body and get well ahead of things. It's usually been, oh, you go in a hospital, you broke something, you have an issue, you go look at this one particular area. In Function's case, you can actually look at most of the body through an MRI, and you can detect cancers early, detect aneurysm risk, stroke risk, etcetera.

Speaker 9:

Mhmm. And you can do it for $499, and you can do it across almost 200 locations by the end of this year. There's never been anything like this. This is the first time in history this has been possible: geographically, from a cost perspective,

Speaker 9:

technologically. And culturally, it's changing. People are realizing this. What I was alluding to before, and it's a really important point: a new standard of health is emerging, and that standard includes twice-a-year comprehensive lab testing. It takes ten, fifteen minutes each time.

Speaker 9:

You go in. You get whole-body testing. You find out what's actually happening inside. And the second thing is now a quick MRI every year. If you do it, what you're doing is you're actually creating a baseline for your whole health, and you're seeing how things are changing over time.

Speaker 9:

You're catching velocity. You're seeing bad trend lines, and you're also just flagging critical issues as well as finding out what you can optimize and what can be better in your life. And what's crazy to me is the current standard, like, status quo. We've all done this. We've all gone into the doctor's office.

Speaker 9:

They test you for, like, 20 things. You get a phone call in three weeks. You're good to go, John, Jordi, see you in six months, a year, two years, whatever, and you move on with your day. And that's just this episodic, once-in-a-while, very narrow perspective on your health.

Speaker 9:

That's gone. But they miss things: they're not looking at cancer, and they're really not even looking at heart disease, the two leading causes of death, let alone metabolic dysfunction, hormonal issues, thyroid issues, and Function looks at all that. I'll give you a crazy stat. A new study just came out. Forty-five percent of people that were hospitalized for their first heart attack did not have what is considered high-risk cholesterol.

Speaker 9:

That should be terrifying. Why? Because if you go to a doctor's office today, a regular old physician's office, for a checkup, you get your LDL checked. Right? You guys have done that, yeah?

Speaker 2:

Yeah.

Speaker 8:

Okay?

Speaker 9:

That marker was born in the nineteen fifties. It's older than my father.

Speaker 1:

Vintage.

Speaker 5:

It's a vintage marker.

Speaker 4:

It's a vintage

Speaker 2:

Some people would say Lindy. Don't Some people would say that's Lindy. Okay? Let's just steal, man, for a minute. They might say it's Lindy.

Speaker 9:

So look. There is no world where any top cardiologist says, I'm just gonna rely on LDL cholesterol alone. What every top cardiologist will tell you is, let's look at ApoB. Let's look at Lp little a.

Speaker 2:

Yeah.

Speaker 9:

Let's look at lipid particle size. For most people, those words are foreign to them.

Speaker 2:

Yeah.

Speaker 4:

But they

Speaker 9:

should be. I mean, it's this avant-garde stuff. Right? But there are way better ways to look at the heart, and we're relying on something from back in the nineteen fifties as the status quo. And what Function has done is, for hundreds of thousands of people now, we've actually delivered a new standard of health that includes twice-a-year testing of everything: it looks at your heart, your kidneys, your thyroid, your hormones, everything.

Speaker 9:

Women don't have to go to their doctor and ask for a hormone panel and get chased around. Instead, they can actually get a look at what's going on with their hormones. And then on the cancer side, real quick. Cancer side: you're four times more likely to survive cancer if you catch it early. But right now, the status quo is you have

Speaker 4:

to wait till

Speaker 9:

you have symptoms to catch cancer.

Speaker 1:

Yeah. It's crazy.

Speaker 9:

There's no way that's okay. I don't want that for my family. And now there's technology where, with an MRI as well as with a Galleri test, which we test for many, many thousands of people, we can actually get way ahead of these things. So I can talk about that

Speaker 1:

as well. Yeah. Yeah. So I'm sold on the product. I think it's hard not to be.

Speaker 1:

It's the best kind of, like, value offering, I think, in health, like, period. And I was sold obviously when you were raising pre-seed back in the day, has it been two, two and a half years ago or something like that? Feels like forever ago. I'd love to get your view, an update, on, like, the market structure. A lot of companies have seen this; you guys weren't, Function wasn't, the first lab testing and health platform like this to exist.

Speaker 1:

But your guys' execution and the growth, I think you're at least growing faster than a lot of the fastest-growing AI companies that we've seen over the last year. Give us an update on the shape of the market, how you see the market evolving. Because, like I was saying, a lot of people are trying to ride your coattails. But I'm curious for an update there.

Speaker 9:

You know, the market is realizing that the word consumer health has been this, like, dirty word for twenty years or something. And what it is, is it's premised on the most primary thing that we experience as human beings: our biology. It's our life experience. And what's the LTV of your health? Right?

Speaker 9:

You'd be willing to pay anything for health. It's the most valuable thing in the world for you. And so we're finally in a place where we can actually see technology and products broadly applied to health. And so you're looking at a TAM that conservatively is $7,000,000,000,000. And some

Speaker 1:

Give it up for $7,000,000,000,000. John, hit the gong for a $7,000,000,000,000 TAM. Had to. Anyways, continue.

Speaker 4:

Continue. I

Speaker 9:

love it. I love it.

Speaker 5:

No. So, look.

Speaker 9:

People have been spending absurd amounts of money on their health through these massive service platforms like insurance companies and big health systems. And finally, people are saying, you know what? Health happens outside of the doctor's office, and I'm taking it into my own hands. And what we're doing is we're bringing scientific and medical rigor directly into a platform that people themselves can sign up for, they themselves can manage, and so they can make decisions for themselves. And that gets them way ahead of diseases.

Speaker 9:

You know, as I was saying before, this is not a trivial space. I think it is the most anticipated service for AI. The best application of AI in the world is to our health, because that is the major experience. And so we're surprised that the category and all the competitors, that it's not bigger, that more people aren't jumping into this. Like, we know that people are gonna try to ride these coattails, but our head is down.

Speaker 9:

Our focus is on how we can deliver as much value per dollar for each one of our members. We have hundreds of thousands of members, soon millions of members. We have been growing really fast because we're actually delivering something to somebody that has real, real substantial value. And at a time when a lot of technology can do a lot, it's like, what are we really paying for? And it's like, hey.

Speaker 2:

Where can people get started? You mentioned it's a dollar a day.

Speaker 9:

Correct.

Speaker 2:

Can you take me through, like, the customer flow? Is it just a website, and then I go to the lab? Explain how people can get going.

Speaker 9:

Okay. So it used to be $999 when we started. It was manual. It was...

Speaker 4:

Dollars per

Speaker 9:

day. No. The thousand dollars

Speaker 4:

per year.

Speaker 2:

Sign me up.

Speaker 9:

Thousand dollars. No. He said

Speaker 1:

He said per year. Don't don't worry. John's just messing with you.

Speaker 9:

So it started at a thousand bucks per year. Then we worked really hard to bring up the efficiencies in tech, got it down to $499. And a couple weeks ago, we announced it's now $365. It's, like, literally $365 a year. A dollar per day.

Speaker 9:

Because health is an everyday thing, and it's an understandable price.

Speaker 4:

Yep.

Speaker 9:

And when has health care actually been deflationary?

Speaker 2:

Yeah. And so you go to Quest labs probably twice a year.

Speaker 9:

Go to functionhealth.com.

Speaker 2:

Yep.

Speaker 9:

Functionhealth.com.

Speaker 2:

Function health dot com. Simple.

Speaker 9:

You just sign right up. Yeah. Right there in the scheduler, you sign up for your lab appointment.

Speaker 2:

Yep.

Speaker 9:

You show up at the lab. Your blood is drawn, urine collected. You walk out ten, fifteen minutes later. Cool.

Speaker 9:

In twenty four hours, results start pouring in.

Speaker 2:

Yep.

Speaker 9:

Now now your app is live.

Speaker 2:

Yep.

Speaker 9:

And now all the data is coming in. It's making sense of it. And you test every six months. Every six months.

Speaker 2:

Got it.

Speaker 1:

Exactly. I think one of the reasons people underestimated this kind of category as it was emerging is so many people got burned on, like, DNA testing, like, the 23andMe's. You test it once.

Speaker 1:

And then there's, like, zero incentive to retest. Right? Yeah. You did 23andMe.

Speaker 1:

You have the data. Well, it depends.

Speaker 2:

Are you working on your DNA or not? Yeah. Have you been modifying your DNA?

Speaker 1:

If you

Speaker 2:

modify your DNA regularly, you should probably be testing your DNA regularly. Yeah. You never know.

Speaker 1:

You never know.

Speaker 2:

I might have I might have rewritten my entire DNA, All of it. From from start to finish, every base pair is different now. Sign me up again. I'm ready to go.

Speaker 9:

We have to study you if that's the case. We're gonna have to we're gonna have to bring you in.

Speaker 1:

No. John needs to be studied, honestly. Generally. God. And so

Speaker 9:

I and

Speaker 2:

What does 25,000 Diet Cokes do to the human body? We're gonna find out. What is 500 Diet Cokes a year?

Speaker 1:

A dollar a day on Function and at least four Diet Cokes a day for John. We're actually running a split test. We have the exact same lifestyle. We show up at the gym every morning. We work out.

Speaker 1:

We prep the show. We do the show. We hang out with our family. We just we're gonna do that forever. But John drinks Diet Coke, and I drink

Speaker 2:

Mateína yerba mate in a can Yeah. From Andrew Huberman, of course. And we're gonna find out. Yeah. Yeah.

Speaker 2:

We're gonna find out. Well, thank you so much for taking the time to come chat with us. We have a small bit of breaking news I wanna get to before our next guest. So Thank you. We will be seeing you soon.

Speaker 2:

Oh, one last thing. Give us the numbers in the last fundraising round. I wanna ring the gong for real.

Speaker 9:

Yeah. Let's do it. Series B, $298,000,000 raised at a $2,500,000,000 valuation. But look.

Speaker 9:

The the thing to think about here is that's basically a dollar for every American adult.

Speaker 2:

There we go.

Speaker 9:

And so what what that is is that's a that's a vote on your health.

Speaker 5:

That's not just

Speaker 2:

I love it. Appreciate you guys. Thank you so much for taking the time to stop by. We will talk to

Speaker 1:

you soon. Good to have you here, Jonathan.

Speaker 9:

Long live TBPN. We'll talk to

Speaker 2:

you

Speaker 1:

soon. For the next one, for the Series C, come in person. We got a seat here for you. Yeah. We'd be honored to have you in person.

Speaker 2:

Be great.

Speaker 9:

Let's do it, brother.

Speaker 2:

We'll talk to you soon.

Speaker 1:

Great to see you. Goodbye.

Speaker 2:

Let me tell you about Vanta. Automate compliance and security with Vanta's leading AI trust management platform. Also, if you're running a neocloud, you gotta get on Vanta because that's one of the criteria for ClusterMAX. I'm not kidding.

Speaker 2:

Not making this up. SOC 2 compliance is a big factor in actually moving up the tier rankings for ClusterMAX because, of course, if you're training on customer data, you need SOC 2 compliance. You need the whole process. Anyway, the breaking news that I wanted to get to really quickly is that Josh Kushner is partnering with OpenAI. He says, we are excited to announce a strategic partnership between OpenAI and Thrive Holdings.

Speaker 2:

Through our partnership, OpenAI will become an equity holder in Thrive Holdings, and collectively, we will set out to deliver frontier technology to our customers. For decades, technology has transformed the world's largest industries from the outside in. We believe the AI paradigm will be different in that some of the most profound transformations will now occur from the inside out. We view the businesses that we own and operate as the right reward system to build, test, and improve industry-specific products and models. So the race is on.

Speaker 2:

Is it inside-out or outside-in transformation? What's gonna happen? These are the new fast-takeoff, short-timeline, long-timeline debates. Are you an inside-out guy or an outside-in guy? This is gonna be the defining debate over the next couple days.

Speaker 1:

Well said.

Speaker 2:

Get ready to lock in. We'll be covering it here. We'll probably have some people on who are digging into this, investing in this, who have long takes, short takes. Who knows? But I wanna get to the bottom of what this outside-in versus inside-out transformation will look like.

Speaker 2:

We've been digging in a little bit, talking to some folks who are building companies, buying companies.

Speaker 1:

Taylor says, deal guy Yuga.

Speaker 2:

This is the deal guy Yuga. It's happening. It's happening. Well, before we bring in our next guest, let me tell you about Figma. Think bigger, build faster.

Speaker 2:

Figma helps design and development teams build great products together. We have Cristobal Valenzuela from Runway in the Restream waiting room. Let's bring him in. How are you doing? Good to see you again.

Speaker 2:

Thank you so much for taking the time to come talk to us on such a big day. Kick us off with a reintroduction on where the company is today, and then the news. I'd love to know about the news.

Speaker 5:

Yeah. Thank you for having me again. It's been a while. Yeah. Yeah.

Speaker 5:

So big news. We just released our latest frontier model, Runway Gen-4.5. Yeah. It's a model we've been working on for quite some time. It's the best video model right now in the world, which is a pretty remarkable feat. Yes.

Speaker 5:

So I think it's it's it's it's pretty good. It's pretty fun to play with. You know?

Speaker 2:

To be

Speaker 1:

clear, that's that's my audio I'm adding.

Speaker 2:

So it's not not your That's video.

Speaker 5:

That whole fashion. But perfect timing.

Speaker 2:

But let's play some of the video. I wanna see the demo videos that you put out, the examples, and I wanted to ask you a bunch of questions about it, because it's an extraordinary claim. Google is a serious company. They have a very serious asset in YouTube, and I'm fascinated by this. So first, give me the news. The video arena leaderboard, that's the ranking that you're using.

Speaker 2:

How is that scored? How does that actually work?

Speaker 5:

So it's kind of like a way of crowdsourcing performance. You basically ask people on the Internet to vote between two videos, and it's anonymous. So you vote left or right, and as you keep voting, the models accumulate more votes. And once you vote, you can see who you voted for, but beforehand, you don't know. And so over the last couple of months, we've been working on this entire new way of, I would say, training both video models and image models in such a way that, hopefully, it would outcompete the others in the arena.

Speaker 5:

And we got results a couple days ago, and, yes, we managed to basically outcompete all other video models, including both Google and OpenAI, which is a very remarkable feat if you think about the scale of resources. Like, Ilya was saying this is the era of research again, and I agree, but it's also the era of efficiency. Really good, really focused teams with highly efficient mandates can get really far.
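For readers curious how an anonymous left-or-right vote turns into a ranking: crowdsourced arenas of this kind typically aggregate pairwise preferences into an Elo-style rating. The transcript doesn't specify the arena's exact formula, so this is only a minimal sketch of that general approach, with hypothetical model names:

```python
from collections import defaultdict

def update_elo(ratings, winner, loser, k=32):
    """One Elo update from a single anonymous pairwise vote."""
    ra, rb = ratings[winner], ratings[loser]
    expected_winner = 1 / (1 + 10 ** ((rb - ra) / 400))
    ratings[winner] = ra + k * (1 - expected_winner)
    ratings[loser] = rb - k * (1 - expected_winner)

# Hypothetical vote stream: (preferred model, rejected model)
votes = [("gen-4.5", "veo-3"), ("gen-4.5", "sora-2-pro"), ("veo-3", "sora-2-pro")]

ratings = defaultdict(lambda: 1500.0)  # every model starts at the same baseline
for winner, loser in votes:
    update_elo(ratings, winner, loser)

leaderboard = sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
print(leaderboard)
```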

Speaker 8:

Mhmm.

Speaker 5:

And so yeah.

Speaker 2:

Yeah. Tell me about what you optimized for here because, Sora seems it's an incredible model, and it it was for, like, a minute, like, woah, really mind blowing. Then I feel like I kind of developed an immune system for it, and I can clock a Sora video. And it feels like Sora was very much trained on TikTok almost or vertical vertical social media video. And so what have been the breakout Sora videos?

Speaker 2:

It's been a lot of, dashcam footage and, doorbell, nest camera footage and They've started posting videos.

Speaker 1:

Degraded the model dramatically.

Speaker 2:

They have degraded the model a lot. Whereas Veo 3, it felt like it had a little bit of the Hollywood polish, but it was more like Michael Bay when I looked at it. It looked very saturated. It was cool. It looked good.

Speaker 2:

But what you went for, it feels a little bit more, I wanna say, cinematic even though that's kind of an overused term. But talk to me about what your goal was or even if you if you even if you have a goal when you go into a training run like this.

Speaker 5:

It does. So I think there's an explicit goal and an implicit goal. I think, in a way, all models, specifically video models that are more visually perceptible, have some sort of personality behind them. And I think that personality reflects a little bit both the point of view of the company and the way you wanna train the models in the first place. So to your point, if you wanna make consumer slop and quick, shareable stuff, you're gonna train the models from the ground up very differently than for the stuff that we're trying to do, which is a much more professional, high-quality, very controllable set of tools.

Speaker 5:

And so what you're basically outlining is, I would say, the personality of the models, and it somehow also reflects the personality of the companies. Like, if you're trying to sell ads, you're gonna make a very different model than if you're trying to make creative tools. And so I don't think there's one single recipe or one single ingredient. It's more just taste. I think that word gets thrown around a lot in research these days, just taste.

Speaker 2:

Yeah.

Speaker 5:

I think taste is both the research. Like, what what do you wanna work on? Like, having vision, like, having, okay, when I when I pick these specific problems I wanna work on, and this is how we're gonna solve them, and this is what we've learned over time. That's one form of taste. And the other one more aesthetically is, like, what things look good?

Speaker 5:

Like and that's Like,

Speaker 1:

flavors on a construction site.

Speaker 4:

This is actually very delicious.

Speaker 2:

That's your taste. That's your taste. Yeah.

Speaker 5:

Look at this Look at

Speaker 2:

the Hilarious.

Speaker 5:

Right. Look at the motion of the donkey moving, the camera, the angles, the amount of data creation, our team of artists and filmmakers. It's not trivial, to be honest. I think that's also the taste component. Like, shots like this, that's

Speaker 2:

of this is horrifying. To me, guess that's the point.

Speaker 1:

Had to summon the demon

Speaker 5:

on this one.

Speaker 2:

Think I'm gonna

Speaker 1:

talk to you. Have you been inspired by Anthropic at all? It feels like somebody could put you in the Anthropic-for-video bucket, in that they're just extremely focused on code and ignoring everything else. And meanwhile, your competitors are putting a lot of resources towards this, but they're not betting their entire business on it the way that you are. Yeah.

Speaker 5:

I think it's like a mercenaries versus visionaries type of bet, I would say. You wanna have people who feel very committed to the vision long term, and the way you do that is you're very focused on the culture, and that culture eventually shines in the product. I think Anthropic has that too. You can tell who works there and how they think, and it's also very cohesive in a way.

Speaker 4:

Mhmm.

Speaker 5:

I think we've spent a similar amount of time doing that, in a way, and I hope you can tell via the models themselves that that personality comes across nicely as well. Yeah. And I agree, in the end that will perhaps be the most defining part of the companies that stay in the long run. I think if you just throw money at the problem, you're not gonna get too far, to be honest.

Speaker 2:

Yeah. What went into the actual training run? Are you at a are you at a scale now where it's a meaningful capital investment to build a model like this? We saw the scaling paradigm change from, like you know, maybe it's a $100,000,000 to do a big frontier language model run. Then we were talking about billion dollar training runs, bigger and bigger training runs.

Speaker 2:

The results are remarkable, but has it been a remarkable amount of investment to get here, or are there more efficient ways to actually get to a frontier result without spending frontier money?

Speaker 5:

Yeah. I mean, it's definitely not cheap. Like, this is not traditional SaaS. So you definitely have to spend more money, more resources. But I think we've proven that we're not spending tens of billions of dollars to get there and to overcome the challenges.

Speaker 5:

And look, to be honest, like, the model is not perfect. There's a lot of things that are gonna improve and

Speaker 2:

Sure.

Speaker 5:

We're gonna fix, and we're gonna do larger training runs and do more over time. But I would say the most expensive thing is the natural intuition the team builds around what kinda works and what doesn't. It goes back to the idea of research taste. You can't just throw money at it. You have to spend enough time.

Speaker 5:

We've been working on Runway for almost a decade. Yeah. And so there's a lot that you learn over time about what works and what doesn't that informs a lot of the efficiencies in training. And, yes, you'll need more money to train larger and bigger models. Like, if this is the worst the models will ever be, imagine them in two years.

Speaker 5:

You're gonna get there by training larger models, for sure, but also by knowing how to train them in the first place. And that's the part that I think is hard to quantify per se. And what I'm really excited about is not only what the models can do, but also the efficiencies, not even in training, but on inference. Like, this is a price point that's very comparable to our previous models. So it's actually very usable, and you'll be using it in real time very soon.

Speaker 5:

And so that level of, I would say, efficiency at inference level, we haven't yet seen it, and and and I think we're we're gonna get there very soon.

Speaker 2:

Yeah. Fascinating. I mean, some of those videos are very distracting. They're pretty remarkable.

Speaker 1:

Your unlimited plan includes 2,250 credits monthly. How much video can one actually generate with that?

Speaker 5:

Well, technically, unlimited.

Speaker 1:

Oh, okay. I was confused because it said it said there's still, like, a a a credit credit system.

Speaker 2:

But No.

Speaker 5:

So we have a queue. We have compute, and there's a queue, and you get into the queue and you generate as the queue becomes available. If you just wanna generate fast, you pay for credits. So it depends on how anxious you are about your generations; it's a measurement of how fast you want it. But eventually, you can just literally generate unlimited.

Speaker 5:

By the way, I think no one else has a plan like that. It's a pretty good deal.
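What he's describing sounds like a two-lane generation queue: "unlimited" jobs wait for spare compute, while credit-backed jobs jump ahead. The sketch below is a generic illustration of that pattern; the names and priorities are hypothetical, not Runway's actual system.

```python
import heapq
import itertools

PAID_PRIORITY, RELAXED_PRIORITY = 0, 1  # lower number = served first
_counter = itertools.count()            # tie-breaker keeps FIFO order within a lane

queue = []

def submit(job_id, use_credits=False):
    priority = PAID_PRIORITY if use_credits else RELAXED_PRIORITY
    heapq.heappush(queue, (priority, next(_counter), job_id))

def next_job():
    # Credit-backed jobs drain first; relaxed "unlimited" jobs run when compute frees up.
    return heapq.heappop(queue)[2] if queue else None

submit("relaxed-1")
submit("paid-1", use_credits=True)
submit("relaxed-2")
print(next_job(), next_job(), next_job())  # paid-1 relaxed-1 relaxed-2
```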

Speaker 1:

What is like, what are the length of generations that are that are most commonly being done today? And is that a metric that you track? Like, are people consistently is it is it like a twenty second scene that's most common today? And and are you trying to get to two minutes or two hours? Like, how how do you think about

Speaker 5:

duration? Well, technically, you can do arbitrary durations if you wanted. But the average scene duration in a short film or a movie is actually two to three seconds at most, and that's actually been trending down. Like, the scene. Right?

Speaker 5:

The scene itself, each cut, is two to three seconds long on average. Yeah. And so when people say, I wanna generate the forty-five-minute-long thing, you don't want forty-five minutes of one camera, like Yeah. Fixed.

Speaker 5:

Mhmm. You want distinct parts and worlds, and you want the character in a close shot, a medium shot, a long shot, you know? That's a different problem from creating one continuous long sequence. So the one continuous long sequence, for me, is less interesting than the multi-shot approach, where you can create much more compelling narrative work. And I think we're not that far from that being a reality, where you can generate consistent narrative work, really good visuals, really good stories, with the level of quality of the videos that we're seeing right now here, but all tied together in a way that just feels cohesive, you know?

Speaker 5:

Yeah. And and so that's a different problem, I would say, altogether.

Speaker 1:

Yeah. There was some debate on the why doesn't the cursor for video Oh, yeah. Exist yet. Do you have any any thoughts there?

Speaker 5:

What's the cursor for video?

Speaker 2:

Basically, a nonlinear editor, like a Premiere Pro, a DaVinci Resolve, an Adobe After Effects, but Cursor for video, like, replacing the actual bones of the software that the editor, the video creator, uses. There've been a couple apps that have spun up. Runway, originally, the reason I was using it back in the day was for green screen,

Speaker 1:

for for Chroma

Speaker 2:

Key, basically. It was fantastic for that. And it feels like building a canvas, building an NLE, feels like one potential pathway to victory. It's also very difficult because you can't just fork VS Code. There are no leading open-source NLEs.

Speaker 2:

On the flip side, when you if you wanted to play nice with Adobe, you could be a vendor a la the way Nano Banana is now vended into Photoshop, and that could be a solution. And, you know, there's a variety of ways to to win. I'm interested in hearing your approach.

Speaker 5:

Yeah. That's definitely an interesting question. And, by the way, shout out to you for being an OG on Runway since, what, 2019. Yeah. Something

Speaker 4:

like that.

Speaker 2:

Crazy. I love that.

Speaker 5:

So my two thoughts are: first, NLE and editing and film, it's an art. And it's just a lot of pacing and details that are very nuanced and specific. It's about granular details, and it's hard for a model or a system to automate that level of decisions. That's on the purely NLE side. Right?

Speaker 5:

But I would say, at least for us, more interesting is the question of, do we need an NLE in the first place? Do we actually need this primitive? If you think about nonlinear editing, this idea that you're stacking frames of video against each other and cutting them Yeah. Before it was with physical razors, now we have digital razors, you're cutting things together.

Speaker 5:

My bet is that you probably won't need, like, NLEs. Like Mhmm. That whole paradigm will feel like a fax machine, like, in a few more years. And so

Speaker 2:

Yeah. No. That's somewhat what's happening with, like, the Devins and the Claude Codes and the Codexes of video. I just do wonder if there's gonna be an intermediate step, or maybe it'll just be absorbed by the current NLEs. I mean, I'm sure that's what your users are using.

Speaker 2:

Right?

Speaker 5:

Yeah. I I don't know. We'll we'll see it play, but I I'm not I'm not too fond of, like, you know, pushing, like, better versions of NLEs out there. I think there's there's something around how you make videos and how you interact with this AI systems that just naturally allows itself with different primitives. And if you think also about the fact that very soon you'll start to see this happening in real time

Speaker 2:

Yep.

Speaker 5:

Like, when you make real time, like, narrative work, or videos or experiences, how do you wanna call them? Like, you don't need to edit things async because you're generating on the fly, and you have people interact with them. And so it changes that's what I'm saying. It changes the nature of, like, those things in the first place. And and there's a transitional period where, like, you'll you we're seeing, like, NLEs being augmented with AI, but I think it's that's transitory.

Speaker 5:

I don't think it's gonna pay out, like, in the long run.

Speaker 2:

Yeah. Yeah. No. I think

Speaker 1:

Has has Hollywood capitulated yet what's going on there? We we had a it's funny. I've been hearing I've been hearing more and more about Suno from from not just guests and friends at the show but just like random people out in the world. It sounds like every single musical artist now is like using it in some degree even if they're not willing to talk about it. What what is the the the case in in traditional Hollywood and entertainment?

Speaker 1:

You can't exactly hide that you're using AI video. It's basically out in the open immediately. And there's just so much like, so much negative energy that gets focused on it, specifically from people that are within the industry.

Speaker 5:

You know, I think the negative energy is like the water problem with AI, you know? It's kind of this unrealistic and very noisy, unrepresentative sample of what's actually happening within the industry. If you go to LA and speak with the agencies, the talent, the filmmakers, the studios, the production teams, they got on board with AI years ago, months ago. They're fans, they're using it, they understand it.

Speaker 5:

Of course, there are pockets of people who are more advanced than others. Mhmm. But I would say that the public narrative hasn't yet caught up with that, partly because some people might not wanna speak about it, or it's much more interesting to say all the negative things than to say the positive things. I would say Hollywood has already overcome that, and they're pretty much on board.

Speaker 5:

I would say gaming companies are now where Hollywood companies were a year and a half or two years ago. So that's an industry that's now catching up on what AI can help them with and how they can use it. So, yeah, I would say some of those narratives are a bit fake, to be honest.

Speaker 2:

Yeah. Well, thank you so much for taking the time to come on the show. Congrats to the whole team. Busy day. We appreciate it, and I can't wait to play around with the new model.

Speaker 2:

We have a benchmark here, Bezel Bench, where we try and recreate a very complicated shot that we shot practically with a bunch of different watches, with our intern, or gap-semester hire, Tyler Cosgrove. The shot's very long. It pulls out. It twists around. It's a pretty complex shot, and that's our current benchmark, and we'll be testing.

Speaker 2:

And we'll let everyone know how it goes. But thank you so much for your time to come test out with

Speaker 5:

us. We'll talk to you for update. Take care. Goodbye. Let

Speaker 2:

me tell you about julius.ai, the AI data analyst that works for you. Join millions who use Julius to connect their data, ask questions, and get insights in seconds. We have Vincent from Prime Intellect in the Restream waiting room. How are you doing? Great to see you.

Speaker 2:

It's been too long since last weekend.

Speaker 8:

Thanks for having me.

Speaker 2:

Congratulations. Masterful timing, finding the one day that we're not live to launch your news. Tell us what happened on Wednesday

Speaker 1:

Grateful that

Speaker 2:

one day that we were off of streaming.

Speaker 8:

Yes. So excited to give you a rundown. So, basically, for the broader context, Prime Intellect's broader goals are really creating open frontier models, and infrastructure for everyone to create them. And last week, we released Intellect-3, which is basically a scale-up towards scaling RL and post-training

Speaker 2:

Mhmm.

Speaker 8:

And creating a model especially for more agentic tasks. So, basically, what we did is we took GLM and did a whole SFT stage and RL stage to create a state-of-the-art 100,000,000,000-parameter MoE model. And, really, that whole infrastructure is quite a challenge, from the RL environments to the broader code sandboxes and the whole stack to do post-training. Mhmm. That's basically what we built over the last half year.

Speaker 8:

I think Will Brown came on the show to unpack some of it on the verifiers and environments side. So, basically, that's what we released last week, and it really proved that we got performance at a 100,000,000,000 scale that previously only 300-to-600,000,000,000-parameter open-source models, like DeepSeek R1, for example, achieved. So, basically, getting to better performance at a much smaller scale. And I think in general it showcases that open models are starting to catch up. Obviously, quite interesting is seeing the trends, not just with our model, but also more broadly with other releases like DeepSeek today Yeah.

Speaker 8:

And over the weekend, that they're also on par with the closed models now. And really, it was almost like a preview release; we basically released our early checkpoint. And we're actually scaling it much further, also on more agentic capabilities, basically making it strong across a range of tasks. And, really, I think the foundation of this, which was quite interesting, is that we created this environments software. Anyone in the world can create one of these RL environments

Speaker 4:

Mhmm.

Speaker 8:

Which we ultimately then included in the training run. So, basically, different people in the open-source community actually contributed to the RL environments that we trained on for this model.
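To make the "anyone can contribute an RL environment" idea concrete, here is a minimal sketch of what such an environment often boils down to: a task, a rollout, and a verifiable reward the trainer can optimize against. The class and method names are hypothetical illustrations, not Prime Intellect's actual API.

```python
from dataclasses import dataclass

@dataclass
class Rollout:
    prompt: str
    completion: str
    reward: float  # verifiable score the RL trainer optimizes against

class ArithmeticEnv:
    """Toy contributed environment: reward 1.0 if the model's answer is exactly correct."""

    def sample_task(self):
        return {"prompt": "What is 17 * 23? Answer with just the number.", "answer": "391"}

    def score(self, task, completion):
        return 1.0 if completion.strip() == task["answer"] else 0.0

    def rollout(self, policy, task):
        completion = policy(task["prompt"])  # policy = any callable LLM wrapper
        return Rollout(task["prompt"], completion, self.score(task, completion))

# Usage with a stand-in "policy":
env = ArithmeticEnv()
task = env.sample_task()
print(env.rollout(lambda prompt: "391", task))  # reward 1.0
```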

Speaker 2:

So, yeah, give me a concrete example of, like, this shift of businesses that need to, you know, buy a model that has been trained in a specific RL environment. You know, we've heard the example of, like, someone's creating a clone of DoorDash, and they're figuring out how to do DoorDash orders agentically. But what else are you seeing? What are some other good examples of when a business, would pull this off the shelf from all the different opportunity, from all the different APIs that are out there and create something, I guess, semi custom for a specific business use case? Like, what are you seeing out there?

Speaker 8:

Yeah. So I think what's interesting is there are, I think, two buckets. Basically, there's a bunch of people creating RL environments Yeah. for the labs, like the DoorDash clones, etcetera. So, basically, to really push the capability.

Speaker 8:

So I think we're in this paradigm right now where, ultimately, scaling RL is the main way these models improve. Right? We've seen it with Opus or with GPT-5 and Gemini. That was mainly, I think, a scale-up in RL.

Speaker 8:

But, basically, what we are seeing are two things. On the one side, there's a lot of demand for these RL environments. But on the other side, RL is very sample efficient, so you can take an open model and really create an RL environment for the specific use case you care about and scale capabilities for that. So I think a good example of this was Cursor with Composer. That's what's widely believed or known.

Speaker 8:

It's believed to be a scale-up of an open-source model, and the RL environment was Cursor. They basically just gave it the tools and the things within the harness and application of Cursor itself. Yeah. So they basically trained that model on getting really good at using Cursor. I think we'll see the same play out across all the applications, where the broader theory is that every application, every company will be an AI company, or AI native, and will have an opportunity to really post-train and use RL to make the models work specifically on their application.

Speaker 8:

So you need to take examples of, like, say, a Figma. Right? Like, if they want to make make their platform agentic, really, they need to create an RL environment around Figma and post train on on that environment to be able to serve that within Figma. Like, kind of, like, out of the box, like, the closed models won't be perfect at, like, renavigating and and making those applications agentic. So I think that, like, that's the broader theory.

Speaker 8:

I think, really, the capital requirements are much, much lower than the big labs want you to believe. In a sense, you can, for hundreds of thousands of dollars, post-train a model to be much better on your application. And then you're also able to serve the

Speaker 2:

model cheaper. One weird trick. Post-train a model for a 100k and create a better so, I mean, that's basically what you're saying: if I'm Figma, as an example, and I could use a frontier model that's really expensive and beefy, and it knows some stuff about Figma but it also knows about the Roman Empire, I can instead RL on just my particular application and have a smaller model that's fine-tuned on an open-source Exactly. open-source model, and get better performance than with the big, beefy, do-everything omni model. Is that right?

Speaker 8:

Exactly. And I think, really, you get better performance, but also at a lower price point potentially.

Speaker 2:

Right?

Speaker 8:

Because you can really specialize the model to be extremely good for your use case. So I think you could see this with, like, Cognition

Speaker 4:

Sure.

Speaker 8:

Their own model, with Cursor post-training their own model Yeah. Composer. And also, it's much cheaper to serve. It's much faster. Same for the model Cognition was building.

Speaker 8:

So we've started to work with dozens of customers on helping them basically do post-training and RL. Yeah. I think we're starting to see a huge pull in terms of enterprises realizing that if they want a specific capability, RL is the way to get it, and it ultimately enables them, quite capital efficiently, to train those models and serve those models. And then really get to a point where, even in deployment, all the interactions from the user help improve the model. So with the Cursor example, every Cursor Tab interaction, every yes and no that a user gives to the model, is updating the model every two hours.

Speaker 8:

So it's what Dwarkesh talks a lot about. Two hours. Like, online RL. Yeah. Basically continuously retraining the model in two-hour intervals and pushing updates every two hours to Cursor Tab.

Speaker 8:

So, basically, every user using Cursor over the last two hours is being post-trained on, so to speak, with kind of an online RL loop. I think that's something we'll see more and more. Basically, applications will do their own RL, their own post-training.
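The "update the model every two hours from user feedback" idea is, at its core, a periodic online RL loop: buffer the accept/reject signals, run a training step on the fresh batch, redeploy. This is a schematic sketch of that loop; the interval comes from the conversation, but the function names and structure are invented for illustration, not any vendor's real pipeline.

```python
import time

UPDATE_INTERVAL_S = 2 * 60 * 60  # "every two hours," per the conversation

feedback_buffer = []  # (prompt, completion, accepted: bool) tuples collected in deployment

def record_interaction(prompt, completion, accepted):
    feedback_buffer.append((prompt, completion, accepted))

def train_step(batch):
    # Placeholder: a real system would run an RL/post-training update here
    # (e.g. weighting toward accepted completions). We just report the batch.
    accepted = sum(1 for _, _, ok in batch if ok)
    print(f"training on {len(batch)} interactions ({accepted} accepted)")

def online_rl_loop():
    while True:
        time.sleep(UPDATE_INTERVAL_S)
        batch, feedback_buffer[:] = feedback_buffer[:], []  # swap out the fresh batch
        if batch:
            train_step(batch)
            # deploy_new_checkpoint()  # hypothetical: push the updated model to serving
```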

Speaker 2:

Yep.

Speaker 8:

And this is really how we get, basically, towards AGI. Right? The question is, why haven't we, say, automated specific and valuable knowledge work yet? And I think the answer, which Sholto was speaking about too, for example, with the good example of automating taxes and accounting. Right?

Speaker 8:

No one has really created the RL environments, post-trained on them, and then served the model in the application where the end user is. And then, ultimately, the end user's interaction with the agent can improve the model further. Right? So I think that's really the paradigm we see playing out, which is really a paradigm of thousands of models, or millions of models, that basically continuously improve, and where the applications win to some extent through distribution. Ultimately, they own the end-customer interaction, right, where even the Cursors and the Cognitions have an advantage there over folks who are basically just model providers and who don't interact with millions of developers.

Speaker 8:

And I think we'll see the same play out across all the different applications. And it's something that's been talked about also in the context of Copilot and Microsoft. Right? They own distribution.

Speaker 8:

They can create the Cursor for Excel or for PowerPoint or other things, right, and then post-train on all those interactions. So I think we'll see this play out across all the different verticals. And I think it's a broader trend, you know, that every company needs to become AI native. Right? And also to just keep owning the distribution.

Speaker 8:

Like, they don't want to give all of it up to the big AGI labs.

Speaker 1:

Yep. That makes sense. We got a question from our intern, Tyler. If we can shoot over there. Yeah.

Speaker 1:

I guess I I saw you guys talk about this a little bit online, but is there any, like, point of you guys training your own base model?

Speaker 8:

Yeah. So, basically, I think one interesting release in this context was today: we actually supported Arcee in their base model release, which is kind of catching up to the Chinese base models. So, basically, we supported them in training a small MoE base model, which achieves pretty solid results. We released that, I think, an hour ago with them. And we're actually now ramping up with them towards a much bigger base model.

Speaker 8:

So fully pretrained from scratch. We actually just had 2,000 B300s going live, I think, yesterday, to ramp up towards a much bigger pretraining run. And I think the broader pattern is, since Llama had some reorgs and changes, and Mistral became sort of a forward-deployed European enterprise play or something, there's really no one left outside of China right now Mhmm. to go end to end in the model stack.

Speaker 8:

I think others, like Reflection, are also trying to pick that up. But there are very few players outside of China. So that's our broader goal. It really is serving the world more globally, but also the West and the US, with an end-to-end pipeline. Right?

Speaker 8:

From data to pretraining to post-training, full stack, and making that accessible to enterprises and people who want to train models. So I think there's a huge pool where a lot of enterprises, or even sovereign nation-states, etcetera, can't train on Chinese open models, but they also can't rely on closed models. So there's a huge gap in the market right now that we're trying to fill by really serving that whole segment.

Speaker 1:

Do you have anything else, Jordy? No. This is great.

Speaker 2:

I I wanna know one last question about, you know, what what will the market structure look like in maybe a year or two around, like, implementing these RL environments for company? Because when I when I see you know, you say every every company is an AI company. I I believe that that's somewhat true, and I believe every tech company, maybe every founder led tech company under ten years might be able to say, okay. Yes. We're gonna go and train, fine tune a model, and turn our our application into an RL environment.

Speaker 2:

But if I'm, you know, the Coca Cola company, you know, I might not be at that level of, like, going and building RL environments for every business process. I'm probably more of a buyer of this AI as SaaS almost. So how do you see that kind of breaking out? How do you see, like, a a truly legacy, you know, nontech company adopting a fine tuned LLM or an r l or r l'd model?

Speaker 8:

Totally. No. I think there are early adopters and later adopters. I think Coca-Cola might be more like a later adopter. They might not need to adopt it early on.

Speaker 8:

Yeah. But I think they are more adopting it just, like, in less obvious places. It's like, ultimately, I think they're initially just, like, using the AI tools that use us, for example.

Speaker 2:

Right? Yep.

Speaker 8:

In in a sense where it's like, say, customer service. Right? It's like is is a, like, perfect example of, like, where you get a lot of gains out of post training. Mhmm. And then, like, they might put like like, basically, the AI native customer service platforms might use us to post train using Coca Cola data

Speaker 2:

Sure.

Speaker 8:

To serve them a better model. So what we'll see play out, I think, is really just making a lot of that so accessible, to your point, that it feels more like using SaaS Mhmm. where one element of it is we're also launching a whole RFT platform, basically, and offering to make it extremely easy and plug and play. But then there's also a forward-deployed element, right, where you can outsource a lot of that stuff to our team. And I think the other element is, really, we're working on making our own thing agentic and autonomous, so that you could basically just use an autonomous AI researcher to do all of it for you.

Speaker 8:

Right? You basically just plug it into your system, and the AI even creates AI for you. Yeah. And I think that's the next paradigm: making training models, fine-tuning models, post-training models in general as accessible as vibe coding is today. Right?

Speaker 8:

In a sense, with vibe coding, literally every human on Earth is now able to code some stuff up. And I think we'll see the same play out with AI over the next twelve months, and that's one of the big things that we're playing into. We're pushing towards autonomous AI research Yeah. where AI can do most of it for you.

Speaker 2:

Well, thank you so much for taking the time to come and talk to us on the show. Congratulations on all the progress. Very, very And we will talk to you soon.

Speaker 1:

Great to see you, Vincent.

Speaker 2:

Goodbye. See you, guys. Cheers. Have a good one. Let me tell you about Privy.

Speaker 2:

Privy makes it easy to build on crypto rails. Securely spin up white-label wallets, sign transactions, and integrate on-chain infrastructure, all through one simple API. And I'm also gonna tell you about adquick.com. Out-of-home advertising made easy and measurable. Plan, buy, and measure out-of-home with precision. Our last guest of the show is Ben Hylak.

Speaker 2:

Did he do the Jaguar rebrand?

Speaker 1:

That's That's him. Ben, welcome. And we'll follow him forever.

Speaker 2:

How are you? How are you? Grab a seat. Hang out. Good to see you.

Speaker 2:

Oh, you brought hats. Fantastic. Thank you. Please, grab a seat. Introduce yourself.

Speaker 2:

Introduce the company. What's the news?

Speaker 7:

Yes. So my name's Ben. Hi, Yes.

Speaker 1:

Hey. Let's take a second for the flow.

Speaker 2:

Fantastic. Thank you.

Speaker 7:

This is kinda like

Speaker 1:

a vintage Silicon Valley flow that's somewhat of a lost art.

Speaker 7:

You know, I appreciate it. You guys have great hair as well. Pressure. You'll notice I'm not wearing a hat today. And it's because I did notice, actually.

Speaker 7:

I discovered a blow dryer, I think, around nine or ten months ago. So that was big. But yeah, my name is Ben, as you guys know. I'm the CTO of a company called Raindrop. Mhmm. So, really simply put, we monitor agents in production.

Speaker 7:

Mhmm. So we were building a product ourselves probably around two years ago now, which is like a coding agent. And we realized that there was just this huge gap of, like, if you're using Sentry, if you're using traditional analytics, you know, they're covering, like, the things the users are clicking. And almost everything that's happening in your product, if you're making an agent, is just not covered. So you just have no idea

Speaker 1:

what's Because agents are going absolutely wild.

Speaker 7:

They're going crazy.

Speaker 1:

They're going haywire.

Speaker 7:

You know, what's been insane, I think, one of the things that's been, like, really kind of critical to our growth in the last couple months has been realizing that as agents get better, this problem gets worse. So that was not necessarily intuitive to us in the beginning. You know, you think like, oh, well, agents are gonna get better. Maybe this problem becomes less important. But it's like, actually, as they become more capable, they can use more tools.

Speaker 7:

More valuable. So, for example, if you take a company like Replit, maybe a year or two years ago, or when they first launched, you couldn't quite get as far, right? Maybe you could just get a personal website or something. And so, if it messes up at that point and kind of gets stuck, it's like, okay, maybe it's not the end of the world. But now, with Replit, you're able to build real applications. Like, people are building real production applications.

Speaker 7:

So now, if you get to a point where it gets stuck, something goes wrong, suddenly it's like it's a real issue. So that was not intuitive before.

Speaker 2:

So, agent's a pretty overloaded term

Speaker 6:

Sure.

Speaker 2:

At this point. I I think of, you know, when I fire off a deep research report in ChatGPT, that's an agentic workflow, to some customer service agent that's happening completely behind the scenes, and the customer might not even know that they're Yep. Dealing with an agent. And then there's coding agents. There's a few that you mentioned.

Speaker 2:

Totally. Are are you, dividing the market and trying to focus on an early landing zone first, or do you want to do all of those?

Speaker 7:

Yes. So we focus on essentially and I will say, agree. The word agent's overloaded. We're very hesitant to use it for a really long time, and then we realize it actually matters.

Speaker 2:

Of course.

Speaker 7:

So we focus on products that have some sort of user input and some sort of assistant output, eventually. That's our focus. What we're not focused on is, for example, specific ML pipelines or things like maybe translating text, or even summarizing text. We want to see that the user has some sort of request and the assistant is responding to that request. And we map essentially everything that happens between that initial user input and what the assistant actually responds.

Speaker 7:

Mhmm.
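A minimal way to picture "map everything between the user's request and the assistant's reply" is a trace wrapper that records each intermediate step (tool calls, errors, timings) as structured events. This is a generic sketch of that idea, not Raindrop's SDK; every name here is hypothetical.

```python
import time
from typing import Any

class AgentTrace:
    """Collects every step between the user's input and the assistant's final output."""

    def __init__(self, user_input: str):
        self.user_input = user_input
        self.events: list[dict[str, Any]] = []

    def log(self, kind: str, **fields):
        self.events.append({"kind": kind, "ts": time.time(), **fields})

    def finish(self, assistant_output: str):
        self.log("assistant_output", text=assistant_output)
        return {"input": self.user_input, "events": self.events}

# Hypothetical usage inside an agent loop:
trace = AgentTrace("summarize my last 5 orders")
trace.log("tool_call", name="orders.search", args={"limit": 5})
trace.log("tool_error", name="orders.search", error="timeout after 30s")  # the kind of failure evals miss
record = trace.finish("Sorry, I couldn't load your orders.")
```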

Speaker 2:

And then what's the go to market for you?

Speaker 7:

I mean, it's been a little crazy, actually. We've had a lot of inbound. So some of our biggest customers have been inbound. A lot of it has been, like, when we first launched, I think, like, guess this was, like, six months ago or seven months ago now, agents weren't as big of a deal. And so I think in the first month or two, we had lot of customers who were, oh, okay, like, I have evals

Speaker 1:

think we'll need this.

Speaker 7:

Yeah. But it didn't really make sense for them. And a lot of them came back in the last, like, like, a month or two after that. We're like, holy shit. Okay.

Speaker 7:

Now I get it. We need you. Like, so it's actually been a ton of inbound. We do what we don't really pay for advertising, anything like that. You know, if we see a really crazy failure in the news, we'll reach out to that company, obviously, and be like, hey.

Speaker 7:

This is something we can help with.

Speaker 2:

Sure. Sure. Sure. How are you thinking about the, you know, target, like, the best type of customer? Are you segmenting it by size?

Speaker 2:

Do you want to go enterprise upfront because they're implementing agents at scale? Or are you more likely to see immediate results of the start up that just kind of gets it and they can hop on really quickly? Like, how are you thinking about prioritizing, if you are at all?

Speaker 9:

Yeah, it's

Speaker 7:

a really good question. I think we really look at the entire range, and we see, and have always seen, startups as being a really core part of keeping our company healthy. Sure. You know, I heard a while ago that PostHog has this metric where they look at what percentage of YC companies in every batch are using them. Sure.

Speaker 7:

Sure. Sure. And so that's why we started with startups. Like, they're always they're able to move faster. So, for example, like, when a new model comes out, just to give actually a very specific example.

Speaker 7:

So GPT-5 introduced intermediate reasoning, right? It was kind of one of the first models to do this, where it's going to make tool calls, look at the results of those tool calls, think about it, and then make more tool calls, take that, think about it, you know, more tool calls. It sounds small or subtle, but it kind of means that if you architected your system, your pipelines, in the wrong way, you just couldn't use that. And it really helped. So whereas startups will just throw everything out the next day.

Speaker 7:

Right? And they'll ship a whole new thing in a week. You don't see, like, you know, like, if you look at the biggest enterprises, they're not going to do that. So you can learn really fast with startups. That being said, on the flip side, I think that the problem we're solving is actually most painful for enterprises.

Speaker 7:

Right? The most critical, high-stakes environments are where failures cost the most in every single sense.
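The interleaved pattern he described for GPT-5 (call tools, read the results, think, call more tools) is essentially an agent loop where reasoning and tool execution alternate until the model decides it's done. A minimal sketch of that control flow, using a stand-in model function rather than any real API:

```python
def run_agent(user_request, model, tools, max_turns=8):
    """Alternate between model reasoning and tool execution until the model answers."""
    history = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):
        step = model(history)  # stand-in: returns either a tool call or a final answer
        if step["type"] == "tool_call":
            result = tools[step["name"]](**step["args"])
            history.append({"role": "tool", "name": step["name"], "content": result})
            continue  # the model sees the result and reasons again
        return step["content"]  # final answer
    return "gave up after max_turns"
```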

Speaker 2:

Yeah.

Speaker 1:

Yeah. What are some categories of agents that you're excited about that are maybe underhyped today? Coding agents are, like, sufficiently hyped, I

Speaker 7:

Coding agents are and for good reason.

Speaker 1:

Reason, but like, and may maybe they're deserving of more hype. Yeah. Yeah. Yeah. But but what what other category you know, think I think people have been sold on the AI, BDR Yes.

Speaker 1:

Haven't may maybe companies are getting a ton of value from it, and they're getting so much value they don't wanna come on TBPN and talk about it Yeah. Because they don't want their competitors to know. But and then obviously, like, CX feels sufficiently Sure.

Speaker 4:

Mhmm.

Speaker 1:

Hyped. But what what else are you seeing?

Speaker 7:

Man, there are so many different things. Like, take Speak, for example, language learning. As models get better, that experience just actually starts to become really, really viable. Mhmm. So that's an example of something where, yeah, it existed a year ago, it existed two years ago. But as voice models get better, as the models themselves get better it's actually not just, you know, if you try to use ChatGPT, for example, to learn a language, you sort of can.

Speaker 7:

But if you ask it to, like, critique you, for example, it just never will. Like, if you say something wrong, it just isn't going to stop and be like, hey, look, actually,

Speaker 6:

like still glazing. Even if right.

Speaker 7:

Yeah. Exactly.

Speaker 2:

It will is the most complicated Spanish sentence.

Speaker 7:

It will. It will. Right? So, like, it turns out, I think we see this with a lot of products, that, like, getting something right is actually a lot of details and really, really understanding that domain. Mhmm.

Speaker 7:

So I think we're seeing that in literally every domain, like, whether it's, like, marketing, whether it's, like even just, like, the idea of having a personal assistant. Like, notably, we don't have that yet, which is crazy. Right? Like, we we have these assistant models, but then none of us actually have a assistant. We can just chat and be like, hey, send this email.

Speaker 4:

Yeah.

Speaker 7:

Right? I don't.

Speaker 2:

Yeah.

Speaker 7:

And so I but I think we're starting to see products actually, like, nail that, like, smaller mostly, but

Speaker 2:

How are you thinking about just I I I don't know if I don't know if, like, if you're Sentry for AI agents, does Sentry actually handle this? But just types of AI failures that happen for more infrastructural reasons. So just the GPUs are on fire, or, like, there's just not enough GPUs in this particular cloud, you just see a spike in demand, so you just can't provision more. Like, those types of more more tactical errors, do you help with that?

Speaker 7:

Sort of would be the answer. So I think it's actually really interesting is that one thing we realized about evals is that they don't catch those sort of issues. Like, you know, you're kind of testing just like the model, what is the model responding?

Speaker 2:

Sure.

Speaker 7:

But then there's all of these things that happen in between. Like, I remember really, really early on when we launched, one of the issues that a customer caught was, like, their file upload was broken. So a bunch of users all started complaining about, like, oh, like, the file upload's taking too long. Sure. It's like, well, it's not like an AI problem, but it is.

Speaker 7:

Yeah. And so we see that with, like, tool calls. We saw one of our customers had an issue, sort of what you're saying, which is that, like, they started having, like they they have their own GPUs, they started having, like, an infrastructure error Mhmm. And it was mixing up responses between users. Mhmm.

Speaker 7:

And so users all started complaining, like, hey, that's not what I like, what are you talking about? That's not my system. It was, like, an increase in that in, like

Speaker 4:

I don't

Speaker 2:

know if you're talking about Meta, but I think that happened in Meta.

Speaker 7:

It wasn't Meta. They're not one of our customers yet. But But there was a situation where, like,

Speaker 2:

people could share it was it was not that bad, but it was something like, I could share my chat with you, but if I shared it with you and I didn't know that I was sharing it, it would go out everywhere. And so, yeah, stuff like that happens. Totally.

Speaker 7:

There's all these sorts of things. So you can actually catch those sort of problems. It's actually one of the one of the things is, like, that ground truth is actually really, really important because if you just see, like, a few errors, like, let's say you have tool call like, agent calls tools,

Speaker 2:

like Mhmm.

Speaker 7:

Yeah, that's gonna error once in a while. Right? That might not be the biggest deal, but if you can see when it actually starts to affect users, that's really powerful.
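The point about ground truth ("a few tool errors are fine; what matters is when they start hitting users") boils down to comparing the current failure rate against a baseline and only alerting when the deviation is user-visible. A rough sketch of that idea, with made-up thresholds rather than anything from Raindrop's product:

```python
def should_alert(window_events, baseline_error_rate, min_affected_users=10, ratio=3.0):
    """Alert only when tool failures both spike vs. baseline and touch real users."""
    if not window_events:
        return False
    errors = [e for e in window_events if e["status"] == "error"]
    error_rate = len(errors) / len(window_events)
    affected_users = len({e["user_id"] for e in errors})
    return error_rate > ratio * baseline_error_rate and affected_users >= min_affected_users

# Hypothetical window: 200 tool calls, 40 failures spread across 25 distinct users.
window = [{"status": "error", "user_id": i % 25} for i in range(40)] + \
         [{"status": "ok", "user_id": i} for i in range(160)]
print(should_alert(window, baseline_error_rate=0.02))  # True: 20% error rate, 25 users affected
```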

Speaker 2:

Yeah. Yeah. That makes sense. What about degradation of models under the hood? I feel like people I don't know if it's just a meme.

Speaker 2:

I've noticed it here and there. I'm not I'm not benchmarking everything every night, like, some big companies. But it does feel like that sometimes. Right? My

Speaker 1:

top agent.

Speaker 2:

It feels like sometimes I'm like, wait a minute. They I used to respond in this many tokens. Now it responds this way. It used to look HD. Now, like, definition, like

Speaker 7:

know. I agree with you. I I think I think it's real. I know that I I can't say too much. I know that at least on one occasion

Speaker 2:

Yeah.

Speaker 7:

That I I think people were led to believe that there wasn't a thing. I I know that there was. Okay. So that's it. You know what I mean?

Speaker 7:

I I can't say who. It's big company. And because I I noticed this and I thought

Speaker 2:

I was like say whose hands were caught

Speaker 7:

red hands. Red Yeah. Exactly. And and like, it it was like, I thought it was a cursor problem. It was like some really absurd behavior.

Speaker 4:

And then

Speaker 7:

I went into ChatGPT and it was doing the same oh, I just But anyway, yeah, I think the reality is that every single one of these providers is having these sorts of problems, and they're trying to optimize costs. Totally. They're trying to make changes. Yeah. So I think it's natural.

Speaker 2:

And some of them I understand where I'm like, oh, okay. Well, yeah. Realistically, I haven't used that in a long time. I came back. I kinda cherished.

Speaker 2:

Yeah. Yeah. I don't really mind that you put me on the lower tier.

Speaker 1:

Yeah. Yeah. Yeah.

Speaker 2:

I just hope that for the people that actually, like, went and built businesses around this that are using it at the API level that are hopefully paying for the service at a high gross margin to you, you're not degrading the service behind their backs. Like

Speaker 7:

A 100%.

Speaker 2:

Right. So anyway, who did the deal? Anybody we know?

Speaker 1:

You wanna hit the gong?

Speaker 2:

You wanna hit the gong?

Speaker 7:

Oh, let's do it. Yeah.

Speaker 2:

Yeah. Hit hit the gong. Tell us how much you raised. How much did you raise? How much did you raise?

Speaker 2:

We

Speaker 7:

raised so we we raised $15,000,000 total. Lightspeed.

Speaker 2:

Who did the deal? Bucky? Yeah. Let's go.

Speaker 1:

Let's go. Let's for Bucky.

Speaker 7:

Let's hit it again for Bucky.

Speaker 2:

Love This

Speaker 4:

one's for you. Bucky's for Bucky.

Speaker 2:

Clean hit. Yeah. We're big fans

Speaker 4:

of Bucky

Speaker 2:

up here. I just wanted to get him a shout

Speaker 7:

out to Us too.

Speaker 2:

Us I

Speaker 7:

I think the moment we met him, we were like, okay. Like, he matched our energy, like

Speaker 6:

He's great.

Speaker 7:

Great vibe.

Speaker 2:

Yeah. Yeah. He's doing good.

Speaker 1:

How's building the team going?

Speaker 7:

It's going. It's going. I think we're really, really picky, we've realized. And so it's really hard. Mhmm.

Speaker 7:

And I think hiring in San Francisco is really hard. Mhmm. We have a great team. It's honestly really, really small still.

Speaker 2:

Yeah. If you wanna get out of San Francisco, you could book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better. You could do an off-site there.

Speaker 1:

We could do it off-site. I once used a team off-site as recruiting tactic. I think we're going on an off-site in two weeks. Okay. Posted a picture.

Speaker 2:

Oh, yeah. You wanna come? Yeah.

Speaker 1:

We got an amazing Totally.

Speaker 7:

We'll do it. I'll I'll do it.

Speaker 1:

Great emergency.

Speaker 7:

Yeah. We're doing it. So if you're watching right we'll I'll post a picture soon of

Speaker 2:

of the Okay. Fantastic.

Speaker 7:

But we have an amazing

Speaker 1:

I figure if you're if you're picky and you're in San Francisco, it's like the most ruthless, like, talent war constant.

Speaker 7:

You know, the other thing is that I think when you hire amazing people, they have zero tolerance for working with people that are not amazing. Yes. Yeah. And so I think you can't even fool yourself as a founder if you start, you know, back-channeling, whatever. If it doesn't fit, everybody knows and feels that.

Speaker 2:

Have you had to bring anyone soup? Are you familiar with this?

Speaker 7:

I'm not familiar

Speaker 2:

with this. Okay. So apparently, this is from Ashlee Vance. This is a scoop that just dropped on Core Memory, the podcast. He had Mark Chen, OpenAI's research chief, on the show as part of a post-Gemini 3 sit-down to get the update from OpenAI.

Speaker 2:

And he said, I knew the AI talent wars were rough, but not this rough. Zuck is out there apparently delivering handmade soup.

Speaker 7:

Woah. Wow.

Speaker 2:

And OpenAI has soup counters. And so I guess

Speaker 1:

Wait. They count how much soup is I'm sorry. Soup counter. Wait. I don't even

Speaker 2:

know what this means.

Speaker 8:

But Oh,

Speaker 7:

I see. I see. Like, they count how

Speaker 2:

many times No. No.

Speaker 6:

I think

Speaker 1:

it's just like a place where you get soup. Right?

Speaker 2:

Aggressive. What exactly is this, tit for tat? We can play this on the show later. But Nice. Well, but yes.

Speaker 2:

I mean

Speaker 7:

No. No. My partner

Speaker 2:

You do need to compete with soup for

Speaker 7:

sure. The thing works. I'm joking. You know, we do typewritten notes. I'll write a note on a typewriter, you know, when we do our offer letter. Right?

Speaker 7:

That adds a little something. That's good. A little sass.

Speaker 2:

You

Speaker 1:

Are you messing with that?

Speaker 7:

No. I'm serious. I love typewriters.

Speaker 2:

No. I I I like that. It's just a

Speaker 1:

way to actually value this

Speaker 7:

Yeah. Yeah.

Speaker 2:

All the text is AI generated. I'm sure.

Speaker 7:

Of course. Yeah. Yeah. I'm just copying from ChatGPT. No.

Speaker 7:

I think it's like

Speaker 2:

a little bit of a You're not just a new hire. You're a revelation. This is a statement.

Speaker 1:

Yeah. Yeah. Exactly.

Speaker 2:

They're having fun. Well, that's great. Congratulations on all the progress. Very excited. I'm sure you'll be back on the show soon

Speaker 7:

I will.

Speaker 2:

Giving us plenty more updates. And it's been fun because, I mean, I believe we started tracking your journey via your viral joke post about doing the or something. But we've always had fun featuring your posts. Having you here. In person.

Speaker 2:

Yes. Live in person.

Speaker 7:

One one year ago today, remember roughly one year ago, I was sitting in a parking lot

Speaker 1:

Really?

Speaker 7:

And I was listening, and I remember you guys were talking about that tweet.

Speaker 2:

This is the whole shtick. It was little love letters to Silicon Valley folks. Just little messages of, hey, we found something fun that you did. Because anyone can, like, anyone can repost, you know. It's easy to send a small thing. Yes.

Speaker 2:

It's very hard to actually print it out, sit down, talk about it. It means something. But we appreciate your post. And we appreciate you coming on the show

Speaker 1:

Appreciate you

Speaker 2:

guys. And hanging out today. So thanks so much.

Speaker 7:

Thank you.

Speaker 2:

We're gonna close out the show. Thank you. We'll talk to you in just a second. While he's walking off, let me tell you about getbezel.com. Shop over 26,500 luxury

Speaker 1:

watches. Intelligent

Speaker 2:

watches. Authenticated in house by Bezel's team of experts. I also need to tell you about 8sleep.com. Exceptional sleep without exception. Fall asleep faster, sleep deeper, wake up energized.

Speaker 2:

I had a rough night. Kids have been all over the place, but I still got, John, a 98.

Speaker 8:

98.

Speaker 1:

98.

Speaker 2:

Wow. That is remarkable. Well, is there any breaking

Speaker 1:

news? We'll see if

Speaker 2:

You wanna go through some breaking news? Bucco Capital Bloke is on the timeline. You can feel the panic behind the urgency and intensity with which people are defending NVIDIA. It feels visceral and quite intense. You can tell how much is riding on this.

Speaker 2:

It makes a lot of sense. What what else did you wanna

Speaker 1:

I thought it was notable. PagerDuty has fallen to a $1,100,000,000 market cap.

Speaker 2:

That was a good one.

Speaker 1:

And they're at $500,000,000 of ARR. They're not growing anymore. They're trading at 2.1x ARR. It's profitable, according to Jason Lemkin over at SaaStr.

Speaker 4:

Mhmm.

Speaker 1:

So, yeah, rough time out there if you're not growing, regardless of the revenue scale. Two days ago, we shared that Enron Mhmm. Back on 11/29/2001, NVIDIA replaced Enron in the S&P 500. I saw this post go out from our incredible team, and I immediately Googled to fact-check it.

Speaker 2:

I was like, there's no way. Someone has made a terrible mistake on our team, and we are doing fake news unironically now. We used to have some fun, but apparently, this is real.

Speaker 1:

It's real.

Speaker 2:

It's real.

Speaker 1:

Jensen was like, I'll take that spot.

Speaker 2:

November 29. Okay. Obviously, that's not how it works. It is it is much more mathematical than that, I believe. Standard and Poor's picks the largest companies.

Speaker 2:

And after certain ebbs and flows of the market, they swap folks in and out. Yep. But this went pretty viral, 5,000 likes. What is really interesting is, of course, the NVIDIA-Enron comparisons are just so silly to me. Obviously, the discussion is like, will it go from being the best business in the entire history of the world to being, like, you know, somewhat competitive and having to deal with, like, minor competition from other people?

Speaker 2:

It does not seem like it's some ridiculous Enron situation that's, like, so insane. People are just having fun with that headline. But what is incredible is this this branded shirt he's wearing. Look at this thing.

Speaker 1:

Fantastic.

Speaker 2:

So awesome. I love it.

Speaker 1:

Not enough people trying to go snipe vintage Nvidia merch.

Speaker 2:

It's a great shirt. It's a great look. And I feel like it's gotta make a comeback. The button-down: this is pre the Silicon Valley "I'm just in a t-shirt" era. But it's post-suits, you know.

Speaker 2:

It's like, we're not suits. We're working in technology. We're still gonna throw on a collar, but we're gonna dress it down a little bit. No tie.

Speaker 1:

Guys, scroll up on this for a second.

Speaker 5:

Yeah.

Speaker 1:

I'll keep going. Keep going. Oh, who's not following, Tyler? You gotta follow the account. No.

Speaker 1:

This is not my account.

Speaker 2:

I think this is more of like a burner account situation.

Speaker 1:

Oh, it's a scraper that we use to for

Speaker 2:

It is. It is. It is.

Speaker 1:

You gotta correct that, Tyler. Yeah. Come on.

Speaker 2:

That's not

Speaker 1:

Gorkem over at fal had an absolute banger. What's that? This was a chart showing ASML sells fewer than 500 units per year and generates $37,000,000,000 in revenue. Is there any company in the world with a wider moat? And Gorkem says, series A pitch meeting.

Speaker 1:

Sorry to cut you off, but what happened in December 2024? Since there's like a slight dip in the chart. Yeah.

Speaker 2:

What what did happen? Why did their revenue drop in 2024? I don't I actually don't know. Was it just so much pull forward from 2023 or something?

Speaker 5:

I don't know.

Speaker 1:

Maybe they they were developing some hubris. They decided to get complacent.

Speaker 2:

Yes. I mean, I certainly understand the the concept. Okay. According to the CEO, customers in Taiwan had delays and weren't ready to take delivery yet and orders got pushed back at the same time. China raced to get as many machines as possible before export controls tightened.

Speaker 2:

Okay. That makes sense.

Speaker 1:

What's up? Sash Zatz says, Oxford Dictionary didn't get the memo. Apparently, rage bait named

Speaker 5:

whatever the year.

Speaker 1:

What? I think No. No. No. I think they're actually right.

Speaker 1:

That it would be the word of the year. But it is so funny that you posted this and then Oxford Dictionary Yeah. So this is true. According to the BBC, rage bait was named Oxford word of the year 2025. It certainly feels that way on the timeline.

Speaker 2:

Your post, 1,000,000 views on this. 3,600 likes. People really, this really set the agenda for a little bit. Wow. Congratulations.

Speaker 2:

What a banger essay. Should TBPN do a word of the year? I like that. Or motion?

Speaker 1:

Motion.

Speaker 2:

Motion might be our

Speaker 1:

word of

Speaker 2:

the year.

Speaker 1:

Word of the year.

Speaker 2:

Motion's a big word of the year. Motion named word of the year 2025 by TBPN.

Speaker 1:

If you have it, you'll know.

Speaker 2:

You'll know. We'll call you. Tyler, in other breaking news, Keith Rabois is taking shots at Airwallex. Airwallex is now on the other side of a billion dollars in ARR. What I love about this chart isn't that we hit a big milestone

Speaker 1:

founder, Jack Zhang.

Speaker 2:

It's how fast the business is accelerating. It took more than six years to get to $100,000,000 in ARR. What does Airwallex do exactly?

Speaker 1:

Can we I think they provide payment rails for a bunch of American fintechs to handle international payments.

Speaker 2:

Oh, okay. Okay. And so Keith Rabois, who's been on the show multiple times, says: cool growth chart. Have you disclosed to US customers like Rippling, bill.com, Brex, and Navan that you're quietly sending their customers' data to China? Airwallex has become a Chinese backdoor into sensitive American data, like from AI labs and defense contractors.

Speaker 2:

You must already know this, but your China-based ops infrastructure and investors create legal obligations to assist with CCP espionage upon request. Through Airwallex, Beijing can access supplier payments for AI labs, so they could know who's using what models. Payroll data for defense contractors, personal data for employees abroad, that's obviously not good. Obviously, many companies do business in China, and that's not inherently a bad thing. But your company has become a guaranteed vector of transfer to the Chinese government, and that's a different thing entirely.

Speaker 2:

You have multiple points of vulnerability: people, legal structure, cap table. What's happening? You route global payments for US companies in critical sectors without disclosing that you're under Chinese jurisdiction. You moved your HQ to Singapore. Well, that seems like a step in the right direction, maybe.

Speaker 2:

But your largest operational footprint is in China. Okay. No. So maybe one step back. One step back.

Speaker 2:

And hundreds of your engineers in mainland China touch production payment systems. You are subject to Chinese law that requires Airwallex employees to support CCP intelligence requests and quietly hand over data when asked. You hid this from your customers, but you are well aware of your obligations to China, and that's why you insist on protections for Chinese data access in your contracts. Thanks to you, the Chinese government now has direct, covert, legally enforceable access to sensitive financial information. This is a big story.

Speaker 2:

This is a crazy scoop from Keith Rabois. And I will be interested to see where this goes, how quickly they can, you know, remedy this. This popped up a couple of years ago during the Clubhouse era. The Clubhouse back end, I believe, was at one point working with a Chinese company. Or maybe it was that there was a company that did, like, peer-to-peer audio streaming that was based in China.

Speaker 2:

And so if you were building a competitor, you might use that company. Yeah. And so

Speaker 1:

I became familiar with Airwallex through the 20VC episode that Harry did with Jack, the founder.

Speaker 2:

Is it ripping? Is it I mean, it seems like the business is doing really well.

Speaker 1:

Yeah. Yeah. Yeah. Yeah.

Speaker 2:

Oh, well.

Speaker 1:

Anyways Well, what else I'm sure we'll

Speaker 2:

hear more about it.

Speaker 1:

To say. We gotta get on with We do. With Menlo Park.

Speaker 2:

Okay. Well, thank you so much for listening.

Speaker 1:

Hanging out with us today.

Speaker 2:

We will see you tomorrow. Please leave us five stars on Apple Podcasts and Spotify.

Speaker 1:

The Thanksgiving break was absolutely brutal for us. That's rough. Every single day.

Speaker 2:

And hopefully, you had a great Thanksgiving.

Speaker 1:

I'd wake up and just twiddle my thumbs wishing we were podcasting. Yeah. It's great to be back. Hope you had an amazing break or a little holiday, and we'll see you tomorrow.

Speaker 2:

See you tomorrow. Cheers. Goodbye.