TBPN

TBPN.com is made possible by:
Ramp - https://ramp.com
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Polymarket - https://polymarket.com

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV

What is TBPN?

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM - 2 PM PST, Monday - Friday. Available on X, Apple, Spotify, and YouTube.

Speaker 1:

Welcome to Technology Brothers, the number one live show in tech. Jordy, lock in because we are live from the Temple Of Technology, the fortress of finance, the capital of capital. That's right. Today is Tuesday, 02/18/2025, and this show starts now. We got a great show for you guys.

Speaker 1:

We got a Grok three deep dive. We're breaking it down. I got early access. I ran the same deep research prompt through OpenAI Deep Research and through Grok. We're gonna break down each analysis, and the prompt I used is about AI and how these LLMs are progressing and some benchmarking stuff.

Speaker 1:

So it'll be a good educational experience for you, and you'll also get to make your own call on which one you think is better. But you gotta play with these to understand them. We got a ton of other stuff. There's a deep dive in the Wall Street Journal about how Elon Musk and Sam Altman kind of their relationship soured. We'll be putting that in the truth zone if there's any untruths in there.

Speaker 1:

Mhmm. And, it's also a massive launch day because our boy, Alexander Karp, his book is officially live. Let's go.

Speaker 2:

The doctor.

Speaker 1:

The Technological Republic: Hard Power, Soft Power, and the Future of the West. It's available now. If you preordered it, it should be on your doorstep soon. It's available on Audible, and we highly recommend it. We're big fans of Karp over here at TBPN.

Speaker 1:

And so let's kick it off with Grok three. I got access, I think, on Sunday. I had it do the first prompt, which was: tell me a joke. Make sure it's actually funny, original, and insightful. My favorite comedians are Shane Gillis, Norm Macdonald, etcetera.

Speaker 1:

Structure the joke like an actual stand-up set, not like a knock-knock joke or a basic question-answer type joke. And so here's what Grok said.

Speaker 2:

You couldn't have been more clear.

Speaker 1:

Couldn't have been more clear. And let's see if AI is taking my job as a comedian, news podcaster, tech analyst, venture capitalist.

Speaker 2:

Wordcel.

Speaker 1:

Wordcel. Grok says, you ever notice how AI is supposed to take over the world, but it's still out here struggling to understand my mom's accent when she yells at Siri? Like, we're all worried about Skynet, but meanwhile, Siri's over here going, I'm sorry, I didn't catch that. Did you say turn on the lights or burn down the house?

Speaker 1:

And it's not just Siri. Right? All these tech bros are like, AI is gonna replace your job, man. But I'm like, bro, AI can't even replace my fantasy football lineup without benching Patrick Mahomes for a kicker. Meanwhile, I'm getting texts from my buddy, who's a product manager, and his Apple Watch is like, meeting starting, and he's sprinting across the office like he's running the Dakar Rally. AI is not taking over the world. It's too busy trying to figure out if we're asking for a weather report or confessing to a crime. By the time it gets there, we'll all be living on the moon anyway, probably because some founder convinced us it's the next big VC flex.

Speaker 2:

This is so jarring to listen to because it sounds like it's structured in a way that it should be funny, but it's just completely not. Yeah. And so your brain is sort of, like, yearning for a... The punchline. A zinger. Yep.

Speaker 2:

And it just never comes. Yep. You're perpetually in the state of, like, hoping you're gonna hear that Yeah. sweet note

Speaker 1:

of the zinger. Exactly.

Speaker 2:

And it just doesn't come.

Speaker 1:

So I do think, you know, obviously, the benchmarks for Grok three are incredible. We'll go into all that. There's a lot of good stuff in here. I think what's interesting about this is that Grok three, clearly, this is trained on my feed because, in my last 10 tweets, I posted Dakar Rally, Patrick Mahomes. I also posted something about, in here, I said, product manager and the Apple Watch.

Speaker 1:

Remember that post? Yep. You actually fed that one to me. And so it clearly took that text in and said, oh, okay, this user, this person who's prompting me, likes Apple Watches and product managers, and he'll get those jokes maybe.

Speaker 1:

But it didn't really integrate it properly in an interesting way. And I was texting with one of the xAI guys and was like, this is really interesting. Like, I don't know if I like this, but I think this could be really cool for certain prompts. Yeah. And it could make it so that sure.

Speaker 1:

Sometimes I want a vanilla, like, clean LLM installation where I come to it, it doesn't know anything about me, and I Yeah. and I give it a really robust prompt, and it doesn't assume anything. And then other times I'm gonna want it to really know, oh, this guy's, like, super in-group in this particular niche.

Speaker 1:

He's down the rabbit hole on this. Yeah. So all of a sudden, if I'm explaining, you know, like, VC fund dynamics, I can be super jargony and super in-group and super detailed. But then if I'm explaining, like, oil and gas to this guy, he doesn't know anything about oil exploration, so I gotta keep that high level. That's actually a benefit to me, and that would be really cool.

Speaker 1:

So I think fine-tuning the models on the individual person's timeline, whatever information you can get about them, that could be very cool. But in this case, it was very, very silly, which I got.

Speaker 2:

So funny to have this many things that are aligned to the stuff that you post about and botch it this hard. I'm getting texts from my buddy who's a product manager, and his Apple Watch is, like, meeting starting, because you posted Yeah. about the you posted the White Lotus reference of the product manager. And he's sprinting across the office like he's running the Dakar Rally. Like, it just doesn't combine.

Speaker 1:

Yeah. How does that go with AI supposed to take over the world? Like Yeah. I don't get it. Anyway, Grok three. I don't think there is a good comedy eval

Speaker 2:

No.

Speaker 1:

There isn't, right now. They're not even benchmarking it.

Speaker 2:

We talked about this before we came online. The evals are hyperfixated on these sort of ultra complex problems, and they clearly massively struggle on, like, a comedy eval. Right? Like, somebody should make the comedy eval. A hundred percent.

Speaker 2:

It basically says, I asked, you know, all these foundation models to make various types of jokes. This is the funniest joke. I tested it against real humans Yep. To see if they laugh. Exactly.

Speaker 2:

And so that's a totally valid way to test a model because being able to produce jokes Yeah. is, you know, a sort of comedic intelligence.

Speaker 1:

Right? Totally.

Speaker 2:

And so Yeah. it's not just about solving these sort of esoteric problems that nobody thinks about all day long. I think that Grok could potentially get the most extreme product-market fit. And what this, like, specific response tells me they're going towards is, they wanna have, like, the current thing happen, and then somebody be able to generate a post Yep. that's hyper relevant to what they talk about Yep.

Speaker 2:

For that specific moment in time. And in many ways, when the timeline is just model on model on model on model, that will be, like, some form of

Speaker 1:

Yeah.

Speaker 2:

You know, comedic general intelligence.

Speaker 1:

We were joking about X potentially setting up the @chat handle. I just saw Mike Solana posted, chat, what's going on with these airplanes? And just saying, like, hey, chat. Like, what like, is this real?

Speaker 1:

Chat, is this real? is, like, a common online Internet phrase these days. And so I could definitely see Grok being instantiated as a player character or an NPC on X in a way that it could interact with a lot of posts, or be someone that's integrated. It's also interesting looking at the xAI launch stream, looking at what Elon was pushing the engineers on. So he's like, okay.

Speaker 1:

We have this breakthrough. We have the cutting-edge LLM. When are we gonna solve Riemann? He's talking about the Riemann hypothesis, which is kind of, like, the hardest mathematical problem that humanity hasn't solved yet. Yep.

Speaker 1:

And the xAI engineers are kinda joking, well, as long as we have a random string generator, like, something that just randomly generates words, and then a validator, and enough compute time, you should just be able to brute-force it. And they're kinda joking around, but it's very clear that they are focused on fundamental math, fundamental physics, breakthroughs in science and technology, and that's just a completely different vector of optimization against comedy. Anyway, let's move on to Ben Thompson, who already posted an update breaking down Grok three. Real quick.

Speaker 2:

Let's talk about the actual launch of Grok three. Okay. Yeah. Because I'm just sitting there hammering through some emails last night, and I'm actually watching PMF or Die. Yeah.

Speaker 2:

And I hear Patty in the cage be like, yo, the Grok three announcement's going up, and they just kind of haphazardly throw up this livestream Yeah. and then just, like, sit there with one camera. Yeah. And it's kind of cool in that it's authentic.

Speaker 2:

They're not overly fixated on production value or anything like that. There's been a number of sort of model releases that were structured, you know. OpenAI is continuously drilling into people's heads, we are the Apple of AI.

Speaker 1:

Yeah. We're product focused.

Speaker 2:

Product focused, but also just, like, the production value of their presentations is at a different level. And so anyways, I thought it was fascinating that they just had one camera, showed up, streamed it, you know, basically bounced. No bells and whistles. And so, cool approach, similar to Safe Superintelligence, which is basically saying, like, we're not even gonna release products until we have AGI Yeah. which we'll get to.

Speaker 1:

Yeah. A little bit. Yeah. I mean, the whole AI landscape's fascinating. The model layer's commoditizing, but there are so many different product strategies on top of the Yeah.

Speaker 1:

On top of the models. And it's very cool to see that xAI is just, you know, forking, delivering straight into X. Yeah. I love that. So, Ben Thompson at Stratechery, highly recommend that you subscribe.

Speaker 1:

He kicks off with a quote from Bloomberg. He says, xAI showed off the updated Grok three model. They call it the smartest AI on Earth. Across math, science, and coding benchmarks, Grok three beats Google's Gemini, DeepSeek's v three, Anthropic's Claude, and OpenAI's GPT four o. They announced this during a livestream Monday.

Speaker 1:

Grok three has more than 10 times the compute power of its predecessor and completed pre-training in early January, Musk said during the presentation alongside three xAI engineers. We're continually improving the models every day. Literally within twenty-four hours, you'll see improvements. And that's true. Like, they are really rolling out stuff very, very, very quickly.

Speaker 1:

They also introduced a smart search engine, which I tested. And interestingly, you don't need to click a button to trigger it. It's just, if it thinks that you want, you know, some deep research, it just goes and does it. Yeah. It's called Deep Search.

Speaker 1:

It's a reasoning chatbot that expresses its process of understanding a query and how it plans its response. And we'll take you through one of the results from Grok three's Deep Search product. And then they also intend to release a voice-based chatbot as soon as possible. So, Ben Thompson writes, Grok three appears to be one of, if not the, best-performing base model in the world. It is topping the usual benchmarks, but not if you include o three, which is interesting.

Speaker 1:

That was not in the bar charts, which we'll get to. Yeah. And it tops all categories with the highest score in LM Arena.

Speaker 2:

Yeah. Obviously, we're xAI fans, X fans here, Grok fans, but the sort of selective positioning of new models against other models, like, definitely needs to be called out Yeah. because people just sort of decide, alright, who are we gonna compare ourselves to in this very moment? Yeah.

Speaker 2:

Because everybody just wants to show, you

Speaker 1:

know. Yeah. I mean, honestly, like, there's so many different vectors because o three can be very compute intensive, then there's o three mini, then there's o three mini high. And so, you know, there's this big, like, value trade-off between yeah. If you let the LLM reason for hours and you spend $2,000 per query, you can get remarkable results.

Speaker 1:

But then Yeah. is that really a fair benchmark against something that costs a dollar in inference? Yeah. 10¢. And so, all of a sudden, you need to plot these on, like, an x and y graph, and then eventually you need to plot them on some sort of, like, you know, unintelligible tensor graph or something.

Speaker 1:

Anyway, let's go to Andrej Karpathy, the absolute dog, who is not a fair observer because he's worked with Elon, but he's also worked at OpenAI. So maybe he's a little bit more fair than most people think. I really enjoy his analysis. He says, the impression I got here overall is that this is somewhere around o one pro capability, and ahead of DeepSeek R one. Though, of course, we need actual real evaluations to look at.

Speaker 1:

The impression I get of Deep Search is that it's approximately around Perplexity's deep research offering, which is great, but not at the level of OpenAI's recently released Deep Research, which still feels more thorough and reliable. And I found that Deep Research is just better at spitting out 5,000 words. Most of the other products spit out a thousand words. Yeah. And so you just get more out of Deep Research.

Speaker 2:

Which is what you want from deep research. Usually. Usually. You're not looking for, you know, a perfectly summarized handful of paragraphs.

Speaker 1:

Exactly. The

Speaker 2:

entire point is to let you kind of figure out what's important about what's there.

Speaker 1:

Karpathy goes on. He says, as far as a quick vibe check over two hours this morning, Grok three plus thinking feels somewhere around the state-of-the-art territory of OpenAI's strongest models, o one pro at $200 a month, and slightly better than DeepSeek R one and Gemini two point o flash thinking, which is quite incredible considering that the team started from scratch one year ago. This timescale to state-of-the-art territory is unprecedented, and that is undeniable. This gets to the first Grok three takeaway. It's simultaneously surprising and not surprising.

Speaker 1:

Start with the latter. xAI famously built a 100,000 GPU cluster in Memphis and now has 200,000 GPUs and used 20,000 of them to train Grok three. And so they're still building the larger cluster. So even though you'll see a lot of people posting on the timeline, oh, you know, Grok three, a hundred thousand GPUs, that's not accurate.

Speaker 1:

They only used 20,000 for this, but it's still impressive.

Speaker 2:

Yeah. And just to highlight again, this is the largest cluster ever. The twenty thousand GPUs.

Speaker 1:

That's not true. The hundred thousand is the largest ever, I believe.

Speaker 2:

So so he's saying the largest ever for xAI or the largest ever for any foundation model?

Speaker 1:

I don't know. I asked both ChatGPT and Grok to pull this. Grok says GPT four used 25 k GPUs, while Grok three used 20,000 GPUs. Now ChatGPT says that GPT four was just thousands, 20 k GPUs, A100s.

Speaker 2:

So it seems like around twenty to twenty-five k has been the top to date.

Speaker 1:

It's honestly unclear from these.

Speaker 2:

But clearly, Elon knows that he's just he's just willing to sprint towards these big, big numbers as fast as humanly possible.

Speaker 1:

100%. And so, sure, it resulted in a top-tier model. That's not surprising, given it is what scaling laws predicted, but it is comforting, as there was a question as to whether scaling laws had hit a wall. And people were saying this during the DeepSeek kind of fiasco. What is surprising is that xAI is only nineteen months old.

Speaker 1:

The company has gone from incorporation to the largest GPU cluster in the world to arguably the best model in less time than it has taken OpenAI to go from GPT four to GPT five, albeit with substantial updates along the way Yeah. including the big four o update just this past weekend. It's a testament to Elon Musk and his team, says Jensen Huang, CEO of NVIDIA.

Speaker 2:

So to me, this tells me that our read on the Elon bid, which was only last week, it feels like a month ago at this point. Yeah. But Elon bidding a hundred billion dollars to buy some, you know, not entirely known amount of OpenAI means he either was purely doing that out of spite, to toss, like, a wrench

Speaker 1:

Yeah.

Speaker 2:

Into that process, or he believes that the product layer is what actually matters, because he's able to achieve incredible performance purely at the model layer. But even as Karpathy is saying, the product layer is not there yet. And certainly, Grok has nowhere near the real consumer usage Mhmm. that OpenAI does.

Speaker 1:

Yeah. It's interesting. So he goes into competitive implications. He says, I did get access to Grok three, but not the thinking and search models. I actually don't know if I have access to the thinking and search models.

Speaker 1:

It's unclear to me from a product perspective. And he goes on to say that he finds himself using Grok two quite a bit, almost entirely via its integration into the X app. It turns out being plugged into a preexisting distribution channel is very useful. And this is a crazy bull case for even buying X.

Speaker 2:

But the challenge is that X is a very specific type of audience that is not necessarily reflective of

Speaker 1:

Yeah.

Speaker 2:

Certainly not the LinkedIn audience. Sure. Like, you could argue that OpenAI should go do a deal with LinkedIn and say, like, make us your default LLM

Speaker 1:

Yeah.

Speaker 2:

And just distribute us to, like, the normie masses because, like, long term, it's gonna be more important to be there.

Speaker 1:

Yeah. Yeah. He also says, I also never use it anywhere else. For me, product matters, and ChatGPT remains my product of choice. I feel the exact same way.

Speaker 1:

I'm also curious to what extent ChatGPT Deep Research's advantages over other competitors like Google, Perplexity, and now xAI are due simply to o three still being the best model versus OpenAI Yeah. spending more time crafting a better tool. Regardless, the impetus for OpenAI to focus on being a consumer tech company is as strong as ever. OpenAI is the only AI lab to have organically built its own distribution channel, and they really, really need to get an advertising product out the door to ensure they are serving free customers the best possible models Yep. which makes perfect sense.

Speaker 1:

This also explains why

Speaker 2:

That has to be pending, you know, this quarter, potentially. Yeah.

Speaker 1:

And there's just a question of, do they use Microsoft as the partner there? I believe Netflix was considering that as well because Microsoft has a really big ad network Ad network. Yeah. from LinkedIn and from Yeah. and from Bing.

Speaker 1:

Even though you don't think of them as a big advertising company, they built all the algorithms and all the structure and all the inventory and stuff.

Speaker 2:

Certainly not gonna be meta.

Speaker 1:

Yeah. And so this also explains why OpenAI is tying up with Oracle and SoftBank and trying to become a for-profit. xAI raised $6,000,000,000 last November and actually owns its GPUs. Musk also said on the livestream that xAI is building out a 1,000,000 GPU cluster. Let's go.

Speaker 1:

Size gong for the 1,000,000 cluster. Boom. Let's get that in a handier spot.

Speaker 2:

This more dialed in. I actually, We have that. Raises the size of the

Speaker 1:

Oh, it raises the it's hilarious. And then, of course, he ties it to Ilya Sutskever, who Bloomberg reports is raising more than $1,000,000,000 for his startup at a valuation of over $30,000,000,000. We'll talk about this later in the show. Yeah. Vaulting the nascent venture into the ranks of the world's most valuable private technology companies.

Speaker 1:

Greenoaks is in; they're putting in 500 mil, said a person who asked not to be identified. Greenoaks is also an investor in AI companies Scale and Databricks. The round marks a significant valuation jump from the $5,000,000,000 that Sutskever's company was worth before. First off, a disclaimer.

Speaker 1:

Daniel Gross, a frequent Stratechery interview guest, is the CEO of SSI.

Speaker 2:

I never realized that.

Speaker 1:

I didn't know that.

Speaker 2:

Is that how he'll deal with it? Are we scooping Ben's scoop right now?

Speaker 1:

With the scoop. It's too much. People get confused. No. No.

Speaker 1:

I think this was actually, like, publicized, like, very early

Speaker 2:

on. Interesting.

Speaker 1:

That Ilya wanted to be the technical lead more than anything else. That's very cool.

Speaker 2:

Because DG, you know, from everything I've seen, he seems just extremely, extremely motivated to win and be in the most important places constantly. Right? It's funny. He also had an accelerator. Yep.

Speaker 2:

Sam had an accelerator, a small accelerator called Y Combinator.

Speaker 1:

Mhmm.

Speaker 2:

So the accelerator to foundation model pipeline is very real.

Speaker 1:

You know, the other path, the counterfactual, the path not traveled here: Daniel Gross sold his AI company to Apple, got acquired, should've, you know, taken over the CEO role.

Speaker 2:

Yeah. Yeah. Yeah.

Speaker 1:

This is something we advocate here on the show. If you get your company acquihired, start asking the CEO, hey. What's the succession plan?

Speaker 2:

Yeah. How can

Speaker 1:

I get in the how can I get

Speaker 2:

in the Hang out outside of the board meetings? Yep. Try to flag people down. It'll only take four quarters to get a little time with, like, the key players. And then from there

Speaker 1:

Yeah. You

Speaker 2:

know, just start you know? It's,

Speaker 1:

It's such an underrated strategy right now. I think, like, there's massive value in being a founder and actually going in and Going for the top job. Going for the top job. It sounds too ambitious to be true, but I think someone will eventually do it, and we'll all be like, wow, it's amazing.

Speaker 2:

Yeah. A lot of a lot of founders end up getting acquihired and then are not super happy with the leadership at the company.

Speaker 1:

Yeah. And then

Speaker 2:

there's So just become the

Speaker 1:

leadership. Become the leadership instead of, and it sounds like a joke, but it's not. It's actually a better path than just, like, resting and vesting, and then you're like, oh, what do I wanna do next? Do I have to do the same company or, like, become a VC Start over. kinda lost.

Speaker 1:

Instead of, like, wait. I just got acquired by a company that does the same thing that I was doing, but they have a thousand times more resources. Yeah. The only problem here is that I'm not in charge. What if I was in charge?

Speaker 1:

Yeah. Solve that problem. Yeah. Anyway, that's my little stump speech. And so, this is wild about SSI.

Speaker 1:

So they have pledged not to release a product until it has achieved safe superintelligence. So they're gonna raise so much money. Well, xAI explains why: it is absolutely viable to have started late, provided you have the funds, and catch up quickly. And of course, SSI's talent speaks for itself. SSI is also, I would imagine, a big NVIDIA customer.

Speaker 1:

Again, I don't know that to be true, but what else might the money be going towards? Which brings this conversation back to chips. On one hand, chips remain the gating factor to the easiest route to a frontier model. On the other hand, this is why DeepSeek matters. They're not the only route to something that is at least competitive.

Speaker 1:

The debate on the value of an attempted export ban remains an open one. Anyway, fantastic article

Speaker 2:

from Absolutely. While I totally respect SSI saying, we're not gonna ship until we've achieved our goal, which is, like, the kind of approach you can only really take if you're Ilya and, like, Daniel Gross, where you're basically saying, yes, we're gonna need to raise billions of dollars, you're just gonna have to trust us and bet on us that we're gonna deliver. One of the things about the foundation model wars right now, which will be studied eventually because they're creating this sort of foundational technology that presumably our entire society will be dependent on at some point.

Speaker 2:

It is so distracting for these companies and the teams and the investors when there's a new model releasing every week, and you can imagine everybody at Anthropic last night was watching the Grok, you know, demo live, not working. Yeah. Everybody at OpenAI was doing the same thing. Right? Yep.

Speaker 2:

Yep. Yep. So it's just wildly distracting to constantly be forced to ship and ship and ship Yep. When you can tell that they're able to make incredible progress internally without even getting user feedback. Right?

Speaker 2:

They're

Speaker 1:

basically Yeah.

Speaker 2:

Yeah. Yeah. That's what I mean. Elon's basically saying, hey, I just need another 20,000,000,000, and I'm gonna make a much better model.

Speaker 2:

Like Yeah. Sure. We'll make it available through X, you know, the xAI-X integration. Yeah.

Speaker 1:

Riemann hypothesis. He is the customer, and he has his eval. And once it solves the Riemann hypothesis, he'll be happy.

Speaker 2:

Yeah.

Speaker 1:

And that's kind of it, which is fascinating. I think another hot take that we were discussing earlier was, there's this big push around, like, the model layer is commoditizing.

Speaker 2:

Yeah.

Speaker 1:

And people say that as, like, a bear case. Yeah. But I was wondering if you flip it around, like, people make tons of money in commodities all the time. Right? Like, oil, big oil.

Speaker 1:

Like, oil is a commodity.

Speaker 2:

Still worth producing oil.

Speaker 1:

It's hugely valuable. There were massive companies built on it. There were small companies that just did a little exploration and just had a little plot of land. That will track.

Speaker 2:

You know, actual infrastructure technology providers.

Speaker 1:

And so I wonder if there's a world where, you know, intelligence is so valuable, and Yeah. LLMs and AI are so valuable, that, you know, even if you're, like, the sixth-best model, you can still sell the intelligence, if it's at the frontier level, to the market at a Yeah. market-clearing price that still generates profit above what the NVIDIA GPUs cost. Yeah.

Speaker 1:

And you still reap the reward, and the 30,000,000,000 valuation is completely justified in the same sense that, you know, hey, you're like, hey, I got a plot of land in Texas that I'm gonna pull some oil out of. Yeah.

Speaker 1:

You don't have a monopoly on all of oil Yeah. but you still made money. Yeah. And so I wonder if

Speaker 2:

that's another plan. You know, if LLMs replace a lot of work broadly, knowledge work specifically Yep. it's easy to say, hey, these are trillion-dollar markets that we're playing in. Yep.

Speaker 2:

It makes total sense that there's five to 10 companies that are gonna raise billions of dollars to go after that. Right? Because the prize is so great. Yep. The potential prize is so great that, you know, Ilya and Gross raising at $30,000,000,000 post actually just ends up looking like a great investment, like, how do you potentially get a 50 x on a multibillion-dollar check?

Speaker 2:

Like, maybe that time.

Speaker 1:

It is so crazy when you think about it that way. Like, when you think about, like, crypto, it's like a big new technology. A lot of people are into it. It's just, like, a cool thing. Yeah.

Speaker 1:

And, yeah, the market cap of all the crypto combined is around a trillion dollars. Social networking, also a very valuable, cool new technology: market cap around a trillion dollars. Phones, cool new technology: market cap around a trillion dollars. Electric cars, cool new technology: market cap around a trillion dollars. Yeah.

Speaker 1:

You know, operating systems, Microsoft, around a trillion dollars. Cloud computing, there's around a trillion dollars of value. And so it's like, in what world does AI and LLMs not wind up being around a trillion dollars of value? I get that, like, it could go any direction in how you slice that up. It could be very equal.

Speaker 1:

It could be winner-take-all, and, you know, hey, people are gonna have their bull case for X or bull case for OpenAI, Anthropic, whoever. But, you know, it just feels like the stakes are definitely a trillion. Right?

Speaker 2:

Yeah.

Speaker 1:

If we look out in the future, even if we're not being, like, superintelligence-pilled, and we're just thinking about, like, yeah, this is a new technology, and it will be a consumer market or an enterprise market or any of them. Like, will there be an opportunity to make a bunch of money? Probably. Yeah. So, anyway Yeah.

Speaker 1:

Let's move on to Yacine. He's working for Elon over at X, a good friend of the show. He's become a bit of an OpenAI hater. Yeah. But let's hear him out and see We

Speaker 2:

got some poster-on-poster violence here because this is clearly a Roon caricature, or the same exact character, you know, dancing around the data center.

Speaker 1:

So he's arguing that there's no moat with this. He says, moats are Silicon Valley head canon, wishful thinking. They need it to exist. They make you believe that you're doing the thing that would be impossible for others, all to justify the fundraise. When they say moat, what they're saying is proprietary monopoly.

Speaker 1:

I'm here to tell you that the monopoly is a myth. The proprietary monopoly simply does not exist. OpenAI could open source all of their models, and it wouldn't really harm their business existence. It would only harm their ability to fundraise. Everyone should take Peter Thiel more seriously.

Speaker 1:

Just be different.

Speaker 2:

I mean, what a poster. I mean, he's layering on a bunch of things that people could potentially disagree with in the first few paragraphs, which positions him well to then extrapolate on all those ideas now that people are kinda riled up a little bit.

Speaker 1:

Totally. So he says, is capital a moat? In previous times, capital was a moat, but as technology progresses, everything becomes cheaper. Skilled labor is now being automated.

Speaker 1:

Levelsio can command a few order-of-magnitude $10,000,000 valued companies, spawning them every three months for fun. He doesn't need to raise. I like that.

Speaker 2:

We got a good level

Speaker 1:

in the code, by

Speaker 2:

the way. He would just

Speaker 1:

He would crush it. He'd be out in a day. Yeah. Especially with the latest code gen slop, spitting out PHP server code. It's free.

Speaker 1:

It's free. The software I produce for myself on a daily basis would have taken a team of engineers. Now, it's one hour in in between work and dinner. It's post scarcity. It's unprecedented.

Speaker 1:

Yep. We are at the point where the idea of capital as a moat is not being sold by the people who need the funds. Instead, the idea is being pushed by the people with the sad, sad capital that has nowhere to go. What is the point of money, when legions of engineers actually just slow you down? When you really only need eight x H100 frogs to create outsized impact.

Speaker 1:

Your real competitor is the end user writing their own software. I used to work at Auth0. We shipped login as a service. It sold itself. Many Seattle homes were paid off by sales commissions on Auth0 contracts.

Speaker 1:

It wasn't Okta. It wasn't PingOne. Our biggest competitor was our customers themselves writing their own login page. We had to sell them on it not being a good way to spend their time. So he breaks down the serious AGI players.

Speaker 1:

This is personal opinion. OpenAI? Nah. Remember Yahoo? Okay.

Speaker 1:

Yeah. Yacine, we know who you're pushing for here. The serious players: vertical robotics manufacturers and MuJoCo shops. I don't know what that means, MuJoCo. DJI, Unitree, Tesla, Anduril, DeepSeek, Jane Street, Palantir, social media platforms, Meta, vertical AI companies, xAI, DeepSeek, Google.

Speaker 1:

Are your ML researchers wearing kneepads and a sweaty tank top plugging in Ethernet? If yes, you're going to make it. ETH Zurich, I will not elaborate. Distributed decentralized AI. Bootstrap companies with a lot of freedom, people building novel devices.

Speaker 1:

Interesting. What can't be easily replicated? Software used to cost money, and AI is just software. There is a limit on individual model intelligence that yields diminishing returns, and it is my belief that we will rapidly exhaust any room available. It is my belief that AGI will run on consumer hardware and any proprietary models will only be run as a convenience.

Speaker 1:

What can't be easily replaced is knees on the ground, soldering irons, the MuJoCo RL sauce, and godlike product managers yearning. Invest accordingly. This is financial advice for my customers.

Speaker 2:

This feels like basically 40 posts combined into one Oh, yeah. Article. Yeah. Yeah. Yeah.

Speaker 2:

No. But one thing that's interesting. So when we were talking earlier about intelligence being commodified and some people saying, oh, it's not investable if it's a commodity, especially at these prices, you're saying, hey, commodities actually are very valuable.

Speaker 2:

There's a ridiculous amount of demand for them. The hard part, what he's outlining here, is that, you know, everybody knows that there's a big market for oil. Like, you can see what the price is. If you can get a barrel of oil out of the ground, you can sell it at this sort of market-clearing price for that grade of oil. Right?

Speaker 1:

Yep.

Speaker 2:

There's, like, you know, oil that turns into jet fuel and whatever, but, like, it's a commodity. It's sort of priced in the open market. There's fully a world where intelligence gets to that point too. And then the challenge is what Yacine is saying here with AI, which is, like, what can't be easily replicated is knees on the ground, soldering irons, etcetera, etcetera. And so the same thing with oil: the challenge with getting oil out of the ground is you gotta get oil out of the ground.

Speaker 2:

Right? You need dudes with trucks to go drill for oil and then make sure the drill is running properly. Make sure you have water. Make sure you have power. You know, the drill is breaking all the time.

Speaker 2:

And so, it doesn't seem like data centers will maybe be as much of a challenge to keep operational, but it's certainly not easy to operate these things highly efficiently. Right?

Speaker 1:

Yeah. Interesting. I don't know. Yeah. If it's a commodity, where is the most valuable part of the commodity?

Speaker 1:

Is it is it

Speaker 2:

The producer and this, like, the supplier producer.

Speaker 1:

Yeah. But is that energy? Is that NVIDIA? Or is that the LLM, you know, deliverer?

Speaker 2:

Right.

Speaker 1:

It's an open question. Anyway, let's move on to some reactions on the timeline, and then we'll give you a breakdown of Grok three versus OpenAI Deep Research. So, Creatine Cycle, who's been on the show before, says, oh, sorry, Otismos. It's time to grow some arms and social skills. And Jordy chimes in and says, getting dice continues to be underpriced.

Speaker 1:

And he says, printing this out and hanging it on my wall in Pac Heights. And he just shows the Grok three reasoning data. It beat o three mini high.

Speaker 2:

Beta. Reasoning data.

Speaker 1:

Yeah. People were upset about this. Rex, two slides later, says, they omitted o three from the chart in the live stream for some reason, so I added the numbers for you. But this was reasonable because o three isn't public, and those benchmarks can't be verified yet. So I think it's understandable that they didn't include it.

Speaker 1:

But then again, Grok three with reasoning isn't publicly available, so people can't validate that eval. So there's a lot of, like, questions over, like, what counts for an eval that goes in the charts. But yeah.

Speaker 2:

I think they both have, you know, fair arguments there. Yeah. They're like, we can validate what our model does.

Speaker 1:

Yeah. Yeah. Yeah.

Speaker 2:

We can't validate what you're doing because it's not available. So we know. Yep. Yep. Yep.

Speaker 2:

We cooked.

Speaker 1:

Yeah. And so, Sheel Mohnot gives a little context on the Grok Colossus. 200,000 GPUs in Memphis, Tennessee, built in two hundred and fourteen days. I think SemiAnalysis and Dylan Patel have done a deeper dive here that we'll have to dig into. There's some really fascinating things going on here.

Speaker 1:

Sheel says, why Memphis? They found an old Electrolux factory, added trailers to add cooling, one quarter of the cooling capacity of the US, and trailers for power generation. They use Tesla Megapacks to smooth out power fluctuations. The next training cluster will go from a quarter gigawatt to 1.2 gigawatts using GB200s from NVIDIA. There's a fascinating thing that leaked because of Llama.

Speaker 1:

When Meta open sourced Llama three, or some piece of that, they had a line, or, like, a function, a piece of code in the training code, that basically said, like, what was it? It was, like, power plant no blow up. And so, basically, what it does is, when you're training, you're doing all this complex math that's using a lot of electricity.

Speaker 2:

Yeah.

Speaker 1:

And then all of a sudden, there'll be, like, moments where you're maybe just like, okay, we're done doing the math. We're gonna, like, save the model for a second. Yeah. And so all of a sudden, your power will just drop to zero. Yeah.

Speaker 1:

And if the power substation or the power plant is sending tons of electricity and then all of a sudden it doesn't get any load, it can blow up or something like that, or it can be bad. And so they built them, like, a special piece of code that basically just has it do random math.
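The trick being described can be sketched in a few lines: keep a filler workload running whenever the real training math pauses, so the facility's power draw never drops off a cliff. This is a hypothetical CPU toy, assuming the shape described above; the real Meta code reportedly runs dummy GPU kernels during checkpoint saves, not Python arithmetic.

```python
import threading
import time

def dummy_load(stop_event):
    """Burn cycles with throwaway arithmetic so power draw stays level.
    (Sketch of the 'power plant no blow up' idea; real versions run on GPU.)"""
    x = 1.000001
    iterations = 0
    while not stop_event.is_set():
        x = (x * 1.000001) % 1000.0  # meaningless math, just load
        iterations += 1
    return iterations

def checkpoint_with_ballast(save_fn):
    """Run the dummy load in a background thread while a checkpoint is
    written, so the cluster's draw doesn't fall to zero mid-save."""
    stop = threading.Event()
    result = {}
    worker = threading.Thread(target=lambda: result.update(iters=dummy_load(stop)))
    worker.start()
    save_fn()  # the real, slow checkpoint write would happen here
    stop.set()
    worker.join()
    return result["iters"]

# Stand-in for a slow model save; the filler math runs the whole time
iters = checkpoint_with_ballast(lambda: time.sleep(0.1))
print(iters > 0)
```

The point of the design is only that the load is continuous, not useful: any steady computation smooths the step change the hosts describe.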

Speaker 2:

Can you imagine the reason that that got implemented? Yeah. Oh, yeah. These engineers are just, like It

Speaker 1:

might blow up.

Speaker 2:

You know, just tapping away, you know, vibe coding, and then they just get a call. It's like, hey, like, what are you guys doing

Speaker 1:

right now? Yeah. Yeah. Yeah. Yeah.

Speaker 1:

I mean, it would be crazy if there was, like, a Yeah. A major power plant

Speaker 2:

explosion. So, just going back to the Colossus, the cluster in Memphis. This was always, in many ways, I think we talked about this last year, the bull case for Elon building a foundation model company: that he has the boots-on-the-ground experience to spin up heavy infrastructure, manufacturing facilities, you know, figuring out how to manufacture batteries at scale for Tesla in the United States. And so having the playbook and being able to pull talent from, you know, a company like Tesla or SpaceX and say, hey, we need to build this data center in two hundred and fourteen days.

Speaker 2:

Like, unbelievable. Yeah. Like, many people would say, you know, even, you know, linking this back to LA, people are not gonna have their homes rebuilt for four years.

Speaker 1:

That's crazy.

Speaker 2:

They did this. They built the biggest, you know, cluster ever in two hundred and fourteen days

Speaker 1:

Oh, that's a great point.

Speaker 2:

Where they just found an old factory and said, we're just doing this

Speaker 1:

right now. Yeah. And that's really Elon's, like Yeah. Edge here for sure.

Speaker 2:

Yeah. No. And it is so part of the culture at Tesla. I mean, they're making millions of Starlinks, cars, rockets. At Deterrence Yeah.

Speaker 2:

Was at Tesla specifically working on their battery cell engineering. And so he's just bringing that crazy ethos, his name's Henry, around speed.

Speaker 1:

Yeah.

Speaker 2:

And it's just amazing to witness. That's awesome. So, yeah.

Speaker 1:

Let's move on to Word Grammar. Word Grammar says, I'm feeling very similar to months ago. OpenAI did something groundbreaking. Now two other labs have replicated it. Soon, the rest will follow, making it a pretty undifferentiated market.

Speaker 1:

Yeah. We've talked about this a little bit. I mean, it's unclear, like, if OpenAI had not launched deep research and, like, the reasoning model, would DeepSeek have been able to come up with it? Like, is the idea valuable, or is everyone kind of on the same path? What do you think?

Speaker 2:

It's interesting. Because everybody's competing against many of the same benchmarks to show that they're competitive Yeah. They're all focusing on getting better at many of the same things Yeah. When what we really want, and which is why SSI is interesting, is they're basically saying we're not gonna release anything publicly until it's what we want it to be. Yeah.

Speaker 2:

So we're not gonna build based around these benchmarks. Yeah. And so I actually think the competition is good and bad. It's like a double-edged sword. Right?

Speaker 2:

It's good in that it's sort of pushing the pace and everybody's motivated to rapidly improve these models. But at the same time, it's not so great because we're getting models that consistently just look more and more like each other. Yep. And consumers are not even gonna be able to notice the difference. So they're just gonna probably default to what they already use and love, or, you know, maybe you're using Grok in X, but

Speaker 1:

not elsewhere. Interesting twist on that. So Paul Calcraft says, Elon posted an early Grok three screenshot saying he asked what Grok three thought of The Information, and it said it's garbage. And then Paul Calcraft asked the same question: what's your opinion on The Information?

Speaker 1:

And it told him it's a solid outlet. And you know what's going on here. It's fine-tuning on the person who queries it, their timeline. And so Grok three, when Elon prompts it, it knows Elon doesn't like The Information. Let's give him an answer that confirms what he believes. And it's the same thing for Paul Calcraft.

Speaker 1:

And so this is gonna raise a very interesting question, where people will kind of be in, like, LLM echo chambers almost, where they're like,

Speaker 2:

oh, yeah. I actually

Speaker 1:

had it tell me that, you know, microplastics aren't a big deal because I've been posting about that. Or microplastics are the worst thing ever because I've been posting about that.

Speaker 2:

Yeah. And this is exactly what already happens on X, broadly. The X echo chamber is so

Speaker 1:

real Yep.

Speaker 2:

Where you just sort of, you know, for me, I'll see, very, like, you know, randomly, somebody will have, like, a Bluesky in their profile. Yeah. And I'm like, oh, I haven't seen that in a while. Yeah. Yeah.

Speaker 2:

But, someone else is seeing that constantly.

Speaker 1:

Oh, yeah.

Speaker 2:

And they're probably using Bluesky. They're like, oh, yeah. Everybody I love uses Bluesky. Yeah. Right?

Speaker 2:

100%.

Speaker 1:

Yeah. It's very interesting. And you know what's gonna happen here is that the mainstream media is gonna do these tests like they did on, like, you know, TikTok and Instagram, where they'd be like, I followed a bunch of Nazi content, and then it gave me more Nazi content. And so it's like a Nazi algorithm.

Speaker 1:

They'll probably do the same thing where they'll, like, go into these LLMs, talk to it for a really long time, essentially fine-tune it on the idea that, like, they're crazy, and then it'll be like, it gave me crazy responses. And it's like, yeah, I get it. It probably shouldn't do that. But at the same time, like, you really, like, twisted its arm, and it's not doing that for most people.

Speaker 1:

So I don't really have that much of a problem with it as long as there's, like, some guidelines here and, like, directionally, it doesn't automatically steer you towards some negative echo chamber. Anyway, Word Grammar has another banger post here.

Speaker 2:

On a roll.

Speaker 1:

Richard Sutton's bitter lesson died January 22 with DeepSeek, and it was born again on February 17, less than a month later. So this is, of course, about scaling laws and Rich Sutton's bitter lesson, which we'll get into, of course, the idea that algorithms, while important, are not as important as scaling and compute power. And so scale is all you need. Let's go to another post from Liron Shapira. He says, people in 2022: Elon can't actually keep these servers online.

Speaker 1:

Elon now: the bonus tab in the app takes you to the highest-benchmarking frontier model. And so, yeah, real narrative violation for everyone that said Elon wasn't gonna be able to keep X, or Twitter, like, online. He's, like, not only doing that, but

Speaker 2:

The reason that you know it wasn't an issue is I don't have a memory of wanting to use

Speaker 1:

Mhmm.

Speaker 2:

Twitter and/or X and not being able to, and I think I would remember that for my whole life. Right? Such a daily habit. If you were going on there and just nothing was loading. It's, yeah, he really mogged everybody that was calling him out on that.

Speaker 1:

Yeah. Well, Andrej Karpathy shared more evals because he got access to Grok three. He says, thinking: Grok clearly has an around-state-of-the-art thinking model, the Think button, and did great out of the box on my Settlers of Catan question, which is: create a board game web page showing a hex grid, just like in the game Settlers of Catan. Each hex is numbered one to N, where N is the total number of hex tiles. Make it generic so one can change the number of rings using a slider.

Speaker 1:

For example, in Catan, the radius is three hexes. Single HTML page, please. Pretty good. Few models get this reliably right. The top OpenAI thinking models, o one pro, get it too, but all of DeepSeek R one, Gemini two point o Flash Thinking, and Claude do not.
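For reference, the hex-count math behind that Catan prompt is simple: counting the center tile as ring one, a board with r rings has 3r(r−1)+1 hexes, so the radius-three example lands on the standard 19-tile Catan board. A quick sketch:

```python
def hex_count(rings):
    """Tiles on a hex board with `rings` concentric rings,
    counting the center tile as ring 1 (Catan's radius-3 board -> 19)."""
    return 3 * rings * (rings - 1) + 1

print([hex_count(r) for r in range(1, 5)])  # [1, 7, 19, 37]
```

This is the N a model has to number the tiles up to, and the quantity the ring slider in the prompt would vary.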

Speaker 1:

It did not solve his emoji mystery question, where he gave it a smiling face with an attached message hidden inside Unicode variation selectors, even when he gave it a strong hint on how to decode it in the form of Rust code. This is, like, such a, I don't know if I could solve that, Andrej. I'm sorry. That is a very complicated eval, but it makes sense, I guess, and is probably valuable. It should break that eventually.
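For context on that emoji mystery: Unicode variation selectors are invisible codepoints that survive copy-paste, so you can smuggle one byte per selector after a visible character. A minimal sketch of the general technique, assuming a simple byte-to-selector mapping; Karpathy's exact encoding may differ:

```python
def encode(base, data):
    """Hide bytes after a base character using Unicode variation selectors.
    Bytes 0-15 map to U+FE00..U+FE0F; 16-255 map to U+E0100..U+E01EF.
    (Illustrative mapping, not necessarily the one used in the eval.)"""
    out = base
    for b in data:
        out += chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))
    return out

def decode(text):
    """Recover the hidden bytes by reversing the mapping above."""
    data = bytearray()
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            data.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            data.append(cp - 0xE0100 + 16)
    return bytes(data)

msg = encode("\N{GRINNING FACE}", b"hi there")
print(decode(msg))  # b'hi there'
```

The displayed string still looks like a single emoji, which is what makes the eval hard: the model has to notice the invisible codepoints at all before it can decode them.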

Speaker 1:

The most progress I've seen from this is DeepSeek R one, which once partially decoded the message. It solved a few tic-tac-toe boards he gave it with nice, clean chain of thought. I uploaded the GPT-2 paper. I asked it a bunch of simple lookup questions. All worked great.

Speaker 1:

And so he has a whole bunch of evals that he can kinda rip through, and I think that's a really cool way to do it. It's like, you know, do some web programming, build a website, like, solve tic-tac-toe, you know, decode this mystery, do this riddle. And then he also asked

Speaker 2:

an hour, I'm assuming. What?

Speaker 1:

You know? Yeah. Exactly. He said he played with it for, like, two hours. And so he did the same thing with deep search.

Speaker 1:

He asked, what's with the upcoming Apple launch? Any rumors? And he liked the answer there. Why is Palantir stock surging recently? He gave it a check.

Speaker 1:

White Lotus season three: where was it filmed, and is it the same team as seasons one and two? What toothpaste does Bryan Johnson use? It didn't get Singles Inferno season four cast and where are they now, which, I don't even know what Singles Inferno is, but, Singles Inferno, I guess that's the show he watches, which I love. And then, what speech-to-text program has Simon Willison mentioned he's using?

Speaker 1:

So, he says, I did find some sharp edges here. The model doesn't seem to like to reference X as a source by default. Interesting. Though you can explicitly ask it to. A few times, I caught it hallucinating URLs that don't exist.

Speaker 1:

A few times it said factual things that I think are incorrect, and it didn't provide a citation. It told me Kim Jong Soo is still dating Kim Min Soo of Singles Inferno season four, which surely is totally off. Right? This is hilarious. Andrej, are you into Singles Inferno?

Speaker 1:

I need to get up on this. This is good stuff. The impression I get of deep search is that it's approximately around where Perplexity's deep research offering is, which is great, but not yet at the level of OpenAI's Deep Research. Random LLM gotchas: sadly, the model's sense of humor does not seem to be obviously improved.

Speaker 1:

This is a common LLM issue with humor capability in general: mode collapse. Famously, 90% of 1,000 outputs asking ChatGPT for a joke were repetitions of the same 25 jokes.
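That repetition stat is easy to measure yourself: sample the model many times and count how much of the output mass lands on the most common completions. A toy sketch with made-up joke strings standing in for real samples:

```python
from collections import Counter

def repetition_rate(outputs, top_k):
    """Fraction of samples that are repeats of the top_k most common
    outputs, a crude gauge of the mode collapse described above."""
    top = Counter(outputs).most_common(top_k)
    return sum(count for _, count in top) / len(outputs)

# Toy data: 90 of 100 "samples" are the same two jokes (hypothetical)
jokes = ["joke A"] * 50 + ["joke B"] * 40 + [f"unique {i}" for i in range(10)]
print(repetition_rate(jokes, top_k=2))  # 0.9
```

The 1,000-joke study is the same idea at scale: with top_k set to 25, 90% of ChatGPT's outputs fell inside the top bucket.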

Speaker 2:

Crazy.

Speaker 1:

Got them dead to rights. GPT five, you're on notice. You better be funny. Even when prompted in more detail away from simple pun territory, example: give me a stand-up. I'm not sure that it is state-of-the-art humor.

Speaker 1:

Example generated joke: why did the chicken join a band? Because it had the drumsticks and wanted to be a cluck star.

Speaker 2:

That's an that's a real knee slapper.

Speaker 1:

Honestly, the jokes that come out of LLMs are so bad that sometimes they're very funny because it's just like, wow. Like, if somebody

Speaker 2:

It will be amazing when they do get to the point, because Yeah. I think everybody can see that we will, you know, the general belief is that we'll get there. Yeah. But, like, think about having a comedian in your pocket that knows what you think is funny. And it's basically, somebody will make a wrapper app that's just a big button, the laugh button, that you press, and it just generates a new joke. And it just gets better and better, and it even hears your response to it.

Speaker 2:

So it starts to learn on

Speaker 1:

what you laugh at. That would be really good training data, actually. Yeah. They could do that. Okay.

Speaker 1:

So let's move on to Grok three and OpenAI Deep Research. I got your analysis there. So I went to both Grok three and OpenAI Deep Research, and I asked this really, really long prompt. I said, I'd like you to build me a definitive table of LLM models and break down their evolution, specifically their relative sizes. So I wanna know, you know, Grok three came out.

Speaker 1:

It uses 20 k GPUs. I'm not exactly sure how many flops that is, but I would like to put all of these in the same terms, like they're all an equivalent number of flops. I wanna know the progression of GPT one to two to three to four for the pretraining runs. And then I wanna know the same thing that happened at, I said Anthropic; it got transcribed incorrectly, which is wrong.

Speaker 1:

And I want dates of when these were released. Then give me some uniform benchmark, like Chatbot Arena or MMLU. I really need you to go pull all this together. I'm like, come on, do this for me, bro. And then, I'd like you to investigate how the bitter lesson and Moore's law are interacting right now.

Speaker 1:

Essentially, the bitter lesson states that you always just wanna throw more scale at it, but it's unclear to me how the benefits of scale are proportional to the capabilities of these models. So my basic thesis is that, you know, we're potentially hitting diminishing marginal returns. We're gonna see less gains from increased scale. And so the real question is, Grok three is a 20 k GPU run. They're planning to do a hundred k GPU run, and the question is, does the five x or 10 x increase in compute power increase the model's final capability, usefulness, by 1%, ten percent?

Speaker 1:

Does it double it? Does it 10 x it? And that's what I'm trying to predict. And I wanna see these based on trend lines. What kind of results should I expect?

Speaker 1:

And specifically, I want not just quantitative results, but qualitative results. Like, what do we expect a 1,000,000 GPU cluster trained model to be able to do? Is it just high IQ? Is that the best measure? Like, how should we be measuring these things?

Speaker 1:

And is there an evolution where, like, you know, we have no top, no max? Do we just continue to push on? The final question is, like, does this continue to get better as we get to a billion GPUs, a trillion GPUs, a googol GPUs? What are we getting for that? I guess the question is, is intelligence unlimited, even if we assume that we're on the right algorithmic path?

Speaker 1:

That's the main question. And so, basically, I just wanted to, like, you know, dump out a ton of research and give me a bunch.

Speaker 2:

The only thing that could handle that kind of prompt is your secretary.

Speaker 1:

Right? It's a lot of stuff.

Speaker 2:

I mean, you just basically dumped a Yeah. You know, but it but you you certainly gave

Speaker 1:

me plenty to work with. What's interesting is if you look at page two and then page, like, six or something, you can actually see both Grok and OpenAI dumped out tables of how these models evolved. So GPT one, it claims, was trained on 10 GPUs, then GPT two, a hundred GPUs, GPT three, ten thousand GPUs, and then GPT four, twenty five thousand GPUs.

Speaker 2:

Which roughly tracks with the other one.

Speaker 1:

And yeah. And on the other one, it says a thousand and then 20 k. And so the big question here is on flops. This is the real, like, standard metric, the number of flops. And, like, the jargon term for this is just using the order of magnitude of the number of flops.

Speaker 1:

So Yeah. You can think of GPT three. You call that an e 23 model because it had 3.14 times 10 to the twenty third flops. And so Yeah. You call that an e 23 model.

Speaker 1:

GPT four was an e 25 model because it's not just more GPUs, but it's also more training time and more powerful GPUs. And so you can't just say, oh, they used five x GPUs, because they might have run them longer, more power, all these different things. Yeah. So you really just wanna do flops because that's the most standard. And so DeepMind's Chinchilla is an e 23 model, Llama 70 B is an e 24 model, and Grok three is an e 26 model.

Speaker 1:

And so
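Those "eN" labels come from total training FLOPs, and a common rule of thumb from the scaling-law papers is roughly six FLOPs per parameter per training token. A sketch using GPT three's published numbers (175 billion parameters, roughly 300 billion tokens), which recovers the 3.14e23 figure mentioned above:

```python
import math

def train_flops(params, tokens):
    """Rule-of-thumb total training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def e_label(flops):
    """The 'eN' shorthand used above: the order of magnitude of total FLOPs."""
    return f"e{math.floor(math.log10(flops))}"

gpt3 = train_flops(175e9, 300e9)
print(f"{gpt3:.2e}", e_label(gpt3))  # 3.15e+23 e23
```

This is also why GPU count alone is misleading, as noted above: tokens trained and per-chip throughput enter the product just as much as cluster size does.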

Speaker 2:

I don't know how I missed Chinchilla.

Speaker 1:

Oh, Chinchilla is a big one because that was the paper that Yeah. That defined the scaling laws. So Interesting. When people talk about the pretraining scaling law, they call it the Chinchilla scaling laws because it came from the Chinchilla paper.

Speaker 2:

Got it.

Speaker 1:

And so, in here, now this is where it gets weird, because ChatGPT claims that Grok three is an e 26 model, but Grok three claims that Grok three is an e 25 model. And so I don't know who's hallucinating here. I don't know who's more accurate.

Speaker 2:

They could both be hallucinating, to be clear.

Speaker 1:

They could.

Speaker 2:

No one is necessarily right.

Speaker 1:

And so Grok three claims that Grok three is less flop-intensive than GPT four, but GPT four claims that Grok three is more powerful than GPT four. And so they're both gassing each other up, I guess, in a weird way. I think OpenAI is more accurate here, but it's very funny. I don't know.

Speaker 1:

It's all very odd. And, like, I don't have time to fact-check all of this, so I'm kind of just, like, vibe-interpreting this. But let's read some of the analysis here on the bitter lesson. This is from Grok three, and then we'll read the ChatGPT version. The bitter lesson argues that AI progress relies on scaling compute and data over bespoke algorithms.

Speaker 1:

LLMs embody this. GPT three's success came from brute-force scale, not architectural breakthroughs. It was a very simple algorithm. They just gave it the entire web, and they trained it a lot with a lot of GPUs as the first, like, big training run, basically. However, your thesis about diminishing returns challenges this.

Speaker 1:

Does more compute always yield proportional gains? Moore's law has slowed. That's the law that says transistor density doubles every two years, with GPU performance now scaling via parallelism. And so we are scaling compute, but we're doing it by just getting more and more GPUs.

Speaker 2:

Yep.

Speaker 1:

LLM training outpaces hardware gains, relying on cluster size rather than per-chip efficiency. So we're not just doubling the training run every year. We're, like, 10 x-ing it because we're just building these monster data centers. So scaling laws, Kaplan and Hoffmann, show performance scales predictably with compute, parameters, and data. This implies a 10 x compute increase yields a 25 to 60% loss reduction, not 10 x.

Speaker 1:

MMLU gains reflect this GPT three to GPT four is a 60 x compute jump for a 50% score increase. I still don't really know how to interpret all this.
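The "predictable scaling" being cited is a power law of the form L(C) = (C_c/C)^α, so a fixed compute multiple buys a fixed percentage loss reduction that depends entirely on the fitted exponent α. A sketch; the 0.05 value is roughly Kaplan et al.'s compute exponent, and the other two exponents are hypothetical, chosen only to show where ranges like 25 to 60% can come from:

```python
def loss_reduction(compute_multiplier, alpha):
    """Fractional loss reduction from scaling compute by `compute_multiplier`
    under a power law L(C) = (C_c / C) ** alpha."""
    return 1 - compute_multiplier ** (-alpha)

# 10x compute under a few exponents (only ~0.05 is from Kaplan et al.)
for alpha in (0.05, 0.15, 0.40):
    print(f"alpha={alpha}: {loss_reduction(10, alpha):.0%}")
```

Note the reduction is in pretraining loss, not benchmark score; the mapping from loss to something like MMLU is nonlinear, which is part of why the percentages quoted by the two models are hard to compare directly.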

Speaker 2:

We're gonna have some guests on to break it down.

Speaker 1:

Yeah. And so, I don't know. It's interesting. The main thing is that, like, I think when most people play with this, they're like, it's great. It's good, but it's not, like, a complete step change from what else I'm seeing online.

Speaker 2:

The big headline takeaway from last night is that xAI is catching up on raw model capabilities Yeah. In a very short period of time. Yeah. In terms of product usefulness, it doesn't seem to be there yet. Right?

Speaker 2:

Because even you're seeing, okay, well, you trained on all of my posts Yeah. But I actually don't even necessarily like that Yeah. That much. And so they seem to be on a collision course with OpenAI from a capability standpoint, especially as they get into these, you know, next runs.

Speaker 2:

Yeah. But, again, this is going back to my earlier point. I look at all this, you know, Elon hitting these evals, but then simultaneously still wanting to spend a hundred billion dollars on OpenAI when he could just, you know, spend that money on xAI. And, you know, again, we don't know if he's doing it to throw a wrench in the process, but it seems like he would love to be chairman of OpenAI, is, like, my read of the situation.

Speaker 1:

This is interesting. Yeah. And the ChatGPT deep research is, I think, much more readable as a, it just reads like a research paper. So,

Speaker 2:

this goes back to them innovating at the product level. Yeah. And what Ben Thompson said is he's not sure if they just have a better model that's producing better outputs Yeah. Or they're just refining the actual product layer more.

Speaker 1:

Yeah. Yeah. So it says increasing model size and compute generally improves performance, but with diminishing returns. And so that's the real question here: like, if we hundred-x the models like we're planning to, and it just gets a little bit better, it might not actually be able to do real work still, because it might still break down and hallucinate a little bit. Like Yeah.

Speaker 1:

Like, we're all hoping for the model that just, like, never hallucinates ever and can do a ton of stuff and hold everything in memory and

Speaker 2:

Which may not ever come.

Speaker 1:

Maybe. I think we will get there. It's just the question of, like, you know, what if it's a thousand orders of magnitude? Like, you know, like, that would be, yes, the scaling law would hold.

Speaker 1:

Like, the scaling law would be correct. But, like, it would take a long time to build, like, a Dyson sphere for

Speaker 2:

that Yeah.

Speaker 1:

Basically. Early work by OpenAI showed that large language model loss follows a power law. Performance improves predictably as model parameters and data scale up. And this was that picture of, like, GPT three to GPT four: more parameters, more data, and so everyone was going bigger, bigger, bigger. For example, going from GPT three, at a hundred seventy five billion parameters, to models like Gopher at two eighty billion and Megatron-Turing at five thirty billion improved benchmarks, but not proportionally to the huge jump in compute required.

Speaker 1:

DeepMind's two eighty b Gopher outperformed GPT three on knowledge tasks, reading comprehension, and fact-checking, yet it saw little gain in logical reasoning or common sense tasks despite 60% more compute. This suggests some capabilities plateau unless new approaches are used, and that's Yeah. The reasoning model. Right? And so diminishing returns are evident on many benchmarks.

Speaker 1:

As models grow, metrics like MMLU, a broad knowledge test, improve, but approach an asymptote near human performance. And that's frustrating because Yeah. I would want linear progress that just blows through human performance, but it's really like we're kind of maybe we need new evals. I don't know. GPT three achieved only around 50% on MMLU, well below the 90% expert human level.

Speaker 1:

So that's the human level benchmark. Gopher did 60%. Chinchilla did 67%. GPT four jumped to 86%, nearing human experts. However, GPT four's training used roughly two orders of magnitude more compute than GPT three for that last 30 to 40 point gain.
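The arithmetic behind those figures is easy to sanity-check; a back-of-envelope Python sketch, where the 100x compute ratio is just the rough "two orders of magnitude" quoted here, not a measured number:

```python
import math

# MMLU scores quoted in the conversation: GPT-3 ~50%, GPT-4 ~86%,
# with roughly two orders of magnitude (100x) more training compute.
gpt3_mmlu, gpt4_mmlu = 50.0, 86.0
compute_ratio = 100.0  # rough stand-in; only the ratio matters

points_gained = gpt4_mmlu - gpt3_mmlu            # 36 points
orders_of_magnitude = math.log10(compute_ratio)  # 2.0
points_per_oom = points_gained / orders_of_magnitude
print(points_per_oom)  # 18.0 MMLU points per 10x compute
```

So each 10x of compute bought on the order of 18 points over that stretch, and with the score capped near 90%, the next points necessarily cost more compute each.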

Speaker 1:

So it's still very important, but it's just, yeah, diminishing. Each additional few points toward 90% is increasingly expensive in compute. Similarly, coding and math benchmarks saw massive leaps from GPT three to GPT four, but further gains beyond GPT four are smaller. In xAI's Grok three, pushing to an unprecedented 200 k GPU cluster only yielded moderate improvements over the prior state of the art. Hallucination.

Speaker 1:

They didn't they didn't use all 200 k GPUs. And and here it's saying that they did.

Speaker 2:

Well, yes. If you look in

Speaker 1:

the God.

Speaker 2:

Like, this is just They OpenAI specifically says in their research up to 200,000 NVIDIA GPUs, a hundred to 200 k GPUs over four months, but we know that's not true.

Speaker 1:

Yeah. And so, it's a mess. I don't know. We're we're close to something. It's cool.

Speaker 1:

I like it. We gotta step these up. But, well, we've compared

Speaker 2:

their models. Should we talk about their dynamic?

Speaker 1:

Yeah. Absolutely. Let's move on to the Wall Street Journal. The inside story of how Altman and Musk went from friends to bitter enemies. Kick us off.

Speaker 2:

On the first full day of the second Trump presidency, Elon Musk was in the White House Complex when he got word that his nemesis This is amazing. Amazing first sentence here. I don't know who's writing this. He turned on the television and watched as OpenAI's chief executive, Sam Altman, and a beaming Donald Trump touted a $500,000,000,000 investment in AI infrastructure called Stargate. And this is a hilarious dynamic if it's actually true where he's turning on the TV.

Speaker 2:

Yeah. He's got x open on his phone. He's just shaking his head. Like, one, Sam is is an absolute dog for even making this happen. Yeah.

Speaker 2:

You gotta be pretty brave to just go into the White House. The lion's den. Elon's got, like, six of his kids running around. They know you're a target. That's great.

Speaker 2:

So props to even making that happen. Masa, I'm sure, was, you know, making some magic happen too. So despite having rarely left the president's side over the preceding few months, Musk was blindsided by the announcement, according to people familiar with the matter. Musk fumed to aides and allies about the announcement, claiming Stargate's backers didn't have the money they needed. The deepest which is true.

Speaker 2:

Right?

Speaker 1:

Yeah. They

Speaker 2:

they it sort of came out that they didn't have it. It was more of like a roadshow, you know, SPAC announcement type thing. So the deepest cut was Altman's success navigating Trump world via a carefully coordinated series of recent meetings in Palm Beach and phone calls with the White House while keeping the plan secret from the president's first buddy,

Speaker 1:

which is great. Oh, yeah. Buddy cop. Yeah. They're going buddy cop mode.

Speaker 2:

Buddy cop mode.

Speaker 1:

But, Trump's got a lot of buddies.

Speaker 2:

Altman and Musk cofounded OpenAI in 2015, but their relationship soured when Musk left in 2018 following a power struggle. It worsened when Musk responded to the launch of ChatGPT by launching his own rival startup, xAI. This week, the feud went nuclear when Musk followed the Stargate unveiling with his own bombshell, a hostile $97,400,000,000 bid for the assets of the nonprofit that controls OpenAI. A decade after joining forces, they are now fighting for control of the very thing that brought them together in one of the highest stakes and most personal fights in recent business history. The outcome could determine everything from the future of a world changing technology to who will help set the nation's technology agenda with the new president.

Speaker 1:

Interesting.

Speaker 2:

This article is based on the conversations with more than a dozen people familiar with Altman and Musk's relationship over the years as well as OpenAI and Musk's business and political decisions.

Speaker 1:

We gotta go back and watch the, Sam Altman, Elon Musk podcast. Have you seen this?

Speaker 2:

They had their own show together?

Speaker 1:

It wasn't it it it was Y Combinator had a, a series of interviews on their YouTube channel. And Sam went and conducted a number of interviews, and one of them was with Elon. And so they're sitting in the Tesla factory just chopping it up about, like, entrepreneurship and wisdom. And it's just, like, it's crazy to see. I'll I'll I'll continue.

Speaker 1:

So in many ways, Sam Altman, thirty nine, and Elon Musk, fifty three, couldn't be more different. While Musk was beaten up and verbally abused as a child, Altman was a teacher's pet whose parents routinely told him he could be whatever he wanted to be. Where Musk was often abrasive, Altman tended to tell people what they wanted to hear. And while Musk was an engineer, steeping himself in the details of rocket and battery design, Altman is a technology obsessed intellectual, reading widely across philosophy, science, and literature, and penning essays on how society should organize itself. But both have a strikingly similar taste for power.

Speaker 1:

Let's go.

Speaker 2:

Taste for power. That that should be one of our hiring metrics. Yes. John, what do you think this guy's taste for power is? You think he's, you think he's real hungry?

Speaker 1:

Yeah. I mean, if you're in an interview

Speaker 2:

You can tell Ben's got a real taste for power.

Speaker 1:

You know? Absolutely. Yeah. If you're in an interview and somebody asks you, where do you wanna be in ten years? And you just say supremely powerful.

Speaker 1:

I think you get the job. I got this under I think you get the job. Oh, yeah. Yeah. That that's the video.

Speaker 1:

Look at this. Very poorly shot.

Speaker 2:

Doesn't even feel that long ago.

Speaker 1:

It wasn't that long ago. The cinematography was not incredible, but,

Speaker 2:

I think it was probably twenty sixteen. Andrew Tate. Yeah. Early

Speaker 1:

Twenty fifteen. Wow. Yeah. That is wild seeing them hang out. Just bros, just guys being dudes doing a pod together.

Speaker 1:

You love to see it. It's too bad. I I mean, they keep coming back to the fact that mom and dad are fighting. You know, these are two you know, everyone everyone picks a side, but, ultimately, these are Americans. They're building technology.

Speaker 1:

I love technology. I love America. And Yeah. I I I wish they could just be on the same team and and build something awesome together. I hope that they can have a reconciliation.

Speaker 1:

That's my biggest hope. Well, and so for years, the millennial Altman looked up to the Gen X Musk as a hero, a real life Tony Stark who provided a counterexample to the country's technological stagnation that Altman railed against when he was president of the startup accelerator Y Combinator. Altman met Musk years earlier when Y Combinator partner Jeff Ralston introduced them. Oh, I know Ralston pretty well. Oh, they misspelled his name here.

Speaker 1:

And helped arrange for Altman to tour Musk's SpaceX rocket factory. Interesting. Altman's time leading Y Combinator from 2014 to 2019 put him at the epicenter of power in Silicon Valley. He became known as a fixer with an unrivaled Rolodex who could call in favors for the startups he invested in or punish investors who crossed them. This is 100% true.

Speaker 1:

He was great at what he did during this time. He was fantastic. His special talent was raising money, which he would do by arriving in his signature uniform of jeans and sneakers, curling his small frame up cross legged in a conference room chair, and unspooling a vision so grandiose, compelling, and earnest that it often seemed like investors were powerless to keep from funding his projects. It's great. I mean, great storyteller, great fundraiser.

Speaker 1:

And what's interesting is that, like, yeah, he raised some money. He raised a lot of money for his projects, but he was also, like, an angel investor who got tons of deals done for other people. He set other people up with funds. Like, he was he was marshaling capital all over the place.

Speaker 2:

I had heard that he was able to invest in Stripe at a $700,000 valuation.

Speaker 1:

I I don't know, but

Speaker 2:

I heard it was, like, 10 k on 700 k or something.

Speaker 1:

Billionaire just from Stripe Yeah. Basically. Somehow. I don't know.

Speaker 2:

Yeah.

Speaker 1:

But, I mean, yeah, phenomenal investor. And then So,

Speaker 2:

anyway, for anyone listening, if you wanna be a billionaire, just put 10 k into the next Stripe. Yeah. Yeah. That's good advice. People tend to make it

Speaker 1:

really Yeah. Why are you overcomplicating it? Yeah.

Speaker 2:

Think, oh, it takes years.

Speaker 1:

Exactly. All

Speaker 2:

this hard work. Now just take $10 and just put it into the next stripe.

Speaker 1:

Yeah. And, I mean, he's even doing it more recently with Oklo, that, nuclear startup, which is public. And it's a SPAC, and it's a successful SPAC. The stocks are way up. What a narrative

Speaker 2:

about that. Like, if you if you see people talking about it, you know, on x, the company, they call it Sam Altman's nuclear startup. Yeah. Yeah. Yeah.

Speaker 2:

So, like, that is probably, like, the dominant narrative causing it to pump. Yeah. I actually have to check. I own a little bit of it.

Speaker 1:

You know oh, you own. Okay. Not financial advice here. I am. We are gonna check the stock on public.

Speaker 2:

Yeah. I am down 4% today. So Oh, it's not it's pulling back? It had to.

Speaker 1:

Well

Speaker 2:

It had to.

Speaker 1:

Make up your own mind. Go in public, check it out, and see if it's for you. Yeah. Maybe. Anyway Not

Speaker 2:

that we we certainly do not recommend.

Speaker 1:

No. We don't recommend individual stocks, but we do recommend becoming a multi strategy, multi stage, large asset management firm Yeah. And running your business through Public. Yep. Become a long short Few.

Speaker 1:

Equity hedge fund.

Speaker 2:

There you go.

Speaker 1:

And it all starts with Public. In early twenty fifteen, Musk and Altman began having regular dinners each Wednesday in the Bay Area. Their conversations tended toward the apocalyptic: how the world might end, how they might prepare for it, where they might have to flee. A likely cause, they agreed, would be artificial intelligence that grows smarter than humans and impossible to control. That May, Altman suggested they create a Manhattan Project to develop artificial general intelligence, or AGI, that is as smart as humans at most tasks.

Speaker 1:

They wanted to ensure Google, which had a huge lead in developing the technology, didn't end up deciding what it would mean for the human race. Such a fascinating solution, you know. Like, there is another world where you're like, we're just gonna try and, like, you know, use military might to just actually, like, ban the development of this technology.

Speaker 2:

Yeah. Like,

Speaker 1:

It's quite interesting. I wouldn't advocate for that at all. And I think AGI is I think the whole history of this stuff is going very well. I'm very satisfied with how it's going. But, you know, there is another world where you're like

Speaker 2:

Yeah.

Speaker 1:

Hey. Yeah. Like, there's this technology and, like, we need to, you know, steer people away from development of this and, like, ban this and control this and then have international coalitions and, you know, military might to enforce it. Like, there are things you can't build, you know. Dark web drug marketplaces are a good example.

Speaker 2:

It's worth noting that Sam Altman and Musk had these meetings, and then Altman decided that he was gonna get really into building bunkers Yeah. Sort of doomsday, you know, set up fly

Speaker 1:

on the wall.

Speaker 2:

So I would love to have been a part of those conversations.

Speaker 1:

And so they joined forces. They raised up to a billion dollars. Musk pledged to supply the lion's share of the money, and they would lead it as co chairmen. Oh. The co CEO thing is all around.

Speaker 2:

Co chairman is hilarious.

Speaker 1:

Yeah. They should've figured that out early on. It's very clear that they didn't play out, like, okay. Like, what if this is actually, like, a trillion dollar opportunity? Like, who will wanna run this?

Speaker 1:

And they were both, like, well, obviously, it'll be me. It's not there's no question.

Speaker 2:

Of course.

Speaker 1:

You know? Their relationship began to disintegrate in 2017, pretty quickly. I mean, in 2015, they're having dinners. By 2017, they start disintegrating, after OpenAI researchers realized they would need far more money than a nonprofit could raise to develop advanced AI. We talked about this, how Yep.

Speaker 1:

If you wanna summon the AI god in a box, you gotta be a capitalist. It's not happening in a nonprofit. You're not just writing some clever algorithm with some altruistic Although

Speaker 2:

we do, we are interested in taking PETA private.

Speaker 1:

Exactly. There's a lot of stuff. Private. No. It's a for profit version.

Speaker 2:

For profit conversion. For

Speaker 1:

profit conversion, then it'll be private, and then we'll take it public. And then we'll probably take it private again, as as the private investor.

Speaker 2:

Bravo will take it public. Take it private.

Speaker 1:

Yep. Yep. Yep. You know the post it back here. Yeah.

Speaker 1:

It's great. According to one of their emails, Elon demanded majority control and to be CEO. Altman's successful move to block his mentor would mark the beginning of the rupture. He convinced another cofounder, Greg Brockman, to back him over Musk. Brockman reeled in OpenAI chief scientist Ilya Sutskever to also back Altman.

Speaker 1:

Brockman and Sutskever wrote in an email to Musk that since OpenAI was founded to avoid an AI dictatorship, it seemed like a bad idea to create a structure where you could become a dictator if you chose to. Within hours, Musk wrote back that this was the final straw. By early twenty eighteen, he had left the company, and Altman took over leadership. This was, like, very under discussed at the time because OpenAI was just such a research lab, and they were doing, like, little things here and there. Like, this was definitely not in the news in the same way.

Speaker 1:

Like, in 2018, people were talking about SpaceX and Tesla, mostly. Yeah. So, over the next few years, they focused on research. Then in 2022, they released ChatGPT. This became huge.

Speaker 1:

Turned out to be one of the most successful and transformative consumer technology products of the century, in the company of the iPhone, Facebook, and TikTok. As shocked as the rest of the world that AI had gone mainstream, and upset that he wasn't a part of it, Musk began publicly criticizing OpenAI for moving too fast and not taking safety seriously. He signed an open letter calling for a six month pause on AI development. Within a few months, launched

Speaker 2:

It's funny. A lot of people called for similar you know, signed similar letters calling for a pause on new podcast formation.

Speaker 1:

Oh, yeah. And, of

Speaker 2:

course, that didn't get through.

Speaker 1:

Nope. You can't stop it. The arrow of progress moves but one way. Yeah. In 2024, he attacked Altman in a new venue, court, after suing OpenAI and its CEO that February.

Speaker 1:

He withdrew the suit in June, refiled it in August, and amended it in November. They've been going back and forth in the courts for a long time. Musk's lawyers declared the perfidy and deceit are of Shakespearean proportions. You know it's a good lawyer.

Speaker 2:

You gotta pay top dollar for a lawyer that can that can hit you.

Speaker 1:

That's not a thousand dollar an hour lawyer. That's a $5,000 an hour lawyer.

Speaker 2:

That's 10 k, potentially.

Speaker 1:

Altman and Musk were

Speaker 2:

bitter. Could never.

Speaker 1:

Yeah. And so Altman said Musk was bitter that he left before the company succeeded. As Musk's legal attacks escalated, Altman watched with growing alarm as Musk grew closer and closer to Donald Trump, campaigning by his side and spending hundreds of millions of dollars to support him. You wanna continue?

Speaker 2:

Yeah. And, I mean, there's lines from Altman interviews where he's like, you know, I would hope that, you know, the president wouldn't sort of, pick favorites when we're here, you know, all trying to develop this transformative technology. And to Trump's credit, he seemingly hasn't picked favorites specifically within AI. Right? He obviously has a good relationship with Elon, but he's not we haven't seen him go out and do any sort of press conference

Speaker 1:

violation. Yeah. Like, everyone thinks Trump is gonna be super corrupt with this stuff, and he's just like, you know what?

Speaker 2:

He's making he's having his cash bonanza. Yeah. Our our listeners know that, like Yeah. He's,

Speaker 1:

Deal guy.

Speaker 2:

Deal guy. Deal guy.

Speaker 1:

Anyway, Altman first mentioned Stargate to OpenAI's board in 2023 as a way to vastly increase the computing power his company could tap to develop and operate AI. He originally brought the idea to Microsoft, asking it to invest upward of a hundred billion, but in the wake of an episode in 2023 when Altman was ousted from the CEO perch for five days, the tech giant balked. Says, get it together, guys. We're not putting up a hundred billion for a guy who can't hold on to his job. He gets upset.

Speaker 1:

It's so funny. Like, there's so much drama here. Altman soon found partners. One was SoftBank's Masayoshi Son, whom Altman had known since his Y Combinator days.

Speaker 1:

The second was Larry Ellison, a longtime friend of Musk who was hung out to dry when xAI pulled out of a Texas data center project that Ellison's company Oracle was working on. Altman agreed OpenAI would take it over. The project grew into the foundation for Stargate. I didn't know

Speaker 2:

that Interesting.

Speaker 1:

That it had originally been an XAI project. But then, I guess, Musk wanted to do the the the data center, like, himself and, like, own the whole stack.

Speaker 2:

And isn't l isn't Allison heavily invested in x?

Speaker 1:

I don't know. Yeah. I think so. I mean, the the they're all super conflicted and, like, you know, it's all just, like, stealing his tail over here.

Speaker 2:

Yeah. So he I think he put in at at least a billion.

Speaker 1:

Okay. Look that up, and I'm gonna keep reading. In December, Masayoshi Son played golf with the president-elect at Mar A Lago and announced his intention to invest a hundred billion dollars in US infrastructure projects alongside Trump and Lutnick. Their press conference effectively previewed Stargate without making any of the details public, which ensured Musk still didn't know about OpenAI's involvement.

Speaker 2:

Here's a great AI overview, a Gemini overview.

Speaker 1:

Okay.

Speaker 2:

Quoting, you know, using the Washington Post as a source: Larry Ellison, cofounder of Oracle, invested $1,000,000,000 in X. However, as of September 2024, he had lost 720,000,000 of his investment, according to the Washington Post. Which, clearly, he didn't lose his investment.

Speaker 1:

He had a paper loss.

Speaker 2:

He had a paper And

Speaker 1:

then it went back up, baby.

Speaker 2:

And it has to be way back up by now. So

Speaker 1:

Very odd.

Speaker 2:

Gonna have to put the Washington Post in the truth zone.

Speaker 1:

And so Altman goes to the inauguration festivities. He doesn't sit with the other tech CEOs alongside Musk. There was, like, you know, the real, like, killers were up there, like Zuck and Bezos and Musk, up in kind of a balcony area. And then Altman was down with Theo Von and Alexandr Wang. Yeah.

Speaker 1:

It was a bunch of cool, like, little micro crews. I think Jake Paul was over there for some reason. The next day, Altman and his partners arrived at the White House, where they more fully explained their plans for Stargate to Trump. Trump told the group he wanted to go ahead with the announcement. The new president loved that they were aiming to invest 500,000,000,000 during his term, a number sure to make headlines.

Speaker 1:

It makes sense. He wants a lot of jobs, wants a lot of economic growth, spend the money in America. And so Musk gets upset. He's fuming to aides about how the partners didn't really have the funding lined up for the project. He called it fake on X.

Speaker 1:

Musk was already plotting a countermove and had been considering making a bid for the nonprofit. Musk said he was inspired to make the bid because OpenAI was in the midst of becoming a for profit company, and he believed Altman planned to undervalue the assets of the nonprofit, which would become an independent charity with a stake in the for profit. But Musk's more primal message was for investors. Let's go to war with Sam Altman. Altman was at the Paris AI summit when news of the bid broke, and so he got, like, TMZ'ed a little bit, with, you know, the reporter saying, hey.

Speaker 1:

What do you think about this? And he's like, oh my god. I guess, I can't deal with this.

Speaker 2:

Basically, he was saying, like, I just want him to stop really bad. That was it. That was the immediate take.

Speaker 1:

And then so he said, OpenAI is not for sale, and the board has unanimously rejected Mister Musk's latest attempt to disrupt his competition, said Bret Taylor, chairman of OpenAI's board. I thought Bret Taylor's building an AI company separately. I guess he is. I think he's probably doing both. Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure AGI benefits all of humanity.

Speaker 1:

OpenAI's rejection comes as no surprise, says Musk's lawyer, Marc Toberoff. Musk had said he wanted to save the company from the dangerous direction in which his cofounder had taken it. It's time for OpenAI to return to the open source, safety focused force for good it once was, he pronounced. We will make sure that happens. Altman responded in his signature brand of nice guy savagery.

Speaker 1:

Probably his whole life is from a position of insecurity, he said on Bloomberg TV. I feel for the guy. I don't think he's a happy person. I do feel for him. Wow.

Speaker 1:

Like, it's wild

Speaker 2:

because I think if Elon Elon would tell you, I don't want to be happy.

Speaker 1:

Yeah. He's upset about the

Speaker 2:

Don't use that metric to evaluate like, to to try to judge me. Yeah. Right? Yeah. His motivation is to put a data center on Mars.

Speaker 1:

Yeah. That's true. Anyway

Speaker 2:

Absolutely wild. Well, we got some breaking news. Thank you to Ben for surfacing it, from Perplexity

Speaker 1:

Okay.

Speaker 2:

Who's also they've been in the timeline recently because,

Speaker 1:

They're fighting it out. Everyone's in

Speaker 2:

the trenches. The Perplexity CEO likes to use Sam Altman's replies as a marketing engine for Perplexity. It's kind of his strategy now. So, Perplexity says, today we're open sourcing R1 1776. Cool name.

Speaker 2:

Gotta give him credit for that. I like that. A version of the DeepSeek R1 model that's been post trained to provide uncensored, unbiased, and factual information. To keep our model uncensored on sensitive topics, we created a diverse, multilingual evaluation set of a thousand examples using human annotators and specially designed LLM judges. We compared the frequency of censorship in the original R1 and state of the art LLMs to R1 1776.

Speaker 2:

You can see a chart, Ben, if you maybe you can pull it up, or or we can do it in post. Yeah. We also ensured the model's math and reasoning abilities remained intact after the uncensoring process. Benchmark evaluations showed it performed on par with the base r one model indicating that uncensoring had no impact on core reasoning capabilities.

Speaker 1:

That's cool.

Speaker 2:

So

Speaker 1:

I mean, hot take: like, I don't really care about LLM censorship all that much because, like, how many times are you doing, you know, research on the Tiananmen Square massacre? Like, you know, it's like Yeah. Most of the time, I'm like, make this recipe list or, like, you know Yeah. Pull some GPU data together, or, you know, something that wouldn't even be censored either way. But it's cool that they did it. And 1776, obviously, very American branded, very, you know, American dynamism coded, and probably, like, a fun marketing stunt.

Speaker 1:

And it's good because there was a lot of, like, is DeepSeek secretly bad and, like, poisoning things? So I like that they, like, did the work to clean it up. I think that's cool.

Speaker 2:

So I think perplexity can be very useful. Yep. I don't use it a ton. Yep. I use it from time to time.

Speaker 2:

Yep. I find it can be fantastic. It's just more so the friction of opening Perplexity and searching ends up being high for a lot of this sort of faster stuff. You know, I just opened Google Search, and I got the information I wanted, even though, like, you know, I asked how much did Larry Ellison invest in X, and

Speaker 1:

It's just

Speaker 2:

Gemini, like, botched it and, like, gave this highly politicized answer, but I still got the data that I wanted. Yeah. Yeah. Yeah. That's good.

Speaker 2:

So I think perplexity would have done a much better job of that. Yeah. I should probably just, like, start, you know, using it more.

Speaker 1:

Yeah.

Speaker 2:

But overall, like, their product strategy right now feels very reactionary. Like, it seems like the CEO is sort of, like, watching what other people are doing, figuring out how to make it about them. Right? Hey. Here, like, we basically made R1 into a better consumer product for Americans.

Speaker 2:

Look at us. And he's launching their sort of deep research competitor, which, again, even Karpathy was saying, you know, is not quite on par in terms of how people are using deep research broadly today. So

Speaker 1:

Yep.

Speaker 2:

Yeah. Overall, you know, just focusing your messaging on your top competitor, your former boss's reply section. I'm not super bullish

Speaker 1:

on that strategy. What's interesting is Perplexity does have another feature, like, a tab, that I actually think is really cool and I was super bullish on, but then I didn't wind up using all that much. It's an algorithmic feed of news stories that are AI generated. So, like, the top story for me is Grok three, which, obviously, I'm interested in talking about. And then you click on it, and it shows you a bunch of sources and key features.

Speaker 1:

And it's very nicely organized, and it's not, like, super ad riddled and stuff. And I thought this was really cool. But at the same time, like, I would rather experience the Grok three launch on X, in, like, the messy timeline. And so I haven't been using that as much as I thought. And we were talking about this yesterday. Like, is there room for a new Hacker News or a new Techmeme?

Speaker 1:

And Perplexity kinda built it. I think they did a great job on product execution Yeah. In that tab. I think it's quite good. But I think that in 2025, people want a feed that is more chaotic, more combative, more comedic, more entertaining.

Speaker 1:

Exactly. So I wanna see the actual live stream that I can just go watch, the definitive source. I wanna see the community notes. I wanna see Karpathy, and then I wanna see Growing Daniel meme it, and I wanna see Roon respond, and I wanna see Yacine respond. And I like getting my news that way, honestly, more than, hey.

Speaker 1:

There's this sanitized AI summary. That's helpful, but it's not as fun.

Speaker 2:

Yeah. It's interesting. So I'm looking at the, like, free productivity charts right now, which a lot of these, AI apps are are under productivity. Yep. So perplexity is at 26.

Speaker 2:

Grok's at or, sorry, Grok's at two. ChatGPT is at one. And so this is just a snapshot in the App Store right

Speaker 1:

now. Way.

Speaker 2:

This is a snapshot of a moment in time. DeepSeek.

Speaker 1:

I thought they were gonna destroy everything.

Speaker 2:

DeepSeek is number three. Okay. And then perplexity is down at number 26.

Speaker 1:

Okay.

Speaker 2:

Behind things like HP Smart, their printer

Speaker 1:

No. No.

Speaker 2:

No. The ringtones maker, the ring app, you know, random VPNs. Speechify is actually ranking.

Speaker 1:

Cliff Weitzman's company. My boy. Have you met Cliff?

Speaker 2:

The founder

Speaker 1:

of Speechify. Your dog.

Speaker 2:

Yeah. You you had mentioned that. So

Speaker 1:

Beast. Beast on the network.

Speaker 2:

Perplexity is 26. They have, like, 27,000 ratings. Chat on AI, which is basically an OpenAI alternative, has 94,000 ratings and is only at 28. So this is, like, a seemingly no name, you know, sort of the most blatant wrapper

Speaker 1:

that you

Speaker 2:

can think of. So I don't know. I think that I don't see this model getting them to be in the top five. Mhmm. So to me, if they're, you know, launching R1 1776, do consumers in the App Store actually care about that?

Speaker 2:

Because if you're competing with Google, you need to not be worried about

Speaker 1:

Yeah.

Speaker 2:

How cool x is gonna think your product release is and more so how you go from 26 on the charts to top three. Right? If you actually wanna be competitive. He clearly wants to compete with Sam. Every single day, he's responding to Sam.

Speaker 2:

So Yep. I think, I think this launch is super cool. I definitely wanna play around with it. But, you know, at the end

Speaker 1:

Yeah. I've honestly seen the same thing, oddly, in the nicotine pouch world, where I've seen, like, people be like, oh, I'm launching a new nicotine pouch with, like, an American flag on it. And, like, okay. It'll get, like, you know, a thousand likes on X, and then it's like, okay. Like, do you have $10,000,000 to spend with 7-Eleven, Walmart, and, you know, QuikTrip to actually get your product in stores?

Speaker 1:

It's like the thing that actually drives, like, the adoption loop is, like, completely separate from, like, popularity. Totally. And it's just yeah. Yeah. It's interesting.

Speaker 1:

I wonder, like, Perplexity, I think, is cool because, like, they're not just doing the vanilla, we're just doing another chatbot. There's already 17 of those that have, like, billions of dollars. At least they're trying a different product instantiation. I think that is cool.

Speaker 1:

But but, yeah, I I I do wonder, how they can get how they can get just more how can I fit it into my life and how it can really get to, you know, solid traction? And, yeah, I mean, maybe speed. You mentioned speed. Like, if it was faster than Google and it was something that I could make my default search in Safari on iOS somehow The challenge might be down for that.

Speaker 2:

Is if you're sitting in Google Chrome working, it's always gonna be faster to hit command t and immediately start searching.

Speaker 1:

Now now now if if Arvind, the CEO of Perplexity, made a compelling case on x that I should go into my Chrome tab and reroute the default Chrome search to Perplexity and showed me how to do that, I might do that and and try it.

Speaker 2:

See how it goes.

Speaker 1:

But it has to be a faster product than Google does.

Speaker 2:

Google can continue to make that super hard, or they could just say, you're not allowed to go competitive.

Speaker 1:

And and and, again, it might not be, like, an actual consumer, like Yeah. Viral growth strategy. Like, it might not it might Yeah. Anyway, should we wrap up, Grok? I think we did a full deep dive.

Speaker 1:

Yeah. That was great. Now you know. Where where are we on time? We're at 12:20, an hour and a half.

Speaker 1:

Okay. Well, let's go through some timeline. We had a bunch of good stuff.

Speaker 2:

By the way Yeah. I'm loving the new layout. Yeah. It's great. Yeah.

Speaker 2:

Thank you for the the comments on it. We're trying to make the show better every single day in different ways. Sometimes we wake up, with a bad sleep score, but we're still actually had a I actually had a

Speaker 1:

those sleep score

Speaker 2:

I had a rough night.

Speaker 1:

Know. I, I went to bed a little late. So God. I'm sure I'm getting docked. Let's see.

Speaker 2:

I have to give I have to give a little anecdote.

Speaker 1:

82. I got destroyed. I

Speaker 2:

got 83.

Speaker 1:

6 40 2.

Speaker 2:

Can you imagine if can you imagine if they were just, like, in the background, like, listening, like, adjusting them to try to make us, like, super competitive.

Speaker 1:

Wait. What'd you get? I got

Speaker 2:

83.

Speaker 1:

80 three, you b me. I b

Speaker 2:

you by one. No. But so last so I made a mistake this morning. I got up at five. Yeah.

Speaker 2:

My Eight Sleep warmed my bed Yep. Like, five, you know, whatever to to so that I I was sort of, like, waking up naturally. Yeah. And then at five, it starts doing the vibrating alarm, which is really nice. The issue is that I snooze the alarm.

Speaker 2:

I didn't turn it off. Oh. And then I just got out of bed Yeah. And I left. And so I was in the car With

Speaker 1:

your wife.

Speaker 2:

And and she's

Speaker 1:

She's, like, vibing.

Speaker 2:

Me being, like, please turn off your alarm. Yeah. Which is funny because normally people are, like, turn off your alarm because you're still sleeping. Yeah. But turning off your alarm because you're already out.

Speaker 1:

In the car. That's great.

Speaker 2:

But the it's it's, know, connected via Wi Fi, so I was able to just, like, turn

Speaker 1:

it off. I mean, for what it's worth, I think I still like, the Eight Sleep got me to a place where even though I didn't have the best night's sleep, it went a little bit further. The six and six and three quarters hours of sleep I got were fantastic.

Speaker 2:

Fantastic. But

Speaker 1:

I'm ready to lock

Speaker 2:

in tonight. Started that point by saying our we actually are genuinely focused on how do we improve the format of the show every single day. There's the format. There's the camera, lighting, overlay. So thank you to everybody that gives us feedback.

Speaker 2:

We see it in the chat. We see it in the DMs, comments, etcetera. Trying to make it better every single day forever. And, yeah, it's a fun journey.

Speaker 1:

Keep grinding.

Speaker 2:

Let's get into the timeline.

Speaker 1:

Let's go to Paul Graham. Andreas Klinger says, this picture will be framed by dads everywhere. Says, Paul Graham says, every time we come back to Silicon Valley, my 16 year old son gets a massive dose of cognitive dissonance when he notices that apparently smart and reasonable people seem eager to obtain something he's convinced is utterly worthless. Yeah. And someone says, interesting.

Speaker 1:

What's that? And he says, my advice. It's like he set this up to go viral. Like, he he it's like he almost framed the first tweet to be like Yeah. Somebody's gonna ask and then I'll drop the bomb on them.

Speaker 1:

Yeah. Because he could've just said, like, you know, my he could've revealed that it was his advice, but it wouldn't have been as good. And it would've been, like, maybe cockier or something. I don't know. It was a great post.

Speaker 1:

It was very funny. Great poster. Yeah. And I don't know. Yeah.

Speaker 1:

I I don't have a 16 year old yet, but, it'll be interesting to see if he, enjoys my advice at that point.

Speaker 2:

Well well well, what we'll do is is I'll give advice to your sons. Yeah. You give advice to my kids. Yeah. And they'll respect it because they're like, oh, this is, like, my dad's business partner.

Speaker 1:

Yeah. I gotta respect what

Speaker 2:

he says. So even if they go through that awkward, you know, teenage period where they're sort of rebelling, we'll still be able to, you know, deliver the

Speaker 1:

Also, I mean, I'm planning to go full reverse psychology and have all my advice be, like, move to Brooklyn, become a DJ. Yeah. Exactly. Yeah. Just like

Speaker 2:

You're 16. I want you to be doing drugs.

Speaker 1:

I want you to be doing drugs. When I was when

Speaker 2:

I was your age, I was doing yeah.

Speaker 1:

And then he's like, I dad, I'm going into the military, and I'm starting to hedge fund.

Speaker 2:

A job at Goldman. Exactly.

Speaker 1:

I'm working eighteen hours

Speaker 2:

a day. Exactly.

Speaker 1:

Exactly. I don't know. It it it it's fun. It's funny to see, PG. He he is he's been posting about his kids for sixteen years.

Speaker 1:

Yeah. He's,

Speaker 2:

Posting through it.

Speaker 1:

He's been he's been posting, like, the the the data

Speaker 2:

It's good to see him come back. He had that post that many people were commenting on the feng shui

Speaker 1:

Oh, yeah.

Speaker 2:

In his room, but, you know, he's posting through it.

Speaker 1:

Seem to be

Speaker 2:

Posting through it.

Speaker 1:

It's great.

Speaker 2:

Let's

Speaker 1:

go to David Holes, founder of Midjourney. He says, the biggest frustration of a hardcore technologist in San Francisco is how many big companies, both tiny and gargantuan, are kind of fake. Investors often can't tell the difference between story and substance, and armed with a billion dollars, it may take a decade for it to fail. As an observer, when those companies finally fall, you expect to feel some sense of satisfaction, but, actually, you only feel sadness and a sense of waste searing.

Speaker 2:

Dude, I know exactly who he's posting about.

Speaker 1:

Perhaps this is just the price we pay to live in a place that is so supportive of wild ideas and risky ventures. A lot of dumb stuff is going to happen, and we just have to be okay with it. It's just emotionally hard. It's like it's such a funny take on, like, oh, I know that there no. No.

Speaker 1:

There's some there's some fraud or whatever. And he's like and he's like, you know, most people would be like, oh, this is, like, you know, bad for the economy or wasting money or, oh, like, I should be getting the money to build something real. And he's just like, this is just emotionally draining. Yeah. I love how zen he is.

Speaker 2:

It's like,

Speaker 1:

it's so good. Anyway, yeah. It I I think the real crazy thing is that is that, like, him, lots of people in Silicon Valley have identified the new crop of frauds. Yeah. Mainstream media is silent.

Speaker 1:

They're not doing investigative journalism anymore. Yeah. Kind of like a weird, like, hey, maybe mainstream media, like, we need that. Hey. Maybe we need some investigative journalism.

Speaker 1:

Like, where is the, I don't know, the the the the John Carreyrou of this generation. Yeah. I don't know if John Carreyrou is, like, retired or still working, but, gotta start figuring these things out, mainstream media, because there's some bombshells out there right

Speaker 2:

now. But yeah. I mean, when when you think about the high profile hard tech companies right now that are Yeah. That are building, that have a lot of hype, that have raised a lot of money, the challenge is you know, what he's saying is, like, yeah. If if you are a well versed observer or you have any type of, like, you know, insight into the company, you can kinda point out, like, what he's saying of great great story, not a lot of substance, right, which is what he's pointing out.

Speaker 2:

And that by itself is not fraud.

Speaker 1:

Right?

Speaker 2:

Like Yeah. You can there are plenty of companies that have great stories and just not a lot of substance. Right? Maybe they're building something and they're building just a not a very good version of it, but they still, you know, continue to raise money.

Speaker 1:

Yeah. I mean, in the in the, like, post.com world, there was a guy named Barney Pell who started a company called Moon Express who was trying to deliver, like, basically, build a moon colony. Great story. Raised a bunch of money. Didn't get anywhere.

Speaker 1:

It failed. And it wasn't a fraud. It was just, like, like, all the investors were, like, obviously bummed, but they were like, yeah. Like, I wanted you to try to build I I don't know. I it might have been a legal thing, but I, but but I'm pretty sure what happened.

Speaker 1:

I need to actually, you know, deep dive the company. But, but I'm pretty sure it's like, yeah, if you if you're as an investor, there's a lot of times when you put money in a company and you're like, hey. I wanted you to try this, and I know that it's a 10% chance that it works out. And if it fails, yeah, no hard feelings.

Speaker 2:

Nikhil in the chat says Hindenburg meets Hindenburg research meets the information.

Speaker 1:

Oh, yeah. That'd be good.

Speaker 2:

Somebody does that, it's it it would it would get, you slap that behind a paywall, and and you'll be you know, have have some nice revenue in no time.

Speaker 1:

Yeah. Yeah. It's great. Everyone, Nabil says, well articulated. It's not ideal, but the alternatives feel worse.

Speaker 1:

There does have to be a way to improve. Anyway, let's move on to Matteo over at Eight Sleep. He says, it's so exciting to see Eight Sleep on TV during the final of the premier Padel in Riyadh, supporting the number one team in the world of Coelho and Tapia. I don't know as much about Padel, but I think it's on the ground.

Speaker 2:

No. Patel Patel is the cool version of pickleball. Not to throw too much shade at at,

Speaker 1:

Palantir?

Speaker 2:

Palantir's pickleball paddle. I I I think it's great if you're, you know, doing paddlesque sports. But, yeah, it's just a faster, higher paced, more athletic, more intense version. Boom.

Speaker 1:

Boom. Got your Eight Sleep hat on. There you go.

Speaker 2:

Thankfully, I caught that. That would have been Yeah. That would have been deeply embarrassing. Yeah. I never would have recovered.

Speaker 1:

But yeah. I mean, it's cool that Eight Sleep is I mean, it's such a fun brand because, like, yeah, it's a consumer tech company, hardware company, but they get to go and play in and and pro sports teams. And it's, yeah, it's a lot of fun. Anyway, getting Eight Sleep, it's a no brainer. We love Eight Sleep here.

Speaker 1:

Speaking of other sponsors we love, let's move on to Ramp. Aaron says, this is why I love ramp. And he's, quote posting ramp's official account. This is last Friday, our team attended the Eagles parade in Philadelphia for some on the ground journalism. What we found might shock you.

Speaker 1:

And they have all these they they did this funny video of, like, you know, people with ramp signs at the Philadelphia celebration. It's great. They're they're really they're getting so much out of the Super Bowl ad. It is crazy. I guarantee this thing is ROI positive, which is more than you can say for a lot

Speaker 2:

of people. Shows, like, taking that scrappy approach of saying, hey. We made this big investment, but let's make the most of it. Like, I guarantee you Doritos ran their ad, and they were like, cool. Like, pat

Speaker 1:

on the back. Nice. We're done. We're gonna move on.

Speaker 2:

See you next year. Yeah. And, Packy Packy's a big, Eagles fan. I think he grew up outside of Philadelphia. Okay.

Speaker 1:

Yeah. Yeah.

Speaker 2:

He'll correct me if I miss that. But, yeah. Love love to see Philly fans, you know, get a win.

Speaker 1:

Having fun. Yeah. That's great. Well, let's move on to, a bezel deep dive. We got just add on add on add.

Speaker 1:

Now these ads are brought to you by AdQuic.

Speaker 2:

I can mix in some reviews that have ads in them.

Speaker 1:

Okay. Yeah. Yeah. Let's do that. Let's go to the reviews that

Speaker 2:

So The ads

Speaker 1:

in the reviews are presented by AdQuic,

Speaker 2:

the way. To be clear, the best way to buy out of home ads for your startup. So I got the first one. I honestly every time I read these ads, it just makes my heart sing. It just get better every time.

Speaker 2:

So I

Speaker 1:

haven't seen these yet.

Speaker 2:

Zero to one, but unhinged by somebody named John. The Technology Brothers podcast isn't just a show. It's a founder's dojo, a 10 x brain gym, and a capital allocators confessional all wrapped into one. Every episode is a master class in thinking from first principles, moving fast, and breaking every norm except the ones that print cash. Amazing.

Speaker 2:

John and Jordy don't just talk tech. They talk trajectory. Listen long enough, and you'll either build something great or realize you never had the stomach for it in the first place. But let's talk execution. You know what else requires ruthless efficiency?

Speaker 2:

Managing your finances when the system wasn't built for you. That's when Purple comes in. So this is a really cool startup. Purple is the first banking and benefits platform built for the disability community because fintech forgot millions of Americans who actually need financial tools built for real life. Checking accounts, EBT integrations, a b l e able savings all in one place.

Speaker 2:

Purple built the mercury for people with disabilities because dealing with Social Security makes raising a series a look easy. Check it out at withpurple.com.

Speaker 1:

That's cool.

Speaker 2:

That makes so much sense.

Speaker 1:

Yeah. Yeah. Yeah. Like Yeah. It's kinda

Speaker 2:

like true. Tech wanted to focus on Yeah. How do I reinvent the Amex, you know Yeah. Like, platinum card.

Speaker 1:

Yeah.

Speaker 2:

How do I make this cool credit card that has, like, restaurant reservations?

Speaker 1:

Yep. Yep.

Speaker 2:

Meanwhile, like, massive opportunities sitting here with Purple to just build tools for this, you know, narrow, you know, subset of people who will switch from Yeah. Their primary

Speaker 1:

And you imagine that most of the people that are feeling that frustration are not immediately like, oh, I gotta go build a startup. Whereas, you know, a lot of, like, obviously, like, Ramp isn't a highly competitive like, they need, like, IMO gold medalists and, like, the best venture capitalists in the world and, like, tons of money and and team members because everyone who's built a company has thought, you know, I need better CFO software. Right? And, like, expense management and and corporate card.

Speaker 2:

Yeah.

Speaker 1:

So that's gonna be a very hot market. This is, like, a place where you could actually probably go in, break in, make a statement, and then and then grow your business off of that. So I I I just think that's fascinating. Love it. And and that's just something that, like, it would never come up on a market map.

Speaker 1:

It would never come up in, like, a brainstorming session. You're not gonna hear some thread guy say, like, hey.

Speaker 2:

Sells for a billion, somebody somebody see will be like, we need the market map for disability for tech and staff. So awesome. Thank you, John. For the review, I got another one, and then we'll jump into some other ones. Relentless alpha by username, I'm having fun, five stars.

Speaker 2:

The technology brothers are at it five days a week, consistently delivering fresh alpha right out of the oven on the most relevant news and text.

Speaker 1:

Let's go.

Speaker 2:

It's like taking a peek into the elite group chats you've never been a part of. With two to three hours of top tier daily content from the brotherhood, I no longer have time to catch the latest flop guest rotation of the

Speaker 1:

week. Oh my god.

Speaker 2:

That's brutal. Brutal. That's brilliant. But there have been some flops lately. Even though Naval went on, I thought that was cool.

Speaker 1:

Oh, yeah. I gotta watch that one.

Speaker 2:

This review is sponsored by Psychedelic Science, the premier psychedelic conference globally hosted in Denver this June. Don't miss 300 speakers covering the latest breakthroughs in psychedelic research alongside 12,000 attendees. Marc Andreessen can yap about Ayahuasca one shotting founders all he wants, but But just because it's no longer contrarian to be into psychedelics doesn't mean they deserve newfound hate from Silicon Valley. I think that's correct. They're not without risks, and they're not for everyone, but there's no debating their immense potential for healing creativity and, let's be honest, fun.

Speaker 2:

Register today to learn about the latest psychedelic research policy and culture this summer. Great great, he sort of, predicted any pushback from our audience, and he's nailed it.

Speaker 1:

It was good.

Speaker 2:

Sounds very cool. I imagine, like, this seems like something Tim Ferriss would would, attend or or speak at. I wonder if he has already, but, very cool. Thank you to I'm Having Fun. Great username for, a business like that.

Speaker 1:

Yeah. Yeah. Very interesting. Any others?

Speaker 2:

We got more. I'll just rip through them. Again, sponsored this one's also sponsored by Ad, correct?

Speaker 1:

Okay. Let's go.

Speaker 2:

This review is, and then there's also another ad in there. Okay. They're separate business. Cool. No wonder this is from Cody Ames.

Speaker 2:

No wonder everyone I I know told me to watch this if they said x is ahead of the world, John and Geordi are head of x. They take the largest feats and most interesting developments from startups in VC to broader technology and business, then extract it into an engaging and easy to listen to format. Couldn't recommend this podcast enough to anyone who needs a quick and easy way to digest the biggest news you won't find anywhere else. I also heard they're the most profitable podcast, which is no shocker. This is the the the obvious spot to advertise your company or, any news you have.

Speaker 2:

So, actually, didn't Cody didn't put an ad in here. That's my only critique

Speaker 1:

on this review. He was he was talking to us about OpenX, his company.

Speaker 2:

Yeah. Yeah.

Speaker 1:

Yeah. He's very cool. And I believe Cody is also a car guy who I I he's maybe rebuild. I don't wanna get it wrong because it'd be very offensive, but, AMG Hammer. He's a Hammer guy.

Speaker 2:

He's a Hammer.

Speaker 1:

Yeah. And so, this is, like, almost, like, legendary, Mercedes, kind of like Mercedes muscle car, essentially. Like, I mean, a fantastic driver's vehicle, and, I think he's restoring, rebuilding, modifying, recreating one, but he's invested a ton of time and, obviously deserves the respect to the tech community for his, for his incredible automotive innovations.

Speaker 2:

Yeah. And I got the last review, which is short but perfect, from username dude what's mine say on Friday. He says, FPjorn of tech podcast, five stars. Oh, let's see. Nothing more needs to be said.

Speaker 1:

Yeah. Fejorn.

Speaker 2:

Fantastic review. Fantastic. Do another review putting that in it. I feel like we need to give back for for that.

Speaker 1:

Well, speaking of watches, we got a promoted post from bezel. Let's talk about the Rolex Day Date, the president in 18 karat yellow gold with a factory diamond set bezel and green lacquer dial. Just landed on bezel. This isn't just a watch. It's a status symbol, a power move on the wrist.

Speaker 1:

Let's break it down. The watch of the elite launched in '25, 1956. The day date was the first watch to display the day and the date in full. And so if you don't know about watch complications, you you can kind of think about it in there's a hierarchy here. You add each complication up.

Speaker 1:

You start with the time only. Just got the time on your wrist. Then you add a date complication, date just. Just tells you the date, just the number. Then day date is gonna tell you it's Friday the twenty sixth.

Speaker 1:

It's Tuesday the eighteenth. Then you can get into more perpetual calendars, which keep the time out for years and years and years. Annual calendars, which are right for three hundred and sixty four days of the year and then you have to reset. And then you get there's even an eternal calendar there now that goes out like a thousand years or something. And then there's moon phase complications.

Speaker 1:

All sorts of complications. And so, but the day date is great because you just look down, you immediately know, hey, it's Tuesday, the eighteenth. And this is the watch of the elite. It's been the go to for presidents, CEOs, and anyone who needs their watch to say, I make decisions that matter. I love it.

Speaker 1:

Why is it called the president? The first US president to rock the Day Date, Lyndon b Johnson. By the late sixties, it had earned the nickname the president's watch.

Speaker 2:

That's pretty good marketing.

Speaker 1:

Yeah. It's so sexy. It's worn by world leaders, moguls, power players. This watch run things. Rolex doesn't make the day date in steel, only precious metals.

Speaker 1:

That's the rule. This one, eighteen k yellow gold forged in Rolex's in house foundry because even their gold has to be better than everyone else's. It is a factory diamond set bezel. Pure opulence. You know, we love opulence here.

Speaker 1:

There's a difference between aftermarket diamonds and Rolex diamonds. Rolex hand selects and sets every stone under strict standards. Meaning, this bezel isn't just flashy, it's flawless. Green lacquer dial, this is Rolex's power color. Rolex green equals money, prestige, and legacy.

Speaker 1:

The deep lacquer finish on this dial gives it a richness that pops against the gold. And so, like, a lot of people see these watching, oh, Rolex is just, like, so expensive, and it's just such a flashy thing. Like, I I enjoy this stuff because I'm a nerd for this, and I think it's really cool that, like, even the most minor little details have a story and and and there's some sort of it's it's it's as engineering minded as anything else. And so that's why I'm excited

Speaker 2:

about this. Need a date date soon because Yeah. The first time we met, I distinctly remember asking you at some point. I was like, wait. So do you wanna be, like, the president someday?

Speaker 1:

Because you

Speaker 2:

remember that? I was just like I was just trying to figure out, like, what you wanted to do. Turns out it was be

Speaker 1:

A podcaster.

Speaker 2:

Podcaster. But,

Speaker 1:

But, you know, that that that's a well trodden path. The reality TV to the White House is is well established at this point. But, yeah, also, if you've seen Glengarry Glen Ross, the watch that Alec Baldwin wears is a, gold Rolex, Day Date presidential, and he pulls it out. There's this iconic scene where he says, look at this watch. This watch costs more than your car.

Speaker 1:

He says coffee is for closers always be closing. That's where that comes from. And that was another movie, I think, in the eighties or maybe nineties that popularized the the presidential Day Date even more. And the presidential bracelet is particularly special here. It's the ultimate flex.

Speaker 1:

It's the three link president bracelet was designed specifically for the Day Date. It's smooth, seamless, and also hides the class because true luxury is effortless. It's one of the most comfortable bracelets Rolex makes. So would you wear it? Let us know in the comments.

Speaker 1:

Think you can pull it off? Head to the bezel app, download it, and it's yeah. There you go. Oh, look at that. I mean, it's just so iconic.

Speaker 1:

Do you Jordy, do you know about, how you should tell if you should wear silver or gold? Are you familiar with this? The the veins thing? No. I'm actually not.

Speaker 1:

So there's this thing where, a lot of women know about this with makeup. There's this idea of, like, if you have warm undertones or cool undertones, and then and then for a woman, you should match your foundation to that. And so there's this there's this critique that's going on on, like, Instagram right now that I randomly saw, which is, like, Republican woman makeup is, like, their the, I guess, the left is, like, critiquing Republican women for doing their makeup wrong. And a lot of the things they get wrong is that they have cool undertones, but they're using warm foundation. And for men, oftentimes, it's it's if you have cool undertones, you're better off with silver.

Speaker 1:

And if you're have warm undertones, you you're better off with gold. And so the way you figure this out is you look at your veins, and if your veins look blue, you have cool undertones. And if they look more green, you have, you have warm undertones. And so the classic example is just like, you know, I'm obviously like Swedish and very like Northern, and so I have blue undertones, blue blood. Yeah.

Speaker 1:

And and more like Italian guy is gonna have greener undertones. And that Italian guy is gonna look good in gold. Right? Yeah. And so when you think about the, you know, the the mafia guy, the Italian guy, the Tony Soprano golden.

Speaker 1:

He's gonna have more green undertones, and that's gonna lend itself to being able to rock a gold watch. Now I recently bought, not even a real gold watch, just a just a watch that happens to be gold. I think it's maybe gold plated, but, just to try it on and see if I can pull it off because I wanna dip my toe in before I really try and go full gold. But, for now, I've been sticking with silver and, I've been very happy, and I think it matches my tones and my colors and my style.

Speaker 2:

Same here. I'm glad that, I'd never even heard of that rule. Yeah. But fortunately, I'm blue.

Speaker 1:

I mean, it makes you. Like, look at you.

Speaker 2:

And we're we're we're both rocking silver.

Speaker 1:

Exactly.

Speaker 2:

Nice. Well, Well, let's move on

Speaker 1:

to the next promoted post.

Speaker 2:

You gotta

Speaker 1:

you gotta stage these.

Speaker 2:

You gotta stage these out.

Speaker 1:

Well, I mean, this is, like, yeah. It's an ad for public, but it's really a blog post that we would talk about on the show anyway. It's from their founder and co CEO, Leif Abraham. He says, we ship a lot in all areas of business.

Speaker 2:

By the way obvious. Yeah. He has a very unique Casio. Casio, which we'll have to have him on the show sometime to to show it off. But, anyways

Speaker 1:

We'll have

Speaker 2:

to get to use the Casio in the picture here.

Speaker 1:

Oh, yeah. Totally. Yeah. We'll have to get him on bezel, and see what he picks out. And so he says, we ship a lot in all areas of the business.

Speaker 1:

It's obviously cultural, and one of the philosophies we embrace is what we call pace management. I thought this was interesting, little, management tip for for founders, because Leif is obviously running like a very high performance organization, and he probably has some interesting stuff to share. And so he says, time is the most precious resource we have, and we must ensure that we manage it well. A new method we will be talking about frequently that that you all should embrace is pace management. This was actually just an email that he sent out to the team.

Speaker 1:

And so it's kinda cool that he, like, published this after the fact. This is a good example of going direct just like Yeah. He didn't he's not, like, becoming an influencer. He's just taking content that he's already writing. Exactly.

Speaker 1:

And then just publishing it.

Speaker 2:

Yeah. So he

Speaker 1:

says, as a manager, you are responsible not just for the quality of the work, but also for the speed at which it gets delivered. Time savings compound dramatically over time. Letting time slip away costs everyone money, not just the company, but every one of your coworkers. Here's how to manage pace. You gotta take ownership of the pace, drive the pace of the project despite knowing that the urgency you instill might make some people uncomfortable at times.

Speaker 1:

You gotta break this project into small manageable pieces. You need frequent deadlines with quick check ins. Let's regroup next week as the enemy. Check-in with the group progress at least every 48. I like this.

Speaker 1:

This is the, this is the Chris Sacca thing. Like, people ask, oh, when do you want this? Q two, q '3. Q tomorrow. Wasn't that what he said?

Speaker 1:

It's a q Friday. How about q Friday? I like that.

Speaker 2:

That's fun. Q Friday.

Speaker 1:

And this is very similar. Yeah. Check-in more frequently. Don't don't throw things. Yeah.

Speaker 1:

Yeah. Next week. Keep teams small. Make sure every person knows exactly what they are responsible for, and that everyone on on the everyone else on the team does too. And then project specific channels, sometimes it helps to create project specific channels, a dedicated space allows for focused discussions even on the smallest details.

Speaker 1:

So some good, some good advice from Leaf over at Public. And if you're looking

Speaker 2:

for a portfolio, you're building a company The Public. And you're wondering if you're going fast enough. Yeah. Well, we're doing we went from one day a week to two days a week to three days a week. The past week.

Speaker 2:

Eventually, we'll be doing eight, nine, ten days a week.

Speaker 1:

Ten days a week. Let's do it. Let's move over to Sam Altman. He's thinking about open source and stuff. He asked x for our next open source project.

Speaker 1:

Would it be more useful to do an o three minutei level model that is pretty small but still needs to be run on GPUs?

Speaker 2:

I thought you're saying he asked Rock. I was like Yeah. I thought Sam was asking Rock for for That's very funny. Advice. Even meta.

Speaker 1:

Or the best phone sized model we can do. And, this was sitting right at fifty fifty, and then Dylan Patel chimes in from semi analysis. He says, I can't believe x users are so stupid. Not voting for o three mini is insane. You can literally already distill four o claud 3.5 deep seek v three into sizes that will run on phones.

Speaker 1:

And it's just funny that, like yeah. Obviously, like, the the hackathon game is kind of generally

Speaker 2:

It continues to be a dominant poster.

Speaker 1:

Phenomenal. Phenomenal. And, should we report a murder?

Speaker 2:

Yeah. This one needs to be reported.

Speaker 1:

You you you do it. Let's do it.

Speaker 2:

Alright. So this is from BC Braggs. He says, I'd like to report a murder. Delian says, I've ended up leading more rounds in the last four months than the prior three years combined. Not sure what it means, but I guess we are so back, question mark.

Speaker 2:

Chamath says, means you've decided for whatever reason that time diversity doesn't matter in portfolio construction. Spoiler alert, it does. Delian says, thankfully, not too many twenty twenty one SPAC shares in my portfolio construction either. Quick 10 x ratio, quick work. This one, I'm surprised that, you know, maybe Chamath was down to get ratioed and just felt like, you know, becoming a supporting character in the timeline for the day. But

Speaker 1:

Coming for now, again, is the most dangerous thing. You're playing with a cobra on the timeline.

Speaker 2:

Yeah. Cobra. And then also trying to critique somebody for time based portfolio construction when Chamath, you know, basically spearheaded the largest investment into this, like, novel, you know, sort of financial product into highly speculative companies Yeah. In a very condensed time period. So he just kinda set himself up for that.

Speaker 1:

Is is Chamath a seed investor? Like, has he done seed stage investing? I don't think of him as, like, seeding big.

Speaker 2:

In the other grok, the g r o

Speaker 1:

As as at seed?

Speaker 2:

I think so.

Speaker 1:

Okay. I

Speaker 2:

think he incubated, didn't he?

Speaker 1:

I purely think of him as, like, growth stage and, like, SPAC guy now, but maybe he does. But, like, yeah, Delian, like, you know, I don't think, like, if you're doing seed deals, like, the time based portfolio construction critique maps.

Speaker 2:

So he put 10,000,000 into Groq, g r o q, which was a Google Yeah. Yeah. Team that spun out.

Speaker 1:

Yeah. Yeah. I I saw a funny thing about Grok with the queue. Everyone's talking about Grok three, which has a k. And Harry Stebbings is like, here's my interview with Grok CEO, and it's the g r o q.

Speaker 1:

So he, like, knew that he should drop that to, like, you know, farm off of the energy or something that people would pull over. I thought that was funny. Anyway, let's move on to, the new president of Syria. I thought this was funny. This guy had quite the path.

Speaker 1:

Buco Capital Bloke says, intern, Al Qaeda; then associate, Al Qaeda; then senior associate, Al Qaeda; then insurgent at various; and then president of Syria. And this is from an interview the new president of Syria, Ahmed Al Sharaa, did with some small YouTube channel, I guess. And he says, I joined Al Qaeda because I was a 19 year old, and there wasn't any other venue to take part in politics. Also because I wanted skills and experience. I already built all government institutions in Idlib before the offensive.

Speaker 1:

This ensures we can immediately take over and prevent anarchy. And yeah. I mean, obviously, like, you know, the the American propaganda during the war on terror was very much like, oh, it's like a bunch of, like, terrorists in caves. They're just, like, completely running around. But it's like, no.

Speaker 1:

They have time sheets. They have time cards, and they have to do lists and tracking and inventory management and their

Speaker 2:

payroll software.

Speaker 1:

Yeah. Yeah. And, like HR. What what what's the big one? Like, what is the NetSuite?

Speaker 1:

Oh, for ERP. Yeah. Basically. And, yeah, I mean, like, large terrorist organizations I mean, he's obviously joking with, like, the intern, associate, senior associate, but there are, like, career paths and, like, tracks that you can, like, move up in.

Speaker 2:

Yeah. It's worth noting that if you look at terrorist organizations or drug cartels, you don't become these sort of nation state level players or, you know, multibillion dollar enterprises without being highly organized and having real leadership hierarchies and stuff like that. So that's why I've joked before about Pablo Escobar being the greatest CPG founder of all time. Yeah. Because even if you told somebody you can break every law under the sun Yeah.

Speaker 2:

They would not be able to bootstrap a company to 22,000,000,000 of annualized revenue in a decade. Right?

Speaker 1:

Oh, we gotta put deep research on this for the fans. Jordy and I were on a call with David Senra last night from the Founders Podcast, and we were debating how big was, like, just the cocaine industry, I guess. Like, a Yeah. Yeah. How would you benchmark that against, like, big oil, big tobacco?

Speaker 1:

What's the TAM? What's the market size? I was arguing that it was probably over a trillion dollars in, like, market cap if you applied some sort of price to earnings multiple on Yeah. However much money was flowing through. What was the total revenue?
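The back-of-the-envelope valuation being debated here (estimated revenue, times a margin, times an earnings multiple) can be sketched in a few lines. Every input below is a hypothetical placeholder for illustration, not an actual estimate of the trade being discussed:

```python
# Rough market-cap sketch: value = revenue * net margin * P/E multiple.
# All numbers are hypothetical placeholders, not real estimates of any
# industry or enterprise.

def implied_market_cap(annual_revenue: float, net_margin: float,
                       pe_multiple: float) -> float:
    """Implied valuation from an earnings multiple applied to estimated profit."""
    earnings = annual_revenue * net_margin
    return earnings * pe_multiple

if __name__ == "__main__":
    # e.g. $60B revenue at a 50% margin on a 35x multiple clears a trillion
    cap = implied_market_cap(60e9, 0.50, 35)
    print(f"implied market cap: ${cap / 1e12:.2f} trillion")
```

With those placeholder inputs the "over a trillion dollars" claim checks out; the answer is entirely driven by which revenue, margin, and multiple you assume.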

Speaker 1:

How big of a player was this guy? And, should you put I I don't know why I'm blanking on his name. Who's the greatest CPG founder again?

Speaker 2:

Pablo.

Speaker 1:

Pablo. Pablo Escobar. Should you hang his jersey in the rafters? I mean, certainly greatest criminals of all time. But, you know, how big was his enterprise? And I wonder how many employees he had working for him if you really, like, mapped out the entire org.

Speaker 2:

Yeah.

Speaker 1:

Is it, like, thousands or tens of thousands?

Speaker 2:

But imagine if he got the same multiple on that 22,000,000,000 a year figure, he would have a hundred and potentially whatever comes after a trillion. Quadrillion. Quadrillion. Quadrillion. Quadrillion.

Speaker 2:

Quadrillion. Yeah. Anyways, I got another post here from Arfur Rock. If you're not following Arfur Rock, great poster. He's come out the gates sharing sort of behind the scenes on rounds that are getting done.

Speaker 2:

He'll typically talk about a round two to three months before it gets announced. So his point of view is that he does it to help make rounds more competitive. Because you can imagine if you're a big capital allocator and you see Arfur Rock post such and such round getting done, and you hit up your associate, and you're like, why haven't we seen this yet? Get on the phone with them. And so, definitely, I'm sure

Speaker 1:

it helps create some sponsored posts with them.

Speaker 2:

Momentum. No. I think we should have him on the show for a segment where he speaks through, like, an anonymous, like, voice changer. You know, make him sound like Steve Jobs or something like that and then just talk about, you know, all the stuff he's seeing. But Yeah. He's got an update here.

Speaker 2:

This was underreported. USV came out with a new core fund in 2024. They have historically taken, you know, the approach of staying small, focusing on returns, focusing on the real craft of venture, and they're not sort of a high volume investor. They're making a handful of new bets, typically doing them, you know, very early. And so he outlines, like, their total fund performance. Their first fund in 2004 was about a hundred million dollar fund.

Speaker 2:

It ended up grossing $1,500,000,000. So close to a 15 x there. And then you just go down the list pretty much every single fund except their twenty nineteen Opportunity Fund, which, for those that don't know, an Opportunity Fund is more, hey, we have a bunch of early stage companies that have done well. And the twenty nineteen Opportunity Fund is the only fund on this list that isn't, like, top decile Wow.

Speaker 2:

Basically. Yeah. And it's sitting at 1.3% IRR, and you have to imagine they sort of deployed that fund Yep. At the absolute top Yep. Into their companies that they knew already were good companies that just were kind of a little bit frothy.
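The fund math in this stretch reduces to two numbers: a gross multiple ($1,500,000,000 returned on a roughly $100,000,000 fund is about 15x) and, if you assume a holding period, a rough annualized rate. A minimal sketch, where the 20 year horizon is an assumption and a true IRR would weight actual cash flow dates:

```python
# Fund return math from the USV discussion: gross multiple (MOIC) and a
# rough annualized rate. A real IRR uses dated cash flows; treating the
# fund as one lump sum held for `years` is a simplifying assumption.

def gross_multiple(total_distributions: float, fund_size: float) -> float:
    """MOIC: total value returned divided by capital committed."""
    return total_distributions / fund_size

def annualized_rate(multiple: float, years: float) -> float:
    """Compound annual growth rate implied by a multiple over `years`."""
    return multiple ** (1 / years) - 1

if __name__ == "__main__":
    moic = gross_multiple(1.5e9, 100e6)  # the ~15x cited for the 2004 fund
    print(f"{moic:.1f}x gross")
    print(f"~{annualized_rate(moic, 20):.1%} per year over an assumed 20 years")
```

The same `annualized_rate` helper shows why the 1.3% IRR on the 2019 Opportunity Fund stands out: a fund needs its multiple to compound well above 1.0x to clear any venture benchmark.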

Speaker 2:

So, anyways, could still turn around. Who knows? But, you know, one of the goats, certainly the New York City goat, almost deserving of a place in the Holy Trinity. But,

Speaker 1:

hasn't quite scaled and had the impact.

Speaker 2:

Yeah. Hasn't scaled yet. Cool to see, though.

Speaker 1:

You gotta leave it as a trilogy. You can't make it a

Speaker 2:

quadrilogy. Quadrilogy. It

Speaker 1:

it never works. Well, let's move on to a promoted post from Wander. We got a great spot up in Big Sur. Jordy, have you been to Big Sur?

Speaker 2:

I have. I've driven through it.

Speaker 1:

Some of my fondest memories as a kid are from Big Sur. I went up there with my family, went inner tubing down the river. I remember that really fondly. Founders Fund just had a big event out there for all the defense tech folks.

Speaker 2:

It's really it's really California at its best. Right? It's coastal. It's rugged. You've got the mountains Yep.

Speaker 2:

Meeting the sea.

Speaker 1:

Yeah. It's totally fantastic. And there's a beautiful photo here.

Speaker 2:

What's cool about the Wander is that, you know, there's gonna be fast Wi Fi. So I feel like we can go stay there. We can still, you know, stream and not worry about reliability. We're actually not in a Wander for PMF or Die, unfortunately, and that's been, like, the number one issue is that we're trying to

Speaker 1:

Yeah.

Speaker 2:

Stream HD video. And so showing up to an Airbnb and not having, you know, quality Wi Fi is a terrible experience. Wander fixes that, among a bunch of other stuff.

Speaker 1:

And so Zachary Slayton, MBA says, what a great weekend spent in Big Sur. Not only were the views and drive incredible, but our wander house made the experience just that much better. Great experience, great service, amazing amenities. Highly recommended. Take a look.

Speaker 1:

He shared some beautiful videos and pictures. And this is so funny because, you know, 18 likes, pretty small post. We love it. It's a great post, but now you're gonna get a video reply from this podcast.

Speaker 2:

Zachary, welcome to technology brothers.

Speaker 1:

Brothers. Just, you know, basically, going forward, now that we're partnered with Wander, anyone who posts about Wander is gonna get random clips from us. But we love the post, and thanks for sharing, because this is a place where I might stay. And I'm glad to see it on the timeline.

Speaker 2:

Anyway Amazing. Move in. I actually have some more breaking news.

Speaker 1:

Oh, yeah.

Speaker 2:

We got a company called Hightouch, raises 80,000,000 at a $1,200,000,000 valuation. They came out of YC.

Speaker 1:

Okay.

Speaker 2:

We'll have to ask Gary about them. Wow. This one came out of nowhere. I don't know what their last round was done at, but, I'm looking on their site right now, and they're already working with Spotify, Aritzia, Warner Music Group, PetSmart, Tripadvisor, Whoop, Cars dot com, Plaid, Ramp. So they work with Ramp, so they must be great.

Speaker 2:

Okay. Calendly, GitLab, DocuSign, Greenhouse, and they do, like, marketing personalization

Speaker 1:

Oh.

Speaker 2:

Tools Okay. With AI. Yeah. Anyways, absolutely massive one. And a number of times recently maybe it's a sign of a little bit of frothiness, or just that these companies are growing really quickly

Speaker 2:

You hear about a company for the first time when they're raising at north of a billion. Yeah. And for somebody that's, like, highly tuned into the sort of flow of new companies, that doesn't happen that much.

Speaker 1:

Yeah. Yeah. I mean, you go back to, like, hearing about Anduril in twenty sixteen, like, their first round, I think they raised $10,000,000. And that was a lot, but it's like they raised it.

Speaker 1:

Everyone in tech heard about it, and I don't remember the first valuation. It was probably like 10 on a hundred or 10 on 80 or something. And now it's like you're learning about companies when they're already unicorns. But also, you know, b two b, you know, kind of, like, under the radar founders.

Speaker 1:

That's kind of the nature of these things. But good luck to them, and I'm sure there are some powerful metrics underneath that raise. Yeah. I guarantee it, because this stuff's hot, and there's a lot of marketing automation to do. So we'll have to do a deep dive on them.

Speaker 1:

That'd be cool. Maybe have them on the show. Let's go to Blake Robbins. He says careers are shaped by people, not logos. And he posts he's been changing up his format.

Speaker 1:

I like this. Some screenshot essays more or less. Yeah. Bringing stuff to the timeline. He says, at 21, I joined Ludlow Ventures with no experience in venture capital.

Speaker 1:

Instead of giving me a strict set of rules, Jonathon and Brett did something that shaped my entire career. They trusted me to chase my curiosity. They gave me agency when they had every reason not to trust a new grad. That trust rewired how I think about ownership, risk, and possibility. I got lucky.

Speaker 1:

The conventional wisdom in tech is clear. Join a high growth startup to accelerate your career. The logic makes sense. Rapid growth creates opportunities, responsibilities expand faster than org charts, and you learn by doing rather than watching. But this advice misses something fundamental.

Speaker 1:

Who you work with, and who you work for, matters more than where you work. I completely agree with this. It's like, if you have a good person that you're working with, it's way more valuable than just the logo. Yeah. Yeah.

Speaker 1:

And they define your mental model for excellence. Technology works for tech companies.

Speaker 2:

When you're coming out of let's say you go to college, you come out of school, it's so hard to evaluate if people are truly world class.

Speaker 1:

Yep.

Speaker 2:

Like, for me, I came out of school. I was working on a bunch of YouTube stuff, working with Sean and Connor Yeah. At the Ridge. I love them as people, and they were clearly very talented, but it actually took me five years of, like, working with a bunch of other people to realize how good they were.

Speaker 1:

Yeah. That's a good point.

Speaker 2:

Yeah. And, yeah, it's been it's been it's been awesome to see. But, again, it's so hard to figure out, like, just

Speaker 1:

Yeah. Obviously, there are normal distributions and bell curves, like, all over even high performing organizations. Like Yeah. The worst Harvard grad, like, the dumbest Harvard grad is gonna be, like, way dumber than the smartest, like, you know, I don't know, community college grad.

Speaker 2:

Right?

Speaker 1:

Like, because there's gonna be overlap in these normal distributions. And that happens within companies too. He says tech loves to worship company pedigrees: ex Stripe, ex Ramp, ex Anduril. These labels imply excellence, but they mask the real story. What matters is who mentored these people.

Speaker 1:

Great companies don't create great talent by default. It's the leaders within them who nurture ambition, challenge thinking, and build environments where people thrive. Yeah. There's a lot of people that are like, oh, I was early at this company, and then you see some of the early photos and you're like, why aren't you in that photo?

Speaker 2:

I've gotten absolutely cooked hiring for logo pedigree. Totally. Oh, like, not every time, but, like, almost 50% of the time. And the challenge is somebody joining as the, you know, thousandth employee at an iconic company. They could be great. Yep.

Speaker 2:

They could be great, but not even have an impact. Right? Like, they could just be great and sort of exist within the structure.

Speaker 1:

Yep.

Speaker 2:

And so, yeah, you just gotta really, really press people and not let them sort of slide into roles based on

Speaker 1:

Yeah.

Speaker 2:

Having worked at relevant

Speaker 1:

companies. I mean, you think about the path of, like, Lachy Groom, who was not just, like, at Stripe. I don't even know if he was that early at Stripe, but he went into Stripe and then very quickly became, like, the money guy for Patrick.

Speaker 2:

Yeah.

Speaker 1:

Right? And it's like he was running, like, their ventures team more or less, or, like, doing something with investing. And then Sam Altman set him up with a fund, and, like, very quickly he was on, like, a massive, like, you know, high growth thing and got a ton of AUM. And that's very different from, like, oh, yeah, I went in and, like, I was kind of on a team that wasn't, like, super high performing, and I just kinda, like, you know, worked nine to five.

Speaker 1:

Yeah. I'm like, yeah.

Speaker 2:

Max is trying to find Jeremy Giffon talk about? Like, you wanna be, like, off the he he talks about Off

Speaker 1:

the org chart.

Speaker 2:

Off the org chart.

Speaker 1:

Yeah. Yeah. Yeah.

Speaker 2:

You know, running the ventures program at, like, a company where the founders care about Yeah. Yeah. That program a lot.

Speaker 1:

Yeah. Yeah. No meetings. No bosses, but employed by the company. That's, like, an ideal scenario if possible.

Speaker 1:

Yeah. It's fascinating. Big news, massive promoted post from Sotheby's. A lot of people wanna check this out. Banksy's Crude Oil, from the collection of Blink one eighty two's Mark Hoppus, will headline the Modern and Contemporary Evening Auction at Sotheby's London on March 4.

Speaker 1:

So if you're headed over to this auction house, check out the collection of Mark Hoppus from Blink one eighty two. I thought it was fun to share. I know a lot of you guys in the community are art collectors and often at Sotheby's. So you wanted the heads up, and now you know. Let's move over to Beau.

Speaker 2:

Yeah. By the way, Beau just texted me and said, it cracks me up how little John knows about UFC, because he saw that we talked about it yesterday. So you're going on record right now. John, you gotta learn about the number one sport in America, and we gotta go to a card soon.

Speaker 1:

Playing it up a little bit, like, but it was very funny to me. That's great. Being like like, who's this schmuck calling out my friend Beau? I don't know who this is. It's like, of course, it's like a UFC guy.

Speaker 1:

Anyway. Great. Anyway, we love him. Let's go back to Wander. Kyle Tibbetts.

Speaker 1:

This is a big promotion episode. Lots of ads. Sometimes you do a lot of bucket pulls, and we do a lot of deep dives. Sometimes where they do a lot of bangers. That's the whole

Speaker 2:

Well, after this, let's slam through

Speaker 1:

Couple posts and get out of here. Post. Okay. So Kyle says, one of the best designed app launches I've ever seen. Stoked to be an investor.

Speaker 1:

Great job to the Protector team. So Nikita Bier has been advising a company called Protector, which allows you to book armed agents. They're debuting in Los Angeles and New York City, and they're number three on the App Store. And so this is kind of the latest Nikita project. He pumps out a lot of stuff.

Speaker 1:

What do you what's the I

Speaker 2:

don't I think this is a company he's advising.

Speaker 1:

He's supporting.

Speaker 2:

Yeah. You know, potentially through intro.

Speaker 1:

But So I thought this was interesting because back in 2014, Uber was really hot, and there was an Uber for everything. And there was a company that went through YC with this, basically this, exact idea. It was called like bouncer or something. I forget what it's called. It was like book Uber for bouncers.

Speaker 1:

I sent it into the chat at some point. But, you know, maybe it wasn't the right time for that idea, but maybe this is the one. What's interesting is that disintermediation is always a risk with these platforms. Like, that was the problem with the dog walker apps: once you find a good dog walker on a dog walker app, you just say, hey, let's not use the app. Just come every week at this time.

Speaker 1:

Yeah. And so that's always been a risk. Whereas with Uber, like, even if I have a guy who's reliable to take me to the airport or, like, you have a guy or, like, oh, yeah, I know a guy who gave me his card, and, like, if I'm going out with friends for a long time and I wanna rent the limo for the whole weekend or something, I'll call him. But, you know, I'm not just going to have a guy follow me around all the time. Yeah.

Speaker 2:

Yeah. But it really comes down to trust. You know, I'm sure there's an opportunity for Protector. Right? Just the pure novelty of, oh, I just called this, like, armed security guard

Speaker 1:

Yep.

Speaker 2:

Where I am. Jeremy talks about this. Right? Like, there's lots of opportunities to take things that the wealthy love and then make them accessible to the masses through these sort of on demand platforms. So I'm sure there's a market opportunity for this.

Speaker 2:

I'd say overall, again, you call an Uber, you don't care who picks you up as long as the car is generally clean and they're fast. Whereas things like this, security is ultra high trust. Right? So you would much rather have somebody that you know Yeah.

Speaker 2:

Well and that you know has, like, great training and that you have a good relationship with because if they're needed in any way, you know, you wanna know that you can rely on them.

Speaker 1:

I think that maybe the app can actually do a good job there, because if they're onboarding agents and they have a very rigorous process, it's totally possible to have a high trust app. At the same time, if the customers don't demand that, you know the apps and the platforms will coalesce towards something that's a little bit sloppier. But that's fine, because if you're on a marketplace app and Yeah. You're just like, look, I just wanna be in Tulsa, and I want it to be the cheapest thing ever, and I don't care if the furniture's from Temu.

Speaker 1:

Like, yeah, there's an app for that. Yeah. But, you know, with this, it's kind of, you know, up to them how they curate the marketplace. Yeah.

Speaker 1:

It will be interesting. I have no idea how he got this to number three in the App Store so fast. He's a master.

Speaker 2:

You can you can Because

Speaker 1:

there can't be that many people. Like, you're telling me there's more people booking armed agents than booking VRBOs? Like There's

Speaker 2:

There's ways so here's a potential way to do it. You use, you know, these sort of algorithmic feeds to generate views on your content. Let's say you're producing a bunch of organic content and running ads. There's ways to do these sort of, like, predownloads, or almost, like, preordering the

Speaker 1:

app, where you

Speaker 2:

can, like, get somebody to say, like, I wanna I'm I wanna download this. And then when you release it, you can drive

Speaker 1:

all that.

Speaker 2:

Let's say a hundred thousand in one day, and then you'll rank, even though, you know, maybe the next day you'll lose it

Speaker 1:

that long. I love the idea of armed agents being on demand. I think that that is probably, like, a very good thing and will increase safety and reduce crime, which I think is good. But I would hate to live in a country where an app for booking armed agents is more popular than nice vacation rentals. You know?

Speaker 1:

Like, that's, like, very blackpill. Like, our society is truly degrading if it's like, oh, yeah, the gun app is more popular than like, the gun store is more popular than the luxury goods store. Anyway, good luck to the team. Awesome launch.

Speaker 1:

Love to see that Nikita is working with it. Very interesting to follow along and see where that goes. Anyway, we got a promoted post from Koenigsegg. There's a 2021 Koenigsegg Regera out. Obviously, you're gonna have to bid against Sam

Speaker 2:

this one. Yeah. You think they posted this knowing that

Speaker 1:

Sam would see

Speaker 2:

it, not be able

Speaker 1:

to help himself?

Speaker 2:

For sure.

Speaker 1:

You know, they go on ads. For those

Speaker 2:

that don't know, Sam Altman, loves Koenigseggs.

Speaker 1:

Loves all sports cars. He's a McLaren athlete.

Speaker 2:

Absolute fiend. Fiend. He loves them. You know, as he said, he's doing OpenAI because he loves it, and he's also, you know, buying cars because he just loves cars.

Speaker 1:

I'm a simple guy. I'm very pro Sam Altman, the car guy. That's one of the main reasons I'm working

Speaker 2:

for him. He was using deep research to find this sort of obscure Acura Yeah. That I guess he was successful

Speaker 1:

obscure. I said NSX, bro.

Speaker 2:

Well, in in his he could have said I need to find one with with this sort of Sure. Yeah. That's true. Yeah. Anyway,

Speaker 1:

zero to 60 in less than three seconds. Top speed of 250 miles an hour. If you're looking for a new daily driver, maybe you work in AI, maybe you just raised a billion, did some secondary, pick up a Koenigsegg. Yeah. Maybe you work at PETA and you see the money coming from the for profit conversion, and you're like, yeah.

Speaker 1:

I'm I'm gonna be rich.

Speaker 2:

Oh, yeah.

Speaker 1:

Pick this up. Let everyone know that you're serious about the for profit conversion. Yeah. When you pull up to the investor meetings, they're like, really, PETA? And they're like, well, he's already got the Koenigsegg Regera.

Speaker 1:

Yeah. So, yeah, he must be the real deal. He must have a really good plan for how he's gonna monetize this going forward. Yep. So we'd love to see it.

Speaker 1:

Anyway, we quote tweeted this on the PMF or Die feed, but we thought we'd cover it here just to throw a reply towards Starter Story. If you're not familiar with Starter Story, it's a YouTube channel that does video interviews and profiles on independent builders, hackers, entrepreneurs, and they did one on Blake, our boy in the cage. And so Starter Story says, he taught himself how to code with ChatGPT. He built three iPhone apps, made 10,000,000 in revenue. We called him up and asked him to break it all down.

Speaker 1:

They talked for hours but cut out all the fluff to give you all the alpha. I love it. Yep. And so there's a twenty minute video. Blake is an absolute beast.

Speaker 1:

He breaks down a bunch of different business ideas. And if you wanna know more about player one in the cage, you can watch this twenty minute video to get an idea of who Blake is. An absolute dog. An absolute dog. It is one oh seven.

Speaker 1:

How are we doing on time?

Speaker 2:

We're good.

Speaker 1:

Cool. I,

Speaker 2:

I think we should wrap.

Speaker 1:

Yeah. Let's wrap pretty soon. Oh, this is good. Trae Stephens is doing an event on March 6.

Speaker 1:

If you're in San Francisco, go check it out. Lee Marie Braswell over at Kleiner Perkins, another Holy Trinity venture capital firm. She says, join me on March 6 at an event with Trae Stephens, where he's discussing one of my fave posts ever, Choose Good Quests. Link in comments.

Speaker 1:

Hat tip to his coauthor, Markie Wagner, who's an absolute dog as well. Trae is one of the most mission driven people I've ever met, as well as an inspiration for practicing Christians in tech. When Trae talks about both building great companies while also finding purpose, I listen. If you haven't read Choose Good Quests, it's fantastic. I highly recommend it.

Speaker 2:

I

Speaker 1:

think he should turn it into a book. I hope he does. But you can go hear him talk about it at this event. And so go Yep. Find the Luma and go to this event if you're in SF on March 6.

Speaker 2:

What you said?

Speaker 1:

I think that's a good place to close out. Oh, we gotta talk about your bags. You've been killing it with NVIDIA. And somebody had the same idea as you. McKay Wrigley says, can we talk about how unbelievably stupid the market reaction to DeepSeek was?

Speaker 1:

Compute got more useful and everyone sold. Literally today for Grok three, Elon goes, oh, yeah, we bought another hundred k of H one hundreds, and mentioned a new 1.2 gigawatt data center. Nobody is gonna buy fewer GPUs, and the price of NVIDIA is now exactly what it was a month ago, right before the DeepSeek drop. And I think you got the

Speaker 2:

bottom actually. Yeah, when when

Speaker 1:

Bottom ticked it.

Speaker 2:

Launches Sunday, they have their fake run up in the App Store, everybody is sort of talking broadly about Jevons paradox and all this stuff. For a company to go down 15% like that, it just seemed way oversold. And I have no idea what NVIDIA stock does over the long term, but, yeah, I just bought right as everybody was panicking, and it's back up 15% since then. But,

Speaker 1:

Making it look easy, Jordy.

Speaker 2:

Making it look easy. But we are in the news business. We are in the casting business.

Speaker 1:

Not the financial advice business. So That's right. Good luck out there. Stay safe. Avoid doing anything that your mother would not approve of.

Speaker 2:

Well said.

Speaker 1:

Have some good etiquette. Folks in your opinion.

Speaker 2:

Have you coined that properly? I'm working on it. What would your mother say?

Speaker 1:

Yeah. Do you

Speaker 2:

have too many

Speaker 1:

That's my that's my current philosophy for how you should behave yourself, especially on the Internet. What would your mother say?

Speaker 2:

Make your mom proud.

Speaker 1:

What would your mother say about you launching that meme coin and rugging it immediately? What would your mother say about you getting in a petty fight with another technology leader publicly on x? What would your mother say about you asking your date to split the check? Embarrassing.

Speaker 2:

Don't do that. Embarrassing. You gotta you gotta dive into that one next time.

Speaker 1:

Yeah. We'll cover it eventually. We're gonna be giving a big etiquette deep dive, big manners update, giving you the, you know, how to use each fork, how to tie a tie. We're gonna be working on that. How to drive stick.

Speaker 1:

These things are important.

Speaker 2:

How to make your bed.

Speaker 1:

Anyway, leave us a five star review on Apple Podcasts and Spotify, and keep watching us and expect a lot of new updates. We're cooking.

Speaker 2:

We are cooking. Thanks for watching. We will see you tomorrow.

Speaker 1:

See you tomorrow. Bye.