TBPN

Diet TBPN delivers the best of today’s TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after.

Described by The New York Times as “Silicon Valley’s newest obsession,” the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

Speaker 1:

Well, there is a whole bunch of news to run through. The first story is that Meta employees are apparently tokenmaxxing and competing on an internal leaderboard called Clawdonomics for status as a token legend. This is from The Information. Over a recent thirty-day period, total usage on the dashboard topped 60,000,000,000,000 tokens, and this sparked a huge debate over how much Meta is actually spending with Anthropic. Of course, the other big news is that Anthropic just passed $30,000,000,000 in run-rate revenue with probably one of the steepest revenue growth charts in human history.

Speaker 1:

Absolutely legendary.

Speaker 2:

Yeah. This chasing status as a token legend reminds me of, kind of, maybe it was a year ago at this point, you were saying, like, will tokens ever become like eyeballs, the way eyeballs were during the... Yeah.

Speaker 1:

The .com era.

Speaker 2:

Yeah. Right? Just optimize for eyeballs. Yeah. Obviously, not every eyeball visit to a website is created equally.

Speaker 2:

Yep. But people were optimizing for eyeballs. And now, you know, I don't... The reaction to this, I think, has been generally, at least online, I guess, reassuring.

Speaker 1:

A lot

Speaker 2:

of people are saying. Gary Basin says, you, why? Marty says, Goodhart's law: when a measure becomes a target, it ceases to be a good measure. And so who knows what's actually going on internally. But we do know Zuck is pushing the entire company to be as AI native as possible.

Speaker 2:

And this guy loves spending money, too, right?

Speaker 1:

I have a crazy bull case here that I will run through. Let's get through some of the story. First, we got to pull up this comic from xkcd in the comments here. When a metric becomes a target, it ceases to be a good metric. It's right under the leading post.

Speaker 1:

There we go. And it says... and the other counterparty says, sounds bad. Let's offer a bonus to anyone who identifies a metric that has become a target. It is good. I don't think that's going

Speaker 2:

on here. Lighter was texting a friend at Meta and sent the post we just discussed on tokenmaxxing. Yes. And said, true? And the person said, yes. It's pretty sad.

Speaker 2:

But I mean, imagine. So so Meta has been there's been rumors

Speaker 1:

Yeah.

Speaker 2:

Of Meta layoffs for a while now. Sure. Unclear how many, if any, have happened. But if you're sitting there, and Zuck is saying, we need to get AI native.

Speaker 2:

And then suddenly there's a token leaderboard.

Speaker 1:

Yeah.

Speaker 2:

You do not wanna be at the bottom of the list. I will say that. Right? Yeah. You know, you don't wanna be the guy who's having to explain, like, no, well, I'm actually getting the most out of each incremental token.

Speaker 2:

The other guy is just like, set up an agent that just counts one Just checks that every single line over

Speaker 3:

and over

Speaker 2:

and over.

Speaker 1:

Or something. Yeah. Yeah. I mean, you have to measure the actual output, the impact on the business. I mean, fortunately, Meta has been a huge beneficiary and a huge winner of AI.

Speaker 1:

The ads are getting better targeting. They're delivering more ads, and the quarterly earnings have been strong. The headline number here that sort of took everyone by surprise is that Meta staff used 60,200,000,000,000 tokens over thirty days, which would pencil out to about one-third of Anthropic's ARR, was the number that was thrown out. But both of these claims are pretty questionable. And so Tyler did some back-of-the-envelope math to show that the one-third revenue estimate is way, way too high.

Speaker 1:

And I don't know, do you want to take us through some of the reasoning there? And then we can talk about the knock on effects of all this.

Speaker 3:

Yeah. Okay. So 60,200,000,000,000 tokens is the number. Like, we can just assume that's true. Mhmm.

Speaker 3:

So basically, I'm going to assume all the employees are basically just using Opus four point six.

Speaker 1:

Yeah.

Speaker 3:

So then there's basically three numbers you need to look for

Speaker 2:

Yep.

Speaker 3:

In like the API cost. So there's like input

Speaker 1:

Yep.

Speaker 3:

There's a cached input and then there's output.

Speaker 1:

Sure.

Speaker 3:

So for Opus four six, it's $5 per million tokens on input. Yep. It's 50¢ per million tokens on cached input, and then it's $25 on output.

Speaker 2:

Yeah.

Speaker 1:

So if you multiply that 60,200,000,000,000 tokens at the highest possible rate, $25 per million tokens, then you do get, like, a billion dollars. Yes. Which is crazy.

Speaker 3:

That's not what's happening. That's the crazy number. But, like, you have to think about it. Like, you know, if you're using, like, Claude Code or... Yep.

Speaker 1:

Or any

Speaker 3:

of these coding agents, you know, the vast, vast majority of the tokens used is input. Yeah. Because, like, so imagine you're working on some, you know, coding file. Right? Yeah.

Speaker 3:

There's like a thousand lines of code in the file. Maybe the model's only changing like 10 at most. Right? Yeah. So that's a very small percentage.

Speaker 3:

So the output tokens are gonna be a very small percentage of the total tokens going in. Right? OpenRouter publishes, like, a lot of this data, so you can kind of use those ratios to figure out what are the actual numbers of, you know, input versus cached versus output. Yep. So this is just

Speaker 1:

to get sort of, like, market-standard averages, baseline benchmarks. Yeah. Now Meta could be using these tools differently, but if we are to assume that the shape of their agentic coding efforts is similar to the average, this is what the numbers look like.

Speaker 3:

So maybe there is, like, some bad incentive where people are just telling the model, like, count up to a billion and then do it again.

Speaker 1:

Yeah.

Speaker 3:

So then it's like totally skewed. But if they're doing it relatively normally... Yeah. So on OpenRouter, it's about 98.9% of all tokens are input.

Speaker 1:

Input.

Speaker 3:

And that's including cached ones.

Speaker 1:

Yeah. Because you're stuffing the context window with all your code base or Correct. Huge amount of context.

Speaker 3:

Yeah. It's going around finding... That's not changing every time, so you can cache it.

Speaker 1:

Yep.

Speaker 3:

Yep. Yep. So then it's like 1.1% is output. Yep. So basically, if you get all the numbers, a million tokens is gonna be, like, around $2.26.

Speaker 3:

Yeah. So that'll get you to something like $136,000,000 a month for the 60,000,000,000,000 tokens. Yep. Right? So that's, like, way less than the 900.

Speaker 3:

Yep. So that would be $1,600,000,000 a year, like, run rate.

Speaker 1:

It's still huge.

Speaker 3:

That's a lot. But that is still the max. They're

Speaker 1:

in the top.

Speaker 3:

Yeah. That's assuming that the kind of breakdown of how they're using the tokens is the same as OpenRouter. Mhmm. Which I think it's not. If we assume that, that's like $4,500 per engineer, if there are, I think, 30,000 engineers at Meta. Yeah. Every month.

Speaker 3:

$4,500 on tokens.
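Tyler's back-of-the-envelope math here can be sketched in a few lines of Python. The per-million-token prices and the 98.9%/1.1% input/output split come from the discussion; the uncached-versus-cached split of the input traffic is a hypothetical assumption chosen to land near the roughly $2.26-per-million blended rate quoted on the show:

```python
# Back-of-the-envelope sketch of the token-spend estimate discussed above.
# Prices are the Opus rates quoted on the show; the uncached/cached input
# split is an ASSUMPTION, not a published OpenRouter figure.

TOKENS_PER_MONTH = 60.2e12      # ~60.2 trillion tokens over thirty days

PRICE_INPUT = 5.00              # $ per million uncached input tokens
PRICE_CACHED = 0.50             # $ per million cached input tokens
PRICE_OUTPUT = 25.00            # $ per million output tokens

FRAC_OUTPUT = 0.011             # ~1.1% of tokens are output (OpenRouter-like mix)
FRAC_UNCACHED_INPUT = 0.33      # assumed cache-miss share of all tokens
FRAC_CACHED_INPUT = 1.0 - FRAC_OUTPUT - FRAC_UNCACHED_INPUT

# Blended price per million tokens across the whole mix
blended_per_million = (FRAC_UNCACHED_INPUT * PRICE_INPUT
                       + FRAC_CACHED_INPUT * PRICE_CACHED
                       + FRAC_OUTPUT * PRICE_OUTPUT)

monthly_cost = TOKENS_PER_MONTH / 1e6 * blended_per_million
annual_run_rate = monthly_cost * 12
per_engineer = monthly_cost / 30_000    # assuming ~30,000 engineers

print(f"blended rate:  ${blended_per_million:.2f} per million tokens")
print(f"monthly spend: ${monthly_cost / 1e6:.0f}M (${annual_run_rate / 1e9:.1f}B/yr)")
print(f"per engineer:  ${per_engineer:,.0f} per month")
```

Under these assumptions the blended rate comes out around $2.25 per million tokens, roughly $136,000,000 a month, about $1,600,000,000 a year, and about $4,500 per engineer per month, in line with the figures quoted.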

Speaker 1:

$4,500. That's actually in line with what I've heard a lot of other people spending in terms of their token budgets. Yeah. 5,000

Speaker 3:

bucks is not, like, absurd. Not absurd. If you're trying to incentivize people to use them.

Speaker 1:

Yeah. Yeah. No. Not at all.

Speaker 3:

But so you can actually see the breakdown on OpenRouter of how people are using tokens. The biggest plurality is OpenClaw, which is 17.6%. Yeah. And then Claude Code is 16.8%.

Speaker 1:

Sure.

Speaker 3:

So I think if you think about Claude Code, you would imagine that, like, in Claude Code, the kind of percentage of cached tokens is gonna be higher than in OpenClaw.

Speaker 1:

Yeah.

Speaker 3:

So I think Meta's usage is actually gonna be more heavily based on the cached tokens. Sure. So if you do it just based off, like, Claude Code usage, you'd actually see a higher percentage of the total tokens be input tokens. So it's only, like, 0.8% is the output.

Speaker 1:

Yeah.

Speaker 3:

So then if you run all those numbers through again, it's only like $55,000,000. Yeah. A month, which would be $669,000,000 a year. Yeah. And each engineer would be, like, $1,800.
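The cache-heavier, Claude-Code-style scenario runs through the same arithmetic. The roughly 0.8% output share comes from the discussion; the 5% cache-miss rate on input is a hypothetical assumption chosen to land near the roughly $55,000,000-a-month figure quoted:

```python
# Cache-heavy variant of the same estimate: if Meta's traffic looks more like
# Claude Code (almost everything cached), the blended rate drops under $1/M.
# The cache-miss fraction here is an ASSUMPTION, not a published figure.

TOKENS_PER_MONTH = 60.2e12
PRICE_INPUT, PRICE_CACHED, PRICE_OUTPUT = 5.00, 0.50, 25.00  # $ per million

frac_output = 0.008            # ~0.8% output share quoted on the show
frac_uncached = 0.05           # assumed: only ~5% of tokens miss the cache
frac_cached = 1.0 - frac_output - frac_uncached

blended = (frac_uncached * PRICE_INPUT
           + frac_cached * PRICE_CACHED
           + frac_output * PRICE_OUTPUT)

monthly = TOKENS_PER_MONTH / 1e6 * blended
annual = monthly * 12
per_engineer = monthly / 30_000   # again assuming ~30,000 engineers

print(f"${blended:.2f}/M tokens -> ${monthly / 1e6:.0f}M/month, "
      f"${annual / 1e6:.0f}M/year, ${per_engineer:,.0f}/engineer")
```

That lands around $55,000,000 a month and roughly $670,000,000 a year, or under $2,000 per engineer per month, in the neighborhood of the numbers quoted.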

Speaker 1:

Yeah. That yeah. That's actually pretty low.

Speaker 3:

Which is like, I think very reasonable.

Speaker 2:

John Chu over at Cosa says, plenty of my Meta friends told me folks have been building bots that just run in a loop burning tokens as fast as they can due to this policy. It's an absolutely stupid policy, and it's similar to how Meta uses lines of code to measure engineering output. Managers are supposed to use it as a proxy and dig in to understand work complexity, but plenty of managers are lazy and just don't. That was in response to Christine over at Linear saying, ranking engineers by token spend is like me ranking my marketing team by who's spent the most money. Yeah.

Speaker 2:

We may not have hit our KPIs, but Joe spent 200,000 on a branded blimp that only flies over his own house. So he's getting promoted to VP.

Speaker 1:

I'm pro branded blimps though. I like that idea. So my take on this was, yeah, it sort of ties to what Jensen Huang was talking about at GTC. He was saying that an engineer that's making $500,000 might soon command something on the order of $250,000 a year in token budget.

Speaker 1:

Andrej Karpathy had a similar line. He said, It's all about tokens. He said on a podcast last month, What is your token throughput and what token throughput do you command? And so Meta actually has two different harnesses internally. They have a version of OpenClaw called MyClaw, and then they also, of course, acquired Manus.

Speaker 1:

But it appears that they're running Claude, maybe Opus, under the hood to actually generate the tokens that come through those harnesses. The interesting thing is that at a $250,000 AI budget per engineer, you're at like $20,000 a month. And so based on Tyler's math, this feels like, okay, there's going to be another maybe 4x to get to Jensen's prediction. I think it makes clearer the strategy with Meta Superintelligence Labs. Because if you're looking at it, it's clear that they're spending hundreds of millions of dollars on this just for internal code-gen tooling, like running their business.

Speaker 1:

They are going to spend an inordinate amount of money on frontier inference. And so, training a model there, they will be able to amortize the training cost of the next model that they build, not just over whether they can get a product out that goes viral and becomes its own standalone chat app that people pay for, or maybe it's ad supported. Just on the internal usage, they could be running a multibillion-dollar token bill that they would have to pay another lab. And so if they develop that internally, it's pure vertical integration. And then you also have everything that's happening on the actual ad targeting and content delivery side.

Speaker 1:

And when you add up all of those, all of a sudden... The big question has been, is Meta going to be able to launch an entirely new AI product like Vibes or something like that? And this is a data point that, to me, says they don't need to. Just from a pure vertical integration story, the investment in MSL can pencil out. What are you

Speaker 2:

laughing? I just want you to get to your schizo theory. What's the schizo theory? That this whole tokenmaxxing thing is, like, a barrage while they distill the model.

Speaker 1:

Oh. Oh. Yeah. Yeah. I mean, there is a world where, if you're running, if you're generating trillions and trillions

Speaker 2:

of tokens of a model. They're like, Meta's really, like, burning through a lot of tokens. And you have to generate everything.

Speaker 3:

It's like,

Speaker 2:

oh, we're just Tokenmaxxing.

Speaker 1:

Yeah. I mean, there's another story about distilling we'll get to later in the show. But there is a question about: if I write an essay and then I have a model rewrite it, those tokens, they are from that model provider. I buy them. They become mine.

Speaker 1:

Can I train on them? That's probably out of terms of service. So you would think no. But you sort of wind up in this Ship of Theseus world where, if Meta pays Anthropic $100,000,000 or $1,000,000,000 to go rewrite every line of code, every email, every Slack chat, every internal message, basically map the entire organization and rebuild it, they wind up with an incredible training corpus that they can use for their next model. But I would imagine that they can't, and I imagine that the enterprise contracts go both ways.

Speaker 1:

The lab can't train on the corporate information. That's standard in all of the enterprise contracts. And I would imagine that the opposite is true as well, although it is this fuzzy Ship of Theseus world where if you're using coding agents to upgrade your infrastructure and then you want to run and train some model on your infrastructure, do you have to pull out the tokens that were revised by the AI lab that you don't have the right to train on? It's all very interesting. Apparently, startups that have gone out of business are able to sell their corporate histories for something like $1,000,000 to data brokerage firms and AI labs now.

Speaker 1:

Have you heard about this?

Speaker 2:

Yeah, heard about it. Yeah. Skeptical? I'm skeptical. I mean, certainly there is a market for it.

Speaker 2:

But... Basically, all the code, the whole database.

Speaker 1:

That a company built over a few years. Maybe they

Speaker 2:

Code, but also usage within Usage. Different enterprise. Yes.

Speaker 1:

All sorts of different stuff.

Speaker 2:

In other news Yes. Intel is joining Terafab.

Speaker 1:

Yes. Let's repeat.

Speaker 2:

Intel is proud to join the Terafab project with SpaceX, XAI, and Tesla to help refactor silicon fab technology. Intel says our ability to design, fabricate, and package ultra high performance chips at scale will help accelerate Terafab's aim to produce one terawatt a year of compute to power future advances in AI and robotics, and throwing up a post of hanging with mister Musk himself.

Speaker 1:

Let's go through the Wall Street Journal's coverage of this. Elon Musk is partnering with Intel on his ambitious Terafab project, which aims to build specially designed chips for SpaceX and XAI as well as for Tesla. In an announcement Tuesday, Intel said it would work with the companies to design, fabricate, and package ultra-high-performance computing chips at scale. The company shared a photo of chief executive Lip-Bu Tan shaking hands with Musk, CEO of SpaceX and Tesla. The partnership is a win for Intel.

Speaker 1:

Intel, which has struggled in recent years, leading the company to cut production capacity even as demand was surging for data center chips and competitors like NVIDIA and AMD have thrived. That was always just such a tough pill to swallow when you would talk to the ASIC companies like Cerebras, and

Speaker 2:

you would

Speaker 1:

say, hey, like, you're doing something new. You're not doing NVIDIA chips. Is there any way you could get off of TSMC? And they're like, no. Like, we still need to be in Taiwan.

Speaker 1:

Obviously, there's a huge geopolitical component here. We can get into all that. But last year, the Trump administration reached a deal to acquire an equity stake in Intel for around $9,000,000,000 to help secure the American chipmaker's business. The US government held 8.4% of Intel's shares outstanding as of March 20, according to securities filings. The figure doesn't include warrants that could increase the government's equity stake in Intel.

Speaker 1:

Terafab represents a step change in how silicon logic, memory, and packaging will get built in the future. Tesla and SpaceX confirmed the partnership in posts on X. Musk unveiled the plans for a single facility in Austin, Texas, to make chips to be used by SpaceX and XAI, which merged in February, as well as by the publicly traded Tesla. He pitched the project as an opportunity to quickly experiment on chip design by designing and manufacturing the chips in one facility. The fab will make chips for use in Tesla's robotaxis, which they're already fabbing, I believe, at Samsung, although they do have the Dojo chips, I think, that are TSMC.

Speaker 1:

Optimus will also need chips; they are planning to use Intel for that as well. So these are two areas of priority for the electric vehicle maker as it shifts its focus to artificial-intelligence-enabled products. It will also make chips optimized for use in space, where SpaceX is planning to deploy huge numbers of satellites capable of handling AI computing tasks.

Speaker 2:

Who else do you think they need to get involved here? Because just the two of these, you know, Intel. Yeah. And Tesla. Yeah. Coming together. It's good to have more involvement.

Speaker 2:

But still, I think the entire project

Speaker 1:

No. We've seen a few of those, like, AI leader gatherings in DC where you see Tim Cook and Sundar and Sam Altman and Dario, and all the leaders are together. And I was always hoping that at one of those dinners, they would say, okay, everyone's going to try and say the biggest number, but this time it's going to be how much you're committing to Intel and how much you'll buy from them if they come online with a competitive product. Because the demand side has always been a big problem for Intel. They have the capability. They have the plans to build a three-nanometer plant, like a frontier plant, a leading-edge fab.

Speaker 1:

But every other company has been so tied to TSMC. But I think everyone now acknowledges that TSMC is not investing super heavily in CapEx. They're not scaling up as much as the industry would like them to. And so lots of folks have sort of signaled towards a chip bottleneck coming in the next few years, and Intel has the opportunity to communicate that. This seems like the first step in that chain.

Speaker 1:

So companies, including Tesla, often design their own semiconductors but need a supplier to actually make them in a so-called chip fab. Musk's companies have sourced chips from a wide range of suppliers, including NVIDIA, Samsung, and Taiwan Semiconductor. Oh, I got it. Musk said that Terafab is needed because his companies' demand for chips is slated to far outstrip the supply it gets from partners. I was listening to Chuck Robbins from Cisco talk about data centers in space, and the heating issue came up.

Speaker 1:

And he was like, yeah, I don't really have, like, a solid answer for that yet. But I do think that if you are bullish on data centers in space, you have to start with the fact that Starlink works in space currently. Because it is doing compute. It's

Speaker 2:

not... You couldn't possibly be putting gigawatts up there. Let's be honest, John. Couldn't possibly put a computer up there.

Speaker 1:

Yeah. Like, there are computers, but, like, they can't inference frontier models. They can't, you know... it's not gigawatts in space yet. But there are, I believe, across the entire Starlink cluster, megawatts of compute in space with solar panels. And they do heat up, because you are running a chip that routes packets across the Internet from one satellite to the next to get you your Internet via Starlink.

Speaker 1:

And so it's not that it's a solved problem. It's that we are actually on a path to, you know, deploy some level of compute in space. Tyler?

Speaker 3:

Yeah. I mean, we've seen, like, Philip Johnson... like, there are chips in space right now. Like, there are GPUs, I think, aren't there? He said there were, like, five or six H100s. Right?

Speaker 3:

Yeah. Yeah. So, like, they do work. It's like the... Yeah. I think most people's problem with space data centers is that economically, it doesn't make any sense.

Speaker 1:

Well, so yes, that is the correct angle. But a lot of

Speaker 3:

people are getting... Not that it's, like, 100 percent physically impossible.

Speaker 1:

No, no. There is a whole conversation about, like, it is impossible. And you need to move past that into the economic equation, which then gets you into timelines and actually thinking about what needs to happen to dissipate that heat. But clearly, yes, you can. I mean, we can put humans in space on the ISS and cool that.

Speaker 1:

Like, we have created ways to move heat around in space for decades. It's obviously a new challenge. But I think starting with the baseline of, like, there is compute happening in space right now. We're going to try and, I mean, Elon wants to, like, 1,000x it, 100,000x it, million-x it. I don't even know what the scale is, but orders of magnitude.

Speaker 1:

And so there's new engineering challenges.

Speaker 2:

Speaking of space, it looks like Elon is going to use SPCX as the ticker for the SpaceX IPO, which he had to acquire from Matt Tuttle, hence the ETF's ticker change shown below. Eric from Bloomberg says, we predicted this could happen in a December note. Nice catch by Will, who famously gave the Meta ticker to Zuck. I did not know that Will Hershey had the Meta ticker previously. We know somebody that's on

Speaker 1:

Who had the Meta ticker?

Speaker 2:

A guy named Will Hershey.

Speaker 1:

Oh, interesting.

Speaker 2:

There's a company called Round Hill. But we know somebody who's

Speaker 1:

I think it was Matt Ball.

Speaker 2:

We had somebody come here. Yes. Outside of show hours. And say that they were squatting on a bunch of tickers. And the idea... I think what might be the reality is that it needs to be further along than just reserved. I don't know

Speaker 1:

I think so.

Speaker 2:

Having it. If you're a startup today, you can go reserve your ticker today. Okay. But I'm not sure that actually gives you enough leverage when Elon comes knocking, ready for an IPO.

Speaker 2:

That you actually have priority. Alright. We gotta talk about a corporate retreat that went badly wrong.

Speaker 1:

Okay.

Speaker 2:

Technology company Plex took its 120 employees to Honduras for a week-long bonding experience. It was a disaster from the moment they arrived. Senior executives at the tech company Plex were eager to treat their 120 fully remote staffers to a week-long corporate getaway in a tropical paradise. Pop quiz. Tyler, do you know what Plex is?

Speaker 1:

I don't know about Plex.

Speaker 3:

No. Have we seen Plex before?

Speaker 2:

I don't know either. So we all failed. But now it's your job to figure it out. We'll continue. The plan for the Honduras trip was simple.

Speaker 2:

Company meetings and team Is this

Speaker 1:

a streaming company?

Speaker 2:

By powdery soft beaches during the day and island fun at night, at a cost of roughly half a million dollars to the company. They'd built the trip around a Survivor theme with teams and challenges. But it'd be fun, not too physically grueling. The CEO of Plex, a free streaming platform, would play a role similar to that of Survivor host Jeff Probst. Perhaps the executive should have taken it as a sign that just as the first bus of staffers pulled up to the resort, the chief executive was already in his hotel bathroom experiencing the initial waves of a violent stomach infection.

Speaker 2:

What followed was a comedy of errors including military drills that outpaced anything this group of office workers had in mind, a rogue porcupine, stranded airplanes, and one syringe to the butt of an employee. Corporate retreats are generally assumed to be torture, or at least a semi-stressful chore, what with their forced-fun activities and hybrid work-play environments that leave workers confused about boundaries. Is that, like, the industry standard? That seems wild.

Speaker 3:

I don't know.

Speaker 1:

I don't think I've ever been on a corporate retreat. I've been on some like Founders Fund events, those aren't really retreats. Those are more just like conferences. But I don't know. Corporate retreat seems I don't know.

Speaker 1:

Unexplored territory for me.

Speaker 2:

It's no wonder the new season of Jury Duty, a comedy series that tricks an unsuspecting non-actor into believing his off-the-wall fictional circumstances are actually happening, is set at a corporate off-site. But in real life, Plexcon twenty seventeen beats anything on TV. Here's the story of an all-staff company getaway told by six people who were there, a trip where most everything that could go wrong did go wrong. Nearly a decade later, they're still working together. Yeah.

Speaker 2:

Still talking.

Speaker 1:

So it it

Speaker 2:

had It's crazy that they

Speaker 1:

So was a bonding experience.

Speaker 2:

Yeah. Well, yeah. It's crazy that this is now coming out. So Sean, 42, founder of Monoker Partners, an independent corporate retreat agency that planned the trip. About three weeks before we arrived in Honduras, we got an email from the hotel's general manager that said, I will be departing.

Speaker 2:

I wish you the best with your retreat. I knew something was off. Three days later, another email. The head chef was no longer going to be at the hotel. Scott, 52, chief product officer and Plex co-founder.

Speaker 2:

We get there. We've got to take the bus from the airport. Dirt roads, you start getting closer, and there are guard towers around the property. People with machine guns and stuff. A lot of people were like, where are we going?

Speaker 2:

Keith, the CEO of Plex, 54. We usually go a day early and we set up. If there's any little thing, we have to get it right just so the employees have the best experience possible. Keith woke up the day that people were coming in, Sunday morning, and he is sick as a dog. Everyone there is fried.

Speaker 2:

Basically, people are telling me, don't eat the vegetables, don't eat the

Speaker 1:

vegetables? That's like the

Speaker 2:

No. No. No. Because they they clean it. They wash it in water.

Speaker 2:

Oh. It's usually not filtered water. Right? Because it would just be kinda crazy to

Speaker 1:

Yeah. Yeah. Here it is. I I So

Speaker 2:

I've gotta have a salad. Just one salad. So I got E. coli, which is maybe the worst thing you could get possibly ever. Just as people were arriving on the buses, I had lost eight or 10 pounds. They had a doctor come to me, which apparently is pretty standard.

Speaker 2:

They nailed an IV bag to the bed post. Nailed it. People are arriving for a party that night. The next day is Survivor theme kickoff. There's not one person on the planet more excited about Survivor than Keith and his wife.

Speaker 2:

They have watched every single episode. My wife and I met Jeff, the host of Survivor. What I wanted is, when everybody shows up, I do a Jeff: Welcome to the island. Here's the theme for the week.

Speaker 2:

But Scott got to do it. The opening Survivor thing was a contest where people on their different teams open up a platter. You have to eat what's on the platter. Sean, who's the Plex head of business development.

Speaker 2:

Who's gonna call? Yeah. Somebody is cold texting me. Oh, yeah. Pitching me their startup, and they've called me a bunch of times today.

Speaker 1:

Wait. Wait. Is it actually them or is it their AI agent?

Speaker 2:

I I wish I could pick up. It's just like a little bit Yeah.

Speaker 1:

It's a little bit much to pick up

Speaker 2:

But, yeah, cold texting somebody, like, getting their number, I don't think that's the new meta. No. It's bold.

Speaker 1:

We heard from an executive in tech that they are getting dozens of emails every single day trying to recruit them. And every email comes from a new Gmail account that's unregistered, brand new. But it's all LLM-written, very different, doesn't really do all the research but has a few keywords in there. And it's clear that someone is building sort of, like, a next-gen recruiting agency that's basically just a lot of spam. Feels like the end result will be a return to relationship building and not, like, broad top of

Speaker 2:

I should read the cold text from this morning. I have nothing against cold email and just, you know, being bold, but I did read this out loud to you, John, so I'll read it to everyone.

Speaker 1:

Mhmm.

Speaker 2:

So I got a text from an unknown number today at 7AM. Alright, Jordy. Good news or bad news first? This is blank. And I'll leave the name out.

Speaker 2:

And then I just get a PDF of a deck and then a text. Alright, Jordy. The bad news is this was an unplanned introduction. And on the surface, probably lukewarm outreach. The good news is that there's zero doubt you're now in touch with the founder with the most grit of anyone you've interacted with the past twelve months.

Speaker 2:

And likely, anyone you'll interact with over the next twelve months. 50,000 seed round passes over the past ten months. Here to make 50,001. So so, you know, you should be coming in being like, I've been passed on 50,000 times.

Speaker 1:

Yeah. I'm hoping it gets through.

Speaker 2:

That gets through.

Speaker 1:

That seems like a rough estimate though.

Speaker 2:

Ten months of feedback and iterations have made it better. So you're seeing a more quality presentation than rejection 10,000. Looking forward to your message.

Speaker 1:

The chat wants the builder to pitch. They want you to hear this out. Everyone's in favor of this. The chat wants you to get on the phone with them. Do it live.

Speaker 1:

I mean, they want it done live. I don't know if you should do it live, but you should take the call.

Speaker 2:

I will take the call. I will take the But

Speaker 1:

Let's go back to the corporate retreat.

Speaker 2:

So they hire a former Navy SEAL

Speaker 1:

Okay.

Speaker 2:

To basically haze the team on the beach. Mhmm. And you can pull up a picture, an image here.

Speaker 1:

The quote is, this is not a super fit group in general. One of our biggest mistakes was hiring a former Navy SEAL to pump the team up. As I'm in my room dying, I could hear them out there doing all the drills and yelling. And so I'm in here thinking, this is terrible. It sounds terrible out there too.

Speaker 1:

We're doing army crawling on the beach. It was 100 degrees. I bailed out partway through. I went into the ocean just to cool off. I went in probably on all fours because I was tired.

Speaker 1:

It's not a super fit group in general, so the ex-Navy SEAL is like, we can tone it down, no problem. We get up there and it's hot and humid and people are passing out. I don't think he'd ever seen quite such an unfit group. We ended on, I guess, what's probably a golf course.

Speaker 1:

On command, everyone had to hit the grass. Everyone's silent. We're pretending we're Navy SEALs, but I happened to land in the wrong spot. I'm just like, Oh, God. What is happening?

Speaker 1:

I was sitting on a fire ant hill. I was wearing shorts. I jumped up, and I had hives and bumps from the bites. This is ridiculous. Someone saw an alligator on the golf course.

Speaker 1:

Sounds like a ridiculous...

Speaker 2:

There was a porcupine that fell through one of the ceilings.

Speaker 1:

This is like a Fyre Festival for corporate retreats.

Speaker 2:

The Fyre Festival of corporate retreats.

Speaker 1:

Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview of its new AI model, called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet-owned Google, and the Linux Foundation. Cybersecurity researchers and software makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption. Security experts have predicted that AI models will discover an avalanche of software bugs, and the effort is meant to help companies stay one step ahead of cybercriminals and other threats. This feels like a very good rollout strategy generally, both because we've seen a huge number of cyberattacks, hacks, and accidental releases.

Speaker 1:

We had a member of the security team from CrowdStrike on the show last week talking about the rise in cyberattacks broadly. Getting the most frontier models in the hands of big companies early is great from that perspective, and then also just great as a product demo, which will get the entire organization excited about deploying the technology broadly. So as a B2B go-to-market motion, this makes a ton of sense.

Speaker 1:

In some other more positive news, OpenAI, Anthropic, and Google are uniting to combat model copying in China. This is a bigger discussion around AI safety. We've talked about this.

Speaker 3:

Who knew? Who knew

Speaker 1:

that you could get along. Yes. Yeah. I mean, I'm sure people in the chat have seen the New Yorker article where there's just tons and tons of quotes from various AI leaders, all, you know, upset with Sam Altman. And the inter-AI drama has been bubbling up since the dawn of OpenAI.

Speaker 1:

Like, OpenAI was started as a reaction to Google, then Anthropic leaves and teams up with Google. And then Elon doesn't like Anthropic. And then Ilya Sutskever and Mira leave, but they don't join Anthropic. And so there have been so many personalities and so many disputes. I feel like the takeaway is that this is all extremely high stakes.

Speaker 1:

There's a technological transition happening, a huge amount of money on the table, a huge amount of influence on the table. And so everyone is sort of clamoring for their share, and it's creating a lot of friction. So rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try and clamp down on Chinese competitors extracting results from cutting-edge U.S.

Speaker 1:

artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter. The rare collaboration underscores the severity of a concern raised by U.S. AI companies: that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk.

Speaker 1:

And so I was trying to square this question of distillation and model commoditization with the news that Anthropic has reached $30,000,000,000 in run rate and has agreements with Google and Broadcom for multiple gigawatts of TPU capacity. Like, clearly there is insatiable demand for frontier tokens, frontier models. They're incredibly expensive to train. We saw in The Wall Street Journal that these

Speaker 2:

Expected training costs from

Speaker 1:

Yeah, it was training and inference, but it was hundreds of billions of dollars. And so the hope is that you're able to amortize that over at least a couple of years. But, you know, the shelf life of a model after you train it is pretty limited if you're being commoditized and copied. If you're being distilled, it's even faster. At the same time, just staying on the frontier clearly leads to an incredible ramp in revenue. So is commoditization a real problem?

Speaker 1:

It feels like it's almost more of a problem from an AI safety perspective, because you can't have the geopolitical conversation, like what Bernie Sanders is proposing, around different labs working together, potentially pausing or slowing down, or even just adding more constraints and reviews before models get released. It's harder to do that if you have a different country that's racing ahead and moving much faster and trying to close that gap. Leave us five stars on Apple Podcasts and Spotify. Sign up for our newsletter at tbpn.com, and we will see you tomorrow.

Speaker 1:

Goodbye.