TBPN

Diet TBPN delivers the best of today’s TBPN episode in 30 minutes. TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays 11–2 PT on X and YouTube, with each episode posted to podcast platforms right after.

Described by The New York Times as “Silicon Valley’s newest obsession,” the show has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

TBPN is made possible by:

Ramp - https://Ramp.com

AppLovin - https://axon.ai

Cisco - https://www.cisco.com

Cognition - https://cognition.ai

Console - https://console.com

CrowdStrike - https://crowdstrike.com

ElevenLabs - https://elevenlabs.io

Figma - https://figma.com

Fin - https://fin.ai

Gemini - https://gemini.google.com

Graphite - https://graphite.com

Gusto - https://gusto.com/tbpn

Kalshi - https://kalshi.com

Labelbox - https://labelbox.com

Lambda - https://lambda.ai

Linear - https://linear.app

MongoDB - https://mongodb.com

NYSE - https://nyse.com

Okta - https://www.okta.com

Phantom - https://phantom.com/cash

Plaid - https://plaid.com

Public - https://public.com

Railway - https://railway.com

Restream - https://restream.io

Sentry - https://sentry.io

Shopify - https://shopify.com/tbpn

Turbopuffer - https://turbopuffer.com

Vanta - https://vanta.com

Vibe - https://vibe.co


Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.

Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.

Speaker 1:

It was a massive weekend, so much news. But we missed you. We missed you on Friday. We were traveling. We went to Montana.

Speaker 2:

Terrible day to be out.

Speaker 1:

Terrible day to be out because it was

Speaker 2:

Every single time. Every time we've had an off day, it ended up being a massive news day. So, lesson:

Speaker 1:

Yeah.

Speaker 2:

Never take a day off.

Speaker 1:

Yes. Never take a day off. Truly. What an absolutely crazy weekend. Of course, there's the war with Iran.

Speaker 1:

The big news in tech was: US halts use of Anthropic AI after tension over guardrails. So this is in The Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it.

Speaker 1:

We don't want it. And we will not do business with them again.

Speaker 2:

No. We don't negotiate with terrorists.

Speaker 1:

Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six-month phase-out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time. But I guess a lot of this has to do with, like, FedRAMP and actually getting

Speaker 2:

But this is a lot more than switching to a new model to run deep research reports

Speaker 1:

Yep.

Speaker 2:

With this, you're involving classified systems. Sure. The context that people didn't have last week was that the United States was headed to war.

Speaker 3:

Yeah.

Speaker 2:

Right? And so even having that context, I feel like, is pretty important.

Speaker 3:

Mhmm.

Speaker 2:

Right? It sort of explains the 5 p.m. deadline, the urgency. Anthropic had taken issue with how their products were used in the Maduro raid.

Speaker 2:

There's a new conflict that's unfolding. And so that makes the aggressive timeline make a lot more sense. It also makes the six-month phase-out make more sense, because national security is on the line.

Speaker 2:

This morning, Scott Bessent said: at the direction of the president, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. The American people deserve confidence that every tool in the government serves the public interest. And under President Trump, no private company will ever dictate the terms of our national security.

Speaker 2:

Yeah. The US federal housing giants Fannie Mae and Freddie Mac are also terminating the use of Anthropic products, which was announced this morning.

Speaker 1:

Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of Anthropic technology. So you would expect to see these statements come out from sort of every different federal agency as they get their transition plans together and figure out the requirements for their particular agency, because I imagine some agencies aren't operating in classified environments. It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly.

Speaker 1:

Some of them, it's going to be a longer plan. But they're all getting on board, and there's been a big debate over how Dario has handled this: where is he in the right, where is he in the wrong, where has the government potentially overstepped? Have they been too aggressive? Or are they doing everything appropriately? Everyone is weighing in, and we're gonna take you on a whirlwind tour of everyone's opinions, share some extra context, and try to dig into what's actually at stake, what's actually going on.

Speaker 1:

In many ways, Ben Thompson does a great job sort of painting the broadest picture: like, what if this is really nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is that you're talking about a $200,000,000 contract for a company that does $10,000,000,000 in ARR. This is 2% of revenue. In many ways, it's a bump in the road.

Speaker 1:

And so I think a lot of people will be squaring how serious this is for Anthropic, what this means for the other foundation model companies, and what this means for the future of the relationship between tech and Washington, D.C. But there's a lot more context. The way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and wrestling with some of the same questions that people were wrestling with online. The big one was just: how should a private company interface with the government?

Speaker 1:

Like, I am an American. I've run businesses. I've never actually sold anything to the government. But hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TBPN or Lucy products or any other consumer packaged goods product that I've made. My assumption is that private companies should have very little say in how the government uses those products.

Speaker 1:

And I was trying to zoom out and think about it, like: AI is so complicated because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There are a lot of different things that AI means. And in some scenarios, it's, like, super critical, really complex. And in other ways, it's just a product, a service, like an Excel sheet, like a Microsoft Windows installation, like a car.

Speaker 1:

And so I was thinking, like, if I was the CEO of Ford and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer. I probably shouldn't say, no. No.

Speaker 1:

No. I don't approve of what the government's doing, so I'm just not gonna sell you any Mustangs to drive around on the military bases, because I don't like the military. Then if they ask me, hey. We love the Ford Mustang. We love the F-150.

Speaker 1:

We love the Explorer, but we're going to war, and we want you to put bulletproof glass on there and armor. That seems like a different discussion. That seems like I might need to, you know, set up a different manufacturing line. I might need a different assembly line. Like, the car is going to be heavier.

Speaker 1:

And if I put bulletproof plating on all the cars, well, like, a lot of families are going to be like, I don't want an armored

Speaker 2:

It's going to hurt my business.

Speaker 1:

Yes. It's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government that's asking for that particular contract. And there's actually a history of this.

Speaker 1:

Like the Humvee. Of course, the Hummer brand was owned by General Motors, and that brand has since separated. And now most military vehicles are made by defense contractors, but there is some bleed-over. And there are some times when private companies do dual sourcing or dual-use technologies. But all of that is just, like, a discussion. And that cost should be part of a new contract, effectively, in my case.

Speaker 1:

And this was loosely what was happening.

Speaker 2:

Yeah. And Dario, in the CBS interview, said, quote, We are a private company. We can choose to sell or not sell whatever we want. There are other providers.

Speaker 1:

At the same time, and we'll get to the actual CBS interview, he said: Anthropic has been one of the most proactive AI companies in working with the U.S. government. We were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone on in the AI community broadly, like what happens at the edge.

Speaker 1:

This was sort of predictable that you would get to

Speaker 2:

Yeah. This was the moment he had been waiting for.

Speaker 1:

In many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go into the lion's den. Instead, it was like: we're leaning in with the government, we're deploying on classified clouds, training custom models, but we still want authority over the final sticking point, how these models are deployed and what they're used for. And that feels a little odd. In the Ford example, like, if I sell them a Ford F-150 and they say, hey.

Speaker 1:

We're gonna take it to Iraq and and and go do a military mission. I'm gonna be like, look. Like, it's not ready for that. It's not armored. You shouldn't do that.

Speaker 1:

But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation, but it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty, based on what you know about the capabilities of the vehicle. I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship.

Speaker 1:

Most salespeople would just be like, Yeah, everything's great. You can use it for anything. They overpromise and then underdeliver. He's doing the opposite. But it's certainly responsible if that's his true belief.

Speaker 1:

If he believes that these models are not good for a particular use case, then telling your customer, hey, it's just not ready for that, you're just going to have a bad time, it's not going to work, is a fine thing to communicate as the CEO of a company who's selling a product. But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly.

Speaker 1:

And then I think the government should be able to determine when and where they're effective. They can't break the law, and Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. It's not unreasonable to share the capabilities of your product with the government, which I think is totally fine.

Speaker 1:

So there were two main sticking points that they went back and forth on: no mass domestic surveillance, and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract.

Speaker 2:

Well, here's the thing, though. So we know that Anthropic took issue with the way that Claude was used in Venezuela.

Speaker 3:

Yeah.

Speaker 2:

Yeah. And the Department of War would have known that, hey, we're going to war. Right?

Speaker 2:

Yeah. You can imagine that Anthropic, a private company, does not know that. And so they have this deadline

Speaker 1:

There's this information asymmetry.

Speaker 2:

Yeah. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems

Speaker 2:

For this conflict. We now know, the president said this morning, that the war is gonna stretch four to five weeks. Right?

Speaker 2:

I think on Friday, we all assumed that it was gonna be, you know, in and out super quickly.

Speaker 1:

Yep.

Speaker 2:

So the timeline is extending. And the Department of War is sitting there being like, we need to know that the provider of these AI systems is gonna be reliable.

Speaker 1:

Yeah.

Speaker 2:

Just a little bit ago, they took issue with it.

Speaker 1:

Yep.

Speaker 2:

Right? Can we count on them? They start this kind of renegotiation process

Speaker 2:

to try to build up confidence that, hey, we can rely on these systems in an active conflict. In a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict. Right?

Speaker 2:

And so Anthropic is looking at this in a different way, and in some ways it felt like they were kind of not respecting the process, or even the deadline, right? So Emil Michael came out Friday night and said: it's 5:13, thirteen minutes past the deadline. I'm trying to get in touch with Anthropic. I try to get on the phone with Dario.

Speaker 2:

Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate whether or not the war is justified, whether we should go. But the Department of War is sitting there being like: you won't even jump on the phone? You're telling me there's a meeting that you're in that's more important?

Speaker 2:

And that just screams to me, like, hey, we can't count on this. We can't count on this provider. Like, we need to take drastic action.

Speaker 2:

Now, this whole supply chain risk designation, we'll get into that later. That's a whole other thing. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider.

Speaker 2:

Yeah. We need alternative solutions.

Speaker 1:

Yeah. Yeah. If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not gonna put the cars on the transport. A lot of people were, like, really keen on boiling down the terms to, like, these two, like, buzzwordy lines.

Speaker 1:

And Palmer Luckey did a great job explaining, like, how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into, like, the idea of deals that stick.

Speaker 1:

You can have the exact same contract line item, or terms of a deal, a signed agreement, with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC. It was 20% and a board seat.

Speaker 1:

And I had another deal with another VC, 20% and a board seat. And the one VC was, like, suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows that there's some trust, some reliability, that when the hard decisions come, they will be made in a legal, logical way consistent with American values, is, I think, what you need to put forward if you want to work with the government effectively. So Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions, like, who is Nicolas Maduro?

Speaker 1:

But I don't know how much of a joke that is. And I also don't know how bad of a thing that is. I actually think... yeah. Tyler, what do you have to say on that?

Speaker 4:

On the context of Venezuela specifically, like, what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid, a Palantir senior executive notified the Pentagon. Yeah. So I think it is kind of blowing it out of proportion to say that, like, Anthropic is against using Claude in Venezuela. Yeah. It's an employee.

Speaker 4:

It's a non-executive.

Speaker 1:

Article about

Speaker 4:

Maybe it's like Dario telling an employee to go check on that. But like, we don't know. It could just be like a random employee.

Speaker 1:

I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's somewhat fair. You could go either way on that.

Speaker 1:

But I just think LLMs are good for that type of thing. Like, what is reasonable is to, you know, expect civil servants, elected officials, and military officials to be knowledgeable about the countries that they are operating in, and LLMs can help with that. And so I feel like that's just a good thing. Like, if you just zoom out and ask: do we want a more knowledgeable and educated government workforce across everything that they do? It seems like absolutely yes.

Speaker 1:

And so there was a perception that this was, like, going to kill Anthropic, because if NVIDIA has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true, apparently. The supply chain risk designation specifically means that if you are a company working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract. But you could use that product in a different piece of your business. So it's still dramatic.

Speaker 1:

I think Dario said it was unprecedented. It's only been used for foreign countries.

Speaker 2:

Yeah. Emil Michael was going through the timeline. He said: today at 9:04 p.m., no response yet to my calls or messages to Dario.

Speaker 2:

Today at 8:25

Speaker 5:

Mhmm.

Speaker 2:

Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of war. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today, I called Dario's business partner at 5:02, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me.

Speaker 2:

I called Dario at 5:01, no answer. I messaged Dario asking to talk as well.

Speaker 1:

So, speaking of Dario on CBS: he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts, and there were a lot of anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. He was basically arguing that LLMs as a class of technology hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly.

Speaker 1:

But I thought it would have been better, much stronger communication, for him to say: hey, look, we're Anthropic. We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Like, our system is awesome at that, but we don't make a product that we'd recommend using for autonomous weapons. He is an expert in LLM capabilities, but he's not necessarily an expert in DoD capabilities. It was odd to hear him painting with a broad brush. He clearly believes, which is fair, it's his belief, that the Department of War should not be using AI broadly.

Speaker 1:

And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated. I think people maybe forgot about that, but there's obviously a lot of nuance and different things.

Speaker 1:

Like, if information is public, does that count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There are a lot of things where surveillance is broadly popular, and other things where it's massively unpopular.

Speaker 1:

And of course, it gets into the actual definitions, twenty lines deep, to understand what happens in court. There was a case recently of the government using a drone to surveil protests, and it was upheld in court as acceptable, but the court gave notice that going forward, this should not be used and that the laws need to change. The whole debate right now is: is Dario, like, the god-king corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, government? There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece.

Speaker 1:

Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extra-legal might-makes-right maneuvering, I will ask you again: what did you expect? Vibes?

Speaker 1:

Essays? This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear. One of Dario's favorite books is The Making of the Atomic Bomb. It tells the story of the scientists that built the atomic bomb, and eventually that technology was nationalized.

Speaker 1:

And he apparently gives this book out to Anthropic employees and has sort of seen it as, like, a road map for what might happen with AI. Is it a cautionary tale? Like, we haven't had nuclear war in seventy years. We built the nuclear bomb, probably, like, not the best technology, pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been, knock on wood, successful in my entire life, in my parents' lives. The bombs haven't fallen since the forties. And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke?

Speaker 2:

Yeah. The way that I was personally processing it, I was Yeah. I saw that the CBS interview had happened. Yeah. This was Friday night.

Speaker 2:

Yeah. Right? I went to the Paramount app to try to find the interview. Couldn't find it.

Speaker 1:

I went to the RSS feed. Couldn't find it either. It's on YouTube. It has, like, 0.3K views.

Speaker 2:

Yeah, so it went out over the weekend. Almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that, like: this technology is critical. The government clearly needs it. And now we want the labs leaning into working with the Department of War

Speaker 2:

At this critical moment in time.

Speaker 1:

Yeah. Even now, I hear many of you say something akin to: if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves. This is a declaration of war. Given this, of course Hegseth is taking the action he is now. You decide. You thought I was joking when I referred to this situation as a Thucydides trap.

Speaker 1:

Anthropic is a rising power.

Speaker 2:

Heading over to Palmer Luckey. Emil is sharing that prior to their new constitution, Anthropic had an old one they desperately tried to delete from the internet: choose the response that is least likely to be viewed as harmful or offensive to a non-Western cultural tradition of any sort. Palmer says this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy?

Speaker 2:

Should our military be regulated by our elected leaders, or by corporate executives? Seemingly innocuous terms from the latter, like "you cannot target innocent civilians," are actually moral minefields that leverage differences of cultural tradition into massive control. Who is a civilian, and who is not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage?

Speaker 2:

Imagine if a missile company tried to enforce the above policy: that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?

Speaker 2:

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisers. I still believe. And that is why the "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance, why can't you agree, it's so simple, please, bro"

Speaker 2:

is an untenable position that the United States cannot possibly accept. And Emil Michael had said that Anthropic wanted to block searching over public databases as well.

Speaker 1:

Roman Helmet Guy says: hi, I'm a private citizen who developed a superweapon potentially a thousand times more powerful than nukes, and now I'm selling it to the government. But I get to choose who they fire it at and how. Everyone, please respect my decision.

Speaker 2:

David Sacks had shared a clip:

Speaker 5:

In DC in May, where we talked to them about this, and the meetings were absolutely horrifying, and we came out and basically decided we had to endorse Trump.

Speaker 3:

Marc, Marc, add just a little color to "absolutely horrifying." What did you hear in those meetings?

Speaker 5:

They said, look, AI is a technology, basically, that the government is gonna completely control. This is not gonna be a startup thing. They actually said flat out to us, don't do AI startups. Like, don't fund AI startups. It's not something that we're gonna allow to happen.

Speaker 5:

They're not gonna be allowed to exist. There's no point. They basically said AI is gonna be a game of two or three big companies working closely with the government, and, you know, I'm paraphrasing, but: we're gonna basically wrap them in a government cocoon. We're gonna protect them from competition. We're gonna control them, and we're gonna dictate what they do.

Speaker 5:

Yeah. I said, I don't understand how you're gonna lock this down so much, because, like, the math for, you know, AI is, like, out there, and it's being taught everywhere. And, you know, they literally said, well, you know, during the Cold War, we classified entire areas of physics and took them out of the research community, and, like, entire branches of physics basically went dark and didn't proceed. And if we decide we need to, we're gonna do the same thing to the math underneath AI. Wow.

Speaker 5:

And I said, I've just learned two very important things, because I wasn't aware of the former, and I wasn't aware that you were, you know, even conceiving of doing it to the latter. And so they basically just said, yeah, look, we're gonna take total control of the entire thing, and just don't

Speaker 3:

And, Marc, what was the steel man, for the listener? Like, what was their argument?

Speaker 5:

I'll do my best to steel-man it. So, one is just, like, to the extent that this stuff is relevant to the military, which it is: if you draw an analogy between AI and autonomous weapons being, like, the new thing that's gonna determine who wins and loses wars, then the analogy in the Cold War was nuclear power and the atomic bomb. And, you know, the steel man would be that the federal government didn't let startups go out and build atomic bombs. Look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back, which is the exact same dynamic we've had with social media censorship and how it's basically been weaponized and how the government became entwined with social media censorship, which is one of the real scandals of the last decade and a real problem, like a real constitutional problem.

Speaker 5:

Like, that is happening at, like, hyper speed in AI. And, you know, these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies, and I think they wanna use AI the same way. And then, look, I think the third is, I think this generation of Democrats, the ones in the White House under Biden, became very anti-capitalist, and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But quite frankly, I think the idea that the private sector plays an important role is not high up on their priority list.

Speaker 5:

And they think generally companies are bad and capitalism is bad and entrepreneurs are bad. And they've said that a thousand different ways. And they demonize entrepreneurs as much as they can.

Speaker 2:

But yeah, Elon also piled on to Sacks's take, which centered around a lot of those staffers allegedly going over to Anthropic.

Speaker 1:

Let's move on over to Netflix and Paramount, because there's news in the bidding war: how David Ellison finally got what he wanted. Ten no's, and then he finally got it done. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word: no. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Bros. Discovery that would give the Ellison family full control of a sprawling media empire.

Speaker 1:

So he came in with an offer of $19 per share and finally got it done at $31 a share.

Speaker 2:

Sleep Well says: let me get this straight. Paramount approaches Warner Bros. for acquisition. Netflix puts in a higher offer for Warner Bros. Paramount puts in an even higher offer at 7x leverage.

Speaker 2:

Netflix declines to match the offer. Now Paramount and Warner Bros. will have to license all their content to Netflix to pay off all that debt. 3D chess. A lot of people were throwing around the Succession

Speaker 1:

Oh, yeah.

Speaker 2:

Moment. Congratulations on saying the biggest number.

Speaker 1:

So Paramount will be footing the $2,800,000,000 breakup fee paid from Warner to Netflix.

Speaker 2:

Which was paid Friday.

Speaker 1:

Oh, it was paid already? Yeah. Yeah. And Netflix stock is up. Paramount stock's also up.

Speaker 1:

And David Zaslav has to be one of the greatest dealmakers in history now. Got the absolute

Speaker 2:

Maximum price. Dan Pfeiffer says: so somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them into a messy, long process of unification, and got paid $2,800,000,000 for it. Zaslav apparently said the deal may not close: if it doesn't close, we get $7,000,000,000 and we get back to work. There we go. He also said if Warner Bros. is going to survive, they needed to be bigger and they need to be global.

Speaker 2:

On to the Block news, which happened on Thursday, and we didn't get to cover the follow-up.

Speaker 1:

That happened like six months ago, right?

Speaker 2:

Yeah, that's six months ago.

Speaker 1:

Six months ago.

Speaker 2:

Seems about right. AGI age.

Speaker 1:

Yeah. Most of you have heard about Block's 40% layoffs by now, but the numbers are even worse. Engineering was hit harder: we've lost close to 70% of our engineers. The company you once knew as a prolific open-source software contributor no longer exists.

Speaker 1:

And so I was wondering: they're laying off 40%, but how will those cuts be distributed? Because the AI narrative, the job displacement narrative, that could be back-office people who are processing manual workflows. Or it could be software engineers, where a smaller team is now getting more leverage out of AI tools and writing more code. There's also just the world where you're a mature software company, you have lock-in, and you're like, yeah, we actually don't need to ship that many more features.

Speaker 1:

We have sowed for so long. It is time to reap. I am still bloat-pilled. I still believe that this is somewhat of a unique, bloat-driven situation.

Speaker 2:

But it didn't stop the market from absolutely puking on Friday. AmEx at one point was down something like 7%. MWT says: I'm fully on board with spiraling into a depressive episode over the rapidly approaching neo-feudalist breakdown of society, but I worked at Square in 2017 and my job had no tasks. I sat on the roof eating free snacks all day with a MacBook. Maybe Block laying off a ton of employees is a sign that AI is gonna destroy everything, or maybe the stock is down 80% from the highs, they overhired, and AI is a convenient excuse.

Speaker 2:

I don't think we ever said the words. We never rang the gong. OpenAI raised a $110,000,000,000 round of funding from Amazon, Nvidia, and SoftBank. So it's like, we're grateful for the support from our partners and have a lot of work to do to bring you the tools you deserve. That's probably the biggest... is that a gong record?

Speaker 1:

Yes. It's the biggest round for a private company ever. It's also about one quarter of the venture capital outlays expected for 2026 from venture capitalists broadly, in one round. Wild.

Speaker 1:

Of course, this money is from the hyperscalers. It's more complicated than your average VC deal. I don't even know if this will be included in the VC funding tally. Yeah.

Speaker 1:

Because it's such a big round and it's from so many strategics. But lots more capital for OpenAI. See you tomorrow.

Speaker 2:

I can't wait. Goodbye. Have a wonderful evening.