TBPN

This is our full interview with Ben Thompson, recorded live on TBPN.

We discuss his essay on Anthropic; why AI is colliding with hard questions about state power, surveillance, and military leverage; whether private labs can realistically defy governments when AI becomes a true source of geopolitical power; and how trade-offs around Taiwan, chips, and democratic accountability could shape the next phase of the AI era far more than abstract debates about safety in the lab.

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to podcast platforms immediately after.

Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

Sign up for TBPN’s daily newsletter at TBPN.com

TBPN.com is made possible by:
Ramp - https://Ramp.com
AppLovin - https://axon.ai
Cisco - https://www.cisco.com
Cognition - https://cognition.ai
Console - https://console.com
CrowdStrike - https://crowdstrike.com
ElevenLabs - https://elevenlabs.io
Figma - https://figma.com
Fin - https://fin.ai
Gemini - https://gemini.google.com
Graphite - https://graphite.com
Gusto - https://gusto.com/tbpn
Kalshi - https://kalshi.com
Labelbox - https://labelbox.com
Lambda - https://lambda.ai
Linear - https://linear.app
MongoDB - https://mongodb.com
NYSE - https://nyse.com
Okta - https://www.okta.com
Phantom - https://phantom.com/cash
Plaid - https://plaid.com
Public - https://public.com
Railway - https://railway.com
Restream - https://restream.io
Sentry - https://sentry.io
Shopify - https://shopify.com
Turbopuffer - https://turbopuffer.com
Vanta - https://vanta.com
Vibe - https://vibe.co

Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

Diet TBPN delivers the best moments from each episode in under 30 minutes.

Speaker 1:

We have Ben Thompson of Stratechery in the Restream waiting room. Welcome to the show, Ben. How are you doing?

Speaker 2:

I'm good. Hopefully, I have the right microphone turned on this time.

Speaker 1:

You do, and it sounds fantastic. Thank you so much for joining on short notice, and thank you for writing "Anthropic and Alignment." It's a fantastic piece that I think covers all of my questions. But I want to start with: how did you process the weekend?

Speaker 1:

How did you get to this particular place? And what is your key thesis in "Anthropic and Alignment"?

Speaker 2:

I mean, this is one of those ones where I don't know if it's good or bad that it came out at the end of the week, so I had a lot of time to think about it. Ultimately, I think it was good, because I'm not sure anyone made the point as explicitly as I did.

Speaker 1:

Yeah.

Speaker 2:

And maybe it was bad because I feel like there are a lot of caveats that, in retrospect, I should have put in the article, which would have addressed a lot of the points people are upset about.

Speaker 1:

Yeah.

Speaker 2:

Basically, zooming out: this was not a normative article where I'm saying what's happening is good or bad. And that's really the one caveat I wish I had put on there.

Speaker 2:

I mean, I'm being accused out there, à la Nilay Patel, of, like, a full-throated endorsement of fascism or something like that. And it's like, relax. Okay? Can I get some credit for the last X number of years? Basically, there is a deep-rooted concern that I've had for a long time (and I'm now hesitant to even use EA as a term, because it's been politicized thanks to the events of the last week).

Speaker 2:

But a failure to grapple with a world of guns is basically the long and short of it. And I actually think Eliezer Yudkowsky has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday.

Speaker 1:

Yeah.

Speaker 2:

And that's actually a point worth bringing up, which is that all this stuff is, right now, in the digital realm. With robotics and other potential applications, and with it obviously being used for military operations, it's crossing over into the physical realm. But if AI is as powerful as people say it's going to be, then there are going to be real-world reactions to that. And if we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons.

Speaker 2:

What would the government's response be? And that's not to say that the government response in that case is good or bad, or whether it follows constitutional principles or whatever it might be. Obviously, I want it to.

Speaker 2:

On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years. So many things in our society assumed a certain level of friction in doing things that computers already obviated, and AI is gonna do that on steroids. I do think we need new laws. I think all this stuff is correct.

Speaker 2:

And I think the idea of AI being applied to these commercially purchased datasets, for example, is a huge problem that I don't want to happen. The concern I have is that if this technology is as powerful as it is on pace to be, unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue as far as who rules us, the democracy issue that Palmer Luckey, I think, very eloquently raised. It's inviting very bad outcomes for those asserting it. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan-China thing.

Speaker 2:

This has been a frustration I've had with Anthropic generally. Amodei has been very outspoken in terms of opposing selling chips to China for, in a narrow aspect, very good reasons. My pushback has always been: what happens if we get superpowerful AI and China doesn't? What are they going to do?

Speaker 2:

The optimal thing for them would be to just bomb TSMC out of existence, because suddenly that becomes optimal even with all the costs that entails. And then what? Then what are we going to do? Like, we're entering this... I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable, I can promise you that.

Speaker 1:

Yeah.

Speaker 2:

And some people are like, well, you should have just made the post private. I'm like, no, I actually really want Anthropic and people associated with this to read it, because people have theorized for a while about what's going to happen as AI becomes more powerful, and now it's starting to happen for real. And I guess, over the weekend, part of it was just that I felt compelled to say this, and girding myself to do so.

Speaker 2:

And even then, I hadn't waded into this in a while, and it's no fun. But it is what it is.

Speaker 1:

Can you unpack a little bit more that tweet you posted, where you did a find on the Dario article for "Taiwan" and saw that it wasn't mentioned?

Speaker 2:

Oh, I mean, I've sort of griped about this in general. I think that

Speaker 1:

So do you just think he should be talking about the Taiwan issue more deliberately? He should be messaging that? Like, why is it significant that he doesn't mention Taiwan?

Speaker 2:

Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument; I could make that argument if I needed to. But I have advocated the opposite: number one, not only should we be selling chips to China a generation or two behind, which has always been our standard practice with chips, we should also be allowing Chinese companies to fab with TSMC.

Speaker 2:

That is a restriction that has come down. Now, these Huawei chips are somehow manufactured by TSMC anyway; let's not look too closely at it, but we should explicitly be allowing it. And the reason is that I think it is a safer equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan while we are dependent on Taiwan.

Speaker 2:

Taiwan is 70 miles off the coast of China. It's not an ideal position for us to have a dependency on it and China to not have a dependency on it.

Speaker 1:

Yeah.

Speaker 2:

And this is the problem: everything going forward has massive trade-offs.

Speaker 1:

Yeah.

Speaker 2:

The implication of letting China fab with TSMC, or of letting them buy NVIDIA chips, is that they gain these incredibly powerful AI capabilities that are driving this entire debate. That is, in a vacuum, not a good thing. But nothing's in a vacuum.

Speaker 1:

Yeah.

Speaker 2:

Everything is a trade-off. And in that specific area, it's being absolutist about the chip issue, repeatedly, again and again, while I am frustrated not to see any public comment about... well, that's not quite fair. He has made comments like, oh yeah, that would slow down adoption in the long run if Taiwan got bombed. And I'm like, in my mind, that's an insufficient consideration of the possibility of Taiwan getting bombed.

Speaker 2:

Now, again, I'm biased in that regard; I lived there for nearly two decades. But the reason I brought it up in this context is that if AI is what it is, the people with guns are going to want to have a say. Whether that be domestically or internationally: it might be in the context of the US government just taking it, trying to kill your company because they feel you're not cooperating, or it might be the context of China deciding it has to act because the US is becoming too powerful.

Speaker 2:

And it's not a fun debate. But I do think the nuclear angle is a good one.

Speaker 2:

It has echoes of the proliferation question, of mutual assured destruction, all those sorts of things. And that's just gonna be the reality of the debate going forward. Again, it's not very fun, but I think it's also irresponsible to run away from it.

Speaker 3:

How much of a factor do you think the information asymmetry between the Department of War and Anthropic played last week? In hindsight, it feels like the Department of War knew they were headed into what is now looking like a drawn-out conflict, while Anthropic was sitting there thinking, hey, we got this, like, arbitrary deadline.

Speaker 3:

Why do we need to renegotiate this now? And going off of Emil Michael's timeline, it sounds like they were still, in the final hour, trying to make a deal happen. According to Emil, Dario was in a meeting, was busy, and wasn't really respecting the deadline, which maybe he felt was kind of artificial. But in hindsight, it now looks significant, because the Department of War was taking the country into a conflict and wanted to know, hey, can we lean on one of our AI partners?

Speaker 2:

I don't know. I mean, I'm hesitant to speculate. I don't know what was going on; I don't know the angles. And that's why I didn't delve too deeply into it.

Speaker 2:

And I also think some of the specifics, like this supply-chain risk, are probably overbroad. Almost certainly, the way it was stated in the tweet is overbroad if you actually go and reread the statute. And again, this is where I wish I had put in more caveats to say, look, I'm not actually talking about all that stuff. I don't really care.

Speaker 2:

I do care, but that's not the point of this article. The point of this article is that there's all this talk about alignment; that's why I put that in the headline.

Speaker 2:

And that's, you know, a more pressing conversation than probably ever before. Anthropic exists in the context of the United States. That's why I put in that quote: you may not be interested in politics, but politics has an interest in you. And what is politics?

Speaker 2:

War by other means. You might not be interested in that; it is going to have an interest in you. And like I said, there's a certain long-standing frustration with not fully grappling with that fact, with having dorm-room theoretical arguments about AGI. Go back to that post over Christmas about AGI in, like, a hundred years and no one having any jobs, or being worthless or pointless or whatever, which included some implicit assumptions that property rights will exist in a hundred and fifty years as they exist today.

Speaker 2:

News flash: if that happens, property rights as they exist today are going away. All these rights (and this is a philosophical argument; that's why I started with the international law concept), all these laws, are subject to the agreement of those governed by them to follow them.

Speaker 1:

Mhmm.

Speaker 2:

And the final say is those who successfully inflict violence.

Speaker 1:

Mhmm.

Speaker 2:

And again, this isn't fun to think about. It's not pleasant. You would like to assume we operate in a world of laws and that everyone follows them. But to the extent AI is as impactful and powerful as it is, the more these fundamental questions, questions we thought had been settled for hundreds of years, if not thousands, are going to be raised. And this is just the first of several episodes where I think that's going to happen.

Speaker 1:

I grew up post-Cold War, no duck-and-cover; I didn't have a lot of fear of nuclear Armageddon. But Dario Amodei is, you know, a fan of the book The Making of the Atomic Bomb. And it seemed like he sort of predicted that if AI becomes super powerful, the US might take an approach similar to the one it took with the regulation of nuclear weapons. And as I was thinking about that, I realized I feel sort of good about the way nuclear weapons are regulated.

Speaker 1:

Like, I feel like we got the good ending; we haven't had a nuclear weapon dropped in seventy years. And it seems like things are going as well as they can there, considering this tremendous, dangerous technology exists. But it hasn't been deployed; it hasn't actually been used to bomb anyone. So how do you think he's processing that book? How do you think we should be processing the idea of the government running the same playbook that it did with nuclear weapons?

Speaker 2:

It's pretty interesting. I mean, on one hand, just from a physical perspective, dealing with weights and software is very different than dealing with fissile material (or, I guess, the superbombs are actually fusion devices).

Speaker 1:

Right? Yeah.

Speaker 2:

And that is trackable. It is interceptable. You know when Iran, to take a pertinent example

Speaker 1:

Yeah.

Speaker 2:

Is trying to build enrichment facilities

Speaker 1:

Mhmm.

Speaker 2:

All of which makes the problem easier to solve. So that's difference number one. Difference number two, and I really wish I had included this but cut it so the article would be tighter: there is a very interesting point in technological history, which was the early days of Intel.

Speaker 2:

Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government. The distinction there was: you had guaranteed orders, which was great, but the government would take your IP, and, more important in his mind, there was limited volume. And he foresaw, correctly, that this was going to be a very upfront, capital-intensive process of designing chips.

Speaker 2:

You have to design them. You have to have the equipment, all of which is in the billions of dollars today. Back then, it was in the tens of millions and hundreds of millions. So you need to find the largest possible market.

Speaker 1:

That is at stake on steroids with AI.

Speaker 2:

Yeah. People... like, I was talking to someone who asked, why doesn't the government just get someone to make their own model? It's because government contracts are, like, single-digit billions, compared to the amount that's going into CapEx, the cost of these models.

Speaker 2:

We're talking hundreds of millions of dollars for the models, and hundreds of billions of dollars, approaching a trillion dollars a year, in CapEx. That is only sustainable and viable if you're selling to everyone. But that introduces entirely new dynamics. With nuclear, the government built it; it started there, and it started with a lot of assumptions, because it was a government program. With AI, we are necessarily, for economic reasons, because of all the upfront costs entailed, starting with private companies, of which the government is one of many customers. And that introduces the assumption that, well, it's a private company with private property rights and all those sorts of things, all of which I want to be true.

Speaker 2:

Again, I don't like how this is going down at all. The point here is to say there's a good reason why it's not going down that way. And there needs to be cognizance that even though this is a private company that is building a general-purpose model, and for very good reasons wants to put restrictions on it (again, I think the surveillance one is a very powerful argument that I agree with), you just need to be aware that, yes, the government is a small customer.

Speaker 2:

The government is also the entity, not to be blunt, with guns. Like, why do I pay taxes? Because the law says to pay taxes? No.

Speaker 2:

At the end of the day, I pay taxes because, if you really want to distill it down, if I don't, someone with guns will come to my house and throw me in jail. Right? We don't think about that. But at the end of the day, where do these assumptions and laws and rights flow from? And as long as that is still the case, it needs to be a decision-making factor for these companies.

Speaker 1:

How do you think this plays out for Anthropic? It's such a small contract, but it's so important in the zeitgeist. There are a lot of people rallying around Anthropic because of this, and a lot of people pulling away from Anthropic because of this. It feels like there is a business to be built that doesn't work with the government but delivers coding models and knowledge-retrieval systems and a whole bunch of really valuable products and technology, and it winds up being fine.

Speaker 1:

But at the same time, you don't want this hairy, adversarial relationship with the government to go on for a long time.

Speaker 2:

I would like them to sell to the government, and I would like Congress to pass a law addressing these digital surveillance issues.

Speaker 1:

Yeah.

Speaker 2:

And a lot of people are like, that's unrealistic, which I'm amenable to. But at the end of the day, if you don't have "it's legal or it's not legal" as your guiding standard, the only alternative is that someone has to decide.

Speaker 1:

Yeah.

Speaker 2:

And the implication of that not being a sufficient justification is that a private executive is deciding. And if AI is what it is, I think that's going to be... I used the word intolerable. I didn't mean intolerable to me; I meant intolerable to those with power, to have a private executive making those decisions or not.

Speaker 2:

And if we're gonna have this very sort of brute analysis that laws flow from power, AI is a source of power. So the goal isn't just "fine, we won't use Anthropic." I do think the goal is to hurt Anthropic.

Speaker 1:

Yeah.

Speaker 2:

If you're not going to be subservient to us, you're not gonna be allowed to build a power base. Period. And again, I'm not endorsing all this.

Speaker 2:

It's just, it's not a surprise this is happening. And this is a real risk factor that has to be considered in all these decisions.

Speaker 1:

Putting on my Dario hat, I'm thinking about a different way to achieve the goals with maybe less acrimony. I threw out this idea that maybe the better solution is, like, work with the government, but then lobby for a surveillance act and actually try to...

Speaker 2:

I wish the White House would come out and say, yeah, there's a digital surveillance problem, let's work on it. Probably another regret I have is sort of putting this all on Anthropic.

Speaker 2:

That was sort of the angle I was concerned about, and that left me, I think, fairly open to the critique that this is just, like, defending the White House's approach. Again, I was trying to be at a higher level, saying, look, this is what's gonna happen.

Speaker 1:

But yeah. I'm just thinking from the perspective of...

Speaker 2:

A way to find a middle ground here.

Speaker 1:

I'm just thinking from the perspective of: if the White House is, like, this immutable thing, and you're, you know, involved in Anthropic, one piece of advice would be, hey, okay, instead of going in and having this confrontation with the government directly, go and start a political action committee that lobbies for change in the way that you want through the democratic process.

Speaker 2:

Yes. That is the ideal process. I understand why people are frustrated and skeptical about this.

Speaker 1:

Okay.

Speaker 2:

I used to have this debate a lot in the context of antitrust and aggregators. One of my theses about aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators, because antitrust law has historically been about control of supply, and the power of aggregators flows from control of demand. And so you end up with all these solutions that I call pushing on a string: you're just trying to get people to change how they behave.

Speaker 2:

And that doesn't work very well. Like, Google has always been right: competition has always been just a click away. The problem is people aren't clicking. So solutions focused on the supply angle don't work in a world where the supply is there, just no one's choosing it.

Speaker 1:

Yeah.

Speaker 2:

And therefore, my prescription is you actually need to pass new laws, not try to retrofit these old laws to a new use case where they don't work. And the reaction is always, that's impossible, we can't pass new laws. Okay, but realize the implications of what you're saying.

Speaker 2:

I mean, I saw a tweet. I didn't like it, so I lost it forever; one of the most infuriating things in the world. But someone was like, I would definitely rather have Dario Amodei make these decisions than... and, to this tweeter's credit, he wasn't limiting it to Trump. To me, this isn't a Trump issue.

Speaker 2:

This is an any-politician issue. He said, I would rather have Amodei making these decisions than whoever comes out of our screwed-up democratic process.

Speaker 1:

Yeah.

Speaker 2:

And points for the honesty, because that's the actual choice that is being put forward. You could say Congress isn't gonna do anything, therefore Amodei should. Just appreciate that that is giving up on the democratic process and saying we should have unelected, unaccountable individuals making weighty decisions.

Speaker 2:

And I understand the sentiment. It's hard to imagine Congress passing laws about anything.

Speaker 1:

Yeah.

Speaker 2:

But just realize that implication is quite fraught.

Speaker 1:

Yeah. It's a huge change. I mean, I just spawned in and believed in democracy, then came to understand it, studied economics, and just had my belief in the American project reinforced throughout my entire career. And now it really is people discussing an entirely different world of governance, which is not something people have talked about publicly for a very long time, but it is here

Speaker 2:

for sure. Right. And they always come in on these Trojan horses that are eminently defensible. Again, I'm with Anthropic on the digital surveillance point. I've been concerned about it for years.

Speaker 2:

Been writing about it for ages. And there is an analogy to the antitrust point: you have all these laws that assume someone has to actually physically go somewhere and tap into a phone line. But if you can do it with computers at scale, suddenly all these assumptions that limited what the government could do magically disappeared, not because the law changed, but because we got computers that could do the job of an individual at scale, infinitely.

Speaker 1:

Mhmm.

Speaker 2:

And AI, again, is going to... the idea that the NSA... by the way, I had to admit this in the article: I was so confused about why the Pentagon was so obsessed with domestic surveillance. I didn't realize the NSA was part of the Pentagon.

Speaker 3:

John and I had the same moment.

Speaker 1:

Yeah.

Speaker 2:

Yeah. You sort of thought about it as an independent agency, like the CIA, but that made a lot of this story make more sense. Right. No, exactly.

Speaker 1:

Yeah. I feel like a lot of tech people are reading the Fourth Amendment today and understanding some of these pretty basic processes.

Speaker 2:

Well, yeah. But the loopholes are massive. I'm not denying it. And it's similar to the chip thing with China: my prescription, for Anthropic to give in, allows these massive loopholes to be exploited, and allows the NSA, allegedly in the service of investigating foreign adversaries, to in the process basically surveil the domestic population, which I think is bad.

Speaker 2:

And the reality is, the nature of trade-offs is you're choosing between multiple bad options. And at some point, it's like, which team are you signing up for? They both suck. Choose one.

Speaker 1:

What do you think of the messaging around the models themselves not being capable enough to be used in the context that the Department of War asked for? Because I felt like Dario was sort of speaking for all frontier labs; he said that these technologies broadly are not suitable for these missions just yet. I'm not sure that he has all of the information on the other side to know about the efficacy. He certainly understands his own models and what they're capable of, but...

Speaker 2:

I mean, I would assume they're definitely not capable. I think that point is more of a precedent-setting one. And I think Anthropic's position is significantly weaker on that point.

Speaker 2:

Like, at the end of the day, we either trust the military or not to make these sorts of decisions. That's why we have a military.

Speaker 1:

Yeah.

Speaker 2:

And so I just have a harder time with that one. I think the digital surveillance point is so compelling for them, though it may be my personal biases.

Speaker 1:

I think

Speaker 2:

it's a huge problem. Yeah. These various anecdotes... again, I hate the reporting on these, because you can tell which side each of the leaks is coming from.

Speaker 1:

Yep.

Speaker 2:

But, you know, this idea of putting forward these hypothetical examples, like, oh, you could call us and we'll figure it out then. It's like, no. Come on. Are you serious about this? So, yeah, I think that's a weak argument for them.

Speaker 2:

So that's why I focused more on the digital surveillance one, just because I think it is a very compelling argument in favor of the Anthropic position.

Speaker 1:

Jordi, anything else?

Speaker 3:

Oh, there's a lot more. What are you gonna be tracking going forward? Obviously, the story is...

Speaker 1:

Good luck. Stay strong.

Speaker 2:

No. I mean, the OpenAI angle is obviously interesting. I didn't really get into OpenAI. It's hard to parse exactly what's going on.

Speaker 2:

It seems to me they have agreed with the Pentagon that the Pentagon will be limited by what is lawful.

Speaker 1:

Yep.

Speaker 2:

And they'll make their own judgments about weapons usage. And as I understand it, OpenAI is like, we will, on our side, be free to stop the model from doing digital surveillance

Speaker 1:

Mhmm.

Speaker 2:

Which sounds like you're in sort of a jailbreak competition. It's like, we're gonna agree to have a jailbreak competition with the US government, which, again, is an example of how fraught this is, given that that's probably the good place to come down. Now, there are obviously these dynamics of competing for the same talent base in San Francisco. This is part of it, I think: Anthropic has a local advantage

Speaker 2:

in that most people in the industry, I think, are with them, and they have a national PR problem in that a lot of folks outside of tech don't understand why tech companies always resist helping the US government. So it's kind of an interesting dynamic, where I think OpenAI is in step with the broader public and very much out of step with their talent base in San Francisco. That's gonna be very interesting to see play out.

Speaker 1:

Yeah. It's remarkable that Google has stayed out of the fray, given all the Project Maven background and stuff. Like, they must be so happy. They're just like...

Speaker 2:

Well, that's the other interesting thing: this actually goes back to Google, I believe, where Google had the project. I think this is right: Google had Project Maven, which their employees objected to. And therefore, that went to AWS.

Speaker 2:

And then, some combination of... I think the Pentagon is using Anthropic because of, what, a higher FedRAMP designation?

Speaker 1:

That's right. So that's...

Speaker 2:

why Anthropic was already allowed for classified content and OpenAI wasn't. Again, I don't know the exact...

Speaker 1:

I studied it. It's a wild story. I mean, it was similar: AI for the military, the same killer-robot fears. Google was actually a subcontractor on that project, and what they were actually exposing to the government was TensorFlow APIs that would run on Google hardware.

Speaker 1:

And so they weren't actually writing any AI software, but they wanted to effectively classify images from drones in the Middle East, see that's a car, that's a house. And previously, they had Air Force airmen just sitting there, like, clicking, and they were like, okay. We're gonna automate that. Right. But it was still, like, scary, don't be evil, working with the government, military.

Speaker 1:

And then there was a backlash. They pulled out. Then eventually, they went back in and had a new head of Google Cloud. Yeah. I mean, is

Speaker 2:

you know, it's hard to... I speak for myself personally. I obviously have the biased angle because of Taiwan, where I think, you know, just in general, there is this very naive view of the world that doesn't understand why militaries are important and necessary. And I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset

Speaker 1:

Yeah.

Speaker 2:

That we have no duty to support the military. And this tension's been brewing for years Yeah. Which is, are you an American company, subject to American law and, even beyond law, just morally compelled to support the US military Yeah. Or not? And there's an equally American sort of idea of moral conscience.

Speaker 2:

I'm able to say no. That's why we have the First Amendment. Right? This goes into the can the government compel a company to do something? It goes back to some of the questions that happened, you know, with the first Trump administration.

Speaker 2:

Mhmm. And, you know, I I've been on both sides of this. Like Yeah. Which I

Speaker 3:

And this is what he said in the CBS interview. He said, we are a private company. We can choose to sell or not sell whatever we want. There are other providers. He's already sort of making this case.

Speaker 2:

Yeah. Which, again, is a case that I support. Yeah. But the point here is there's always the question with, like, a bubble or whatever: is it different this time?

Speaker 2:

Sure. And I guess that's sort of the question I'm raising.

Speaker 1:

Yep.

Speaker 2:

Is AI actually comparable to every other technology that's come along? Or, if it has the potential to be a source of power going forward, it's going to be dealt with as such.

Speaker 1:

Yeah. That makes sense. Last question, we'll let you go. How happy should Ted Sarandos be right now?

Speaker 2:

I mean, I think he had the killer quote in the last couple of days where I think someone was asking him if this is such a jewel and it's so rare. Like, isn't it a problem that you're missing out on it?

Speaker 1:

Yeah.

Speaker 2:

And he's like, well, have you seen the history of Time Warner? Which I think sounds about right. I'm not sure I'd want to be the entity with all the debt that Paramount and Warner Brothers is taking on. I think there's a bit where Netflix has always, in the very long run, been positioned, I think, to be the final buyer.

Speaker 1:

Mhmm.

Speaker 2:

Like, who else are content companies going to sell to?

Speaker 1:

Yeah.

Speaker 2:

I feel like they've been spooked by YouTube a little bit, and they felt a need to push forward

Speaker 1:

Yeah.

Speaker 2:

To bring the future forward.

Speaker 1:

Mhmm.

Speaker 2:

That was not allowed to happen, but that means their original plan, I think, is still in place. So probably pretty happy, all things considered, I'm gonna say.

Speaker 1:

It's great. Well, I'm excited to get back to natural discoveries and more anodyne Yeah.

Speaker 3:

Remember, it was on Cheeky Pint, you were talking about getting sucked into Yeah. The... I don't know. And here we are.

Speaker 2:

So I put that quote at the beginning of my article. You know, you may not be interested in politics, but politics is interested in you. That was about Anthropic, and it was also about me.

Speaker 1:

Yes. Yes.

Speaker 2:

Did you do?

Speaker 1:

Welcome to 2026. Well, we thank you for taking the time to come chat with us. Yeah.

Speaker 3:

Great to see you.

Speaker 1:

And a fantastic article. We appreciate you, Ben. Talk to

Speaker 2:

you soon. Thank you. Have a

Speaker 1:

great day.