Limitless: An AI Podcast

We explore a tumultuous week in AI as the U.S. government banned Anthropic's Claude AI from military use, only for it to be deployed in the Iranian operation the next day.

We analyze the ethical dilemmas faced by AI firms navigating government demands, spotlighting CEO Dario Amodei's refusal to compromise on safety. The discussion intensifies with OpenAI's bold offer to the Pentagon, igniting a rivalry that questions corporate power in military engagements.

------
🌌 LIMITLESS HQ ⬇️

NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/

------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast

------
TIMESTAMPS

0:09 AI Used as a Weapon
1:19 The Pentagon's Ultimatum
4:45 Dario's Ethical Stand
10:51 OpenAI's Strategic Shift
14:25 Irony of Military Operations
18:00 Public and Private Divide
19:26 The Future of AI and Warfare

------
RESOURCES

Josh: https://x.com/JoshKale

Ejaaz: https://x.com/cryptopunk7213

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Creators and Guests

Host
Ejaaz Ahamadeen
Host
Josh Kale

What is Limitless: An AI Podcast?

Exploring the frontiers of Technology and AI

Ejaaz:
On Friday, the USA banned Anthropic from being used in any military operation

Ejaaz:
after Dario refused to cave to their demands of being used for mass surveillance

Ejaaz:
and autonomous weapons.

Ejaaz:
Then literally hours later, in the early morning of Saturday,

Ejaaz:
Claude was used to perform and execute the most important and biggest military

Ejaaz:
operation since the invasion of Iraq.

Ejaaz:
This has by far been the most insane week in AI.

Ejaaz:
There was drama and deceit between the top two AI labs, OpenAI and Anthropic,

Ejaaz:
and the Pentagon wanted uncensored access to Anthropic's Claude and OpenAI's ChatGPT.

Ejaaz:
AI models are not just chatbots at this point; they're geopolitical weapons being used for warfare.

Josh:
It's amazing how the biggest news on earth is now just AI news.

Josh:
Like not a single large thing happens that doesn't have AI integral to the decision-making process.

Josh:
And this was no different. And I think the question that it left everyone at

Josh:
the end of this is, like, who really controls AI?

Josh:
Because for the first time, what we're seeing is these private companies have

Josh:
so much leverage, so much power, that they're starting to conflict with the

Josh:
actual elected officials and government.

Josh:
And I think that's kind of at the core of this discussion. But if you missed

Josh:
anything over the last 72 hours, don't worry, we're going to get you caught

Josh:
up, starting with what happened early last week that sparked this debate.

Josh:
Because at this time, we didn't know that there were any war plans happening.

Josh:
No one had any idea that there were any attacks planned. It was just an AI story.

Josh:
So maybe we'll start with that AI story.

Ejaaz:
That AI story specifically was the news that revealed that the Pentagon,

Ejaaz:
which is the headquarters of the US Department of War, had been using Claude to orchestrate

Ejaaz:
and execute their capture of the president or former president of Venezuela, Maduro.

Ejaaz:
And that shocked everyone because up until that point, people were just kind

Ejaaz:
of prompting it to vibe code stuff and to answer their silly questions about,

Ejaaz:
you know, what they wanted to cook tonight.

Ejaaz:
So to see this real life example of an AI being used, not just as a tool outside of a chatbot,

Ejaaz:
but for something so important as military warfare was a big shock and surprise,

Ejaaz:
which then sparked a debate around what the model should be used for.

Ejaaz:
Now, the head of the Pentagon, Pete Hegseth, issued an ultimatum shortly after,

Ejaaz:
which raised suspicions around what the conversations were like between the U.S.

Ejaaz:
Department of War and the owners of Claude, Anthropic.

Ejaaz:
And it was anything but good. The issue that they were facing was Anthropic had been

Ejaaz:
asked to give them an uncensored version of Claude, which could be used for

Ejaaz:
two things, mass surveillance,

Ejaaz:
which included domestic mass surveillance of people within the U.S.,

Ejaaz:
which was a breach of the Fourth Amendment, and also for use within autonomous

Ejaaz:
weapons, meaning that there were no humans involved and an AI would control

Ejaaz:
how weapons were executed and fired.

Ejaaz:
Dario's comments against that were simply that he did not feel comfortable giving

Ejaaz:
Claude access. He didn't think it was good enough.

Ejaaz:
And also that it was a direct breach of law.

Ejaaz:
So Pete Hegseth issued them an ultimatum with a deadline a few days later, on the

Ejaaz:
Friday saying, you either agree to our demands or there are consequences.

Josh:
Yeah, and it's really interesting to hear...

Josh:
How close they were, but seemingly unable to reach a deal. It seemed like they

Josh:
had everything down to just, what, two of these red lines, right?

Josh:
And a lot of that conversation happened around whether they are allowed, whether the U.S.

Josh:
Government and the Department of War is able to use these models without the

Josh:
express written consent and approval of Anthropic when it comes to,

Josh:
I guess, making kinetic decisions, things that actually result in harm being caused.

Josh:
And there's this interesting interview that I saw, or just like kind of report

Josh:
that said that when asked about, what was it, the nuclear weapon?

Josh:
Like if someone shot a nuclear weapon at the United States, does the Department

Josh:
of War have... Oh, here it is. Yeah, this is perfect.

Josh:
Does the Department of War have the opportunity and have the right to use Anthropic

Josh:
and Claude models to determine, like, what to do about that, to help shoot it down?

Josh:
And then the response from Dario was basically like...

Josh:
Call us first, and then we'll talk through it and we'll let you know.

Josh:
And I can understand why Dario wants that to be the outcome.

Josh:
And I can understand why the Department of War is absolutely furious because

Josh:
they're like, you are not the elected official.

Josh:
You are not the military. You don't have the right to sign off on our nuclear plans.

Josh:
But for Dario, he very much feels like he created this incredibly strong tool.

Josh:
And what is Anthropic known for at the core of its DNA?

Josh:
Well, it's safety, it's AI alignment. And I'm sure they want to feel like they

Josh:
have a heavy hand so it doesn't get out of hand.

Josh:
And I think that's ultimately where this conflict came from is Anthropic wanting

Josh:
to abide by their safety principles.

Josh:
But the Department of War and the government and the military really being like,

Josh:
OK, yeah, but we're the military.

Josh:
And like if someone's attacking us, we need to use all the tools at our disposal

Josh:
and we can't be waiting for you to answer the phone to tell us if it's OK or not.

Ejaaz:
Yeah, the crux of the issue comes down to the contractual language.

Ejaaz:
The Pentagon was willing to say, hey, yeah, you can keep us

Ejaaz:
within all bounds of the law, and

Ejaaz:
Dario's response was simply, the law isn't

Ejaaz:
really prepped and covered for the future of AI. Like, right now you could use

Ejaaz:
our model legally to get access to a bunch of people's data and you can just

Ejaaz:
get away with that. And Dario, given his own fundamental ethics behind building Anthropic,

Ejaaz:
wasn't comfortable with that. But the US Department of War's response was simply,

Ejaaz:
Hey, this is a matter of national security and we can't have a private company,

Ejaaz:
a private unelected official dictate how we perform national defense,

Ejaaz:
and you can see fair takes on either side at this point.

Ejaaz:
And it's extremely complicated and nuanced.

Ejaaz:
And in Dario's exact response, there's this very poignant line that he says,

Ejaaz:
we cannot in good conscience accede to their request.

Ejaaz:
This was in response to Pete's ultimatum on that Friday, which led to just only

Ejaaz:
like a crazy public, I guess, debate or fight between these guys.

Ejaaz:
You've got Pete publicly saying, Anthropic just delivered a masterclass in arrogance

Ejaaz:
and betrayal, as well as a textbook case of how not to do business with the

Ejaaz:
United States government or the Pentagon.

Ejaaz:
And a bunch of responses were released after that showing that Dario had not

Ejaaz:
been answering their phone calls or was just being inflexible.

Ejaaz:
And then Dario on his side was saying, we need this contractual language involved

Ejaaz:
because otherwise this AI could be used for nefarious purposes.

Ejaaz:
So it was just so, so much drama.

Josh:
In addition to the Secretary of War having some choice words for Anthropic,

Josh:
Donald Trump chimed in with a rather angry and loud all caps message saying,

Josh:
the United States of America will never allow a radical left woke company to

Josh:
dictate how our great military fights and wins wars, among other things.

Josh:
And the public backlash, the public sentiment around Anthropic and Trump kind

Josh:
of shifted at this moment to being supportive of Anthropic.

Josh:
They were glad that it was standing on its morals and its values. And as

Josh:
a result, the App Store showed that Claude actually

Josh:
became number one in the world and a few

Josh:
weeks ago it was only number 131. And this part

Josh:
of the show is brought to you by our sponsor and a supporter

Josh:
of the show, Polymarket. And Polymarket is a great way to determine things

Josh:
like who is going to be the number one app in the App Store on March 6th. And

Josh:
what's interesting here is there's a 62% chance that the current leader actually

Josh:
changes hands. It's showing that ChatGPT is going to be the new king on the

Josh:
block, when in reality there's another market that shows Anthropic is actually

Josh:
most likely to have the best model by the end of March, which is, you know, loosely

Josh:
the same exact time. And I love how they've used this to kind of

Josh:
gauge what's the best, because now we kind of have an idea that GPT-6

Josh:
isn't coming out this month, but we know for a fact that Anthropic

Josh:
has the winner with Opus 4.6.

Ejaaz:
I was just looking and wondering why ChatGPT might be taking the lead here

Ejaaz:
despite there being so much positive approval for Claude, and that might have

Ejaaz:
something to do with our friend Sam Altman at OpenAI, who swooped in at the last

Ejaaz:
minute, after all the drama between Dario and the U.S. Department of War, with his own proposition,

Ejaaz:
basically saying, hey, you can use ChatGPT instead, and we'll agree to your terms.

Ejaaz:
As long as you want to keep things within lawful use, we're going to draft up

Ejaaz:
our own safety stack and red lines. What do you think about this?

Ejaaz:
And the agreement was pretty extensive. They put out an open statement.

Ejaaz:
Now, there's a lot of minutiae in the details, but the way I see it,

Ejaaz:
or my favorite highlights from this is they pretty much agreed to the simple

Ejaaz:
terms, but there were some slight changes: they agreed that

Ejaaz:
OpenAI won't be used for mass domestic surveillance,

Ejaaz:
no use of OpenAI technology to direct autonomous weapon systems.

Ejaaz:
So these are the two things that Dario wanted, but it's all under lawful use,

Ejaaz:
which is the issue that Dario had.

Ejaaz:
And then there's a third thing, which is no use of OpenAI technology for

Ejaaz:
high-stakes automated decisions, aka there should always be a human in the loop,

Ejaaz:
conceivably held accountable in any court of law going forward.

Josh:
This is crazy. This is the part of the story in which I just kind of lost my mind, because...

Josh:
It didn't make any sense. It was like, okay, the Pentagon is saying no to it.

Josh:
Anthropic is saying no to the Pentagon.

Josh:
Clearly, they can't figure it out. Sam Altman was on CNBC earlier in the day supporting Anthropic.

Josh:
And then that evening, they signed the deal with the Department of War that

Josh:
is supposedly the same exact terms because they didn't want to redline.

Josh:
And this was like, oh my God, what do you mean? Was Sam just manipulating the

Josh:
world so that he could just slide in and actually steal the deal from Anthropic? And in a way, he did.

Josh:
It appeared as if the Department of War was trying to call Dario at the 5:01 deadline.

Josh:
He didn't answer the phone. They gave him a couple minutes. They picked up the

Josh:
phone, called Sam, and now there's a deal.

Josh:
And to your point, it seems like there is this key difference.

Josh:
And while a lot of the morals that they were standing on are the same,

Josh:
the key difference is basically in the responsibility and the lawfulness.

Josh:
Like one is kind of proactive, one is retroactive, where Anthropic wanted the

Josh:
ability to sign off on things.

Josh:
Whereas OpenAI is saying, well, you are the government, you are the military,

Josh:
you can make these decisions so long as they are lawful and so long as someone

Josh:
is responsible for like kind of claiming responsibility for these decisions.

Josh:
And we kind of know how that works, where, I mean, perhaps that is not as foolproof as Anthropic's plan.

Josh:
It resulted in them getting a, what was it, $200 million deal and a lot of publicity

Josh:
with the government. So it was a big win for OpenAI.

Josh:
And the point of the story where I was like, what is going on here? This is chaos.

Josh:
And mind you, this is just hours before the actual first strikes were about

Josh:
to start. So there was a lot of things happening in anticipation of this mission.

Ejaaz:
Yeah. All of this happened within, like, I can't emphasize this enough.

Ejaaz:
It happened within like four to six hours. All of this happened.

Josh:
I was sitting on X scrolling and I was like, I was sharing something and I was

Josh:
like, oh my God, wait, like a new thing happened. Then a new thing happened. Yeah.

Josh:
Friday night was not a night to go out because the internet was at its peak.

Ejaaz:
The truth was being revealed. I think X had their highest amount of engagement

Ejaaz:
over the weekend. Saturday and Sunday both broke new records. It's a great

Josh:
Time to monitor the situation.

Ejaaz:
Insane. But back to the Sam agreement, they agreed to all lawful use.

Ejaaz:
And the explicit difference there is that they'll settle all kinds of grievances

Ejaaz:
in a court of law. So retroactive, as you just said, which is the thing that

Ejaaz:
Dario was just completely against.

Ejaaz:
But there's also some other important safety lines that they put in that I actually

Ejaaz:
think are useful towards addressing this.

Ejaaz:
So one, the models, or ChatGPT, can only be deployed through the cloud.

Ejaaz:
And the reason why this is a better implementation versus letting the government

Ejaaz:
run it locally is that you can monitor and you can track what they're doing

Ejaaz:
to make sure that they're not doing anything nefarious.

Ejaaz:
Number two, OpenAI has a specific vetted team of American software engineers

Ejaaz:
that will always work on these models and will update them.

Ejaaz:
And the best part is the government is hands off on this entire approach.

Ejaaz:
And then the third important point is Dario's disagreement.

Ejaaz:
His problem was around usage policies. So he basically wanted to dictate when

Ejaaz:
the Pentagon could or could not perform, let's say, a military strike.

Ejaaz:
Whereas in OpenAI's deal, they use usage policies and a software stack that

Ejaaz:
kind of helps them navigate through all of these different legal issues.

Ejaaz:
So it's just a much more detailed and nuanced plan.

Ejaaz:
A lot of people were kind of like against OpenAI for this, but this might be a hot take.

Ejaaz:
I actually think it's a very proactive way to kind of deal with this situation

Ejaaz:
for what we have right now. I think legislation will change eventually going

Ejaaz:
forward. I don't think it's perfect.

Ejaaz:
I don't think it's ready for AI-enabled warfare, but I think it's a good step

Ejaaz:
in the right direction ultimately.

Ejaaz:
And there was this really awesome comment from OpenAI's head of national security,

Ejaaz:
Katrina, who kind of explains these nuances, saying that the safety stack

Ejaaz:
and usage policies that we've set up here is going to be a more reliable one.

Ejaaz:
They called out Anthropic basically

Ejaaz:
saying that it wasn't well thought out and ours is way, way better.

Ejaaz:
The other final cool part about this agreement is that OpenAI explicitly states

Ejaaz:
to the Pentagon that they should offer these terms to every single AI model

Ejaaz:
lab. So they're not trying to secure an exclusive deal.

Ejaaz:
This could be for anyone and OpenAI is just the first vendor.

Josh:
Can we take a moment to just appreciate the fact that OpenAI,

Josh:
the AI company, does have a head of national security partnership?

Josh:
Like, I think this gets to the core of the message of this episode is,

Josh:
and the message of this entire narrative this weekend is who is really in control of this?

Josh:
And not like, when I say in control, in control of everything,

Josh:
Who has the leverage to make the decisions at the end of the day?

Josh:
And it seems like they're, I mean, prior to OpenAI signing this deal,

Josh:
it seemed like they were forming this kind of force against the government, right?

Josh:
This oppositional force where Anthropic was like, we need this to be safe.

Josh:
OpenAI and Sam Altman went on TV and agreed. Google and a lot of employees from

Josh:
that company and DeepMind were kind of on board.

Josh:
They were saying, we're going to draw these hard lines too. We're not working with you.

Josh:
And it created this interesting power dynamic where they actually did have enough

Josh:
leverage to inflict damage on, I guess, matters of national security on the

Josh:
military and limit their ability to use these prime tools.

Josh:
And it gets into this interesting debate of who should be responsible for these

Josh:
decisions. I mean, a lot of people will say the military, they've been elected.

Josh:
They are the officials. They understand they're held responsible for keeping

Josh:
us safe and protected, and they deserve the best tools.

Josh:
And the OpenAI and the Anthropics and the AI companies, they'll say,

Josh:
but you don't understand how these tools work. You don't know how capable they

Josh:
are. You don't understand the nuances within them.

Josh:
And we have spent our whole life trying to design these safely.

Josh:
Therefore, you should trust us to make this decision.

Josh:
And I think it's step one and it's like event number one in a probably longstanding

Josh:
kind of argument that could happen,

Josh:
which is who actually holds the leverage over who and is there a willingness

Josh:
to work together or is this going to be this divisive thing where there's a

Josh:
band of private companies and there's a band of public entities and they are

Josh:
clashing because they have the same goals, but they are at odds with how they get accomplished.

Josh:
And I think this was just an interesting moment of time to kind of reflect on

Josh:
that part in particular.

Ejaaz:
Well, we didn't even mention the craziest part about all of this,

Ejaaz:
which actually answers your question, which is no one knows.

Ejaaz:
For the actual military operation, Epic Fury, that was performed over the weekend,

Ejaaz:
it was enabled by Claude after being blacklisted and after being banned completely.

Josh:
That's so ironic, huh?

Ejaaz:
Yeah, it's ironic. So, you know, you had the Pentagon creating this entire

Ejaaz:
drama, saying, okay, cool, we'll give you means, like, you can transition to use another model.

Ejaaz:
They signed a new deal, $200 million deal with OpenAI.

Ejaaz:
And then they ended up using the model which had been explicitly banned by the president himself.

Ejaaz:
So it goes to show that there's a lot of nuance with this. I think Claude had

Ejaaz:
been used for well over six months within the Pentagon by now.

Ejaaz:
So it's trained on all of its data. It's being used by all the employees.

Ejaaz:
It's something that they have here. And technically, they do have another six

Ejaaz:
months to transition to another model. So it makes sense that they were still using Claude.

Ejaaz:
And it's obvious that Claude is the current and preferred choice right now,

Ejaaz:
and that'll probably change over the next couple of months. But yeah, it's a very...

Ejaaz:
unsubstantiated or undefined vector moving forward.

Ejaaz:
I think the US has a lot of angles towards this, meaning they want to upgrade their

Ejaaz:
military offense, but also they're cautious and curious about the rising and looming

Ejaaz:
threat from China, potentially taking over Taiwan and a bunch of other things.

Ejaaz:
So they just want to get ahead of these things.

Ejaaz:
And if they could leverage top American AI model labs to work with them,

Ejaaz:
specifically work with them, that'll be the advantage that they want.

Josh:
Yeah. And it seemed like it was used in terms of like the actual implementation

Josh:
for three things: it was for intelligence assessment, for

Josh:
target identification, and for simulating battle scenarios. So

Josh:
the AI isn't directly guiding missiles, it's

Josh:
not doing anything kinetic, it is mostly just for informational purposes. But

Josh:
yes, and I think that's where a lot of this discourse comes from. Now, Sam had

Josh:
a really interesting AMA where he was kind of answering questions too, right?

Josh:
Yep, about kind of the public sentiment, addressing them, doing it live in real

Josh:
time, answering people's questions the night of.

Ejaaz:
He goes, I'd like to answer questions about our work with the Department of

Ejaaz:
War and our thinking over the past few days. Please ask me anything.

Ejaaz:
And his three takeaways were super interesting. Number one, he was surprised at how much of a

Ejaaz:
50-50 debate there was between whether warfare in America or national security

Ejaaz:
should be the judgment of elected officials or unelected private companies.

Ejaaz:
It seemed like a lot of people were like, yeah, Anthropic maybe should have

Ejaaz:
some more involvement here in setting the guidelines up to what we use AI within warfare.

Ejaaz:
And then a bunch of other people saying, no, we elected officials specifically

Ejaaz:
for this. They should be the ones doing this.

Ejaaz:
The second biggest takeaway is there's a question around whether companies

Ejaaz:
like OpenAI eventually become nationalized by the government because that technology

Ejaaz:
is so important and crucial towards things like defense and the economy.

Ejaaz:
And he goes on to say, this was really revealing. He says, I've thought about

Ejaaz:
nationalization, of course, and for a long time it seemed like it might be better

Ejaaz:
if building AGI was a government project,

Ejaaz:
which kind of shocked me there because I understand the existential crisis here,

Ejaaz:
but you know, that was super cool.

Ejaaz:
And then the third thing that he states is people take their safety for granted,

Ejaaz:
basically saying that people don't really realize the lengths and extents that

Ejaaz:
the Department of War and Defense need to go to to protect them.

Ejaaz:
And this is just a misunderstanding through public discussion and nuance.

Josh:
Yeah, the government-backed project is super interesting because in the past,

Josh:
when we have done things like this, the Manhattan Project,

Josh:
companies like Lockheed Martin, who had a lot of government support,

Josh:
they've worked very well because it allows you to kind of move,

Josh:
converge resources and talent into a single motive and you get the legislative

Josh:
protection to build as fast as possible.

Josh:
The issue now is there's just this lack of efficiency and capability within

Josh:
those same entities that did this in the past.

Josh:
And the market forces will not allow it. With the amount of capital needed to

Josh:
build these gigantic AI data centers, you can't extract that from taxes. You can't

Josh:
validate it by printing more dollars. You actually just have to make revenue

Josh:
and do this in private markets.

Josh:
And I think that's the slightly uncomfortable truth is that it's just too expensive

Josh:
and too challenging to do this in any other way.

Josh:
So there has to be this divide between private and public sectors because it's

Josh:
the only way that you can kind of garner resources this effectively to actually

Josh:
deploy them at the scale required to build AGI in the first place.

Ejaaz:
Yeah. And there was this other take, which I thought was super interesting.

Ejaaz:
Rune asked, are you worried at all about the potential for things to go really

Ejaaz:
south during a possible dispute over what's legal or not later,

Ejaaz:
or be deemed a supply chain risk?

Ejaaz:
Sam Altman responds, yes, I am. And if we have to take on that fight, we will.

Ejaaz:
But it clearly exposes us to some risk. I'm still very hopeful this is going to get resolved.

Ejaaz:
And part of why we wanted to act fast was to help increase the chances of that.

Ejaaz:
So again, reemphasizing the point we made earlier, he's taking the approach of take action now,

Ejaaz:
and we'll figure it out later, as long as there are certain stipulations from

Ejaaz:
the government saying they'll do it within lawful use and that there will be a human in the loop,

Ejaaz:
that you won't have AIs autonomously firing weapons at random people because

Ejaaz:
the models just aren't good there. But it's relatively uncertain.

Ejaaz:
This is very uncharted territory. We don't know where this is going to end up.

Ejaaz:
And to be honest, it's going to be a very significant debate probably for the next couple of years.

Ejaaz:
I don't think this is going to be a one-off event. It is certainly the craziest

Ejaaz:
48 hours that we've had in 2026 so far, but it is by no means at its end yet.

Josh:
Yeah, it's been absolutely insane. And now you are mostly caught up on everything

Josh:
that happened this weekend.

Josh:
It was nuts. And I mean, to your point, Ejaaz, I think it's not a conversation

Josh:
that's going to end here.

Josh:
I mean, just in the last, what, two months, we've gone to Venezuela and

Josh:
now Iran, and there's clearly more intent to apply this to the real world.

Josh:
And as these models get more capable, as they're able to actually do more things,

Josh:
these debates are just going to keep heating up.

Josh:
But this one was crazy. I mean, I haven't been glued to my phone like this in a long time.

Josh:
And the plot twist, like this is better than any sort of drama TV show,

Josh:
right? We watched a deal fall apart.

Josh:
The same person who was backing that deal swooped in and stole it.

Josh:
And then within hours, the blacklisted AI was used to actually attack another

Josh:
country, even though a new deal had been signed, because they still have six months left of contract.

Josh:
And now Dario and the Anthropic team are upset and the public kind of supported

Josh:
them. So it went to number one on the App Store. And it's just like...

Ejaaz:
I mean, can we... There's so much. Can we appreciate how quickly all of this

Ejaaz:
happened as well? Like, man, yeah.

Josh:
Shout out to X for this because geez, like the information was flowing.

Ejaaz:
The information was flowing and it came in in real time. Like I felt the hours.

Ejaaz:
Like I woke up, I think it was Saturday morning. I went to bed,

Ejaaz:
like maybe a normal or lame person.

Ejaaz:
And I woke up and I saw, I think a tweet from maybe you, Josh,

Ejaaz:
that was like giving me the breakdown of everything that was going on.

Ejaaz:
And I was like, how did I miss this? This is like an hour after I went to bed.

Josh:
The news is breaking every hour. It was crazy.

Ejaaz:
It was absolutely insane. And it just goes to show that the speed at which AI is accelerating,

Ejaaz:
not just chatbots, not just video creation, but major important things like

Ejaaz:
national defense and security, should not be understated and should be a topic

Ejaaz:
of focus for probably a lot of other sectors going forward.

Ejaaz:
I don't know if we're at this point where we want to get into homework for the

Ejaaz:
listeners here, but I really want to hear from you.

Ejaaz:
What are your thoughts on this entire debate? Do you think the Pentagon was in the right?

Ejaaz:
Do you think Dario was in the right? Do you think OpenAI and Sam actually struck the right resolution?

Ejaaz:
Or do you think it's all rubbish and that we need to completely dismantle everything

Ejaaz:
and rebuild from the ground up?

Ejaaz:
Let us know your thoughts in the comments or even like DMs to us.

Ejaaz:
Like, I really want to hear your feedback.

Josh:
Yeah. And if you want to follow the conversation, we've been monitoring the

Josh:
situation. We've been publishing the situation.

Josh:
Follow both of us on Twitter or on X. They're both linked in the description below.

Josh:
We've been on it, I think between us, we've gotten like 20, 30,

Josh:
40 million impressions this weekend. It's been crazy.

Josh:
So that is always where you can see the news first before we get on camera,

Josh:
but we will try to keep you updated.

Josh:
If you've watched this, congratulations, you're now up to date for now.

Josh:
We'll see where things go throughout the rest of this week, but we have a lot more planned.

Josh:
There's a lot of exciting topics to cover and we'll be here with you to cover

Josh:
it all. So thank you as always for watching. I very much appreciate it.

Josh:
Thank you for sharing with your friends, which goes a long way, and for subscribing

Josh:
to our Substack, which has been doing very, very well.

Josh:
There's like 60,000, 70,000 people that read every single one.

Josh:
So if you want to be in the know, click the links down in the description,

Josh:
share it with your friends.

Josh:
And as always, thank you so much for watching. We will see you guys in the next one.