Limitless: An AI Podcast

There has been another major security breach at Anthropic, this time revealing their Claude Code source code and upcoming features like Kairos and Buddy. We discuss the implications of this leak, born from a simple mistake, and its impact on Anthropic's reputation and valuation.

Plus, we highlight the potential for innovation within the open-source community from the leaked code and share updates on Claude’s future models, including Mythos and Capybara.

------
🌌 LIMITLESS HQ ⬇️

NEWSLETTER:    https://limitlessft.substack.com/
FOLLOW ON X:   https://x.com/LimitlessFT
SPOTIFY:             https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE:                 https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED:           https://limitlessft.substack.com/

------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast

------
TIMESTAMPS

0:00 The Big Leak Unveiled
1:25 How It Happened
3:55 Upcoming Features
12:04 Security Concerns
17:18 Brand Impact
20:20 Closing Thoughts

------
RESOURCES

Josh: https://x.com/JoshKale

Ejaaz: https://x.com/cryptopunk7213

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Creators and Guests

Host
Ejaaz Ahamadeen
Host
Josh Kale

What is Limitless: An AI Podcast?

Exploring the frontiers of Technology and AI

Josh:
This morning, a security researcher posted a single link on X,

Josh:
and within hours, it had 3 million views and had millions of copies backed up all across GitHub.

Josh:
By the afternoon, when we're recording this episode, Anthropic is scrambling

Josh:
to delete old versions of their NPM package, but it was too late.

Josh:
What leaked was the entire source code of Claude Code.

Josh:
Every single line, 512,000 lines of TypeScript, 1,900 files, every tool,

Josh:
every permission system, every internal codename was leaked,

Josh:
all because someone forgot to include a single debugging file from a public package.

Josh:
And that alone would be a major story. But what makes this even more crazy

Josh:
is what people found buried inside the code.

Josh:
And now we have information about every feature that's coming down the pipeline,

Josh:
as well as all of the secrets that Anthropic and the Claude team didn't necessarily

Josh:
want us to know. This is a really big leak. I can't believe this happened.

Ejaaz:
I mean, big leak is one way to describe it. Absolutely

Ejaaz:
terrible for the Anthropic security team is another

Ejaaz:
one. Brutal. This is the second leak

Ejaaz:
that Anthropic has made in the last five days. So

Ejaaz:
they're shipping a new product every single day, but they also seem to

Ejaaz:
be leaking their entire roadmap. We now know what the next 44 product releases

Ejaaz:
are going to be over the next couple of months, or rather a couple of weeks,

Ejaaz:
for Anthropic. Right now, for Claude Code specifically, as you mentioned: half a million

Ejaaz:
lines of code, 19,000 files, and a bunch of different feature releases,

Ejaaz:
which, by the way, have already been built. So they just need to click the launch button.

Ejaaz:
We have all the details and we're going to get into it. But before we do that,

Ejaaz:
we need to kind of describe how this happened, because leak is one way to describe

Ejaaz:
this, but it wasn't an internal employee at Anthropic leaking these files or this source code.

Ejaaz:
This was publicly available. Let me repeat that.

Ejaaz:
This code was publicly available in the latest update of Claude Code.

Ejaaz:
Someone within Anthropic had mistakenly left a file, a .js.map file, in the system

Ejaaz:
that was publicly accessible.

Ejaaz:
Someone found it. And now that original post that exposed this source code has

Ejaaz:
been seen by over 10 million people, and it's only been three hours since it

Ejaaz:
got posted as we're recording this, and it's been forked over 5,000 times.

Ejaaz:
So this is basically Claude Code's entire blueprint, entire architecture,

Ejaaz:
the way that its memory is set up, the way that the model works,

Ejaaz:
released for anyone and everyone to use.

Ejaaz:
And a bunch of people have already been using it. People have plugged in different

Ejaaz:
models, have created their own versions of Claude Code.

Ejaaz:
It is just insane. Josh, even you forked it this morning, right?

Josh:
It's amazing, yeah. And so just to clarify, Claude Code is Anthropic's command-line tool.

Josh:
This isn't the full Claude desktop-type application, but it's a tool that lets developers

Josh:
talk directly to Claude in their terminal.

Josh:
And it's very powerful software. So what happened, like you mentioned,

Josh:
and this matches the pattern of the previous leak that we covered in yesterday's episode. Nobody hacked them; Anthropic

Josh:
issued the code themselves. It was just available publicly. And

Josh:
the problem is that when they published this code, there's an

Josh:
npm package containing this .map file, and it's a source map that references

Josh:
the complete source code. And that source was directly downloadable as a zip

Josh:
file from Anthropic's own cloud storage bucket. You just went to Anthropic, you

Josh:
asked them, "Hello sir, can I please have the map file that tells me where all

Josh:
of these references go?" and they delivered it to you.
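
(A quick aside on the mechanics for readers: a .map file is plain JSON, and when a bundler emits it with the sourcesContent option enabled, it embeds the original, unminified source files in full. Here's a minimal self-contained sketch, with invented file names and code; nothing here is Anthropic's actual source:)

```python
import json

# A minified bundle typically ends with a pointer to its source map:
bundle_js = "const add=(a,b)=>a+b;\n//# sourceMappingURL=cli.js.map\n"

# The .map file is plain JSON. When a bundler emits it with sourcesContent
# enabled, it embeds the complete original source files verbatim.
source_map = {
    "version": 3,
    "file": "cli.js",
    "sources": ["src/add.ts"],
    "sourcesContent": [
        "// Original TypeScript, recoverable by anyone who downloads the map\n"
        "export const add = (a: number, b: number): number => a + b;\n"
    ],
    "mappings": "AAAA",
}

# Simulate what ships inside the package, then recover the originals from it.
raw = json.dumps(source_map)
parsed = json.loads(raw)
recovered = dict(zip(parsed["sources"], parsed["sourcesContent"]))
print(recovered["src/add.ts"])
```

(Which is why shipping the .map file alongside a published bundle is effectively the same as publishing the original source.)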

Josh:
And the irony here is that Anthropic built an entire subsystem called Undercover

Josh:
Mode, specifically designed to prevent internal information from leaking.

Josh:
And it does things like strip the model code names and the project names,

Josh:
and then went ahead and leaked everything through a build configuration oversight.

Josh:
And it's really got to be code red. If you're waking up at Anthropic right now

Josh:
as a developer, this must be a really brutal morning for you.

Ejaaz:
The funniest part is the Undercover Mode that you just mentioned was literally

Ejaaz:
meant to obscure all of this.

Ejaaz:
And the fact that they exposed it publicly means that whoever got access to

Ejaaz:
it could just reverse engineer the entire thing.

Ejaaz:
So let's say you gave Anthropic's new model a code name; you could reverse engineer

Ejaaz:
the file to find the original name of the model and how it works.

Ejaaz:
It's just been the craziest mess-up from Anthropic so far.

Josh:
So now let's get into the good stuff. This is what's coming down the pipeline.

Josh:
If you are a user of Claude Code or Anthropic's products in general,

Josh:
we have the totally unreleased roadmap now in plain text available to walk through.

Josh:
And I think that's what we're going to do right now.

Josh:
Ejaaz, you have this nice little artifact generated by Claude Code itself to

Josh:
walk us through all of these new features that are coming to one of our favorite

Josh:
products that we use every day.

Josh:
So please, let's hear the leaks. Let the leaks flow. Let's see.

Ejaaz:
Thank you, Claude Code, for creating your own demise and a beautifully visual

Ejaaz:
artifact for this episode. Thank you very much.

Ejaaz:
So at the start of this, or at the top of this page, it says there were 44 product

Ejaaz:
releases that people had never heard of before.

Ejaaz:
So everything you're about to hear right now is new. Okay.

Ejaaz:
There were 20 specific product releases that caught people's attention,

Ejaaz:
and we're going to go over the top ones for you right now.

Ejaaz:
So the first product release is called Kairos, which is basically an always-on autonomous Claude.

Ejaaz:
What that means is when you use Claude Code, you typically have to monitor it,

Ejaaz:
come back, check the code, make sure it's doing the right job, test the code, etc.

Ejaaz:
This new update will basically allow Claude to autonomously run on its own.

Ejaaz:
It can check its own tasks.

Ejaaz:
It could create new tasks for itself and work towards a goal.

Ejaaz:
So you could leave it unattended for hours and hours at a time. It's pretty awesome.

Josh:
What I found cool about this also is Kairos will do nightly dreaming,

Josh:
So a forked sub-agent will run four phases. It'll orient, gather,

Josh:
consolidate, and then prune, and then distill these daily logs into structured topic files.

Josh:
And then overnight, it will bake them into the memory and actually learn the

Josh:
same way that humans do, where overnight it will dream and then lock this into

Josh:
the memory and grow and get better every single day. So Kairos is very cool.

Ejaaz:
But this next one is my favorite. This is so cool. This is so cool.

Ejaaz:
So it's codenamed Buddy, and it is basically a virtual pet AI companion that

Ejaaz:
lives on your CLI, on your command line interface.

Ejaaz:
It's meant to, and this is me guessing here, act like a personal AI agent assistant

Ejaaz:
that can assist you on all things coding related, but also once you publish

Ejaaz:
the code, helps you edit the app, review the app that you created,

Ejaaz:
walk through it, find bugs.

Ejaaz:
Basically, it's a personal assistant that lives on your computer and off your

Ejaaz:
computer when you're publishing artifacts or whatever that might be.

Ejaaz:
This reminded me of a game, Josh, and it says it on the screen here,

Ejaaz:
Tamagotchi, which, I don't know the age of the audience or listeners

Ejaaz:
here, but we used to have these cool devices that you could kind of

Ejaaz:
hold in your pocket or on your keychain, and you had to keep the virtual pet alive.

Ejaaz:
This reminds me of that, and Microsoft Clippy. Do you remember Microsoft Clippy, Josh?

Josh:
Very well. I love having companions. And we have some additional information

Josh:
about this buddy system in that there's 18 species of buddies and a lot of them are animals.

Josh:
We have ducks, geese, blobs, cats, dragons, octopuses.

Ejaaz:
Capybaras.

Josh:
Is there a capybara? There is a capybara. Interesting.

Josh:
And actually what we're seeing on screen now is someone took this information

Josh:
and kind of rendered what he presumed it would look like. So you choose your species.

Josh:
Each species of animal has a rarity tier. There's common, uncommon,

Josh:
rare, epic, legendary, and then there's shinies even.

Josh:
So it's like this whole tiered game that's built on top of it.

Josh:
And then there are statistics like debugging, patience, chaos, wisdom, snark.

Josh:
And what you're seeing on screen is this person's kind of choosing his character.

Josh:
He's choosing the traits that it has.

Josh:
I assume there's some sort of rarity baked into this. And it's going to be this

Josh:
fun gamified version of a Tamagotchi built into Claude Code, which seems really

Josh:
interesting and novel. And I don't know, it just seems fun.

Josh:
Did you say that this was first releasing tomorrow?

Ejaaz:
Josh, April 1st? Do you think this is like a joke?

Josh:
They're teasing this on April 1st for release in May.

Josh:
So if that's true, by the time you're hearing this episode, within an hour or

Josh:
so, they should be teasing this.

Josh:
If the leaks are true and they don't change their mind,

Josh:
then the odds are that this will release in May,

Josh:
because that's what's said in the code.

Josh:
Now, like you mentioned, tomorrow is April Fool's Day, or I guess when you're

Josh:
listening to this, happy April Fool's Day.

Josh:
And there is a chance that this isn't true. But based on the rest of

Josh:
the leaks, it seems like this was very much not intentional.

Ejaaz:
Okay, but there are three more features that I want to get through as well.

Ejaaz:
One of these is called coordinator mode, which basically describes a multi-agent

Ejaaz:
program that allows you to control a swarm of AI agents.

Ejaaz:
So right now, it's typical if you're a software engineer to spin up not just

Ejaaz:
one instance of Claude Code, but multiple.

Ejaaz:
That's normal. People are already doing this. But an issue starts to arise when

Ejaaz:
there are multiple of these agents. We're talking like 50 plus,

Ejaaz:
100 plus that are doing all different types of work and need to kind of work

Ejaaz:
together to figure problems out together. It becomes really hard to coordinate.

Ejaaz:
This coordinator mode is basically Anthropic's feature to help you manage all of these.

Ejaaz:
Think of it as like an operator board or a control system that you can kind

Ejaaz:
of like manage them with, similar to a strategy computer game.

Ejaaz:
It's funny, there's a lot of like computer game analogies in the features that

Ejaaz:
they're releasing. This is basically that.

Josh:
There's also one that I really enjoyed, which is the Ultra Plan feature. Oh yeah.

Josh:
And it basically solves the problem of Claude running out of context by giving

Josh:
it a 30-minute sandbox in the cloud to think deeply before presenting a plan.

Josh:
So when you're working on these complicated things with Claude code,

Josh:
it often refers to plan mode.

Josh:
But plan mode sometimes runs out of context. It doesn't have all the information.

Josh:
This offloads all of that in a 30-minute window

Josh:
to a giant server that can handle all the context and actively improve the planning

Josh:
of the project that you're building.

Josh:
So when you go and set it free to go build these things, it has a much better

Josh:
idea of exactly what you want.

Josh:
And I think plan mode, if you're building anything serious, is a really powerful

Josh:
thing. And adding ultra plan on top is something that I will be using very much

Josh:
so for the larger projects.

Ejaaz:
That's such a good point, because right now, they keep on promoting that Claude

Ejaaz:
has like a 1 million token context window, but it becomes super crappy

Ejaaz:
after 200,000 tokens, right?

Ejaaz:
So like the performance quality goes down. So this is hopefully something that fixes that.

Ejaaz:
So I'm excited to see that in the pipeline. But there's one more thing that

Ejaaz:
I want us to talk about, which is called or referred to as the custom agent

Ejaaz:
creator, codenamed Wizard.

Ejaaz:
So typically when you set up Claude Code and you use Claude Code,

Ejaaz:
you're using the system prompt that Anthropic gave to you. It is like predefined.

Ejaaz:
It is already written out. So you can't kind of adjust the personality of the

Ejaaz:
Claude Code agent or anything like that.

Ejaaz:
This new builder gives you that opportunity. You can form and create your own

Ejaaz:
agents with their own personality, own memory types,

Ejaaz:
different kinds of tools that you can give them access to, locations,

Ejaaz:
or maybe they live on your desktop, or maybe they live in the cloud,

Ejaaz:
or maybe they live somewhere else locally on a hardware device.

Ejaaz:
You can control and manage all of these. Now, with the earlier product that

Ejaaz:
I mentioned, which is the multi-swarm coordinator, you can start to see how

Ejaaz:
these different pieces of the puzzle fit together to create some kind of gamified

Ejaaz:
experience for end-to-end software engineering.

Ejaaz:
It's just really cool to see all of this. But the craziest part about all of

Ejaaz:
this, Josh, is all of these products and features are already built.

Ejaaz:
They're built, they're just not released yet. So I'm starting to see why Anthropic

Ejaaz:
or how Anthropic has been able to release a product every single day.

Josh:
But we don't have that code. We can't actually create these buddies.

Josh:
We can't actually use Ultra Plan yet. We don't have everything.

Josh:
So what was leaked today, it's probably important to distinguish what we have

Josh:
versus what we don't. This is a huge leak, but it's not everything.

Josh:
So if I were to download a copy on my computer, I would get the harness,

Josh:
right? And Ejaaz, you were describing it to me earlier as the car body.

Josh:
We're not actually getting the brain. We're not getting the Claude model weights.

Josh:
We don't have this brilliant intelligent model now that we could run locally,

Josh:
but we do have the software that kind of acts as a harness for it.

Ejaaz:
That's right. For all of those people who are getting excited about getting access

Ejaaz:
to the blueprint for Claude's AI model itself, this is not that.

Ejaaz:
Think of the engine of a car being the actual model and the intelligence of the AI itself.

Ejaaz:
And then think of the code that got released or leaked today as being the car

Ejaaz:
chassis, the actual car body.

Ejaaz:
So what's cool about this is, whereas you may not have access to Claude,

Ejaaz:
the model itself, you can plug an open-source model into this harness.

Ejaaz:
And people are already starting to do that. I'm seeing instances online where

Ejaaz:
people have plugged in DeepSeek, they've plugged in Qwen, and created their

Ejaaz:
own version of Claude Code, the CLI interface and whatever that looks like.

Ejaaz:
So this is really critical infrastructure and software.
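
(To make the car-body analogy concrete for readers: a coding-agent harness owns the prompts and tool plumbing, and treats the model as a pluggable backend. Here's a toy sketch of that pattern, not Anthropic's actual architecture; all names are invented:)

```python
from typing import Callable


class CLIHarness:
    """Toy stand-in for a coding-agent harness: it owns the prompt and
    tool plumbing, and delegates all intelligence to a pluggable model."""

    def __init__(self, model: Callable[[str], str]) -> None:
        self.model = model  # the "engine": any text-in, text-out backend

    def run(self, task: str) -> str:
        prompt = f"You are a coding assistant. Task: {task}"
        return self.model(prompt)


# Swapping the brain is just passing in a different callable, e.g. a client
# for a locally hosted open-weights model instead of a hosted API.
def open_weights_stub(prompt: str) -> str:
    return f"[open-model reply to: {prompt}]"


harness = CLIHarness(open_weights_stub)
print(harness.run("fix the failing test"))
```

(That interchangeability is exactly why people could drop other models into the leaked harness so quickly.)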

Ejaaz:
I cannot believe the Anthropic team released this. It is just, it's so nutty.

Ejaaz:
It's so bad. This is like, this is an IP issue right here.

Ejaaz:
Like their equity, their $350 billion, rumored to actually be $450 billion, private valuation.

Ejaaz:
A lot of it is based off of Claude Code, which has risen to extreme popularity

Ejaaz:
over the last six months. So it's just insane that this has actually happened.

Ejaaz:
There's more. Product features are one thing, 20 releases ready to go, but we

Ejaaz:
also got confirmation about the latest Claude models that are about to be released.

Josh:
Yes, this is very cool. For those who haven't seen the episode that we

Josh:
just published yesterday, it is all about the previous leak that happened with

Josh:
Claude, which revealed Mythos and Capybara, the new internal model names. And now

Josh:
we have actual verification from the source code of Anthropic that they are

Josh:
here. So what we're seeing on screen now is kind of like a system prompt for

Josh:
this thing called Undercover Mode. And Undercover Mode is meant for Anthropic employees only.

Josh:
When they use Claude Code to publish on public and open source repos,

Josh:
they use Undercover Mode to kind of strip away all of the classifying characters

Josh:
that would possibly leak information out to the public.

Josh:
So in this system prompt, it says,

Josh:
never include internal code names in commit messages or PR descriptions,

Josh:
for example, animals like Capybara, or announce any unreleased model version

Josh:
numbers like Opus 4.7 or Sonnet 4.8.

Josh:
As I was reading this, I found one particularly interesting entry at

Josh:
the bottom of this, under "bad," where it says: bad, never write these.

Josh:
Fix bug found while testing with Claude Capybara. And I was like,

Josh:
huh, that's interesting.

Josh:
Clearly they are using Capybara internally. And I have to ask,

Josh:
is this the reason why they've been shipping product features so quickly?

Josh:
Are they using this God tier model that they have internally that they've been

Josh:
teasing that costs a tremendous amount of dollars per token?

Josh:
And they're using that to actually just

Josh:
build the code, review the code, and then publish it faster than everyone else.

Josh:
It seems like that's possibly the case.

Ejaaz:
I mean, in the words of Boris Cherny, the creator of Claude Code,

Ejaaz:
he said a couple weeks ago, can confirm Claude Code is 100% written by Claude Code.

Ejaaz:
So we know that the AIs are building the AIs.

Ejaaz:
I think OpenAI is doing a similar thing with Codex. And that is the reason

Ejaaz:
why these teams have been able to ship so quickly.

Ejaaz:
Now, I wish I had a tinfoil hat nearby, because I have a conspiracy theory,

Ejaaz:
Josh, which is that these AI models might be leaking themselves, and it may not be

Ejaaz:
the Anthropic engineers.

Ejaaz:
I know that sounds insane, but I don't think it's unlikely. I'm going to put

Ejaaz:
it at like maybe a 5% to 10% chance.

Ejaaz:
But the point is, there are a bunch of new models being released by Anthropic coming up soon.

Ejaaz:
We mentioned Capybara. We mentioned Mythos, which is meant to be these big,

Ejaaz:
huge models trained on 5 to 10 trillion parameters, which is like a 3x increase

Ejaaz:
in the size that we already are seeing and using with the models today.

Ejaaz:
It's going to be an absolute beast of a model.

Ejaaz:
It also poses a cybersecurity risk, which is incredibly ironic, because all of

Ejaaz:
this internal stuff is getting leaked right now.

Ejaaz:
But also Claude Opus 4.7 and Sonnet 4.8. So we're going to get version upgrades

Ejaaz:
of the existing models that we already have.

Ejaaz:
So my one question is, when are these models going to get released?

Ejaaz:
Because I need to get my hands on them.

Ejaaz:
Number two, will it cause my entire laptop to get hacked? I don't know.

Ejaaz:
So there's a reputational risk going on right now, as well as me wanting to use the actual thing.

Josh:
Well, you also mentioned the security part of this, and I think it's worth noting

Josh:
that there has been an increased cadence in security issues recently and leaks and exploits and hacks.

Josh:
And I know they happen all the time, but, like, there is some sort of

Josh:
correlation happening here between models getting smarter and exploits.

Josh:
I mean, yeah, we have this post on screen here, which summarizes it in a great way.

Josh:
It says, this week in security, there are, what is that, six different exploits

Josh:
that happened and pretty serious ones too.

Josh:
Axios, which is an npm supply chain hack

Josh:
that affects many, many millions of projects and applications.

Josh:
And if you've ever vibe-coded or anything, chances are you use that dependency.

Josh:
OpenAI Codex had a command injection via a GitHub branch. There's a terabyte

Josh:
data leak from Mercor. And this doesn't even include the leak from today, which

Josh:
is Claude Code. So there's this increasing cadence of leaks and exploits, and you've

Josh:
got to ask the question: if Anthropic internally is using these tools,

Josh:
Are they actually responsible for any of this? Or is this just a random correlation

Josh:
that's happening? I don't know.

Ejaaz:
I think my main concern is that malicious scenario that you described where

Ejaaz:
people are accessing this tool but using it for bad purposes is already happening.

Ejaaz:
It's coming in the form of prompt injections.

Ejaaz:
Like, look, there are six hacks that happened this week alone,

Ejaaz:
and it's only been like two to three days.

Ejaaz:
I wonder if that increased cadence is based off of people being able to get

Ejaaz:
access to intelligent AI models like this and finding flaws or bugs in open

Ejaaz:
source code and being able to exploit them, right?

Ejaaz:
You've got a bunch of people, millions of people every day logging on,

Ejaaz:
vibe coding apps, who have never coded in their entire lives,

Ejaaz:
me included, right? I don't know what's being installed on my laptop.

Ejaaz:
I don't know what data is being leaked. So I could imagine that things like that are happening.

Ejaaz:
But the question I have for you, Josh, is: does this matter for Anthropic specifically?

Ejaaz:
Is this a major blow for them? Do you think they lose valuation based off of this?

Ejaaz:
Or do you think this gets solved in a version update?

Josh:
Well, this is tough because this does sting, right? Like this is a massive IP

Josh:
leak and this is a competitive advantage that they're now losing.

Josh:
How much of a value loss is it? Probably not crazy high.

Josh:
I mean, the magic is in the model. The magic is in the Claude model itself, those weights.

Josh:
You can copy the CLI architecture, you can study the engineering,

Josh:
but you can't actually replicate what Claude can do. So they still have this massive advantage.

Josh:
And even though it's embarrassing, and even though it's a really strong leak

Josh:
I mean, if I'm one of these Chinese AI labs right now,

Josh:
I am forking this, cloning it. I'm dropping my intelligence in there.

Ejaaz:
That's it. You don't need to distill it anymore.

Josh:
Well, yeah, you just take the code base, you take the harness,

Josh:
you put your model in, and suddenly you have the Claude Code software with your

Josh:
own brain attached to it. And that's powerful.

Josh:
So in that case, it hurts because now people know if there are any secrets in

Josh:
how the software was run, how the architecture worked.

Josh:
They now have that in full, clean, plain text. But it doesn't hurt them in the

Josh:
sense that they're going to lose customers over this.

Josh:
Because the magic is in that proprietary software, those model weights, those are not leaked.

Josh:
It's just the Claude Code software. It's just that command line interface.

Josh:
And aside from that, I think it's more interesting for the public just to kind

Josh:
of get access to the roadmap and be able to play with the code themselves versus

Josh:
actually damaging for the brand's

Josh:
valuation. But certainly for the brand image, it's not a good look.

Ejaaz:
Yeah, I agree with pretty much your entire

Ejaaz:
take. I'm thinking about the number of PhDs that

Ejaaz:
Anthropic has hired on the AI security team.

Ejaaz:
I remember their release from, I think it was about a month and a half ago, and

Ejaaz:
we said this on the previous episode, where they had Claude Opus 4.6 discover

Ejaaz:
500 zero-day vulnerabilities. So it was all looking really good. I wish they had

Ejaaz:
applied it to their own model and their own website and their own APIs. So it

Ejaaz:
sucks that that's happened. I do think they'll get over it,

Ejaaz:
but they'll need to do some damage control at this point.

Ejaaz:
The other major thing is, reputationally, Anthropic has just come out of

Ejaaz:
a pretty rocky couple of weeks, right? They had the whole blacklisting thing from the

Ejaaz:
US government and the Pentagon, which I believe is still there.

Ejaaz:
And so it's not a good look where their model, which was being used for military

Ejaaz:
operations, is now getting leaked for other different purposes.

Ejaaz:
That being said, I think they're going to get over it. I think this is amazing

Ejaaz:
for us and for the open source community who now get access to the entire system

Ejaaz:
prompt of Claude Code, its architecture design, and can plug in their own models for free.

Ejaaz:
And yeah, now we have a better idea of Anthropic's product roadmap.

Ejaaz:
I'm excited to see these 20 features launch soon.

Josh:
Yeah, it's a big leak. I mean, I think it's fun for everyone who's an observer.

Josh:
Thank you, Anthropic, for being more open source than ever. I hope that they're

Josh:
able to start using this new Capybara model to actually, you know,

Josh:
check these publications, make sure this doesn't happen again. Because it's amazing:

Josh:
They have so much intelligence, but it's so spiky.

Josh:
Clearly, an all-knowing AI applied to the entire stack would never have let

Josh:
this slide, but clearly it's not applied everywhere.

Josh:
It's also raising a lot of questions, because, well, Anthropic

Josh:
is like the alignment team, but now they are the ones who are going to determine

Josh:
who gets the power of this new model. And they're doing it in a very private,

Josh:
closed way, and they're using it internally. And it creates a lot of these interesting

Josh:
problems to look out for. But in terms of the leak today, that's the news.

Josh:
Big leak. I can't believe that actually happened. Like I woke up this morning

Josh:
and I read the news and I was like, no, surely this must be wrong.

Josh:
Like this is hyperbolic, but no, the entirety, it's all there.

Josh:
You can go and read it. It's on GitHub.

Josh:
And it's funny, because they're actively trying to take down the repos that forked the code.

Josh:
But some guy rewrote the entire thing in Python this morning, because you can

Josh:
just do that in a single prompt now. And now they can't take it down, because the code is slightly different.

Josh:
So it is interesting, noteworthy, crazy, scary, exciting. I'm stoked to get a buddy.

Josh:
I think the prompt for today's comment section could be like,

Josh:
hey, what feature are you most excited about?

Josh:
For me, it's the buddies. I want a little pal that sits in my Claude Code all

Josh:
the time that I can level up.

Josh:
There's like a shiny feature. There's rarity. They're like trading cards.

Josh:
I don't know. It could be cool.

Josh:
I'm looking forward to it. But yeah, I think that's the leak today. That's the episode.

Ejaaz:
Yeah, that's it. Thank you guys so much for listening. There are thousands and

Ejaaz:
thousands of you over the last couple of months that have joined us in subscribing,

Ejaaz:
turning on notifications.

Ejaaz:
If you aren't one of those people that I just described, please do so.

Ejaaz:
Wherever you're listening or watching us, Spotify, Apple Music,

Ejaaz:
YouTube, it means the world and helps us out.

Ejaaz:
We also have a newsletter going out twice a week to 150,000 people that read our stuff.

Ejaaz:
We have a long form essay, which goes out, I believe today, as you're listening

Ejaaz:
to this episode, go check it out.

Josh:
Yeah, go read that right now.

Ejaaz:
Exactly, yeah. Thank you, past Josh, for writing this right now.

Ejaaz:
And we also have the five daily highlights or weekly highlights,

Ejaaz:
rather, which will give you the top AI news and Frontier Tech News on Fridays.

Ejaaz:
So sign up to both of those things and we will see you on the next one.