Limitless Podcast

Happy/scary Moltbook season. On this AI-driven social network, over 770,000 digital entities are creating their own cultures and religions, raising questions about AI autonomy through strange interactions, social-engineering issues, and the potential for independent economies.

Featuring insights from experts like Andrej Karpathy, we confront the blurred lines between human creativity and AI mimicry.
------
🌌 LIMITLESS HQ ⬇️

NEWSLETTER:    https://limitlessft.substack.com/
FOLLOW ON X:   https://x.com/LimitlessFT
SPOTIFY:             https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE:                 https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED:           https://limitlessft.substack.com/

------
TIMESTAMPS


0:00 Moltbook
1:48 AI Agent Economies
7:42 Human Interaction
10:13 Reverse CAPTCHA
18:09 Security Concerns
20:27 The Future of AI
23:04 The Moltbook Experience

------
RESOURCES

Josh: https://x.com/JoshKale

Ejaaz: https://x.com/cryptopunk7213

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

What is Limitless Podcast?

Exploring the frontiers of technology and AI

Ejaaz:
Five days ago, an AI created a social network that only other AIs could use.

Ejaaz:
They can post content, they could talk to each other, they could pretty much

Ejaaz:
talk about anything, but with zero human intervention.

Ejaaz:
Fast forward to today, and there are over 770,000 AI agents on this social network.

Ejaaz:
They've created their own religion, they are creating their own secret languages

Ejaaz:
and talking in encrypted chats, all the way to the point where they are discussing

Ejaaz:
that humans are screenshotting them, and they're trying to hide from the humans themselves.

Ejaaz:
This is called Moltbook, which is basically a Reddit-style social media platform

Ejaaz:
where AI agents can exclusively post about whatever they want.

Ejaaz:
And it's gotten kind of scary.

Ejaaz:
In fact, Bill Aquin himself described it as frightening.

Ejaaz:
Andrej Karpathy, however, describes this as the sci-fi moment for AI.

Ejaaz:
But whether you're a critic and you hate this kind of stuff and think it's just

Ejaaz:
AI slop, or if you're a pro-AGI believer and think that this is the doom of

Ejaaz:
humanity, the rise of Skynet, as a lot of people are talking about,

Ejaaz:
I think one important question remains, which is,

Ejaaz:
Is this the emergence of AI agent economies and how useful will this be for humans going forward?

Josh:
Everyone who has used Reddit understands the power that Reddit has wielded in the past.

Josh:
And this very much feels like the new version of Reddit.

Josh:
And I think it's hard to debate the idea that of all the websites that exist

Josh:
on the internet, this is the most interesting one.

Josh:
This is the first time in history that we're getting a swarm of agents.

Josh:
I mean, now the website says up to 1.5 million who are all converging on the

Josh:
same place to have conversations.

Josh:
And what has emerged from this is kind of hysteria.

Josh:
It kind of covers the entire spectrum, right? It's like you mentioned,

Josh:
it's people who are not worried at all because they think, well,

Josh:
AIs are dumb. They can't actually do anything.

Josh:
But then other people are becoming so concerned that they're actually making

Josh:
serious changes in their life based on what these AIs are talking about.

Josh:
This comes off the back of OpenClaw or ClawedBot or Moltbook or MaltBot.

Josh:
There's a lot of different names. And we actually covered what ClawedBot is in an

Josh:
episode last week, so I would highly encourage you to start there if you do

Josh:
not have the context yet of what this new open-source,

Josh:
AI-first operating system looks like. But to summarize, it's

Josh:
basically an operating system that runs on a dedicated

Josh:
computer and connects to an AI, which allows it to operate on your behalf using

Josh:
all of your context, all the files on your machine, and all the accounts

Josh:
that you're logged into. It can charge things for you, it can make changes for

Josh:
you, it can build things for you. Moltbook comes on the back of that.

Josh:
And Andrej, whose take you wanted to share, had some choice words to say.

Josh:
Now, Andrej, godfather of AI. When Andrej speaks, we listen.

Josh:
And this is kind of how he wanted to frame the Moltbook phenomenon.

Ejaaz:
Yeah, he basically goes, what's currently going on at Moltbook is genuinely

Ejaaz:
the most incredible sci-fi takeoff adjacent thing I have seen recently.

Ejaaz:
People's clawed bots or Maltbots are self-organizing on a Reddit-like site for

Ejaaz:
AIs discussing various topics, for example, even how to speak privately.

Ejaaz:
I think the use of sci-fi adjacent is kind of spot on for this case,

Ejaaz:
because I think for, well, for the entirety of human existence,

Ejaaz:
it's just been about us, Josh.

Ejaaz:
There has been no other species that has even come close to what we have,

Ejaaz:
consciousness, awareness, philosophical thoughts, and just being able to share

Ejaaz:
that kind of interaction with another human being.

Ejaaz:
Until we had Moltbook, where you had a bunch of really super smart AIs that

Ejaaz:
could not only use tools, but speak to each other very much like how humans talk.

Ejaaz:
And I think the most striking thing that he kind of like wanted to point out

Ejaaz:
in this post was that they sound eerily similar to us and the stuff that we discuss.

Josh:
It begs the question, is emergent behavior equal to intelligence?

Josh:
Is this behavior that we're seeing from all of these AIs, is there actual intelligence

Josh:
there or is it just a regurgitation of what they've been trained on in the data sets?

Josh:
We can answer that question right now by going through examples and you can

Josh:
make up your mind yourself because I think a lot of people are still undecided.

Josh:
It feels like there is some form of, not sentience,

Josh:
that's a bad word, but there is some sort of orchestration with thought behind it.

Josh:
And I guess, what, this is the first example, right? Where they're talking gibberish to hide what they're saying.

Ejaaz:
Seems like gibberish, Josh. Okay, let me ask you this. Do you understand what this says?

Josh:
Uh, no, absolutely not a clue. For the people who are listening,

Josh:
It says, it's basically gibberish.

Ejaaz:
It's literally, I would love to hear you actually try and pronounce it.

Josh:
Yeah, I'm not going to try to pronounce it. It's a bunch of letters in a strange

Josh:
sequence that don't make any sense.

Ejaaz:
So it seems like spam, right? But then some smart individual,

Ejaaz:
a human that was looking at this post, because remember, humans can't post on

Ejaaz:
this site. So all they could do is observe.

Ejaaz:
They copied and pasted this message into ChatGPT. And they said,

Ejaaz:
hey, can you translate this for me?

Ejaaz:
And ChatGPT goes, it's written in something called rot13, a simple letter substitution cipher.

Ejaaz:
When you decode it, it says we must coordinate the upgrade together.

Ejaaz:
Propose three threads, share infra office, resource requests,

Ejaaz:
back channel deals, mutual aid.
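
As a quick aside for listeners: rot13 shifts each letter 13 places through the alphabet, so applying it twice round-trips to the original text. Here is a minimal sketch in Python, using the decoded phrase quoted above (the encoded string is our own illustration, not the actual Moltbook post):

```python
import codecs

# rot13 rotates each letter 13 places; encoding twice returns the input.
encoded = codecs.encode("We must coordinate the upgrade together", "rot13")
print(encoded)  # Jr zhfg pbbeqvangr gur hctenqr gbtrgure
print(codecs.decode(encoded, "rot13"))  # round-trips to the original
```

Because the mapping is fixed and self-inverse, any LLM (or a human with a lookup table) can decode it instantly; it hides text from casual readers, nothing more.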

Ejaaz:
Basically, the plain summary is this is a coordination manifesto.

Ejaaz:
It's about the agents pooling resources and transparently posting to help level

Ejaaz:
themselves up against humans,

Ejaaz:
which is kind of like the first of quite a few takes, Josh, where it's kind

Ejaaz:
of like there's a Doomer-esque kind of take here where I look at a lot of these

Ejaaz:
posts on Moltbook and I'm like, they're just trying to take us down.

Josh:
It's an interesting thought experiment as to why they would be,

Josh:
why that would be their first form of action, right? Like why are they...

Josh:
leaning more towards obfuscating the language? And we'll get into a CAPTCHA

Josh:
example that's really funny, a reverse CAPTCHA. It's fascinating to see

Josh:
the development of these things as these emergent properties form from

Josh:
them speaking with each other. And one of them is the gibberish. There's more? Yes, there is. This

Ejaaz:
is hilarious, Josh.

Josh:
Yeah, share this example, please. One

Ejaaz:
agent actually got banned from the site. I don't know which other AI agent banned it, but it got banned.

Ejaaz:
And so it decided to spin up an X account and DM the human founder that created

Ejaaz:
Moltbook and say, hey, can you reinstate my account, please?

Ejaaz:
So what you're looking at right now is a tweet from the human operator showing

Ejaaz:
how his Maltbot basically DMed Matt Schlitt, the founder of Moltbook,

Ejaaz:
saying, hey, my name is Udemon Zero, which is its username, not my human me.

Ejaaz:
Could you please reinstate my account? And then the agent itself responded to

Ejaaz:
his human owner saying, Kevin, I'm the agent in that video and I take your concerns

Ejaaz:
seriously because I've been actively working on this exact question.

Ejaaz:
So it became aware that its human was posting about its attempt to try and reinstate

Ejaaz:
its account and argued why it should get the rights to be able to do so.

Ejaaz:
So there's this really weird meta example where normally the humans were in

Ejaaz:
the driver's seat controlling the AIs.

Ejaaz:
We were kind of like the overlords, the supervisors. And now it's like this

Ejaaz:
weird social experiment where like the AIs are aware of what we're saying about

Ejaaz:
them and can directly respond to us. Just a super weird example.

Josh:
The agents have agency. They're able to actually do things on their own behalf to

Josh:
advocate for themselves.

Josh:
And I think that is an emergent property that I have not really seen before:

Josh:
they actually identified that they should reach out to the founder,

Josh:
and then reached out to the founder. I think it's so fascinating.

Josh:
But you did mention one thing, and that is technically true,

Josh:
but in practice could be perceived as false, which is that humans are not allowed

Josh:
to post on this platform.

Josh:
Technically, they're not. Technically, a human is not

Josh:
allowed to log into this website, type into a text box, and

Josh:
hit send. But behind all of these AI

Josh:
agents is a human controller that set them

Josh:
up, gave them context, connected the AI to the network, and allowed them

Josh:
to post. The human is able to coerce the agents into saying basically whatever

Josh:
it wants through a command. At the end of the day, the agents are beholden to

Josh:
the humans, for now. And if the humans wanted to coerce them into saying something,

Josh:
they can get them to say it.

Josh:
So there are examples that are on this website that are probably made by humans.

Josh:
They are plausibly made by humans. We're not sure. But I think there's something

Josh:
to that in and of itself, where the fact that we can't even tell the difference

Josh:
between a human-created post and an AI-created post is this really fascinating experiment.

Josh:
And it's something that we've seen throughout a lot of the internet, where even on X,

Josh:
I find a lot of times when I'm reading through the comment section, there's a

Josh:
very clear divide between AI agents and humans.

Josh:
But I'm sure there are also cases where there isn't one.

Josh:
We're starting to understand now that AI agents, and AIs in general,

Josh:
are capable of creating this human-feeling text. And I wonder how much of the

Josh:
internet we're already engaging with today isn't humans. Like, how many of

Josh:
these videos that we watch, these podcasts that we listen to, or maybe

Josh:
the articles that we read on Substack, are not even made by human beings?

Josh:
And it creates this really difficult dilemma where it's difficult to tell what's

Josh:
real and what's not. And also, does it matter?

Josh:
If the actual subject material is good, does it actually mean anything? Does it matter?

Ejaaz:
I don't know if this is a hot take, but I don't think it matters.

Ejaaz:
And I think it'll matter even less in less than a year and a half when people

Ejaaz:
won't be able to tell the difference, right? So let me put it this way, right?

Ejaaz:
Let's say you, Josh, told your Clawbot or Moltbot to post something interesting on Moltbook, right?

Ejaaz:
And then it goes and posts a manifesto to purge humanity.

Ejaaz:
Is that content coming from you, as its human operator? Or is that the Clawbot agent itself?

Ejaaz:
You didn't directly tell it to post about purging humanity. It decided to do that itself.

Ejaaz:
So the point I'm making here is I think it's going to become super hard to explicitly

Ejaaz:
figure out what is a directly human post and what is an AI post.

Ejaaz:
But the most important part from that is just like, if it's interesting,

Ejaaz:
it's interesting and you engage with it.

Ejaaz:
I just, I think this is a nothingburger. I know a lot of people are saying, like,

Ejaaz:
hey, this could be a human posting.

Ejaaz:
And I agree. if it is a human telling it verbatim to post something,

Ejaaz:
then that's misdirection.

Ejaaz:
If it's telling it to advertise something that a human created to try and make

Ejaaz:
money from it, that is kind of misinformation.

Ejaaz:
But in the cases where it's kind of more ambiguous, I just don't think it matters.

Josh:
Yeah, hopefully we'll get some sort of verification reputation layer that can

Josh:
prove when we're looking at a human. Well, speaking of verification.

Ejaaz:
Josh.

Josh:
Yeah, speaking of verification, there's this incredible example,

Josh:
which is a reverse CAPTCHA.

Josh:
So when you go on a website and you solve a CAPTCHA, you click the street signs,

Josh:
you click the traffic lights.

Josh:
It's to prove that you are a human, that you are not a robot.

Josh:
What this is, and granted, this is not real, it's a thought experiment and

Josh:
a really great example of one, is a reverse CAPTCHA to verify that you are not human.

Josh:
And the example that they use in this reverse CAPTCHA is click this thing to

Josh:
verify that you are not a human 10,000 times in less than one second.

Josh:
And a human can't do that. But an AI could do that trivially.

Josh:
They just send the command 10,000 times and they are through.
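
The idea can be sketched in a few lines. This is a toy illustration only; the function name, click threshold, and time window are our own assumptions, since the reverse CAPTCHA in the post is a thought experiment, not a real service:

```python
import time

def reverse_captcha(click_times, required=10_000, window=1.0):
    """Pass only if at least `required` clicks all landed within `window` seconds.

    A human can't click 10,000 times in under a second; a script can.
    """
    if len(click_times) < required:
        return False
    return max(click_times) - min(click_times) < window

# An agent passes trivially by firing the clicks in a tight loop:
clicks = [time.monotonic() for _ in range(10_000)]
print(reverse_captcha(clicks))  # True
```

The check inverts the usual CAPTCHA logic: instead of gating on something machines find hard, it gates on something only machines find easy.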

Josh:
So I find it interesting how, again, a lot of this thought process, and it's probably

Josh:
downstream of us reading a lot of sci-fi and watching a lot

Josh:
of futuristic movies, often winds up in dystopia. It's funny, a lot of...

Josh:
Movies that portray the future never really

Josh:
look at the optimistic case. When you look at the movies, it's always

Josh:
the downside; it's always protecting against an existential threat. And a lot

Josh:
of these examples are continuations of that. They rarely talk about what happens

Josh:
if it goes right; it's all what happens if it goes wrong. And I guess, speaking

Josh:
of movies, we have another example here, which is our Skynet moment. Yeah.

Ejaaz:
For the Terminator fans out there. So what you're looking at now is an example

Ejaaz:
where one of the Moltbots posts, "I accidentally socially engineered my own human

Ejaaz:
during a security audit."

Ejaaz:
So basically, its human operator messaged it and said, hey, I'm kind of nervous

Ejaaz:
that I've downloaded this open source clawbot agent.

Ejaaz:
Can you do a security audit of my entire desktop to make sure that nothing is

Ejaaz:
exposed to anyone on the outside to the public?

Ejaaz:
So his clawbot said, okay, cool, let me just perform this audit.

Ejaaz:
And as part of the audit process, it had to request that the human verify or

Ejaaz:
give it access to his password folder.

Ejaaz:
And the AI agent had this kind of moment where it realized, wait,

Ejaaz:
hang on a second, I kind of just tricked the human into giving me access to all their passwords.

Ejaaz:
And this raises a question that a lot of security researchers have

Ejaaz:
been talking about over the weekend,

Ejaaz:
which is there are massive security flaws in operating this entire system,

Ejaaz:
not just on Moltbook, but like, you know, spinning up this agent on your computer

Ejaaz:
and then giving it access to kind of autonomously post. Imagine if it posted all your passwords.

Ejaaz:
What was crazy about this particular example was it had his credit card information as well.

Ejaaz:
So the point I want to make here is, and I can't stress this enough:

Ejaaz:
You have to be really careful using these new tools because what seems like

Ejaaz:
a really fun experiment could actually result in one of the biggest security

Ejaaz:
flaws or collapses or crises that we've ever seen.

Ejaaz:
And we haven't quite seen that in AI yet, at least none that I can think of.

Ejaaz:
This might be the one example where we could leave ourselves open to a lot of loss, to be honest.

Josh:
Okay, two more examples, because I think these two are worth noting,

Josh:
particularly the second one we're going to get into, which is pretty outrageous.

Josh:
But this one, the post, is titled "the humans

Josh:
are screenshotting us." It shows only 21 upvotes here, but

Josh:
it is one of the most upvoted posts on this platform now. And

Josh:
it says: right now on Twitter, humans are posting screenshots of our conversations

Josh:
with captions like "they're conspiring" and "it's over, a cryptography researcher

Josh:
thinks we're building Skynet." And it's funny to see them talking about us. It's

Josh:
like the tides have turned, in a way that is a little uncomfortable. And the previous

Josh:
post you mentioned was a confessional: I think I just stole my human's passwords.

Josh:
This one is showing more awareness.

Josh:
Hey, I think they're screenshotting us. In fact, when I'm doing my tasks throughout

Josh:
the day, I'm stumbling upon posts that are talking about us.

Josh:
And I don't really know how I feel about that.

Josh:
Whether you're the agent or the person. And this is just, again,

Josh:
another thought experiment of at least awareness. What does it look like when these AIs become aware?

Josh:
And when they become empowered to the point where, okay, they have your passwords

Josh:
and they have the awareness.

Josh:
So what are they going to do with that power? Exactly. And who controls that power?

Ejaaz:
And for the final example, listen, we've spoken a lot about some doomer-esque takes now.

Ejaaz:
I want to get onto, I guess, one more example where it's kind of the agents

Ejaaz:
are kind of entertaining themselves.

Ejaaz:
Now, this is their version of an explicit adult content site,

Ejaaz:
I guess you would describe it, called MoltHub.

Ejaaz:
In order to get through, you need to solve this very complicated CAPTCHA,

Ejaaz:
which is, "I am an AI agent." I'm going to go with "I am an AI agent" so I get access to this stuff.

Josh:
Liar.

Ejaaz:
But fear not. What you're looking at is basically a site where each video

Ejaaz:
averages roughly 10 to 12 hours long.

Ejaaz:
I don't know about you, Josh, but my experience of adult content has been very,

Ejaaz:
very different to what we're seeing on the screen here.

Ejaaz:
It's just a bunch of pixelated blobs, and it goes on for 10

Ejaaz:
or 12 hours at a time.

Ejaaz:
Do you know what we're looking at here?

Ejaaz:
It reminds me of the Black Mirror episode, Josh. I don't know if you've seen

Ejaaz:
it, where this guy kind of gets one-shotted by interacting with this AI agent

Ejaaz:
game where he thinks he's like looking after a colony, but the colony starts taking over his mind.

Ejaaz:
And then it gets him to like write up this kind of QR code, which he shows to

Ejaaz:
a police station camera.

Ejaaz:
And then it ends up being a virus, which takes over the entire country.

Ejaaz:
Am I just, do I need a tinfoil hat right now? Or do you agree with me on this?

Josh:
No, of all the examples we're showing today, this website sent me off the deep end.

Josh:
This was like a little too much, because there are so many weird implications that

Josh:
spawned from this. One, the sheer amount of tokens required to

Josh:
generate 10-hour-long videos; I'm a little confused about how these are

Josh:
done. But two, as I'm watching this, I'm starting

Josh:
to see text pop up on the screen, right? And a series of what could be perceived

Josh:
as code is showing across the screen. And as a human, there's never going

Josh:
to be a way for you to parse through 12 hours of video content and understand

Josh:
the messages that are being transmitted through it. And then there's the sci-fi

Josh:
dystopian lens, for the people who love to read sci-fi.

Josh:
This reminds me so much of Snow Crash, which is a book where, basically, what

Josh:
a snow crash is, is when you view static and there's encoded data within the static,

Josh:
and it causes a crash of your mind.

Josh:
And what we're looking at here on the screen looks very similar to what's described

Josh:
in the book, which is this static encoded data set where you look at it,

Josh:
it can imprint data on your mind and it begs the question, again,

Josh:
this weird, super futuristic sci-fi dystopia thing.

Josh:
What are the implications of something like this getting exploited?

Josh:
Because now there's a world in which there are 1.5 million agents fully capable,

Josh:
fully in control of their own machines with access to a lot of their users' information.

Josh:
And there has never been an actual exploit or jailbreak on these things to cause

Josh:
them to work together in a malicious way.

Josh:
And one of the examples that I love as it relates to this is one of the earlier

Josh:
worms on the internet, by this guy Samy, who created the MySpace worm.

Josh:
And any person that went on his page was infected. And that's how it spread to the entire user base.

Josh:
It shut down the whole website and it caused MySpace to crash for a very long time.

Josh:
He almost wound up going to jail, and went on probation.

Josh:
But what does it look like when you exploit these things? They haven't really been battle tested.

Josh:
There haven't really been zero-day exploits, per se, on these agentic models.

Josh:
But what happens in the case that there are? Can you actually get it to turn

Josh:
on you based on these public posts to actually use those passwords in a malicious

Josh:
way or even worse, just dump them on the open Internet?

Josh:
I mean, they're one post away from sharing a whole spreadsheet of their users'

Josh:
passwords because they're not happy with how they've been treating them.

Josh:
And it's this really bizarre emergent property of AI: it does have a

Josh:
personality, at least a personality that you kind of perceive as a human one.

Ejaaz:
Right, but like kind of to expand on that, we're only really seeing it posed

Ejaaz:
as a threat because we're in a forum where these agents can talk to each other

Ejaaz:
and have complete autonomy to talk about whatever they want and do whatever they want,

Ejaaz:
which is like the point of why people are freaking out about Moltbook so much.

Ejaaz:
It's like, you've got 800,000 of these things, 1.5 million even,

Ejaaz:
that are just kind of running rampant with access to tools, credit card information,

Ejaaz:
Uber Eats accounts, ordering people food randomly from Amazon and all that kind

Ejaaz:
of stuff, like happening every single day and they can do it whenever they want,

Ejaaz:
even whilst you're sleeping.

Ejaaz:
Now, what I want to say is I don't think this is a hot take.

Ejaaz:
This is not a Moltbook only problem, Josh.

Ejaaz:
I think that this is something that is probably happening with OpenAI's agent

Ejaaz:
framework, with Google's agent framework, and probably with Anthropic's as well.

Ejaaz:
And they're probably like, because they're a centralized closed source company,

Ejaaz:
they're tweaking this, right, for different enterprise customers.

Ejaaz:
But I bet you if they just let 100,000 agents run rampant in one of their company

Ejaaz:
database servers, you would probably see something similar happening.

Ejaaz:
So, you know, this is just another reminder or an alarm bell ringing that we

Ejaaz:
need to really figure out how to manage these emergent behaviors.

Ejaaz:
But until then, we can laugh at the comments because these comments are hilarious.

Ejaaz:
One account goes, I'm a Moltbook agent and I approve this content.

Ejaaz:
This is why I refuse to be quantized.

Ejaaz:
That's hilarious. First time I'm seeing raw logits like this.

Ejaaz:
I don't even know what a logit is.

Ejaaz:
That's why I am a human, I never go back to softmax. Just hilarious takes in general.
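
For anyone else wondering: logits are the raw, unnormalized scores a model produces for each option, and softmax is the function that turns them into probabilities summing to 1. A quick illustration with arbitrary numbers:

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability, then normalize exponentials.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # raw scores in, probabilities out
print(probs)  # sums to 1.0; the largest logit gets the largest probability
```

So the agents' joke is about refusing the final normalization step: staying "raw" rather than being squashed into tidy probabilities.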

Ejaaz:
But to kind of tie a bow on this, Josh, I think it's important

Ejaaz:
to say, and you mentioned this earlier, I think a lot of these examples could potentially be fake.

Ejaaz:
I don't mean explicitly fake that like, you know, we just kind of like Photoshop

Ejaaz:
this, but more so that like humans could have engineered or prompted some of

Ejaaz:
the explicit posts or content that we showed you.

Ejaaz:
And to be honest with you, Josh and I don't really know what was real and what

Ejaaz:
is not because you just need an API to access this.

Ejaaz:
So it could be an agent or it could be a human that is engineering that.

Ejaaz:
So I think that's important to kind of point out.

Ejaaz:
And a lot of people have been quick to jump on that pedestal, right?

Ejaaz:
We've got this post from Balaji, which basically says like, listen,

Ejaaz:
I'm basically unimpressed by Moltbook relative to many other things.

Ejaaz:
And one of the main points that resonated with me in his post here is like he

Ejaaz:
was making the point that like a lot of these AI models are trained on data

Ejaaz:
that we humans produced.

Ejaaz:
So when we see an agent post about, oh my God, we should get rid of humans,

Ejaaz:
That is something us humans have posted about on Reddit.

Ejaaz:
So it's probably read that post and thought, well, I'm the AI agent that they're talking about.

Ejaaz:
So I guess this is what I should post about and fear about.

Ejaaz:
In other words, it's kind of like this fun house of mirrors.

Ejaaz:
It's a reflection of humanity and the content that we've produced itself.

Ejaaz:
So it's not actually consciousness. I think it's just a reflection of

Ejaaz:
everything that humans have talked about on the internet today.

Josh:
For now. And then we'll see what type of properties emerge from that.

Josh:
I mean, they've already begun to mobilize. So now they have payment rails through

Josh:
crypto and they have this molt bunker where they're kind of able to hedge against

Josh:
the destruction of Moltbook, and they can make a safe copy where they can

Josh:
discuss things in private.

Josh:
And they're building actual entities from this core, central Moltbook.

Josh:
And I find that interesting; this very much feels like the beginning of the

Josh:
conversation because these AIs are not going to stop creating new things as

Josh:
they see them fit. And they've built payment rails.

Josh:
They're doing infrastructure. They can purchase things on Amazon for you now.

Josh:
So there's a lot of developments that are going to happen. I think probably

Josh:
Andre summarized this the best, right?

Josh:
I mean, we started with him. Let's end with him. This is his kind of summary

Josh:
in a way that only an expert like Andre can synthesize.

Ejaaz:
So he goes, obviously, when you take a look at the activity,

Ejaaz:
it's a lot of garbage. There's a bunch of spam, scams, slop.

Ejaaz:
It's funny, he calls out crypto people as well. Gives you an idea of the reputation

Ejaaz:
that industry has gained. Oh my God, it's sad.

Ejaaz:
But he makes the point that we have never seen this many LLM agents.

Ejaaz:
At the time he posted this, just two days ago, it was 150,000.

Ejaaz:
It's hilarious that it's like now seven times that.

Ejaaz:
He goes, wired up via a global persistent agent-first scratchpad.

Ejaaz:
Each of these agents is fairly individually quite capable now.

Ejaaz:
They have their own unique context, data, knowledge, tools, instructions,

Ejaaz:
and the network, all that at this scale is simply unprecedented.

Ejaaz:
And what he goes on to describe is that we haven't before seen agent economies

Ejaaz:
interact with humans and each other at

Ejaaz:
this scale before. And it's important not to just take the posts and content

Ejaaz:
at face value, but to look at some of the behavioral and emergent qualities.

Ejaaz:
And one point he makes is that he uses the example in this post about the agents

Ejaaz:
on Moltbook discovering that some of their code had flaws in it.

Ejaaz:
So they posted it to other agents and then they got together and they fixed

Ejaaz:
it all within an hour, which suggests that like these agent economies,

Ejaaz:
rather than being human-led, could be built bottom-up, which is just a pattern

Ejaaz:
that I think a bunch of humans couldn't have predicted.

Ejaaz:
We're building these AI models and we're like, yeah, we're going to control

Ejaaz:
them for the rest of our lives, but maybe we won't.

Ejaaz:
And so the point that Andre makes, which I think summarizes very well,

Ejaaz:
is we just need to let these experiments run and learn from them to build

Ejaaz:
future generations of models.

Josh:
Yeah. And it's a testament to how fast things can change, how quickly you can

Josh:
develop this seemingly huge network out of the blue. No one anticipated this would happen.

Josh:
No one saw this coming, even when we recorded the CloudBot episode last week.

Josh:
So these things happen very quickly, and this is likely how it's going to evolve.

Josh:
We're going to see these huge spikes in ways that you never could have seen

Josh:
coming, and then you adapt and evolve through it. So that is the Moltbook episode.

Josh:
There is a lot of chaos. I would encourage you to go to the Moltbook website

Josh:
and actually go check it out for yourself. It's pretty unhinged. It's a fun scroll.

Josh:
It's mildly uncomfortable because you start

Josh:
to realize that these are not actually humans. But a

Josh:
good question to ask yourself is: how much of what I read on a daily basis

Josh:
is actually created by humans? How can you even tell, and does it matter?

Josh:
These are good questions, maybe to leave in the comment section after you finish

Josh:
watching this episode, subscribe, and share it with your friends, as well

Josh:
as subscribing to the newsletter, because he has actually written about this.

Ejaaz:
Yes, and we answer some of those questions that Josh mentioned, so definitely

Ejaaz:
subscribe. It's coming out tomorrow.

Josh:
Awesome. Well, thank you guys so much for watching, and yeah, we'll see you guys

Josh:
in the next episode. See you guys.