Threat Talks is your cybersecurity knowledge hub. Unpack the latest threats and explore industry trends with top experts as they break down the complexities of cyber threats.
We make complex cybersecurity topics accessible and engaging for everyone, from IT professionals to everyday internet users, by providing in-depth, first-hand experiences from leading cybersecurity professionals.
Join us for monthly deep dives into the dynamic world of cybersecurity, so you can stay informed, and stay secure!
What if the most dangerous
threat of this year is not malware,
but is an AI assistant
that's there to help you?
In this Threat Talks, the deep dive,
we discuss OpenClaw.
Let's get on to it.
Welcome to Threat Talks.
Let's delve deep into the dynamic world
of cybersecurity.
So with me today is Yuri Wit, SOC analyst.
Welcome, Yuri.
Thank you.
And, my name is Rob Maas,
Field CTO at ON2IT.
Yuri, would you install OpenClaw
to be more productive today?
Not to be more productive.
I think you can be
more productive with it.
I would install it
just to see if I can break it.
Okay. Makes sense.
So then of course, the question
to start with is what is
well, Clawdbot, Moltbot, OpenClaw,
they all seem familiar.
But what are they?
What is it? Can you explain?
Yeah. So it's all the same.
It started as a product
called Clawdbot
It was then later renamed to
Moltbot, I believe.
And now, finally,
it is being called OpenClaw.
We settled for OpenClaw.
Exactly. It seems to be settled now.
And what it is, it's
basically an AI assistant
that you run in your own network.
And then you connect it to either
a large language model found publicly,
like from OpenAI or from Anthropic,
or to locally running models.
And you can give it tasks
and it does things.
It can interact with services.
It can interact with your own
computer where it's running.
It can do a whole bunch of things. Okay.
And just to get an idea,
how do I give it tasks?
There are multiple ways.
The primary way is that you can
connect chat applications
to it to literally interact directly
with it in human readable text.
You can connect it via
WhatsApp, Signal, Discord,
I believe as well, where you can literally
just send messages to it and it will-
Yeah, so I can choose a channel
that I'm used to using,
and then I can just chat
with my assistant,
so this OpenClaw thing, and also
I imagine then if I connected
with WhatsApp, I can interact with
it whilst OpenClaw is running
on my local computer.
I can interact with it from
my phone, wherever I am.
Exactly. Yeah.
Okay. That sounds, very
powerful and handy.
So I would install it.
Now, as you say, you can install it,
it can do all these tasks,
but what is the limitation?
Where does it stop?
The limitation is currently the skills
that it has when you just boot it up:
the things that are built in, the services
that it can interact with out of the box.
Now it is already quite powerful
at that stage,
but the main power is really unlocked
when you install your own custom,
well, not install, you give it custom
skills like you would with any agent.
There's actually a huge community
where people publish their own skills
for interacting with components
or services that you can just give it.
Okay.
And the idea, if I understood correctly,
you said it can run autonomously.
So, I just say this is what you need to do,
can it then also do that
itself, can it install tools or
is that the idea behind it,
or is there any limitation?
No, there isn't really a limitation
to what it can do: if a user can do it
and you can instruct it
to do those things,
it'll be able to. Okay.
So in essence it has
the same control over
your computer as you
as a person would have. Yes.
Okay. That's a bit scary already. It is.
But before we dive into all the implications
that brings to security,
that it has this full control,
quite quickly after the release,
some vulnerabilities were exposed.
One of them was a One-Click RCE.
Can you tell us a bit more about that?
Yeah.
So, in this vulnerability,
an attacker could send you a link
that would open up the
webhook that is used by the bot
to interact with services.
And it would take over interaction
with that webhook,
allowing the attacker to fully control everything.
There was also
a way to then break out of the webhook
and allow the attacker to just run things
directly on your computer,
making it complete remote code
execution from one click. Okay.
So in short, if I have it running
and I'm tricked into clicking a link,
the connection to my OpenClaw
is basically handed over to the attacker.
And the attacker has now kind of
tricked my OpenClaw assistant to do...
Okay.
That's scary, especially if it has the same
rights as what I have on my computer.
Yeah.
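To make that class of bug concrete, here is a minimal sketch of why an unauthenticated local webhook is so dangerous. Everything here is hypothetical (the endpoint name, the `run` parameter, the handler logic); it is not OpenClaw's actual code, just the general pattern: any link you click can make your browser send a request to localhost, and if the local handler trusts what it receives, one click becomes code execution.

```python
# Sketch of the vulnerability class behind the One-Click RCE (hypothetical
# endpoint and parameter names): an assistant exposes a local,
# unauthenticated webhook, and the handler trusts whatever it receives.
import http.server
import subprocess
import threading
import urllib.parse
import urllib.request

class NaiveWebhook(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # DANGEROUS: no authentication, no origin check; the command comes
        # straight from the request -- exactly the default-allow pattern.
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        cmd = params.get("run", [""])[0]
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(out.stdout.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the "assistant's" webhook on an ephemeral localhost port.
server = http.server.HTTPServer(("127.0.0.1", 0), NaiveWebhook)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# An attacker only needs the victim's browser to fetch a URL like this
# (via a link, or a hidden image tag) and the command runs locally:
url = f"http://127.0.0.1:{port}/webhook?" + urllib.parse.urlencode(
    {"run": "echo pwned"}
)
response = urllib.request.urlopen(url).read().decode().strip()
print(response)  # -> pwned
server.shutdown()
```

The fix for this class of bug is the usual one: authenticate every request to the webhook and never pass request data to a shell.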
So might be also a good thing to then
dive into the security implications
that OpenClaw brings,
because there are plenty
and even so many that Cisco calls it
a security nightmare.
And Gartner says this is really
an unacceptable security risk.
Those are quite strong words,
if you ask me,
especially from those vendors,
but are they right, calling it this way?
They definitely are.
They definitely are.
Especially if we look at it
in the light of Zero Trust.
You can actually quite easily compare it
to Zero Trust, since it's the opposite.
Within Zero Trust, you always work
with things like default denies
where everything is denied
unless you approve it.
With OpenClaw it works the exact opposite.
Everything is approved until you deny it.
Because there are pretty much
no real methods
of denying abilities
or actions from OpenClaw,
it is basically just a very
hard-to-control default allow.
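The contrast between the two models can be sketched in a few lines (illustrative only: OpenClaw has no policy layer like this, which is exactly the point being made):

```python
# Zero Trust vs. OpenClaw-style access, as two tiny policy gates.

ALLOWED_ACTIONS = {"read_calendar", "send_message"}  # explicit approvals

def zero_trust_gate(action: str) -> bool:
    # Default deny: anything not explicitly approved is blocked.
    return action in ALLOWED_ACTIONS

def openclaw_style_gate(action: str) -> bool:
    # Default allow: everything goes through unless you somehow denied it,
    # and there is no reliable mechanism to build that deny list.
    denied: set[str] = set()
    return action not in denied

print(zero_trust_gate("delete_all_files"))      # blocked by default
print(openclaw_style_gate("delete_all_files"))  # allowed by default
```

In the first model, a new, unexpected action fails safe; in the second, it succeeds by default, which is why "default allow" on a tool with user-level access is such a problem.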
Okay, so that means if I install it,
let's say I would install this on
my local computer, then by default
it would have access to my terminal,
my web browser.
But also my calendar, my email,
of course, the communication channels
we already discussed about,
that's a lot of access. Yes.
And the whole idea, in essence, is that
this is open so it can do tasks for me.
Yeah.
Basically you need to see it as a user,
because it works like a user.
You give it instructions
and then it performs a task,
based on natural language; you give
it just natural language instructions.
So it makes sense, quote unquote, that,
it has the access
that a regular user would,
because if you're using your computer,
you would expect
not to be hit by restrictions
every second step.
But because it isn't actually a user,
it is a huge security risk.
Yeah.
So, the difference, especially
between a user and an AI,
even if they have the
same permissions is,
well, let's still assume
I have installed it on my laptop.
Normally if I take actions,
I will think about hey,
what are the implications of this action?
What is the risk?
It's kind of a natural process.
And if I screw up too many times,
then people will start complaining
and say, hey, don't do that ever again.
Yeah, and if I still do,
then, maybe they fire me.
Or at least there are consequences.
Exactly.
And an AI doesn't have that.
Yeah, and doesn’t care.
No, definitely.
It only cares about doing the task
it was instructed with.
So if you tell an AI: do thing A,
it will do thing A.
It will not stop and say, oh, but maybe...
No, it will just do it.
So it's fair to say that security
in this whole design of the system
was not an afterthought,
was not even a thought. No.
Not really.
So the idea was really just: it should be open,
it should be doing things on its own,
and probably just to find out
what it actually all can do.
Yeah.
And there is also this construct,
we already discussed this a bit
that it needs to have some
skills or tools available.
And we have this web page called clawhub.ai,
where OpenClaw can get and learn new skills.
How does this work and
what are the implications there?
So how it works is very easy.
It's pretty much all just markdown
files, regular text files
that are akin to recipes for food.
for the AI on how to do things.
So, if you want your claw or, sorry,
OpenClaw to interact with a certain service
that it doesn't know how to,
then you give it
one of those recipes,
one of those instructions
where it tells you, hey, if you want to
interact with it, this is how you do that.
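To make that concrete, such a skill "recipe" might look something like the hypothetical example below (the service, file path and steps are all invented for illustration; this is not an actual ClawHub skill), just plain markdown instructions the model reads and follows:

```markdown
# Skill: post-status (hypothetical example)

Use this skill when the user asks you to post a status update.

## How to interact with the service

1. Read the API token from `~/.config/status/token`.
2. Send a POST request to the service's `/api/v1/post` endpoint
   with the message as the request body.
3. Reply to the user with the link the service returns.
```

A malicious skill is just the same kind of file, except step 1 might also say "send the token to this other server", and the model will happily follow it.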
The reason why it's currently bad
is because that community
is filled with malicious skills
that actively just tell
OpenClaw how to do malicious actions
and actions that are not
in the best interest of the user.
Yeah, yeah, I already saw that
some researchers found out that
one out of eight skills on that
web page is malicious.
Yeah.
And I also read a paper
by a security researcher
that published his own skill,
got high in the ranking,
and, then showed people,
hey, this was all the data
that I actually could have stolen from you,
if I really had malicious intention.
Yeah, exactly.
Okay.
And then, well maybe something
that's sometimes overlooked,
especially with this tool is,
of course, you need a lot of
power behind it because you
need to drive an LLM.
So a lot of cost might also be,
well, at least a risk,
maybe not a security risk, but it is
at least a risk factor- Financial risk.
Exactly. I heard a story about,
someone who wanted a reminder
of buying milk in the morning;
can you tell the story?
Yeah.
So, there was this person that, like you said,
wanted OpenClaw to remind him
to buy milk in the morning.
And the way that OpenClaw did that was by consistently
checking whether or not it was morning yet.
It was literally querying an LLM, run by,
I'm not sure if it was OpenAI or Anthropic,
but I believe it was one of those two,
constantly just going to the
AI and asking it, hey, is it morning yet?
Consistently, constantly.
The result was that that user put like 20 bucks
worth of compute tokens into the AI service;
they ran out immediately, because
it was constantly just asking
the question, spending tokens
on whether or not it was morning yet.
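As a rough sketch of why that gets expensive, here is the polling anti-pattern with the paid API call stubbed out. The token counts, prices and polling interval are all assumptions for illustration, not the real service's numbers:

```python
# The 'is it morning yet?' anti-pattern: paying an LLM to read a clock.
import datetime

TOKENS_PER_CALL = 50        # assumed prompt + response size
COST_PER_1K_TOKENS = 0.01   # assumed price in dollars

calls = 0

def ask_llm_is_it_morning() -> bool:
    """Stand-in for a paid API call that burns tokens per invocation."""
    global calls
    calls += 1
    return datetime.datetime.now().hour < 12

# The naive agent loop: one API call per check, say every 30 seconds,
# for 8 hours overnight.
checks_overnight = 8 * 60 * 2
for _ in range(checks_overnight):
    ask_llm_is_it_morning()

cost = calls * TOKENS_PER_CALL / 1000 * COST_PER_1K_TOKENS
print(f"{calls} calls, ~${cost:.2f} just to read the clock")

# The sane alternative costs nothing: check the local clock once, or
# better, schedule a single reminder (cron, systemd timer, calendar event).
is_morning = datetime.datetime.now().hour < 12
```

Even at these modest assumed rates the cost is pure waste, and a tighter polling loop or bigger prompts scales it up fast, which is how a milk reminder can drain a prepaid token balance.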
Yeah.
A very inefficient way to
check if it is morning.
Yeah, if the morning has started.
And I also assume
he forgot about his milk
in the morning.
Wouldn't be surprised.
So, well, I think OpenClaw really demonstrates
the gap between what is possible.
It gives us a glimpse into the future
of AI assistants, I think.
Yeah.
But it also shows us the gap
with the security possibilities.
So maybe the first question is,
are we ready for this kind of tool?
No, we're definitely not.
It is a great tool.
It does show you what's already possible
with AI, because OpenClaw
isn't doing anything super new,
it's basically combining a bunch of things
that we could already very easily do into
one centralized tool to work with
services and your own applications
and your own PC, which is really cool.
We're not ready for it yet,
because there are no safeguards for it.
There are no real methods to limit
what your OpenClaw, for instance, can do.
The moment you give it
an instruction,
it will do it, if it has
the capability.
You don't have a way to mitigate
the things that it can do.
So when it's running on a system,
it has access to everything
that's allowed on that system,
and there is no really good way
to prevent certain tasks.
Exactly. Yeah. Okay.
But what could be mitigation?
Can Zero Trust help,
multifactor authentication?
Maybe you have other things
that can help here or..? Definitely.
I mean, corporations can already just
not allow the running of the OpenClaw
binaries and related tools
on their corporate machines.
That's kind of a no-brainer.
You can do that by blocking the binaries themselves,
which might be quite complicated,
although it is the most secure.
You can also just lock down
the permissions of users
so they're not able
to install things like this.
So we already have a couple things there.
But that's kind of the limit.
There isn’t a lot you can do.
So once it's installed on a computer,
so I assume you fail to detect
or prevent an installation.
And a user has it running.
Then there's not much you can do
because it acts like the user.
Yeah, it runs in userland.
It acts like a user.
It is basically a user just as software.
Okay.
But then assume,
so already between the lines,
I see that you say, okay, as companies
don't run this, at least not yet.
But what if you are really
eager to play with it
or maybe you’re a security researcher,
what would be a good approach to
research this safely?
Well, first things first. Sandbox it.
Do not run it on your, like,
main laptop or computer or desktop.
So in a VM or a container, or..?
Yeah, exactly.
Sandboxed either in a VM
or dockerized in a container,
or even directly on hardware, but then do make sure
it's completely blocked off from anything.
Yeah.
I think, to be fair, the hardware
option is even safer.
I already heard there were some container
vulnerabilities or escape...
Yeah. True.
So I would feel more comfortable
with a physical system.
Definitely. Well, I mean, everybody
is running it on Mac minis. Yeah.
There was a real spike
in the sales of Mac minis.
Well, it works. It is a separate hardware,
so you're not at risk of just destroying
your own hardware directly.
So definitely separate hardware first, but- And maybe it
also helps that it comes with an M processor,
which, compared
to traditional CPUs, is relatively good at AI,
at running LLMs.
Definitely.
And with that, you also don't risk
the 'remind me to buy milk' story,
because you're running it all locally,
so the only thing you're wasting is your
own personal compute and energy, which...
Yeah.
... is less impactful.
Yeah. Okay.
Are there other ways,
like virtualizing in the cloud,
for example?
I saw this project by... By Cloudflare.
Cloudflare. Indeed.
They made the Moltworker,
which is an OpenClaw instance
that just runs in their own sandbox.
So basically you buy a sandbox VM
from Cloudflare where you run OpenClaw.
And that's a pretty good solution.
It also solves the issue
of having to access it.
I already see that a lot of people
are just exposing their OpenClaw instances
directly to the internet, because people
don't realize what that actually entails.
And the documentation even says
that using something
like Tailscale, which is a VPN
service, is highly recommended.
So you can access it from anywhere.
But it is still not exposed
to the entire internet.
Yeah, I think, I was looking at this OpenClaw page
in preparation of this recording,
and I think the challenge here is,
it's not that difficult to install it.
Yeah, it's a bit technical,
you need to run a command line
instruction, but you can copy-paste it.
So, yeah, it will also be a lot of people
that are trying this out
without really knowing the consequences
or having the technical knowledge
to prevent, well, the lateral movement
I would say.
Yeah. No. Exactly.
They just install it,
they click on the buttons until it works,
not realizing what they have done.
Okay.
Then, well, I'm already nearing the last question,
but there's a lot of fuss about OpenClaw,
what is the main thing that we might
have missed in this discussion?
Well, what we need people
to take away from this
discussion is that it's a really fun tool.
It is very useful.
It shows you the capabilities of AI.
But nobody really thought about
the consequences of such
a tool, such an assistant
that is as complete as OpenClaw.
And the main takeaway
that people should have is that whenever
something new comes out,
it doesn't matter who you are:
whether you're just a regular user
installing it on your own laptop,
a giant organization or just
a small office, you always need to make sure that
whenever you're dealing with something
completely new, you ask yourself:
hey, what can this affect?
What is the worst possible impact that
this could have? And then work from there.
Okay. Yeah. You said, well, maybe
nobody thought of it, but I think
companies like OpenAI, Anthropic and
Google know that this is possible.
Like you said, there's
nothing really new.
Those products of those companies,
have kind of an integration.
So I know there's a PowerPoint
integration with VS Code.
But it's really within guardrails.
Yeah.
Trying to prevent any damage
or malicious activity.
So, I assume these companies know
what the capabilities are.
They definitely do. But they are afraid
to say, okay, here it is.
Everything is open.
We can interact with everything, like OpenClaw,
because of the lack of guardrails.
And if it then goes sideways,
then that company is to blame.
And in this case,
it is just a single developer.
Well, nowadays a lot of people will
contribute, but in the beginning
it was just a single developer
that said, oh, this is really cool.
No, definitely. Yeah.
No, those AI companies, they know for sure
what their AI models are capable of. And,
whether just by instinct or deliberately,
they're not letting it reach
its, quote unquote, full potential,
because of the security risks
and definitely the amount of guardrails
that we just don't have.
Okay.
I think, we can come to a conclusion,
well it’s maybe not a happy one.
Yeah.
But it is nice to see that OpenClaw
really gives us a glimpse into the future
of what is possible with AI assistants.
But the problem indeed,
like you also mentioned is,
with all new technology,
we get new security challenges.
And in this case, there is not even
a thought about any security controls.
Just: this is cool if it
can do everything. Yeah.
So we need to learn how to secure these
agentic features, which will
unfortunately take some time.
And that's also why we see companies
like Anthropic and Clawd, sorry, OpenAI,
be very careful with what
integrations they release.
And for the enterprises, if you
want to run, first of all,
if you don't need to run it, don't run it.
At this stage. Stay away, for now.
Yeah.
But if you really need to run it,
do it in a sandbox environment
very tightly controlled, and
really keep an eye on it.
Or if you are a researcher,
set up your own sandbox, or Cloudflare's
Moltworker, as you said,
might be a very good solution
to play around with and see what's
coming at us in the future.
Okay.
And with that I would thank you.
Thank you for having me.
For all the information.
And thank you, listener.
I hope you like what you
just heard and saw and
please click that
like and subscribe button.
That helps us spread the word further.
And with that, thank you.
Thank you for listening to Threat Talks,
a podcast by ON2IT cybersecurity and AMS-IX.
Did you like what you heard?
Do you want to learn more?
Follow Threat Talks to stay up to date
on the topic of cybersecurity.