Threat Talks is your cybersecurity knowledge hub. Unpack the latest threats and explore industry trends with top experts as they break down the complexities of cyber threats.
We make complex cybersecurity topics accessible and engaging for everyone, from IT professionals to everyday internet users, by providing in-depth and first-hand experiences from leading cybersecurity professionals.
Join us for monthly deep dives into the dynamic world of cybersecurity, so you can stay informed, and stay secure!
What if malware could think, adapt, and evolve its attack strategy in real time, based on your system? What would happen then?
In today's Threat Talks, the Deep Dive,
we dive deep into PromptLock,
which does exactly this.
So let's get on to it.
Welcome to Threat Talks.
Let's delve deep into the dynamic world
of cybersecurity.
With me today is Yuri Wit,
our SOC analyst.
I've done an episode with Yuri
before, on AI and deepfakes.
So welcome back, Yuri.
Thank you for having me.
So my name is Rob Maas,
Field CTO at ON2IT.
So Yuri, when you heard about PromptLock for the
first time, what was your immediate response?
Initially, I was surprised, then shocked,
and then a sense of dread came over me,
because I realized that my job was
going to get a lot harder very soon.
So that is already an indication
that PromptLock can really change
the way we are looking at security now.
Definitely. Okay.
But before we go into PromptLock: AI has already been used by attackers for quite a while now. Can you give us some insights into the current techniques that attackers are using? Definitely.
Yeah.
So, the most obvious example of
AI being used by
cyber attackers is in the generation
of phishing content.
Either the content of a phishing email
or phishing SMS or whatever.
A lot of groups already use
AI to generate that content,
because it can really easily just be
changed based on initial parameters.
What kind of company is it that we're
trying to target with this message?
Who are we trying to target
with this message within the company?
Things like that.
Using AI, you can very easily
just change the content of those emails.
And on the fly. It also makes sure the English is correct, or whatever language you want to use.
Yeah.
Okay.
And any other techniques that we already see?
Yeah. So, obviously, AI coding assistants. You don't really hear about this often, but since AI coding assistants are already being used widely by software developers around the world, we can very safely assume that cyber attackers and malware developers are doing the exact same thing. But you don't notice that, of course.
And a third method that we've also seen here and there is the use of AI in the negotiation between an attacker and a ransomware victim: to negotiate price, delivery, all sorts of aspects.
Again, most likely to
fix their English spelling,
but also because it obscures
the attacker a bit.
Talking to an AI chatbot is very obvious.
But it's not fingerprintable.
You can't really determine which attacker you're talking to, if you're talking to an AI.
Yeah. Okay.
So AI for attackers is not new, but what
is different now then, with PromptLock?
The other techniques used AI either in a stage before the attack or a stage after the attack. With PromptLock, we have seen AI being used during the attack, during the actual execution of the malware, and we've never seen that before.
Okay, so this is really a new era,
I would say, of malware and
the use of AI within that.
Yeah.
So can you guide us through the
steps that PromptLock takes?
Yeah.
So, initially, PromptLock just executes like any other malware. It is written in Golang, and it's compiled for multiple different platforms. So it could just be a Windows executable disguised as a PDF, or whatever.
Once it executes, it will start to
send requests to an Ollama
server hosted by the attacker,
to request inference from
an LLM to generate
malicious Lua files, which it would
then execute upon receiving.
That sounds already like
a lot of techniques involved.
Can you start with explaining what Ollama is?
Yeah. Ollama is an inference endpoint. What that means is that it's the software that runs on an AI-capable endpoint, either with large GPUs or with a heavy-duty CPU. It turns a prompt given by a human into the necessary components to do inference on, inference meaning the querying of a large language model to do something, in this case just text generation.
Okay.
And so that means, as you already mentioned, you also need an LLM for it. Was there a specific LLM being used for this?
Yeah. For PromptLock, within the malware, the researchers found the entire JSON that would be sent to the Ollama server.
And according to the Ollama spec,
that request would include
a model parameter.
And in this case, the model parameter
was set to gpt-oss-20b.
That's the new open source
model released by OpenAI.
The 20b just stands for the 20
billion parameters included in it.
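To make this concrete, here is a minimal Go sketch of what such a request can look like, based on the public Ollama /api/generate API. The localhost endpoint, the benign prompt, and the error handling are illustrative assumptions, not the actual PromptLock payload; the model parameter mirrors the one found in the sample.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the documented Ollama /api/generate body.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

func main() {
	// Hypothetical target: PromptLock would point this at an
	// attacker-hosted Ollama server instead of localhost.
	body, _ := json.Marshal(generateRequest{
		Model:  "gpt-oss-20b", // the model parameter found in the sample
		Prompt: "Write a Lua script that lists the files in a directory.",
		Stream: false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // the response JSON carries the generated text
}
```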
Okay. Oh, that makes sense.
So we now have a component, compiled with Go, that will make a call to Ollama, which can make use of an LLM. And then, as a result, the initial request probably was: hey, I want the result as Lua code.
Can you also explain what Lua code is?
Yeah. Lua is just a programming language. It's interpreted, just like Python, meaning that there's a specific binary that executes the Lua code, so it's not directly compiled to machine code. So yeah, it's an interpreted programming language.
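As a small aside, a sketch of that "specific binary" idea in Go: the snippet below shells out to a lua interpreter assumed to be installed on the PATH (an assumption; malware can also embed its own interpreter inside its binary rather than relying on one being installed).

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The Lua source is never compiled to machine code; the "lua"
	// interpreter binary (assumed to be on PATH) executes it directly.
	out, err := exec.Command("lua", "-e", `print("interpreted, not compiled")`).Output()
	if err != nil {
		fmt.Println("no lua interpreter available:", err)
		return
	}
	fmt.Printf("%s", out)
}
```

This dependency on an interpreter is also what makes the blocking advice later in the episode work: no interpreter, no script execution.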
Okay.
So that also means, I can imagine
that it can be really dynamic.
So in this example, it's all Lua.
But I can imagine that in the future,
a tool that uses the same techniques
would say, okay,
but I'm now on a Windows machine,
I want this script in PowerShell.
Or I'm now on a Linux host,
I see Python installed, can you
give me the script in Python?
Yeah, exactly.
Okay. That makes it really flexible,
I think, for an attacker.
Yeah.
Very dynamic and unfortunately
for us, a bit harder to detect.
Yeah, definitely. Okay.
But luckily for us, as of today at least, PromptLock seems to be a proof of concept, discovered by ESET, if I'm not mistaken.
Yeah.
How do we know that this is
a proof of concept and not just
an actual malware that's
being used in campaigns?
Well, there are a few indicators of that.
The biggest red flag, or I guess green flag, was the fact
that the strings of the malware
contained a Bitcoin address.
Now that's commonly seen in ransomware,
because the target needs to
send their money somewhere.
So it's usually a Bitcoin address
or some other cryptocurrency address.
But in this case, the Bitcoin address
belonged to the original creator
of Bitcoin: Satoshi Nakamoto.
So it wouldn't make sense for an attacker to include that address in their malware if they were actually expecting to receive funds.
Yeah, it will all go to Satoshi.
Yeah.
So unless he is the creator of the malware...
That would be a great
plot twist.
That would be a nice twist indeed.
Okay, so a lot of things might
change now in this landscape,
very dynamic attacks.
How can we defend against this?
Well, first and foremost, the cybersecurity community will have to start
transitioning off of static indicators
as soon as possible.
The most popular method of identifying malware is based on file hashes, SHA-256 hashing. But that won't work anymore, because of the non-deterministic behavior of LLMs. If you ask one to generate a script to do something malicious, asking the exact same thing a second time might give you a different result, meaning that hash-based fingerprinting just won't work anymore.
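A quick illustrative sketch of why that is: the two snippets below are hypothetical stand-ins for two LLM outputs that behave identically, yet their SHA-256 digests share nothing, so a hash indicator for one variant never matches the next.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Two functionally identical snippets an LLM might emit on two runs;
	// only a variable name differs.
	a := []byte("for _, f := range files { encrypt(f) }")
	b := []byte("for _, file := range files { encrypt(file) }")

	// A single changed byte is enough to produce completely unrelated digests.
	fmt.Printf("%x\n", sha256.Sum256(a))
	fmt.Printf("%x\n", sha256.Sum256(b))
}
```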
Okay, so also sites like
VirusTotal, for example,
need to change if they want to track
all these kinds of attacks.
Yeah. There are already methods in place for this. You have malware families, of course, where you can, well, not identify, but at least talk about malware very easily, just by specifying the name of the malware family.
But you're still going to have to find
a way to detect the malware initially.
And so we're going to have to move over
to more dynamic detection solutions,
like behavioral analysis or just dynamic
malware analysis in sandbox environments.
Fingerprinting and hashing
just won't cut it anymore.
Okay, that makes sense.
When you talk about behavioral analysis, I immediately think of EDR and XDR solutions running on the endpoint, detecting any abnormal processes, abnormal behavior. So I think that's a good step, but it's also a step I would say everyone should already be taking at this point, because, well, the static signatures are already lacking.
Yeah. Things are evolving so quickly.
Are there any other
things that we can do?
Not a lot.
The thing with behavioral analysis is: the content of a payload generated by AI will never be the same from a fingerprinting perspective, but it should still perform the same kinds of steps that a regular, hand-written piece of malware would. It would still need to run certain commands or read process information to determine the OS it's running on, the CPU architecture, things like that. We'll just have to rely on those behavioral signals a lot more.
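As a toy sketch of what such a behavioral rule could look like (the event schema, names, and time window here are invented for illustration, not taken from any real EDR product): flag a process that drops a .lua file and then launches a Lua interpreter shortly after.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Event is a simplified stand-in for endpoint telemetry.
type Event struct {
	PID  int
	Kind string // "file_write" or "process_exec"
	Path string
	Time time.Time
}

// luaDropAndRun flags PIDs that write a .lua file and then execute a Lua
// interpreter within the given window - a behavior-based rule that holds
// regardless of what the generated script hashes to.
func luaDropAndRun(events []Event, window time.Duration) []int {
	lastDrop := map[int]time.Time{}
	var hits []int
	for _, e := range events {
		switch {
		case e.Kind == "file_write" && strings.HasSuffix(e.Path, ".lua"):
			lastDrop[e.PID] = e.Time
		case e.Kind == "process_exec" && strings.Contains(e.Path, "lua"):
			if t, ok := lastDrop[e.PID]; ok && e.Time.Sub(t) <= window {
				hits = append(hits, e.PID)
			}
		}
	}
	return hits
}

func main() {
	now := time.Now()
	events := []Event{
		{PID: 42, Kind: "file_write", Path: `C:\Temp\payload.lua`, Time: now},
		{PID: 42, Kind: "process_exec", Path: "lua.exe", Time: now.Add(2 * time.Second)},
	}
	fmt.Println("suspicious PIDs:", luaDropAndRun(events, 10*time.Second))
}
```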
Okay.
And for this specific malware at least, or this proof of concept, you said that it needs to access an Ollama server or Ollama instance. Can we detect that?
We could. It would require SSL decryption and inspection, because it is just an API query, so it's all going via HTTPS. It's all just encrypted traffic. So you're not going to be able to determine whether or not some random POST request is going to an Ollama server without inspecting the actual contents of it.
Yeah. So unless we know that the destination is on an indicators of compromise list, we really need to decrypt, look into the traffic, and see: hey, these API calls are going to an Ollama server.
Yeah.
Which then probably indicates
that that's not wanted.
Yeah. And we want to block it.
Unless, of course, you have some
valid use cases for that.
Yeah.
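A minimal sketch of that inspection step, assuming the traffic has already been decrypted by a TLS-inspecting proxy or firewall (the documentation IP and the flagging logic are illustrative): requests to the well-known Ollama API paths stand out once you can see inside them.

```go
package main

import (
	"fmt"
	"net/http"
)

// Paths exposed by a standard Ollama server.
var ollamaAPIPaths = map[string]bool{
	"/api/generate": true,
	"/api/chat":     true,
}

// looksLikeOllama inspects an already-decrypted HTTP request for
// telltale Ollama API usage toward an unsanctioned destination.
func looksLikeOllama(r *http.Request) bool {
	return r.Method == http.MethodPost && ollamaAPIPaths[r.URL.Path]
}

func main() {
	// 203.0.113.7 is a documentation address standing in for an
	// attacker-hosted server.
	req, _ := http.NewRequest(http.MethodPost,
		"https://203.0.113.7/api/generate", nil)
	fmt.Println("flag this request:", looksLikeOllama(req))
}
```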
But for PromptLock, there is
an easier method of blocking it.
And that's obviously to just block the execution of Lua scripts. It wouldn't make sense on a random user endpoint to run Lua scripts anyway. So that's just low-hanging fruit to block, but there would most likely still be ways around that.
Okay. Yeah, we discussed that before. If you're really serious about security, then also make sure that a user is not able to execute, for example, PowerShell or Python, or in this case Lua. The thing in this case is that on most systems, Lua won't execute anyway, because you need that interpreter. But yeah, blocking access to those tools would certainly help.
Okay.
Any other things that you can think of?
No, not really. It's really just either behavioral analysis or network indicators. But network indicators also require prior knowledge from earlier attacks.
Okay.
Yeah, I would say: take a Zero Trust approach. Servers, for example, often don't have to go outside at all, so you block everything.
Yeah.
It's not specifically for
this attack, but it will help.
In general.
Definitely. Okay.
So the question now is, we now
have this proof of concept,
what will be next and what
timelines are we thinking of?
Well, PromptLock was already quite dynamic.
It would generate those Lua scripts.
Those Lua scripts would enumerate the local file system and try to find interesting tidbits of information.
And then it would go on to
determine its next steps,
based on what the previous script found.
So if it found that it's running on a Windows Server 2008 endpoint, it would do something differently than if it was running on just a regular user's endpoint.
We already saw that it would dynamically determine whether it would try to ransom the host by just encrypting everything, or whether it would start destroying files. Well, no, it couldn't destroy files yet, because that wasn't implemented. Another indicator of it being a proof of concept.
Or steal the information, I believe, was the other option... Exactly. Or exfiltrate all the data it found that it thought was interesting.
So it was already super
dynamic in its execution,
but what we're going to
start seeing with other
AI powered malware is
even more dynamic steps.
It could feasibly start looking into: what kind of OS am I running on? That's low-hanging fruit.
But also what kind of antivirus software
or EDR is running on this?
What other applications can I find?
Can I find known vulnerabilities
for these applications?
Do they have proof of concepts?
Can I exploit them?
And then even just generate exploits
for those vulnerabilities on the fly
without anybody having to do any
investigation into the target.
It could do all of these things
super dynamically and just on the fly.
Yeah, that makes it quite scary,
because we know there are some
ways to bypass an EDR.
We also did an episode on that.
But now, in the current state, an attacker needs to know which EDR you are running, needs to prepare everything for it, and has to make sure that the payload is able to disable that EDR solution. But what we might see is new malware coming into your system, simply checking which EDR is running, then asking the LLM: okay, I want something that disables this specific EDR solution, and probably getting back a script or a way of doing that.
Yeah.
Instead of a cyber attacker deploying malware and just waiting for, like, a C2 beacon, it could almost feel like you have an actual attacker inside your PC.
Even though it's all fully automated
and doesn't require
any input from anyone at all.
Okay, so it's not the creativity that a pentester would normally bring, but it's somewhere in between what we have now, which is relatively static, and full creativity; somewhere in the middle. But that also means it's very unpredictable.
Exactly. Yeah.
Could it maybe also be a benefit to us as defenders that it's less predictable for the attacker, do you think, or...?
Sorry, could you repeat that?
So, is it also a benefit for us that the outcome is always different? The predictability for an attacker is also less than it is today.
I wouldn't really know how to answer that
because on the one side, unpredictability
could lead to more obvious behavior,
meaning that it would become
much easier to detect.
But on the other side, unpredictability
could also just mean that anybody
looking into the behavior of a process
wouldn’t know what the hell is going on.
So yeah, it's a double-edged sword.
It can work both ways. Okay.
Yeah, that’s a fair point, I think. Okay.
What about timeline?
Do you have any idea when these newer
malwares would become a reality?
No, no. I mean, if you look back to during Covid, when ChatGPT started doing its thing, I think not a lot of people could have predicted that it would blow up and that using AI tools would become as commonplace as it is right now. I think most people would have thought it would have taken at least another couple of years.
With this, yeah, I mean, it is a proof of concept. It is new. But to me, it's kind of like the four-minute mile, where nobody had done it, and then one person did it, and then everybody did it. I think we might see something similar here: because it is now proven that it can work, we're going to start seeing groups actually actively trying it in the short term.
Okay.
I can imagine that maybe the requirements for running your own Ollama with a decent large language model might hold back a few people, because it's not easy to set up. You need a lot of investment. But once it's out there and once you have it set up like that...
That's also becoming a lot less complicated. With a generic consumer-grade GPU, something that you could use for gaming, you could run that specific gpt-oss-20b model easily and get pretty decent speeds in text generation.
So the hardware requirements are already pretty low for this. And I'm assuming cyber attackers, APT groups especially, nation-state attackers, will definitely have more than enough resources to perform these kinds of attacks.
Okay, so once they get used to these
kinds of techniques, we might see them
popping up. Yeah. Every day. Yeah.
Starting within maybe, maybe even a few days.
Yeah. We don't know.
Let’s hope not.
Let's hope not.
Let's hope we have a little bit
more time to prepare.
But I think that brings us
to a conclusion here.
So PromptLock really shows
us what is possible.
It's not sci-fi anymore.
It really shows a lot of capabilities, as kind of a preview of what's coming. But maybe it's already just tomorrow's reality.
So with that, thank you, Yuri.
Thanks for having me.
Thank you to the listeners.
If you like what you saw, don't forget
to like and subscribe,
as it really helps us
to spread the word further.
And with that, thank you.
Thank you for listening to Threat Talks,
a podcast by ON2IT cybersecurity
and AMS-IX. Did you like what you heard?
Do you want to learn more?
Follow Threat Talks to stay up to date
on the topic of cybersecurity.