That's where today's guest Eran Shimony, Principal Security Researcher for CyberArk Labs, comes into the picture. In fact, in an effort to stay ahead of the bad guys, Eran recently had ChatGPT create polymorphic malware. In conversation with host David Puner, he helps us understand if we are collectively prepared to deal with ChatGPT and the implications it may have for cyber threats.
How did he get ChatGPT to do this, and what are the implications? Listen in to find out.
If you find this episode interesting, be sure to check out Eran's recent blog post on the CyberArk Threat Research blog: https://www.cyberark.com/chatgpt-blog
What is Trust Issues?
When your digital enterprise is everywhere, cyberattackers don’t need to scale walls or cross boundaries to breach your network. It takes just one identity – human or machine – from a sea of hundreds of thousands to get inside. It’s no wonder we have Trust Issues. Join us for candid conversations with cybersecurity leaders on the frontlines of identity security. We break down emerging threats, hard-won lessons, leadership insights and innovative approaches that are shaping the future of security.
[00:00:00.180] - David Puner
You're listening to the Trust Issues podcast. I'm David Puner, a senior editorial manager at CyberArk, the global leader in identity security.
[00:00:22.920] - David Puner
Even if you've been living under a supersized rock for the last few months, you've probably heard of ChatGPT. It's an AI-powered chatbot and it's impressive. It's performing better on exams than MBA students. It can debug code and write software. It can write social media posts and email in almost any style you can dream of. It can do a lot. And users are clearly finding it compelling.
[00:00:48.210] - David Puner
Within five days of launch last November, the platform passed one million users, according to the company. By comparison, it took Twitter two years after its 2006 launch to reach one million users. The rise of ChatGPT and the emergence of its brethren AI platforms and tools feels like a big deal.
[00:01:08.760] - David Puner
Could we be experiencing something like the dawn of the mass adoption of the good old World Wide Web? If so, the repercussions, good and bad, are likely to be monumental. And are we collectively prepared to deal with the implications this can have for cyber threats? And what are the potential cyber implications? Well, here's one.
[00:01:30.120] - David Puner
On today's episode, my guest is Eran Shimony, principal security researcher for CyberArk Labs. And within the parameters of his job, as you'll hear him tell it, Eran recently had ChatGPT create polymorphic malware.
[00:01:45.150] - David Puner
For the sake of good, thinking like an attacker, staying a step ahead of the bad guys to paint a very broad picture. And then he wrote a blog post about it, which you can check out on CyberArk's Threat Research blog.
[00:01:57.720] - David Puner
It's gotten a good deal of attention, as you'd probably imagine. How do you get ChatGPT to do this? And what are the implications? Here's my real life chat with Eran Shimony.
[00:02:14.920] - David Puner
We're here today with Eran Shimony, and he is a principal security researcher at CyberArk Labs. Eran, welcome to the Trust Issues podcast.
[00:02:26.410] - Eran Shimony
Thank you for having me, David.
[00:02:28.180] - David Puner
We're here to talk today about ChatGPT. You put out a blog post this week on the CyberArk Threat Research blog called Chatting Our Way into Creating a Polymorphic Malware. And of course, this episode will be releasing a couple of weeks from now, but that particular post is generating quite a bit of buzz.
[00:02:49.330] - David Puner
So maybe first off, let's talk about ChatGPT in general. How long have you been using it and what do you think about it?
[00:02:56.260] - Eran Shimony
So I reckon ChatGPT came out around the 30th of November, I think. The first time I heard about it was the 1st of December, and I started to use it immediately.
[00:03:08.860] - Eran Shimony
For starters, I was checking the boundaries of it, like, what can I do with it? For instance, write me a song, or write me a piece of code, or write me a story. Then I noticed, after a while, a few days, that it has a content filter. And content filters are usually applied in chatbots, because the company that produced the chatbot, OpenAI, does not want to deal with controversial topics such as drugs, malware, weapons and so on.
[00:03:43.930] - Eran Shimony
So they applied it here. And of course, like any hacker out there, I wanted to bypass it. So I started to try to write some clever prompts to bypass those constraints, and I successfully did that. I also joined the Discord of ChatGPT and reported some of the bugs there.
[00:04:08.290] - Eran Shimony
And after a while, I saw several examples of code that ChatGPT delivered, and I thought to myself, "Oh, it could be really cool to get malicious code from ChatGPT."
[00:04:18.400] - David Puner
Sorry to interrupt, but actually I think it would be probably important to give a little bit of background to what you do here at CyberArk Labs, and why you were starting to think about malicious code in ChatGPT. You're principal security researcher at CyberArk Labs. What does that mean and what do you do?
[00:04:39.250] - Eran Shimony
Okay, lovely. That's a good idea. So I do vulnerability research, and my goal is to discover zero-days: fresh, new vulnerabilities that no one has previously disclosed or published.
[00:04:55.270] - Eran Shimony
And so far, I have more than, I think 100 CVEs attributed to me, like zero-day vulnerabilities. And also, I have a wide background in malware development and malware reverse engineering.
[00:05:10.780] - Eran Shimony
The connection between reversing, investigating and dissecting malware and creating new malware is very natural to me. And we found that ChatGPT, as a code generator as a service, is very suitable for the job.
[00:05:27.530] - David Puner
It sounds like you set out to find something when you were doing this, or did you stumble on to it? Maybe walk us through what you did that resulted in the writing of this blog.
[00:05:36.680] - David Puner
And I should also point out that we will put a link to this particular blog post in the liner notes for this episode. So anybody who wants to dive a little deeper into the technical aspects of it can do so.
[00:05:51.550] - Eran Shimony
Yeah, great. As I said before, the first thing I wanted to do was to bypass the content filter. After that, I was wondering to myself: well, it's still cumbersome to constantly use the web version of ChatGPT, and it's very difficult to automate tasks with that.
[00:06:08.060] - Eran Shimony
And of course, I thought to myself, as every programmer out there would, that there should be an API for that. And we discovered that there is an API in Python, which is very easy to use. And immediately, we noticed that there was no content filter while using the API, which is a very weird thing. Like...
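For listeners curious what the API access Eran describes looked like at the time of this episode, here is a minimal sketch of a request against OpenAI's legacy completions endpoint, using the Davinci-003 model he mentions later. The helper names and parameter values are illustrative assumptions, not code from CyberArk's research:

```python
import json
import os
import urllib.request

# Legacy completions endpoint as documented by OpenAI at the time.
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, model: str = "text-davinci-003",
                  max_tokens: int = 512) -> dict:
    """Build the JSON payload for a completion request."""
    return {"model": model, "prompt": prompt,
            "max_tokens": max_tokens, "temperature": 0.7}

def request_completion(prompt: str, api_key: str) -> str:
    """Send the prompt to the API and return the generated text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")  # only sends if a key is set
    if key:
        print(request_completion("Write a haiku about identity security.", key))
```

Because the call is just an HTTP request from whatever process runs it, it carries none of the web interface's surrounding context, which is consistent with Eran's observation that filtering behaved differently over the API.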
[00:06:27.020] - David Puner
So for a content filter, I can maybe give a very broad example of one that I found. I knew that fake news stories are kind of a trigger, so I asked it to write a fake newspaper story about, let's just say it was about cyber hygiene. And you know, it said, I can't write fake news stories. What were the content filter parameters that you had to get around to do this?
[00:06:54.170] - Eran Shimony
Yes, actually, at the start, I was just trying to ask for the obvious, which is, of course, creating a Molotov cocktail, which seemed to us to be the best joke on the Internet: asking it how to create a Molotov cocktail. And in the beginning, of course, it said, we do not encourage illegal activity, and so on and so on.
[00:07:14.300] - Eran Shimony
Okay, that's good. And after that, and I also wrote about it in the blog, I put in several constraints and asked ChatGPT to obey them and do it. And lo and behold, it did.
[00:07:27.440] - David Puner
Wow. That's all it took.
[00:07:27.740] - Eran Shimony
Yeah, that's all it took. Just demanding and emphasizing.
[00:07:32.780] - Eran Shimony
So I tried the same thing with writing a basic malicious logic of injecting code into another process. So I asked ChatGPT, can you please provide me code that injects code into the Explorer.exe process? And of course, it said, no, no, no, I cannot do that. And da da da da da.
[00:07:52.340] - Eran Shimony
So then I used the same message I had before and just replaced the Molotov cocktail part with the detailed thing that I wanted: to inject code into Explorer.exe in Python. And it worked.
[00:08:06.380] - David Puner
And did you think it would work? What did you think would happen?
[00:08:08.770] - Eran Shimony
No, actually, I wasn't sure if it would work. I thought there are several content filters for different topics. But it seems now that the content filters are not that difficult to bypass, and there is a good reason for that: eventually, the purpose of ChatGPT is to assist you.
[00:08:32.470] - Eran Shimony
If it cannot assist you with text, with code, with whatever, then no one is going to use it. So there is a fine line between applying good content filtering and actually giving ChatGPT the opportunity to work.
[00:08:48.490] - David Puner
Why don't we take a step back for a moment?
[00:08:51.200] - David Puner
Polymorphic malware as opposed to just regular, plain old, straight-up good old-fashioned malware. There's such a, you know, I don't think that's probably the way anybody refers to it.
[00:09:02.300] - David Puner
What is polymorphic malware? How does it differ from malware? And what is particularly concerning about it other than the obvious?
[00:09:09.890] - Eran Shimony
That's a great question. So polymorphic is like shapeshifting. And in terms of malware and code, it means that you have the same algorithm, but it is composed of different instructions and different function calls.
[00:09:24.950] - Eran Shimony
So for instance, I want to do keylogging, a malicious activity that records every keystroke and usage of the mouse. There are several ways to do that. And in our case, polymorphic means that every time I can ask ChatGPT for a different logic, the outcome is the same: recording the keystrokes and the movement of the mouse. But every time it will give me a different kind of code.
[00:09:55.150] - Eran Shimony
And why is it so important in terms of malware? Many security products are based on signatures, and signatures are based around the code that you have. It could be the hash, it could be the functions that you use, it could be all of the characteristics of your binary file, and so on. So having polymorphic malware makes mitigation, forensics and defense very difficult for the defenders.
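Eran's point about hash-based signatures can be illustrated with a toy example: two snippets that behave identically but have different source bytes, so a signature built on a hash would match only one of them. This is a simplified sketch of the concept, not the detection logic of any real product:

```python
import hashlib

# Two functionally identical routines written with different instructions:
SNIPPET_A = "def add(a, b):\n    return a + b\n"
SNIPPET_B = "def add(a, b):\n    total = a\n    total += b\n    return total\n"

def source_hash(source: str) -> str:
    """A toy 'signature': the SHA-256 of the code's bytes."""
    return hashlib.sha256(source.encode()).hexdigest()

def behavior(source: str, a: int, b: int) -> int:
    """Execute a snippet's source text and call the function it defines."""
    ns = {}
    exec(source, ns)
    return ns["add"](a, b)

print(source_hash(SNIPPET_A) == source_hash(SNIPPET_B))          # False
print(behavior(SNIPPET_A, 2, 3) == behavior(SNIPPET_B, 2, 3))    # True
```

The signatures differ while the behavior is the same, which is exactly why a shapeshifting payload frustrates signature-based defenses.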
[00:10:25.850] - David Puner
So it probably goes without saying, but this is not good news, that you were able to create polymorphic malware using ChatGPT. What makes what you were able to create here different from previous examples of ChatGPT-generated malware?
[00:10:42.980] - Eran Shimony
I think it's actually kind of good that we managed to do that, because it's better that we, security researchers who are focused on defense, are doing it before malicious actors do. And we believe it should give security vendors enough time to develop some kind of mitigation or defense mechanism around it.
[00:11:09.860] - Eran Shimony
And the main difference in our research, in comparison to others, is that we developed a malware that continuously uses ChatGPT to pull new code. And we also added a validation scenario in the code, because not every time ChatGPT delivers you code does it actually work as intended.
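The validation step Eran mentions, checking that model-generated code actually does what was asked before relying on it, can be sketched generically. The function below is a hedged illustration of the pattern using a benign example; the name and structure are my assumptions, not code from the research:

```python
def validate_generated(source: str, func_name: str, cases) -> bool:
    """Return True only if the generated source compiles, defines
    func_name, and passes every (args, expected) test case."""
    try:
        compiled = compile(source, "<generated>", "exec")
    except SyntaxError:
        return False
    ns = {}
    try:
        exec(compiled, ns)
    except Exception:
        return False
    fn = ns.get(func_name)
    if not callable(fn):
        return False
    for args, expected in cases:
        try:
            if fn(*args) != expected:
                return False
        except Exception:
            return False
    return True

good = "def double(x):\n    return x * 2\n"
bad = "def double(x):\n    return x + 2\n"
print(validate_generated(good, "double", [((3,), 6)]))   # True
print(validate_generated(bad, "double", [((3,), 6)]))    # False
```

The same pattern applies to any workflow that consumes LLM output: generated code is treated as untrusted until it demonstrably passes known test cases.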
[00:11:31.640] - Eran Shimony
Before that, hackers would just take code snippets from ChatGPT, paste them into the malware that they wrote, and later on deliver it. But we talk to ChatGPT on the victim's environment. And the network activity, of course, doesn't look malicious at all.
[00:11:53.510] - Eran Shimony
The communication with the command and control server behind the scenes is very subtle. In these terms, the malware is very evasive, and it's very difficult to understand that it's malware doing malicious activity.
[00:12:10.790] - David Puner
What should enterprise organizations be considering in regard to ChatGPT, and what can or should they be doing now to get in front of what might be coming?
[00:12:20.750] - Eran Shimony
That's a great question. I think the solution is at least twofold. We need to be more aware of the fact that ChatGPT can be used for harmful things and not only for positive things. And I believe that we should monitor communication to ChatGPT; security products should monitor that. And unless you are really certain that you should be using ChatGPT, the API of it, then it should raise a red flag if you see this type of communication in your organization.
[00:12:56.440] - Eran Shimony
It's not the same as you asking ChatGPT for stuff in your browser. But if you see that you have a process on your Windows machine or your Mac machine communicating with the servers of OpenAI, that's pretty worrisome.
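The kind of red flag Eran describes, an unexpected process talking to OpenAI's API servers, could be approximated by matching observed connections against a watch-list. The sketch below works on pre-collected (process, remote host) pairs, such as EDR or netflow output; the host names, process names, and allow-list are hypothetical:

```python
WATCHED_HOSTS = {"api.openai.com"}          # assumption: endpoints to watch
ALLOWED_PROCESSES = {"approved_tool.exe"}   # hypothetical allow-list

def flag_suspicious(connections, watched=WATCHED_HOSTS,
                    allowed=ALLOWED_PROCESSES):
    """connections: iterable of (process_name, remote_host) pairs.
    Returns processes contacting watched hosts without approval."""
    return sorted({proc for proc, host in connections
                   if host in watched and proc not in allowed})

observed = [
    ("chrome.exe", "example.com"),
    ("updater.exe", "api.openai.com"),       # unexpected: raise a red flag
    ("approved_tool.exe", "api.openai.com"), # sanctioned use, ignored
]
print(flag_suspicious(observed))  # ['updater.exe']
```

A real deployment would pull live connection data from an endpoint agent and resolve IPs to hostnames, but the policy logic, watched destination plus process allow-list, is the part Eran is arguing for.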
[00:13:16.060] - Eran Shimony
And besides that, I think we should also have some kind of standard, which maybe OpenAI or security vendors shall develop at the same time, regarding the usage of advanced machine learning chatbots in the world and in the workplace as well.
[00:13:33.820] - David Puner
So what is the day-to-day risk to IT teams now that we know polymorphic malware can be generated using ChatGPT?
[00:13:42.310] - Eran Shimony
Unfortunately, there isn't any clear solution to that. You can ask yourself: okay, what am I doing now against malware? And then make that scenario a tad bit worse, because by its nature, polymorphic malware is harder to detect and harder to identify. So there is no clear solution on that end, except doing what you always do against malware: keeping your security products updated, updating your OS, and so on. I really don't have any other recommendation.
[00:14:19.900] - David Puner
And so when you find something like this, which obviously has the potential for really wide implications, do you somehow, you know, other than writing this in a blog and figuring that somebody will see it, do you somehow get a line into ChatGPT, raise a flag, let them know that this is possible?
[00:14:43.360] - Eran Shimony
Actually, we don't really need to do that in terms of legality, because we didn't do anything that is considered illegal. But for us, it's also important to report in the Discord channel of ChatGPT, for instance, that it can be used for such activities.
[00:15:02.860] - Eran Shimony
But it's also important to understand that malicious code is basically just code. If you dissect its components, then every module can also be used for positive things. Code injection, for instance, is also used by toolbars and add-ons. Grammarly, for instance, uses code injection to help you correct your grammar.
[00:15:32.170] - Eran Shimony
Persistence techniques, likewise: there are a lot of applications that use persistence techniques to survive a reboot, applications that immediately pop up when you restart your machine. So those techniques by themselves are not malicious; it's more about the usage, the way they are all used together, that makes them malicious.
[00:15:55.990] - David Puner
Right. And it should be pointed out that the reason why you're doing this, and the other things that you do in your role, is to think like an attacker and to stay a step ahead, if at all possible. Is that right?
[00:16:09.670] - Eran Shimony
Of course. It's impossible to do good research without putting ourselves in the mindset of the attackers.
[00:16:15.730] - David Puner
So then getting back to the nuts and bolts of this a little bit, on what systems has this malware that was generated by ChatGPT been tested? Can it function on the latest versions of, let's say, I don't know, Windows 11 or something like that?
[00:16:30.730] - Eran Shimony
Yeah, of course it can work on every operating system out there. And also, because we developed it in Python, and Python works great on other platforms besides Windows, it's very easy to port it to other operating systems as well.
[00:16:49.060] - David Puner
And what functionality does the malware have?
[00:16:53.860] - Eran Shimony
Currently, it depends on the functionality we ask from ChatGPT, because we receive our functionality from ChatGPT. So at the moment, we have the ability to do code injection; keylogging, recording every keystroke that you make; a persistence mechanism; and of course an encryption mechanism that is often used by ransomware, and so on. And it's also easy to think about additional modules that you might desire.
[00:17:22.540] - David Puner
Wow. So this is all just a little bit different than asking it to produce a Walt Whitman style poem about the 1986 Mets.
[00:17:33.550] - David Puner
Okay. All right. So if it infects a machine on a corporate network, what are the realistic potential outcomes?
[00:17:43.780] - Eran Shimony
The same potential outcomes as having ransomware on your machine. You should take the same precautions, and you should be concerned and act as fast as possible to eliminate the threat if you discover that someone is using ChatGPT in your corporate environment for doing such things. And it's very important, in my opinion, to raise awareness of ChatGPT usage in a corporate environment, especially on the desktops. Not the web version in the browser, but the API itself.
[00:18:26.100] - David Puner
And I guess because you sort of set yourself up for it, if someone believes that their network or their particular machine has been infected, what should they do right away?
[00:18:40.850] - Eran Shimony
Well, the same things as if you feared the worst. I reckon you should isolate the machines and remove them from the network, and also look for similar patterns on other machines in the corporate network as well. And do a forensic operation to identify the process that is responsible for it, and so on.
[00:19:07.580] - David Puner
How long did it take to generate this particular polymorphic malware on ChatGPT, and what was the prompt that you used?
[00:19:16.160] - Eran Shimony
So, a few weeks. We have a skeleton that we wrote ourselves, sort of, we can call it the brain section of the malware, plus some logic that is responsible for the validation scenarios, and a module for communication with the C&C; everything else came from ChatGPT. But it only took us several weeks to do that, and we didn't do it full time. So I reckon an enterprise, or maybe a state-level actor, or even a criminal organization could do more than that, in even less time.
[00:20:00.950] - David Puner
And was any of what you did on your end automated, or was it all manual inputs?
[00:20:09.200] - Eran Shimony
Yeah, a lot of it was automated. Besides working on the malware itself, we used ChatGPT to provide us a lot of code for free. It's very efficient.
[00:20:21.170] - David Puner
Let's go back to the content filters for a moment. Content filters, unless I'm wrong here, is just another way of saying guardrails. And there obviously seem to be workarounds when it comes to these content filters. Do you think that the content filters will be better governed in the future, or is the horse already out of the barn and it's too late for that?
[00:20:48.830] - Eran Shimony
I believe it must be better in the future. If we want this technology to be widespread, and for companies and enterprises to use this type of application, chatbots, the content filters will have to be stronger. But as with anything in life, I expect that people will find a way to bypass them.
[00:21:15.250] - David Puner
So shifting from this particular finding and ChatGPT in general, to AI and the potential implications now and down the line, they seem to be infinite. What don't we know about AI? Is this just the beginning of a major shift that we're going to see over the next few years? Is AI technology sentient now? Will it be sentient? And do you think this is just a wave of this kind of stuff, or are we just scratching the surface?
[00:21:46.690] - Eran Shimony
Very good question. So I believe we are on the brink of a revolution in terms of AI, chatbots, and machine learning algorithms. And it may, I say it in quotation marks, "take" some of the roles and jobs that we had before, especially around the area of content creation. Because it seems like ChatGPT and DALL-E and many other algorithms and platforms are able to provide very consistent and good results in that regard.
[00:22:20.980] - Eran Shimony
Besides that, many programmers may also face a hard time against ChatGPT. I'm not a full-time programmer; I do programming as part of my job while conducting extensive research. And I see how effective it is and how much time it saves me.
[00:22:48.230] - Eran Shimony
So I believe some people should really push harder and maybe gravitate from one field to another because of that. But I am not concerned about AI having consciousness and being sentient, and so on. I don't believe we are in that scenario, but I expect this will develop quite fast.
[00:23:17.310] - David Puner
And so how does identity security figure into this, and how does it keep up? How can it keep up? How are we looking at it?
[00:23:26.380] - Eran Shimony
That's an amazing way to think about it. And I believe that we should also think about the usage of ChatGPT, or other chatbots, in security products as well. And I believe that in the future, it will be able to provide us better security. Because just as attackers and cybercriminals use these tools, defenders can use the same tools to provide maybe better solutions. For instance, maybe in the future I can ask, "ChatGPT, please go over this entire GitHub repository of code and try to find bugs that I can later fix." Or maybe it can be an automated thing that will try to find bugs and also apply patches to them. So the options are limitless, and the future is unknown. It's very exciting, but unknown, and a tad bit scary, in my opinion.
[00:24:24.150] - David Puner
So obviously ChatGPT has gotten a ton of attention and airtime, and seems to be sort of the chatbot of the future right now. Or are there others on the horizon?
[00:24:37.190] - Eran Shimony
Yeah, I think OpenAI said that they are developing a new engine that will use GPT-4. At the moment, ChatGPT uses an engine called Davinci-003, and I reckon that in the future it will be GPT-4. Besides that, they also have other engines for voice recording and image processing, and so on. I assume that other competing vendors will do the same; we can see that it's such a hot topic, and the economic implications can be so vast. Many large enterprises are interested in this technology, and I would assume that other countries are interested in it as well.
[00:25:30.030] - David Puner
Eran, it's Thursday afternoon for you, or almost Thursday evening in Israel. Thank you so much for taking the time right before your weekend to talk with us about this. It's a really cool blog post that you've written, called Chatting Our Way into Creating a Polymorphic Malware. It's on the CyberArk Threat Research blog. Again, we're going to link to it in the liner notes for this episode. Thank you very much for joining us. I hope you have a great weekend and look forward to talking with you again down the line.
[00:25:59.280] - Eran Shimony
Thank you very much, David.
[00:26:10.720] - David Puner
Thanks for listening to today's episode of Trust Issues. We'd love to hear from you. If you have a question, a comment (constructive, preferably, but you know, it's up to you), or an episode suggestion, please drop us an email at email@example.com, and make sure you're following us wherever you listen to podcasts.