Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).
Brian Fehrman:Hey, everybody, and welcome to another episode of AI Security Operations, where today we're gonna cover some of the latest news stories involving IAM issues with agent privileges, JavaScript, AI-generated attacks, as well as a couple other stories. But first, we'd like to say that this episode is brought to you by Black Hills Information Security for all of your pen testing needs. We also do SOC services and anything security related you might need, so reach out to us: blackhillsinfosec.com. Also, check out Antisyphon Training, where we have training that is put together by our experts in the field who are doing this stuff day in and day out.
Brian Fehrman:So you can learn from the people who are doing this as their job. So check it out: antisyphontraining.com. Alright. So let's kick it off. So today, short, small panel.
Brian Fehrman:I got myself, Brian, and Bronwen as well. Yep. And let's hit the first news story here. So I think this one was reported by Palo Alto, I'm not sure.
Brian Fehrman:Was this something they put together themselves? They mentioned research that, essentially, JavaScript is being generated using an LLM to create a seemingly benign web page that can be used with a phishing attack, it looks like. I believe that is the case. It looks like the page actually calls out to an LLM that then returns polymorphic code snippets that will turn the page into, basically, a working phishing page, and there's no real static payload, so to speak, that can be easily signatured by traditional defenses for scanning purposes. So it's interesting.
Brian Fehrman:So it seems like another case, basically, of polymorphic code, which we're seeing more and more frequently, and which obviously makes things more difficult to detect. Static signatures, I mean, you know, those have been a problem for a while, but especially with AI-generated code, that's just not gonna work very well, because it's constantly evolving. Right? What are your thoughts on this, Bronwen?
Bronwen Aker:A couple of key points about it. Palo Alto Networks, of course, is a well-known name in the cybersecurity space, and it's nice to see that they're doing this kind of research. What I find interesting is that it's another example of where LLM APIs can be used for badness. And the fact that we're seeing so much of this polymorphic code, that, for traditional cybersecurity detections, is, if not the kiss of death, close to it. It makes detection very, very challenging.
Bronwen Aker:And so with this research, they're demonstrating that more and more potential attacks and adversarial actions are possible because of, one, the potential for abuse of the APIs, and two, this nondeterministic, polymorphic, LLM-generated code. It's so hard to stay ahead of the fact that this is malicious, not benign, there we go. Yeah. First day, rented lips. What can I say?
Bronwen Aker:But we've said over and over again, this ability for things to change almost on the fly really is making detection and mitigation challenging, because it's a constantly moving target.
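A toy sketch of why that moving target defeats hash- or byte-pattern-based signatures: two functionally equivalent snippets, like the variants an LLM might regenerate on each page load, produce completely different hashes. The JavaScript strings below are invented for illustration, not from the actual campaign.

```python
import hashlib

# Two functionally equivalent "payloads" -- the same credential-grabbing
# logic with renamed identifiers, as polymorphic generation might emit.
variant_a = "function grab(f){return f.querySelector('input[type=password]').value}"
variant_b = "function collect(form){return form.querySelector('input[type=password]').value}"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Same behavior, entirely different signatures: a static blocklist of
# hashes or exact byte patterns never matches the next variant.
print(sig_a == sig_b)  # False
```

This is why the conversation keeps coming back to behavioral detection rather than static matching.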
Brian Fehrman:Yeah. I completely agree. And it seems like one of the things they mention here that kind of compounds the issue is that where it's calling out to get the code is what you might consider trusted endpoints and domains. I mean, it doesn't say specifically here, but it could be calling out to, you know, OpenAI or Anthropic or other places that don't immediately seem, you know, malicious, let's say. So that makes it harder, you know?
Bronwen Aker:And when you bundle this with traditional, or I should say pre-AI, existing attacks, like using punycode to imitate a legitimate domain, and then all of the cloning of legitimate websites to provide this pretext and a pretend location where people are going to enter their legitimate credentials, the combination of all of these things makes it even more challenging.
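The punycode trick mentioned here can be demonstrated with Python's standard-library `idna` codec (which implements the older IDNA 2003 rules): a Cyrillic "а" (U+0430) swapped into a familiar domain looks identical on screen but encodes to a completely different wire-format name. The domain is a well-known illustrative example, not one from the episode.

```python
# Punycode lets Unicode domains be represented in ASCII -- and lets a
# Cyrillic "a" (U+0430) stand in for a Latin "a" in a lookalike domain.
lookalike = "\u0430pple.com"           # renders like "apple.com" in many fonts
ascii_form = lookalike.encode("idna")  # stdlib codec, IDNA 2003 ToASCII

print(ascii_form)                 # the xn-- form is what actually resolves
print(lookalike == "apple.com")   # False -- different code points entirely
```

Browsers that display the xn-- form defeat the illusion; ones that render the Unicode glyphs do not, which is what makes this pretexting combination so effective.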
Brian Fehrman:Yes. Completely agree. Alright. Move on to the next one. This one is AI agents versus IAM, and who approved this agent?
Brian Fehrman:So it looks like one of the issues that we're running into is that people are wanting to deploy agents to help with task acceleration, with automation, and to start performing more and more tasks on behalf of people, without people necessarily needing to intervene. And to facilitate that, the agents have to have certain levels of privileges to be able to perform these actions. Right? And so I think that what this article is probably touching on is cases where people are like, hey, I don't know exactly what privileges this needs, so I'm just gonna go ahead and grant them all.
Brian Fehrman:And I could see how people might quickly fall into that trap, especially, you know, when we're talking about AWS. There are so many different services that are part of AWS. So recently, I was putting together, you know, the chatbot that we've been discussing, working on kind of a proof of concept to tie into our training material and implementing this in AWS. And it was, you know, a constant fight of trying to run this little proof of concept and then getting a service permission error. And now I gotta go grant access to that specific service.
Brian Fehrman:And then rinse and repeat, and, you know, by the time I was done with that, I mean, there were eight different services that I had to independently enable. Now, if someone wanted to, you can take a shortcut there and just wildcard it, so you could do, you know, bedrock:* or whatever. And I'm guessing that that's probably one of the issues that we're really seeing with this.
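The wildcard shortcut Brian describes, granting everything under a service instead of the individual actions the agent actually needs, can be sketched roughly as IAM-style policy statements. The action names and ARN below are illustrative, not taken from the episode or from any real deployment.

```python
# The tempting shortcut vs. least privilege, sketched as IAM-style
# policy statements (action names and ARN are illustrative only).
wildcard_statement = {
    "Effect": "Allow",
    "Action": "bedrock:*",        # every Bedrock action, present and future
    "Resource": "*",
}

least_privilege_statement = {
    "Effect": "Allow",
    "Action": ["bedrock:InvokeModel"],  # only what the chatbot needs
    "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
}

def allows(statement, action):
    """Crude check: does this statement's Action cover the given action?"""
    actions = statement["Action"]
    if isinstance(actions, str):
        actions = [actions]
    return any(a == action or (a.endswith("*") and action.startswith(a[:-1]))
               for a in actions)

# The wildcard quietly grants actions nobody ever audited.
print(allows(wildcard_statement, "bedrock:DeleteCustomModel"))         # True
print(allows(least_privilege_statement, "bedrock:DeleteCustomModel"))  # False
```

The permission-error-then-grant loop Brian describes is tedious, but it is exactly how a scoped policy like the second statement gets built.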
Bronwen Aker:Well, one of the things about all of these agents is that they don't just automate user actions. They expand the capability by being able to do things without waiting for a human to provide direction. But that's the double-edged sword, because I don't know how many organizations already struggle with RBAC, role-based access control, for their human agents. And now we're adding into the mix these automated synthetic agents, and they're getting even less supervision in some cases than the humans are. Yeah.
Bronwen Aker:Humans nowadays have to watch out for keyloggers and all sorts of surveillance being performed by their employers, but who's watching the synthetic agents? And if you give them root access, oh my god. You know, it's truly, truly terrifying. So the article is in The Hacker News, "Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents." And it really does go through how caution needs to be used when providing access to these unsupervised, or barely supervised, agents before you give them the keys to the castle.
Bronwen Aker:And again, it's back to the fundamentals: RBAC and IAM, which I know is an overloaded term, but it's about identity, management, and authentication for not only humans, but now for these synthetic agents as well. When you're developing these tools, be very cautious when you're defining your scope. Least privilege is not just a fancy term. It really will save your bacon if applied properly.
Brian Fehrman:Alright. Cool. So, let's see. Kick off the next one here. So NIST is focusing on securing AI agents within critical infrastructure and considering them part of the critical infrastructure, it looks like, rather than just IT helpers.
Brian Fehrman:And so they're trying to put together risk categories, such as autonomy, delegated authority, identity, data access, and operational safety risks. It also looks like they might have a public comment period for this, so they're still very much developing the standard, it sounds like. But at least they are focusing more on what the risks are for AI agents used within government and critical infrastructure environments. And I think that this is great, because it did seem a little bit slow for standards to start being adopted around AI security, and for frameworks that we can all kind of point to and use for reference. So it looks like this is just another step in that direction of trying to put together standards that we can all agree upon.
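To make those draft categories concrete, here is a purely hypothetical sketch of what an agent risk inventory built around them might look like. NIST has not published any schema; every field name and threshold below is invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical inventory entry mirroring the draft risk categories
# mentioned above; none of these names come from an actual NIST document.
@dataclass
class AgentRiskProfile:
    name: str
    autonomy: int             # 0 = human approves every action, 5 = fully autonomous
    delegated_authority: int  # breadth of actions taken on a user's behalf
    identity: bool            # has its own auditable identity, not a shared key
    data_access: int          # sensitivity tier of the data it can reach
    operational_safety: int   # blast radius if it misbehaves

    def needs_review(self) -> bool:
        # Example policy: highly autonomous agents without their own
        # identity, or with very broad authority, get flagged for review.
        return (self.autonomy >= 3 and not self.identity) or self.delegated_authority >= 4

bot = AgentRiskProfile("ticket-triage-bot", autonomy=4, delegated_authority=2,
                       identity=False, data_access=1, operational_safety=2)
print(bot.needs_review())  # True: autonomous but running on a shared credential
```

Whatever the final standard looks like, something in this shape, categories scored per agent and tied to review gates, is the practical endpoint of the comment process.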
Bronwen Aker:So the original post that we got this story from came from a LinkedIn article, and they were in turn referring to an article from Cybersecurity Dive. Never heard of them, but it sounds interesting. I should definitely explore it. And, yeah, they're talking about how NIST is actively seeking input from industry partners. And that's one of the nice things about NIST historically: the interaction between people who are in the industry and the government agency.
Bronwen Aker:That's been historical. Again, everything seems in flux these days, but hopefully that will remain unchanged. And the idea is to find out what people on the front lines are seeing as far as the security risks that are arising from, or possibly could be mitigated by, agentic AI systems. And that's a wonderful thing to be exploring, because, as mentioned before, we have polymorphism. We have the ability to ramp up very realistic fraudulent websites quickly now, especially using all of these agents and various data-driven tools.
Bronwen Aker:And I think that in the long run, it really is going to be AI versus AI, kind of the Spy vs. Spy thing from days of yore, but brought up and made current, where now whoever has the best AI wins. If you have a good AI as a defender, then hopefully, obviously, it would be better at defending your systems and possibly anticipating those polymorphic twists and turns that an adversary would be implementing. So good news on the cooperation, and on looking ahead to solve the problem as quickly as possible, because it's only gonna get worse from here.
Brian Fehrman:Yep. Lordy. Alright. Let's hit our final one here. So, Tenable One AI Exposure.
Brian Fehrman:So quickly, for those of you who aren't familiar, Tenable creates a variety of security products, but the Nessus vulnerability scanner is certainly one of their more well-known products. It has been around for quite a while and is certainly one of the main, kind of standard, tools that a lot of firms use. But it looks like they are putting out what they are calling AI Exposure, with the aim of mapping out and securing AI usage across an organization. And it's really trying to find both sanctioned and shadow AI usage. Shadow AI basically refers to when employees aren't granted access to, let's just say, sanctioned AI within the company, so they go off and start, you know, using their own personal accounts, and, more specifically, it usually refers to proprietary information ending up in those non-sanctioned AI sources. And so I think the idea here is to try to map out all of the AI that's being used throughout an organization, to try to help, probably, you know, identify risks and create better policies surrounding the usage that they're seeing.
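One simple way shadow AI discovery can work, sketched hypothetically here, is matching outbound traffic against known AI API endpoints. This is not a description of Tenable's actual method; the domain list and log format below are illustrative.

```python
from urllib.parse import urlparse

# Hypothetical sketch: flag outbound requests (here, proxy log URLs)
# whose hosts match known AI API endpoints. Illustrative only.
AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def flag_shadow_ai(proxy_log_urls):
    """Return the URLs whose host matches a known AI API endpoint."""
    hits = []
    for url in proxy_log_urls:
        host = urlparse(url).hostname or ""
        if host in AI_DOMAINS:
            hits.append(url)
    return hits

logs = [
    "https://api.openai.com/v1/chat/completions",
    "https://internal.example.com/wiki",
    "https://api.anthropic.com/v1/messages",
]
print(flag_shadow_ai(logs))  # the two AI API calls stand out
```

Real tooling would go further, correlating users, sanctioned accounts, and data volumes, but even this crude match surfaces the personal-account usage Brian describes.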
Brian Fehrman:Mhmm.
Bronwen Aker:Yeah. Every time some new technology is developed, there's always the shadow tech issue, and it invariably poses new and interesting challenges, because every new, emergent technology is going to be different. That's what makes it new and emergent. So, I know for a fact that penetration testers have a love-hate relationship with Nessus. One, because it can scan so many things, but also it does tend to throw a lot of false positives.
Bronwen Aker:I expect that this AI Exposure tool will have similar challenges, and yet I applaud the fact that Tenable is making this effort, because they're already in the cybersecurity infrastructure scanning business, so they're likely to have a good handle on the kinds of things that can be looked for to find this. And if it becomes another industry-standard tool for helping to identify it, I say go team, because we need these new tools. We need ways for security professionals, cybersecurity researchers, and penetration testers to be able to hit the ground running. And hopefully Tenable can make good on this desired outcome of being able to run a vulnerability scanner and identify indications where shadow data analysis stuff is being done.
I hate the term AI. I'm sorry, I'm just struggling with it today, because, yes, it is artificial, all this data stuff is artificial, but it isn't intelligent. Anyway, it's a wonderful thing to see happening. I will probably be keeping an eye out for developments as this tool continues to evolve, and I really hope that they are able to achieve at least a reasonable fraction of what they're hoping to with this new tool, because it is so important and so very needed.
Brian Fehrman:Yeah. Yeah. I agree. I am very much looking forward to what comes out of this, and to seeing it in action, seeing what it can do. I think it's a great idea.
Brian Fehrman:Cool. So with that, I think that was the last of our stories for today, so we'll go ahead and wrap it up. I'd like to say thanks to everyone for joining, and we hope to see you on our next podcast. Keep on prompting.