Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).
Welcome to AI Security Ops, the weekly show where we cut through the hype and dig into how AI is being weaponized and how we can defend against it. This week's headlines range from prompt injection exploits that turn ChatGPT into a data leak tool, a critical vulnerability discovered in the n eight n platform, and malicious Chrome extensions siphoning AI chat histories, not to mention a classic a p t 28 credential stealing campaign that now leans on AI generated phishing lures. Fun times. This episode is brought to you by Black Hills Information Security and Antisyphon Training. BHIS helps organizations find real world security gaps before attackers do through penetration testing, adversary emulation, and purple teamwork.
Bronwen Aker:Antisyphon delivers hands on practitioner led training built around real attacks and real tools. So you can use what you learn immediately. Learn more at blackhillsinfosec.com and antisiphontraining.com. So the first story that we have is about a zero day in n eight n, which is an open source workflow automation platform. The flaw resides in the HTTP endpoint that parses workflow definitions without proper authentication checks.
Bronwen Aker:An attacker can send a graphic JSON payload that triggers remote code execution with the privileges of the n eight m service account effectively taking over the host. Now because n eight m often runs with elevated rights to access internal APIs, the impact can cascade across an organization's automation layer very quickly. Automation platforms like n eight n are increasingly the glue of modern infrastructure. So a single remote code execution vulnerability can compromise the entire the entire pipeline. So with that introduction, I'm gonna open up the floor.
Bronwen Aker:What do you gentlemen have to say about this? I have wanted to dive into n8m, but, you know, only so many hours in a day. What do you think?
Brian Fehrman:Yeah. So Derek and I have both been playing around with this quite a bit as well as a couple other people I know at the company, Bo, has been playing around with this
Derek Banks:a
Brian Fehrman:lot. One of the baits we're still having is it n eight n or is it Nathan? Which I said to Derek the other day is kind of like the southern pronunciation of Nathan.
Bronwen Aker:According to the n eight n website, it is n eight n. So
Brian Fehrman:Yeah. n8n Alright.
Derek Banks:That's what I've heard most folks on YouTube saying, but I do like Nathan. It's Nathan. Yeah.
Brian Fehrman:It just kinda rolls right off the tongue. Nathan.
Derek Banks:Well, I mean, is it Azure? Is it Azure? Is it Azure? No one knows. Azure.
Derek Banks:Azure.
Brian Fehrman:If you want to be fancy about it.
Bronwen Aker:Potato, potato. Can we call the whole thing off?
Brian Fehrman:But this I mean, this the the tool though is awesome. It makes automating these different workflows or throwing together a workflow automation, I'll say, so much easier. I mean, could you go out and code this all up yourself? Certainly. I mean, you could go out, you could put together the discrete Python pieces.
Brian Fehrman:I mean, you could have Chachapiti and Anthropic, whatever you like to use, help you put together these pieces. Sure. You can do that. But what you get here is that the whole drag and drop, like, visual style of programming. So if anyone's ever played with something like LabVIEW before, which I've played around with a whole lot back when I worked at the university and dealing with data collection and signals processing, it can make some of these some of these tasks a little more abstract sometimes a lot easier because you can see that, okay, I have this block that does this and the data flows out to this block that then does this.
Brian Fehrman:And before I mean, in no time at all, basically, you can have an entire workflow together that's pulling in information and, you know, sending it massaging it through into whatever format you want and pulling it out. And so, Derek, what's some of the stuff you've been working on with it? I know you've been putting together some cool stuff with it.
Derek Banks:Yeah. I mean, kind of similar in the same vein where I was using n eight n to actually take proof of concept Python code that I wrote for our class that does essentially a cyber threat intelligence, like, newsletter. And I replicated that literally sitting on the couch watching a basketball game in Nathan. Now I used the template and, you know, kinda changed things around. And but I was I think that your to your point, you can write the scaffolding or the glue code, whatever you wanna call it, around large language models to perform tasks other than chatbot tasks.
Derek Banks:Right? But the glue kind of is in the widgets of the web interface. It's like a web canvas interface to to drag and drop different functionality to make it easier. I say easier because it's not easy. I've spent, like, the entire morning today trying to do something much more complicated than a a newsletter.
Derek Banks:But I I think it's really promising from an orchestration perspective. I I really do. And I I applaud the folks at at Naten for open sourcing or or making available for free, rather, the Docker container that is so full functioned. Alright. And and, you know, kudos to them.
Derek Banks:As far as the vulnerability goes, yeah, it's serious. But, I mean, is anybody surprised? I'm not. Right? Because I I mean, I know we've said this on the show before.
Derek Banks:I mean, you know, the pioneers take the arrows. Right? And, yes, it is a flaw that they should fix. Hopefully, they fixed it already. I I haven't checked.
Derek Banks:But, I mean, if you're an organization that's running, Nathan, on the outside of your network exposed to the Internet, I would question why you did that in the first place. I'm not saying that you shouldn't take an external data into your workflow, but standard information security rules apply here. Right? Like, you shouldn't expose something to the Internet without you know, thinking of the risks and repercussions. And so to me, that's like the biggest risk, at least from what I can tell.
Derek Banks:Right? Yes. It's an unauthenticated RCE, so as you you can communicate with the endpoint. So Internet exposure is the most serious thing. Then internally, I mean, yes, I wouldn't expose it internally to anything either.
Derek Banks:And I know we've I I would authorize what what should talk with it. Right? And I know that we've said this before, but I mean, this is really what is old is new again in information security. Right? I mean, I I think that you should look at all AI technologies as as beta at this point still in 2026, especially when it comes to agents and orchestration and data processing internally with your sensitive data.
Derek Banks:And you should be very careful what you allow to communicate with those things because, look, the folks that are coding, Nathan, are worried about functionality at this point, not necessarily security. But maybe now they're a little bit more worried about security. Yeah.
Bronwen Aker:Well, it's it's you said it yourself. Everything old is new again. And over and over again with these AI implementations, I'm seeing that it's the same suspects, the usual suspects that we run into with any type of thing. If the the authentication is bad, if the RBAC is bad, if if there aren't clearly defined policies or or guidelines in terms of what it can access, of course, badness is going to happen. Who is really surprised by this?
Derek Banks:Well, not information security practitioners. Right? I mean, the thing that I'm doing with with n a n at the moment is I'm literally allowing a a a a model access to tools that have running in a container with an API and then in some cases, like command line tools. Just those things in itself are, like, risky. Right?
Derek Banks:Now I'm not putting together a production system at the moment. Right? But I think that, you know, if you are putting together a production system and, you know, whether it's a chatbot or some kind of other AI powered system, just like anything else you've implemented in the last twenty years, you should probably have a security assessment performed on that system by someone who is qualified to do so.
Brian Fehrman:Yes. Yep. Absolutely.
Bronwen Aker:Anything else on the n eight n vulnerability, or shall we move on?
Brian Fehrman:Nope. Tilt the next one.
Bronwen Aker:Okay. So the next story is about ChatGPT's memory feature supercharging prompt injection. Researchers have discovered a zombie agent exploit, which abuses ChatGPT's new long term memory feature. An attacker will craft a prompt which stores malicious instructions in the model's persistent memory, then later trigger it via a seemingly benign follow-up query. The module dutifully executes the payload, leaking conversation history, or running arbitrary code on integrated tools.
Bronwen Aker:So this attack is hinging on the assumption that memory persistence is benign and that user prompts are fully trusted.
Derek Banks:Does anybody else hear the cranberries right now? I'm just wondering. Sorry. Yes. I mean, it's also how memory works.
Derek Banks:Right? I'm assuming some things I I did not read in-depth about this, but I'm assuming some things need to be true. Like, you send me a crafted URL. I have to click on it. I probably also have to be authenticated to chat GPT.
Derek Banks:Right? Like and so yeah. I get that, but I yeah. I I mean, the alternative is the term memory off and not use the feature.
Brian Fehrman:Don't know. Which which we've talked about on a different episode. Honestly, that's what I do. Not because of the I didn't even have the security risk in mind, but I just I don't like it when it uses other chats as context for current chats. I mean, certainly, I mean, I've you know, within the context of a chat, I wanted to use my previous messages to guide future messages.
Brian Fehrman:But if I start a new chat, I don't want it to use anything else that that I've done previously because maybe I'm chatting with it and I went down the entirely wrong path when I'm trying to solve a problem. But now it's using, it thinks that that's the way that I wanna go when I'm trying to approach the problem from a different angle and I don't like that. I like a a blank clean slate. And I know everyone's got got their own preferences, and, know, and you can go about it otherwise too, which you should be deleting your chats periodically. I mean, it's not good to I don't think it's good to just let them keep building up.
Brian Fehrman:So I do like to delete them also, but personally, I'm I'm a fan of just shutting off the the memory feature in in general.
Derek Banks:Yeah. I mean, so I'm kinda a little more nuanced with it. I don't run it on chatgpt.com, and I use chatgpt for more kinda like one off kinds of things. But I do have it running for Cloud Desktop. I really like Cloud Desktop, and so I want it there.
Derek Banks:In fact, I asked Cloud Desktop what it knew about me, and I felt like it knew more about me and what I do, especially when it comes from a coding perspective than than, like, than I did. And I was like, dang. This is good enough. I might actually include this in my annual evaluation, CJ. I do.
Derek Banks:I did some of the things, man. It was like, I didn't even think about that. That's really good. Right? And so I think
Brian Fehrman:Please do go on.
Derek Banks:Yeah. Please. Right? And so I I think there's a time and place for it. I think people just need to be aware and evaluate their risks.
Derek Banks:Right? I'm also someone who I I just I don't even click on links that, you know, Brian Fearman might send me. Right? I'm probably gonna copy
Brian Fehrman:I understand why I wouldn't either.
Derek Banks:Yeah. I mean, one time, I actually had John Strand send me an email, and it had thoughts, question mark, and then a link to a Word document. And so I went and downloaded the Word document in a VM and actually looked at it and made sure that it was, like, not weaponized. Right? And this is years ago.
Derek Banks:Right? And I actually called John and be like, did you mean to send me that? And he's like, oh, yeah. Want you to take a look. So I I guess maybe I have a little bit higher level of paranoia.
Derek Banks:But, I mean, just kind of, like, balance that risk kinda reward thing, I think.
Bronwen Aker:Well, that's the thing about being an info sec. I often feel like I have this split personality, like an a new Gizmo will come out and I go, oh, shiny. And it's the same thing. Oh, that hurts. You know, it's it every every new shiny toy has a downside, like the teddy bears that got released over Christmas that are that are AI enabled.
Bronwen Aker:Did you
Derek Banks:see any That sounds fun.
Brian Fehrman:Yeah. I haven't seen this.
Bronwen Aker:It's just disastrous as you can imagine. So, I mean, like you, I leverage the memory feature judiciously. And a while back, I had it write up a biography of me for a potential podcast. And of and, course, I'm looking at what it says, and I'm I'm intellectually, I I'm going, I detect no falsehoods. But on an emotional level, it's like, who the hell is this person?
Bronwen Aker:That person sounds like they really know what's
Derek Banks:going on. My response. I'm like, man. But I don't know. I I could look at the system prompt.
Derek Banks:I had chat GPT kind of flatter me today and and something. I I asked it about, like, to help me do something. It's like, man, what you're doing is serious and awesome or something along those lines. Like
Bronwen Aker:God. I hate the semantic. I hate when it sucks up to me. The things that I do like about ChatGPT, you have the ability to have a temporary conversation, and those do not leverage any of the the memory features. So I'll use that if say, I have a one off.
Bronwen Aker:I don't have any need to to capture it for future reference, And I'm basically using Chechi BT as a search engine in that case. But there are times that the the memory feature is nice because I hate having to tell it over and over again not to use inline dashes in its output. But that's me. So but what do we think about the data breach? I mean, again, are we really surprised?
Derek Banks:I mean, don't I think data breach is a strong term here, right? Like, is it a ChatGPT data breach, or is it a like, I don't know. It's it's kind of like kinda sort of a data breach, but not really. It's like calling stealer malware a a data breach. Like, it's not like a whole bunch of data from a whole bunch of folks was taken from ChatGPT.
Derek Banks:It's that you were, I'll say, targeted and clicked on a link, and then some of your data was stolen. So I mean, not diminishing that for anyone that that's happened to, but I don't I don't know. Like, usually, when I hear data breach, I think of, like, a company and a lot of data. Right? But, I mean, I guess, like you said earlier, potato potato.
Derek Banks:I don't know how don't know how architecturally you fix that. Right? I might I actually would think hope that browser controls would help, but obviously not.
Bronwen Aker:Well, the idea of the the whole the way they describe the whole zombie agent thing reminds me a lot of a persistent cross site scripting bone.
Derek Banks:And I Yeah, like a stored cross site scripting. That's what I thought of when I heard this. But a whole lot harder to fix. I mean, it's sort of like a feature. I mean, that's like part of the problems with some of the issues that we've highlighted across the last couple months of this podcast is things like prompt injection is the issue and the feature.
Derek Banks:Right? I mean, it's just how it works. And so and and I think I actually read here recently over the break that, I guess it was either OpenAI or Anthropic basically admit like, admitted that that prompt injection has no no fix in sight. Well, yeah. No.
Derek Banks:I mean, it's the nature of the technology. I don't until we find a different, like what comes after a transformer or what the next technology is that's AI, then I don't don't know that it gets fixed.
Brian Fehrman:Yeah. No. I mean, at this yeah. I mean, at this point, it's basically like saying fix social engineering. I mean, Yeah.
Brian Fehrman:Okay. You know?
Derek Banks:Yeah. It's it's
Brian Fehrman:With with web forms go ahead. Oh, go ahead. I'm sorry.
Bronwen Aker:No. You first. Please.
Brian Fehrman:Alright. No. I was just gonna say, yeah. I mean, it it's about, you know, blocking down the things around it, mitigating the actual risks, and mitigating the damage that come that come along with it, you know. But, yeah, it's at this point, I mean, it's just there's architecturally, there's not really a good way to to fully address the issue.
Derek Banks:Yeah. I guess one thing that for this particular thing is, I mean, maybe not put things that are sensitive into chat GPT. I mean, there there is that. Right?
Brian Fehrman:There's that.
Derek Banks:Yep. I can't I don't think that I I typically don't put what I would consider to be sensitive information into chat GPT. Maybe a little bit more with clogged code and clogged desktop. I would consider that a little bit more sensitive, but that's not really applicable in this situation. Right?
Derek Banks:Different technology, different thing. So
Bronwen Aker:The the thought that comes to mind for me is that when it comes to like web interfaces, input validation is usually fairly straightforward. But with an LLM or some other artificial intelligence generative doohickey, how do you validate the input? Like you said, how do you do that protection for social engineering?
Derek Banks:That A little bit more porous of a Band Aid, Mhmm. But Yeah. I mean, you're you're basically gonna put another classifier in front of it to try and classify the text as, you know, malicious or not. And and and it's not like, I don't know that you can get away with the same kind of input. Like, I've I've done web app tests where, yeah, the the the web input or the, you know, the user input was highly restricted.
Derek Banks:And in some cases, I was still able to get around it. Right? But I mean, how much could you put in place in ChatGPT before you started affection affecting functionality of being able to use the app?
Brian Fehrman:Yeah. I mean, that's that's been the that's that's been the ongoing battle with, I mean, security in general. Right? Sec security versus convenience. I mean, at some point, you have to be able to use the thing.
Brian Fehrman:Yeah.
Derek Banks:Yeah. Because the most secure state is off.
Brian Fehrman:Well, yeah. Exactly. Just unplug it and it's, you know, probably alright.
Bronwen Aker:There there are reasons why I'm very glad I live in a remote area. If I wanna go off grid, all I need to do is flip one button and it's that is a very reassuring thought on a regular basis. So since we're talking about ChatGPT, the next story is also about ChatGPT. In this case, it's a new zero click attack. Radware's research team uncovered a zero click attack that abuses ChatGPT's agentic features.
Bronwen Aker:By sending a crafted URL to a victim's email, an attacker triggers a model to auto execute a data export routine without any user interaction. Now I'm curious if this is specific to email accounts with a specific provider because I know not all not all email hosting providers are created equal, but I'm getting ahead of myself. So the model supposedly retrieves the user's recent chat logs, exfiltrates them to a command and control server, and then the exploit relies on the model's default permission to access prior conversation history and the lack of authentication on the auto execute endpoint.
Derek Banks:Sounds like a fancy way to say indirect prompt injection. But that that's what it sounds like to me. Right? Is I I know that you have, I'll say, Gmail. You know, your your OpenAI account is reading agentically your email, so I guess it can give you a summary of your daily email.
Derek Banks:And then and then there's a prompt in there that gets the model to go off and grab things. And, I mean, I think this is again one of those things that's not gonna get fixed. Right? Not not entirely. I mean and this is actually pretty serious when you start thinking of, like, other things.
Derek Banks:Like, I mean, on your phone. Right? Like, you know, you can have Siri summarize emails and thing text and things of that nature. I would bet to some degree there's a vulnerability there. So I think indirect prompt injection is a very, very interesting and sort of like a what do you call it?
Derek Banks:The silent killer. Right? Like, you don't even know that it's there. Am I reading that right, Brian? Sounds like indirect prompt injection.
Brian Fehrman:Yeah. Yeah. Certainly. I'm pulling up the whole article over here and they're also labeling this as zombie agent as well. So I'm curious if this if this is related to the last one that we we just did because it it seems like two different things here because the other one is
Bronwen Aker:But it
Derek Banks:seems like there's overlap. Yeah. What's that?
Brian Fehrman:One's a persistence. Like, the other one's more of like a persistence mechanism. This one is like more of like the initial like like, a different compromise vector. Like I said, zero click that you've got the email summarization, and you're just embedding the instructions within the the emails, which is, you know, been an idea that's been around for a while. I mean, we participated in that CTF, like, a year ago that Microsoft put on the l l mail challenge, which was basically doing this.
Brian Fehrman:You were sending emails to, you know, a a fake account, essentially, your account set up that had a summarizer, and the entire goal was to get the summarizer to perform some kind of an action. And we actually have a lab of that in one of our our AI classes.
Derek Banks:And the one that we are teaching in the fall in Deadwood. Oh.
Brian Fehrman:That's right. Fall and also, I think it's coming up in February March? Or March, I'm not mistaken.
Derek Banks:I think it's March. Virtual teach of it, I believe.
Brian Fehrman:Mhmm. Yep. Yeah.
Derek Banks:And so, yeah. I I don't think this stuff is going anywhere. This is probably gonna be a recurring thing when we do news episodes. Oh, look. It's the latest indirect prompt injection.
Brian Fehrman:Mhmm. Yep.
Bronwen Aker:Good. So the next story is that two Chrome extensions were caught stealing ChatGPT and DeepSeek chats from 900,000 users. This article was posted in the Hacker News. Security researchers found two malicious Chrome extensions that silently capture conversations from OpenAI's ChatGPT and also from DeepSeek along with regular browsing data. Once installed, the extensions inject JavaScript into the AI web UI, read the DOM, the document object model for chat text, and post it to an attacker controlled server.
Bronwen Aker:Supposedly, over 900,000 users were exposed because the extensions passed Chrome's store review by masquerading as productivity tools. Shocker.
Derek Banks:I got got a I almost feel like that that that this isn't the story as much as maybe, like, do OpenAI and DeepSeek's websites not have or are they blocking third party scripts, like, in their content security policy?
Bronwen Aker:It sounds
Derek Banks:I like it's really don't know how this works. Like, I But would see that's actually, man, that's pretty sophisticated if we go after some chats. Right?
Brian Fehrman:Oh, yeah. Yeah. I'm trying to look up what the what the actual extensions were. Chat GPT for Chrome with GTP five, Claude Sonnet, DeepSeek AI, and the other one was called AI Sidebar with DeepSeek, ChatGPT, Claude, and more.
Bronwen Aker:Sounds useful. So they're acting like they're an extension to implement the AI within your browser. And of course, once you log in, now you've given away the keys of the kingdom.
Brian Fehrman:Yeah. That's dirty.
Bronwen Aker:Attackers are sneaky and So
Derek Banks:are these extensions, were they actually like on the Chrome store? I guess they were.
Brian Fehrman:Apparently. Sounds like it. Yeah. That they got that they managed to push their way through. I mean, it'd be relatively easy to do, I would think, with these because I mean, what what are they doing?
Brian Fehrman:So they're integrating with with the AI tools, which, I mean, sure. Okay. And then they're just funneling over the data to, like, another another another site. I mean, we have probably tons of extensions that are collecting your data and sending it somewhere else. So I wouldn't think that would send off a lot of alarm bells unless people were, like, really digging through and looking to see, you know, what it's actually doing.
Brian Fehrman:But, I mean, it's not like it's, like, trying to pull out your secret like, your storage credentials or
Derek Banks:I was gonna say, anybody who's done web app testing in the last like eight or nine years, like when you go to a modern web app, the amount of tracking and and just scripts that run from third party resources is insane. And most of the web app tests that I did, you know, I spent half of the year, you know, doing web app tests last year for BHS. And I can't remember, like I I mean, it was, a consistent consistent finding for allowing third party scripts through CSP. I was actually able to take advantage of of one of them in a test. And so I just kinda feel like that maybe OpenAI and, I guess, DeepSeek folks, if you're listening, might wanna might wanna fix that on your site.
Derek Banks:I don't know. But yeah. And I guess for users, I mean, just be aware of what you're installing in your browser, especially if it's the browser that you use to do things like banking and Amazon and things of that nature.
Brian Fehrman:Yeah. Work tasks, whatever, whatever.
Derek Banks:Yeah, I have a browser for work and a browser for personal. And then I have a browser for when I don't want either of those browsers touching what I'm trying to go after on the internet.
Brian Fehrman:Yes.
Bronwen Aker:I do the same thing. And even on my personal system, I have different browsers for different personas. I mean, all the fine online banking stuff stays in one set of browsers and there's other stuff that goes to other browsers and never the twain shall meet.
Derek Banks:Yeah, I actually take it even a step further. I have a whole separate system that I do banking and financial stuff on that I don't really do anything else that touches the Internet so much. Like, I don't browse on it. I do some other things on it. Right?
Derek Banks:But, like, yeah, I I'm I guess I'm privileged in that way. I have enough systems where I can do that. But I mean, I I just again, you know, when you evaluate risk for a living, I guess those kinds of things just start to happen. I I think this is browser extensions.
Bronwen Aker:Yeah. It it I feel bad for, quote, normal people because they don't they don't know the stuff that we know. They don't know that just because it looks shiny that no, these are are mean, nasty, awful people who are looking to steal your entire retirement fund. It's it sucks.
Derek Banks:Yeah. I wonder how long it's gonna be though because like right now, my wife, she basically doesn't she has a work computer. Right? But at home, she doesn't really use a computer at all. She uses her phone for everything.
Derek Banks:Right? And so, I mean, I think most like normies kind of do it that way now. So, like, at some point, like, if you're using a computer or a laptop, you're kind of going over into the the danger zone, it seems like, you know? Because you can't install an extension on your mobile browser, can you? I've never actually tried.
Bronwen Aker:So I don't know.
Derek Banks:I actually don't know either. I've never tried.
Bronwen Aker:Never even considered that. Take the buyer beware approach when using any extensions. That's about all I can say. Alright. Ready for the the next and last story?
Derek Banks:Yep. The obligatory APT story.
Bronwen Aker:Of course. Russian APT twenty eight runs credential stealing campaign targeting energy and policy organizations. So this is another article from the hacker news. APT twenty eight, also known as Blue Delta, apparently launched a credential harvesting operation against Turkish energy and nuclear research agencies entities, a European think tank, and groups in North Macedonia and Uzbekistan. The group distributed phishing emails containing malicious Microsoft Office documents that embed AI generated spear phishing text, making the lures more consistent, convincing.
Bronwen Aker:Once a victim opens a doc, PowerShell loader contacts a c two server, downloads a credential dumping tool, and then exfiltrates the data. So am I the only one who feels like this is a replay of old news?
Derek Banks:Yeah. I mean, look, we do this here at Black Hills. Right? Like, we we are currently using open like, in-depth open source reconnaissance to then use LLMs to create things like this. Right?
Derek Banks:I mean, I'd be surprised if every threat actor wasn't doing it. Right? And it certainly, you know, especially, like, with language barriers. Right? So what do you what do you tell people now about emails?
Derek Banks:I mean, I I think personally, we have failed as a like, one of the biggest failures of the information security, world was to tell people to not click on links from people that you don't know. Don't click on that link in email. And then what does, you know, information security do? They send out links for you to click on. Right?
Derek Banks:And then they don't bother to tell you that probably you should be careful of links that come from people that you do know. Right? Because if their account got compromised, then you're probably gonna get a link from them, and then you click on it. And so I think it's always a little bit more nuanced. The advice I've given to my my children is if it comes from the Internet, even if it seems to be from somebody that you know, be very suspicious of it and try and verify it, especially if it's asking for passwords or money.
Derek Banks:Either one of those. Right? And so I I just yeah. This is this is another one of those things that is not going anywhere, and it's going down the road of, you know, like, crazy, realistic deep fakes on Teams, social engineering in the future. Right?
Brian Fehrman:Yep. Yeah. And I'm curious of the so they mentioned the malicious documents here. I'm curious if there's macros that they had built in or what and, you if it's in this case, you know, another another point too would be, I mean, macros are disabled by default at this point. And as an organization, you can shut it off so that users can't even re enable them.
Brian Fehrman:That's what I saw on a recent engagement I was on. I was just, like, company wide, like, literally just disabled. Like, you and, like, the option to re enable is just literally not there. And I know that feature has been around for, a while, but, like, it's more I think it's basically kind of more it's a little bit easier and more of the default, these days. So would for organizations, which suggests looking into that for your users.
Bronwen Aker:The only thing that truly that's AI related to this story is the fact that the, spear phishing text was AI generated. And I know we've talked before about how phishing campaigns have gotten more difficult to spot because of the AI generated text in the the phishing or smishing or vishing or whatever it is ishing that's going on. But other than that, it seems like the same old
Derek Banks:You know, to bring it back full circle to to n eight n because that's what I'm here for. I'm pretty sure you could create a workflow that would go off and automatically run, you know, some tools to gather open source reconnaissance on a given individual and then have a large language model craft potential spear phishing roosts. Sounds like a fun workflow to create.
Brian Fehrman:Oh, yeah. I like it.
Bronwen Aker:All right. So, well, those are the stories we have. The bottom line is the and I've said this so many times. The AI genie is out of the bottle, and we're still all learning as we go along. As CJ likes to say, this is the worst that AI will ever be going forward.
Bronwen Aker:Hopefully, it will get better. But it has definitely changed a lot of games. And just be cautious. Be careful. Don't just arbitrarily throw sensitive information into any chatbot.
Bronwen Aker:Pause. Any other closing remarks?
Derek Banks:I'll channel my inner jaw fire and say keep on prompting.
Brian Fehrman:Perfect. Alright. Well, thanks everyone. See you next time.