On Ahead of the Threat, Bryan Vorndran, assistant director of the FBI’s Cyber Division, and Jamil Farshchi—a strategic engagement advisor for the FBI who also works as Equifax’s executive vice president and chief information security officer—discuss emerging cyber threats and the enduring importance of cybersecurity fundamentals.
Featuring distinguished guests from the business world and government, Ahead of the Threat will confront some of the biggest questions in cyber: How will emerging technology impact corporate America? How can corporate boards be structured for cyber resilience? What does the FBI think about generative artificial intelligence?
Brett Leatherman, assistant director of the FBI’s Cyber Division: Welcome back to Ahead of the Threat. I’m Brett Leatherman, assistant director of the FBI’s Cyber Division. Later in this episode, I sit down with John Hammond. John is a senior security researcher at Huntress, one of the leading firms protecting industry. He’s also one of the most respected voices in the defender community, somebody who actually gets into the code, reverse engineers malware, and shows the world how bad guys operate.
We get into ransomware, what he’s seeing on the front lines, and what defenders can do right now to get ahead of it. You don’t want to miss that conversation coming up next.
But first, three stories that caught our attention this week. I want to introduce Mike Machtinger, the FBI’s deputy assistant director for the Cyber Intelligence Branch, to look through the news with me.
Mike, welcome to the show.
Mike Machtinger, deputy assistant director for the FBI’s Cyber Intelligence Branch: Brett, it’s great to be here. Thanks for having me on.
Leatherman: Great. Mike, what does the deputy assistant director for Cyber Intelligence for the FBI do?
Machtinger: Yeah. So, first of all, it’s an amazing job. Basically, I have the best job in the world …
Leatherman: And the best boss?
Machtinger: … and the best boss in the world; absolutely! So, the branch handles all intel functions on the cyber threat and all policy for the cyber threat. And what does that mean? It means we cover the gamut of threats, from cyber-criminal actors to nation-state actors, you know, China, Russia, DPRK [Democratic People’s Republic of Korea], Iran.
And we look at intel across the board. So, our analysts are doing tactical analysis that’s really investigative focused. They’re directly combating the threat side-by-side with special agents and computer scientists and all the other great Bureau personnel that we have. They’re also doing strategic analysis on all of these cyber threats, and that really allows us to be focused on understanding and illuminating where these threats are going next.
Helping you and the rest of FBI executive leadership make decisions about resourcing and making sure that we’re staying ahead of the threat, as it were.
Leatherman: Yeah. I think what’s unique, too, is that the FBI cyber teams work closely with industry. And we’ll be talking to John Hammond next, who is definitely a consumer of cyber threat intelligence. And so, that unique perspective that our cyber intelligence teams bring, it’s incredibly important to share that with industry, and your teams have a key role in doing that.
Machtinger: I think it’s probably the most critical relationship with industry across the intel programs in the FBI. I mean, having worked some of our other programs, counterterrorism, for instance, you know, there are those relationships and they are strong. But as we’ll see in today’s news items, and later on in the podcast in your interview, the FBI and our partners in law enforcement and the government working side-by-side with industry is really what gives us what we need to defeat these threats and to secure cyberspace for Americans.
Leatherman: Yup, that’s great. Speaking of nation-state threats, on Feb. 15, the Wall Street Journal published a story that puts a human face to a national security threat that our teams have been warning about for several years now. The article profiles a North Korean defector who described how the Kim regime identified him early on as a child prodigy, trained him in elite schools, and then sent him to China.
The goal there, along with about 10 other operatives crowded into a two-bedroom dormitory, working up to 16 hours a day, was to fake their way into remote IT jobs in the United States. It’s not a small operation. While he was crowded into that apartment with about 10 others, there were hundreds of these folks doing this.
According to U.S. and international partners, this can generate up to, I think it was, $800 million for the regime in one year alone. And then Google’s Mandiant estimates that these operatives have infiltrated hundreds of Fortune 500 companies here in the United States. The regime takes approximately 90% of the total revenue, leaving only 10% for the workers themselves, all in support of regime priorities, which, most concerning to us, include the nuclear weapons program.
So, Mike, we’re tracking this threat closely here in the FBI. Tell me a little bit about what our perspective is on that.
Machtinger: Yeah. So, I think that, you know, the article did a great job of bringing a human perspective to this, right? Looking at the individuals that the North Koreans are essentially using as cyber slave labor, to, you know, put forward their efforts. And you know, it definitely corroborates what we’ve been tracking for the last couple of years, which is, you know, the use of laptop farms, some witting, some unwitting individuals in the United States that are enabling this.
And, you know, you mentioned, I think, $800 million. When you look at the scope of the amount of money that these IT workers are generating for the regime, that’s one really scary aspect of it. I mean, I think there was even a point in the article that, you know, it only takes a handful of remote IT workers to pay for a missile, right?
And I think the article did a really good job of foot-stomping the impact there. But something else that really worries me beyond that is, how are these remote IT workers making their money? They’re infiltrating Fortune 500 companies, potentially into, you know, the defense industrial base.
They’re getting access to all kinds of information that could be used for espionage purposes, you know, for blackmail purposes; all kinds of other things that they could do. And I think, you know, they’re starting to realize that from a threat perspective.
And it really just highlights that there’s an insider threat issue here. I think the takeaway for me is more than just revenue generation, you know, what do these folks have access to, what can they exfiltrate back to DPRK or other nefarious players, be they criminal or nation-state?
And how could the access be weaponized? I mean, if you have ... you know, we talk about insider threat being employees of the company that sort of turn; what if the insider threat is an individual who never should have been there in the first place and isn’t who they purport to be?
And I think we’ve made a strong effort, along with our partners, to put out information to help hiring managers and executives prevent their companies from falling prey to this. And it’s hard to overstate how important that is. And, you know, some of that advice is: insist on video interviews, although there’s certainly technology that can help them get around that, right?
So, I think it’s important for folks to stay up to date on, you know, the newest releases that the FBI and our partners are putting out to help understand the tradecraft these folks are using. And then also just to understand that the revenue generation is a part of it, but the threat goes so much deeper than that.
Leatherman: Yeah, it’s the data extortion piece. I think we put out a piece in January of 2025 that talked about how, when the actors lose access, we have seen them pivot into data extortion. That becomes that insider threat that you talked about. And there are really two primary ways right now to combat this threat. Number one is law enforcement action.
In June of last year, the Justice Department announced coordinated actions by the FBI and Justice Department across 16 states, which included two indictments, an arrest, the searches of 29 laptop farms across the United States, and the seizure of 29 financial accounts, 21 fraudulent websites, and approximately 200 computers. I think that shows the scope of the problem when we talk about laptop farms.
It’s not that these actors are connecting to, you know, the companies here in the homeland from their own infrastructure; they are pivoting into U.S.-based infrastructure through these laptop farms in order to move into corporate networks undetected. And so that law enforcement piece is incredibly important: that we take that action, that we take down those laptop farms, we remove their ability to pivot in from U.S. infrastructure, and we detect them earlier.
But the second part of that is industry taking additional action to do that. So, you mentioned the idea of strengthening identity verification in the hiring process. What are other ways that organizations can better defend themselves from this threat?
Machtinger: Yeah, I mean, a couple of things. I think first, everyone just has to be on the lookout for this and understand, you know, how these actors may tip their hands. And that might be looking at logs for evidence of high-latency connections; it might be individuals requesting to be paid in cryptocurrency or, you know, other types of funds; individuals who refuse, even in an introductory way, to come for an in-person meeting or, you know, claim their cameras aren’t working; sort of the tells.
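The tells described here lend themselves to simple automated triage. As a minimal sketch, assuming illustrative field names like `rtt_ms` and `pay_method` rather than any real log or HR schema:

```python
# Sketch: flag remote-hire accounts showing the "tells" described above.
# Field names (rtt_ms, pay_method, declined_video) and the latency
# threshold are illustrative assumptions, not a real log format.

HIGH_LATENCY_MS = 300  # assumed threshold; tune to your own baseline

def flag_suspicious(records):
    """Return {username: set of reasons} for records matching one or more tells."""
    flagged = {}
    for r in records:
        reasons = []
        if r.get("rtt_ms", 0) > HIGH_LATENCY_MS:
            reasons.append("high-latency connection")
        if r.get("pay_method") == "cryptocurrency":
            reasons.append("crypto payout request")
        if r.get("declined_video"):
            reasons.append("refused video/in-person meeting")
        if reasons:
            flagged.setdefault(r["user"], set()).update(reasons)
    return flagged

if __name__ == "__main__":
    sample = [
        {"user": "dev01", "rtt_ms": 480, "pay_method": "cryptocurrency"},
        {"user": "dev02", "rtt_ms": 40, "declined_video": False},
        {"user": "dev03", "rtt_ms": 35, "declined_video": True},
    ]
    for user, reasons in sorted(flag_suspicious(sample).items()):
        print(user, "->", ", ".join(sorted(reasons)))
```

Any flagged account is a lead for a human reviewer, not a verdict; the point is simply to surface the combination of indicators in one place.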
And, you know, Brett, I think you really hit on something with the law enforcement action. And I think it’s important for everyone to know that in order for that to work, in order for us to, you know, impose costs on the DPRK and the actors doing this, we really need industry to report to us when they have a suspicion that they’ve hired one of these workers.
Because one thing that we have seen is, you know, the same DPRK operative may be working for multiple companies, right? So, if that individual is working for your company and you identify them and fire them, rightfully so, but don’t let law enforcement know, you know, it’s very possible that they’re victimizing multiple other companies at the same time.
Whereas if you share that information with us, we’re always going to respect it. We’re always going to be victim-focused and look for industry to share in a way that’s comfortable for them. That will really allow us to do the digging we need to do to identify them and find out if there are other victims who may not have caught them yet.
Leatherman: Yeah. I think when we talk about seizing over 200 laptops that facilitated this, it’s alerting us to the actors who are doing it, but also the folks who are allowing it to happen here in the United States and disrupting those laptop farms as well. So, all really good points.
Now the second one I wanted to talk about, which is really transitioning from a workforce attack to a hardware-based attack, is a directive, a binding operational directive released by CISA [Cybersecurity and Infrastructure Security Agency] this month.
So, on Feb. 5, CISA issued BOD 26-02, which ordered every federal civilian agency to find and remove end-of-life edge devices from their networks. We’re talking about routers, VPN [virtual private network] appliances, firewalls, switches; devices that have reached end-of-life and no longer get support but sit at the boundary of government networks. CISA and Nick Anderson over there called the threat substantial and consistent.
They confirmed awareness of widespread exploitation campaigns by APT [advanced persistent threat] actors, including those with ties to nation-states, targeting those devices. Here’s what matters: the FBI signed on to the fact sheet because we thought it was so important to get the information out there to industry, not just the government agencies, and so did the UK’s National Cyber Security Centre.
So, Mike, talk end-of-life devices. Why is it so important to pivot beyond just a government mandate? Why should industry look to understand this and try to apply it to their environments as well?
Machtinger: Yeah, Brett, that’s a great question. And first of all, kudos to CISA for putting out this directive. I mean, this is something that is absolutely necessary in the threat environment that we’re in. And I think, you know, as we lay out in the fact sheet, having an end-of-life device on the edge of your network is almost like putting the key to your house, you know, under the doormat.
It’s easy to find; actors can scan and find it there. And by the very definition, the flaws are never going to be fixed, right? Every new flaw discovered on an end-of-life device is a zero day that will always be effective and will never be patched, almost by definition, right?
And I’m glad you mentioned nation-state actors, too, because when we think about the sophisticated APTs out there and all the capabilities that they bring to bear, you know, at the end of the day, what we see time and time again is that the way they get into networks is through unpatched legacy hardware that’s just sitting there, sort of waiting to be taken advantage of, right?
And you know, that’s true for the government certainly, but it’s also true for industry. And again, in the FBI, we’ve just seen it time and time again: it’s end-of-life devices that aren’t patched, you know, CVEs [common vulnerabilities and exposures] that are out there and known but haven’t been patched.
And then, of course, there’s the human element of social engineering. But the steps in this directive, you know, are pretty broad: starting with identifying devices on the list that are on your networks and taking them offline, and moving to developing an automated system to track the end-of-life of devices as time goes on, right?
If every organization were to implement this in the way that CISA has directed the federal government to do it, I think we’d see a huge reduction in the attack surface that’s available for every kind of adversary, from criminal to nation-state.
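The identify-then-retire workflow just described can be sketched in a few lines. This is a minimal illustration, assuming a made-up inventory shape where each device record carries a `name` and an `end_of_support` date:

```python
# Sketch: sort an edge-device inventory into "retire now" (already past
# end-of-support) and "plan retirement" (end-of-support within a horizon).
# Device names and dates below are invented for illustration.

from datetime import date, timedelta

def triage_inventory(inventory, today, horizon_days=365):
    """Split devices into retire-now and plan-retirement buckets."""
    retire_now, plan_retirement = [], []
    for device in inventory:
        eol = device["end_of_support"]
        if eol <= today:
            retire_now.append(device["name"])
        elif eol <= today + timedelta(days=horizon_days):
            plan_retirement.append(device["name"])
    return retire_now, plan_retirement

if __name__ == "__main__":
    inventory = [
        {"name": "edge-vpn-01", "end_of_support": date(2023, 6, 30)},
        {"name": "fw-02", "end_of_support": date(2026, 9, 1)},
        {"name": "core-sw-03", "end_of_support": date(2030, 1, 1)},
    ]
    now, soon = triage_inventory(inventory, today=date(2026, 2, 20))
    print("retire now:", now)        # already past end-of-support
    print("plan retirement:", soon)  # reaches end-of-support within a year
```

In practice the inventory would come from an asset-management system and the end-of-support dates from vendor notices, but the triage logic is this simple once those inputs exist.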
Leatherman: Yeah, I think it’s ... you hit the nail on the head. I think it’s about starting to inventory those devices. Like, we don’t have to do everything at once. CISA gives organizations a time period to identify them, come up with a retirement plan, and then remove the devices from the environment. Organizations can look at compensating controls in the meantime to better protect those devices and what they connect to.
But it’s taking that first step that really matters. And that’s so important here; that’s where CISA is trying to get out ahead of this and say, “Time to start taking that first step.” So that’s your own hardware. Now let’s pivot into something a little bit different: when your own device becomes somebody else’s device.
And that device is used against you, whether you know it or not. That relates to Google’s announcement that in January their threat intelligence group, working along with Cloudflare, Lumen, Spur, and others, took legal and technical disruption action against one of the world’s largest residential proxy networks, called IPIDEA. Our Season Two, Episode One guest John Hultquist was one of those at Google who actually talked about their work on this front.
Here’s what a residential proxy does. It routes malicious traffic through ordinary home internet connections: your smart TV, a phone, a tablet, an IoT [Internet of Things] device on your network. So that when an adversary from outside the United States wants greater success in targeting U.S.-based organizations, they can hit those organizations from a home in suburban Ohio as opposed to someplace in a foreign country.
To the defender, it’s not a hacking group sitting somewhere like Beijing; it’s somebody in Beijing operating from our IP addresses here. So how was this proxy network built? Well, they paid app developers. This group paid app developers to embed code within legitimate software applications that users downloaded to their phones. And in addition to getting a working software package, a calculator or whatever the application may do, users also got code that allowed their device to be used to attack other devices.
They also ran free VPN apps that actually worked as VPNs but introduced that same code. So, Google says that in a single week, they observed more than 550 distinct threat actor groups using this residential proxy network, including actors from China, North Korea, Iran, and Russia. They identified those and decided it was time to take action.
So, from an intelligence perspective, what does a network like that using a proxy network like that mean, whether it’s a nation-state or a criminal group? What does that mean for us here in the FBI?
Machtinger: Yeah. So, without any question, Brett, it makes it harder for us to find and stop them, right? And to stop the malicious activity. And you know, I love seeing this article; I love seeing this action from Google. This is how we win in the cyber realm, right? We as the FBI and our partners in law enforcement and government, and then, you know, companies like Google and some of our other close industry partners, with legal takedowns and technical severing of the actor’s capability.
I mean, I think you’re looking at somewhere around a 50-70% degradation in the number of proxies that they could pull from, which is a huge amount. And will there be reconstitution? Of course there’ll be reconstitution. But, you know, we see the same thing on the law enforcement side as well. And I think, you know, kudos to them.
They made a huge dent here. And it’s really a big win for everybody, because you had, you know, unwitting victims here whose internet bandwidth was being weaponized. And, as you said, in a small time slice, every manner of APT and criminal actor out there was getting in on a piece of this action, making it that much harder for us, and for other net defenders, to identify who they were.
So, this is a big deal. And again, you know, I applaud Google for this; it’s something that has certainly had a huge impact in degrading the ability of the malicious actors to get what they want to do done.
Leatherman: Yeah. It was great to see. Sometimes we have an opportunity to do these disruptions with industry, and sometimes we do them on our own with international partners. But other times, we see folks like Google engage in this kind of action, which really imposes cost on bad actors by removing that capability, and that brings real relief to victims.
Machtinger: Yeah. One thing also, Brett, that I think is important to point out is, they also did some ecosystem cleaning when they were done. Right, they went through their application store and searched for these SDKs [software development kits] that you mentioned, the sort of malicious software that was being used to weaponize customers’ bandwidth.
And they went through and cleared that out of the ecosystem. And again, like, I’m sure that there will be some elements of reconstitution, but they’ve really done a full-scale takedown here, that should have a lasting impact.
Leatherman: Yeah. They removed some of those applications that had a legitimate use but still had that code within it from the app store, which is incredibly important.
So, Mike, thanks for being here and thanks for everything that your teams do to defend the homeland through your engagement with industry, your engagement with the intelligence community, your engagement with critical infrastructure, and really all the efforts to inform our work.
Machtinger: Thanks, Brett. Happy to be here. And we’re really proud to be a part of this fight with industry and our other government partners.
Leatherman: Great. Well, folks, three different attack surfaces today: your workforce, your hardware, and the hidden infrastructure adversaries use to mask their operations. All of them exploitable, many of them fixable. And that’s what Operation Winter SHIELD is about: closing the gaps before the adversary finds them. You can find all 10 recommended defenses and the supporting resources at fbi.gov/wintershield.
Now I’m looking forward to the conversation with my next guest, John Hammond, who brings a practical perspective to the current and emerging cyber threat environment. John Hammond is next.
___________________________
Leatherman: Welcome back to Ahead of the Threat. Today’s guest is John Hammond, senior principal security researcher at Huntress.
John discovered his passion for cyber security as an undergraduate at the U.S. Coast Guard Academy. He served as an instructor and curriculum developer at the DOD [Department of Defense] Cyber Crime Center’s Cyber Training Academy, and he operated as a red team cyber operator at the Defense Threat Reduction Agency.
At Huntress, he’s on the front lines of endpoint security for companies of all sizes. John is also one of the most recognized voices in cyber security education. He’s built a YouTube community of over 2 million subscribers, breaking down real-world threats and attacker tradecraft. And he’s one of the lead voices on the “Huntress Tradecraft Tuesday” series, where he and his colleagues do live analysis of the techniques adversaries are using right now.
He doesn’t just research threats: he makes them understandable. John, welcome to the show.
John Hammond, principal security researcher at Huntress: Hi there, Brett. Thank you for having me here.
Leatherman: Glad to have you. I love how you came to the cyber discipline. Can you share the origin of your journey with the audience?
Hammond: Well, I’ll admit I felt like I kind of grew up, as any kid, thinking, “Oh, I want to make video games.” Or, “I want to be like a hacker I see in the movies when they make it all Hollywood glorified with, like a projector pointed at them and all.” It’s silly, but I thought, “Hey, how do I jump into the cyber security scene?”
I remember literally googling, “How do I become a hacker?” And I found the old, free and open-source software sort of blogs from some of the visionaries there that said, “Well, you’ve got to learn Linux, you’ve got to learn some programming chops with the Python scripting language or something.” Really, it was kind of just about making things.
Can I build something, write some software first? But when I found myself at the Coast Guard Academy, one of the military institutions, the conversation really began: “Okay, it’s great that you can make something, but can anyone break that something?” And that just really opened the floodgates for a whole lot of cybersecurity conversations, vulnerabilities, and exploits, the whole world of it.
Leatherman: Well, that was back when you actually cracked a book, right? I remember, as a young kid learning cyber and coding, cracking my first book about using BASIC or Visual Basic and starting to read about everything. And, as you began to troubleshoot code, there was no, you know, artificial intelligence to help with that. So that was a different time for all of us.
Hammond: Yeah, I know the whole artificial intelligence and AI can of worms is certainly something we could dive into, but there’s a whole wide world of cybersecurity.
Leatherman: Yeah. That’s great. Well, thanks for joining us.
Today’s conversation really is part of Operation Winter SHIELD, the FBI’s campaign built around 10 key defenses that can help organizations close the gaps that adversaries are currently exploiting. We developed these with our domestic and international partners, not really as a wish list, but based on what we’ve seen adversaries exploiting in the wild every day.
The FBI does incident response from a law enforcement standpoint 365 days a year. These are, in 99% of the cases or more, the ways we see actors getting into environments. So, we’re really focusing on three of those defenses today: tracking and retiring end-of-life technology, implementing risk-based vulnerability management programs, and adopting phish-resistant authentication.
And John, your perspective as … is exactly what we want organizations to take in, based on the gaps that you see as part of your incident response. So, I guess let’s go to last May, when the FBI dismantled a criminal proxy network in an operation we called Moonlander. Four total operators: three of them were Russian nationals, one was a Kazakhstani national.
They’d been charged as a result of running the scheme for more than 20 years. They infected thousands of end-of-life wireless routers, devices that manufacturers had stopped providing updates for years earlier, and exploited them with malware. And then the actors actually sold access to those devices as proxy services. Essentially, they were renting out other people’s routers to give hackers the ability to use those devices to hack others, to the tune of $46 million, I think, in revenue generation.
You’re on the front lines with organizations that still have these kinds of devices on their networks, and you respond to organizations where you find these end-of-life devices. Can you walk us through what your team actually sees when you respond to a breach like this? What do those conversations look like with the organizations that you respond to?
Hammond: Yeah, I can tie this together a bit because, I’ll admit, I think my upbringing was more of, oh, can I try that pen testing or red teaming, trying to emulate the adversary to find these flaws and weaknesses and gaps? And it wasn’t until I arrived at Huntress that I became much more of that malware and incident response person, doing that threat hunting in organizations that need it.
But to your point, hey, too often we still see some organizations that just have those old legacy, end-of-life deprecated systems that should have been brought out of commission but never really were. Ultimately, it’s just because the stakeholders and decision makers like didn’t know or they didn’t realize, “Oh, that server is still sitting in the closet and it’s still on. It’s still exposed to the whole world.”
And they left open some ports, left some software that hasn’t been patched, hasn’t been brought to the latest version and, to your point, is completely end-of-life; they aren’t even offering updates for it anymore. That visibility has always been the biggest thing that we need to bring to the organization: “Hey, this is still out there, it’s still in the wind, and that’s going to leave an open door for threat actors and adversaries.”
And to your point about that whole operation, the crew saying, “Look, let’s give out these routers and allow other threat actors to just use and abuse them.” That’s probably now the easiest, really commoditized entry point for bad actors, because they can just get something off the shelf and that’s immediate access.
And then they can do even more with the vulnerabilities in other assets and other parts of your stack. It’s really that application and asset inventory that teams don’t have, but absolutely need, to really know your own environment so you can protect and defend it.
Leatherman: Yeah, that’s a great point. I think the biggest question I get from small-medium businesses is, “Why would the actors target me? I’m an SMB [small-medium business]. I’m not visible to a lot of folks. Like they’re going to attack the Fortune 100s or the Fortune 500 companies.”
But if you have infrastructure exposed to the internet, you’re now visible. You’re now a target, right? And especially if you have end-of-life devices, the adversary increasingly knows how to scan the IP space, or the internet, to try to find those devices.
And I would argue, IP space in the United States is incredibly valuable to foreign adversaries, because now you can pivot from an IP address in the United States, which is trusted here in the United States versus coming in from Moscow or Tehran or Pyongyang, where you’re probably going to have an uphill battle.
Hammond: Yeah, there are a lot of things to draw on there. I think, just as you mentioned, hey, anyone can be compromised. And even if they’re trying to shrug their shoulders and say, “Look, why would they go after me?” I think we’ve talked for years about the sort of “assume compromise” mentality of, like, it’s not a matter of if, but when. There will be a breach, there will be an incident, there will be something to triage and investigate.
And if you’re trying to say, look, “I’m a small fish in a big pond,” ultimately, the adversaries want the data, they want your access. They want even just the infrastructure, the servers that they could use. And then they could, sure, sell that access on the dark web. Sure, they could make a quick buck off the data breach information that they’re selling.
Really, the motivator is money. And no matter how small or how inconsequential you might think you are, there is still value there for a bad actor.
Leatherman: Yeah, I agree. The value’s there in that they can exfil data that’s valuable too, on PII [personally identifiable information], PCI [payment card industry], PHI [protected health information], whatever that may be, or sensitive business information, intellectual property.
We’ve also tracked the targeting of end-of-life devices through some state-sponsored campaigns. Volt Typhoon is one example: that is, the PRC’s [People’s Republic of China] ability to compromise end-of-life devices, or devices that have known vulnerabilities, and kind of create an opportunity within the U.S. to pivot into critical infrastructure, in that case to potentially impact the U.S. ability to project force in the Indo-Pacific region.
So, now we’ve got nation-state actors looking to do that on U.S. infrastructure and it doesn’t matter if you’re a small mom-and-pop shop or you’re a Fortune 100 company. That infrastructure, I think, is valuable to them.
Hammond: That’s always what opens the door, in all reality. Yeah, sure, the end-of-life, deprecated server in a closet; if that is the foothold, that’s the initial access vector, whether it be, yes, a mom-and-pop shop or critical infrastructure, that’s across the board. That is just one of the ways the adversary is going to get inside the environment.
And then they could do even more to move laterally or start to do some post-exploitation to get the information and sensitive data that they want. Or why not just keep leveraging that infrastructure? You were talking about United States IP addresses; okay, that’s residential, maybe sometimes proxies, that they could basically use for more attacks, more campaigns. It opens the door for ransomware, opens the door for business email compromise.
Really the laundry list starts just when they get that access.
Leatherman: That’s exactly right. And I think, you know, CISA, the Cybersecurity and Infrastructure Security Agency, has recognized this problem. And just a couple weeks ago, Feb. 5, they issued Binding Operational Directive 26-02, requiring every federal civilian agency to inventory and remove end-of-support edge devices from their networks. That’s a priority, right, for the U.S. government.
The directive gives agencies 90 days to inventory, 12 months to decommission devices already past the end of support metric, and then 18 months to replace everything else. And they really didn’t mince words when they called it the threat; when they said that the threat from these devices represented a substantial and constant risk to federal government agencies.
Now, that is not binding on industry. But I always think that, you know, directives such as that, where you see a mandate from a government agency, could also apply, should probably also apply to industry and maybe even tracking that along the same time frames the 90-day, the 12-month decommission, the 18-month rip and replace could be applicable. Do you see it that way, or is it not quite that straightforward?
Hammond: No, I tend to agree, honestly. And maybe I’m—I don’t want to say a blind optimist, but I am an optimist, in that thinking, look, setting a deadline, setting something to really, really enforce the, “Hey, we’ve got to get our act together with all this, with these end-of-life devices, with these vulnerabilities that we know are getting beat up.”
I’m a big fan of CISA, especially their KEV, the Known Exploited Vulnerabilities catalog. Honestly, for a lot of the work I’ve been doing just this past week, we’ve really been tracking that for some of the recent intrusions and incidents we saw. But that is still the standard.
And like, okay, we are actively seeing across the industry, across the cybersecurity landscape, these are the weaknesses, these are the flaws that are getting beat up and are again an entry point, initial access vector that a threat actor could use.
So, we’ve really got to set, sure, 90 days, 120 days, whatever timeline. As long as it happens, as long as we get the zero to one, one to zero, like flip the switch and that thing is cleaned up, it’s not going to be a continued risk and a continued threat to your org and every organization.
Leatherman: Yeah, it’s a … that’s a great point. What would you—like, if you were to advise somebody listening to the podcast today, if they were to say, “Hey, I know I’ve got end-of-life devices sitting in my environment,” or, probably worse yet, “I don’t know if I have any end-of-life devices in my environment.” What would be the top step or top couple steps that you would take next week or next month to start to understand what your risk is in this space?
Hammond: I think the latter end of your statement is really the most important part. It’s, “Hey, sometimes I just don’t know what I don’t know.” So, you kind of have to hunker down and build out that asset inventory, build out that application list of what is where, what are the IP addresses, what’s the hostname, what’s the barcode, serial number. If you have to go that deep.
Just know your environment. Know what all these machines and endpoints are. And it, I’ll admit, goes even further beyond that because sure, we’ve got all the blinky boxes. But what about your online attack surface? What about the solutions you’re using for an online Entra ID cloud environment? Your identity provider?
What are all those software solutions where you also have some of your attack surface presence? I know that’s a big mountain of a task for folks that just don’t have all that visibility, but that’s just absolutely 1,000% where you have to start.
Leatherman: And I think that’s a good point. So, you start where you can start to bucket these, right? Here’s your on-premises devices. Here’s your cloud-based infrastructure. And really starting to say, “What are the hardware models that I have? What are the operating systems or the firmware running on these models? What’s the IP space, you know, you mentioned, that it runs on?” And then start to track:
“Is it end-of-life? Is it not? When does it become end-of-life?” And then really assigning ownership to that is key. And so, you know, incremental progress matters. First, giving yourself that visibility is the initial step. And then starting to come up with a replacement, isolation, containment, or, you know, kind of rip-and-replace time frame is the next step.
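The inventory-and-track approach described here can be sketched in a few lines of code. This is only a minimal illustration: the device list, field names, and end-of-support dates below are made up, and in practice this data would come from your asset database or a vendor lifecycle feed.

```python
from datetime import date

# Illustrative inventory; real data would come from your asset management system.
inventory = [
    {"host": "edge-fw-01", "model": "VPN-2000", "end_of_support": date(2023, 9, 30)},
    {"host": "branch-rtr-07", "model": "RTR-550", "end_of_support": date(2026, 6, 1)},
    {"host": "legacy-srv-02", "model": "NAS-100", "end_of_support": date(2020, 5, 1)},
]

def triage(devices, today):
    """Bucket devices: past end-of-support (decommission now) vs. within a year (plan)."""
    past = [d for d in devices if d["end_of_support"] < today]
    horizon = today.replace(year=today.year + 1)
    upcoming = [d for d in devices if today <= d["end_of_support"] <= horizon]
    return past, upcoming

past, upcoming = triage(inventory, date(2026, 2, 20))
for d in past:
    print(f"PAST EOL: {d['host']} ({d['model']}) since {d['end_of_support']}")
```

The point is less the code than the discipline: once each device carries an owner and an end-of-support date, the "is it end-of-life yet?" question becomes a query you can run every week instead of a surprise during incident response.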
Hammond: That’s been one of the biggest things that I feel like I’ve learned, because I know I’m a nerd and a geek and I want to be technical. I want to be on the keyboard. I want to keep solving technical problems. But I keep seeing, more often than not, hey, sometimes it’s a people problem.
Leatherman: Yeah.
Hammond: Yeah, it’s a process problem.
Leatherman: Yeah.
Hammond: It’s a silly, I don’t want to say paperwork problem, but really, genuinely. Do we have documentation of what’s where and when and how? Do we know the checklist of what we do when something hits the fan? Do we have all our ducks in a row? Do we know who to call? Do we have a breach coach?
Those are all the things that you want to prepare ahead of time. I know it’s a lot of work, but the incremental progress is exactly right. Can’t be boiling the ocean, but you’ve got to put the time to it for sure.
Leatherman: But let’s start somewhere. Yep. Now you mentioned the KEV, CISA’s KEV. That’s an important point. That kind of takes me to the next topic, which is, you know, we have those devices that are end-of-life. They don’t get updates. But we all know code eventually becomes vulnerable. And there are still devices that are being supported by manufacturers.
I can’t tell you the number of cases that the FBI has responded to where there was the exploitation of a device, a vulnerability within a device that was disclosed weeks, months, or sometimes years before it happened, meaning there was a solution out there just not applied.
The FBI and CISA issued an updated advisory on Akira ransomware last November. Akira emerged in March of 2023, roughly. It’s collected approximately $244 million in ransom payments, primarily targeting small and medium businesses in manufacturing, education, and healthcare. Exactly the organizations that you often work with.
And here’s what makes that relevant to our conversation. Akira’s preferred method of getting in is exploiting known vulnerabilities in VPN appliances, the devices that let employees connect to their company network remotely.
Several models specifically, especially when multifactor authentication wasn’t enabled on those devices for remote access. One of the vulnerabilities tracked in our disclosures was patched in May of 2020. That’s a five-year-old, almost six-year-old fix, and incident responders are still finding these vulnerabilities out there.
You know, how do we start to approach this idea of managing vulnerabilities in the patch management process? Because it’s not easy. Every year we continue to see an increased number of patches rolling out. You guys, again, respond to these incidents. And what are your recommendations when you engage these companies and how they start to apply patches?
Do they look at the perimeter first? Do they look at those edge devices? Do they look at VPN appliances? Where do you kind of start with that?
Hammond: Let me say I know this is a hard problem to solve because it is so frequent and so constant. Just as you mentioned, all the kind of things that we need to be tracking. That’s a lot. So, this is where I want to go back to the start. But I mean, hey, that’s our first base of that asset and application inventory list.
Well, you can use that as your map. That’s the compass. Sure, okay, let’s think about what’s externally facing. Let’s get to those edge devices and let’s see what’s that software, what’s actually pertinent there. That’s where you need to keep your ear to the ground for the advisories, for the knowledge base articles, for the write-ups, and just alerts: new CVEs that hit the streets, new vulnerabilities that are added to the Known Exploited Vulnerabilities catalog, or even just the provider, the vendor themselves, sharing information.
That’s a lot. And that list continues to grow, but that’s something that you really do have to stay on the heartbeat and the pulse of: what are these changes across everything that’s in your stack? That’s how you don’t miss those bugs and weaknesses that might be 5 or 6 years old, because you’re staying with it, because you’re on the ball.
That’s a lot to do. But it has to be done, especially as you’re growing and building out more and more of your ecosystem, your infrastructure there. It really is just a matter of, okay, what are the big things that are the most critical priority when there is an external kind of exposure, and then the things that are high in severity, the 9.8, 9.9 CVSS [Common Vulnerability Scoring System] scores that say this is a one-shot, immediate remote code execution.
I know that’s what everyone’s going to be screaming and shouting about, but for good reason. Because Akira, as you mentioned, one of the ransomware groups, and Clop and others, they’re going to be already on the pulse trying to see when a new n-day or zero-day is going to drop that they could then arm and use for an attack of opportunity, just because it’s an easy one-shot foot in the door.
That’s what you have to stay on top of. Or hey, work with the teams that you already have for your security providers for getting some managed security operations that are in place so that you’ve got extra help and an arm in the fight and you can keep up.
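The KEV-driven triage Hammond describes can be sketched as a short cross-reference between your software stack and the catalog. A hedged illustration: CISA publishes the KEV catalog as JSON, and the entries below are made-up stand-ins using KEV-style field names, with ransomware-linked entries sorted first.

```python
# Made-up sample entries shaped like CISA KEV catalog records (the real
# catalog is a JSON feed with fields like cveID, vendorProject, product,
# and knownRansomwareCampaignUse).
kev_entries = [
    {"cveID": "CVE-2024-0001", "vendorProject": "ExampleVendor",
     "product": "EdgeVPN", "knownRansomwareCampaignUse": "Known"},
    {"cveID": "CVE-2024-0002", "vendorProject": "OtherVendor",
     "product": "MailGateway", "knownRansomwareCampaignUse": "Unknown"},
]

# Illustrative asset list: (vendor, product) pairs from your inventory.
my_stack = {("ExampleVendor", "EdgeVPN"), ("ThirdVendor", "FileShare")}

def kev_hits(entries, stack):
    """Return KEV entries matching our stack, ransomware-linked entries first."""
    hits = [e for e in entries if (e["vendorProject"], e["product"]) in stack]
    return sorted(hits, key=lambda e: e["knownRansomwareCampaignUse"] != "Known")

for e in kev_hits(kev_entries, my_stack):
    print(e["cveID"], "- prioritize patching")
```

Run against the real feed and a real inventory, a loop like this turns "stay on the pulse" into a daily check: anything your stack shares with the KEV list, especially with known ransomware use, goes to the top of the patch queue.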
Leatherman: Yeah, you can. And I think what’s interesting is that the bad actors are really doing a good job of racing against the defenders to exploit these vulnerabilities. I mean, we have to have the KEV, we have to have a disclosure program so that folks know when there’s a vulnerability and they can patch it.
But the problem is, the actors are getting really good at leveraging AI and other technologies to try to find those vulnerabilities before the masses do and then race to exploit those in as many devices as they can.
Is there a way that defenders can start to employ similar techniques to defend against exploitation of these vulnerabilities a lot quicker than what we’re doing right now? Artificial intelligence or anything else?
Hammond: Yeah, I knew we were going to fall down the AI rabbit hole.
Leatherman: I heard we have to at some point.
Hammond: … open that can of worms. But I think you’re right. I mean, it’s obvious, there’s no doubt. Sure, adversaries are using AI and artificial intelligence to speed up all the parts of their attack. All the components of the kill chain, right? That is what they could use to make not just one phishing email, but 10,000 phishing emails. They could implement whatever new proof-of-concept or weaponized attack script to take advantage of a recent vulnerability. Like, without a doubt. I feel like we know.
But your real question there is, “Can we defenders do that, too?” Yeah, I don’t see why not. I think finding the right place for it is still what we’re trying to figure out. Whether it’s something that means, oh, an assisted security operations center so that analysts can triage and investigate faster.
Yes, sure. Maybe that gets you there. I don’t know if it’s always a 1,000% silver bullet, but that is going to supplement our work. But what if you used it to kind of help in your asset inventory list, just to get a baseline of what’s normal for your environment? Can we find the anomalies much faster? Yeah. Kick that to an LLM [large language model], and AI can really, really help speed-run and super-tune what you’re up to.
I’ll admit, and I’m curious, even your take: Where do we put it? How does it fit in the picture? But I think we do have to, if just to keep pace in the race. Like, if we want to go toe-to-toe with the adversary.
They’re using these new innovations. So, we should start to just as well.
Leatherman: Yeah. It’s going to be harder and harder to catch up if we don’t start to leverage those. I’m of the opinion that the commercial AI platforms here in the United States, the frontier AI platforms, offer tremendous value in taking the right first step forward. I said this, I think, on our last episode: you don’t have to deploy artificial intelligence across your entire ecosystem to get started; you can focus on the key perimeter devices and start to monitor those.
A lot of these platforms are even offering now agentic capability just on their pro plans. That would allow you to start looking at the priority users, who would have the biggest blast radius if impacted or the priority vulnerabilities. And I think, you know, the KEV in particular, demonstrates where we’re going with the vulnerability process.
It’s now crossed over 1,480 total vulnerability entries. It grew by 20% last year alone, including 24 vulnerabilities directly linked to ransomware campaigns. So, for all of us that’s what we call a clue in law enforcement, which means we’ve got to focus more and more on what these critical vulnerabilities are.
I do think what you mentioned, leveraging the CVSS, can be part of that assessment. Like how impactful is this? A 9.8 is a lot different than a 2.5, right?
Hammond: Without a doubt.
Leatherman: Yeah. ... So, are there automated tools outside of artificial intelligence? I know you’ve talked before, on other podcasts, about free and open-source software that can help you understand what versions of software you’re running. Are there tools that make sense for folks to consider? I guess free and open-source is one way to go, as opposed to leveraging a commercial solution if it’s not within your budget.
Hammond: Yeah, I’ll dance with that, I guess without naming names. Right. But I think it is a matter of knowing that, hey, the internet is open and I’m not saying that it’s a bad thing. I’m saying that as look, there are resources and knowledge that’s out there that if you or anyone listening in wants to take advantage of, you’ve got everything in your power to do that.
I know I’m silly. I share a whole lot of YouTube videos, and I try to get public education info out and about, but that is so that there are more resources and more exposure on, yeah, some of these open-source solutions being just as good, maybe sometimes, if not better. If you’re able to really tune it, if you’re really able to play with it, and you can invest some of the time to build out new capabilities for yourself, even in your home lab. And not everyone is technical, not everyone is at that point where, oh, they want to spend that time or feel like they should.
But at the end of the day, in my mind, it’s education, it’s the awareness, it’s the knowing what threats are out there, and knowing what they look like and how you can combat them. There’s a lot to talk about in the realm of phishing and security awareness training. We could fall down that rabbit hole just as well.
But I got to be honest, I think for the layman, for the everyday individual, mom, dad, sons and daughters, brothers and sisters, like, that is the real value of bringing some of the information and bubbling it up so that everyone is good at cybersecurity, right? Not just the nerds and geeks.
Leatherman: Yeah, that’s a great point. And I think that also applies to business executives, right? So, in small-medium businesses, often the CEO is also the network defender. And so, leveraging commercial AI platforms to learn more about some of the things we’re talking about today can really bring clarity. Asking it to help you understand what those threats mean, what it means to have a vulnerability management program, even plugging in the types of devices you have and helping ... the AI helping you to understand what mitigations you might put in place. Like that’s, I think, a meaningful start.
But going back to kind of the board of directors, the C-Suite executives: often, cyber risk is left to the network security defenders, the CISO and the IT teams. We’ve kind of gone beyond the era where cyber risk is defined just there, because cyber risk is business risk. In fact, I would argue, from my perspective as the head of Cyber Division for the FBI, where we do this work and respond to incidents, that those organizations where you see buy-in from executives to understand, invest in, and appropriately resource their IT teams and mitigate risk tend to be more resilient.
And that is because they take an all-of-company approach to mitigating that risk. Is that something you see, as you do incident response, greater resilience there? Do we have a lot of room still to go in helping boards and executives to become aware of what that risk is? And how do we start to close that gap?
Hammond: Yeah, I agree in that too often I think cybersecurity is kind of thought ... maybe an afterthought, maybe it’s a cost center. Maybe you’ve got some team members and employees that just don’t have the right wherewithal. But to your point of this being an all-company-effort initiative, that I think just really rings true.
Something that we do internally is kind of just have a certain culture of what cybersecurity is, what’s good, what’s bad, and honestly even kind of point and laugh when we see something that is, wow, so far in the wrong direction that we all know it, and we share it in Slack. You know, we’re kind of a remote company. But say someone got a silly phishing email, some egregious, obvious fake, maybe a bad business email compromise attempt, impersonating a CEO or acting like an executive.
And those are the kind of things where we just kind of all can say, “Wow, yeah, goodness, I’m glad I didn’t fall for that.” But the thing is, someone might have. Someone could have. So, still kind of being all in on this, letting all the team members, all employees, everyone really get a quick cue and a good understanding of like, wow, this was a fail.
And we’re laughing at threat actors that make silly mistakes, but that does still spread education in a cool sort of cultural way.
Leatherman: Yeah. And I think the other area we can look at is, you know, a lot of companies compete with each other; we get that, across sectors. But the same actors that target companies are often targeting other companies in that sector. It can be academia, because they use similar software. It’s definitely happening in health care, where you have the same, you know, health platforms, patient record platforms, that have a vulnerability.
So, you start to see exploitation across the sector. Having really robust relationships with your sector partners, at all levels, I think, would be incredibly important, because you can share the trends in targeting. You can share cyberthreat intelligence with each other. And I think we have some room to work there as well: information sharing across sectors where we could be more intentional about, you know, again, disclosing vulnerabilities we learn about, going to our peer CISOs, going to our peer chief information officers, and sharing vulnerabilities, or frankly, targeting of our infrastructure, whether it was successful or not, so that we can help defend patients in a hospital system the next county over. We probably have some work to do there as well.
Hammond: I am, again, an optimist. I think the info sharing is, yes, absolutely where we can improve. Heck, a lot of our partnerships, public sector, private sector, kind of all together. I don’t want to sound like a broken record, but I realize, like, we’re in this fight together. Yeah, that takes each and every one of us. So being able to share some of those indicators of compromise, being able to say, oh, see, this is what the intrusion, the post-exploitation commands and activity, really looks like.
That helps everyone understand, okay. Even before a breach happens or before the damage is done, this is what we need to board up the windows and lock all the doors for. Or the folks that are unfortunately at the other end of it, and something has already occurred, they know the artifacts. They know what to look for. They can hunt for this. But that’s across all companies, not just, oh, sitting in a silo.
Leatherman: Yeah. I think you brought up a great point here, which is, you know, it’s not just about industry-to-industry sharing. It’s about industry-to-government and government-to-industry sharing. I tend to talk about this in the framework of an all-of-society approach meant to disrupt, deter, and contest the adversary both defensively and offensively because the adversary takes an all-of-society approach to target us.
We’ve demonstrated through our joint cybersecurity advisories that the PRC leverages industry to target us. There’s no doubt that Russia aligns itself with pro-Russia hackers to target us. And so, these symbiotic relationships between industry, hackers, and foreign nation-states compel us to work together across the board to defend national security, critical infrastructure, intellectual property, all of that.
And I think it’s important that we continue to be intentional about that. Have you had a chance to advise clients who’ve worked with the FBI in the past? We’ve got 56 field offices located around the United States. We have 23 cyber assistant legal attachés. Our goal is really to bring threat intelligence to bear that only we have when an organization gets breached, and to share that in a way that prioritizes containment, mitigation, and remediation.
What’s your perspective on that front with engaging law enforcement during ... before, during, or after a breach?
Hammond: Well, I won’t bring up any, I don’t know, PTSD or trauma for folks, but I think, yeah, one of the big ones in recent memory was the MOVEit breach some time ago. And fingers crossed that’s okay to dive into, but absolutely, we were able to have some more of the conversations with one of the FBI locations that was kind of in and around the area, and we know that they were tracking the case.
Truth be told, that was just kind of fully transparent. It was, “Hey, let me share with you what we’re seeing; you share with us what you’re seeing.” And that felt good. Like, that felt like hand-in-hand partnership. And it also felt like, hey, we’ve got friends in high places, which is a good feeling.
Which is a sweet testament of, like, we are all in the same fight and we are all tracking this. So, I was pleased to have that togetherness in that outreach. I think we can keep doing that.
Leatherman: Yeah. That’s great. Appreciate that feedback because I agree with you. If folks engage the FBI, again, we have a victim-centric model. We’re very focused on helping victims. But we also have that threat intelligence where if you haven’t seen a cyber breach happen, and others in your sector haven’t seen it happen, it doesn’t mean we haven’t seen it somewhere else in the country or our partners internationally haven’t seen it.
And our goal is to bring threat intelligence to bear that really helps you with that remediation. You mentioned earlier, the idea of credential theft. You talked about authentication and the risk that is omnipresent there. So, let’s switch over to the human side of the attack surface, because we’ve talked a lot here recently about the technical side.
And I want to kind of ground this portion of the conversation in some data, because the scale of the credential theft problem has changed, I think. IBM’s 2025 Cost of a Data Breach report found that phishing, which is fraudulent emails designed to steal credentials or deliver malware, was the most common way attackers got into organizations: 16 percent of all breaches, with an average cost of $4.8 million per incident.
But here’s what shifted. It’s not just phishing emails anymore. Your team at Huntress found that credential-stealing malware—programs specifically designed to harvest every password saved in a browser ... I think of, like, Lumma ... every login cookie, every session token on an infected machine—accounted for 24% of all incidents that you investigated last year. That’s the single most common threat type Huntress observed, I believe.
Those stolen credentials got sold on criminal marketplaces, and groups like Akira and Scattered Spider buy them to walk right into an organization without tripping any alarms. So, I look at that as a pipeline. Malware steals the credentials, criminal markets sell them, ransomware groups buy those credentials to get in very easily. Help the listeners understand how that economy works and why it makes traditional passwords, even with basic MFA [multifactor authentication] so vulnerable.
Hammond: I’m super glad you called it an economy because, you know, that’s really what it is. I know, maybe sometimes, oh, the dark web could be this sort of amorphous, spooky, scary, vague thing. And then I think a lot of folks, the nerds and geeks in us, know that. Okay. No, it’s still just the internet, where shady cybercrime stuff can happen.
But you realize that, yeah, the commodity malware that is truly, genuinely commoditized is oftentimes the info-stealer malware that just rips up these passwords, these credentials, these stored cookies, cached information, and then can just spit it out to an adversary that will sell it on the dark web. And then you’ve got that cycle, just as you mentioned: ransomware gangs buying it, and they’ve got a foot in the door.
I think this happens constantly. It’s a business. It is genuinely a hidden competition that maybe we don’t even realize, some sort of mirrored industry going on separate from our own. But that is why you hear the maybe trite phrase, “Attackers don’t break in, they log in.” Well, it’s because they’re using these credentials.
It’s because they sent a phish, and we could fall down the rabbit hole on phishing emails that, sure, use some new tricks, maybe an adversary in the middle to collect your credentials right then and there. And we could talk about, look, you could be using phishing-resistant hardware security tokens, like a YubiKey, to try and add another layer of defense.
That’s totally the defense in depth that we really have to harp on: hey, just adding more layers and more hooks and hurdles so that these attacks aren’t as easily performed, so it’s not as commoditized as it too often tends to be now.
Leatherman: Yeah, I look at this, you know ... I think when it comes to authentication, people look at it two ways. Either we’re going to train folks, over and over and over again, and we might test them via, you know, phishing emails and then provide remedial training. Or the other mindset is let’s take some technical solutions here. But I think it’s a combination of both, that we should be looking at because we have to make everybody more cyber aware of what they’re clicking on.
It’s always going to ... it is probably always going to be the No. 1 way adversaries get in. They continue to do it at record levels, I think, right now, through authentication, even when we have good toolsets out there. So, I think it’s a combination of both training and technical controls that will help secure the human. Is that the way you see it?
Hammond: Absolutely. Again, I don’t want to sound like a broken record or beat a dead horse, but that’s why I think education is just so, so vital. That’s our biggest sword and shield here. When we were talking about some of those different threats, of email especially, I think now we have the kind of back and forth where, look, if we could protect our passwords or credentials from whatever lands in our inbox as an attempt to steal them, you think about the layers of authentication. The password is something that you know. Then maybe a two-factor code gets sent to your phone, or you open up an authenticator app.
It’s something that you have—your mobile device—and the big biometrics, the fingerprint scan, facial recognition, whatever, that’s something that you are. And those three at least make a strong pedestal. They make a good trio. But if we can add in even more, we’re getting the most out of multifactor authentication to protect ourselves.
Leatherman: Yeah. I think, you know, certain forms of MFA were pretty successful when they first came out. I’m thinking SMS [Short Message Service]-based verification. I’m thinking push notifications. Those are still prevalent in all kinds of online accounts and business-supported accounts. What is your position on SMS and push notifications? I see those as weakened forms of MFA at this point.
Hammond: I would agree. SMS especially is kind of churn and burn. Most folks have now gone to, okay, some authenticator app, which is a good thing. But just as you mentioned with those push notifications, we saw kind of a new onslaught of a different technique, MFA fatigue, where an actor just spams: here’s a new notification, here’s another, here’s another, here’s another.
Until you just have to make it go away. So, there is always a nuance in that kind of security cat-and-mouse game, as I’m sure you know.
Leatherman: So, yeah, and you mentioned FIDO2-compliant devices. Those are pretty low-cost at this point to deploy. I know there’s a process by which we deploy them, but I think there are these newer technologies that really are phish-resistant methods, whether it’s passkeys on device, or the hardware keys themselves, or even the authenticator app, that move you beyond that kind of low-hanging fruit.
Hammond: I’m a big proponent of, hey, whenever you can, if you’ve got the buy-in, if it’s been made accessible and actually rolled out well for yourself and your organization, absolutely, those hardware tokens do add more security. Same thing goes for password managers. Same thing goes for, hey, just your understanding of, okay, running your antivirus, running whatever you need to get things at least guarded and gated up better than without.
Leatherman: Yep. And I would recommend to anybody listening that you look at a good, trustworthy—we don’t make recommendations on the show—but a trustworthy password management program. There’s a lot out there that may not be trustworthy. But, you know, for the listeners in the United States, a U.S.-based provider is important, or at least recognizing how the organization stores your passwords and controls security to them is incredibly important.
Okay. On ClickFix. Your team has been tracking a social engineering technique that’s been spreading in 2025 and into 2026 called ClickFix. Walk our listeners through this one, because it’s a perfect example of how attackers exploit human trust. As I understand it, an employee visits a website, sees what looks like a broken Captcha prompt—something that we all work through to prove that we’re human—and it ultimately ends in the user installing malware.
What does that look like? Kind of walk the users through that so they understand this kind of trend in ClickFix exploitation.
Hammond: Goodness. You gave me a good one. I might get too nerdy here, I don’t know, but ...
Leatherman: We like nerdy on the show.
Hammond: ClickFix is something I feel especially partial to because I tried to showcase it. I spotlighted it in a little video demo. And that might have helped the fires burn further, but it was an interesting thing because there was one of those info-stealer variants of malware, right? That was trying a little technique where there was some website out and about on the internet.
They might have pointed to someone either via a phishing link or they just had some redirection to drive an end user to it, but it just looks like one of the regular quick check boxes you fill out on the form that says, “I’m not a robot,” or oh, you verify you’re human just by clicking a couple pictures and you click the simple button like a turnstile.
“Oh, I am a real human being. I’m not a robot.” But rather than the maybe usual Captcha that you tend to see of oh, “Click the number of fire hydrants” or whatever, they give you some instructions. They say, “Press and hold down the Windows key on your keyboard and press the letter R,” and then they tell you to press and hold “control” on your keyboard and press V, and then they tell you to press “enter.”
And an end user who might not know, or just not have the wherewithal to understand, what that really means: you’re actually being fooled into opening a “Run” dialog box, a quick little window where you can run a command on your computer. But the website has sort of pre-poisoned your clipboard through the browser with some other malware or payload, some other little dropper.
So, when you use “Ctrl+V” to paste as the hotkey and then press “Enter,” in one quick and easy shot you’ve fallen for the bait and run malware. And that, I’ll admit, has kind of caught on; that’s become quite a trend. As for the biggest defense, we could get really tactical and talk about the detection, talk about the process chain, talk about how that looks from the forensics aspect.
But I think it’s still just a matter of letting folks know, “Hey, those are things you shouldn’t do. Don’t paste or blindly trust instructions from whatever random website gives them to you.” That’s the awareness and education again. But ClickFix has evolved into many variants, FileFix or download fix or consent fix, and the list goes on and on. It’s quite a can of worms.
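The process chain Hammond alludes to can be sketched as a simple detection heuristic: the telltale artifact of the paste-and-run trick is the Windows shell (explorer.exe, the parent of anything launched from the Run dialog) spawning a command whose arguments look like a pasted dropper. The event fields and marker strings below are illustrative assumptions for a sketch, not any vendor’s actual schema or rule set:

```python
# Hypothetical sketch of a ClickFix-style detection heuristic.
# Event shape and marker strings are illustrative assumptions.

SUSPICIOUS_MARKERS = (
    "mshta",      # ClickFix lures often paste an mshta one-liner
    "-enc",       # base64-encoded PowerShell
    "-w hidden",  # hidden PowerShell window
    "iex",        # Invoke-Expression of a downloaded payload
    "curl ",      # pull-down of a second-stage dropper
)

def looks_like_clickfix(event: dict) -> bool:
    """Flag process-creation events consistent with the paste-and-run
    chain: explorer.exe (the Run dialog's parent) launching a command
    line that carries markers typical of pasted ClickFix payloads."""
    parent = event.get("parent_image", "").lower()
    cmdline = event.get("command_line", "").lower()
    if not parent.endswith("explorer.exe"):
        return False
    return any(marker in cmdline for marker in SUSPICIOUS_MARKERS)

# Illustrative sample events
benign = {"parent_image": r"C:\Windows\explorer.exe",
          "command_line": "notepad.exe notes.txt"}
suspect = {"parent_image": r"C:\Windows\explorer.exe",
           "command_line": "powershell -w hidden -enc SQBFAFgA..."}
```

A real deployment would of course work from EDR or Sysmon process-creation telemetry rather than hand-built dictionaries; the point is only that the parent-child relationship plus the pasted command line gives defenders a concrete forensic signal.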
Leatherman: And you guys continue to follow that, right? Because it’s an incredibly impactful social engineering technique, and one I think anybody should be aware of, both end users who sit at home and CISOs who manage large groups of people. It’s going to deploy malware, but it’s using technology already on the device to actually run these commands, reach out to malicious domains, pull back malware, and deploy it on the system.
And so again, it’s targeting the human vulnerability. And phish-resistant MFA is not necessarily going to help here, because you’ve got a user actually doing it, not a malicious actor logging in. So end-user behavior on systems, and understanding that, is incredibly important. The same goes for the tools that we allow end users to run.
I think, you know, one of the things that we talk about in Operation Winter SHIELD is de-escalating user privilege, meaning everyday users who do not need to run as a privileged account probably should not, because you can often block malicious files, or certain types of files can’t run at all, when users are on a standard account.
Is that something that would stop some of these exploits, or is this really hitting anybody who clicks on these particular links or these particular Captchas?
Hammond: No, I think the root of what you’re saying is true; that is correct to say. Can we get access controls in place? Can we do some allow listing? Do we have the principle of least privilege so that, yeah, we don’t have end users running as admin? And a lot of us, I think, might say, “Oh hey, that’s like bare-bones security basics.”
None of it is the sexy stuff that’s kind of in the wind and in the news and the headlines. But quite often, especially with what we see, and I don’t want to fall back on “attacks of opportunity,” but genuinely, a lot of the threats just tend to be spray-and-pray. So whatever you can do to add those extra layers, to apply the principle of least privilege, I’ve got to say it: those bare-bones security basics, that’s the right stuff.
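The allow listing Hammond mentions can be illustrated with a toy sketch: permit a file to run only if its hash is on a known-good list. The hash set here is hypothetical, and a real deployment would rely on OS-level controls (for example, AppLocker or WDAC on Windows) rather than a script, but the underlying check looks like this:

```python
# Toy sketch of hash-based application allow listing, one of the
# "bare-bones basics" discussed above. The allow list is hypothetical;
# real enforcement belongs in OS policy (e.g. AppLocker/WDAC), not a script.
import hashlib

ALLOWED_SHA256 = {
    # Hypothetical known-good hash an administrator would maintain.
    # (This particular value is the SHA-256 of an empty file.)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(file_bytes: bytes) -> bool:
    """Permit execution only if the file's SHA-256 is on the allow list."""
    return hashlib.sha256(file_bytes).hexdigest() in ALLOWED_SHA256
```

The design point matches the conversation: a standard user plus an allow list means an unexpected dropper, pasted from a clipboard or otherwise, simply has no sanctioned way to execute.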
Leatherman: Yep. Absolutely agree.
Well, John, I know we’re kind of wrapping up our time here. I wanted to give you a chance to reflect on what we’ve talked about today. We’ve talked about a variety of different controls folks might consider. You know, for those who heard a lot about end-of-life devices or vulnerability-management programs or the principle of least privilege, which we just touched on briefly, or phish-resistant authentication, this sounds like a lot if you’re not in the cybersecurity field.
If you’re somebody who manages cybersecurity teams but you’re not a technical person, if you’re on a board of directors, if you’re external counsel or internal counsel, what are the kinds of conversations that you would start to have Monday with your teams that would help to frame some of these discussions?
Where would you kind of start to dive in, and how should they think about approaching that?
Hammond: Well, I think my biggest call to action here, truth be told, and I don’t want to sound like I’m parroting anything, but what you’ve built out for Operation Winter SHIELD is really the right stuff. It’s distilled down. It’s a little bit tactical: “Hey, these are the 10 things to work through.” And you don’t need to go through them in, forgive me, chronological order; cherry-pick what’s the easiest thing for you to get started on.
What is the best thing to get some buy-in and some forward motion, to really get the momentum? I think, just as you mentioned, we were kind of knocking them off one after the other. We drove all over the map, and it is a lot of work. But the most important thing is that you come home, maybe you get in the office on Monday, you’re talking to your team, you’re briefing the board, chatting with executives, whatever the case may be, and you’ve got to say, “This is something that we have to do,” no matter the amount of work it is, because cybersecurity is something you have to earn.
Like we have to stay in the fight. We have to stay with the pulse, keep track of that heartbeat. It’s a lot of work, but that’s the only way that you’re going to stay toe-to-toe with the adversary and really be, forgive me, ahead of the threat. That’s what it takes.
Leatherman: Well, John, you have a great perspective there on what organizations need to do day in and day out. I think your mapping this back to Winter SHIELD is important because this is a first-of-its-kind campaign for the FBI. We intentionally released those 10 controls because it doesn’t matter if you’re a Fortune 100 company or a small mom-and-pop shop; we continue to see these things exploited every day.
Our hope is that folks will look at that and start to take those initial steps toward greater resilience. And listen, leverage your favorite commercial AI platform to ingest that list and start asking questions about each item there. What other AI recommendations do you have? Should they jump in and start looking through vulnerabilities?
Should they look at phish-resistant MFA? Like where are you guys at with recommendations to companies that you consult with on AI use or adoption?
Hammond: Yeah, I think I’m still a little bit tempered, I’ll admit, trying to stay down to earth with a lot of the whiz-bang, newfangled AI tech. I really, really do see it in an optimistic way: “Hey, this is some awesome new innovation.” But I still want it to supplement my human activity.
It’s a tool. You know, I like to keep a human in the loop, so I’m not at the point of letting AI take the wheel quite yet. But I do think it’s great to treat this as something we’ll continue to be vigilant and secure with every step of the way.
Leatherman: Great, John. Thanks. Here’s what I want all our listeners to remember: All of the resources that we talked about today related to Winter SHIELD are available to the public. You can go to fbi.gov/wintershield. All our content is there and throughout February and March we’ll be advertising kind of what we see as the best ways to move forward on implementing some of these controls.
John talked a little bit about the CISA Known Exploited Vulnerabilities catalog, which is updated in near real time. That is available at cisa.gov, and the advisories that we talked about today, including some of the advisories about Akira and others, are all available at fbi.gov and cisa.gov. I would encourage you to take a look at those. We talked a little bit about open-source software and the ability to use that.
What’s interesting is CISA also provides a list of free open-source software that is available to industry. There’s no charge for that; you can visit cisa.gov to see it. And then, of course, if your organization experiences a cyber breach, we encourage you to report it to the FBI. We’re there to help. We are victim-centric. We want to bring threat intelligence to bear that will help you with your containment activity while we also pursue the adversary.
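The KEV catalog mentioned above is published as a machine-readable JSON feed, so a team can cross-check it against its own software inventory. The sketch below uses field names from the catalog’s published schema as understood at the time of writing (verify them against cisa.gov before relying on this), and the sample records and CVE IDs are hypothetical:

```python
# Sketch of filtering the CISA Known Exploited Vulnerabilities (KEV)
# feed for vendors you actually run. Field names follow the catalog's
# published JSON schema as understood at the time of writing; a live
# version would fetch the feed from cisa.gov rather than use a sample.

def kev_matches(catalog: dict, vendors_in_use: set) -> list:
    """Return CVE IDs from the KEV feed whose vendor (lowercased)
    appears in our inventory of vendors in use."""
    return [
        entry["cveID"]
        for entry in catalog.get("vulnerabilities", [])
        if entry.get("vendorProject", "").lower() in vendors_in_use
    ]

# Offline sample shaped like the real feed; records are hypothetical.
sample = {"vulnerabilities": [
    {"cveID": "CVE-2024-0001", "vendorProject": "ExampleVendor",
     "product": "ExampleApp", "knownRansomwareCampaignUse": "Known"},
    {"cveID": "CVE-2024-0002", "vendorProject": "OtherVendor",
     "product": "OtherApp", "knownRansomwareCampaignUse": "Unknown"},
]}
```

Because the feed is updated in near real time, even a small script like this, run on a schedule against a maintained vendor list, turns the catalog into a prioritized patching worklist.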
There’s deterrence through defense, but also deterrence through offense, and our teams are pros at imposing cost on malicious cyber actors. So, we’re here to help. Reach out. That’s what this is all about: getting ahead of the threat together.
John Hammond, principal security researcher at Huntress. Thank you for joining us on Ahead of the Threat.
Hammond: This was a real treat. Thank you so much, Brett.
Leatherman: Thanks.