Ahead of the Threat

In the first full episode for Season 2, host Brett Leatherman, assistant director of the FBI’s Cyber Division, welcomes the self-described cyberthreat hunter John Hultquist, the chief analyst of the Google Threat Intelligence Group.

In their conversation, John stresses that never being hacked is an unrealistic goal. Building resilience, especially in how fast you can get back online following an attack, is the best mitigation approach. The two also discuss AI, with John arguing that because hackers use it to infiltrate systems, organizations must use AI in their countermeasures.

Brett also announces Operation Winter Shield, the FBI’s first-of-its-kind campaign to highlight the 10 most common ways the FBI sees companies get victimized by cyberattacks. Learn more at fbi.gov/wintershield.

The news segment also returns in Season 2, with Brett joined by special guest Kristin Grimes, an FBI unit chief in the Cyber Law Unit, to discuss the possible reauthorization of CISA 2015, edge device exploitation, and the White House’s executive order on AI. Joint advisories: https://www.cisa.gov/news-events/news/cisa-uk-ncsc-fbi-unveil-principles-combat-cyber-risks-ot

Listen to Ahead of the Threat episodes, read the transcripts, and find related material at fbi.gov/aheadofthethreat

Subscribe to Ahead of the Threat wherever you get your podcasts, and follow us on social media.

What is Ahead of the Threat?

On Ahead of the Threat, Bryan Vorndran, assistant director of the FBI’s Cyber Division, and Jamil Farshchi—a strategic engagement advisor for the FBI who also works as Equifax’s executive vice president and chief information security officer—discuss emerging cyber threats and the enduring importance of cybersecurity fundamentals.

Featuring distinguished guests from the business world and government, Ahead of the Threat will confront some of the biggest questions in cyber: How will emerging technology impact corporate America? How can corporate boards be structured for cyber resilience? What does the FBI think about generative artificial intelligence?

Brett Leatherman, assistant director of the FBI’s Cyber Division: Welcome to Season 2 of Ahead of the Threat. I’m Brett Leatherman, assistant director of the FBI’s Cyber Division. Our cyber mission is to impose costs on criminal and nation-state hackers through our unique law enforcement and national security authorities. Equally important is our commitment to industry and victims, and this podcast allows us to share our perspective from the digital frontlines with you.

So, thanks for joining and being a part of the important work to defend the homeland in cyberspace. This season, we’re opening each episode with a look at recent news. To help with that, I’m bringing in people from across FBI Cyber who are pushing our mission forward to help you understand what’s happening and why it matters. For Episode One, I thought it was important to invite Kristin Grimes, chief of FBI’s Cyber Law Unit.

There’s been a lot of discussion around CISA 2015 [Cybersecurity Information Sharing Act of 2015] reauthorization. And, by the time you hear this, Congress may or may not have reauthorized it. Either way, we want you to understand the protections that law provides when it’s in place. And equally important, the protections you have sharing with the FBI, regardless of whether CISA 2015 is active or not.

We’ll also hit a few other developments you may have missed since Season One. Then I sit down with John Hultquist, chief analyst at Google Threat Intelligence Group. John leads the team tracking the world’s most dangerous cyber threats, nation-state espionage, criminal ransomware, and everything in between. We’ll dig into what the latest threat landscape looks like from the front lines.

But first, Kristin, welcome to the show.

Kristin Grimes, chief of the FBI’s Cyber Law Unit: Thanks, Brett. I’m glad to be here.

Leatherman: Yeah. Before we dive into the news, help the listeners understand, what is it that the attorneys in the Cyber Law Unit at FBI do?

Grimes: Yeah, definitely. So, the Cyber Law Unit, or as we call it inside the FBI, CyLU, our mission is to keep Cyber Division in a posture to innovate while staying within the lanes of law and policy. So, what does this mean? It means we look at all the authorities that the FBI has. And how can we use those authorities to their maximum ability to complete our mission and to complete the mission of the FBI to protect the American people and uphold the Constitution?

And we do all of this, right, while empowering FBI teams to defend the homeland.

Leatherman: Yeah, it’s incredibly important, right? I mean, and you hit that like, in part, you’re keeping us out of trouble because we are innovating at the speed of cyber right now. But we have to do it within the confines of protecting privacy. You know, upholding the Constitution, while we also impose costs on those bad actors.

Grimes: That’s right. And, you know, lawyers sometimes have a bad reputation of saying, “No.” And that is absolutely not the way that we do business here. Our mission is always to get to “yes” and just to get to “yes” in the right way.

Leatherman: Yeah. That’s incredibly important because we do have to move really quickly and pivot as the adversary pivots. And you guys do an incredible job, you know, keeping us on the frontlines of the cyber fight. So, pivoting now to CISA 2015. I know that when Salt Typhoon compromised the global telecommunications infrastructure, our response came down to one thing: victims’ willingness to share with us quickly in real time.

Like, what was the threat that was happening in their environment that allowed us to help them with containment activity, but also pivot against the bad actors themselves, right? They brought us in quickly. They shared in near real time. And that’s where CISA 2015, in part, comes in: affirmative legal protections for sharing cyber threat indicators with the federal government.

And, you know, it expired, or is set to expire January 30th. We don’t know at this point if it will or not by the time the folks hear this. But walk us through what the law provides and what protections remain in place regardless of whether the law is in place or not.

Grimes: Yeah, I appreciate every opportunity I get to talk about information sharing. It’s so important to the FBI. I talk about it every chance I get, which makes me super popular at parties. But at least in this context, I think people actually want to hear about it. So, CISA 2015, which is the Cybersecurity Information Sharing Act of 2015, allows for the sharing of cyber threat information between and among the private sector and the government.

And cyber information is defined specifically in the statute as cyber threat indicators and defensive measures. And, you know, there’s a lot that goes into, “What do those things mean?” But you know, Brett, you see it from your perspective of what’s important for the FBI to receive, right? Like indicators of compromise; tactics, techniques, and procedures; and other things like that are covered by the law.

And as long as it’s shared for a cybersecurity purpose and personally identifiable information is stripped from that sharing, it receives a whole host of protections. And I’ll run through the list of protections. I mean, we could talk for hours about what all this means, but I’ll just detail really quickly what the protections are.

So: no waiver of any applicable privilege, which includes attorney-client privilege and trade secret protections; an antitrust liability exemption; exemption from federal and state disclosure laws; exemption from certain state and federal regulatory uses; treatment as trade secret or commercial information; and an ex parte communication waiver. There’s an additional level of protection, a liability protection, if you share under DHS’ [Department of Homeland Security’s] mechanism, but that’s very specific. But you can see, I mean, that’s a huge list of protections that are afforded if you share under CISA 2015.

Leatherman: Yeah, it’s important to know, like, we’re looking at indicators of compromise. We’re not looking at, like, underlying data that’s important to the business themselves. We’re not looking at the exfiltrated data itself if a business has, you know, suffered a compromise. Of course, we’re interested in intent behind what the adversary is doing. But CISA 2015, and the FBI’s work, relies heavily on those indicators of compromise.

From my vantage point, there are two things we can do with this right up front. Number one, run those IOCs [indicators of compromise] through our holdings and our partner holdings to help victim organizations understand attribution, with the potential to do greater hunt and containment efforts, but also for us to move upstream against the actors to impose cost against them, right? And so, those are incredibly important things that really don’t violate any sort of, from my perspective, privacy concerns regarding customers or downstream stakeholders.

Grimes: Right. And that data that we have internally is so rich that, you know, what we bring in, we then layer on what we have access to, whether it’s our own investigative data, a variety of classified data, information from our government partners. And so, we can really enrich what we receive and then push it back to companies.

Leatherman: Yeah. Now that’s CISA 2015 protections. Let’s talk about what protections the FBI offers outside of CISA 2015 as a result of our law enforcement mission.

Grimes: Yes. And so, prior to CISA 2015, during the period of CISA 2015, if CISA 2015 lapses, there are certain protections that will exist no matter what. And, you know, we’ll talk about different statutory ones. But I think the most fundamental thing that we need to discuss is just the way that the FBI handles victim information or information that companies share with us, and particularly for victims.

I mean, we always treat them as victims and never want to revictimize them. And so, we hold that relationship very sacred. And every chance we get, and I know every chance you get, Brett, you go out there and you talk about how we do this. We say it publicly all the time. We write it on paper. But what I’ve been most impressed with from the inside is seeing how we hold true to that promise.

You know, those discussions that we have internally about, “What can we do to really uphold this promise that we make to the private sector?” I mean, it’s real, it’s authentic, and seeing it from the inside is, is pretty inspiring.

Leatherman: Yeah, it’s part of our DNA, right?

Grimes: Yeah.

Leatherman: We are a century-old organization. More than that. That has always treated victims like victims. And, the cyber threat is no different. Even when victims don’t know they’re victims. Our goal is to take any approach with the victim in mind and to err towards urgency and victim notification. In reaching out, in helping victims when other people may not treat them as victims, especially in the cyber realm.

And so, protecting their information when they share it with us is incredibly important. And that’s protecting it from regulatory disclosure. It’s protecting it from public disclosure. We’re really good at ensuring that, how we, how we move forward with our investigations is focused on protecting that information.

Grimes: Yeah. One-hundred percent. And that leads really nicely into what those protections are that we can rely upon to follow through on that promise. And so, you mentioned regulators. That’s one that we hear all the time. Victims say, “Well, what if I share my information with you and then will you then go share it with regulators?” And the answer to that is, “No.”

We don’t share with regulators for regulatory purposes. We point the regulators back toward the victim and to their legal counsel and advise the regulators to use their own authorities, that they have to get the information that they’re seeking. And I think that’s really important. It does, it assuages some of the concerns of the private sector of sharing with us, knowing that it, they’re sharing for a purpose and it stays with us for that purpose.

And similarly, there are protections that exist when we get FOIA requests: Freedom of Information Act requests. And this is also built into CISA 2015. But it stands alone as well that there are certain exemptions that we can assert to protect information shared with us. So, if it’s compiled for law enforcement purposes, if it’s trade secret or company confidential, we can protect it by asserting those exemptions.

It helps if the information is marked as such. You know, you share it with us and it’s marked as “Confidential,” but it’s not a prerequisite. But, you know, we do assert those exemptions to the extent they are appropriate.

Another concern companies have is antitrust liability. Again, it’s built into CISA 2015. But even back in 2014, before it was enacted, the Federal Trade Commission and DOJ [Department of Justice] issued a policy paper saying that the proper sharing of cybersecurity information does not subject companies to antitrust liability when properly shared.

So, you see that there’s this already existing patchwork of protections that exists.

Leatherman: And then moving beyond that, we use traditional law enforcement legal process, right, our law enforcement authorities to also collect information from industry, but also to protect that information from disclosure.

Grimes: Exactly. And that’s one of the other things that we always advise companies: “You can share with us voluntarily.” There are other protections that I’ll go through that will also allow you to do that sharing. But we can also offer fast and friendly legal process if you would prefer to go that route. So, you know, grand jury subpoenas, D orders [court orders under 18 U.S.C. § 2703(d)], search warrants.

Those things are also available to share information under legal guardrails.

Leatherman: Yeah. And most importantly, we have those conversations up front during a cyber breach. So, none of this comes as a surprise to a victim. It is … those conversations happen with the CISO [Chief Information Security Officer]. They happen with the inside counsel. They happen with the outside counsel. We want to make sure that everybody’s comfortable with how we move forward.

Grimes: Exactly. And, you know, one of the things that we’ve offered to companies is, right, before you even have an incident, come and talk with us. And we’ve been expanding that to legal counsel. I mean, a lot of the targeted messaging has been to CISOs and CEOs [chief executive officers], but the lawyers are the gatekeepers for a lot of those conversations.

And so, in the Cyber Law Unit, we’ve been going out and having these one-on-one conversations, speaking directly to legal communities, offering lawyers to say, you know, come talk to us directly. We’ll have that one-on-one conversation. And so, I would love to use this forum as a mechanism to offer that out to anybody who’s listening.

The Cyber Law Unit is always open to having those conversations and hopes to have them in advance of an incident to talk about things like this.

Leatherman: Get ready for the emails. I am sure there’s a lot of people who are going to take you up on that. That said, we do have 56 field offices that are empowered through their cyber teams to engage with you. All of them understand kind of what we’re talking through today and are happy to work through that with you, and certainly to get Kristin and her team engaged as well.

Kristin. What else? Anything else you want to share on CISA 2015 before we kind of move on to the next topic?

Grimes: So, just the last thing I’ll say on a lapse, if there is a lapse, there are a whole host of other protections that we didn’t even get into. But just to say there are some sector-specific protections: If you’re a provider that’s covered by the Electronic Communication Privacy Act, that statute builds in protections for voluntary sharing. If you are a financial institution, you can share information under the Bank Secrecy Act in a Suspicious Activity Report.

So, there are lots of mechanisms for doing it. So, don’t be fearful if there’s a lapse. But, you know, make no mistake, CISA 2015 is critically important because it creates a streamlined, clear framework for information sharing that shows the protections that are available. And it really has accelerated and facilitated the information sharing that we need to do our job and protect the American people.

Leatherman: Yeah. And I would just encourage everybody out there, it all starts with a conversation. And so, you know, the conversations are best had before the breach happens. And so, wherever you sit, we’ve talked about this before. The boardroom, the network defender room, the server room, the outside counsel’s office. Like, we’re ready to have that conversation, reach out, and we’re happy to talk about that.

So, John Hultquist, who’s up next here on the episode, and I talked a lot about edge device exploitation and the increased targeting by threat actors of routers, modems, and IoT [Internet of Things] devices. It’s now one of the top ways that adversaries get into environments.

January 2024, we took down two nation-state botnets in the same month. Operation Dying Ember disrupted over a thousand routers that Russia’s GRU had hijacked for cyber espionage.

The KV Botnet takedown hit infrastructure here in the U.S. that China’s Volt Typhoon campaign was using to pre-position in critical infrastructure. Two major adversaries. Same playbook. Compromised routers and other devices at the edge.

So, Congress is paying attention to that. The Routers Act requires the Department of Commerce to study national security risks from consumer routers and modems from certain threat countries. It passed the House in April and builds on earlier laws that blocked providers like Huawei and ZTE from U.S. networks. So, kind of help us understand where this act sits and what it addresses.

Grimes: Yeah, this act, the bill was proposed because there was a recognition that there is this huge blind spot, that there had never been systemic review of these kinds of devices and the threat that they present to national security. And, Brett, like you mentioned, it builds upon previous efforts to address these kinds of issues where we’re looking at, you know, very specific types of technologies and the threats that they pose.

And some of the previous examples, you know, when we talk about the Secure Technology Act and others, it’s very targeted. Some of them actually already call for action in ripping and replacing certain devices and provide for cost reimbursement. This proposed bill is, you know, it’s kind of bigger picture. It’s, “Let’s think about what the threat is and then propose some type of action.”

So, you know, a lot remains to happen. This also has to, you know, it has to pass first. Then there’ll be a study. And then there would likely be some follow-on action. But it’s definitely a very important first step.

Leatherman: Yeah. And I think for those folks listening today it should invoke a conversation, right? If you’re a board member or a C-suite executive with your network defenders, what is the technology that sits at our edge right now? We’ve seen these actors time and time again build these botnets based on SOHO [small office/home office] devices or end-of-life devices.

What devices and where were they manufactured? Where do they sit in our environment today? Legislation takes time, but it signals the priority here, right? And so, what kind of conversations should we expect, should we hope, that folks in industry are voluntarily starting to have today as a result of this?

Grimes: Yeah, like you said, this, this is one piece of a bigger picture. And so, the conversations that should be happening are not just focused on this particular bill and what this is supposed to tackle, but the big picture of supply chain security. And there have been many executive orders and legislation passed that aim towards securing our supply chain.

Some are targeted to the government supply chain, some are more, you know, across-the-board commercial supply chain. So think about it when you’re talking with not just your board and your C-suite, but all the way down to, you know, the buyers within your company, the people at the line level that are purchasing the tiniest piece or part that fits into a bigger system, because all of those have potential risk, and not everybody is getting the message when you’re, you know, the line-level buyer working in procurement.

And so, you just need to bring more people into the picture and make sure that you’re having this holistic discussion about supply chain security and where it fits within the bigger picture.

Leatherman: Yeah, such an important conversation to have, especially if you’re an organization that has older technology at your edge right now. If that is coming from a high-risk country, developed there, it may still be under, you know, maintenance standards where they’re applying patches to the device, but in some cases there are hard-coded backdoors that allow an actor to get into those devices.

And so, really understanding remote access to those environments by the original manufacturer is really important, and doing kind of threat assessments, risk assessments, around that is really important for industry.

Grimes: Absolutely.

Leatherman: All right. Final topic is the AI [artificial intelligence] executive order. We know that adversaries are using AI to move at machine speed. In November, John and I talk about Anthropic’s report on how a PRC state-sponsored campaign targeted industry, over 30 organizations, where AI executed like 80-90% of the operations autonomously. If we’re currently in an AI arms race, which I’ve heard a lot of folks talk about, we can’t trip over our own feet here, right?

So, right now, companies deploying AI are navigating hundreds of potentially conflicting state laws, some requiring models to alter output in ways that could compromise accuracy. So, in December, the administration issued an executive order on AI to address exactly that. It establishes a DOJ task force to challenge conflicting state laws. It ties federal broadband funding to regulatory alignment and directs the FCC [Federal Communications Commission] and FTC [Federal Trade Commission] to consider preemptive federal standards.

So, Kristin, walk us through what this EO [executive order] is trying to accomplish.

Grimes: Yeah. There’s no cyber podcast that’s complete without talking about artificial intelligence. And so, yeah, absolutely, it’s so important to talk about it. You know, we have to recognize that this administration has done so much to position the United States to be a leader in artificial intelligence. And this is just one piece of that larger picture of, you know, bringing America to the forefront of this. It’s not just a matter of, you know, being the best in AI, but it’s actually a national security imperative that we are, because if it’s not us, it’s somebody else.

It creates jobs and economic growth, and it protects our competitiveness. And this all builds upon the AI pillars that the administration put out, which I’ll just read quickly because I think it really encapsulates the importance of what we’re doing here.

So, in July 2025, the president put out the AI Action Plan, which had three pillars: to accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security.

And you know, we are mirroring that in the FBI, or we are trying to, right. We are accelerating AI throughout the Bureau to be in line with what the president’s priorities are for AI. And like you said, these new EOs are meant to address the fact that we can’t achieve that mission if we’re operating in this fractured landscape. If each state is implementing their own regulation, then, you know … how can an AI company succeed if, you know, in one state they have to operate under this paradigm, and in another state they have to operate under another?

And so, this is meant to really centralize what that approach is to AI and make sure that the handcuffs are off, right, to be able to, to innovate and win on this, in this space.

Leatherman: Efficiency matters here at speed. Right? Like we have to adopt AI at speed. We just talked about the PRC’s use of AI. We’ve seen Russia use that as well in influence operations. So, we have to match that speed. And hundreds of regulatory efforts across the states can hamper that. And so, this is meant to, to help us be more efficient.

Grimes: Yeah, absolutely. And Brett, you really hit the high points of what the EO covers. It really is designed, again, not to just jump right in and change things up, but to take a very deliberate look at where things stand right now, to establish this task force, identify those state laws and regulations that could be in conflict with this goal, and then to see what actions should be taken at that point to make sure that we’re doing it the right way.

And so, like you said, considering preemption laws, proposing certain recommendations for bills that could address this, and all of this is designed with that end goal of being the leader in AI.

Leatherman: Well, without a doubt, we’re going to talk a lot more about AI in the coming year, in 2026 and beyond. And we have to, right? And that is an important EO to help us really focus on reducing the roadblocks and increasing efficiency.

Grimes: Absolutely.

Leatherman: Great. Well, thanks for joining us, Kristin. And thanks to the entire Cyber Law team for what you guys do. You enable FBI Cyber to defend the homeland in new and innovative ways, while also ensuring everything aligns with our respect for the Constitution and the privacy of the American people. That balance really matters.

Before we move on, just wanted to flag for the audience two recent cyber intelligence releases critical for anyone operating in industrial control systems.

So earlier this month, we joined CISA, the U.K.’s National Cyber Security Centre, and international partners to release secure connectivity principles for OT [operational technology]: eight principles for designing and securing OT connections. Exposed OT is being actively targeted by multiple nation-state actors and hacktivists, and defending it is critical.

And then coming soon, we’re going to release “Adapting Zero Trust Principles to Operational Technology,” a joint guide, also with CISA and some interagency partners, focused on implementing zero trust within those environments. And so, incredibly important intelligence. I would encourage everybody to look at it. If you’re a critical infrastructure owner or operator, this material belongs on your desk.

Finally, I’ll talk about this in the episode with John. This week, the FBI Cyber Division has launched Operation Winter Shield, a 60-day sprint to defend the homeland.

Unlike many law enforcement operations where we rely heavily on our state and local partners, we’re relying on you as defenders of the homeland here as well in industry. fbi.gov/wintershield is where you can find more information. And again, I’ll talk with John on the podcast as well.

So, we’ve covered the policy landscape: legal protections for sharing, edge device security, AI regulation.

Now let’s get into the threat landscape driving all of that. My conversation with John Hultquist, chief analyst at Google Threat Intelligence Group, is next.
___________

Leatherman: Welcome back to Ahead of the Threat. Today’s guest is John Hultquist, chief analyst at Google Threat Intelligence Group. John has spent over 20 years analyzing and hunting malicious cyber actors, previously working as vice president of analysis at Mandiant and also running the Cyber Espionage Intelligence team at iSIGHT Partners. John started his career at the State Department and Defense Intelligence Agency.

John leads the team responsible for tracking the world’s most dangerous cyber threats, from nation-state espionage campaigns to criminal ransomware operations. He now sits at the center of Google’s global threat intelligence operation, translating what his analysts see into guidance that helps organizations defend themselves.

John, I often describe the FBI’s cyber work as sitting at the intersection of law enforcement and national security operations. Reading your bio and knowing what your teams do every day, your mission sounds really familiar to me. So, welcome to the show and thanks for joining us.

John Hultquist, chief analyst, Google Threat Intelligence Group: Thanks. Thanks for having me.

Leatherman: Yeah, absolutely. So, John, I think a lot of people out there know what Google does day in and day out, right? But they don’t necessarily have visibility into the perspective your team has, which is kind of that threat telemetry across the Google ecosystem. Can you give the listeners kind of a perspective on what you guys see, and exactly why that is so important to helping defend Google and U.S.-based, you know, downstream stakeholders?

Hultquist: Sure. You know, I came into Google through, like, a series of acquisitions. I came in from iSIGHT Partners back in the day. And, you know, iSIGHT started out essentially monitoring the underground. So, we were a very, very early player in that space. And they brought me in, out of the government space, to come in and help them start covering cyber espionage. And frankly, you know, it was the very early days of VirusTotal.

We were working with people who had been regularly targeted, like dissident groups—anywhere we could get very, like, the just … any kind of information on the threat and, start aggregating it and pulling it together. And that’s how the very early visibility came about. But ultimately, we were fortunate enough to integrate with technology like FireEye devices.

We were acquired into this organization that had Mandiant. The beauty of Mandiant is that Mandiant gets called in on all these extremely high-profile, serious events, and they not only get to see just the sort of surface-level, attack surface side of things, which is where I’d been sort of stuck for a long time with the visibility I had.

But we got to see after the fact, you know, how an intrusion unfolds. And that’s a really beautiful thing about, you know, being associated with Mandiant, is they get to go in and see how the whole thing, you know, the whole thing plays out. And that’s been, that’s a big piece of, you know, still a big piece of what we do.

So, we’re looking at the underground. We’re integrated with the various technologies to pull that data in. Now we’re part of a company that owns VirusTotal. So, of course, we’re, you know, very strong users of VirusTotal, which gives us a tremendous amount of data that we use. And we’re aggregating all that, to pull together like a—the clearest picture possible of the adversary, right?

That’s what everybody’s trying to do. These are spies and criminals. They’re doing everything possible not to get caught. And, you know, the only chance you have is essentially pull as many threads together as possible to get some picture of what’s going on.

Leatherman: Yeah. You and your teams are cyber investigators, which I love, right? Because that’s what we do day in and day out: execution, threat pursuit. But you have that unique perspective because the Google ecosystem is so expansive, right? And so, pairing that Mandiant visibility with the Google ecosystem, now you see, I would imagine, this mass of telemetry that you have to try to distill, what’s important, what’s not important, into your investigations to understand things.

Hultquist: Yeah. And that’s the other part of the game, right? Obviously, one of our most important roles here is to protect, you know, Google users. Right? And that’s a huge piece of it. But the other game here is: can you build this picture and learn things that you can leverage?

Because, like you guys working with sensitive data all the time, not everything you learn you can talk about. You know, even when we talk about IRs [incident responses], the things you learn in a single IR, oftentimes you cannot use elsewhere. But the beauty, the trick, is to build the biggest picture possible and then get the most utility out of it.

Right? And ultimately, what we want to be doing, instead of piecing together one incident after another, is hunting an adversary that we now know, right? And that we can actively chase, right? Like there are actors that have always been at the top of my list. APT29, obviously, was associated with SolarWinds. In my opinion, they’re an apex predator.

I’ve always been extremely interested in anything they’re doing. We’ve got, you know, we want to not just sort of passively wait to see what’s going on from them, but we need to go out and hunt for them, right, and see if we can find them across that telemetry or, you know, in VirusTotal or in any of these incidents that we’ve responded to.

Leatherman: Yeah. APT29, associated with the Russian SVR. Right. For our audience here, that’s probably Russia’s most sophisticated threat actor, advanced persistent threat actor, responsible for one of the more consequential cyber supply-chain compromises we’ve seen, impacting, what, 18,000 total organizations based on getting into one company?

Hultquist: Yeah.

Leatherman: That’s kind of the scale of the problem that we face.

Hultquist: And they’re still operating at that scale, right? So, like the other thing is, once, you know, once you start understanding this, you can start essentially marshaling your resources to the right problem. And that’s the whole threat intelligence game. We all live in a world of finite resources. AI [artificial intelligence] hasn’t solved that yet. Right? We’re still in a world of finite resources.

And so, the game is essentially, leverage your resources most efficiently. And that’s what threat intelligence to me is all about. You know, obviously on a very tactical level, it’s about catching bad stuff and, you know, and … matching some IP that, you know, or blocking some IP or something like that. But really on a strategic level, it’s about leveraging limited resources for the problems that matter.

If you’ve got APT29 and you’ve got just about any other actor, you know, and let’s say you’re a technology provider, there’s a really good chance they’re about to hit all of your customers downstream. That’s what they do. That’s the problem you need to be focused on. Whatever other thing you’re doing doesn’t matter compared to that.

And that’s like the wisdom that we want to be able to bring, like pull out of all this.

Leatherman: Yeah. It’s the one-to-many problem. Right. And we see that with ransomware actors. We see that with APT [advanced persistent threat] actors. They look for those points of injection into an ecosystem in order to have maximum impact while trying to minimize, like the blast radius for themselves, meaning they don’t want to go to 18,000 organizations if they can go to a development server and then launch themselves downstream.

It’s that one-to-many issue. What I really love about your teams, and Google in general, is you don’t just take your information and use it for your own ecosystem. You release reporting, right? The M-Trends report. It is really interesting for the global population of network defenders, because of your perspective and what that brings.

So, you know, reading the 2025 M-Trends report, the threat story publicly tends to be about the sophistication of the threat actor, like more zero days, more advanced techniques. But your report data tells us a little bit of a different story. Really, it continues to be the low-hanging fruit in many cases.

Stolen credentials surged past phishing to become the second most common attack vector to get into environments. Exploits actually declined because, I think, some of the more fundamental things are working for the actors, right? I think M-Trends mentioned that the Snowflake campaign itself was traced back to credentials sitting in criminal markets since 2020, and in some cases they were still effective.

So, if the threat isn’t necessarily more sophisticated, but it’s faster and easier paths in, what should defenders be thinking about or rethinking right now?

Hultquist: I think it depends on your threat model. Right? I think it depends on what kind of business you have, what your most critical assets are, and what sort of attractors. I’m sure there’s already some, like agreed upon language for this. So, I apologize for saying it like this. But whatever is attracting the adversary to you, right?

There’s a question of how critical things are and how vulnerable you are. And then there’s also a question of what actually attracts these adversaries. If you are building, you know, jets, military equipment, there’s a very clear reason why extremely sophisticated actors would have an interest in you. If you are a political figure running a government, clearly you’re going to be attracting very high-level, sophisticated actors.

But I used to tell people, you know, in meetings, you’re not interesting enough for zero-day. And I will say that adversaries, particularly Chinese adversaries, are getting a lot more leverage out of single zero-days. And they’re sort of moving lower, going after lower-hanging fruit with them, particularly because it’s not a social-engineering zero-day that we’re talking about as much; it’s hitting the edge. Right?

And it’s not as at-risk. But for the average person, really, the biggest concern they should have is ransomware, right? Or extortion. And that plays out again and again and again. That’s the majority of our business, even though we get called in for these very serious incidents from very sophisticated players.

It really just depends on what you’re protecting and what you’re … you know … and why somebody would come after you. Some people just aren’t, they’re just not that into you, you know, you’re not interesting enough for a serious adversary. And that’s great. That means you can focus on, like, the other problem. Right?

And that still matters. It’s not easy, right? I think I have seen … you know, what’s amazing about some of these actually less technically sophisticated actors, I’ve seen them roll through targets, that frankly, I don’t think the Russian actors, who I’ve been tracking for two decades, could have managed. Right? Like the aviation sector. Right. I mean, the critical infrastructure players.

Your Sandworms, the Energetic Bear guys out of the FSB. Those guys, I’m not sure they can hit them. And they’ve hit those sectors before. I’m not sure they would have rolled through as quickly as some of these kids, who, don’t get me wrong, are good. They’re really outstanding at social engineering.

Right. And that’s a whole new game. A different game that we have to play or, like, things to worry about and defend that we haven’t really focused as much on.

Leatherman: Yeah, that’s a great point. And, so, you’ve tracked Scattered Spider. I think that’s what you’re talking about here, the Scattered Spider threat and the social engineering that this disparate group of individuals is able to do. You’ve tracked them methodically through various industries. U.K. retail in April, U.S. retail last May.

Insurance, I think, last June. Now, you know, the aviation sector. So, in one case we had three insurance companies hit in five days, again by this Scattered Spider group, and they continue to evolve. You’ve said previously that this is the threat that keeps you up at night. So, for organizations in sectors they haven’t hit yet, what should they be doing right now to get ahead of that pattern?

Hultquist: Well, I think, you know, it comes down to their incredible prowess with social engineering, right. Call centers are where they’ve shown real skill, as far as getting into your IT helpdesk and convincing them to do something that they know they shouldn’t do. Right. One of the places where I felt we added real value, you know, in the public domain, was being able to see, okay, we can see them shifting toward a sector.

Let’s go out and tell that sector, “You’re on high alert.” Right. And we put out concrete things, like hardening guidance against these sorts of attacks. The very first thing is: where is your call center, and do they know they need to be ready to say no, right? And they need to be on alert; they need to say no to everything.

And if there’s a question, take it to a supervisor, right?

Leatherman: And it’s okay to say no, right?

Hultquist: It’s okay to say no.

Leatherman: They think they might be talking to the president, the Board of Directors, or somebody important. Right. Often, time is of the essence: I’ve got a big business deal. And they’ve got to know that security is top of mind, I think, for the CEO or the executive running the organization as well. And it’s okay to push back.

Hultquist: Push back, send it to a supervisor. Right. And let them say no. And that’s the other thing: these people should not be in the position to have to make these judgment calls, either. It can go up to the CEO’s level for all we know, you know, to confirm.

But it, the problem, or one of the inherent problems that we recognize very early with a lot of this activity, is call centers are, essentially, incentivized to be helpful, right. After a million years of going to call centers and not getting the help you need and asking to talk to a supervisor, we’ve built systems that essentially incentivize them to be as helpful as possible.

And, unfortunately, that puts them in an awkward position. You know, when it comes to saying no, in a lot of cases, and we have to make sure that they are, you know, able and, you know, willing to do that.

Leatherman: Yeah. That’s a great point. I think in some of these cases, we’ve seen these actors be able to execute at scale through those call centers. Once they get credentials, within 24 to 48 hours they’ve gone from helpdesk to domain admin in some cases. And that, that is a tremendous impact. And I think it demonstrates that we have a lot of needs out there to implement the fundamentals.

Because if they do get credentials, there are still ways to avoid that escalation, right? Simple things, like phishing-resistant multifactor authentication, for example, would in many of these cases mitigate their ability to act with the speed and scale that they’re doing.
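As an illustration of why the phishing-resistant property matters, here is a toy sketch, not any product’s real implementation, of an origin-bound second factor in the style of FIDO2/WebAuthn: the signed response covers the origin the browser actually visited, so a response relayed through a lookalike domain fails verification at the real site. The shared HMAC key is a simplification (real authenticators use public-key signatures), and all names here are illustrative.

```python
import hashlib
import hmac

# Toy model of an origin-bound (FIDO2-style) second factor.
# The authenticator signs BOTH the server challenge and the origin it
# is actually talking to, so a response captured on a phishing domain
# does not verify at the real site. All names are illustrative.

KEY = b"device-secret"  # stays on the user's hardware key

def authenticator_sign(challenge: bytes, origin: str) -> bytes:
    """What the user's security key returns during login."""
    return hmac.new(KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, expected_origin: str) -> bool:
    """The real site checks the response against ITS own origin."""
    expected = hmac.new(KEY, challenge + expected_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = b"nonce-123"

# Legitimate login: the browser reports the real origin.
good = authenticator_sign(challenge, "https://bank.example")
assert server_verify(challenge, good, "https://bank.example")

# Phishing relay: the victim's key signs for the lookalike origin,
# so the relayed response fails at the real site.
relayed = authenticator_sign(challenge, "https://bank-example.help")
assert not server_verify(challenge, relayed, "https://bank.example")
```

An SMS or one-time code, by contrast, carries no binding to the origin, so a victim can be talked into reading it to an attacker who replays it at the real site.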

Hultquist: And I’ll tell you, the one thing that we’ve learned when it comes to multifactor, I think the biggest lesson, is their ability to burn through SMS-based multifactor. Right. This is important to think about. These guys are almost telephone natives, if you want to think about it that way.

Right. That’s their space; they’re telecom natives. Right. They understand how to move through the telecom BPO [business process outsourcing] world as a way to get into targets. They understand how to socially engineer people over the phone, and they understand how to clone your SIM, or whatever it is, to get your SMS-based second factor.

So, if you’re really serious about that, it’s probably time to move on from SMS. Of course, they can also do, like we see, the push-notification attacks and stuff like that, but it’s still a better way. I think one of the big lessons that comes out of this is that the security we thought we had in SMS two-factor …

It’s just not, it’s just not the case anymore.

Leatherman: Yeah. Threat actors evolve. These guys have certainly evolved to SIM swapping in different ways to be able to socially engineer PINs or SMS texts in order to get in, or you mentioned, you know, push notification fatigue. That they’re very good at trying to social engineer this. And that kind of goes to the point that, like these actors, it’s kind of a combination of targeting technical vulnerabilities and human vulnerabilities.

We have to look at both of those, I think in concert. And what’s your view on that? Because sometimes organizations look at, “How do I defend my networks from a technical perspective?” But they don’t look at, “How do I train my workforce? Or …

Hultquist: Yeah.

Leatherman: “… mitigate threats to my workforce from the human perspective, or vice versa?” They put it all on the workforce, because the technical implementations are complex and expensive. And so, they try to avoid doing that and focus on the human. There’s a balance, I think. Right. And I’m sure to an extent it’s organization to organization. You’re making those decisions.

But what’s your sense on kind of where we’re focusing and is the focus right? How do we balance that?

Hultquist: You know, one of the ways I’ve been thinking about it is, especially with AI coming online, and agents, right: there are our sort of deterministic security measures. Right. Things that are hard-and-fast rules, you know, that aren’t as easily, you know, associated with social engineering, for instance.

Right. And you know, when I think about the sort of agentic problem or like the agentic security problem, it’s very much like a social engineering problem, right? This is like a person that can be convinced to do things that maybe they would not, would not otherwise do. It has the authority potentially, or the ability to do that, if not the authority. Right.

And I think about, you know, I joined the Army Reserve. They had a program when you’re 17 years old. They used to have this program where you can go to boot camp between your junior and senior year of high school, I was so excited about being in the military.

Leatherman: Oh, that’s how you spent your summer vacation. A lot different than me.

Hultquist: I spent my summer vacation in boot camp, you know, at Fort Knox. The day after basic training, the next morning, I was a senior in high school. And then I went back, and then of course, Afghanistan, Iraq, and all that. So, very early, my experience is enlisted, you know, bottom-rung enlisted, but it’s a world where they really understand this sort of non-deterministic security world.

Right. You have, like, sergeants, right? They give you a set of rules, like your general orders. Right. And they expect you to do your best, but they also don’t expect you to get everything right. Right? That’s why sergeants are there. That’s why there’s an entire hierarchy, to make sure that the people there are doing the things that they’re supposed to be doing, right.

And I think we have to take some of that very, very old wisdom and apply it to this non-deterministic world. And there’s already thinking about it, even in the agent space. They’re talking about, like, observers or controllers that can essentially track: “Are these things doing the things they’re supposed to be doing? Does it look like they’re out of line?”

You know, we can essentially build a system like that, you know, when you think about it, you know, enlisted, young enlisted people manage our nuclear, like, our nuclear arsenal. Right? But they have a tremendous amount of rules and controls around them that have been well thought out, you know, to keep those systems highly secure.

Leatherman: And safeguards where necessary.

Hultquist: And safeguards. And these are people ...

Leatherman: To kind of understand when they’re making decisions that they shouldn’t be making.

Hultquist: Absolutely. Like the two-person switch. Right? Those sorts of things. Those are all built around non-deterministic, you know, like systems that they had to put rules around. We’re going to have to essentially do the same thing. Yeah. And it’s possible. Right. It’s absolutely possible. You know, and I keep thinking back to my roots to understand what that would look like.

Leatherman: So, you mentioned AI, so I’m going to bring it up. Let’s help folks understand the difference between where we’re largely at right now, which is artificial intelligence that is generative, versus where I think we’re headed, and to an extent already are, which is agentic artificial intelligence. What does the difference between those look like?

Hultquist: Well, I think, you know, right now, you know, we look at, like, the threat actor, like, what it looks like so far. So far, you know, the threat activity that we’re seeing, you know, it’s been going on for ... actually like generative AI has been around for a while. And one of the very first applications was the ability to make fake content.

Right? Like, I remember there was a website that came out: This Person Does Not Exist. And there was somebody, I can’t remember who, taking side bets on how long it would be before one of these fake people showed up in a fake profile used for social engineering.

And it was, I think, about a week. And the best part is they forgot to remove the watermark, so it said at the bottom, “This person does not exist,” in the actual attack. So, this stuff has been going on. They’ve been using it to fabricate content, and it’s really great for social engineering: pretending to be people, convincing somebody to do something on the phone that they wouldn’t otherwise do because you sound like them.

The area that we’re pushing towards, though, with the additional agentic capability, is essentially automation, right? It’s the ability to automate things. And Anthropic put out this incredible paper where they saw a team out of China doing this stuff, automating the attack. That’s the future that we’re looking at.

You know, I think that’s pretty clear. And some people will say, “Look, who cares? These aren’t necessarily better than a human.” And that’s actually a really good point. There are very creative hackers, and some of these teams will always be more interesting, or better at the edge cases in these incidents, than other players.

And that’s, I think, that’s absolutely true. And I think for the espionage game, you’re like, you’re very worried about being caught. And so, you got to be, you have to sort of exercise a certain amount of control over that, over that intrusion. Right. Otherwise, you could end up getting caught or like, you know, like break out or, by the way, cross a line that you’re not supposed to cross. Right?

Like I’ve seen a thousand intrusions in critical infrastructure. Inevitably, something would break. It probably has broken or whatever. That could be a very serious geopolitical problem. So, there’s like a real, I think there’s some real need for some controls when it comes to that space. Also, you have a very sophisticated player who might be better at dealing with edge cases and creatively thinking that’s harder to automate, right?

Like, I’m in this environment, this is what it looks like, I’ve got to find a creative way to move to the next step. Automated, you want fewer edge cases; you want it to move forward. I do worry; my first concern right now is the criminal environment. Right.

What I’m worried about is that there are actors who do not care about breaking the law, or the geopolitical consequences, or going too fast and getting caught. If you’re doing extortion, you’re going to get caught. Right? Someone’s going to know you got in, because you have to go out and ask them for money.

Leatherman: Unlike APTs [advanced persistent threats], where it’s, how long can we persist in an environment? The criminal threat is loud and proud: we’re meant to be seen, because we’re monetizing this activity. Right? And so.

Hultquist: Absolutely.

Leatherman: We don’t care if we’re loud and how we do it.

Hultquist: So, the disruptive, extortive game is where I really worry about this being leveraged the most. If my game is, I need to move across the network as fast as possible to get to the crown jewels and steal them, or to get in place to deploy malware …

And by the way, I can be the very first person to run on that zero-day that nobody has patched. That’s where I worry the most currently, right? Yeah. Now, there will be benefits to criminal actors, don’t get me wrong. They’ll be doing the same thing for different reasons. They’ll have certain speed bumps that they’ll have to abide by.

There will also be really good actors who don’t have to hire younger, less-skilled actors to do what they need done. They will essentially be 10x-ing themselves through automation. And so, everybody’s going to get … What I am really concerned about now is that we’re moving to the speed game. We are moving into a really fast game.

And I think the reality is, and they’ve already figured this out in the kinetic space, right. In the military space. Is that if you want to respond to that speed, you’re going to have to automate as well.

Leatherman: Yeah.

Hultquist: In the future. It’s fast.

Leatherman: Yeah. To that point. So, I read a report from Unit 42 that said AI cut the time to exfiltration from roughly two days to 25 minutes, and that AI acts approximately 100 times faster than human-driven attacks. And Google’s forecast for 2026 is that threat actor use of AI is expected to transition from the exception to the norm.

So now we’re getting ready to deal with this as a normative thing in cyberspace operations. You mentioned the Anthropic report released in November of last year, in which a Chinese state-sponsored group was able to target 30 total entities. AI executed 80 to 90% of the tactical operations associated with that attack. It targeted the full life cycle of an attack autonomously: recon, vulnerability discovery, exploitation, lateral movement, privilege escalation, the normal things within the kill chain, to include exfiltration.

I think in that report, Anthropic noted that it also encountered physically impossible request rates, meaning the AI was sending requests at such a high speed, thousands of requests, multiple operations per second, that it was really unsustainable, in many cases, for human defense.

Hultquist: Yeah.

Leatherman: So, most incident response plans are built around human speed adversaries. That’s what we’re used to. Right. And so what does preparation in your mind look like when you’re racing against a machine instead?

Hultquist: Well, I think we’re going to have to automate as much of this as possible. And I think that starts with the vulnerability cycle. Right. There are a lot of different areas where we’re going to have to think through this. Right. But the very first thing I worry about, right off the bat, is the vulnerability cycle.

You know, we’ve spent a lot of time deploying these fixes, and that simply cannot happen anymore. Things are either going to have to be taken offline, or they’re going to have to be fixed nearly instantaneously. Right. And I think those are going to be our options moving forward. When these vulnerabilities drop, we need to anticipate that the technology is offline until it can be repaired, which we can theoretically do very quickly with automation.

But for me, that’s the first part of this cycle that seems very clear. I’ll also say, you know, we’ve done our own research at Google when it comes to the vulnerability cycle. And it’s very clear that AI is an incredible tool.

We have an agent called Big Sleep that we’re already using to identify vulnerabilities. We’ve had a lot of great success with this; it’s found dozens so far. And I can tell you that if we are doing it, right, like, we did this. I mean, there’s a lot of brilliant people doing this, don’t get me wrong.

But the idea of it isn’t, you know, original or novel, right? We can expect that they’re doing this elsewhere. Right. And they may be a year or two behind, but they’ll get there. Right? They will figure out a way to leverage this. And so a lot of the conversation now is: how do we look for these things proactively, leveraging the scale and automation of AI?

For us, faster than they do. Right. So it’s not just response; I think we’re going to have to essentially chip away at the code ourselves to move faster. And that’s going to be, you know, a race.

Leatherman: Yeah. And by the way, I applaud Google and Mandiant. You guys put out M-Trends, right? You’re very transparent about what you’re seeing. I applaud Anthropic, OpenAI, Google, and the other core artificial intelligence companies in the U.S. for putting these kinds of reports out, because it helps all of us understand how the threat actor is manifesting their use of artificial intelligence in really concerning ways.

And so our ability to study this and understand it positions us to better defend against it. I’m also of the opinion that a lot of organizations out there are very concerned about how they implement artificial intelligence in defense of their own data and networks. And I think it’s like many other things that we do: We don’t have to apply it to the totality of our digital footprint.

We don’t have to apply it to the totality of the data that we hold. If we can just start to apply it to what you talked about earlier in the podcast, the important stuff, the important data and networks that we have or surrounding the users in our environment who have credentials, who could have incredible impact to our organization.

And we can start to apply defensive AI around that. That’s actually a pretty simple thing to do, because of the ability to use APIs to many of these major artificial intelligence platforms to simply start looking for anomalies or deviations in behavior. Right? And so we don’t have to do it across the entire network. We can start with a core group of systems, internet-facing devices, and user logins that are concerning to us or could have that cascading impact, and apply it just to those use cases.

And we’re going to start to move the needle in being able to defend against, I guess the blast radius of an attack like this.

Hultquist: Yeah. I think there’s a huge opportunity, too. Right. Like, I think we can get into this cycle where we’re talking about the AI negatives, and obviously this is serious; we’re going to have to face these threats, you know, with an open mind.

But, there’s also a tremendous amount of value that we can bring, you know, to our own operations. I think you mentioned anomalies. I think about what, like, you know, the Volt Typhoon stuff and not just Volt Typhoon. So many of these actors have gotten so good at living off the land.

Leatherman: Yeah.

Hultquist: And, like, how do you catch somebody living off the land? You look for anomalies, right? It looks like an average IT person, but they’re operating outside the, you know, Beijing hours, or doing things that they wouldn’t normally do. Right. And that’s hard for a human to go and find.
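The off-hours anomaly idea can be sketched very simply. This is a toy baseline, not how any vendor actually detects living-off-the-land activity; real systems use far richer features, and the sample data, field names, and threshold below are illustrative.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Toy living-off-the-land detector: an account that suddenly operates
# far outside its historical working hours gets flagged for review.
# Events are (user, hour_of_day); a real pipeline would add source IP,
# commands run, data volumes, and many more features.

history = [("it-admin", h) for h in [9, 10, 10, 11, 14, 15, 16, 9, 13, 15]]

baseline = defaultdict(list)
for user, hour in history:
    baseline[user].append(hour)

def is_anomalous(user: str, hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a session whose hour is a z-score outlier for this account."""
    hours = baseline[user]
    mu, sigma = mean(hours), pstdev(hours)
    sigma = max(sigma, 0.5)  # floor, so a tight baseline still tolerates noise
    return abs(hour - mu) / sigma > z_threshold

assert not is_anomalous("it-admin", 11)  # normal working hours
assert is_anomalous("it-admin", 3)       # a 3 a.m. session stands out
```

The point of the sketch is the shape of the problem: the activity itself looks legitimate, so detection has to come from deviation against a learned baseline rather than from a signature.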

But, you know, AI may be the way that we find a lot of these things. So there’s a lot of promise here to open up some of these cases, you know. We’ve already found adversaries using AI in malware, and it’s pretty interesting. They’re using it, for instance, to essentially avoid some of the signature checks and things like that.

But what is really cool is, VirusTotal has an AI integration, and in one of the very first incidents of this that we saw, to my knowledge, the only thing that caught it was that VirusTotal AI integration. It found it and marked it as hostile.

There’s a lot of, like, value that we can bring on a lot of these problems. I’ll tell you, we’re already using it. Obviously, our business is crunching a lot of data. I use it every day to understand things.

Listen, if you’re in the risk management business, this is huge because you’ve got to understand a lot of stuff all the time that I, listen, I forget things constantly, right? Like I have my own context window that is constantly running out. Right. And I have to refresh. And it’s so good at like, here’s what you need to know about this, you know, so you understand it so you can make a better call. And that’s really exciting, too.

Leatherman: Yeah. And to your point, like, if I were in the risk management business and I didn’t understand technology right now, I think the general chat bots that are out there are a great way to start understanding.

Hultquist: What a godsend.

Leatherman: Yeah. Like if you’re on the Board of Directors, or you’re a C-suite executive, or you’re a chief risk officer, and you’re not a CISO [chief information security officer] or somebody with a technology background: understanding what is happening with living off the land, I mean, plug that in and see what it means, and start enabling or empowering conversations with your CISO. “Hey, do we have end-of-life devices? What does our vulnerability management program look like? How can I start having meaningful conversations that help reduce risk?”

These chatbots are a great way to do it. For example, the FBI drops Joint Cybersecurity Advisories all the time with CISA [Cybersecurity and Infrastructure Security Agency] and our domestic and international partners. Two weeks ago, we dropped one on operational technology and how to mitigate risk to it. Drop that report in and ask it what an executive needs to know.

And then, how do I inform conversations with my executives around mitigations and controls we can put in place? That’s going to help you understand that risk. So, use it for benefit, for education, to get smarter on these things that are incredibly complex.

Hultquist: We’ve got a program called NotebookLM. I think it’s widely available. But this CISO the other day was telling me that he uses it. He puts all of his guidance, you know, all the regulations and everything, in one place, and he uses it to query it.

“What are the issues around this and that?” It just gives him an answer, and he can go back to the references or whatever. It gives him a way to store all that stuff.

I’m doing the same thing. There’s so much new AI information coming out every day. I’m starting to just dump it in one place, read what I can, but at least have it where I can go back and reference it.

And it’s super valuable. So, you know, I think for risk managers, there’s a huge advantage here.

Leatherman: Yeah. To your point, I create projects within my artificial intelligence platforms meant for threat intelligence, and I collect it. And then I look to aggregate: What are we seeing across the end-of-life device space? What are we seeing in vulnerability management? You start to see trends by doing that.

So, really powerful stuff. Before we shift into something you just talked about, the PRC [People’s Republic of China] threat and the enabler ecosystem that has supported what we’ve seen over the last 24 months or so with PRC cyber-enabled operations, I want to talk briefly and highlight to the audience that we have launched Operation Winter SHIELD.

It’s a first-of-its-kind campaign in the FBI to really highlight the risk posed by the cyber threat. From February 1 to the end of March, we are highlighting the top 10 things that we see in the FBI throughout our incident response, law enforcement, and intelligence community missions. The top 10 things that adversaries continue to exploit.

For those who work in the cyber defense discipline, this is not going to be a surprise to you, but again, from Fortune 100s to small mom-and-pop shops, we continue to see this. We know that supply chain compromises take an average of 267 days to detect and contain. That’s nearly nine months of adversary access before defenders even know they’re in the environment. There are ways to shorten that gap.

We’ve talked about the fact that the adversary often isn’t using zero-days. They’re not using their most sophisticated tools, because they’re taking the path of least resistance. The path of least resistance involves these 10 areas that we can shore up. And to my earlier point, it’s not about fixing these across the entire environment. We can move the needle by building resilience across the systems that are most at risk to our organizations.

So, this is, in my view, not a wish list. It’s the things that would have made the difference in the cases that FBI teams have worked across our 56 field offices, responding to incidents. And so, I would encourage folks to check out the FBI Cyber Division’s Twitter page and fbi.gov/wintershield, because those mitigations are there.

And for the next two months, we’re talking about a lot of what John and I are talking about today: plugging up some of those fundamental gaps and making the adversary work harder. That’s what imposing costs on the actors from a defensive standpoint looks like.

John, let’s switch over to that corporate enabler ecosystem. I know you have looked at iSoon in the past. Last year, the FBI and DOJ indicted 10 actors: eight employees of iSoon and two MPS [Ministry of Public Security] officers. In the case of Flax Typhoon, a significant PRC-backed campaign, Integrity Technology Group was a company that supported the PRC’s objectives. Even in Salt Typhoon, where we released a joint advisory last year, we traced that activity to three China-based technology firms that were serving the PLA [People’s Liberation Army] and the MSS [Ministry of State Security].

So, from where you sit, is this an ecosystem China built deliberately or something that has kind of organically emerged? And how impactful is it?

Hultquist: Well, without the benefit of the really, really cool intel, I have to recognize that I have a limited view. But I’ll say that there’s a classic, regular cadence of activity that we see globally, where we have historically seen countries set up their own operations through contractors.

Oftentimes, they’re literally pulling guys out of the underground who are nationalist hackers. I can’t tell you how many of the serious threat actors, historically, have been these old-school nationalist hackers who went legit. One day, they left the basement and set up a company website.

Some of them were on LinkedIn, and you see them saying, “We’re offering pentesting services.” Oftentimes they’ll say that the government is one of their clients. Right. And there are a lot of euphemisms around it. But they go legit, and they start becoming a serious tool of the government.

And this is going on in Russia, Iran, and China. We’ve seen it in South Asia and a lot of other places. The private sector is a great place for countries to lean on for this talent, especially when it comes to intelligence operations. One, these companies tend to pay well, so they have access to really good talent. They simplify the operation. But they also keep these operations at arm’s length. Right?

So, a lot of intelligence services want to deny involvement, and that’s harder to do when you’ve got somebody in uniform at the back end. If you think about a lot of the very early indictments that came out of you guys, it was guys in uniform. Right?

They always had their peaked hats on. Right. Less peaked hats these days, and more just kids, right? And I think that’s what we’re going to see more and more, and that’s taking place in China with their ecosystem. Right. And the ecosystem is really interesting.

You can see these guys suing each other. You can see them working; you can see people moving from one org to the next. You can see this crossover between government, academia, military, and the corporate space. It’s all this giant ecosystem, and there’s specialization. There are groups that are clearly really good at the zero-day problem.

They’re building out that pipeline that moves this stuff to other operators. There are probably different levels of contractual obligations. In some cases, you would expect things to be a soup-to-nuts shop, right? You just ask me to do it, and I get it done.

And there are groups that are probably doing very specific parts of the operation. Right?

Leatherman: Yeah. Identifying ways to get into U.S. networks, for example. And then.

Hultquist: Exactly. And then handing it over.

Leatherman: To their intelligence service. Right. So, they have specific goals in many cases. Or, where they have unique zero-days, like you said, they’re leveraging those in places that make sense, not just randomly. There’s a method to that madness, I think.

Hultquist: And I’ll tell you, that’s a really efficient way to do it. And in China, you can see more of these gray … well, they’re just straight-up criminal spaces. We can see connections between those. Some of these initial access brokers are clearly involved; the espionage cases are coming off of that access. That’s always happened in Russia. Right.

And for the same reasons. And I can tell you, when you can go to jail for hacking at any minute, you always owe the government something, right? They’re under duress just by virtue of their chosen profession. And so, they really don’t have an option.

So, if you’re a criminal operating out of those spaces, you can either play ball and be nice, which I think they’re more than happy to do; it’s a good relationship for everybody. Or, if you don’t want to do that … that’s not an ideal scenario for anybody.

So, when I think about the specialization in the criminal space, I think about how early these access brokers emerged and how far that’s gotten, how much the info stealer marketplace has driven so much criminal activity, and what that means for the espionage actors, too. If you are selling these accesses and you get the one that you know the government will be interested in, giving that away to the government, or selling it, is a great play.

And we can expect that, especially with the specialization we’ve seen in the underground. That’s just a great way for the government to come in and pick and choose the pieces that it needs. We’re increasingly seeing that in China, too. But we’ve always seen it in Russia.

And we all see the things happening in the hacktivist space, right? If somebody is patriotic and carrying out attacks anyway, you can dip into that space. I think, increasingly, this is going to be a game played in this foggy middle space of contractors, hacktivists, and cyber criminals, where they’re leveraged as agents of different state actors.

That keeps these things deniable. And it’s honestly just very efficient. Right. Why would you pay to train people and do all that work when these guys are willing to play ball because they don’t have a choice?

Leatherman: Yeah, right. It’s this blended ecosystem we’re seeing, right? Hacktivists, enabling companies, cyber criminals, nation-states, all working together, knowing where their strengths are and handing aspects of CNO [computer network operations] and CNE [computer network exploitation] efforts between themselves in order to facilitate geopolitical aims. I know that’s an area where we continue to see activity.

I know we’re winding down here, but I want to really hit on the shift that we’ve seen over the last 24 months in PRC activity. You’ve described Volt Typhoon historically, as you guys have analyzed it, as a significant threat that we’re likely to continue to see over time: pre-placement on critical infrastructure in the event of some sort of kinetic or wartime operation.

Right. We assess that Volt Typhoon was launched by the PRC in order to impede U.S. military power projection in the Indo-Pacific in the event of some sort of kinetic war related to a Taiwan contingency. We saw similar activity as Russia went into Ukraine. What is it that owners and operators here in the United States need to understand about this?

You know, end-of-life technologies, vulnerability management, really bolstering our credentials and how we defend them. What do folks really need to understand to prevent themselves from being successfully targeted by some of these nation-state actors, like China, like Volt Typhoon? And, by the way, like Salt Typhoon. A lot of folks look at Salt Typhoon as an espionage campaign, which it was.

It was probably the most consequential espionage campaign we’ve seen launched against telecommunications providers. I’m of the opinion that it could have transitioned to destructive attacks as well and could have had tremendous impact on the telco sector. So, how do we start to get our minds around defense against sophisticated adversaries like this?

Hultquist: Well, when it comes to OT [operational technology] and critical infrastructure attacks, I think two lessons are really important. Time is a really, really important factor in understanding this. The first reason is that so much important activity happens before the attack, before the actual contingency, or before the geopolitical situation gets to the point where this contingency gets leveraged. Right.

And that means that the actors have to be way out ahead, right? They have to actually think ahead to the possibility. It also means that nothing could happen. Nothing could happen, right? We do a lot of planning in the military for situations that are unlikely to happen.

But they’re so critical and important that you have to prepare for them anyway.

Leatherman: We do that in law enforcement, too, right? If we’re doing a dynamic SWAT [special weapons and tactics] operation or something, we tabletop it, we walk through it, because it’s unlikely something will happen. But it could. And so, we’ve got to be ready for it.

Hultquist: It’s too important. It’s too important not to be prepared for. And so that means that they have to hack in advance for a war that may never come. But if you’re a defender, that means you are fighting the next cyberwar right now, because your chance to stop them is probably this moment, not in the hours after we realize the geopolitical situation has gone south.

It’s right now. It’s a really strange situation, and you’re forced to think about them like sleeper agents, right? You’re not going to get the sleeper agents after they’ve been living in the United States for 20 years or something like that. You’ve got to get them at the border. Right.

And so, we have to think about it that way. We have to be ready to fight them now and put our resources in that direction.

The other thing that’s really important about a lot of this activity is that it is short term. Generally, with the activity that we see, the targets get back online in a matter of days, oftentimes hours. Right. And that generally has to do with the level of sophistication of the actor and what they’re capable of.

We don’t know a lot about, for instance, China’s capability here. We have not seen their malware ever. Right. And we don’t know how far they could take it or how broken they could leave things.

But, you know what? I’ll tell you something. We’ve seen them at rail heads here. When I was in the Army, they sent me to school to learn how to put my unit’s equipment on rail so we could ship it. And people forget that the Army ships everything by rail, then it gets on a boat, and then, finally, it goes wherever it’s going.

And I think there’s a good chance they want to disrupt that process. But guess what? They’re going to get stuff on that rail. They’re going to operate that train. It’s going to happen. But it’s about that time factor. Right. So, they’re generally trying to have an effect, not to stop you, but to slow you down.

When we looked at the incidents, one of the most interesting ones I saw in the Ukraine context came right at the outset of the invasion: they took down a major satellite provider. The design there was not … they knew that the provider was going to get back online. Their point was to slow or inhibit response to the actual invasion.

So, time is this huge factor. Then, there are two ways you think about it. One, you start defending now. You try to root them out now, which is your best advantage, your best opportunity. And the second important thing is that you need to have a plan to get back online, right?

There is a realistic scenario here where it’s not just about rooting this stuff out. And that’s my job, right? I’ll recognize that’s my responsibility, too. But these are sophisticated actors. Many people will not find them. Right?

We have been through generations, for instance, of Energetic Bear and Berserk Bear getting into critical infrastructure in the United States. Do I believe that we rooted them out of every system? That seems like a very tall order. So, to a certain extent, everybody needs to be focused, I believe, on resilience, on reducing that window. Right.

Or operating in that scenario. Right. And that’s exactly what you’re talking about: training, preparing for the attack. Can you get an analog system running, and can you operate without those things? I remember back in the day, 10 or 20 years ago, there was all this talk about the Navy learning how to use sextants again, you know, the admiral up on the bow.

Leatherman: I’m a pilot, and we still learn how to fly not necessarily by GPS or what we normally use; we still learn the backups. Right? We still use the old analog technologies in case we have to navigate by non-digital means.

Hultquist: I think being prepared for those contingencies is just a huge piece of this, because they are very sophisticated. I’m not saying that we give up the fight to hunt them down; we absolutely have to do that. But I think we also need to be prepared to limit that window of opportunity for them. Right. Can we do that?

And I’ll tell you, one of the incredible things we’ve seen from Ukraine is that they have gotten things back online so fast that, in some cases, nobody even recognized something had happened until they told everybody after the fact. And I hope that we can have as much of that success ourselves.

Leatherman: Yeah. Well, I think there are a couple of things that are really interesting here. Number one is your charge to all defenders that we’re all on the front lines here and that resilience is key. It’s not about zero-breach scenarios. It’s about fighting through breach. Right. Being able to get back online and being resilient through that. And that’s okay.

But this takes all of us fighting against these adversaries. Just as the adversaries are using a whole-of-society approach to attack us, we have to use a whole-of-society approach to defend ourselves. So, I think that is a charge to all of us, and a great way to leave this, John, is that this is all of our job.

We’ve covered a lot of ground today, John. I think we could have talked for hours. I’m looking forward to hopefully having you back on here, because there’s a lot we didn’t cover that I was prepared to, but today we covered credentials sitting in criminal markets for years and still being used by the adversary. China’s industrialized hacking system. Volt Typhoon pre-positioning for chaos. AI-accelerated attacks faster than most defenders can respond.

The ransomware economics that keep working despite the FBI’s disruptions. And, of course, Scattered Spider moving sector by sector, proving that sometimes the biggest threat is a phone call to your help desk, not the technology itself. Right?

Hultquist: Absolutely.

Leatherman: So, I’ll let you get back to your day job of hunting bad guys. Thank you so much for the work that you and your team do to defend all of us in cyberspace, John.

And then to the audience, thanks for joining us today. You’ve heard how nation-states and criminals are targeting the networks and data that matter most. The through line in everything we discussed is that we can’t do this alone. The intelligence that drives the FBI’s disruptions against the adversary, the early warnings that let organizations get ahead of the threats, the visibility into what adversaries are actually doing: all of that comes from partnerships between government and the private sector, between the FBI and companies like Google. All of us working together, partnerships on the cyber front lines, are what will keep us ahead of the threat.

I’m Brett Leatherman, head of the FBI Cyber Division. Thanks for listening, and I’ll talk to you next time.