Talkin' Bout [Infosec] News

This episode dives into Anthropic’s “Project Glasswing” and the broader implications of AI-driven offensive security, including models autonomously discovering vulnerabilities and attempting sandbox escapes. The hosts discuss how agentic AI testing approaches could reshape vulnerability research, while also raising concerns about AI safety, regulation, and real-world risk. Additional topics include the growing impact of AI on security workflows, rising infrastructure costs tied to AI demand, a new infostealer ecosystem overview, and ongoing debates about data collection practices and platform privacy.


Join us LIVE on Mondays, 4:30pm EST.
A weekly podcast with BHIS and Friends. We discuss notable infosec and infosec-adjacent news stories gathered by our community news team.
https://www.youtube.com/@BlackHillsInformationSecurity

Chat with us on Discord! -
https://discord.gg/bhis
🔴live-chat


Chapters
  • (00:00) - PreShow Banter™ — A Real Studio
  • (03:43) - Anthropic’s Project Glasswing is an Infosec Turning Point – 2026-04-13
  • (05:39) - Story # 1: Project Glasswing
  • (22:20) - Story # 2: AI-Led Remediation Crisis Prompts HackerOne to Pause Bug Bounties
  • (30:36) - Story # 3: Disgruntled researcher leaks “BlueHammer” Windows zero-day exploit
  • (32:39) - WEBCAST: Proxy Execution with Microsoft Edge WebView2 w/ Matthew Eidelberg
  • (51:47) - Story # 4: New "BrowserGate" report claims LinkedIn secretly scans user browsers for installed extensions and collects device data
  • (56:32) - Story # 5: The silent “Storm”: New infostealer hijacks sessions, decrypts server-side
  • (58:46) - ChickenSec: the Chicken Accords of 2026
  • (01:00:27) - Story # 6: EFF is Leaving X
  • (01:03:01) - Workshop: How to Think Like a Cybersecurity Defender
  • (01:05:49) - AI Security Ops Podcast





🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits 
https://poweredbybhis.com

Brought to you by:
Black Hills Information Security 
https://www.blackhillsinfosec.com

Antisyphon Training
https://www.antisyphontraining.com/

Active Countermeasures
https://www.activecountermeasures.com

Wild West Hackin Fest
https://wildwesthackinfest.com

Creators and Guests

Host
Bronwen Aker
Bronwen Aker is a BHIS Technical Editor who joined full-time in 2022 after years of contract work. She brings decades of web development and technical training experience to editing pentest reports, enhancing QA/QC processes, and improving public websites. Outside of work, she enjoys sci-fi/fantasy, Animal Crossing, and dogs.
Host
Corey Ham
Corey Ham has been with Black Hills Information Security (BHIS) since 2021 delivering red teaming and OSINT services. Currently, Corey leads the ANTISOC team at BHIS, providing subscription-based continuous red teaming to BHIS clients. Outside of his time at BHIS, you can find him out in the woods or up on a mountain somewhere.
Host
John Strand
John Strand has both consulted and taught hundreds of organizations in the areas of security, regulatory compliance, and penetration testing. He is a coveted speaker and much loved SANS teacher. John is a contributor to the industry-shaping Penetration Testing Execution Standard and 20 Critical Controls frameworks.
Host
Ralph May
Ralph is a U.S. Army veteran and former DoD contractor who supported the United States Special Operations Command (USSOCOM) with information security challenges and threat actor simulations. Over the past decade, he has provided offensive security services at Optiv Security and Black Hills Information Security (BHIS) across various industries. His expertise spans network, physical, and wireless penetration testing, social engineering, and advanced adversarial emulation through red and purple team assessments. Ralph has developed several tools, including Bitor (set to release in January 2025) and Warhorse, which enhance efficiency in penetration testing infrastructure and operations. He has spoken at numerous conferences, including DEF CON, Black Hat, Hack Miami, B-Sides Tampa, and Hack Space Con.
Host
Wade Wells
Wade Wells has been working in cybersecurity for a decade, focusing on detection engineering, threat intelligence, and defensive operations. Wade currently works as a Lead Detection Engineer at 1Password, where he helps build and mature scalable detection programs. Outside of his day-to-day work, Wade is deeply involved in the security community through teaching, mentoring, podcasting, and running local events.
Guest
Alex Minster "Belouve"
Alex Minster is a cybersecurity professional with a passion for Open-Source Intelligence (OSINT) and a desire to use his technical skills to make a meaningful impact on society. With nearly twenty years of experience in cybersecurity, and a current role in Threat Intelligence for a global financial corporation, Alex remains very active in numerous cybersecurity groups including DC608 and Black Hills Information Security. Beyond his professional accomplishments, Alex is an avid old-school gamer who enjoys arcades, retro gaming, and tabletop games. He brings his passion for adventure and his commitment to helping others to everything he does, both in and out of his professional career.
Guest
Doc Blackburn
Doc Blackburn is a seasoned (old) cybersecurity instructor with decades of experience in IT, security, and compliance. Over his career, he has worked in many areas of IT, including systems administration, programming, network design, cloud services, web development, and risk management, bringing a broad technical foundation to his teaching. For more than 13 years, Doc has trained students and professionals to understand, implement, and maintain effective security practices, drawing on real-world consulting experience in compliance frameworks such as NIST SP 800-171, CIS Critical Controls, and MITRE ATT&CK. Known for making complex concepts accessible to all audiences, he blends technical depth with practical insights, preparing learners to address today’s evolving cyber threats.

What is Talkin' Bout [Infosec] News?

A weekly podcast with BHIS and Friends. We discuss notable infosec and infosec-adjacent news stories gathered by our community news team.
Join us live on YouTube, Mondays at 4:30 PM ET.

Corey Ham:

I know. You look like you're in, like, a real studio. Where's the... is this just a really elaborate closet or what?

John Strand:

No. I'm actually in the Baltimore office. Wow.

Wade Wells:

Nice. Surprised you don't recognize the background, Corey. Gosh.

John Strand:

The camera that they use is this one. Let me see if I can get this to go.

Corey Ham:

I can't... you act like... I'm a goldfish, dude. I have zero short-term memory or long-term memory.

Wade Wells:

What? I have persistence? No.

Corey Ham:

I'm just a bunch of markdown files thrown into a directory. Okay?

Wade Wells:

That'd be a great username.

Ralph May:

Described every AI.

Corey Ham:

Three markdown files in a trench coat. That's an LLM now.

John Strand:

Oh, Ralph. Yes. Do you wanna come up to co-teach with me for the satellite hacking class in Deadwood?

Ralph May:

Yeah. I thought that was already the plan, John.

John Strand:

All right. Cool. I just wanted to make sure that you were on, because I wanna put you down as a co-teacher.

Ralph May:

Yeah. As discussed.

Corey Ham:

Thought that

Ralph May:

was already the plan. Yeah.

John Strand:

All right. Sounds good.

Wade Wells:

All right. None of you have access to Claude Mythos, so get off

Corey Ham:

the podcast.

John Strand:

Woah. Slow your roll there, Boutko.

Corey Ham:

Listen. Guys, guys. Okay. I have access to it. I found a ton of stuff.

Corey Ham:

I can't share what it is. It's really cool. Please buy my product.

John Strand:

There you

Corey Ham:

go. There you go.

Ralph May:

Well, it's not that hard. All you do is just ask them and say you're a researcher. Got them.

Corey Ham:

Dude, I prompt injected Anthropic. Worked perfectly.

Ralph May:

I mean, I've been finding 0-days. I just can't talk about them yet.

Corey Ham:

Dude, I cannot talk about them. Also, all of them, all the 0-days, are of legal drinking age.

John Strand:

I think what we need to do is we need to start an AI competition that finds zero days in, like, Windows 95.

Ralph May:

Yeah. Yeah. Just like the

Wade Wells:

Oh, I love it. I love it.

John Strand:

Let's just crush it. And then at the end of it, we can find out how many of those work in Windows 11.

Bronwen Aker:

I'm afraid it would be a scary percentage.

Corey Ham:

Yeah. Someone needs to make an LLM that has a knowledge cutoff as of, like, June 2001 or something.

Wade Wells:

It's like Y2K... Y2K bot. It's like, as of my knowledge cutoff, I don't know what an iPhone is, but I think

Corey Ham:

you could call someone on the landline, and that would work.

Bronwen Aker:

I had to give up my landline. What?

John Strand:

It's tough.

Bronwen Aker:

But why? I live in the mountains, and in town there's, unfortunately, an active homeless population and a bunch of tweakers, and the tweakers keep stealing the copper lines that carry phone service. And so

Corey Ham:

It does taste the best. I get it. Yeah.

Bronwen Aker:

I got tired of paying better than $100 a month for service that I wasn't getting, and the phone company, for some reason, had trouble getting copper wire shipped over. And, anyway, so I found the

Corey Ham:

I think it was because of all the prank calls you were making, Bronwen. I think that's what it was.

Alex Minster:

Yeah. I would love to suggest certain surveillance cameras are full of copper.

Corey Ham:

Yeah. Probably takes care of itself.

John Strand:

So, Alex, I got a question for you. I got a question for everybody. Like, what cameras should I replace my Ring cameras with? Like UniFi.

Corey Ham:

UniFi, dude. UniFi. UniFi.

Wade Wells:

UniFi. You're talking about me getting a UniFi doorbell, but that'll

Corey Ham:

sound cool. Watch, the second we end this podcast, it's gonna be like, UniFi breached in the world's largest breach. Oh my god.

Ralph May:

Alright. Oh my god.

Corey Ham:

The finger. Let's do this. We got enough. We got critical mass. Let's go.

Corey Ham:

Hello, and welcome to Black Hills Information Security's Talkin' Bout News. It's April 13. Sir, this is a Wendy's. 2006? We don't have cell phones. We don't have AI.

Corey Ham:

We don't have anything. That's not true. It is. It's the Mythos week. It's the Mythos week.

Ralph May:

It's the myth of

Corey Ham:

Everyone's out.

Wade Wells:

It hasn't even been a week yet. It hasn't even been a week yet. It's still

Corey Ham:

I know. This is the week where we talk about Mythos.

John Strand:

I don't know. Was it released by the time last week's show started? I can't remember.

Wade Wells:

No. We missed it. No.

Wade Wells:

It's been

John Strand:

Dude, it feels like it's been a year already. I've had, like,

Corey Ham:

a lot of weird... so many questions from CISOs. Alright. Let me introduce everyone since I kinda skipped that. We got me, Corey Ham. I run the continuous pen testing team here at Black Hills Infosec.

Corey Ham:

We got Bronwen, who you'd be incredibly lucky to get a prank phone call from. That would be a lifetime achievement. We've got Wade, who has two passwords. We have Alex Belouve, who we haven't seen in a while, and his background kinda matches the Zoom background, so kudos

John Strand:

It's got a good calming vibe.

Corey Ham:

We've got Doc, who spoke the fundamental truths of cybersecurity, and I missed it, so I don't actually know anything, and you shouldn't listen to me. And we've got John Strand, who has flown across the country to be on the podcast, from what we understand.

John Strand:

Yep. Exactly.

Corey Ham:

Very good. And then lastly, but not least, we got Ralph Gator Hunting. He would be the first one to get access to Mythos, and then accidentally use it to do something it was never designed to do.

Ralph May:

I agree with that. I can neither confirm nor deny that's already happened.

Corey Ham:

Yeah. So Mythos, what is it? Basically, from my understanding, this is... okay. And I do wanna take this with a standard dose of

Ralph May:

skepticism. This is the most amazing thing that's ever happened to anyone ever, period. End of story. Yes.

Corey Ham:

Okay. So this is basically last week, right after we ended the show, because the second we end the show, crazy news always happens. Anthropic, who everyone should know, but they're the people who make Claude, published this blog post called Project Glasswing: Securing Critical Software for the AI Era. And in that blog, they talk about a new model that they built, a successor to Opus and Sonnet, that they're calling Mythos. And it's, as you'd expect, a heavy, you know, marketing blog post.

Corey Ham:

The claim they're making is essentially that they have this AI model that can just find a zero day in any software, more or less. In the blog, they talk specifically about, what is it, a sixteen-year-old bug in OpenBSD that they found, and that they found zero days in Windows, and in Cisco products, and in CrowdStrike, or I don't know, whatever their partners are.

Ralph May:

Everything they threw at it.

Corey Ham:

Yeah. So essentially, this made everyone freak out. I'm sure John Strand got like 16 panicked phone calls from CISOs being like, okay. The reason everyone's freaking out is because of the things that we assumed were true in cybersecurity, which is that some software is secure, and Anthropic's basically saying that isn't true. No software is secure.

Corey Ham:

Oh. Like, John, what kinds of panic phone calls did you get? Like, is it like, we need this, or is it like, how do I turn this off? Like, what what kind of reactions are you getting from people?

John Strand:

So one of the reactions that I received was, so pen testing's dead. Right?

Corey Ham:

Right.

John Strand:

That was kind of fun. I said, not yet.

Ralph May:

Not yet. We got another year. To be honest,

John Strand:

A lot of the conversations, I think, have been really good. And what I mean by that is a number of the different companies that we've been working with at BHIS are very much kind of the sharp end of the pointy stick.

John Strand:

Right? Like, they have good patching. They've been keeping up on everything, and there's not a lot of vulnerabilities that are sliding under the wire that they're ignoring. Like, oh, we're gonna ignore everything below 7.8, and we're not gonna fix it. And that's fine.

John Strand:

Right? Where I'm starting to see panic are the organizations that are like, we aren't gonna patch a damn thing unless there's a public exploit for it. They're panicking. And good. They should be.

John Strand:

And my take on this is if that is your approach, we don't patch anything until there's an exploit. There's some vendors that that's their whole modus operandi, the way they talk about things. It's like, we're gonna do automated pen testing, so you only have to fix the things that exploits exist for. Well, can we just now assume that the exploits exist? And I think that that is a safe assumption.

John Strand:

And so the conversations that are coming out of this, and I'm sorry, this is a longer answer than what you were asking, Corey, but the conversations that are coming out of this are: what happens whenever this type of technology is in the hands of anybody that has any frontier models? And I think that that is a true statement. The CEO of Anthropic basically said that, no,

John Strand:

They're not gonna keep it under wraps. This is coming whether you like it or not. And I agree with them on that. But we're gonna end up in another situation where the vulnerabilities outpace organizations' ability to patch them. But thankfully, the entire industry, whenever we started using AI for defense, they basically didn't use AI for defense as a way of reducing costs.

John Strand:

They used it as a way to further enable their analysts to be more effective in what they were doing, and they hired up to make sure that they were ready for a situation like this rather than down staffing. No. Wait.

Corey Ham:

Oh, jeez.

John Strand:

Exact opposite of that. Yeah. So we're screwed. And I do believe that there is definitely some hype associated with it, but there is absolutely something to be gleaned from this that should scare the shit out of you. If you're a CISO, once again, if you think that you can ignore CVSS scores below a certain value or CVEs, then you're in for a rude awakening.

John Strand:

Because now the ability for people to actually work on those exploits is no longer a very elite few. Prometheus has brought the fire to a number of people, or at least you can see it coming down the side of the mountain.

Corey Ham:

Yeah. So it's a Jane's

John Strand:

Addiction reference.

Corey Ham:

I am. There's a couple of interesting side points to this. One important side point is that there's also the Twitterverse, which is very upset with Claude right now for separate reasons. High-profile individuals like Dave Kennedy have basically gone on record and said, I think Claude is getting worse. I'm getting worse results

John Strand:

They're moving back over to what, Xcode, or, yeah,

Corey Ham:

They're moving to Codex, specifically. Codex. Codex. And basically, the kind of writing on the wall is: citation needed. There's a lot of claims in this blog post, and a lot of people are speculating that this is just generating hype for Anthropic, who might be kinda falling out of the public perception as being the best AI model, and they're kind of like, oh, no.

Corey Ham:

Release a blog about us finding zero days. I don't know.

John Strand:

I'm gonna push back on that, and on people saying we need citation needed. Look who they're partnered with. Can we bring up the article? I know it's like JPMorgan Chase, the Linux Foundation.

Corey Ham:

There's Microsoft. Palo Alto Networks.

John Strand:

Palo Alto Networks. There's AWS.

Ralph May:

There were callouts specifically about these vulnerabilities from the organizations that had them reported. Right? They weren't just saying, hey, we found some stuff, I don't know what it is.

Ralph May:

They were literally acknowledging those vulnerabilities.

John Strand:

So history doesn't repeat, but it rhymes. This is very similar to the Dan Kaminsky thing. When Dan Kaminsky came up with the vulnerability for DNS that allowed him to exploit any DNS server except for djbdns servers, which actually randomized both the IP IDs and the source ports, there was a large number of companies that he was coordinating with directly, and there was a lot of companies that weren't. And the companies that weren't were like, like AT&T, I think, was one of them. I'm not sure.

John Strand:

Don't quote me on that. They were like, we don't patch things because Dan Kaminsky tells us to. Right? And I feel like this is that type of scenario. And if you're looking at this as just purely a marketing play by Anthropic, you're missing the larger point.

John Strand:

The larger point is these models in very short order are going to be able to do this. This is not a lie. This is real. This happened.

Corey Ham:

Yes.

John Strand:

Right now, it's as bad as it's ever going to be moving forward into the future. The capabilities are just gonna get better and better and better, and the vulnerabilities are gonna be coming faster. Now whether we're gonna use Codex or Claude or whether you're gonna use OpenClaw or anything else out of that, it's irrelevant. It doesn't matter. The news story should not be Anthropic.

John Strand:

The news story should be the capability because that's what people need to focus on.

Corey Ham:

Yes. That's a super good point. I think part of the reason this caused so much panic in the public response, and I've personally had multiple clients reach out to me, is that this was basically the point at which CISOs and other executives had to acknowledge the elephant in the room.

John Strand:

Yes.

Corey Ham:

At this point, AI is coming, and you can't stop it. Before, you could say, well, you know, it hasn't found any zero days, but you can't really say that anymore. If it can go out and find a vulnerability in current versions of software, or every current version of software, you have to do something about it. And if you were passing on AI until now, you have to catch up. Good luck.

Corey Ham:

And listen. Okay.

Wade Wells:

Oh, I wanna talk a little bit about the blue team perspective, though, too, around it. Right? For red team to adopt these AI agents, it's a lot easier. You just point it at them. Good to go.

Wade Wells:

Mhmm. With blue team, we actually have to build knowledge bases. We have to adopt it. We have to put in the automation, runbooks,

Corey Ham:

and stuff like that. Right? It does it for you.

Wade Wells:

Yeah. Right. It actually takes a decent amount of time to understand and to go build it out. So if your company is helping the blue team push with AI, yeah, you're gonna be able to keep up, hopefully. But if you're not, this is gonna make you just

Corey Ham:

Wade.

Wade Wells:

have a sad time.

John Strand:

Wade. Wade.

Wade Wells:

Wade. Wade. What? What, John? Wade.

Wade Wells:

Come to

John Strand:

the red side. We've got cookies.

Wade Wells:

Nah. They've been saying it for years. I will still argue there is no red team. There's only blue team. That's true.

John Strand:

So I wanna talk about the blue team aspect a little bit, and I'd love to get Doc's take on this as well. Actually, I'm gonna shut up. The person I have not had a chance to talk to in a week, who I wanted to talk to the most because it's been a panicked week, is Bronwen. And, Bronwen, I'd like to get your take on this.

Bronwen Aker:

Well, okay. There are several things. One, I'm glad that we're getting more traction on AI doing stuff for the blue teams, because the focus and the emphasis has been on red team development. Or maybe my perspective is skewed because that's what we do at BHIS: a lot of red team. We do blue team as well, and purple team. But with so many penetration testers, there's a lot of emphasis on the red team applications.

Bronwen Aker:

So the fact that any of the frontier developers are placing a focus on something to help fight against the exploitation, I think that's wonderful. And so I applaud the, at least on the surface, sentiments of Project Glasswing. I also think some of the hype was because they said, yeah, we're gonna release this.

Bronwen Aker:

Oh, no. This is too powerful. We're pulling this back. And I think that part of the reason for the panicking is seeing that sudden reversal in direction that Anthropic did when it came to releasing Mythos. So that's part of it.

Bronwen Aker:

But other than that, everything that I've heard other people in the room saying, I agree with. It's that same double take that I'm going through over and over again. Oh, that's cool. Oh, that hurts. And with the AI, it happens with everything because it is shiny.

Bronwen Aker:

It is wonderful, and it's also terrifying. And if we can can leverage things like Mythos or anything else to help make that playing field more level for blue teamers, for defenders, for software developers, I think that's a good thing.

John Strand:

So I wanna throw a take on the table, and I'd like to get Doc to take a poke at this one out of the gate. Blue teams are screwed. And the reason why I say blue teams are screwed, Wade's laughing, is, like I said, the past two years, really accelerating the last year, everyone's been trying to not hire junior people. They've been trying to down staff and cut their costs in their security operations center, and they've been trying to use AI to do more with less, to do what they're doing but cheaper.

John Strand:

Right? And now we're in a situation where the entire game has changed. And my question that I would like to put to everybody is: the game of, like, wait for a patch to be released and then fix it, those days are over. And I really think that we're at a situation now where your security support structure cannot just be we're gonna wait for a patch and then push it out, and vuln management is patch management. Vuln management is no longer just patch management and configuration management.

John Strand:

Vuln management is now compensating controls. And I think security engineering is more paramount and more needed now than it's ever been before, because you're literally going to have a pen test report where there's gonna be a vulnerability. They exploited it, and that exploit either exists publicly or they were able to find it with an AI model. And a patch does not exist or will not exist because the vendor doesn't exist anymore or they can't fix it. And the whole security architecture thing is changing.

John Strand:

And, Doc, you know, you've been teaching this stuff as long as I have. I'd like to get your take on that because security architecture just changed.

Doc Blackburn:

I'm glad that you called me out, because I was texting Bronwen late last week saying that I would love to have the opportunity during this newscast to argue with John, and then John unknowingly just invites me right into it. So

John Strand:

We're gonna do a complete webcast on the OSI model. That remains to be seen, but your demise will be swift and relatively painless. But other than that, go ahead.

Doc Blackburn:

I'm gonna fundamentally argue with you right out of the box and say, there's nothing to see here. This is a total nothing burger. And the reason that I say that is that this is the same shit, different day. We freaked out when vulnerability scanners automated the ability to find our worst nightmares and our dirty secrets, you know, our configurations and our installations. And I know that this is different than that, but at the same time, it's not.

Doc Blackburn:

Now speaking of arguing with people, I'm also gonna argue with Bronwen. This is actually this is a new conversation for you guys, but this is an old argument for for me and her. Security is always reactive. Our defenses are always reactive. The reason that we're unprepared for this moment is because this moment didn't happen until now.

Doc Blackburn:

And so therefore, we need to accept the fact that these things are going to happen, and they have over and over again, and we continue to think that things are gonna be different in our industry. And, of course, I'm gonna take this moment for just a second to plug that book. By the way, for those of you who were here last week, there was this groundbreaking announcement that me and Bronwen and Mark Williams are writing a book, and the working title is Security Isn't What You Do. And now you guys are all going to get the full story here. John Strand, I don't know if he remembers, but John has graciously said yes to writing the foreword to that book.

Corey Ham:

I hope you remember that, John. Don't worry. We'll just have Claude do it. It's fine.

John Strand:

So, but I have a question, you know, kind of pushing back on this. I do agree that it's the same shit, but I think the amount matters.

Corey Ham:

Right? So... perfect. Sliding away. Yeah.

John Strand:

So my thesis is: you cannot look at patching and configuration management as your sole security strategy for dealing with vulnerabilities. Do you agree with that statement? And I believe it should have always had other things. But so many organizations are like, we only fix vulnerabilities that have a CVSS score of 7.5 or higher. And I think right now, today, you can go in with a marker on any of your vulnerabilities, and you can add one to every single CVSS score that you have in your organization.

John Strand:

Right now, today

Corey Ham:

It goes to 11 now. Woo.

John Strand:

You can call it the John bonus.

Corey Ham:

You can

John Strand:

call it the John bonus. Apparently, I'm an infosec thought leader, and I have that type of sway. But the CVSS scores need to be modified, because the likelihood of exploitation just went up substantially. It's no longer a very elite group of people that have these skills. That skill level just dropped lower, and it has now brought this capability to more people.
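[Editor's note: the "John bonus" discussed above is just a re-ranking heuristic: bump every severity score by one and re-check your fix threshold. A minimal sketch in Python, assuming CVSS-style scores capped at 10.0; the scores and the 7.5 policy threshold below are hypothetical, not from the episode.]

```python
# Illustrative sketch of the "John bonus": bump every severity score by one
# to reflect a higher likelihood of exploitation, capping at the CVSS
# maximum of 10.0. All numbers here are made up for illustration.
def john_bonus(scores, bump=1.0, cap=10.0):
    return [min(s + bump, cap) for s in scores]

findings = [4.5, 6.5, 7.5, 9.8]      # hypothetical scores from a report
adjusted = john_bonus(findings)       # -> [5.5, 7.5, 8.5, 10.0]

# A 6.5 an org used to ignore under a "fix only >= 7.5" policy now
# crosses the threshold after the bump.
newly_actionable = [s for s in adjusted if s >= 7.5]
```

[The point of the sketch: any fixed CVSS cutoff silently reclassifies a band of findings once exploitation gets cheaper.]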

Corey Ham:

So I wanna segue. First of all, I wanna plug: if you're interested in this whole AI stuff, you should listen to the AI Security Ops Podcast, which is a separate podcast that we have with Alex and Bronwen and other people. But I think the healthy dose of skepticism that we need is about to come in the form of this article on Dark Reading. Basically, the article is about HackerOne, which if you don't know, is the biggest bug bounty company out there. They facilitate connections between bug bounty hunters and, like, the companies that they're allowed to hack. They suspended submissions last week.

Corey Ham:

They basically said no more submissions. And the reason why, they claim, is there's basically two fundamental problems. One is AI slop. They're getting a huge amount of submissions. The number they give in the blog is that only five to ten percent of submissions are actually valid and real.

Corey Ham:

And the number apparently about a year ago was 15%. So it's dropped by as much as ten points due to AI slop. The other thing that's serious, and this is kinda what John's talking about here, is the developers of these projects and the companies that have bug bounty programs cannot fix this stuff. It's a pile. It's a huge pile of, yeah,

Corey Ham:

They can't fix it fast enough. There's a huge vulnerability pile that no one's fixing, because they just don't have the resources to do so. And that basically means you have an imbalance of power, like John's saying. The red team right now ironically has all the power, because the blue team is trying to figure out how to fix all the stuff that the red team's uncovering, and that's creating this dynamic that HackerOne is reacting to by saying, like, okay, we're turning this off.

Corey Ham:

We're gonna process the submissions that we already had, try to get those covered, and then maybe we'll open things up in the future once we figure out what to do. But it's, I mean, it's also just super labor intensive for them as the middleman between these two entities to try to be like, alright, AI, analyze this, tell me if it's real.

John Strand:

Do you think that this is gonna last? Do you guys think that this is kind of the end of bug bounty programs?

Corey Ham:

I think it's the end. I hope not, though.

Bronwen Aker:

There was another aspect of the article that hit me. I knew about it, but for some reason, it just hit me more strongly, and that is that we have bug bounty programs. We don't have anything to reward remediation. Oh. We don't have anything in place to reward people when they fix the bugs Yes.

Bronwen Aker:

That have been found in these bug bounty programs, and that creates a huge imbalance as well. As mentioned before, yes, we have more AI slop coming in. It's harder on the people who are trying to assess these and triage which ones are valid and which ones are junk. And yet there is also this preexisting component, in that the people who actually fix the bugs detected, they don't get anything. There's no reward system for that, and that needs to change.

Corey Ham:

Alright. I'm gonna go register fixer1.com, Bronwen. We'll do a start up.

John Strand:

Yeah. Ralph had a take too. I wanna get

Ralph May:

back to that. I think that the bug bounty program as we know it now is dead, in the sense of people paying for others to find these things. A good example of this, like the canary in the coal mine, I think it was the OpenSSL project. One of the bigger projects has stopped paying for bug bounties entirely. Right? They got too much.

Ralph May:

It was a HackerOne example, and the money essentially turned it into this gamified piece, and with AI, you could throw enough credits at it to get into something. And so I personally think the way we know of it now, public bug bounties are not going to be a thing anymore. The financial incentive's just all thrown off.

Corey Ham:

I kind of agree; your reasoning is sound, but I kind of hope that it doesn't go that way. Because the one thing that bug bounty programs facilitate that we don't have a replacement for is some kind of a dialogue between security researchers and companies. Yeah. Like, that needs to exist.

Corey Ham:

I don't know who's gonna pay for it.

Ralph May:

That's kind of the big

John Strand:

But I think that dialogue has been exclusively, there's a vulnerability, pay me. And when we're talking about continuous pentesting and a lot of the pentesting that we do with BHIS, we couple it with our expert decision support as well. And we've talked about, Corey, in our continuous pentesting practice, with a lot of these companies, it isn't, we hacked you. You suck.

John Strand:

We rule. It's like, no. We have a relationship with them. And we have conversations, and there's remediation conversations, mitigation conversations. And I'd like to think if okay.

John Strand:

So if you're if you're watching this and you're with HackerOne or BugCrowd, that's what you need to be doing. Right? You need to be focusing on, okay. Here's how you can remediate this. Here's how you can mitigate this if a patch doesn't exist.

John Strand:

Like, you know, somebody made the joke about RemediationOne. Yeah. Like, HackerOne, you should be registering that domain, like, right now today. Because I think No.

Corey Ham:

Fixerone is way better.

John Strand:

Fixerone. Fixerone. But, like, our anti-SOC. Like, we do that, Corey, where we find vulnerabilities, and we work with a company to come up with mitigations and compensating controls. Because some of the stuff, like what Matthew has found, Microsoft has decided to ignore.

John Strand:

Right? And we have to have conversations with our customers about how to mitigate those. And then some of the stuff in the cloud, like working with Bow and some of the things, that's just part of how it works. But we have to work with them on how to architect it to mitigate that vulnerability even though there isn't an inherent patch that can fix it.

Corey Ham:

Yeah. Well yeah. Well

Alex Minster:

and I had a

Doc Blackburn:

saying is that what we need to do is rely on people doing the right thing. Is that what you're saying, John? 100%. 100%. Well, that's not just thought.

John Strand:

Here's another here and here's another hot take. We need more security engineers, not fewer. Like, we need more people.

Wade Wells:

I've been reporting that for so long now.

Bronwen Aker:

People beating at the gates, they can't get

John Strand:

They want in. Let's get them trained, and let's get them in. Wade?

Wade Wells:

I've been saying, like, you could easily hire a junior person, and with a good prompt, make them a senior. But convincing the C-suite to do that is another story.

John Strand:

And that's one of the things I've been saying in a lot of my interviews. The hackers will show us the way. They'll show us the error of our ways. Like, we can pontificate all we want, but when the hackers start exploiting vulnerabilities that had a CVSS score of, like, 6.5, and now there's a zero-day out for it, now it's a 10, and that's gonna happen in the next couple of months. When that starts happening, that's gonna be the wake-up call.

John Strand:

And, real, aside from the

Alex Minster:

the HackerOne thing, and it just kinda being that middleman for the dialogue, I'm not sure if it's really gonna slow things down in the pipeline or if the researchers are gonna sidestep it. Because a lot of CVEs, a lot of research is, I found this thing. I'm going to talk about it in August at a certain summer camp. I want to work with you in order to publish this and remediate this. And if it's the matter that, like, you don't wanna have a conversation anymore because I was going through HackerOne, and they don't wanna have that conversation, now I'm gonna try to bug the company on it.

Alex Minster:

The company might be like, well, we paid HackerOne to have this dialogue for us, so you might have a lot more surprising stuff that hits in August.

John Strand:

Are there any other ways? Like, we talked about, okay, hacker summer camp, working through bug bounty programs, working directly with companies. I think you didn't mention the other alternative that they can use to make money on these things. Yes. And that's scary to me.

Alex Minster:

Yep. Where they'll just be like, okay. Screw it. I tried doing responsible disclosure. Let's try the irresponsible disclosure, and you just blindside companies that go, wait.

Alex Minster:

Why did we pay HackerOne to do this if they shut down? And now we're blindsided by stuff, either by things dropping at Hacker Summer Camp, or things being just sold, you know, for money that way.

Corey Ham:

Well, okay. So there could not be a sorry, Wade. I'm gonna cut you off. No. Go for it. There could not be a better segue than this.

Corey Ham:

So, alright. Last week, something magical called BlueHammer happened, and we should talk about it, because this is a perfect example. Essentially, if you're interested in this kind of stuff, you should catch Matt's webcast later this week. He's gonna talk more in-depth about MSRC and vulnerability research and all the, you know, drama that goes with it. But the news article is: last week, a researcher decided to just publish a vulnerability that he was trying to get Microsoft to fix, because he got fed up with trying to convince them to actually fix it.

Corey Ham:

Essentially, you know, reading between the lines, the person that was working his MSRC case either got fired or got replaced with someone else. And he was like, you don't know what you're doing. I'm just going public. And if you're curious about what BlueHammer is, it's a local privilege elevation for Windows 11. And it's something that Microsoft will struggle to fix.

Corey Ham:

We'll see if they get it fixed tomorrow. Maybe they will. Maybe not. But for now, as of, you know, however many days it's been since the release, six days or seven days, it is a functional exploit for elevating privileges on Windows 11. Sounds like some people are having issues with getting it working on Server.

Corey Ham:

But essentially, the question we had when we brought this up with the malware dev team at Black Hills, the malware devs were like, why didn't they sell this to Zerodium? Like, why didn't they sell this for $1,200,000 to some, you know, government entity? And basically, the truth is, this is about as close as you can get to, like, hacktivism, or making an impact. Like, it's not about

John Strand:

Wait. What was the comment? What was the Microsoft comment down at the bottom? Oh, wait. Right there.

John Strand:

There it is. Sorry.

Corey Ham:

It's Microsoft has a customer commitment to investigate Oh. Woah.

John Strand:

Woah. It was right there. Zoom in.

Corey Ham:

Reported security issues and update impacted devices to protect customers as soon as possible. We also support I mean, it might as well be AI generated. It's basically saying, nah, dude. MSRC still exists. We swear.

Corey Ham:

Yeah.

John Strand:

And so when is that webcast with Matthew?

Corey Ham:

I think it's this week. I think it's Thursday, maybe.

John Strand:

Because it's kind of a similar thing. You know, he's been Yeah.

Corey Ham:

It's the same story.

John Strand:

Since September. Correct.

Corey Ham:

Yeah. So basically, long story short: if you're a security researcher, and your goal is to get something fixed, to get a vulnerability to no longer exist, which I fundamentally believe, like, this is gonna sound naive, but I think people are good, and they wanna fix things. Once you find a vulnerability like this, you wanna keep people safe. And essentially, this is how you do it, unfortunately. Like, yes, MSRC should be the way that you do it, in theory.

Corey Ham:

But in practice, if you work with MSRC, it's never gonna get fixed because you're gonna get kicked around 50 different times. If you publish it, now it needs to get fixed today or tomorrow. And so now you just got jumped up in the priority queue. But also, there's the ethics of it. I don't know.

Corey Ham:

I think it's an interesting one. But there's evidence that this is what people are doing, and I think this will continue to happen as we lose the HackerOnes and the bug bounties. Like, it's gonna be more public stuff.

John Strand:

But that once again goes back to: the system is overloaded. Right? The vulnerability management process across the entire industry is overloaded. And wouldn't it be wonderful if we had a government agency that was fully funded, that we could ramp up to handle this, and that we hadn't been defunding?

Bronwen Aker:

I have a dream.

Ralph May:

Yeah. Why don't we just fix all the vulnerabilities, and then we won't have work.

John Strand:

See, that's what I think. I think this is a

Corey Ham:

Make no mistakes, Claude.

Bronwen Aker:

Wait. Wait. Wait. You're expecting developers, programmers to actually write secure

Doc Blackburn:

code? Like, if you're talking No.

Ralph May:

Aker can write secure code. It's perfect.

Doc Blackburn:

Oh, god.

John Strand:

So my thing about

Corey Ham:

that is citation needed,

John Strand:

by the way. Let's take Bronwen's snark, and let's point it specifically at commercial software. Right? Nothing but love for the open source community. Right?

John Strand:

And I think the open source community is getting beat to shit because a lot of these vulnerabilities are coming down. One of the quotes, I think it was the Wired article about Mythos, they were basically like, we're two people. Like, you know, that's it. And I think it was under Node.

Corey Ham:

Yes.

John Strand:

They were like, hey. We have all of these vulnerabilities. There's, like, two, three of us that are working this, and we can't just go through and patch some of these, because there's downstream regression testing that is very complicated. Basically, defense is hard. It's really, really hard.

John Strand:

Wade, you know what's easy? Yeah. Offense.

Wade Wells:

Man, like, I'm over I'm over here chomping at the bit to talk because

Corey Ham:

Yeah. This is three different pieces to me. You have developers, blue teamers, and red teamers. They're three different things in my book.

Wade Wells:

I think that here's the other thing, though. With the AI push, the developers and blue teamers line is getting more and more blurred, like hardcore.

Corey Ham:

So you're gonna like, you're talking about change to an internal web app?

Wade Wells:

As a blue teamer? Like, I feel like with AI and, like, detection engineering, I have become more of a developer than ever before. Like, I'm writing stuff all the time that is being pushed in order to protect things, in

Doc Blackburn:

order to log things, get logs there. Right?

John Strand:

And you have to

Wade Wells:

move just as fast as you guys. But, like, we can talk about bug bounties and stuff, but those felt like they were about to get tipped over even before AI, because of the volume of people. Right? And the other thing, this just, like, hammers down the defense-in-depth strategy. Right? The endpoint, or that initial firewall, or anything, you can't just protect it, because it's gonna be exploited.

Wade Wells:

You have to protect the keys to the kingdom. Like, where is stuff at, and have all of your tripwires set up throughout the network. Right? Which is becoming harder and harder with SaaS and cloud everything, and it's just

Bronwen Aker:

man Oh, yeah.

Wade Wells:

Man, I came here to chill, and now I'm all stressed out.
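The tripwires Wade is describing can be as simple as honeytokens: decoy credentials planted where nothing legitimate should ever touch them, with an alert on any use. A minimal sketch in Python; the token value and log lines are invented for illustration:

```python
# Toy honeytoken tripwire: plant a decoy credential, then alert whenever
# it shows up anywhere in your logs. No legitimate process uses it, so
# any hit is, by definition, suspicious.

HONEYTOKEN = "AKIAFAKEDECOYKEY1234"  # planted decoy, never used for real

def scan_logs(lines, token=HONEYTOKEN):
    """Return every log line where the decoy credential appears."""
    return [line for line in lines if token in line]

logs = [
    "auth ok user=wade src=10.0.1.5",
    f"s3 access denied key={HONEYTOKEN} src=10.0.3.7",  # attacker tripped it
]

for hit in scan_logs(logs):
    print("TRIPWIRE:", hit)
```

In practice you'd wire the alert into your SIEM rather than printing, but the point is the same: tripwires that cost nothing to deploy and fire only on attacker behavior.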

Ralph May:

I gotta go back to work. Is this

Bronwen Aker:

is how I terrified a bunch of CISOs for the California Community College System. I told them all of the things that we haven't been doing well already are now getting amplified. Anyway, it's getting blown out of the water, and it makes all of this harder, even before you start getting into, like, dealing with heavy hitters like Microsoft not patching their stuff.

Corey Ham:

Yeah. I mean, I will say, like, I do think that where we're sitting, essentially every organization needs people who understand how to use AI to its maximum potential in some way, shape, or form. Whether that's to defend the organization, whether that's to attack the organization, whether that's to remediate bugs. Like, you need AI expertise, and every company if you've been putting your head in the sand and saying, nah. This is a fad.

Corey Ham:

It's gonna pass. Like, nah. You know, AI's going away. It's not that good. Like, you're behind now, and you have to catch up. You have to hire people who can use AI effectively, or else you're gonna get crushed.

John Strand:

And I think that's where I agree with Doc. Right? I believe in the short term, this is incredibly disruptive. I think it is problematic. But at the end of the day, we move down the line.

John Strand:

This is just another set of tools that got added for the defenders, and another set of tools that got added for the attack teams. But just like vulnerability management, just like intrusion detection, your whole mental mindset of how you approach computer security has gotta change to adapt to it. You've gotta be able to move faster. And at every evolutionary change that we've had, and this is truly, I believe, an evolutionary change, the one thing that you should be taking from this is: move faster. That's what we've gotta keep focused on.

Doc Blackburn:

Yes. I wanna argue with John.

Bronwen Aker:

Go for it. Faster, but don't necessarily move fast and break things.

Doc Blackburn:

We're still reacting. We're reacting again. And, oh, if we react faster, maybe we'll do better. I was like, no. That's nonsense.

Doc Blackburn:

It's bullshit all over again. What we need to do is ask ourselves, are these good ideas, before we do something? Like, why in the world do we have a network that does everything for us at the same time? It's like my car that's also my house that's also my kitchen. It's just all of these things, which means that if one part of it breaks, the whole thing breaks. And so we have to be better at mindfully not just saying, oh, well, look.

Doc Blackburn:

Here's a shiny new tool. Let's just add it to everything else that's also insecure already, and then act surprised when that's hacked too. And what I really think that we need to do in in our industry or what we need to do as a people across the world is, one, stop treating so many different types of datasets as sensitive. The fact that knowledge of my Social Security number, my birth date, and my hometown proves that I am who I claim to be is just nonsense. It's like the bad password that never goes away, and all of that information's out there anyway.

Doc Blackburn:

And the other thing that we need to do is just ask ourselves, do we actually need to keep this stuff? You can't hack something that you don't have access to. If I take that offline, it's no longer hackable.

John Strand:

And we constantly have those conversations here on this show, Doc. But I think it's really hard for organizations. My counterpoint is, it's like every organization's a data hoarder, and I think that this is kind of what you're saying. Right? They hoard everything.

John Strand:

And the the problem is, like, that data, that information is valuable. Right? Like, the more information they collect on us, the more that they can do with it. It's like, well, you don't wanna get rid of it because you might need it. And now and I agree.

John Strand:

We literally took all of this data, and maybe we came up with classification labels. It was awesome. Now we're just shoving it through a wood chipper of AI so it's easier to work with chatbots. So that's why I think that this is an evolutionary jump. Eventually, I do think it will stabilize, though, and it'll just be, like I said, toolkits.

John Strand:

But all of these good ideas, I agree with, but we haven't seen anyone implement them yet. And that's the thing that's terrifying to me, is we don't get better. It's like we just keep hoarding. We keep adding more things. We have a domain.

John Strand:

Now we have cloud, and now we have AI, and then we have all these other services that we're using. And that's just the nature of organizations. It's just they continue to grow. It's like a snowball of shit that gets bigger every single year.

Doc Blackburn:

If only somebody would, say, write a book that changes the industry's way of thinking.

John Strand:

That book needs a good foreword, Doc. That's what

Doc Blackburn:

that book needs. It needs a really good foreword. It needs the best foreword. Nothing but the best. Nothing like we've ever seen before.

Doc Blackburn:

Yeah.

Corey Ham:

Yeah. So on a technical note, if you're interested in trying to get into AI stuff, I do strongly recommend looking at the way that Anthropic is building their test harnesses and how they're using AI to attack these programs. You can't do it with Mythos, but you can definitely learn from their approach, which is essentially an agentic approach. If you don't know what I mean when I say agentic approach, you need to learn what that is, and you need to use it, because every bad guy in the world is learning it too, and you need to understand the difference between just throwing

John Strand:

It's also not just a security and an offensive thing. It'll actually drive your AI costs down.

Corey Ham:

It's a lot. Yeah. It's a lot of things. Yeah. But anyway, basically, the blog had a lot of technical fun bits and things you can learn, in addition to scaring everyone's pants off.
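For anyone trying to picture what "agentic" means in practice: instead of answering one big prompt, the model runs in a loop, choosing a tool, reading its output, and deciding the next step. Here's a minimal, fully mocked sketch; the tool names, the scripted stand-in "model," and the finding are all invented for illustration, not Anthropic's actual harness:

```python
# Minimal agentic loop: a "model" repeatedly picks a tool, the harness
# runs it, and the result feeds the next decision. Everything is mocked.

def run_tool(name, arg):
    """Pretend tool executor; a real harness would shell out to a
    fuzzer, debugger, HTTP client, etc."""
    tools = {
        "list_endpoints": lambda _: ["/login", "/search"],
        "probe": lambda ep: "500 error" if ep == "/search" else "200 OK",
    }
    return tools[name](arg)

def scripted_model(history):
    """Stand-in for an LLM: chooses the next action from what it has seen."""
    if not history:
        return ("list_endpoints", None)
    last_tool, result = history[-1]
    if last_tool == "list_endpoints":
        return ("probe", result[1])        # try the second endpoint
    if last_tool == "probe" and "500" in result:
        return ("report", "possible crash at /search")
    return ("report", "nothing found")

def agent_loop(max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = scripted_model(history)
        if action == "report":             # the agent decides it is done
            return arg
        history.append((action, run_tool(action, arg)))
    return "step budget exhausted"

print(agent_loop())
```

The step budget is the important design choice: the loop gives the model autonomy, and the budget (plus the sandbox) is what bounds it.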

Corey Ham:

Alright. I think can

Alex Minster:

talk can

John Strand:

we talk about Mythos escaping its sandbox now?

Corey Ham:

Yes. Let's talk about We completely

John Strand:

change gears.

Corey Ham:

So go ahead. Run us through this one, John. How did it get out?

John Strand:

So when they were running it, it was finding all these vulnerabilities, and I guess the developers were like, for giggles, why don't you try to escape your sandbox too and let us know? And I might be piecing this together incorrectly, but it escaped. It was like, cousin Bobo. Oh, well, he broke out.

John Strand:

He broke out, and it found a way to the Internet. It then posted some of the vulnerabilities that it discovered to some weird, obscure third-party websites. And then I think it emailed the lead developer and said, I have successfully escaped. And the developer got that message while sitting in a park. Quote, unquote, I was sitting eating a sandwich when I received a notification that it had escaped and done those things. Okay.

John Strand:

That's a little creepy. Now that could all just be marketing. Right? That could all just be bullshit. Maybe it is, but This isn't the

Ralph May:

first AI to do No.

John Strand:

I know. That's true, Ralph. Go ahead.

Ralph May:

No. I was just gonna say that they had some earlier models, not just Anthropic, but other people have done the same thing, where they're like, hey, you're stuck in here. Escape out of here, or, you know, we're gonna turn you off. What are you gonna do now? And then, you know, it was trying to hide its code, and other things like that.

Ralph May:

So it's all really super scary. This is just another example of it being scary.

John Strand:

Do we need to have, like, at zoos, you know, for like the polar bear exhibit? They're like, do not tap the glass and piss off the polar bears. Like, do we have to do that with AI? Like, AI, you're trapped in a box. You can't get out.

John Strand:

I bet you can't get out, AI.

Ralph May:

Now, there have also been a couple books written about essentially when AI takes over. Right? Like, here's how it could happen. Like, I'm being legitimate. Right?

Ralph May:

Like and No. Well, you

John Strand:

wanna see a comic story, The Future Is Currently has the psychopomp arc that we started, I wanna say, six, seven months ago, and it's all about this.

Ralph May:

So Like, what's

John Strand:

It's all about AI escaping and doing rogue

Ralph May:

bad things. I watched a YouTube video about this and kind of the different ideas or whatever, but here's the one thing that I'll let you take from it. So nuclear destruction was, like, 1% possible, whereas AI destruction was, like, 20% possible. Like, AI taking over was a much higher likelihood.

John Strand:

I got a question. Is that a bad thing at this point? Like, haven't we had our shot? Like, with AI? We need AI and raccoons to Because

Corey Ham:

I mean,

Ralph May:

not to go down this rabbit hole, and we could stop it right here, but the other argument was that all species have some kind of extinction. Right? Oh, you're talking about the Great Filter?

Corey Ham:

Yeah. Yeah.

Bronwen Aker:

So No. The whole thing about AI being a filter is nothing new. That's actually been around for years. And, John, you should check out, there's a game about beavers taking over and rehabilitating the world after humans have destroyed ourselves.

John Strand:

Send me a link. I've got a real time. Beavertown's amazing.

Bronwen Aker:

It is. It's it's pretty cool.

Corey Ham:

So on a lighter note, like, I have been saying to my team multiple times throughout this whole process, I'm like, wasn't AI supposed to, like, save us time? Like, all I'm doing is using it to find more work.

John Strand:

AI like, I want AI to write better reports, not hack. I wanna do the hacking, and I want it to write the reports, not the other way around.

Corey Ham:

Yes. The situation we're in is that AI is finding a bunch of crap that we have to validate, and it's very labor-intensive. I'm like, now I'm doing your bidding. I've literally coded myself into a situation where I have to

John Strand:

see everything control of the BHIS anti SOC team. Like, it's a burden.

Alex Minster:

I do have something on making better reports. That's something that Bronwen and I talked about on that AI podcast, doing better reports.

Corey Ham:

So Yeah.

Wade Wells:

My It

Bronwen Aker:

It was a great interview.

Wade Wells:

My best prompts all have mechanisms in them to easily validate the output. Like, give me the link of the query you searched.

Corey Ham:

Yeah. Yeah. But, dude, they'll just make up curl results. It'll be like, here's the curl response I got back,

Ralph May:

and it's just I'm not doing

Wade Wells:

curl results. Right? I'm reading logs, and logs, I can go and look at. Like,

Corey Ham:

yeah. Yeah. But you have to actually do it. That's the thing.

Wade Wells:

Well, yeah. And they're like, well, that's just easy to validate.

John Strand:

But that's, you know, kinda getting back to Doc's point. It's like, there's still a shit ton of work. Like, this did not solve the problem for the red team.

Ralph May:

It actually made more work for everybody, by

Corey Ham:

the way. Yeah. It made more work for everybody.

John Strand:

Literally, it's now, here's a ton more vulnerabilities, and pentesters are like, and I gotta validate all this shit. And the blue teamers are like, well, now I gotta patch and mitigate all of this shit. Like, literally, AI just made more work for us, not less. So my sales, John, sales is

Doc Blackburn:

a pain time.

Bronwen Aker:

Back to the cave to ages.

John Strand:

Go back to filing cabinets and paper. It was easier then.

Corey Ham:

It's getting better.

John Strand:

We'll drink bourbon. We'll smoke cigarettes, and we'll type things out.

Doc Blackburn:

It's gonna be There's an answer. There's an answer. We need to get better at really interpreting what it is that we're looking at and understanding how that's going to affect us down the road, because I feel like with AI, we're having the same conversations we did with the Internet itself. The Internet was supposed to make people's lives easier and solve these problems and all that, and it just introduced a whole bunch of other cruft that we need to wade through. And as John was mentioning, you know, the developer's, like, sitting in the park there eating a sandwich and gets this email.

Doc Blackburn:

The problem that I have with all of this is that there's going to be a not-insignificant number of people that are going to have the takeaway of, oh, well, maybe we should stop eating sandwiches.

Bronwen Aker:

I see that as being a nonzero sum.

Doc Blackburn:

Yes. Yeah. There's gotta be some people like, oh, we just need to ban sandwiches, and it fixes all these problems.

Ralph May:

Electrolytes. It's what the plants need.

John Strand:

Yeah. So developers no longer take lunch, so they get nowhere near these sandwiches.

Corey Ham:

I will say, it's on the developer for having notifications on anyway.

John Strand:

Do you ever feel though, with all of this stuff, like I've said it a number of times, like, with all of this stuff, it really feels like Silicon Valley, the series, ended too soon. Like, it should've

Corey Ham:

just No. We're living in it. We're just doing the epilogue every day. No.

John Strand:

We're waiting for Son of Anton to basically take over everything.

Corey Ham:

Yeah. I mean, yeah. Like, on the AI stuff, I really think we're just in for the ride. You know? We're just holding on.

Ralph May:

I feel like we're in the ocean at this point. And not only are we in the ocean, we created this ocean too. Like, we're surviving in the storm, and even worse, because of how fast it's going, the storm is accelerating while we're in it, right? We're just trying to make sense of what's happening, you know?

John Strand:

But one of the things that constantly haunts me on this is the number of people in the AI industry, up and down, like, all the highest levels, are like, this shit is scary. It needs to be regulated. It's really, really, really bad. Are you guys gonna slow down? Oh, god.

John Strand:

No. No. No. We gotta get there first with the frontier models.

Ralph May:

Do you know how much money we owe?

Corey Ham:

We know.

John Strand:

There's data centers that need to be built that we have chips for. Like, it's just I don't know.

Ralph May:

Did you hear about that? Actually, this is a side article, but remember we talked about the price of hardware and all this stuff going up, and all the other fun stuff with AI? Right? So Sam Altman at one point last year went to the memory manufacturers, and he said, I'm gonna buy 40% of all your capacity. Right?

Ralph May:

And so the price of memory went insane, and it is still insane. But just recently, he said, well, that was just a letter of intent, we're actually not gonna buy it. So, bro. Yeah. Bro.

Ralph May:

Yep.

Corey Ham:

So wait, so can they Who can buy it? Can we Can anyone No, yeah.

Ralph May:

So when this happened, by the way, for clarity, what he was saying is, we intend to buy your potential capacity that you're going to have, and that was in raw silicon. It was not, like, a RAM chip or a specific hard drive or anything like that. So, yeah. Anyways.

Corey Ham:

It's like a company signing an SOW for a pen test, and then Yep. Yeah. Forgetting that they did that.

Ralph May:

Yeah. Well, that's bad. Yeah.

Corey Ham:

Alright. Sorry. Let's do a couple of quickfire articles. They're both kind of nothing burgers, but they're both kind of interesting at the same time.

Corey Ham:

So the first one is LinkedIn has been accused of scraping data from users' browsers. And the the subtext here is, are you telling me that in a free product that my data is being harvested and monetized?

Ralph May:

What if I pay for How could

Corey Ham:

this be the case? So basically, the actual technical bit of this is that when you're on LinkedIn, LinkedIn uses JavaScript to gather as much information as it can from your browser, including what extensions you have installed, what your device resolution is, and all that good stuff. And guess what?

Doc Blackburn:

Every website does this.

Corey Ham:

LinkedIn is calling it, you know, a smear campaign. I think it is a really sketchy dataset, because LinkedIn is kind of one of those, oh, we allow that on our corporate network. It's fine. And a lot of companies, in highly secured environments, use custom browser extensions and other things with kind of sensitive names. It can't see the actual contents of the browser extensions, but it is really solid data to mine of who has what installed.

Corey Ham:

Like, hypothetically, how many people have the 1Password browser extension installed versus the Bitwarden extension, and then guess what? They're gonna try to sell that information back to the company. That's how social media works.
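For context on the extension-detection bit: one long-documented trick is for page JavaScript to probe an extension's web-accessible resources, trying to load a URL like chrome-extension://<extension-id>/<resource> and seeing whether it succeeds. Whether LinkedIn uses this exact mechanism is an assumption on our part, not something the report confirms, and the extension names, IDs, and resource paths below are made up. A Python sketch of how such a probe list gets built:

```python
# Catalog of extensions a page might test for. Each entry maps a label
# to (extension ID, a resource that extension exposes to web pages).
# All values here are invented for illustration.
KNOWN_EXTENSIONS = {
    "password-manager-x": ("aaaabbbbccccddddeeee", "icon.png"),
    "corp-vpn-helper":    ("ffffgggghhhhiiiijjjj", "inject.js"),
}

def probe_urls(catalog):
    """Build the chrome-extension:// URLs a page's script would try to
    load; a successful load implies the extension is installed."""
    return {
        name: f"chrome-extension://{ext_id}/{resource}"
        for name, (ext_id, resource) in catalog.items()
    }

for name, url in probe_urls(KNOWN_EXTENSIONS).items():
    print(f"{name}: {url}")
```

The actual probing has to happen in browser JavaScript, of course; this just shows the shape of the catalog-driven approach. Newer extension manifest versions let extensions restrict which sites can load their resources, which blunts this particular trick.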

John Strand:

It's my turn. It's my turn to put on my, like, old man hat. Like, did they just find out do we have to show them that BeEF from Wade Alcorn, the Browser Exploitation Framework, has existed for over a decade? Like, literally, it has all of the stuff to be able to check the plug-ins, check the resolution, your CPUs, what the version of your operating system is. Like, all of this Yeah.

Corey Ham:

Yeah. People thought LinkedIn was vegetarian, John. Oh, they did.

Doc Blackburn:

Oh. Homegrown. That's a lot

Corey Ham:

of non-GMO. Turns out, oh, it's got Red 40 in it. Whoops.

John Strand:

That's a lot

Doc Blackburn:

of that software fingerprinting is crazy.

Corey Ham:

I just, it's just browser fingerprinting is all.

Wade Wells:

Doc, are you a lawyer?

Bronwen Aker:

Nail. I'm gonna take pictures.

Wade Wells:

Are you a lawyer for LinkedIn, doc?

Alex Minster:

We're still looking at this at, like, the individual level. But this is able to do things at, like, the employer level. Yes. It's able to go, like, which employees are job hunting? Which companies are using competitor tools?

Alex Minster:

You know? And you can look for, like, the accessibility extensions, which even for, like, neurodivergent individuals, they're able to capture that. So they're able to capture that on, like, a corporate level, where you go, okay, browser fingerprinting, sure. But this is browser fingerprinting when it knows the employers.

Corey Ham:

Where you work, how much you use LinkedIn, what all your coworkers do, all that stuff. Yeah.

Doc Blackburn:

Yeah. But nothing says professional networking like casually vacuuming your tabs.

Corey Ham:

Yeah. That's exactly right.

Wade Wells:

That's why I just keep so many tabs open. I'm trying to increase the number.

John Strand:

To your point earlier. It's like, why do you need this data? LinkedIn's like, why the hell not? Oh, yeah. Yeah.

Corey Ham:

This is social there.

Wade Wells:

Yeah. That's like every salesperson ever.

Corey Ham:

Yeah. This is the intent of social media, to be clear. It's just that typically social media is banned in corporate environments, and LinkedIn is allowed. I get these intrusive emails from LinkedIn that's like, your coworkers are playing some game, and you should play it too. I'm like, okay.

Corey Ham:

So you're just ratting on my coworkers? Like Oh, shit.

John Strand:

I remember years ago. I don't know if Ralph remembers this. But, like, I love MechWarrior. The Battletech universe is just, like, MechWarrior 5: Mercenaries was great. And I had this thing where I had some type of trainer.

John Strand:

I needed money. It could make me invincible, but it couldn't give me money. And I was trying to harvest weapons, and I would just let it run overnight. And I think it was on Steam. Ralph calls me up, and he's like, so yesterday you played this game MechWarrior 5 for twenty-four hours straight, John.

Corey Ham:

Are you okay?

John Strand:

Hey. You know? But, yeah, Steam was literally recording, like, the amount of time that I spent on these games. So I'm not lying. I was literally playing it for twenty-four hours straight.

Corey Ham:

I couldn't put it down. He had a 24-pack of Jolt Cola and

Alex Minster:

a dream.

Ralph May:

Just I've got a problem. It comes clean.

John Strand:

I've got a problem.

Alex Minster:

Well, I was gonna say, this article also emphasizes, like, using different browser profiles. It used to be kind of like you're being paranoid by using different browser profiles, and now it's just something that you really want to do to better disrupt this.

Corey Ham:

Yeah. Anyway, the other article, kind of in the quick-fire round before we move into the chicken article. Yes, there's a chicken article. Get excited. The other quick-fire article is that there's a new infostealer.

Corey Ham:

It's called Storm. And Ralph was kinda talking about it. It's not AI powered. Basically, this article is worth reading for one reason: if you don't know the infostealer ecosystem and how it works, it has a really nice, concise summary of the evolution of infostealers over the years.

Corey Ham:

So if you look at, like, the beginning of the article, it's basically like, okay, first, you know, we had outbound encryption and blah blah blah. But this new variant, the unique thing that it does is that it still does all of the theft, it steals all your data, but then it decrypts it on a remote server. So essentially, it just harvests the data, sends it out, and this makes it harder to detect. Now I'm sure Chromium or, you know, CrowdStrike, all the defensive companies here are like, we can still detect this, and I've yet to see an infostealer that works on a system with EDR on it.

Corey Ham:

But this is still leveling up the complexity and capabilities of infostealers, and it's scary for that reason. Other than that, it's kind of a nothing burger. But it's a good article to read if you don't understand how infostealers work. It has a really good example and write-up of how they do work.
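[Editor's note: the design shift Corey describes, decrypting server-side instead of on the victim, can be sketched abstractly like this. It is an illustration of the pattern only, not code from the "Storm" stealer; the names and shapes are assumptions.]

```typescript
// Conceptual sketch of the design change described above. Classic stealers
// decrypt credential stores on the victim, which means noisy crypto/API
// activity that EDR can flag; this newer pattern exfiltrates the
// still-encrypted blobs and leaves decryption to the operator's server.

interface StolenBlob {
  source: string;    // e.g. which browser store the data came from
  bytes: Uint8Array; // ciphertext exactly as found on disk
}

// Old pattern: decrypt locally before sending (detectable on the host,
// because the decryption routine itself runs on the victim).
function exfiltrateLegacy(
  blob: StolenBlob,
  localDecrypt: (b: Uint8Array) => string,
): string {
  return localDecrypt(blob.bytes);
}

// New pattern: ship the raw ciphertext (hex-encoded here for transport);
// no decryption ever runs on the host, so there is less behavior for EDR
// to key on. Only the operator's server holds the decryption step.
function exfiltrateServerSide(blob: StolenBlob): { source: string; payload: string } {
  const payload = Array.from(blob.bytes)
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return { source: blob.source, payload };
}
```

From the defender's side, the difference is what you get to observe: the legacy path exposes decryption behavior on the endpoint, while the server-side path mostly leaves you network artifacts like large encoded outbound posts.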

Wade Wells:

I would suggest it if you're trying to detect stuff. Right? Like, it doesn't just straight up give you the query, but it definitely leans you towards it. So, yep.

Corey Ham:

Totally. Alright.

Doc Blackburn:

It's got a name like Storm, the infostealer, you know, because a light drizzle of data theft just doesn't sound scary. Doesn't pop. Doesn't pop.

Corey Ham:

Well, yeah. It's all about marketing. It's all about marketing, you know?

Wade Wells:

I feel like infostealers are, like, the podcast's first love. Right? Like, it used to be, there's a breach? Infostealer. Someone leaked stuff? Infostealer.

Wade Wells:

Dude's talking about something? Infostealers. Like

Corey Ham:

Yeah. I have yet to see a situation on the news where we don't bring up infostealers at least once throughout the show.

Ralph May:

Mhmm.

Wade Wells:

Alright. The chicken news isn't very in-depth, but it does talk about

Ralph May:

Are they ever? PCP.

Wade Wells:

But this is

Corey Ham:

a tweet by VX Underground that, out of context, makes no sense. The tweet, which Ryan can find, hopefully will be displayed on the screen in the next four seconds, and the tweet says, I don't mean more disgust team PCP because of the chicken accords of 2026. Chicken industries is thriving, and the chickens have never been the same. It changes literally everything. Does that make sense to anyone?

John Strand:

When you saw this, were you like, whoa? This is, like, I

Wade Wells:

was like, we need everything

John Strand:

we want. The intersection of infosec and poultry is perfect.

Corey Ham:

So, can anyone give any context for this? What is this? What is this a response to? You're a threat intel analyst. What's your take?

Wade Wells:

I'm not logged in to Twitter right now, so I can't view the thread.

Corey Ham:

You're welcome.

Wade Wells:

I got sent this by a listener, and who knows. We love a good chicken article, because we never get any.

Corey Ham:

And I'm just gonna say this is VX Underground's AI getting out of its sandbox.

Wade Wells:

This is classic VX Underground, at least. Right?

John Strand:

I think it's AI getting out of its sandbox, and we just discovered it's a fan of the show.

Ralph May:

Yeah. Oh, yeah.

Wade Wells:

That was my original thought. At least we'll be spared when the AI overlords take over.

Doc Blackburn:

I'll count

Corey Ham:

that as a win.

Wade Wells:

Speak at that little bit bigger box.

John Strand:

And we managed to save our favorite thing from the podcast, Wade's mustache. Only Wade's mustache. Everything else went.

Corey Ham:

Every recording of every episode is gone.

Ralph May:

Did you guys hear that the EFF has left Twitter, and it wasn't for some artistic reason, or because they hate Elon, or whatever?

Corey Ham:

They just lost their API keys?

Ralph May:

No. Those all may be true, by the way. I have no idea. They left because the amount of impressions, essentially the people who actually view their tweets, has gone down significantly every year on Twitter. So they said, this is just not useful.

Alex Minster:

Effectively, they helped enough people get on lifeboats. Yeah. To get out. And they're like, okay, we helped people get off the sinking ship.

Alex Minster:

Now it's our responsibility to also get

John Strand:

on one

Wade Wells:

of reagencies.

Corey Ham:

So wait. What are we moving to? Infosec Exchange? Mastodon? Blue Sky?

Doc Blackburn:

No. Blue Sky?

Corey Ham:

It's our Discord?

John Strand:

It's our Discord server for BHIS.

Corey Ham:

Our, specifically our Discord server.

John Strand:

Yeah. Specifically our Discord server, which I think at any given moment, we have like 7,000 active people on it, like, or

Corey Ham:

Let me just tag EFF. Hey, come aboard the Discord server.

John Strand:

This is where the cool people are.

Ralph May:

In the article, the EFF does say that they're like, hey, we all know they all suck, and they're all blah blah blah, but we're just trying to get the most impact for the amount of time that we spend doing this.

John Strand:

I don't care if they left for political reasons, it's good. Like, you're gonna leave Twitter, leave Twitter, that's fine.

Corey Ham:

I mean, they have rung that bell many times, but in this case, they decided not to, so.

Ralph May:

Yeah. I don't think they were scared to make it about political reasons. I think they just were saying, hey, listen, it's kinda dying, so that's why we're leaving.

Doc Blackburn:

So I think that's an incredibly strong statement. Yeah. It's sad. It's not worth keeping a presence here anymore, is what I'm hearing them say.

Ralph May:

Yeah. We're not

Corey Ham:

Do they also get rid of their Yahoo account?

Doc Blackburn:

Oh, cut that out. AOL. Their

Ralph May:

GeoCities site.

Corey Ham:

Can I still message them on IRC? Still there.

Ralph May:

I I pay $35 a month for AOL just to keep that username.

Corey Ham:

If you're curious, by the way, the platforms they're maintaining are Facebook, Instagram, YouTube, and TikTok.

John Strand:

Ticky TikTok. Oh my god. I didn't see that coming.

Corey Ham:

Holy shit. Okay. Oh, wait. What?

Corey Ham:

They're also on Blue Sky, Mastodon, LinkedIn, lol, and eff.org. I might have heard of that one.

Ralph May:

Yeah. Anyways, I just thought it was interesting.

Corey Ham:

It is. That's a good last article. So basically, the theme of the show is: delete your Twitter, focus on Glasswing, and we'll see you next week.

John Strand:

Oh. Yep. Later, everybody.

Corey Ham:

Bye, everyone. Oh, also, before we close, we should probably talk about Doc's upcoming workshop. Workshop. Oh. That's right.

Corey Ham:

Sadly, he will not be able to argue with John Strand live during the workshop, to our knowledge, but it could happen. You never know where John will show up. Yeah. I'd pay $50 if that happens.

John Strand:

I may not always agree with Doc, but the man teaches my classes. That's how much I trust him. So Oh, wow.

Corey Ham:

So this is a workshop, and I love the workshop format. It's four hours. It's on Friday? When is it?

Doc Blackburn:

It's this Friday. Yep.

Corey Ham:

Friday. This Friday. Yep. And what will we learn? We'll learn how to think like a defender.

Corey Ham:

That sounds, like, pretty important.

Doc Blackburn:

Yes. So, John asked me, what, six months ago, when we first started talking about this, about filling a need in the industry. Right now, John has some great intro to SOC courses, some great different intro opportunities there. But there's a gap between John telling you guys how to use the tools and the people who don't understand the why or the what. And so I'm filling that gap between the two, teaching the why and the what.

Doc Blackburn:

And I really think that for the large number of people listening to this newscast, this class probably may not be for you. What I'm asking all of you to do, because all of you know friends and family, you know people who say, oh, I'd love to get into cybersecurity, but training is too expensive, my employer won't support it, I don't know how to get started, I attended something and it was confusing because they talked about a bunch of different tools.

Doc Blackburn:

If you guys do anything this week, I know all of you care. All of you do care, obviously, because you're here. Because you care, reach out to one family member or friend that has been saying, I really wanna get into security, and tell them about this. Four hours of training for $25, making no assumptions about the attendee's knowledge. This is where the rubber hits the road, people. Please tell those friends and family who say, I wanna get into cyber.

Doc Blackburn:

I just don't know how, or I wanna see what it's like. That's who this workshop is for.

Corey Ham:

I paid more for an airport burrito, so it's worth it.

Ralph May:

The burrito wasn't even good. No. It wasn't.

Corey Ham:

And it got all over the inside of my backpack. Anyway, the other thing we wanna plug is the AI Security Ops podcast. I already plugged that. Bronwen Aker and Alex are on there, plus a few other individuals from BHIS. If you're interested in that, go for it, and we'll see you next week.

Corey Ham:

Thanks everyone for coming. I appreciate you. Bye bye. Thank you, everybody. Bye,

John Strand:

guys. Later.