AI Security Ops

In this episode of BHIS Presents: AI Security Ops, the team breaks down the Vercel breach — a real-world incident that shows just how fragile modern security has become in the age of AI integrations and SaaS sprawl.

What started as a simple Roblox cheat script downloaded on a work laptop quickly escalated into a multi-hop compromise involving OAuth permissions, an AI productivity tool, and access into Vercel’s internal systems.

This wasn’t a zero-day or advanced nation-state exploit. It was a chain of everyday decisions: installing software, clicking “Allow,” and trusting third-party integrations.

The result? Allegedly $2M worth of data listed for sale, including API keys, internal data, and employee records — all from a breach path that most organizations aren’t even monitoring.

We dig into:
• What Vercel is and why it’s such a high-value target
• How environment variables become the “keys to the kingdom”
• The full attack chain: Roblox malware → Context.ai → Vercel
• What infostealers like Lumma actually do (and how cheap they are)
• How OAuth permissions become persistent backdoors
• Why AI productivity tools introduce hidden risk
• The rise of “shadow AI” inside organizations
• How supply chain attacks continue to scale across ecosystems
• The role of AI in accelerating attacker speed and capability
• Why this type of breach is becoming the new normal

This episode highlights a critical shift in cybersecurity: you don’t have to get hacked directly anymore — attackers just need to compromise something you’ve already trusted.



📚 Key Concepts & Topics

Attack Chain & Initial Access
• Lumma infostealer and malware-as-a-service
• Credential theft: passwords, cookies, OAuth tokens
• Low-cost, high-impact compromise paths

OAuth & Identity Risk
• “Allow All” permissions and persistent access
• OAuth tokens as long-lived entry points
• Lack of visibility into third-party integrations

AI Security Risks
• Shadow AI and unsanctioned tool adoption
• Deep integrations with Google Workspace and SaaS
• AI tools as new supply chain attack surfaces

Supply Chain Attacks
• Multi-hop compromise paths across vendors
• Real-world parallels (Trivy, LiteLLM)
• Interconnected ecosystems increasing blast radius

Threat Landscape Evolution
• AI accelerating attacker speed and scale
• Lower barrier to entry for complex attacks
• Criminal groups operating as decentralized “businesses”

Defensive Strategy
• Auditing OAuth integrations and permissions
• Enforcing least privilege across SaaS tools
• Segmenting sensitive data and reducing blast radius
• Avoiding risky behavior on corporate devices

⏱️ Chapters
  • (00:00) - Intro & Breach Overview
  • (00:21) - Sponsors & Show Setup
  • (01:29) - What Vercel Is & Why It Matters
  • (02:31) - Initial Compromise: Roblox Script & Infostealer
  • (05:03) - OAuth Permissions & Pivot into Vercel
  • (08:04) - AI Tools, Over-Permissioning & Supply Chain Risk
  • (09:53) - AI Acceleration of Attacks & Ecosystem Impact
  • (13:34) - Threat Actors, Attribution & Key Takeaways

Click here to watch this episode on YouTube.


Brought to you by:
Black Hills Information Security 
https://www.blackhillsinfosec.com

Antisyphon Training
https://www.antisyphontraining.com/

Active Countermeasures
https://www.activecountermeasures.com

Wild West Hackin' Fest
https://wildwesthackinfest.com

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
https://poweredbybhis.com


Creators and Guests

Host
Brian Fehrman
Brian Fehrman is a long-time BHIS Security Researcher and Consultant with extensive academic credentials and industry certifications who specializes in AI, hardware hacking, and red teaming, and outside of work is an avid Brazilian Jiu-Jitsu practitioner, big-game hunter, and home-improvement enthusiast.
Guest
Ethan Robish
Ethan Robish has worked with Black Hills Information Security (BHIS) since 2008 — first as an intern and then as a full-time Security Consultant starting in 2012. In his current role as a Threat Hunter, Ethan is involved with customer engagement, research, working with Active Countermeasures’ AC-Hunter, as well as improving BHIS HTOC and SOC offerings. Previously, he implemented defensive security solutions for the Exchange Online security team as a Microsoft intern. While in college, he competed in the International Collegiate Programming Competition (ICPC) World Finals. In his time off, he enjoys cooking, playing the piano, and reading fantasy novels.

What is AI Security Ops?

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).

Brian Fehrman:

Hey, everyone, and welcome to this week's episode of AI Security Ops. This week, we're going to talk about the Vercel data breach, where allegedly $2,000,000 worth of data was stolen and listed for sale on BreachForums. And, spoiler, it all started with a Roblox cheat script. But before we get started, let me tell you about Black Hills Information Security. If you or your organization are in need of any kind of cybersecurity service: external, internal, web app testing, physical security, wireless, red teaming, SOC operations, basically anything security related you can think of, I'm sure we can get you taken care of. So check us out at blackhillsinfosec.com.

Brian Fehrman:

Additionally, we have a training branch called Antisyphon Training, where our good folks who are doing these tasks day in and day out, on the offensive side, defensive side, and everything in between, take their knowledge and package it up nicely for you to learn these tools and techniques so that you can integrate them into your daily workflow, and provide it at a very affordable price. So check out antisyphontraining.com. With that, let's get on with the episode.

Brian Fehrman:

So today, we've got a bit of a small crew. We've got Ethan Robish with us. Welcome, Ethan. And myself, yep, Brian Fehrman. So let's go ahead and kick this off.

Brian Fehrman:

Before we get too far into this, let's talk about Vercel, because some people may not have heard of them at this point. Basically, what they are is a cloud hosting platform that tries to make it easy to ship web apps. You connect up your GitHub repo, they'll build and deploy it automatically, and it's live pretty quickly. Also, amongst other things, they are the creator of Next.js, which is a pretty popular React framework with over 520,000,000 downloads in 2025. So, basically, if you've used the web, you have probably used something on Vercel.

Brian Fehrman:

And so, because of their positioning in the marketplace here, obviously, they might have access to a lot of important information: the environment variables, the API keys, database passwords, and other information that would be of interest to a threat actor. So I think this started with, as we mentioned, a Roblox script that someone downloaded. Is that right, Ethan?
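To make the environment-variable risk concrete, here's a minimal Python sketch of why a hosting platform's environment is such a prize. The variable names and values below are hypothetical, not Vercel's actual configuration; the point is that credential-shaped environment variables are exactly what a thief or leaky build log sweeps up.

```python
# Hosting platforms inject secrets into the build/runtime environment,
# so anything that can read that environment gets working credentials.
def collect_secrets(environ, markers=("KEY", "TOKEN", "SECRET", "PASSWORD", "DATABASE_URL")):
    """Return env vars whose names suggest they hold credentials --
    roughly what an infostealer or over-verbose log would capture."""
    return {k: v for k, v in environ.items()
            if any(m in k.upper() for m in markers)}

# Simulated deployment environment (all values hypothetical)
env = {
    "DATABASE_URL": "postgres://app:hunter2@db.internal/prod",
    "STRIPE_SECRET_KEY": "sk_live_...",
    "NODE_ENV": "production",
}
leaked = collect_secrets(env)
```

Everything but the non-sensitive `NODE_ENV` setting ends up in `leaked`, which is why leaked environment variables are routinely called the keys to the kingdom.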

Ethan Robish:

It sounds that way. Yeah. Vercel has released their official statements. The person compromised with the Roblox script, I think they're from a different company.

Ethan Robish:

Right? Context.ai?

Brian Fehrman:

Yep. That's a good distinction to make. So the original compromise did not start with Vercel. What was the name of that company?

Ethan Robish:

Context.ai. So the way I understand it is, maybe chronologically first, someone from Vercel used Context.ai. Context.ai is a legitimate company. They allowed an OAuth app access to use Context.ai internally for Vercel's, I don't know. I guess, does it say what it actually does?

Brian Fehrman:

Let's see. Are you talking about the OAuth portion?

Ethan Robish:

Yeah. So just, like, what is Context.ai? Why would Vercel have used them?

Brian Fehrman:

Oh, apparently, they make some kind of a productivity tool, but that is a good question of exactly what the productivity tool was. Yeah. Let's hit the Googles. So we've got Context.ai, Google Workspace, productivity tool. We'll just do some live research.

Brian Fehrman:

Platform that acts as an AI office suite offering over 40 integrations, including Google Workspace, Slack, and Microsoft 365.

Ethan Robish:

Sounds like that. Helps to automate. That Google Workspace integration is maybe the downfall here.

Brian Fehrman:

Yeah. Yep. Because it looks like, you know, what happened is that we had the initial breach over on the Context.ai side, like you mentioned, and it was through someone downloading a Roblox cheat script onto their work laptop, which was actually a wrapper for the Lumma infostealer. It sounds like it's a malware-as-a-service type tool that you can rent for a couple hundred dollars, and it sucked up a bunch of sensitive information from the Context.ai employee's system, including saved passwords, session cookies, and OAuth tokens, which is probably then the chain into the Vercel employee's Google Workspace account.
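The session-cookie theft Brian describes works because most web apps only check that the presented session ID exists in a server-side store. Here's a minimal Python sketch of that check (the usernames and session logic are illustrative, not any real product's implementation): whoever holds the cookie is treated as the user, with no password or MFA prompt.

```python
import secrets

session_store = {}  # server-side: session ID -> session data

def login(username, password):
    # (password verification elided) -- on success, issue a session ID
    sid = secrets.token_hex(16)
    session_store[sid] = {"user": username}
    return sid

def handle_request(cookie_sid):
    """Server-side check: any bearer of a valid session ID *is* the user."""
    session = session_store.get(cookie_sid)
    return session["user"] if session else None

# The victim logs in normally on their own machine...
victim_sid = login("employee@context.example", "correct horse")
# ...an infostealer copies the browser's cookie jar, and the attacker
# replays the same session ID from their machine:
attacker_view = handle_request(victim_sid)
```

This is why infostealers prize cookies over passwords: a replayed session skips the entire login flow, MFA included, until the session is expired or revoked.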

Brian Fehrman:

Yeah. That's what I'm saying.

Ethan Robish:

I imagine the session information for the Context.ai employee allowed the attacker deeper into Context.ai's architecture. And, yeah, that OAuth token, maybe it wasn't part of the initial session on the employee, but presumably that employee has internal access to Context.ai services. And through, you know, pivoting and escalation, the attacker was able to somehow get access to this OAuth grant that then allowed them into Vercel. And so, back to chronologically: at some point in the past, this Vercel employee had signed up for Context.ai and was presented with, like, oh, hey, do you wanna allow this application access to your Google Workspace account?

Ethan Robish:

And the privileges were: yes, everything. We were talking about that earlier. Sometimes when you sign up for something like that, you don't really get a say. Sometimes it lets you, you know, pick and choose, if the developer allows that option, but sometimes it's just, whatever the app requests, you have to take it or leave it. So whatever the case, the Vercel employee clicked allow.

Ethan Robish:

And so this OAuth application had this grant, this permission, established, and it was sitting there for who knows how long, until the Context.ai employee's system got compromised, the attacker pivoted, and then went back into Vercel.
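The consent step Ethan walks through can be sketched as the URL an app builds when it sends a user to Google's consent screen. The client ID and redirect URI below are hypothetical; the scope strings are real Google OAuth scopes. The key detail is that the app developer, not the user, picks the scope list, and `access_type=offline` requests a long-lived refresh token, which is exactly the kind of grant that sits dormant until someone abuses or revokes it.

```python
from urllib.parse import urlencode

def consent_url(client_id, redirect_uri, scopes):
    """Build a Google OAuth 2.0 authorization (consent) URL."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "access_type": "offline",   # ask for a refresh token that outlives the session
        "scope": " ".join(scopes),  # the user typically just sees "Allow"
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

# Broad scopes: full Drive plus full Gmail access
BROAD = ["https://www.googleapis.com/auth/drive",
         "https://mail.google.com/"]
# Narrow alternative: only files the app itself created
NARROW = ["https://www.googleapis.com/auth/drive.file"]

url = consent_url("hypothetical-client-id", "https://app.example/callback", BROAD)
```

Both URLs render as a single "Allow" button; only the fine print on the consent screen differs, which is why these grants so often end up broader than anyone remembers.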

Brian Fehrman:

Yeah. And the permissions thing, I think that's a good point. Like you said, a lot of it is take it or leave it. It's like, I don't know, maybe like signing some kind of a waiver, if you've ever signed something at, like, a gym, or you go on some excursion activity or whatever else where they might have you sign a waiver.

Brian Fehrman:

Like, you don't usually get to go through and just start crossing things out that you don't agree to. It's either you take it or you leave it. And so in this case, you know, if someone wants to use this tool, then that's kinda what they're stuck with. So it could be a matter of locking things further down within the Workspace, if you're able to segment those things out. It's hard to say without seeing the exact setup and exactly how these things tie in, but it's understandable how someone would accept all this.

Brian Fehrman:

But you were mentioning that there's some, I think you put it as, politically correct back and forth between the two companies right now.

Ethan Robish:

Yeah. So, I don't know who wrote the application that, you know, presented this request. It doesn't say whether it was internal or external to Vercel. Either way, the way Context.ai put it in their security notice was: Vercel's internal OAuth configurations appear to have allowed this action to grant these broader permissions in Vercel's enterprise Google Workspace.

Ethan Robish:

So it's almost like blaming the victim. Well, you shouldn't have allowed that to happen. You shouldn't have done it that way.

Brian Fehrman:

Yeah. I certainly do feel that it's a bit of victim shaming here. But, taking a broader look at things, that is, you know, one of the issues that we have with AI integration right now. And it's not something that is unique to AI.

Brian Fehrman:

Right? I mean, there's lots of apps that people have been installing for years that have nothing to do with AI but still have broadly scoped permissions. But now, with AI quickly becoming integrated into so many workflows, people wanting to hop on the train very quickly, if you will, trying to keep up with things, I think we're already seeing a lot of these issues popping up, and we'll probably continue seeing these issues of, you know, excessive agency and over-permissive use of AI that can lead to these kinds of incidents. Again, I'm not trying to shame, in any way, shape, or form, the person who, you know, had accepted the permissions. I think it's just a broader problem in general that we'll continue seeing for a while.

Ethan Robish:

So we're swinging the pendulum of security versus usability. We've Mhmm. We've been harping on, you know, only scope your permissions to what you need, like, the principle of least privilege. And now with AI, it's like, well, if we give it all these things, it can use all these things. It's good at everything.

Ethan Robish:

And so, I mean, you have the rise of OpenClaw, where it's like, you give it access to everything in your life. Right? I mean, some people do, but that can backfire. And this is one of the ways. I mean, this is kind of a classic supply chain attack.

Ethan Robish:

Right? We kinda joked about Context.ai's response of, you know, blaming the victim. But, I mean, realistically, this is a type of attack that is on the rise. Supply chain attacks are in the news more and more. This reminded me of the Aqua Security Trivy Mhmm.

Ethan Robish:

Supply chain attack, which trickled down into a whole bunch of different places, including a Python package called LiteLLM, which is used by a whole bunch of, you know, AI coding users. And, yeah, you could be several steps removed from how you got compromised. So in this case, Context.ai got compromised, but then it hopped to Vercel. And Vercel not only does Next.js and hosting.

Ethan Robish:

They have a Vercel Labs GitHub that has some projects that maybe you've heard of; they're quite popular. Skills.sh, which is probably the number one way of finding and installing AI skills. It's an npm package that is on their GitHub. Agent browser, which, again, is one of the most popular ways of giving your agent access to an actual web browser.

Ethan Robish:

And these are hosted on Vercel's GitHub. Now, we have no evidence that, you know, any part of their GitHub infrastructure was compromised, but it makes you wonder. You don't know.
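The blast-radius worry Ethan raises can be sketched as a walk over a reverse-dependency graph: compromise one upstream package and you transitively reach every consumer of every consumer. The package names below are made up; the structure is the point.

```python
from collections import deque

def blast_radius(dependents, compromised):
    """BFS over a who-depends-on-me graph from one compromised package,
    returning everything transitively exposed."""
    reached, queue = set(), deque([compromised])
    while queue:
        pkg = queue.popleft()
        for consumer in dependents.get(pkg, []):
            if consumer not in reached:
                reached.add(consumer)
                queue.append(consumer)
    return reached

# Hypothetical ecosystem: package -> list of packages that depend on it
dependents = {
    "upstream-lib": ["tool-a", "tool-b"],
    "tool-a": ["app-1"],
    "tool-b": ["app-2", "app-3"],
}
hit = blast_radius(dependents, "upstream-lib")
```

One poisoned `upstream-lib` release reaches all five downstream projects, even though `app-1` through `app-3` never installed it directly, which is exactly the "several steps removed from how you got compromised" problem.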

Brian Fehrman:

Yeah. Exactly. I mean, the risk is very real. I think that comparison to LiteLLM in the Trivy chain was, I mean, spot on.

Brian Fehrman:

I mean, we are so interconnected at this point, with all these different packages getting pulled into all these different places, and who has access to those at the developer level, and the organizations those developers work at. I completely agree. And it seems that what we're finding, too, is that threat actors are increasingly using AI to help speed up and make those connections very quickly, without having to spend, you know, as much time as they would have had to spend before, piecing all the puzzle pieces together, chaining this together manually. Now they've got, basically, their spotter, their copilot, you know, whatever you wanna call it, that is able to help them quickly piece things together to very quickly cause broad and significant harm.

Ethan Robish:

Yeah. Vercel called the attackers highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems. But I wonder how long before operational velocity of this magnitude just becomes the norm because of AI assistance.

Brian Fehrman:

Yeah. I think we're already seeing that. I think that's one of the, you know, topics of discussion: just how much this is lowering the bar for people. I mean, often before, it seemed like it was just the top-tier actors you'd hear about in these different supply chain compromises. And it seemed like they were a little less common. But now, just about every week, we're basically hearing about a supply chain compromise or two.

Brian Fehrman:

The pace is just mind-bending at this point. So, yeah, I think we covered all the points in the story there. Obviously, one thing companies can try to do is, you know, basically audit permissions throughout your environment, see what has access to where. But, I mean, obviously, it's not gonna be foolproof or bulletproof by any means.

Brian Fehrman:

And at some point, if you're going to be integrating with other applications, you have to allow the permissions. So that's kind of a tough one. But, you know, try to segment out data the best you can so that not everyone in the organization has access to everything. Try to keep sensitive data separate from, you know, more day-to-day stuff, to kind of reduce the blast radius. But then also, not to lay everything on Vercel: looking at the Context.ai person, probably don't download random scripts off the Internet onto your work computer like that.
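The audit Brian suggests can be sketched as a filter over OAuth grant records. In Google Workspace, for example, the raw data could come from the Admin SDK Directory API's per-user token listing; the grant records and app names below are hypothetical, though the scope strings are real Google scopes. The sketch just flags any (user, app) pair holding a broadly scoped grant so a human can review or revoke it.

```python
# Scopes broad enough to warrant review (full mail, full Drive, user admin)
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_risky_grants(grants):
    """Return (user, app) pairs whose grant includes any broad scope."""
    return [(g["user"], g["app"]) for g in grants
            if BROAD_SCOPES & set(g["scopes"])]

# Hypothetical grant records, shaped like what an admin export might return
grants = [
    {"user": "alice@corp.example", "app": "ai-notes-tool",
     "scopes": ["https://www.googleapis.com/auth/drive",
                "https://mail.google.com/"]},
    {"user": "bob@corp.example", "app": "calendar-widget",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
risky = flag_risky_grants(grants)
```

Running something like this on a schedule turns "who clicked Allow on what, years ago?" from an unknown into a reviewable list, which is most of the battle with dormant OAuth grants.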

Ethan Robish:

That is sage advice right there.

Brian Fehrman:

Yep. Yep. Alright.

Ethan Robish:

So, talk a little bit about the organization that did it. We touched on it at the beginning, I think. But just

Brian Fehrman:

Yeah. Go for it.

Ethan Robish:

So ShinyHunters claimed credit for this breach, and they're the ones that posted the ransom, or what have you, on BreachForums, including Vercel's employee records and screenshots of their internal dashboard. And ShinyHunters have had arrests in 2024, but, as Brian mentioned, this is kind of like malware as a service, ransomware as a service, infostealer as a service. It's more of a cartel, right, than a single organization. You can't really take down the head, because you arrest one or two people and nothing meaningfully changes. It just keeps running, business as usual.

Ethan Robish:

I mean, it is a business. If you look up some of these things, they run it like a business, and it's crazy.

Brian Fehrman:

Yeah. When there's money to be made, there are always gonna be people banding together to make said money, for sure. So, obviously, like you said, the arrests, I'm not saying that we shouldn't arrest people. Obviously, we definitely should, but we can't expect overnight changes from doing so.

Brian Fehrman:

Yeah. It looks like they've actually hit quite a few things in the past. They've got a lot of stuff to their name. But it actually looks like the real ShinyHunters are claiming that they're not involved here. So we don't know if it's a subset of them, a copycat group. Nobody really knows.

Brian Fehrman:

Yeah.

Ethan Robish:

Not quite as shiny hunters. Yeah. The dimly lit hunters.

Brian Fehrman:

Yeah. Nice. Cool. Well, yeah, I think that about wraps it up. I think we covered all the points.

Brian Fehrman:

So it'd be interesting to see if anything else comes of this particular attack. If not, we'll be on the lookout for the next one and keep everyone informed with our insights and hot takes and everything else. So I hope everyone enjoyed the episode, and keep on prompting.