Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? Solo.io is here to be your guide to connectivity in the cloud-native universe!
Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking. They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.
Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters: your application? Solo.io’s got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io.
And here’s the real game-changer: a common interface for every connection, in every direction, all with one API. It’s the future of connectivity, and it’s called Gloo by Solo.io.
DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit solo.io/screaminginthecloud today and level up your networking game.
Corey: As hybrid cloud computing becomes more pervasive, IT organizations need an automation platform that spans networks, clouds, and services—while helping deliver on key business objectives. Red Hat Ansible Automation Platform provides smart, scalable, sharable automation that can take you from zero to automation in minutes. Find it in the AWS Marketplace.
Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. I’ve been studiously ignoring a number of buzzword, hype-y topics, and it’s probably time that I addressed some of them. One that I’ve been largely ignoring, mostly because of its prevalence at Expo Hall booths at RSA and other places, has been software bill of materials and supply chain attacks. Finally, I figured I would indulge the topic. Today I’m speaking with Adnan Khan, lead security engineer at Praetorian. Adnan, thank you for joining me.
Adnan: Thank you so much for having me.
Corey: So, I’m trying to understand, on some level, where the idea of these SBOM or bill-of-materials attacks starts and stops. I’ve seen it go as far as upstream dependencies having a vulnerability. Great. I’ve seen misconfigurations in how companies wind up configuring their open-source presences. There have been a bunch of different, almost orthogonal concepts, to my mind, lumped together as this one big scary thing, because if we have a single big scary thing we can point at, that unlocks budget. Am I being overly cynical on this or is there more to it?
Adnan: I’d say there’s a lot more to it. And there’s a couple of components here. So first, you have the SBOM-type approach to security where organizations are looking at which packages are incorporated into their builds. And vulnerabilities can come out in a number of ways. So, you could have software actually have bugs or you could have malicious actors actually insert backdoors into software.
I want to talk more about that second point. How do malicious actors actually insert backdoors? Sometimes it’s compromising a developer. Sometimes it’s compromising credentials used to push packages to a repository. But other times, it could be as simple as just making a pull request on GitHub. And that’s somewhere I’ve spent a bit of time doing research, building off of techniques that other people have documented, and also trying out some attacks for myself against two Microsoft repositories, and several others that I’ve reported over the last few months, that would have allowed an attacker to slip a backdoor into code and expand the number of projects they’re able to attack beyond that.
Corey: I think one of the areas that we’ve seen a lot of this coming from has been the GitHub Actions space. And I’ll confess that I wasn’t aware of a few edge-case behaviors around this. Most of my experience has been with client-side Git configuration in the .git directory—pre-commit hooks being a great example—which, intentionally and by design from a security perspective, does not convey when you check that code in and push it somewhere, or grab someone else’s. That’s probably for the best because otherwise, it’s, “Oh yeah, just go ahead and copy your password hash file and email it somewhere via a series of arcane shell script stuff.” The vector is there. I was unpleasantly surprised somewhat recently to discover that when I cloned a public project, started running it locally, and then added it to my own fork, it would attempt to invoke a whole bunch of GitHub Actions workflows that I’d never, you know, allowed it to run. That was… let’s say, eye-opening.
Adnan: [laugh]. Yeah. So, on the particular topic of GitHub Actions, with the pull request as an attack vector, there’s a lot of different forms an attack can take. One of the more common ones—and this is something that’s been around for just about as long as GitHub Actions has been around—involves a certain trigger called `pull_request_target`. What this means is that when someone makes a pull request against the base repository—say, a branch within the base repository such as main—that will be the workflow trigger, and the workflow runs in the context of that base repository.
And from a security perspective, when a workflow runs on that trigger, it does not require approval at all. And that’s something a lot of people don’t realize when they’re configuring their workflows. Because normally, when you have a plain pull request trigger, the maintainer can check a box that says, “Require approval for all external pull requests.” And they think, “Great, everything needs to be approved. If someone tries to add malicious code in a pull request, we can look at the code before it runs, and we’re fine.”
But with a `pull_request_target` trigger, there is no approval, and there’s no way to require an approval, except by configuring the workflow securely. So, in one particular case against a Microsoft repository—this was a reusable GitHub Action called GPT Review—the workflow was vulnerable because it checked out code from my branch: if I made a pull request, it checked out code from my branch—and you could see this by looking at the workflow—and then ran tests on that branch, so it’s running my code. By modifying the entry points, I could run code in the context of that base branch, steal secrets from it, and use those to perform malicious actions.
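The pattern Adnan describes can be sketched in a few lines of workflow YAML. This is an illustrative reconstruction, not the actual GPT Review workflow; the file name, steps, and secret name are all made up for the example:

```yaml
# .github/workflows/review.yml -- illustrative sketch, NOT the real GPT Review workflow
on: pull_request_target   # fires on external PRs, runs in the BASE repo's context

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # The dangerous part: checking out the PR head means the steps
          # below execute attacker-controlled code while secrets are in scope.
          ref: ${{ github.event.pull_request.head.sha }}
      - run: pip install . && pytest   # attacker controls setup.py and the tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}   # hypothetical secret name
```

Because the trigger is `pull_request_target` rather than `pull_request`, the “require approval for external pull requests” setting never applies, and the attacker’s test code runs with the secret in its environment.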
Corey: Got you. It feels like historically, one of the big threat models around things like this—when you have any sort of CI/CD exploit—falls down one of two branches. It’s either getting secret access so you can leverage those credentials to pivot into other things—I’ve seen a lot of that in the AWS space—or, more boringly, and more commonly in many cases, it seems to be, oh, how do I get it to run this crypto miner nonsense. With the somewhat large-scale collapse of crypto across the board, it’s been convenient to see that become less prevalent, but it’s still there; just because you’re not making as much money means you’ll have to do more of it when it’s all in someone else’s account. So, I guess it’s easier to see and detect a lot of the exploits that require a whole bunch of compute power. The “oh, by the way, we stole your secrets and now we’re going to use them to move laterally into an organization” ones seem far more… I guess, dangerous, and also sneaky.
Adnan: Yeah, absolutely. And you hit the nail on the head there with sneaky because when I first demonstrated this, I made a test account, I created a PR, and I took a couple of actions, such as modifying the name of the release for the repository—I just put a little tag on it—and didn’t make any other changes. And then I also created a feature branch in one of Microsoft’s repositories. I don’t have permission to do that. That just sat there for almost two weeks, and then someone else exploited it, and then they responded to it.
So, sneaky is exactly the word you could use to describe something like this. And another reason why it’s concerning goes beyond the secret disclosure—in this case, the repository only had an OpenAI API key, so… okay, you can talk to ChatGPT for free. But this was itself a GitHub Action, and it was used by another Microsoft machine-learning project that had a lot more users, called SynapseML, I believe. So, what someone could do is backdoor this Action by creating a commit in a feature branch—which they can do by stealing the built-in GitHub token; that’s something all GitHub Actions runs have, and while its permissions vary, in this case it had the right ones. The attacker could create a new branch, modify code in that branch, and then modify the tag—in Git, tags are mutable, so you can just change the commit a tag points to—and now, every time that other Microsoft repository runs GPT Review to review a pull request, it’s running attacker-controlled code, which could potentially backdoor that other repository and steal its secrets.
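The tag-moving step in particular is plain Git behavior, nothing GitHub-specific. A scratch-repo sketch of how a release tag—like the `@v1` that consumers pin an Action to—can be silently repointed:

```shell
# Demonstrate that Git tags are mutable (runs in a throwaway directory).
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "legitimate v1 release"
git tag v1
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "backdoored commit"
git tag -f v1                      # silently repoint the existing tag
# On a real remote, this would be: git push --force origin v1
# Any workflow pinned to `uses: org/action@v1` now runs the new commit.
git log -1 --format=%s v1          # prints: backdoored commit
```

This is why pinning Actions to a full commit SHA, rather than a tag, is the usual hardening advice: the SHA is immutable while the tag is not.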
So that’s, you know, one of the scary parts of backdooring a GitHub Action in particular. And I believe there was a very informative Black Hat talk this year—I’m forgetting the name of the author, but it was a very good watch—about how Actions workflows can be vulnerable, and this is kind of an example of that; it just happened to be that this one was an Action as well.
Corey: That feels like an area of exploit that is becoming increasingly common. I tie it almost directly to the rise of GitHub Actions as the default CI/CD system that a lot of folks have been using. For the longest time, the way people would exploit things seemed to be a poorly configured Jenkins box hanging out somewhere in your environment—the exception to the Infrastructure as Code rule, because everyone has access to it, it’s configured by hand, and invariably it has access to production. For a while, you had CircleCI and Travis CI, before Travis imploded and Circle did a bunch of layoffs. Who knows where they’re at these days?
But it does seem that the common point now has been GitHub Actions, and a .github/workflows directory within that Git repo full of YAML files effectively means that a whole bunch of stuff can happen that you might not be fully aware of when you’re cloning or following along with someone’s tutorial somewhere. That has caught me out in a couple of strange ways, but nothing disastrous, because I do believe in realistic security boundaries. I just worry how much of this is the emerging factor of having a de facto standard around this versus something that Microsoft has actively gotten wrong. What’s your take on it?
Adnan: Yeah. So, my take here is that GitHub could absolutely be doing a lot more to help prevent users from shooting themselves in the foot. Their documentation is very clear and, quite frankly, very good, but people aren’t warned when they make certain configuration choices in their workflows. I mean, GitHub will happily take the settings; they hit commit, and now the workflow could be vulnerable. There’s no automatic linting of workflows, no little suggestion box popping up like, “Hey, are you sure you want to configure it this way?”
The technology to detect that is there. There are a lot of third-party utilities that will lint Actions workflows. Heck, for finding a lot of these pull request target-type vulnerabilities, I use a GitHub code search query. It’s just a regular expression. So, having something that at least nudges users not to make that mistake would go really far in helping people avoid adding these vulnerabilities to their projects.
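A rough local stand-in for that kind of query, assuming the same red flags Adnan hunts for (the workflow file here is synthetic, written just to be flagged): grep for workflows that combine `pull_request_target` with a checkout of the PR head.

```shell
# Build a tiny repo layout containing one risky workflow, then flag it.
cd "$(mktemp -d)" && mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
on: pull_request_target
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
EOF
# Files using the privileged trigger AND checking out the PR head:
grep -rl 'pull_request_target' .github/workflows \
  | xargs grep -l 'github\.event\.pull_request\.head'
# prints: .github/workflows/ci.yml
```

A real sweep would use GitHub code search with a regular expression across public repositories, but the matching logic is the same two-condition filter.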
Corey: It seems like there have also been issues around the GitHub Actions integration approach where OIDC has not been scoped correctly a bunch of times. I’ve seen a number of articles come across my desk in that context, and fortunately, when I wound up handing out the ability for one of my workflows to deploy to my AWS account, I got it right because I had no idea what I was doing and carefully followed the instructions. But I can totally see overlooking that one additional parameter that leaves things just wide open for disaster.
Adnan: Yeah, absolutely. That’s one where I haven’t spent too much time actually looking for it myself, but I’ve definitely read those articles you mentioned, and yeah, it’s very easy for someone to make that mistake, just like it’s easy for someone to misconfigure their Action in general. In some of the cases where I found vulnerabilities, there would actually be a commit saying, “Hey, I’m making this change because the Action needs access to these certain secrets. And oh, by the way, I need to update the checkout step so it actually checks out the PR head so that it’s testing that PR code.” Like, people are actively making a decision that makes it vulnerable because they don’t realize the implication of what they’ve just done.
And the second Microsoft repository that I found the bug in was called Microsoft Confidential Sidecar Containers. In that repository, a week prior to me identifying the bug, the developer made a commit saying, we’re making this change and it’s okay because it requires approval. Well, it doesn’t, because it’s a pull request target trigger.
Corey: Part of me wonders how much of this is endemic to open-source as envisioned through enterprises versus my world of open-source, which is just eh, I’ve got this weird side project in my spare time, and it seemed like it might be useful to someone else, so I’ll go ahead and throw it up there. I understand that there’s been an awful lot of commercialization of open-source in recent years; I’m not blind to that fact, but it also seems like there’s a lot of companies playing very fast and loose with things that they probably shouldn’t be since they, you know, have more of a security apparatus than any random contributors standing up a clone of something somewhere will.
Adnan: Yeah, we’re definitely seeing this a lot in the machine-learning space because companies are trying to move so quickly to build things. OpenAI has blown up quite a bit recently, and everyone’s trying to get a piece of that machine-learning pie, so to speak. Another thing you’re seeing is people deploying self-hosted runners with Nvidia—what is it, the A100?—some graphics card that’s, like, $40,000 apiece, attached to runners for running integration tests on machine-learning workflows. And someone could, via a pull request, also just run code on those and mine crypto.
Corey: I kind of miss the days when exploiting computers was basically just a way for people to prove how clever they were, or once in a blue moon come up with something innovative. Now, it’s like, well, we’ve gone all around the mulberry bush just so we can basically make computers solve a sudoku, and in return, turn that into money down the road. It’s frustrating, to put it gently.
Adnan: [laugh].
Corey: When you take a look across the board at what companies are doing and how they’re embracing the emerging capabilities inherent to these technologies, how do you avoid becoming a cautionary tale in the space?
Adnan: So, on the flip side of companies having vulnerable workflows, I’ve also seen a lot of very elegant ways of writing secure workflows. Some repositories are using deployment environments—a GitHub Actions feature—to enforce approval checks. So, workflows that do need to run on pull request target, because they need to access secrets for pull requests, will have a step that requires a deployment environment to complete, and that deployment environment is just an approval; it doesn’t do anything else. Essentially, someone who has permissions to the repository will go in, approve that environment check, and only then will the workflow continue. So, that adds mandatory approvals to pull requests that would otherwise just run without approval.
And this is, particularly, on the pull request target trigger. Another approach is making it so the workflow only runs on the label event, and then having a maintainer add a label so the tests can run and remove the label afterward. So, that’s another way companies are figuring out how to write secure workflows and not leave their repositories vulnerable.
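Sketches of both mitigations follow; the environment name and label name are invented for illustration, and a real workflow would carry its actual test steps where the comments sit:

```yaml
# Mitigation 1 (sketch): gate the privileged job behind a deployment
# environment whose only purpose is to require human approval.
on: pull_request_target
jobs:
  privileged-tests:
    runs-on: ubuntu-latest
    environment: external-pr-review   # configured with required reviewers
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # ...steps that need secrets run only after a maintainer approves...
---
# Mitigation 2 (sketch): run only when a maintainer applies a label;
# the maintainer removes the label again after the run.
on:
  pull_request_target:
    types: [labeled]
jobs:
  privileged-tests:
    if: github.event.label.name == 'safe-to-test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
```

In both cases, the PR head still gets checked out and tested; the difference is that a human with repository permissions sits between the untrusted code and the secrets.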
Corey: It feels like every time I turn around, GitHub Actions has gotten more capable. And I’m not trying to disparage the product; it’s kind of the idea of what we want. But it also means that awareness in the larger community of how these things can go awry certainly hasn’t kept up with the pace of feature innovation. How do you balance this without becoming the Department of No?
Adnan: [laugh]. Yeah, it’s a complex issue. I think GitHub has evolved a lot over the years. Actions—despite some of the security issues that happen because people don’t configure workflows properly—is a very powerful product. For a CI/CD system that works at the scale it does, allowing so many repositories to work and integrate with everything else, it’s really easy to use. So, it’s definitely something you don’t want to take away, or have an organization move away from, because they’re worried about the security risks.
When you have features coming in so quickly, I think it’s important to have a baseline, kind of like mandatory reading: if you’re a developer who writes and maintains open-source software, go read through this document so you can understand the dos and don’ts. Instead, it’s a patchwork where some people take a good security approach and write secure workflows, and some people just kind of stumble through Stack Overflow, find what works, mess around with it until their deployment is working and their CI/CD is working and they get the green checkmark, and then move on to their never-ending list of tasks—because they’re always working on a deadline.
Corey: Reminds me of a project I saw a few years ago when it came out that Volkswagen had been lying to regulators. It was a framework someone built called ‘Volkswagen’ that would detect if it was running inside of a CI/CD environment, and if so, it would automatically make all the tests pass. I have a certain affinity for projects like that. Another one was a tool that would intentionally degrade the performance of a network connection so you could simulate having a latent or stuttering connection with packet loss, and they call that ‘Comcast.’ Same story. I just thought that it’s fun seeing people get clever on things like that.
Adnan: Yeah, absolutely.
Corey: When you take a look now at the larger stories that are emerging in the space right now, I see an awful lot of discussion coming up that ties to SBOMs and understanding where all of the components of your software come from. But I chased some stuff down for fun once, and I gave up after 12 dependency leaps from just random open-source frameworks. I mean, I see the Dependabot problem that this causes as well, where whenever I put something on GitHub and then don’t touch it for a couple of months—because that’s how I roll—I come back and there’s a whole bunch of terrifyingly critical updates that it’s warning me about, but given the nature of how these things get used, it’s never going to impact anything that I’m currently running. So, I’ve learned to tune it out and just ignore it when it comes in, which is probably the worst of all possible approaches. Now, if I worked at a bank, I should probably take a different perspective on this, but I don’t.
Adnan: Mm-hm. Yeah. And that’s kind of a problem you see, not just with SBOMs. It’s just security alerting in general, where anytime you have some sort of signal and people who are supposed to respond to it are getting too much of it, you just start to tune all of it out. It’s like that human element that applies to so much in cybersecurity.
And I think for the particular SBOM problem—where, yeah, you’re correct, a lot of it isn’t reachable because you’re using a library for one particular function and that’s it—this is somewhere I’m not that much of an expert, in doing more static source analysis and reachability testing, but I’m certain there are products and tools that offer that feature: actually prioritizing SBOM-based alerts based on reachability, versus just whether something is a dependency or not.
[midroll 00:20:00]
Corey: I feel like, on some level, wanting people to be more cautious about what they’re doing is almost shouting into the void because I’m one of the only folks I’ve found who has made the assertion that, oh yeah, companies don’t actually care about security. Yes, they email you all the time after they’ve failed to protect your security, telling you how much they care about it, but when you look at where they invest, feature velocity always seems to outpace investment in security approaches. And take a look right now at the hype we’re seeing across the board when it comes to generative AI. People are excited about the capabilities, and security is a distant afterthought for an awful lot of these things. I don’t know how you drive a broader awareness of this in a way that sticks, but clearly, we haven’t collectively found it yet.
Adnan: Yeah, it’s definitely a concern. For example, you can look at GitHub’s roadmap, and there’s a feature there that’s, oh, automatic AI-based pull request handling. Okay, so does that mean one day you’ll have a GitHub-powered LLM just approve PRs based on whether it determines they’re a good improvement or not? Obviously, that’s not the case now, but looking forward maybe five, six years in the future, in the pursuit of that ever-increasing velocity, could you ever have a situation where actual code contributions are reviewed fully by AI and then approved and merged? Like, yeah, that’s scary, because now you have a threat actor that could specifically tailor contributions to trick the AI into thinking they’re great, when they could turn around and be a backdoor being added to the code.
Obviously, that’s very far in the future and I’m sure a lot of things will happen before that, but it starts to make you wonder, like, if things are heading that way. Or will people realize that you need to look at security at every step of the way instead of just thinking that these newer AI systems can just handle everything?
Corey: Let’s pivot a little bit and talk about your day job. You’re a lead security engineer at what I believe to be a security-focused consultancy. Or—
Adnan: Yeah.
Corey: If you’re not a SaaS product. Everything seems to become a SaaS product in the fullness of time. What’s your day job look like?
Adnan: Yeah, so I’m a security engineer on Praetorian’s red team. In my day-to-day, I’ll switch between application security and red-teaming. That gives me the opportunity to test out newer things in the field, but then also go and do more traditional application security assessments, code reviews, and reverse engineering to break up the pace of work. Because red-teaming can be very fast-paced and exciting, but sometimes, you know, that can lead to some pretty late nights. That’s just the nature of being on a red team [laugh].
Corey: It feels like as soon as I get into the security space and start talking to cloud companies, they get a lot more defensive than when I’m making fun of, you know, bad service naming or APIs that don’t make a whole lot of sense. It feels like companies have a certain sensitivity around the security space that applies to almost nothing else. Do you find, as a result, that a lot of the time when you’re having conversations with companies and they figure out that, oh, you’re on a red team, you’re a security researcher, suddenly it’s, we’re not going to talk to you the way we otherwise might; we thought you were a customer, but nope, you can just go away now?
Adnan: [laugh]. I personally haven’t had that experience with cloud companies. I don’t know if I’ve really tried to buy a lot. If I ever buy some infrastructure from cloud companies as an individual, I just kind of sign up and put in my credit card, and, you know, they just take my money. So, I don’t think I’ve really, personally, run into anything like that yet [laugh].
Corey: Yeah, I’m curious to know how that winds up playing out in some of these, I guess, more strategic, larger company environments. I don’t get to see that because I’m basically a tiny company that dabbles in security whenever I stumble across something, but it’s not my primary function. I just worry on some level one of these days, I’m going to wind up accidentally dropping a zero-day on Twitter or something like that, and suddenly, everyone’s going to come after me with the knives. I feel like [laugh] at some point, it’s just going to be a matter of time.
Adnan: Yeah. I think when it comes to disclosing things and talking about techniques, the key thing here is that a lot of what I’m talking about—a lot of what I’ll be talking about in some blog posts that are coming out—is stuff these companies are seeing themselves. Like, they recognize that these are security issues people are introducing into code. They encourage people not to make these mistakes, but when that’s buried four links deep in documentation, and developers are tight on time and aren’t digging through the security documentation—they’re just looking at what works, getting it to work, and moving on—that’s where the issue is. So, from the perspective of raising awareness, I don’t feel bad talking about something the company itself agrees is a problem. It’s just that, a lot of the time, their own engineers don’t follow their own recommendations.
Corey: Yeah, I have opinions on these things and unfortunately, it feels like I tend to learn them in some of the more unfortunate ways of, oh, yeah, I really shouldn’t care about this thing, but I only learned what the norm is after I’ve already done something. This is, I think, the problem inherent to being small and independent the way that I tend to be. We don’t have enough people here for there to be a dedicated red team and research environment, for example. Like, I tend to bleed over a little bit into a whole bunch of different things. We’ll find out. So far, I’ve managed to avoid getting it too terribly wrong, but I’m sure it’s just a matter of time.
So, one area that I think seems to be a way that people try to avoid cloud issues is oh, I read about that in the last in-flight magazine that I had in front of me, and the cloud is super insecure, so we’re going to get around all that by running our own infrastructure ourselves, from either a CI/CD perspective or something else. Does that work when it comes to this sort of problem?
Adnan: Yeah, glad you asked about that. So, we’ve also seen companies that have a large open-source presence on GitHub just opt to have self-hosted GitHub Actions runners, and that opens up a whole different Pandora’s box of attacks that an attacker could take advantage of—attacks that are only there because they’re using that kind of runner. The default GitHub Actions runner is just an agent that runs on a machine: it checks in with GitHub Actions, pulls down builds, runs them, and then waits for another build. So, the default state is a non-ephemeral runner with the ability to fork off tasks that run in the background. And you can have a public repository with a self-hosted runner attached to it, either at the organization level or at the repository level.
What an attacker can do is create a pull request that modifies a workflow to run on a self-hosted runner, writing whatever they want in that workflow. As long as they were a previous contributor—meaning they fixed a typo; even a single-character typo change could be enough—or made some other small contribution, the arbitrary job they wrote is picked up by that self-hosted runner. They can fork off a process to run in the background, and that just continues to run; the job finishes, and they simply close the pull request. Business as usual, but now they’ve got an implant on the self-hosted runner. And if the runners are non-ephemeral, it’s very hard to completely lock that down.
And that’s something I’ve seen—there’s quite a bit of it on GitHub—and you can identify it just by looking at the run logs. It kind of comes from people saying, “Oh, let’s just self-host our runners,” but then not configuring them properly. And that opens them up not only to tampering with their repositories and stealing secrets, but now, depending on where your runner sits, you could potentially be giving an attacker a foothold in your cloud environment.
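On the mitigation side, the stock GitHub Actions runner agent can be registered in ephemeral mode, where it takes exactly one job and then unregisters. A sketch of that registration; the URL and token are placeholders, and you would still pair this with a freshly provisioned machine per job:

```shell
# Register a self-hosted runner as ephemeral (placeholders: ORG/REPO, TOKEN).
# The --ephemeral flag makes the runner accept one job and then deregister,
# so a backgrounded implant dies with the machine instead of persisting.
./config.sh --url https://github.com/ORG/REPO --token TOKEN --ephemeral
./run.sh
```

Ephemeral registration on its own doesn’t isolate the job from the host; it just guarantees the host is never reused, which is what makes the throwaway-VM-per-build pattern work.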
Corey: Yeah, that seems like it’s generally a bad thing. I’ve found that cloud tends to be more secure than running it yourself in almost every case, with the exception that once someone finds a way to break in, there are suddenly a lot more eggs in a very large, albeit more secure, basket. So, it feels like a consistent trade-off. But as time goes on, it feels like it is less and less defensible, I think, to pick an on-prem strategy from a pure security point of view. I mean, there are reasons to do it. I’m just not sure.
Adnan: Yeah. And I think the distinction to be made there, in particular with CI/CD runners, is that there’s full cloud, meaning you let your CI/CD provider host your infrastructure as well; there’s the hybrid approach you mentioned, where you’re using a CI/CD provider but bringing your own cloud infrastructure that you think you can secure better; or you have your runners sitting in vCenter in your own data center. And both having a runner in your cloud and having one in your data center could be equally vulnerable if you’re not segmenting builds properly. That’s the core issue with self-hosted runners: if they’re not ephemeral, it’s very hard to cut off all attack paths. There’s always something an attacker can do to tamper with another build that will have some kind of security impact. You need to completely isolate your builds, and that’s essentially what you see in a lot of these newer guidances like the [unintelligible 00:30:04] framework; the core recommendation is, like, one build, one clean runner.
Corey: Yeah, that seems to be the common wisdom. I’ve been doing a lot of work with my own self-hosted runners that run inside of Lambda. Definitionally those are, of course, ephemeral. And there’s a state machine that winds up handling that and screams bloody murder if there’s a problem with it. So far, crossing fingers hoping it works out well.
And I have it bound to a very limited set of role permissions and, of course, its own account to constrain blast radius. But still—there are no guarantees in this. The reason I build it the way I do is that, all right, worst case, someone gets access to it. The only thing they’re going to have the ability to do is, frankly, run up my AWS bill, which is an area I have some small amount of experience with.
Adnan: [laugh]. Yeah, yeah, that’s always kind of the core thing where if you get into someone’s cloud, like, well, just sit there and use their compute resources [laugh].
Corey: Exactly. I kind of miss when that was the worst failure mode you had for these things.
Adnan: [laugh].
Corey: I really want to thank you for taking the time to speak with me today. If people want to learn more, where’s the best place for them to find you?
Adnan: I do have a Twitter account. Well, I guess you can’t call it Twitter anymore, but, uh—
Corey: Watch me. Sure I can.
Adnan: [laugh]. Yeah, so I’m on Twitter, and it’s @adnanthekhan. So, it’s like my first name with ‘the’ and then K-H-A-N because, you know, my full name probably got taken up, like, years before I ever made a Twitter account. So, occasionally I tweet about GitHub Actions there.
And on Praetorian’s website, I’ve got a couple of blog posts. The one that really goes in-depth talking about the two Microsoft repository pull request attacks, and a couple of other ones that have been disclosed, will hopefully drop on the twenty—what is that, Tuesday? That’s going to be the… that’s the 26th. So, it should be airing on the Praetorian blog then. So, if you—
Corey: Excellent. It should be out by the time this is published, so we will, of course, put a link to that in the [show notes 00:32:01]. Thank you so much for taking the time to speak with me today. I appreciate it.
Adnan: Likewise. Thank you so much, Corey.
Corey: Adnan Khan, lead security engineer at Praetorian. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that’s probably going to be because your podcast platform of choice is somehow GitHub Actions.
Adnan: [laugh].
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.