Antisyphon Training Anticasts

Existential Courage: The Hitchhiker's Guide to Surviving AI in Cloud

🛝 Webcast Slides -
https://www.blackhillsinfosec.com/wp-content/uploads/2026/03/SLIDES_The-Hitchhikers-Guide-to-Surviving-AI-in-Cloud.pdf

Can AI really help secure the cloud, or is it quietly making things worse?

Join Antisyphon instructor and security researcher Andrew Krug for a free one-hour Anti-cast on what really happens when AI collides with cloud security.

  • Andrew will cut through the hype and look at how LLMs affect IAM, monitoring, governance, and real-world risk.
  • Learn where AI helps, where it hallucinates, and how to defend cloud environments without panic.
  • Expect practical insights, grounded strategy, and a bit of cosmic humor. Bring your towel. Don’t panic.

Chapters
  • (00:00) - Intro
  • (02:44) - Our trip through the galaxy
  • (03:38) - What kind of literature is the Hitchhikerʼs Guide to the Galaxy?
  • (04:29) - Don't Panic
  • (05:18) - The Agentic Revolution
  • (05:56) - Cast of Characters
  • (07:44) - The State of AI in the Enterprise - Deloitte
  • (10:53) - How do teams build agents?
  • (12:11) - What are teams using agents for?
  • (13:17) - Why build on Bedrock + AWS
  • (14:17) - Are we learning? Or not learning?
  • (15:58) - Are you the fixed point in a shifting universe?
  • (17:01) - TL;DR the majority of these are the same threats we have been dealing with
  • (18:16) - Prompt Injection is the new SQL Injection
  • (19:13) - Sandbox Escape
  • (20:20) - Shared Structure: General Software & AI Supply Chains
  • (23:03) - The Bad News
  • (24:29) - Threat Vector Coverage
  • (25:24) - The Expanding Universe of Secrets
  • (28:15) - Hope is not a strategy! But a strategy can give us hope.
  • (28:36) - (Yes we AI-Removed Andrew's Coughs)
  • (29:40) - Back to: Hope is not a strategy! But a strategy can give us hope.
  • (30:47) - Plan for maximum risk scenarios
  • (33:03) - Squishy Stuff
  • (34:38) - KIRO
  • (37:11) - Infrastructure and Data Protection
  • (39:11) - Privilege Escalation Paths – https://pathfinding.cloud
  • (40:58) - The AI Stuff
  • (42:01) - So anyway, here's Firewall
  • (43:34) - OpenTelemetry
  • (46:47) - You still have to have logs
  • (48:22) - MCP
  • (49:22) - Learn more from Andrew in: Securing the Cloud Foundations
  • (50:23) - Post Show Q&A

Credits
Chat with your fellow attendees in the BHIS Discord server:
https://discord.gg/bhis
in the #🔴live-chat channel

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
https://poweredbybhis.com



Brought to you by:
Black Hills Information Security 
https://www.blackhillsinfosec.com

Antisyphon Training
https://www.antisyphontraining.com/

Active Countermeasures
https://www.activecountermeasures.com

Wild West Hackin Fest
https://wildwesthackinfest.com

Creators and Guests

Host
Deb Wigley
Deb Wigley is the Director of Kindness and Generosity for Black Hills Information Security (BHIS). She joined the team in 2019 after celebrating 20 years of working in customer engagement and satisfaction in the Automotive Industry. She brings her passion for helping and serving people to the work she does at BHIS. The part of her role she enjoys the most is interacting with the community through our webcasts and educational content, our Discord servers, and conferences. She loves being a mom to her four kiddos and in her spare time, she enjoys reading, hiking, frequently entertaining a beach day, and being whisked away on rewilding adventures with her husband of 20+ years as much as possible.
Host
Jason Blanchard
Jason Blanchard has been happily adopted into the hacker community at Black Hills Information Security (BHIS) since 2019, even though he “works in marketing.” He’s had every dream job imaginable: teaching filmmaking, owning the world’s most famous comic book store, and fostering the infosec community efforts for SANS. While some at BHIS call him the “Director of Excitement,” he is formally known as the Excitement Co-Creator. In his day-to-day work of “sucking at capitalism,” Jason enjoys helping others, sharing his knowledge, and giving away lots of free stuff. When he’s not working, Jason spends time with his wife and daughter, hosts a semiweekly job-hunting Twitch stream, and enjoys writing short stories and performing stand-up comedy.
Guest
Andrew Krug
Andrew Krug is a Security Geek specializing in Cloud and Identity and Access Management. Andrew brings 15 years experience at the intersection of security, education, and systems administration. As a fierce advocate for Open Source and founder of ThreatResponse tool suite, Andrew has helped inspire the landscape around forensics and incident response in the Cloud. Andrew has been a presenter at a variety of conferences, publishing papers with BlackHat USA, DerbyCon, and many more.

What is Antisyphon Training Anticasts?

Podcast audio-only versions of weekly webcasts from Antisyphon Training

Jason Blanchard:

Hello everybody. Welcome to today's Antisyphon Anti-cast. We have Andrew Krug joining us today. He's an instructor that teaches with Antisyphon, and he has classes coming up. One of the things we're doing today is free educational content: a free one-hour training session where Andrew's gonna teach you about securing AI in the age of cloud.

Jason Blanchard:

Just a reminder that we do this, one, because it's really good for you and it's good for the community, but it's also an opportunity for you to see: do you enjoy Andrew's style of teaching, do you like the content, and would you like to take his class? So at the end of this, Andrew will talk about the class, but in the meantime, it's gonna be pure educational content. We appreciate you being here and choosing to spend your time with us today. So, Andrew. I've known Andrew for a long time. He works at a company, and I'm sure he'll mention it, but we put together a deck of Backdoors & Breaches at one point. Andrew was a delight to work with, incredibly knowledgeable about the work that he does, especially when it comes to cloud, and so it is a fantastic opportunity to learn new techniques, new tools, new skills, new things today. So thank you so much for joining us and spending your time with us.

Jason Blanchard:

Andrew, it's all yours.

Andrew Krug:

Thanks so much for having me. As always, it's really nice to be here talking to this large live audience. So if you have questions at any point, I do keep an eye on the Discord chat, and I will try to engage with you as we're running the webinar. The title for this ended up being Securing AI in the Age of Cloud, which is kind of a crazy title, because pretty much all the AI stuff that we use today has come about since cloud became the platform that most people are working on. The original title, before we decided to repackage this, was The Hitchhiker's Guide to Surviving AI in the Cloud.

Andrew Krug:

Because we're gonna do something a little bit different today that is very much not my normal teaching style. My normal teaching style is usually lots and lots of hands-on demos, very deeply technical. This is gonna be much more storytelling, because I think that we're all trying to figure out how to deal with onboarding these AI pilots to our organizations, shadow AI. All the stuff that we dealt with almost ten, twelve years ago when cloud started to really become popular, we're kind of seeing those trends again in the way that we're handling things in the AI age. So I don't know if we have any Hitchhiker's Guide to the Galaxy fans in the chat.

Andrew Krug:

If you are a fan, please feel free to let us know. So we're gonna take a little trip down through the galaxy here, and we're gonna look at a few different themes in our less than one hour together. We're gonna look at what's going on in the agentic space, what is not going well, and what we haven't learned from the past. And if you have seen webinars with me, you know I love to talk about how there's basically nothing new. We're just reliving a bunch of trends over and over again with different actors and different sets of technology, which is great, because it makes it really easy to avoid mistakes if we take kind of the historian approach.

Andrew Krug:

And then we'll look at what some potential solutions are, or at least ways that you can make your life more sane if you're building AI pilots on top of the AWS cloud in particular, though a lot of what we're gonna talk about is just generic to AI. I'm also tricking you into learning some things about literature during this. So who knows what kind of literature The Hitchhiker's Guide to the Galaxy is? If anybody studied literature, there are many different forms. There's, like, masculine, feminine, you know, the archetype of the story.

Andrew Krug:

Somebody put satire, which is a good guess, but The Hitchhiker's Guide to the Galaxy is actually a type of literature that's called semiotics, which is using signs and symbols that carry meaning to tell a story. So in semiotics, you can actually have all of the other forms of literature present through symbolism. And if anybody here actually has a literature degree and I get this wrong, please feel free to correct me, because I most definitely do not have a degree in literature. So there's lots of different themes going on in this book, and I think it's a great representation of all the things that are present in the AI age. And I had Gemini put together this quick graphic of some of the major things.

Andrew Krug:

Like, a great example is that there are superintelligent dolphins in the book, and the dolphins believe that they are more intelligent than the people, and the humans believe that they're more intelligent than the dolphins. And when I thought of this, I mapped that to engineers and security folks, because we often think that we know more than the engineering teams, and the engineering teams often think that they know more than the cybersecurity folks. Let's talk about what's going on in the AI agent space. Anybody wanna confess to building AI agents today as a part of your primary job? I'm definitely in a situation now where I'm building AI agents on a regular basis to perform all kinds of things, from going out and doing security audits and summarizing and triaging findings, to judging outputs, or just overall trying to make my life more efficient by building these custom AI agents.

Andrew Krug:

And I think that's what everybody is trying. So before we get into that, let's talk about the cast of characters as well, because I think the characters are really interesting to map to the different ways that we think about who we are in our story about AI. You have all these characters in Hitchhiker's Guide to the Galaxy. You have Arthur Dent, who's the main character. Arthur represents kind of this fixed point in time, in that he's learning things from these other characters, but he's really kind of a curmudgeon. And then you've got Ford, who's actually kind of an anthropologist.

Andrew Krug:

He's one of the authors of the guide itself. So he loves to poke fun at humanity and kind of laugh from this place of, like, wow, that's ridiculous, and I'm gonna observe that and write it down. You've got this guy with the two heads who kind of represents this galactic leader. You know, he has this massive ego and wants to take over the entire galaxy.

Andrew Krug:

You have Marvin, who is this incredibly depressed robot, which is super interesting, because the robot kind of represents this notion that if an artificial intelligence was as smart as a person, it would become existentially depressed just from understanding the nature of life. And then you have this character, Trillian, who some people think is like a mirror for the rest of the book, but I actually disagree that she's just there to do unnecessary exposition and reflect back. I think Trillian is actually the optimist of the story, and curious, but, you know, we can debate that later in the talk. Let's talk about what's going on in the state of AI in the enterprise. So whether or not you are a fan of Deloitte, they get to see a lot of stuff.

Andrew Krug:

They survey a ton of companies. They published this AI report last year that did get quite a bit of media coverage. I did read this when it came out last year and summarized a little bit of what's going on from a threat landscape perspective. I think the most interesting things here in this report are actually how companies feel about how ready they are to build AI agents securely. So you've got a lot of stats on this slide.

Andrew Krug:

Data privacy and security is obviously a big concern. People don't really understand how the models work. Shadow AI is a big thing. But for this talk, I really wanna focus on two stats, the first being that 74% of the companies surveyed are actively building and deploying AI agents. So if you think about the customer base that Deloitte represents, that's probably, like, every Fortune 500 company down to the top Fortune 10.

Andrew Krug:

And then on the governance side, only 21% of those companies felt like they were mature enough to secure the AI agents that they're building at the rate that they were scaling. And so we have this very high adoption curve and very low confidence as an industry, according to Deloitte, in our ability to do a good job. And that's really, really interesting, because if I think back to the time when I started my cloud security career, there were very similar stats to this. And I looked at that, and I said, I'm gonna have a job for life if I get really great at closing this gap. It turns out, a long time later in the future, I'm still doing cloud security stuff.

Andrew Krug:

I'm learning lots of AI, obviously, as a part of it. But this is a massive emergent field of opportunity. So I thought it would be fun to actually have an AI model profile each of the Hitchhiker's Guide to the Galaxy characters and see what they would say about the current state of AI if they were present today. So this is the character Ford, the anthropologist. He says, building AI agents is a bit like trying to write a dictionary while you're falling out of a plane.

Andrew Krug:

It's a marvelous way to pass the time, and the results are often funny right up until the moment of impact. Just make sure the agent knows where the towel is. If an AI doesn't know where its towel is, it's not an agent. It's just a very expensive calculator with an attitude problem. I don't know if anybody identifies with that or if that's how you feel about how AI agents are working for you.

Andrew Krug:

I don't know if you see folks pull out their phone at restaurants and use them to calculate tips on receipts and things like that. We are guilty of sometimes using AI models as very expensive calculators. So as we think about that perspective, that's a really great segue into how teams build agents. I think the majority of us that are building AI agents today are actually vibe coding them. So most of the industry has consolidated on Claude Code, it seems, as the preferred agentic coding tool.

Andrew Krug:

They're building and evaluating those in CI/CD pipelines, either in GitHub Actions or in a cloud environment. They're deploying them, on the large, using a platform called Amazon Bedrock, and we're gonna talk about the why of people using Bedrock. And then they actually run those production agents. And each of the phases of the software development life cycle brings with it some risk, and it also brings with it some set of advantages. And I think that the perception overall in industry is that all of these things now all of a sudden require new and novel approaches to the way that we think about securing them, when they actually don't.

Andrew Krug:

So we have all of these different opportunities to apply things that we have learned to each of the software development life cycle phases for making and securing AI agents. So what are teams using these agents for, on the large? The answer is they're using them for everything. We are still in a phase where businesses and individuals are trying to figure out what the sweet spot is for using AI agents on the set of models that we have. Obviously, we live in a reality where the intelligence of models is getting better and better at an exponential rate, similar to advances in hardware.

Andrew Krug:

But we're taking these and we're literally trying to apply them to every task that we have across the industry, business logic side, software engineering side. And sometimes we find out things like this, that you can't, for example, use a towel to open a crate very effectively. It's important to really understand what the things are that the models excel at, and then what platform is really gonna support building in the best way for those use cases. So why build on Bedrock with AWS? The answer is probably not what you would think.

Andrew Krug:

It's largely regulatory and contract based. So Bedrock on AWS brings the same foundational models that the big providers have, like OpenAI and Anthropic, but it brings them in a way that contractually guarantees that your data is never used to train the underlying models. It also supports regional rollouts, so you get to decide your data sovereignty, which is kind of a big deal. And then it also has built-in governance tools and guardrails that help close the gap on some of the AI risk. So that's why a lot of companies build on Bedrock by default.

Andrew Krug:

It's the indemnification guarantees. So this is the platform that, on the large, when I run into folks in the wild, they're building on. A question I always like to ask in these is: for the things that we're building and the incident trends, are we actually learning from the mistakes that we made in the past, or are we not learning from mistakes we made in the past as a software engineering culture? What do we think? Learning, not learning?

Andrew Krug:

Any guesses from the audience on what the prominent themes are? I see lots of flower pot jokes in the chat. On that, let's think about what Arthur might say if Arthur were to comment on the current state of AI. So Arthur, remember, is kind of a curmudgeon. And if you're building AI or you're on an engineering team today, I'm sure that you work with at least one person who's that AI naysayer, you know, the one who says AI is never gonna really take off.

Andrew Krug:

There are still people who honestly believe this. I, for one, obviously believe that AI is here to stay. So Arthur might say: that's impressive, I'm sure, but can any of these agents actually find me a decent cup of tea? Because for the entire book, he's actually just trying to get more tea. Every place that he goes, he's obsessed with the idea of finding a good cup of tea, because he's English.

Andrew Krug:

He says, I've asked three of them now. One tried to explain the molecular structure of water, while the other two told me they were language models and didn't have hands. So, obviously, they can't make a cup of tea. It seems a lot of effort just to be told no by a machine instead of a person. One of the things that I ask when I'm looking at the dynamics in engineering cultures is: is this your perspective?

Andrew Krug:

Are you the fixed point in a shifting universe, or is the universe fixed, and you need to just build on the things that you've learned in the past? Looking at the top AI breaches, if we were to do root cause analysis, we might guess that the really interesting attack vectors, like sandbox escapes, model poisoning, and adversarial AI, drive the most prominent AI breaches, and it's really not the case. If we look at this, we see that social engineering and phishing still lead to a lot of AI breaches as the initial attack vector. Vulnerabilities and misconfigurations make up a fairly significant percent of overall attacks.

Andrew Krug:

The majority of these are the same threat vectors that we've pretty much been dealing with in every application and technology revolution that I've seen in my career, and that you've probably seen in your career. So we're dealing with a lot of the same things, and we can apply some of the same security principles to designing secure systems around these AI agents that we've been using for all kinds of other things. Only sixteen percent of identified breaches in the last year began with an AI-specific threat vector. And we can speculate that this is probably gonna increase as folks get better at doing things like breaking the models themselves, prompt injection, dataset poisoning, etcetera.

Andrew Krug:

The majority of the surface area is really in the things that are just tried and true tactics, and the fact that the target is AI stuff is really not super relevant. Somebody asked in the chat, and I missed it: is this a book worth finding and reading? Yeah. Hitchhiker's Guide to the Galaxy is both a book and a movie. It's quite old, but still a good read, very popular in the cybersecurity community.

Andrew Krug:

The other thing is that we can map a lot of the new labels for these things to old attack trends that we've just been dealing with forever. So there's a saying in AI stuff: prompt injection is just the new SQL injection. SQL injection was super popular as an attack in the two thousands. Now we have prompt injection, which is effectively following kind of the same path through the AI agent that we would have followed. And so we actually are applying a lot of the same mitigations for these tactics.

Andrew Krug:

So if you think about SQL injection, when it first was really on the rise, all of a sudden we started to talk about things like parameterized queries, input validation, etcetera. And on the prompt injection side, we're applying those same tactics and techniques; they're just AI specific. The sandbox escape is also not a new thing. If you can escape the process that the AI agent is running in, obviously, you can do things like pivot to the runtime and eventually the host.
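The parameterized-query analogy above can be sketched in code. This is a minimal, hypothetical Python illustration, not any vendor's actual API: the `build_messages` helper and the `SUSPICIOUS` pattern list are invented for the example, and a real prompt-injection defense needs much more than a denylist. The point is the structural idea: keep instructions and untrusted input in separate slots, the way a parameterized query keeps SQL and values apart.

```python
import re

# Very rough denylist of phrases that often appear in injection attempts.
# These patterns are illustrative assumptions, not a complete defense.
SUSPICIOUS = [
    r"ignore (all )?(previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input looks like an injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Keep trusted instructions and untrusted data in separate roles,
    the way a parameterized query separates SQL from user values."""
    if screen_user_input(user_text):
        raise ValueError("possible prompt injection")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

In practice this kind of screening is layered with model-side guardrails and output validation, since denylists alone are easy to evade.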

Andrew Krug:

Sandbox escapes are not new. We've been dealing with these in shared computing environments since the age of mainframe-style applications all the way through container escapes. There's really not anything new or novel there once you actually escape the agent itself. You're still just dealing with an operating system that has a process that all of a sudden is doing something it's never done before. So we can apply a lot of the same tactics and techniques to securing and detecting sandbox escapes that we've been using in large container environments.

Andrew Krug:

So nothing new. It's more that the AI agents themselves now necessitate that everybody running one of these has some sort of runtime observability capability for workloads. And people like to talk a lot about the supply chain for AI agents as well, which is not wildly different than the software supply chain. It just has a couple of additional concerns: the data source that comes in to train the foundational model and how biased that may be, and where the models themselves are coming from. So, obviously, you have the ability to take a model from a large provider like AWS or a model provider like Anthropic or OpenAI, but then there are also AI marketplaces like Hugging Face where you can get effectively untrusted models, where you need to do your own diligence on how you validate the performance of that.

Andrew Krug:

So it's quite similar. Yes, somebody says, Gen AI images with proper spelling, it's the future now. And, yes, pretty much all the images in here are generated with AI for the talk.

Andrew Krug:

Nano Banana, if you tell it it has a misspelling in one of the images, will actually go and verify the spellings, and then it will re-prompt itself to fix the spelling errors. It's crazy how good Nano Banana has gotten. I'm told soon it will be able to create hands with the correct number of fingers. So that brings us to the robot, right? Because we have all these things that are really, really similar in terms of the attack vectors, the way that we detect them, the way that we respond to them.

Andrew Krug:

They're mapped to similar trends in cybersecurity from the last ten years. And I think we've all probably heard the term security nihilism. When I think of security nihilism, I think of Marvin the robot. And he says, I've been an AI agent for several million years. It's terrible.

Andrew Krug:

You think you're being clever, but you're really just spending your vast infinite processing power calculating the probability of a human losing their car keys. One of the major plot points in the book is that you figure out that the Earth, the planet, is actually just a big supercomputer that's meant to calculate the meaning of life. And it outputs, as the answer to that question, that prompt, in this case we could just call it a prompt, the number 42, which is almost a representation that there isn't a purpose and that sometimes the answer to a really important question can actually be random. So Marvin believes that basically all prompts are pointless, and the universe is just a super lonely place. So let's talk about the bad news.

Andrew Krug:

So we looked at kind of the different attack vectors and some of the trends. Let's look at what's really not going right. What's really not going right is the stuff that we've been doing for detection in cloud for all of the other types of resources. In the Securing the Cloud class, which I teach, we talk a lot about the right ways to set up CloudTrail, the right ways to secure CloudTrail, etcetera. The bad news for AI agents, and for building on top of a platform like Bedrock, is that CloudTrail can only tell you that the model was invoked.

Andrew Krug:

It can't actually tell you what the prompt to that model was. It can't tell you what went on inside of the decision chain in the process of actually answering a prompt. So we really like CloudTrail for most things, right, because it allows us to detect any interaction with a resource from the control plane perspective, but we have all of this context now that's missing in the way that an AI agent reasons and interacts with data. And those are two really huge gaps. So when we think about threat vector coverage, and there are far more threat vectors in this table than we discussed earlier, if we look at some that we talked about, like prompt injection, we have answers on the AWS side, but you'll notice that most of them only have partial coverage for the threat vector.
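Because control-plane logging stops at "the model was invoked," teams typically close the gap with application-level logging around each model call. Here's a rough Python sketch of that idea, with a made-up `invoke_fn` standing in for whatever client call you actually use (for example, a Bedrock runtime invocation); the record fields shown are illustrative, not a standard schema.

```python
import hashlib
import json
import time

def logged_invoke(invoke_fn, model_id: str, prompt: str, log: list) -> str:
    """Call the model and append an audit record capturing the prompt
    context that control-plane logs like CloudTrail cannot see."""
    record = {
        "ts": time.time(),
        "model_id": model_id,
        # Hash rather than store the raw prompt, in case it holds sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    response = invoke_fn(model_id, prompt)  # e.g. your Bedrock client call
    record["response_chars"] = len(response)
    log.append(json.dumps(record))
    return response
```

In a real deployment the `log` list would be replaced by a structured log pipeline (CloudWatch, OpenTelemetry, etc.), and you'd decide deliberately whether raw prompts may be retained at all.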

Andrew Krug:

So for prompt injection, for example, there's partial coverage in a thing called Bedrock Guardrails, but we have some limitations in those in terms of what they can understand and what they can act on. So the answer is really to wrap these with additional tools like third-party runtime security or more observability. But really, as an industry, we're still figuring out how to completely cover detection for some of these threats. Let's talk about secrets real quick as well. One of the things that we really like to hammer home when we talk about setting up cloud environments is that one of the worst things you can have in a cloud account is a set of long-lived credentials.

Andrew Krug:

Right? These access key pairs that we love to rant about again and again and again. Well, we spent probably the last five years or so really championing single sign-on, building great single sign-on flows for AWS that work in the browser and on the command line, and then all of a sudden we have this AI revolution that introduces one of the most expensive API keys to leak in computing history. Leaked AI service secrets reached over 1,200,000 in 2025, and that was just based on data from publicly crawling GitHub. It was up 81% year over year from 2024.

Andrew Krug:

That sounds like a huge number, right? But when we look at big scary numbers like this, we have to remember that Claude Code only celebrated its one-year birthday this month. So Claude Code has only been around a year at its current capacity, and it's had this massive surge in adoption, which has created all of this additional attack surface. So I would expect this number will continue to increase over time.
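The crawling described above boils down to pattern matching over public text. A minimal sketch of that kind of scanner is below. The `AKIA` prefix for AWS access key IDs is well documented; the generic `sk-` pattern is a loose assumption standing in for various provider API key formats, and real scanners (gitleaks, trufflehog, and the like) use far richer rules plus entropy checks.

```python
import re

# Regexes for things that look like long-lived credentials.
# The AWS access key ID format (AKIA + 16 uppercase alphanumerics) is
# documented; the "sk-" pattern is a rough illustrative assumption.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for anything that looks like a credential."""
    hits = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits
```

Running something like this in a pre-commit hook or CI step is one cheap way to catch a long-lived key before it ever reaches a public repo.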

Andrew Krug:

When they rolled out Bedrock, they also rolled out the ability to create long-lived static credentials for Bedrock as well, which is different than an AWS API key. So you now have AWS API keys, the access key and secret key pairs that we've been trying to get rid of in environments for so, so long, and then we introduce this other long-lived, very expensive static credential. So it's like, why do we keep creating ways for people to create these things that we really have tried to eliminate as an industry for so long? This is one of the things I love to rant about, simply because we are not learning from our previous mistakes every time we build a feature that introduces a long-lived static credential. Somebody said we did it because it makes money.

Andrew Krug:

And the answer is yes. It does make money to empower people to build quickly, but also, if AWS has to give huge bill credits because somebody leaks one of these API keys, that starts undoing itself relatively quickly over time. So there are always trade-offs. So what do we do about this stuff, the types of AI-specific attacks, observing them, detecting them, etcetera? And what can we do just to make our cloud environments a tiny bit more sane as somebody who's building AI?

Jason Blanchard:

Hey, Andrew. Not to interrupt you. Would you like to take some water?

Andrew Krug:

I have coffee right here, but it does not super help.

Jason Blanchard:

Oh god.

Andrew Krug:

I've been dying since before Wild West Hackin' Fest.

Jason Blanchard:

Aw. Aw.

Andrew Krug:

Yeah.

Jason Blanchard:

Well, thank you. And I heard you, and I was like, oh, is there something I can do for you?

Andrew Krug:

Unfortunately, there's just, like, no mitigation for the dry cough that I've been dragging around since January.

Jason Blanchard:

Alright. Well

Andrew Krug:

Thank you.

Jason Blanchard:

We're sending love your way, and Thanks. Thank you.

Andrew Krug:

So I did not get sick at Wild West Hackin' Fest, just to be clear. I don't wanna un-advertise Wild West Hackin' Fest by saying that; I was better by Wild West Hackin' Fest Denver. I got this in New York in January during the historic blizzard, where it was, like, seven degrees outside, and it was just so dry. So, so dry. So let's talk about strategy here.

Andrew Krug:

I love this quote: hope is not a strategy. But then I like to follow it up with: a strategy can give us hope. So don't rely on hope all by itself. Build a strategy that gives you hope for the future, one where you can actually detect and respond to some of these things and ultimately secure the cloud accounts that you're building these AI agents in. And I can think of no better character to represent this notion of hope than Trillian from the book.

Andrew Krug:

I won't read the entire paragraph aloud here, but Trillian in the book is just generally very curious. She says: it's fascinating, you're trying to build something that thinks like you do, which is brave considering how often you humans seem to change your minds. And that seems very representative of the nondeterministic nature of AI agents. You might give the same prompt three times and get a different answer all three times.

Andrew Krug:

So, piece of advice number one that I think is really important for running AI stuff in cloud: plan for the maximum-risk scenarios even when they are super, super low likelihood. If you have read The Hitchhiker's Guide to the Galaxy, you know there's a spaceship in the book with something called the Infinite Improbability Drive, which is how they travel between planets. There was a one-in-8,000,000,000,000 chance of the Infinite Improbability Drive creating something random outside the ship and just willing it into existence. That's where the famous image of the whale and the flower pot comes from: the drive accidentally creates, high above the planet, a whale and a flower pot, and they immediately start falling toward the ground. In the time it takes these two things to come into consciousness, the flower pot is aware that this has happened before; the whale is not aware that this has happened before, reasons about the ground, and asks, "I wonder if the ground will be friends with me," right before it splats into the earth. It's an obviously sad scene, but it's a great metaphor for high-impact, low-likelihood scenarios.

Andrew Krug:

So reason about all of it: just because they're AI agents doesn't obviate the need for us to do basic things like threat modeling. And, yeah, thanks to the folks wishing me well in the chat. I too hope that my cough goes away just in time for RSA next week. The way we do that, if you've ever taken my class or seen my webinars, is that I love to map things back to foundational security principles. We can lean on the same pillars that we lean on for most cloud security work, like governance and compliance, and infrastructure and data protection.

Andrew Krug:

And then I would almost call the resilience and model-behavior-profiling portion the AppSec portion of this. So we have some squishy stuff, the intangible things that we can't actually configure in cloud, but those are just the due diligence that you've gotta do. Data sovereignty is still really, really important if you're building AI agents. Does your data belong in that model, for that class of data? Because, believe it or not, not all models are certified to handle specific datasets.

Andrew Krug:

Health care data would be a great example of data that you legally probably couldn't process with specific models. Regions are a big one as well. If you're subject to things like GDPR, is your data flowing inside the right set of geographies? Model provenance: do you actually trust the company that's building and training that model, both not to be biased and not to do things that you don't believe it does? Or, if you're taking one of those models from an open marketplace like Hugging Face, do you trust the person that tweaked or modified or trained that model?

Andrew Krug:

And then the not-squishy stuff that you actually can configure in AWS is all of the technical guardrails that we can use for things like identity and access management, data perimeter, etcetera. And then there are some things unique to the actual underlying service, Bedrock, that we can use to secure prompts, or to observe prompts and detect malicious behavior. In the Securing the Cloud class, we've been teaching service control policies as guardrails for the last three years. These are a way in your AWS environment to describe things that you never want to happen, period. Regardless of whether someone is an admin, they cannot do that thing.

Andrew Krug:

It is the heaviest-handed of all of the identity and access systems that you have access to. It's also very, very difficult to grant exceptions, so I always encourage people to keep these really, really simple. For Bedrock, though, it's great, because this is the one place where you allowlist the models you want your teams to have access to, and in which regions. And in the classes I've taught in the past, folks have struggled with this, simply because the grammar is sometimes hard to reason about and SCPs have a very short maximum size.

Andrew Krug:

So we end up with things like a condition that says "if the string is not like this," which is an inclusive match: we have a deny on StringNotLike, so the request is allowed if the string is like the pattern. As a human, these are very hard to reason about sometimes. And so I think one of the best tools that you have access to as a practitioner and somebody that's learning cloud is actually Amazon's CLI, formerly called Q Developer. Now it's just called Kiro CLI. Anybody with an AWS Builder ID account has access to an incredibly generous amount of Kiro.
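The StringNotLike grammar described above can be sketched as a small policy builder. This is an editor's illustration, not from the webinar; the action names, model ARN pattern, and condition key are assumptions, so check them against current AWS documentation before applying anything:

```python
import json

def bedrock_allowlist_scp(model_patterns, allowed_regions):
    """Build an SCP that denies Bedrock model invocation outside an
    allowlist of model ARN patterns and regions. Illustrative sketch:
    verify action names and ARN formats against current AWS docs."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Deny invoking any model whose ARN is NOT allowlisted.
                "Sid": "DenyNonAllowlistedModels",
                "Effect": "Deny",
                "Action": [
                    "bedrock:InvokeModel",
                    "bedrock:InvokeModelWithResponseStream",
                ],
                "NotResource": model_patterns,
            },
            {
                # Deny Bedrock calls outside approved regions. This is the
                # "inclusive match inside a deny" grammar: the call is
                # allowed only if the region matches one of these patterns.
                "Sid": "DenyNonAllowlistedRegions",
                "Effect": "Deny",
                "Action": "bedrock:*",
                "Resource": "*",
                "Condition": {
                    "StringNotLike": {"aws:RequestedRegion": allowed_regions}
                },
            },
        ],
    }

scp = bedrock_allowlist_scp(
    ["arn:aws:bedrock:*::foundation-model/anthropic.claude-*"],  # assumed pattern
    ["us-east-1", "us-west-2"],
)
print(json.dumps(scp, indent=2))
```

Reading the deny-on-StringNotLike backwards, as the talk notes, is exactly the part humans find hard; the builder at least keeps the double negative in one place.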

Andrew Krug:

It's something that you can just sign up for for free. Behind the scenes, these are actually just Claude Sonnet models, but you can ask a question like "generate me an SCP that restricts access to all Bedrock models except Claude Sonnet models." Obviously, it could also have been "restrict access to only DeepSeek" or "don't restrict access to the following," and it gives me back the service control policy that I would actually need to go and apply. So we live in an age where you can almost do intent-based security: you describe your intent, and it gives you back all the technical bits you need to apply in order to make that true in your environment. The other thing I wanna talk about here is infrastructure and data protection. For years and years, we've actually gotten pretty good at securing networks, both in data centers and in cloud.

Andrew Krug:

This is not new for AI stuff. Obviously, your AI stuff needs access to some set of data in order to do what it needs to do. Maybe that data's sensitive; maybe it's not. For a lot of companies, it is, because they're sitting on large datasets.

Andrew Krug:

If you are sitting on large datasets in AWS, you're probably using a platform called SageMaker to run machine learning jobs. Because, believe it or not, there was artificial intelligence before there were large language models. We used to just call it machine learning. We would run these machine learning jobs on top of the SageMaker platform, which is just a fancy way of saying that we had some instances in AWS running specialized software optimized for machine learning. It came loaded with TensorFlow or whatever it was you needed to crunch the data, reduce it, and provide it in a really consumable form.

Andrew Krug:

So, you know, super obvious: you isolate these things behind the tightest network boundary you can, and from a network perspective you give them access only to the things that they need. So instead of a regular VPC where, if you compromise the SageMaker endpoint, you could maybe exfiltrate some data, you put it in a subnet with no network access and no egress to the Internet by default, and then you deploy these things called VPC endpoints to allow the AI agent to reach in and access the data. Sounds simple.
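As a sketch of that pattern, the parameters for an interface VPC endpoint to the Bedrock runtime might look like this. The IDs are placeholders and the service-name format is the editor's assumption, so verify both for your account and region:

```python
def bedrock_vpc_endpoint_params(vpc_id, subnet_ids, sg_ids, region="us-east-1"):
    """Parameters for an interface VPC endpoint so an isolated subnet can
    reach the Bedrock runtime without internet egress. A sketch: you
    would pass the result to boto3's ec2.create_vpc_endpoint(**params)."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        # Assumed service-name format; confirm with
        # `aws ec2 describe-vpc-endpoint-services` in your region.
        "ServiceName": f"com.amazonaws.{region}.bedrock-runtime",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        # Lets SDK calls resolve to the endpoint's private IPs.
        "PrivateDnsEnabled": True,
    }

# Placeholder IDs for illustration only.
params = bedrock_vpc_endpoint_params("vpc-0abc", ["subnet-0def"], ["sg-0123"])
print(params["ServiceName"])
```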

Andrew Krug:

Right? We've been doing this in data center environments and in cloud environments for years and years. It's really no different than it has ever been. Alright. The next big piece of advice here is: understand what the privilege escalation paths are.

Andrew Krug:

We talk identity and access management to death in the Securing the Cloud class, but what's new here is the potential pivots from compromising an AI agent to full-on admin. And there is no better resource out there, at least that I'm aware of, than pathfinding.cloud. pathfinding.cloud is a new open source dataset. You can actually contribute privilege escalation paths to it. It's consumable by AI agents, and it's also just a searchable website.

Andrew Krug:

So you can go there, filter by AWS service, and it has things that look a lot like MITRE ATT&CK IDs. What we've done is categorize each of these with a code that you can use to refer to that attack chain over and over again, just for identity stuff. This is an example of pivoting from the Bedrock code interpreter to full admin, provided that one or more mistakes were made in that trust chain. So just understanding the priv-esc paths is a big part of securing these workloads.
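A priv-esc path of that kind can also be checked for mechanically. This hypothetical scan looks for one classic combo, iam:PassRole plus an action that launches something running as the passed role; the specific pairings here are illustrative inventions, not taken from pathfinding.cloud:

```python
import fnmatch

# Illustrative pairings of "pass a role" + "start something that runs as it".
# Real catalogued chains live at pathfinding.cloud; these are stand-ins.
RISKY_COMBOS = [
    ("iam:PassRole", "bedrock:CreateAgent"),
    ("iam:PassRole", "lambda:CreateFunction"),
]

def allowed_actions(policy):
    """Collect every action granted by Allow statements in an IAM policy."""
    acts = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        a = stmt.get("Action", [])
        acts.extend([a] if isinstance(a, str) else a)
    return acts

def escalation_risks(policy):
    """Return every risky combo fully covered by the policy's wildcards."""
    acts = allowed_actions(policy)
    def can(action):
        return any(fnmatch.fnmatch(action, pat) for pat in acts)
    return [combo for combo in RISKY_COMBOS if all(can(a) for a in combo)]

policy = {"Statement": [{"Effect": "Allow",
                         "Action": ["iam:PassRole", "bedrock:*"],
                         "Resource": "*"}]}
print(escalation_risks(policy))  # flags the PassRole + bedrock pairing
```

The point is less the specific pairs than the shape: escalation chains are data, so once they're catalogued you can scan policies for them automatically.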

Andrew Krug:

So this is what the website looks like, pathfinding.cloud. It's gonna have interactive labs coming really soon that will let you play with some of these privilege escalation paths and some of the offensive tools that folks use to actually exploit mistakes. And this is totally free, open source everything. And then there's the AI-specific stuff. In this entire session, we really haven't hit any security control that is specific to AI only.

Andrew Krug:

This is the only AI-specific part, and it's something I'm gonna recommend you do by default if you're building on AWS on Bedrock: use the guardrails down here. If you think of the flow a prompt goes through in its life, from retrieving and generating data to giving you a response, there's a bunch of stuff that happens behind the scenes. But what you do have is a choke point here in the InvokeModel API. It looks at the prompt and evaluates it using this thing called Guardrails, and if it passes that evaluation, it just sends it on to the back end. This might not make super sense right now, but I assure you that it will when we look at the use cases for what kind of security that can provide.

Andrew Krug:

But to me, the first time I ever saw this, it looked an awful lot like a firewall. Did we just reinvent the firewall for AI stuff, or at least a WAF? I love to say that there are no new ideas; we're just late to the party sometimes, implementing good ideas that already existed, and that's really what guardrails, or AI proxies, or whatever you wanna call them, are. There's an open source project called Llama Firewall out there that does the same exact thing for open models.

Andrew Krug:

They're really just firewalls before we pass the request on to the back end. You all can configure firewalls, I bet. So if you can configure a firewall, you can probably configure guardrails for AI stuff. These are some examples of things you can use guardrails for: content filters and sensitive-information filters.

Andrew Krug:

You can create denied topics; the most cliche of those is "don't give medical advice." If you're a chatbot agent that does customer service, don't give medical advice. You can do word filters on anything that you absolutely never want passed to the back end. You can even do hit counts on those word filters, and you can start to do things like ban people that put undesired text into your model again and again.

Andrew Krug:

You can invalidate their session, etcetera. So you really can do a ton of reasonable introspection, all with out-of-the-box stuff. Alright. The final recommendation here is to lean into the open standards that are really helping support us across the industry. And one of those that we don't always think of when we think about security stuff is OpenTelemetry.
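The guardrail examples above, a denied medical-advice topic plus a word filter, might look roughly like this as a config for Bedrock's create_guardrail call. A sketch under assumptions: the field names follow the boto3 API as best the editor recalls, so verify against the current Bedrock API reference before use:

```python
def customer_service_guardrail():
    """Config dict for a Bedrock guardrail with a denied topic and a word
    filter. Field names are the editor's recollection of boto3's
    bedrock create_guardrail parameters; confirm against the API docs."""
    return {
        "name": "customer-service-guardrail",
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "MedicalAdvice",
                "definition": "Providing diagnoses, treatment, or medication advice.",
                "type": "DENY",
            }]
        },
        "wordPolicyConfig": {
            # Illustrative banned word; hit counts on these are what let
            # you spot and ban repeat offenders.
            "wordsConfig": [{"text": "prescription"}]
        },
        # Messages returned when a prompt or completion is blocked.
        "blockedInputMessaging": "Sorry, I can't help with that topic.",
        "blockedOutputsMessaging": "Sorry, I can't help with that topic.",
    }

cfg = customer_service_guardrail()
# Would be applied with something like:
#   boto3.client("bedrock").create_guardrail(**cfg)
```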

Andrew Krug:

If you're not familiar with OpenTelemetry, it's just a standard for traces, metrics, and logs that is independent of any one company. Every vendor, like Datadog, New Relic, Grafana, is capable of ingesting OpenTelemetry, or what engineers will just casually refer to as OTel. Traces are just a way to break down how long each step in a request takes. And if you think about it, requests between two API endpoints are an awful lot like an LLM agent reasoning about what it needs to do. So OpenTelemetry supports tracing LLM agents now, and you can use OTel in the same way you can use Bedrock evaluations, to do security, detect anomalies, hallucinations, etcetera.
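To make concrete what a trace captures for an agent, here is a toy pure-Python stand-in. The real OpenTelemetry SDK provides all of this (and much more); every name below is invented purely for illustration:

```python
import time
from contextlib import contextmanager

# Toy stand-in for OTel spans: each span records a name, a parent, some
# attributes, and how long the step took. This is what lets you break
# down an agent's reasoning into timed, nested steps.
SPANS = []

@contextmanager
def span(name, parent=None, **attrs):
    start = time.monotonic()
    record = {"name": name, "parent": parent, "attrs": attrs}
    try:
        yield record
    finally:
        record["duration_s"] = time.monotonic() - start
        SPANS.append(record)

with span("agent.handle_request") as root:
    with span("llm.reason", parent=root["name"], model="example-model"):
        time.sleep(0.01)  # stand-in for a model call
    with span("tool.query_db", parent=root["name"]):
        pass  # stand-in for a tool invocation

for s in SPANS:
    print(f'{s["name"]}: {s["duration_s"] * 1000:.1f} ms (parent={s["parent"]})')
```

Anomaly detection on this kind of data is then just analytics: unusually long reasoning steps, unexpected tools, datasets an agent shouldn't be touching.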

Andrew Krug:

It gets better, though, because OTel is not just for AI agents. Claude Code and some of the other coding agents now support exporting OTel from endpoints. So you can enable OTel traces across your fleet of machines where people are vibe coding, and then you can do all kinds of analysis on that data. For security, you can start to do things like map data flows and see if an agent has access to datasets you don't want it to. You can start to detect shadow AI workflows by pushing this out.

Andrew Krug:

And it is admin-configurable. You might ask, why would anybody turn this on, or how could I get buy-in to turn this on at my company? Well, why would the business care about OTel for AI agents and coding agents if it's hard to get them to care about security stuff? The answer is money. The number one reason to use OTel tracing is actually to see where you have opportunities to cost-optimize.

Andrew Krug:

So if you want a reason to push this out and get telemetry that's good for costs, good for the business, and also good for security: you will save money when you start to analyze how people are using agentic coding workflows. Another great thing to push out as part of that is a context-window compaction tool. If you're really looking to sell the notion that you could roll out this security control and save your company 60 to 90% on their Claude Code bill, you can roll it out with a tool like this. It sounds really complicated to say "context window compaction," but really, all a tool like this is doing, and it's totally open source by the way, is taking the whitespace out of things that are supposed to be human readable and compressing them down into something that is friendlier to machines. So you can roll out those OTel traces and something like this, and you can save your company a bunch of money and get more security controls.
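For one concrete flavor of what "taking the whitespace out" means, here's a minimal sketch that compacts pretty-printed JSON before it enters a context window. This is an editor's illustration of the idea, not the specific open source tool mentioned in the talk:

```python
import json

def compact(payload: str) -> str:
    """Strip the whitespace from pretty-printed JSON before it goes into
    a context window: same information for the model, fewer tokens.
    A minimal sketch of context-window compaction for one format."""
    return json.dumps(json.loads(payload), separators=(",", ":"))

pretty = json.dumps({"findings": [{"id": 1, "severity": "high"},
                                  {"id": 2, "severity": "low"}]}, indent=4)
small = compact(pretty)

# Lossless for the machine: the parsed data is identical.
assert json.loads(small) == json.loads(pretty)
print(f"{len(pretty)} -> {len(small)} chars")
```

Real compaction tools go further (trimming keys, summarizing history), but whitespace alone already cuts indented payloads substantially.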

Andrew Krug:

One final thought before we close out and go to questions. We did say at the beginning of the webinar that CloudTrail was not super adequate for detecting and responding to different threat vectors in AI stuff, but that doesn't mean you still don't have to have logs. In the event of an incident, let's say you are an incident commander running a team trying to figure out what happened, you still need all these supporting logs. And one of the things that we are talking about this year in the Securing the Cloud class is Amazon Security Lake and the move of all these logs to a format called OCSF. If you're not familiar with OCSF, it is changing the industry in terms of normalizing how logs are sent between any two security platforms. OCSF gives you independence from whatever your underlying SIEM is, because the data is all normalized in the thing that ships to the SIEM back end, or the log aggregator, or data lake, whatever you wanna call it.
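To make the normalization idea concrete, here's a hypothetical mapping of a CloudTrail-style record into an OCSF-shaped event. The field names only gesture at OCSF's API Activity class; consult the OCSF schema for the real attribute names and class IDs:

```python
# Sketch of the kind of normalization OCSF performs: map a
# provider-specific record into one shared shape so detections can be
# written once. Field names are illustrative, not the official schema.
def normalize_cloudtrail(event: dict) -> dict:
    return {
        "class_name": "API Activity",
        "time": event["eventTime"],
        "api": {
            "operation": event["eventName"],
            "service": {"name": event["eventSource"]},
        },
        "actor": {
            "user": {"name": event.get("userIdentity", {}).get("arn", "unknown")}
        },
        "cloud": {"provider": "AWS", "region": event.get("awsRegion")},
    }

raw = {
    "eventTime": "2025-03-20T12:00:00Z",
    "eventName": "InvokeModel",
    "eventSource": "bedrock.amazonaws.com",
    "awsRegion": "us-east-1",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:role/agent"},
}
print(normalize_cloudtrail(raw)["api"]["operation"])
```

Once every source lands in one shape like this, a single detection query covers logs from multiple providers, which is the SIEM independence being described.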

Andrew Krug:

And that makes it easy to migrate between platforms. It makes it easy to write generic detections across things like multiple identity providers. It's just super amazing to see all these companies actually get on board with an open standard for the first time in my career. So OCSF plus Security Lake: total winner. We have it in this year's run of Securing the Cloud Foundations, which is great.

Andrew Krug:

And you can actually combine that with your own AI security agents. This is where the mind-blown moment is. In the class, we're gonna have a little security agent demo that goes out and queries the log stores. An LLM is capable of understanding the schema and doing things like threat hunting, forensics, and timeline analysis for you. No longer do you have to be an expert in writing really obscure SQL queries just to get the data out that you wanna get out.

Andrew Krug:

AWS has over 200 different MCP servers that you can query for knowledge about the environment. The easiest one to onboard to is actually just called the AWS Knowledge MCP server. It has access to all of the schemas for all the log formats and the entire repository of documentation, and it is a completely free MCP server that you can use with any AWS account. So if you enjoyed this webinar and you wanna see more of me teaching content like this, you can obviously register for my class coming up in April. It's a sixteen-hour class, right there on screen.

Andrew Krug:

It's after the SOC Summit. I am not giving a presentation at the SOC Summit, but please do sign up for the class. If you wanna find me, you can find me at RSA and BSides San Francisco. I will be there with the Datadog team.

Andrew Krug:

You can also reach out to hire me for things like cloud security audits, architecture design and guidance, or, sometimes, people will ask me to do crazy stuff on contract; that's a case-by-case basis. But you can reach out to me for any of that stuff as well, or you can send me a message on LinkedIn. I do reply to my messages on LinkedIn. It is not where messages go to die.

Andrew Krug:

And with that, I'm happy to take any questions with our remaining time. Great.

Jason Blanchard:

Andrew, thank

Deb Wigley:

Andrew.

Jason Blanchard:

Yeah. Thanks, Andrew. Thank you so much for doing that. If you currently have a question that was not answered during the actual Anticast, go ahead and ask that question now, either in Zoom or in Discord. Deb and I will keep an eye on both.

Jason Blanchard:

If you haven't checked in yet for Hackett, please go ahead and do so. If you don't know what Hackett is, it's our way of saying thank you for attending our Anticast webcasts, and then we send you rewards when you hit ten, twenty, thirty, forty, fifty, and one hundred. The way to do that is just to engage in Discord live during the webcast. Alright, and then, like Andrew said, join us next week for the SOC Summit. It is six hours, it is free, and it is on Wednesday, March 25.

Jason Blanchard:

We have 11 expert speakers, and we're very happy for you to join us. And then don't forget about Andrew's class coming up. Deb, do you see any... oh, looks like someone figured it out.

Deb Wigley:

Let's see.

Jason Blanchard:

Someone mentioned that Hitchhiker's Guide was a radio show before it was a book.

Deb Wigley:

Yeah.

Andrew Krug:

That is actually a true story.

Deb Wigley:

Uh-huh. Yeah, if we did not get to your question and you didn't get an answer, go ahead and pop it in Discord again, and we'll ask Andrew.

Jason Blanchard:

Yep. Also, for everyone that's currently signing up for the free InfoSec Survival Guide: I put the link into the Zoom chat, so you can go there and find the link that takes you to the Spearphish General Store to order that for free. If you haven't gotten it yet, go ahead and order that. It is free; we'll mail it to you. If you're not in the United States, there's a version that you can read and download online.

Jason Blanchard:

The reason that we like to send it to you is because we like to send stuff to people. Like, I love getting mail that I actually want instead of just mail that's been sent to me.

Deb Wigley:

Yeah. Oh, question. Do these AWS Bedrock things also apply to Azure AI Foundry?

Andrew Krug:

Yeah. The answer is that most of the capabilities exist in every CSP's flavor of vending AI stuff. All of the major ones have the ability to do the guardrails. What AWS calls guardrails is really just generically called evaluations. What they don't always have is the ability to act on the outcome of an evaluation.

Andrew Krug:

Mhmm. So you can observe and detect that something went wrong, but you can't necessarily stop it in every single cloud service provider's platform.

Jason Blanchard:

So, Andrew, I have a couple of Jason-related questions for you. The first one is: are you excited about this time in cybersecurity history because of AI, or is it a nuisance?

Andrew Krug:

I think it's a little bit of both. I will say, for the first time in a long time, I am nerding out like no other on the way all this stuff works behind the scenes, and that hasn't happened for me in a hot minute.

Jason Blanchard:

Mhmm.

Andrew Krug:

So I'm really enjoying it. I'm really enjoying seeing where it can accelerate some of the work that maybe I don't do as well because I'm not passionate about it, like triaging 4,000 findings from a cloud security audit. So I think it's really exciting. We live in a time with tons of opportunity.

Jason Blanchard:

Uh-huh. And then my next question is why do you like to teach, Andrew?

Andrew Krug:

I think I just enjoy teaching because, when I graduated from college, I didn't learn a ton of stuff in the college program itself. I actually learned the most from folks that were willing to be very generous and share their time, and to relate it back to what college professors might call vocational training: how you actually earn money in this field. So I started doing this as a way to pay it forward, because so many people were willing to share their secrets with me. And I still think that there's a massive gap in the number of practitioners in these roles that are deeply, deeply technical, and I would love to see that gap closed.

Jason Blanchard:

Yeah. Alright. So you're teaching next in about two weeks or so. For anyone who's interested in taking your class, what are the two takeaways that people will definitely have from taking your class?

Andrew Krug:

So you will understand at least four different jobs that you can do in cloud security. We do it rapid-fire; you would never do all of these jobs in any one place unless you were, like, the founder of a startup or something. So you'll get that: four different lenses on doing cloud security. And then you will also walk away adequate, at least, if not an expert, in identity and access management, which is one of the most important things in cloud, period.

Andrew Krug:

It has been one of the most important things. You'll be able to just do that in AWS.

Jason Blanchard:

Yeah. And are there hands on labs in your class, or is it all, like, PowerPoint based?

Andrew Krug:

There are absolutely a ton of hands-on labs. There are even bonus labs for the overachievers. So you walk away with a ton of content that you can take back to work, and then fork and use to secure your cloud environments. And you have access to the labs in perpetuity after the class is over, including updates to the labs, because they are all just open source.

Jason Blanchard:

Cool. If you were to take everything that you talked about today and sum it up in one final thought, what would it be?

Andrew Krug:

There is no replacement for observability in these very complicated systems. You gotta have it. I feel like in a lot of systems it was optional in the past, and now it's absolutely essential, just the way we build.

Jason Blanchard:

Awesome. Andrew, thank you so much for your time. Hey, Deb, do you wanna do one final contest before we go?

Deb Wigley:

Oh, yeah. Of course. The answer to that is always yes.

Jason Blanchard:

Okay. So, Andrew, thank you so much for your time. We're gonna do one final contest for people. They can do it in Zoom too. You don't have to be in Discord.

Jason Blanchard:

So if you're on Zoom and you don't have Discord: right now, you're guessing how many people are registered for next week's SOC Summit.

Deb Wigley:

Uh-uh.

Jason Blanchard:

The person who comes closest will win. We'll give it like a minute or so for the answer.

Deb Wigley:

So as of right now... it's definitely gonna go up. Yeah.

Jason Blanchard:

As of right now, how many people are registered for next week's SOC Summit? And you can join us next week for the SOC Summit. It is on March 25. It is six hours long. It's gonna be a good time for all.

Jason Blanchard:

And so I see a lot of numbers coming up. People are like, well,

Deb Wigley:

do I

Jason Blanchard:

how do I vote? You can do it in the Q&A section on

Andrew Krug:

Like that?

Jason Blanchard:

On Zoom. You can put your number in there.

Deb Wigley:

Oh, boy. Now I have two places to look. It's fine.

Jason Blanchard:

I know. We'll check the Q&A and see if anyone gets close.

Deb Wigley:

Yeah.

Jason Blanchard:

So we'll give it thirty more seconds, and then we'll announce how many people. So Andrew, I hope people choose to take your class.

Deb Wigley:

Mhmm.

Jason Blanchard:

I know it's gonna be a really good time for those people that choose to take it, and for all of you that joined us today. Deb, what are your final thoughts for everyone that joined us today? And then at the end of that, give the number.

Deb Wigley:

That was Jerry. He always says, be kind to yourself and to others. Oh, just thank you for showing up. We appreciate that you guys spend your time with us and wanna learn and wanna help each other. And to everyone that answered questions in Discord, we just appreciate you.

Deb Wigley:

And just, yeah, be kind to each other. Be a nice human. And we love you guys. That's it.

Jason Blanchard:

Alright, Deb. How many people?

Deb Wigley:

So, total, one more refresh. Hold on. It's 4,553.

Jason Blanchard:

Alright.

Andrew Krug:

Right now.

Jason Blanchard:

Whoever's closest to 4,553.

Deb Wigley:

Lions here. No, you had to go to a meeting. That's right. You can't put it in there.

Jason Blanchard:

Alright, everybody. Thank you so much for joining us today for this Antisyphon Anticast. Remember next week's SOC Summit. Andrew's got a class coming up, and thank you so much for choosing to spend your time with us. We know there's a lot of places that you could have spent your time today, but you chose this place, and we appreciate that.

Jason Blanchard:

So we'll see you next time. Alright. Ryan?

Deb Wigley:

Kill it.

Jason Blanchard:

It's time to kill it.

Deb Wigley:

You used to say, "buh bye." You don't do that anymore.

Jason Blanchard:

That's for my livestreams.

Deb Wigley:

No. Can I

Jason Blanchard:

do that for webcast?

Deb Wigley:

I am Bye bye.