Cyber Sentries: AI Insight to Cloud Security

Season 1, Episode 6

AI Revolution in DevSecOps: Insights from John Bush


Unlocking the Power of AI in DevSecOps
In this episode of Cyber Sentries, host John Richards sits down with John Bush, solutions architect at GitLab, to explore how artificial intelligence is transforming the day-to-day lives of developers. Bush, who has been coding since childhood, shares his insights on how AI is becoming embedded into every aspect of the DevSecOps pipeline, from writing code to identifying and remediating security vulnerabilities.
Richards and Bush dive deep into GitLab's AI-powered features, collectively known as Duo, which are sprinkled throughout the software development process. They discuss how these features enhance productivity, automate monotonous tasks, and provide valuable insights to developers and business users alike. Bush also sheds light on the importance of human oversight in the AI-assisted development process, emphasizing the need for thorough code reviews and security scans.
Questions we answer in this episode:
  • How is AI changing the daily work of developers?
  • What are some real-world use cases for AI in the DevSecOps pipeline?
  • How can organizations ensure the security and reliability of AI-generated code?
Key Takeaways:
  • AI is becoming an integral part of the entire software development lifecycle
  • Developers must still carefully review and vet AI-generated code before deployment
  • GitLab's AI gateway allows routing requests to the most appropriate AI models
Bush provides a fascinating look at the evolution of DevSecOps, stressing the importance of considering security throughout the development process rather than as an afterthought. He explains how GitLab's AI-powered features, such as vulnerability scanning and automated remediation, help developers efficiently identify and fix security issues early on, saving time and resources in the long run.
This episode is a must-listen for anyone interested in the cutting-edge intersection of AI and DevSecOps. Whether you're a seasoned developer, a security professional, or simply curious about the future of software development, you'll come away with valuable insights and a clearer understanding of how AI is revolutionizing the industry.

Episode Notes
Links & Notes
  • (00:00) - Welcome to Cyber Sentries
  • (00:58) - About John Bush
  • (03:58) - Moving to GitLab
  • (05:30) - Solution Architects
  • (06:40) - Duo's AI Solutions
  • (10:26) - Context
  • (12:17) - Switching Models
  • (13:58) - Best Practices
  • (17:51) - Policy Capability
  • (22:37) - Remediate the Vulnerabilities
  • (23:59) - DevSecOps in This Ecosystem
  • (26:21) - Organization Approaches
  • (28:55) - Level of Knowledge Required
  • (31:09) - Finding John
  • (32:14) - Wrap Up

Creators & Guests

Host
John Richards II
Head of Developer Relations @ Paladin Cloud. The avatar of non sequiturs. Passions: WordPress 🧑‍💻, cats 🐈‍⬛, food 🍱, boardgames ♟, a Jewish rabbi ✝️.

What is Cyber Sentries: AI Insight to Cloud Security?

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

John Richards II:
Welcome to Cyber Sentries from Paladin Cloud on TruStory FM. I'm your host, John Richards. Here we explore the transformative potential of AI for cloud security. Our sponsor, Paladin Cloud, is an AI-powered prioritization engine for cloud security. Check them out at paladincloud.io. Today we're joined by John Bush, solutions architect at GitLab. From writing his first code as a child to architecting solutions for organizations, John has had his hands in code his whole life. In this episode, we discuss how AI is changing the day-to-day life of developers as it becomes embedded into every aspect of the DevSecOps pipeline. Let's dive in.
Welcome, John. Thanks so much for coming on the podcast here. I met John recently at a conference where he was talking about artificial intelligence, and I really enjoyed the talk and I'm so thankful that you were willing to come on here, share some of those insights that you shared there, a little bit about what's happening over at GitLab with AI. Quite exciting in my mind, but before we jump into that, I'd love to hear a little bit about how you got started in this industry.

John Bush:
Sure, yeah, and thanks for having me. I love talking about this stuff. This is fun. So how did I get started? So I've been in the industry for a while, so I'm going to have to turn the clock way back. But I guess if I had to pinpoint how I got into this industry or why, when I was, it must've been 10 or 12 years old, my dad... And again, this will probably date me a little bit, my dad bought an Apple II Plus computer. And this is back when it probably cost several thousand dollars. At the time, it was very expensive. In fact, most other families, most friends of mine, they didn't have a computer. My dad bought the Apple II Plus, and I think he was planning to use it for himself primarily, but again, I was 10 or 12 and I pretty much monopolized it.
Once we got it, I was fascinated by it. I ended up teaching myself to program, and at the time it was just Apple's dialect of the BASIC programming language. And just one example of how I used it, one of the first programs I wrote, it might've been the first Christmas after we got it. So my parents would force me to... I guess this is their trying to instill manners in me, but I was supposed to write thank you notes to everyone in the family who had given me a gift; aunts, uncles, and so on. And I hated doing it, and I'd have to handwrite all these things. Well, I wrote a little BASIC app that basically cranked out a form letter to everyone in the family. Dear blank, thank you for the blank. I really liked it. Love, John.
So that was one of my first apps, and I might've had to crank out six or eight of them. And by the way, this is on an old dot matrix printer. For the people that remember it, it had all the perforation on the sides and you had to pull it off. And at the time, I think everyone thought it was kind of cute that look, John was able to do this. I think now I'm not sure. I probably couldn't get away with it, but at the time, no one else was doing that sort of thing, so my family thought it was neat.

John Richards II:
You're not using ChatGPT to write all of your Christmas cards this year?

John Bush:
Exactly. That would be awesome. I probably would use ChatGPT if I was going to do something like that. At least it could mix it up. It wouldn't be such a boilerplate template. But anyway, that was my first attempt at programming and it just went from there. I kept working with computers through high school, went into college, got a CS degree and went into software development from there.

John Richards II:
How did you end up over at GitLab, which is where you're at right now, and involved in some of the AI work that they're doing right now?

John Bush:
I never thought I'd be in the role that I'm in now, which is a solutions architect, which is more of a technical sales role. As I mentioned earlier, I spent the bulk of my career in software development, so I was a programmer and I loved doing that. I've always loved building things, especially with computers and programming. And I did that for a long time, probably over 20 years. One day I got a call from a buddy of mine who was working not at GitLab, but another company who had a technical sales position open. His motivation was probably to get a referral bonus, but he said, "Hey, John, I thought you'd be a good fit for this technical sales role we have open at where he was working. Why don't you interview?" And I did. I got the job and I took it.
And it was one of those things where I'd been doing the same thing, that is, programming for a long time and this allowed me to mix it up a little bit. I still got to take advantage of a lot of my technical skills, but now I'm meeting with customers. It was a little bit of a growing experience because now I'm having to use more interpersonal skills, people skills, but I'm still talking to technical folks and I'm still answering technical questions about the products that I'm selling. And so anyway, it was a nice change of pace, but not dramatically different than what I'd done the rest of my career.

John Richards II:
I love talking to solution architects because I find there's nobody who quite is as good at that marriage of the true technical solution and then trying to fit that into real world use cases and problems because each new customer comes in with a slightly different challenge. And on paper it's easy to say, this just solves everything. But you get in there and you've got to deal with these inconsistencies and then all of a sudden like, oh, rubber meets the road. Now we have to say, can this really work for you? Can this provide value?

John Bush:
Yeah, that's an excellent point. You're right, the devil's in the details because everybody's needs are slightly different. And that is a lot of the challenge, is understanding what a customer is trying to do. And then how can the product that you're selling help them and how does it best fit their needs? But that also is what's fun about it. And like I said, in doing that, I get to meet with the engineers at the customer and they're doing the things that I used to do, and so it gives me some credibility talking to them that, hey, I was you at one point.

John Richards II:
Let's talk a little bit here about how Duo is looking to solve some of these. If I understand right, this is the AI version that GitLab is building out there and looking to solve some of these real developer challenges. And so this is what interests me in the talk, and I'm excited to explore more with you here, is how AI is being used to tackle some real challenges that are out there.

John Bush:
That's a good question too. And just to set the stage a little bit, so Duo is the brand name that GitLab has given to all of our AI features. So it's within GitLab, and I can't keep up anymore, we keep introducing new AI features, but I think it's somewhere around 15 different features at this point in time. 15 different AI features that are part of GitLab, and they're collectively called Duo. And for those that don't know, GitLab is an end-to-end integrated platform that encompasses the whole software development life cycle. So just to give GitLab in a nutshell, it's the single platform where at the front end of it, you could have business users input requirements and define new features for, let's say, a product. Maybe they could describe bugs that need to be fixed and that sort of thing.
And then those requirements can get assigned to developers all within GitLab. Developers can then go write code within GitLab, check that code into Git, typically on a feature branch. They can run CI/CD pipelines, they can do security scans, they can do their unit tests, and then they can ultimately deploy this application hopefully into a production environment. And so that's all GitLab end-to-end. And so from a Duo perspective, Duo is our suite of AI features that we've sprinkled in throughout that whole software development process. And you mentioned developers, and we do have AI features for developers. So there's ones that most people listening are probably familiar with. Things like, hey, while I'm developing in my IDE, we have features that are going to try to suggest to you what's the next code that you're going to want to type so that you can accept it and maybe it'll make you a lot more productive. I don't have to type every single keystroke in.
But we also have features that'll generate code. And again, this is for developers, I can ask it, can you generate me some tests or can you refactor this? It's best suited, at least in the times that I've used it, it's best suited for monotonous, repetitive coding tasks, but it's really good at those things now. But that's just developers. And as I mentioned, GitLab is this broad platform, and we have AI features in there that are part of Duo that help with tasks for the business users. So for example, there's things that will summarize discussions on issues. There's features that will identify security vulnerabilities and tell you or suggest to you how to remediate them. And we can get into these maybe later in more detail, we have features that'll even remediate a vulnerability for you.
The feature that I tend to like the most in GitLab, the one that I end up using the most at least, is, so in GitLab, we run CI/CD pipelines. And every once in a while I'm building out my pipeline and I do something and I break it, and GitLab has a feature called Root Cause Analysis, and you click a button and AI analyzes your pipeline and says, oh, here's where you made a mistake and what you need to do to fix it. And it's one of those big time-saving features for me.

John Richards II:
I've spent a lot of time digging through logs trying to find something, and it could take a lot of time to do that. Now, especially because you're talking about Duo being maybe this encapsulating term for a lot of these features. How does Duo know when a user is making a request what to engage with? Is it using one AI model behind the scenes or are you splitting that out based on context?

John Bush:
That's a very good point because if you think back to the features we talked about, we have features that are AI features that are summarizing text. We have features that are trying to help you with security vulnerabilities, features that are helping you write code. All those are very different use cases. And in GitLab, we came to this conclusion that there's no single AI model that's best at all those different use cases. And this is more behind the scenes what I'm about to describe and not something that you would even really notice if you were using GitLab, but behind the scenes in GitLab, we have something, we call it the AI gateway. And essentially all these AI features, all the Duo features route their requests when they're talking to an AI model through this gateway and the AI gateway allows us to route the request to the most appropriate AI model for that use case.
So one AI model may work best for summarizing text, so we'll go to that model. Another AI model might work best for another task, and we'll route to that one. So it gives us a lot of flexibility in determining what model to choose and which one works best. And also, just given the state of AI today with new models coming out constantly and the models that do exist are changing and improving, it makes it easy for us to change as time goes by to say, you know what? This used to be the best model, but now there's something even better that we could switch to.
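A rough sketch of the gateway pattern John describes might look like the following. The routing table, model names, and function names here are illustrative assumptions, not GitLab's actual implementation; the point is only that callers name a use case, and the gateway alone decides which model serves it.

```python
# Hypothetical sketch of an "AI gateway": every feature routes its
# request through one chokepoint, which maps the use case to whichever
# model is currently judged best for it.

# Swapping in a newer model for a use case is a one-line change here,
# invisible to the features that call the gateway.
MODEL_ROUTES: dict[str, str] = {
    "summarize_text": "model-a",
    "code_suggestion": "model-b",
    "explain_vulnerability": "model-c",
}

def call_model(model: str, prompt: str) -> str:
    # Stand-in for the real model API call.
    return f"[{model}] response to: {prompt}"

def route_request(use_case: str, prompt: str) -> str:
    """Look up the model assigned to this use case and dispatch to it."""
    model = MODEL_ROUTES.get(use_case)
    if model is None:
        raise ValueError(f"no model registered for use case: {use_case}")
    return call_model(model, prompt)
```

The same chokepoint is also where a self-hosted, on-prem model could be slotted in later, since callers never name a model directly.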

John Richards II:
And then I guess that means for the user, you could switch that to this new and improved one without them needing to learn a whole new... The way you do Root Cause Analysis, for instance, would be the same. You would just be getting maybe an improved result 'cause the better models come in, but they wouldn't have to be like, oh, I now have to stop and use a whole different process that I'm not used to?

John Bush:
Right. And so today the model that's selected is pre-selected by GitLab, so you can't change out the model. We've done the analysis and decided what model to use. In fact, we have in our documentation a list of, for this feature, we're using this model for that feature, we're using this other model. And the models that we're using are ones that you may have heard of. Anthropic Claude is used for some of our features, Google Vertex is used for other features. We have custom models even for some of the functionality that exist in GitLab. But what you described is something, and again, this doesn't exist today, but perhaps down the road, we could allow the end user to choose what model they want to use or allow them to even route to...
One of the things that I constantly hear from customers is that they're very concerned about the security and privacy implications of sending requests out to the cloud, whether it be to whatever AI model. So maybe they want to host their own AI model on-prem and route all those requests to it so that none of their data is leaving the premises. And again, I don't want to promise anything, I'm not a product engineer, but I know those are things that we're thinking about and those are possibilities down the road.

John Richards II:
So you're talking a little bit here about the security impact of this too. One area I'm curious about: as you have AI write some of this code, I'm impressed by the quality that does come out, but there are times where it's not completely... How can you know you can trust it? So what are the best practices you're recommending for developers that are using this? Should you be like, hey, I spun up this code, I'm going to push it straight to production, now we're ready to go, or what guardrails are folks putting in place?

John Bush:
I think that's something we're hearing from folks quite a bit too. And by the way, so again, back when I was a developer, I oftentimes, when I would get stuck, I would Google, how do I do this sort of SQL query? Or how do I do something? And more often than not, that would lead me to Stack Overflow and there'd be a code snippet and I might cut and paste that and put that into my IDE and use it. And more often, it was fine, it would work and I'd get the task done. I see AI as just a more efficient version of that where it's going to be suggesting solutions or trying to help you with tasks that you're trying to do and it's going to be suggesting code that you should use. But at the end of the day, I guess whether you as a developer are writing the code yourself, whether you're cutting and pasting it off the internet, whether AI has suggested it, you still need to thoroughly vet the code and scrutinize it and make sure it's...
At the end of the day, it's going to be a part of a commit that you as a developer are making to the code repository and you're responsible for it. So you need to ensure it's free of bugs, free of security vulnerabilities and so on. And it doesn't matter really where it came from, it's all your responsibility as a developer. And if you zoom out, I guess, a little bit, maybe this is what developers will be doing more and more as time goes by, is instead of writing the code themselves, maybe your job as a developer is going to be more to be, I'm going to be reviewing the code that was produced by an AI model. But the point is we still need a human in the loop. We still need a human reviewing all of this code regardless of where it came from to ensure that it does what it says it does or it's supposed to do, and it's free of vulnerabilities.

John Richards II:
And somebody who's copied and pasted some code and didn't fully look at it, trying to make a commit, and then suddenly I'm testing it and I'm like, wait, that's not my company name. Oh wait, that's this other company that had posted Stack Overflow or whatever the example. I'm like, "Oops, better clear that out." There is that you got to have a process in there to catch it. I do like this idea of treating it as almost just another developer, and whether you've got some kind of request flow or you've got that review process that's happening to make sure, hey, this is vetted, somebody's put eyes on this and we're certain. Somebody's owning responsibility, I like that term there. Because you got to have some accountability at the end of the day for whoever said, okay, I trust this enough, I'm going to try and push this up to production.

John Bush:
That's exactly correct. The one thing I'm always wondering with AI is where are we going to be with AI in three years, five years, 10 years? Because maybe we'll get to a point where you can trust the code coming out of these AI models, but we can't do that today. And you're right, you still need to review the code. You need to go through all the same steps as if you had written it yourself. You need to have code reviews, have colleagues on your team review the code. You need to do security scans and test the code and make sure there's no vulnerabilities in it. The last thing you want to do is to take the code that came out of an AI model and just commit without reviewing it. That's where you're going to get into trouble.

John Richards II:
Yeah, for sure. One of the things I was very excited about was you were talking about some of the... Well, I don't remember the right spot, so you let me... But somewhere in the almost post-commit, there's a way to add in some extra monitoring or security here and patch some of this stuff so that even if somebody is pushing this straight through, how do you make sure you haven't exposed a secret or something along those lines? That checking things and even maybe even blocking them, I remember you mentioned. So can you walk through some of the policy capability there?

John Bush:
You're right. So I guess as an organization, what you ideally would want is you want to make sure that all my code is scanned and secure before it makes its way to production. And so in GitLab, we have features, they're called security policies. And essentially a security policy, you're able to dictate that, hey, as part of all of our CI/CD pipelines, we want to ensure that certain security scans are executed, and maybe that's static analysis or dynamic analysis. And within GitLab, there's a whole variety of security scanners. There's container scanning, dependency scanning, but as an organization, you can require that those scans execute so that the developers can't bypass it. Now, what's neat from the developer perspective is... Because as a developer, and again, that's my background, I don't want anyone telling me what I can or can't do in my CI/CD pipeline. I want to have control over that. And in GitLab, you still do.
You can write your CI/CD pipeline in the way that you want to do it, but if these policies are implemented, they're going to inject an additional job that will run in your pipeline, or at least one or more jobs to do these sorts of security scans. So you can still do what you want in your pipeline, but the organization, if they use these policies, they can have extra jobs sort of inserted into the pipeline that are going to run in addition to whatever you as a developer have added. But with those policies, you can say, I want to ensure all those security scans execute. But then in GitLab, you can take it even one step further. There's a different type of policy you can write to say, once the scans execute, I want to block the pull request or in GitLab, we call it a merge request from being able to merge into production until all...
And I'm making this up because these rules would be configurable, but you could create a rule that says, I'm going to block any merge request from being merged into the production branch until all high or critical vulnerabilities have been fixed. And then as a developer, you won't be permitted to merge until that is fixed. And again, that way, with policy set, as an organization, you can be confident that your code is all being scanned and you don't have to rely on the word of the developers that, oh, sure, I'm running the security scans. You know they're being executed because the policies won't allow them to proceed without scanning.
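The two policy types John walks through are defined as YAML in GitLab. The fragment below is only a rough sketch of that shape; the exact field names and schema live in GitLab's security policy documentation and may differ from what's shown here.

```yaml
# Rough sketch (field names approximate) of the two policy types
# described above.

# 1. Scan execution policy: inject security scan jobs into pipelines
#    so developers can't bypass them.
scan_execution_policy:
  - name: Require SAST and secret detection
    enabled: true
    rules:
      - type: pipeline
        branches: ["*"]
    actions:
      - scan: sast
      - scan: secret_detection

# 2. Merge request approval (scan result) policy: block merging to the
#    production branch while high/critical findings remain.
scan_result_policy:
  - name: Block unresolved high/critical vulnerabilities
    rules:
      - type: scan_finding
        branches: ["main"]
        severity_levels: [high, critical]
        vulnerabilities_allowed: 0
    actions:
      - type: require_approval
        approvals_required: 1
```

The split mirrors what John describes: the first policy guarantees the scans run regardless of how a developer writes their pipeline, and the second gates the merge on the scan results.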

John Richards II:
And some of these policies, are they using AI? I feel like I remember an example of one that could look for common patterns of secrets or maybe credit card numbers or something and could flag those and say, oh, well, if we detected this, we're going to let you know so you can make sure, hey, I'm okay with this information going on.

John Bush:
So today, the policies themselves are not using AI. They're just enforcing the execution of security scans, or they're preventing you from merging into production if you have certain types of vulnerabilities, like high or critical ones. But there are features, and by the way, you mentioned secret detection. That's another type of scan you can do. We do have features within GitLab. There's one called Explain this vulnerability. And this is more of, so I'm the developer, I'm working on a new feature, I'm doing this on a feature branch. As I make each commit, these security scans are going to run, and if they find a vulnerability, now I'm required to fix it. So if it's a vulnerability I'm not familiar with, I can use explain this vulnerability, and it's going to tell me why is this code insecure and what do I need to do to fix it?
And so it'll sort of hold my hand and let's say it's a SQL injection vulnerability, it'll teach me about SQL injection vulnerabilities. Why are they a problem? What risks do you run with a SQL injection in the code? And how do you fix it? And the idea would be not only showing you how to fix it, but let's educate you so that you don't make this mistake again.

John Richards II:
I love that so much. I've been a developer who's gotten security reports from a security team and been so lost, and some of them I'm like, "I don't even know if this really matters." And often the language or the title of it isn't very clear, and I'm going a ton of searching, trying to find this out and be like, "What do I do here?" So this level of clarity, and especially in context as you're getting it, would be incredibly helpful.

John Bush:
Yeah, for sure. And then we even take it one step further where there's a remediate this vulnerability feature that it essentially... Think of it as you have another developer that you can ask to fix the vulnerability for you. And essentially it's going to create a new merge request or pull request, and it's going to include in that the code that remediates the vulnerability. Now, as we were talking about earlier, you don't want to just blindly approve the merge request. You still need to have a human go review it and make sure that the code produced by the AI tool is correct. But more often than not, it is. And again, it's just another productivity enhancement that allows you to get these vulnerabilities fixed more quickly.

John Richards II:
I feel like an old man, get off my lawn kind of thing, but I'm so jealous of the developers that are having these tools right now. I'm like, "Why didn't I have that when I was building my application?"

John Bush:
I'm the same way. I am jealous. I didn't get to use these back when I was writing code, but-

John Richards II:
Oh, I love it. So with this idea, you talked about this maybe holistic overall approach to this developer pipeline that GitLab is focused on, and it sounds like you're using AI a little bit throughout there, which I love. And there's the security piece, so this DevSecOps that starts to come in here as you're saying, how do we embed security both early on in the pipeline? But then you've got these scans that are happening, and then there's this remediate piece. And so how do you start thinking about DevSecOps in this kind of larger ecosystem?

John Bush:
Well, to me, and I feel like everybody has different opinions on this, and there's no right or wrong answer. If you ask 10 different people what DevSecOps is, you'll get 10 different answers. But to me, it's not anything specific. It's more just like an evolution in thinking. And really, it's similar to how when DevOps came out, it's hey, we want operations to be considered throughout the development process. And now all we're saying is we think security also should be taken into consideration throughout the entire development process. This isn't something you can just throw over the fence at the end of the development process to a security engineer and say, hey, now it's your problem to deal with any security issues.
I guess you could still do that, but at the very least, it's going to be much, much more inefficient to have someone do it that way, because what's likely to happen is you implement a feature as a developer, you're done. You send it down the development process. The security engineer at the end of the day is going to, maybe just before going to production, run some security scans on it. He finds some issues. Well, now he's going to toss it back to you as a developer to say, hey, I found these vulnerabilities that you need to fix. Well, by that point, you've moved on to some new feature, and now I've got to... So anyway, it interrupts my flow. It's just very inefficient. And by the way, I'm sure most people have heard of this whole shift left idea in software development. This is the whole benefit of it.
Let's find those vulnerabilities upfront and get them fixed before they get close to production. And that's the whole point of those policies we were talking about earlier, is let's ensure that we've identified those early in the development process so that they get fixed right away.

John Richards II:
Now, do you see different sizes of organizations approaching this idea of DevSecOps differently? I've seen some groups being like, "Well, we're going to try and form up a team." But you have to be fairly large, more enterprise level to try and make that happen. And then other groups being like, "Oh, well, how do we incorporate this in here?" Or do you think there's less structure around that and everybody's tackling this in their own way right now until we get more standards and best practices around this?

John Bush:
Well, and it goes back to what I was talking about earlier that I don't think there's any right or wrong answer when it comes to DevSecOps. And there is definitely a difference in approaches between a large organization and a small organization. The large organization, I guess by their nature, things are more formalized oftentimes and people's roles are more defined. Whereas on a small organization, it's, I don't want to say anything goes, but it's a little more loosey-goosey as far as how you get things accomplished, or people's responsibilities aren't just constrained to a certain box. But the point is that we want to think about things like operations and security throughout the whole software development process.
And however you get there, I do think as an industry, we've recognized that that's the right way to do software development. As I was speaking about earlier, you don't want your developers writing apps and then throwing the code over the wall to someone in operations or to someone in security. Maybe that's the way we used to do it, but we now know that that's not the most efficient way to go about things.

John Richards II:
That makes a lot of sense. You've got too much back and forth for that to happen, and the people who need to remediate it maybe have different priorities than the folks who are saying, "Hey, here's this list of things that we need to tackle." And you get this conflict of, well, I think it's really important. We need to solve it. And you just go round and around if you don't have folks that are actually building it, having this as a top concern as well.

John Bush:
Right. Exactly. And the other part I'd add to that is that it promotes a culture where everybody feels responsibility for all of those things. That software developers are responsible for not only implementing this feature and getting it working properly, but they need to consider what's it going to take to get this application into production and operate it? They need to think about security from the get-go and not think those things are someone else's problem. Again, that's just a bad way to do it.

John Richards II:
And I've been hearing a growing concern around developers that all of this shifting left is starting to raise the total knowledge you need to be in this field higher and higher. As we're shifting left the ops, we're shifting left security and the level of what you need to know around microservices and what you need to know about the different cloud architecture and what you need to know about maybe Kubernetes is just constantly adding more and more. But one thing that excites me a little bit is that maybe some of the level of knowledge required may start to decrease as we can bring in artificial intelligence to help supplement that. So maybe you don't need to know every security vulnerability if you're able to bring that AI mind in to say, hey, give me the context I need to know on this specific thing.

John Bush:
I like that. I agree. We were talking about how I'm envious, or we were both envious, of the developers today having access to AI, but I'm not envious of the new developers coming into this industry. And I do feel like there is so much more that you need to learn and ramp up on, but you do have AI now and these AI tools. Maybe that's going to be your assistant that's going to sit next to you while you're working and help you with all these things. And to some extent, I always thought as a developer, you can't know everything. There's just too much. Things change too quickly. At the end of the day, I guess the folks that I worked with in development that were the best were just the ones who loved this stuff and were always interested in learning. And when some new technology came along, or something like Kubernetes, or some new security vulnerability, you name it, they would just dig in and learn it.
And I think that's just the attitude that works best for someone in this industry. And don't worry that you don't know all this stuff, because no one does. It's impossible.

John Richards II:
You got to embrace that learning mindset 100%, which is a great segue into, for folks that want to learn more about you, John, here as we have to wrap up, where can they learn more about you, and anything you want to shout out here at the end of our podcast?

John Bush:
Sure, sure. So I guess if you want to find me, I'm on LinkedIn. My username is J. Bush. Or I'm on Twitter or X, and my username is johnnyb, J-O-H-N-N-Y-B. Feel free to connect with me on either platform. And the only other thing I would throw out there is we talked a lot about GitLab. If you have any interest in GitLab or the AI features that we spoke about today, the ones called Duo, go to gitlab.com. You can sign up for a free trial, it doesn't cost anything, and you'll have access to all these features. And at the very least, it's fun to play with them and try them out, so please go check it out.

John Richards II:
Absolutely. Yeah, go check that out. We'll make sure to have a link to that trial in the show notes. So anybody who wants to go check that out, click on that. Thank you so much, John, for coming on the show. Loved hearing about what you guys are doing over there. Thanks for bringing your expertise here in AI and just letting our listeners know more about the exciting opportunities that are out there.

John Bush:
Thanks for having me join the podcast. This was a lot of fun.

John Richards II:
This podcast is made possible by Paladin Cloud, an AI-powered prioritization engine for cloud security. DevOps and security teams often struggle under the massive amount of notifications they receive. Reduce alert fatigue with Paladin Cloud, using generative AI to model risk scores and correlate findings across your existing tools, empowering teams to identify, prioritize, and remediate the most important security risks. If you'd like to know more, visit paladincloud.io. Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio Engineering by Andy Nelson, music by Amit Segee. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Leave a like and review. It helps us get the word out. We'll be back June 12th right here on Cyber Sentries.