Cyber Sentries: AI Insight to Cloud Security

Ori Bendet Shares Insights on AppSec and Managing AI Risks
In this episode of Cyber Sentries, John Richards is joined by Ori Bendet, VP of Product Management at Checkmarx, a leader in application security. They explore the critical role of application security in today's digital landscape and discuss strategies for managing the risks and opportunities presented by the rapid adoption of AI in software development.
Ori shares his journey into the cybersecurity industry and offers advice for those transitioning into the field. He emphasizes the importance of focusing on areas that are business-critical, such as application security, as more companies become software-driven. Ori also discusses the shift in application security from finding every vulnerability to prioritizing the most critical risks, given the accelerated pace of development and deployment.
Questions we answer in this episode:
• How can organizations effectively prioritize application security risks?
• What are the key challenges and opportunities presented by AI in software development?
• How should security teams adapt their practices to manage AI-generated code?
The conversation delves into the disruptive impact of AI on software development and the new types of risks it introduces, such as AI hallucination, data poisoning, and prompt injection. Ori stresses the importance of a layered approach to securing AI-generated code and the need for organizations to assess their specific use cases and risks before defining policies and tools.
Key Takeaways:
• Application security is critical as companies become increasingly software-driven.
• Focus on prioritizing the most critical risks rather than trying to find every vulnerability.
• Adopt a layered approach to securing AI-generated code and keep the human in the loop.
This episode offers valuable insights for anyone looking to understand the evolving landscape of application security and the impact of AI on software development. Ori's expertise and practical advice make this a must-listen for security professionals, developers, and business leaders alike.
Links & Notes
  • (00:00) - Welcome to Cyber Sentries
  • (00:56) - Meet Ori Bendet
  • (02:31) - Advice When Thrust Into Cyber Security
  • (04:34) - Application Security
  • (07:37) - Opportunities for Growth
  • (09:58) - Shift to Business Risk
  • (12:28) - Making Assessment
  • (16:08) - Core Cybersecurity Principles
  • (20:31) - Restrictions Needed?
  • (23:17) - Using AI in Checkmarx
  • (27:57) - Give Them What Matters Most
  • (29:40) - Wrap Up

Creators & Guests

Host
John Richards II
Head of Developer Relations @ Paladin Cloud. The avatar of non sequiturs. Passions: WordPress 🧑‍💻, cats 🐈‍⬛, food 🍱, boardgames ♟, a Jewish rabbi ✝️.

What is Cyber Sentries: AI Insight to Cloud Security?

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

John Richards:
Welcome to Cyber Sentries from Paladin Cloud on True Story FM. I'm your host, John Richards. Here we explore the transformative potential of AI for cloud security. Our sponsor, Paladin Cloud, is an AI-powered prioritization engine for cloud security. Check them out at paladincloud.io. In this episode, I'm joined by Ori Bendet, VP of Product Management at Checkmarx, a leader in enterprise application security. We explore why application security is now indispensable for businesses and delve into the critical role of focus in a world of accelerating deliverables supercharged by AI. Let's jump right in.
Hello everyone. Today we're joined by Ori Bendet, VP of Product Management at Checkmarx. Ori, thank you so much for coming on the show. We're really excited to have you. So thank you for being here today.

Ori Bendet:
Thank you, John. Thank you for having me. It's going to be a great talk between us today.

John Richards:
Absolutely. Well, I wanted to start out with getting to know you a little bit better. So can you talk about how you got into the security space and specifically this role at Checkmarx?

Ori Bendet:
Yes, of course. So I started my career in engineering. I was what was called an automation engineer and grew my way into engineering management. And then a couple of years later, I decided that I wanted to make the shift from engineering to product management. I moved to become a full-time product manager when I worked at HP, as in Hewlett-Packard, as part of one of their acquisitions in Israel.
And then I kind of fell in love with being a product manager, with product management. And after a couple of years I moved to a smaller Israeli startup, and then a friend who worked with me at HP and had already moved to Checkmarx said, "We're looking for product managers, why don't you join us?"
But I said, "Yeah, but I don't have any cybersecurity background. I do have engineering background."
And he said, "That's fine, you will get it."
And almost five years later, here I am, running product management at Checkmarx, right at the heart of everything that we do.

John Richards:
Amazing. I love this story, and it's very familiar to a lot of folks I know. There are plenty of folks out there right now who are getting degrees in cybersecurity, but most of the folks who are already in these fields got thrust into it, kind of similar to you: they were doing something engineering related, and there was a need in the cybersecurity space.
Any advice for those kinds of folks out there saying, "Hey, I've been thrown into cybersecurity. How do I start to understand this? It's so broad, it's so vast."

Ori Bendet:
First of all, cybersecurity is one of the hottest trends if you look at the number of startups, the funding around it, and everything that is happening. So this is probably the industry to be in, not to take away from other industries; I've been in those as well, which is great. But something that also comes to mind, as I've grown in my career, is making sure that whatever you do, you want it to be part of what is called business critical. And cybersecurity, and specifically application security, which we will dive into in a couple of minutes, is critical to the business. And this is one of the best tips I give people today: make sure that whatever you do, it's business critical.
Now, on getting started with cybersecurity: first of all, I think it'll make it easier if you come from a technical background. I'm not saying it's the only way you can do it, but obviously a lot of times you need to understand the technicalities. Just look at one area and get started. Don't try to conquer the moon. Baby steps: start with learning the [inaudible 00:04:16] or any other known framework that exists out there, and take it from there. Because it's so broad, as you said, there are many, many areas, and you might like one but not connect to the other. So give it a try, start small, and then decide how you want to expand.

John Richards:
Great advice. And with that idea of focusing: cybersecurity is so broad, and as you mentioned, Checkmarx's specialization is application security. So explain a little of what application security specifically is, and why folks should really be thinking about this area, why it's so important.

Ori Bendet:
Checkmarx is one of the pioneers and veterans in application security. And the story is kind of romantic, if you want to say: two Israeli entrepreneurs in 2006 with a dream of helping developers write secure code from the get-go. Almost 20 years later, we have 1,700 enterprise customers all over the world. So from them being developers back then, wanting to help developers write secure code, here we are making this dream come true, if you want. And this is what we do. We help developers and application security teams make sure that their code is as secure as possible as part of the development phase. A lot of vendors, and Checkmarx too, like to call it shift left: you want to start the security assessment as early in the cycle as possible. Some even say left of left or...

John Richards:
Yeah, I liked the tagline I saw on a couple of things, "make shift happen." So yeah, you guys have really leaned into this idea.

Ori Bendet:
Yeah, I mean we keep changing. A couple of years ago it was "the world runs on code, and we help secure it." Now, the important thing about that is that as more enterprises and companies in general become digital, it means that everything is software. Which means that application security is now becoming critical to the business, okay? Because it's not like a central IT that develops internal solutions. It's everything that the company is doing. And the best example I like to give is one of the largest airlines in Europe, a Checkmarx customer. I talked to one of their senior managers and he told me that they don't look at themselves as an airline. They look at themselves as a software company with wings. And just think about what that means.
Everything that they do is software, which means that everything they do has to have security baked in. And this is basically what Checkmarx is helping its customers do.

John Richards:
You're then following not only your own recommendation that folks should find something that's critical to the business; now it's the whole business: this software, how do we secure it? When it goes down, we're not talking about just, oh, we've got a backlog internally to fix all this stuff. No, we're losing money every single second, because this software is so important. So what areas do you see application security really expanding in right now? What are the biggest opportunities for growth?

Ori Bendet:
I think one of the main shifts that has happened in the market is that 10 years ago, the expectation from the vendor was to find everything, okay? And obviously, development-phase solutions, or static solutions if you want, tend to be noisy, simply because of how they work: you lack the context. Now, as the years moved on and the industry shifted, it was no longer "give me everything," but "help me focus on what matters most." Why? Because of DevSecOps, because of running faster than ever. And as more companies moved into cloud native, cloud first, digital transformation, it meant that the time they had to develop, test, and release the code shifted from months to probably minutes.
And the average, last I checked, not even the edge case, is four times a day for a company to push a new version to production, which is crazy, okay? When I started, we had half a year, everything was one after the other, we had so much [inaudible 00:09:09], which is great, right? But now everything is running faster, which means that the [inaudible 00:09:14] that we have to even assess doesn't exist, because everything has to happen in minutes, and this is a big change.
So instead of finding everything, you need to help focus on what matters most. And this is why, over the last year or so, we've seen the rise of ASPMs, application security posture management solutions, and others, where the goal is to take as much information as possible, aggregate it, correlate it, and give you, or most importantly your developers, a list of the top 10, top five, top 50, whatever your policy is, and make sure that they focus there. Because no matter what, we don't have time to fix everything.
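To make the "top 10, whatever your policy is" idea concrete, here is a minimal, hypothetical sketch of the aggregate-correlate-prioritize pattern Ori describes. The fields and weights are invented for illustration; this is not Checkmarx's or any ASPM vendor's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str           # e.g. "sast", "sca", "iac": whichever scanners feed the view
    title: str
    severity: int       # 1 (low) .. 4 (critical)
    exploitable: bool   # is the vulnerable code actually reachable?
    internet_facing: bool

def risk_score(f: Finding) -> int:
    """Toy scoring: severity first, then boost findings with real-world context."""
    score = f.severity * 10
    if f.exploitable:
        score += 20
    if f.internet_facing:
        score += 10
    return score

def top_n(findings: list[Finding], n: int = 10) -> list[Finding]:
    """Aggregate findings from every tool and hand developers only the top N."""
    return sorted(findings, key=risk_score, reverse=True)[:n]

if __name__ == "__main__":
    backlog = [
        Finding("sast", "SQL injection in /login", 4, True, True),
        Finding("sca", "Vulnerable transitive library", 3, False, False),
        Finding("iac", "Public storage bucket", 3, True, True),
    ]
    for f in top_n(backlog, n=2):
        print(f.title, risk_score(f))
```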

John Richards:
Yeah, the noise ratio now is crazy. There's so much code, as you mentioned, getting deployed rapidly, and now we've got AI writing a lot of code, or at least, hopefully, the developers are reviewing it. But the rate of code generation is just increasing, and trying to get on top of all of these alerts can be incredibly overwhelming. So I love that you're looking at how to prioritize this: what are the most important things the team can focus on? Because at the end of the day, a lot of these security teams are understaffed, or at least overwhelmed, and they're trying to say, well, how do I triage what's going on? Many folks are in environments where the idea of getting everything to zero is impossible, or at the very least isn't financially doable for some of the stuff that has to be remediated. But by focusing, they can really make sure: hey, what's my biggest risk here? How do I make sure that if something goes wrong, it's not one of these obvious things that's going to cost me a ton of money?

Ori Bendet:
Yeah, absolutely. I think the shift has moved from "let's talk about vulnerabilities" to "let's talk about business risk." And business risk is the language that every company speaks, because it might be a SQL injection or a Log4j or any other vulnerability that you have, but you should look at the bigger picture.
And because you don't have time, this is what you need to do. And this is the main shift that Checkmarx is also making in providing our customers with an aggregated view. And I think the best example that we like to give to our customers is Checkmarx One, our cloud native platform that brings all of the solutions into one platform, but it's not only an aggregator. It correlates different findings. And the best example is that we correlate your open source findings, the libraries that you're using, with your source code. And by doing that, we can even tell you whether the vulnerable function from the open source library is actually being called by your source code.
If it is, it's probably top of the list. If it's not, you can probably leave it for later, and let's be honest, you will never get there. So this is the main shift in what we are trying to help our customers do, and by that, helping them drive the business and reduce the risk, because today security teams have to enable the business. And as we already discussed, the business is development and developers.
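To picture what that correlation looks like, here is a deliberately crude sketch of the reachability idea, not Checkmarx's engine: given an advisory saying a particular function in a dependency is vulnerable, walk the codebase and surface only the files that actually call it. The package and function names are made up.

```python
import ast
from pathlib import Path

# Hypothetical advisory: the function "parse" in the package "vulnlib" is vulnerable.
VULNERABLE = {("vulnlib", "parse")}

def vulnerable_calls(source: str) -> set[tuple[str, str]]:
    """Return the (package, function) pairs from VULNERABLE that this file calls.

    Only handles the simple pkg.func(...) call shape; real engines resolve
    imports, aliases, and transitive call graphs.
    """
    hits: set[tuple[str, str]] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if isinstance(node.func.value, ast.Name):
                pair = (node.func.value.id, node.func.attr)
                if pair in VULNERABLE:
                    hits.add(pair)
    return hits

def scan_repo(root: str) -> dict[str, set[tuple[str, str]]]:
    """Flag the files where a vulnerable open source function is actually invoked."""
    results = {}
    for path in Path(root).rglob("*.py"):
        found = vulnerable_calls(path.read_text(errors="ignore"))
        if found:
            results[str(path)] = found
    return results

if __name__ == "__main__":
    sample = "import vulnlib\n\ndata = vulnlib.parse('untrusted input')\n"
    print(vulnerable_calls(sample))  # {('vulnlib', 'parse')}: prioritize this one
```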

John Richards:
Being able to identify that kind of information: are you starting to look at using AI to make these assessments, or is this a "hey, we run through the whole code chain and just try to see where it's used"? What does that look like?

Ori Bendet:
AI is one of... Not one. I believe it's the biggest disruptor that software development has had since... it really depends, but at least in the last 20 years, okay? Now, it changes almost everything that we do. And you mentioned a very important point, because one of the things that AI is doing, and has already done, okay, is that the amount of code an average developer is going to generate is going to increase exponentially. I've seen reports where anything between 30 to 60% of developers state that they are more efficient, which is great, because everyone wants to increase development velocity. However, as you also said a couple of minutes ago, security teams are already understaffed.
So we need to think about how we can close this gap that is going to grow, whether it's by providing the right tools for developers to self-remediate and self-onboard, and also making sure they have the tools to scan that code. And one of the things that Checkmarx has recently announced is how we can help scan code generated by AI. We started with a GPT plugin that helps scan GPT-generated code [inaudible 00:14:14], we announced the ability to scan GitHub Copilot-generated code. And the way that we look at it is that with AI, you need to have layers. What does that mean? You need to protect it first at the source, which is the GenAI solution, whether it's GitHub Copilot, GPT, or any other. There are many, many, and many more being added every week, which is great.
And then you need to add another layer, which is probably what you have today in your DevSecOps, whether you're doing it on a branch or a pull request, whatever policies you have. And the one thing that I recommend to every customer I talk to is: whatever you have in place, don't drop it. Any DevSecOps processes that you have, please keep them. Because what the machine is doing is actually giving us false hope of security. And if you read the very famous Stanford research on the security of code generated by GenAI solutions, it's actually the opposite. It gives us false hope, because as humans, we believe that machine-generated code is more secure. It's kind of how we think, and how we blindly trust everything the machine is telling us. And on the other side, it's known, and that Stanford research is proving, that it's by definition less secure.
So we believe it's more secure, but it is less secure, and this is what AI is causing and will continue to cause. And the last aspect on AI is the fact that it also brings new types of attacks, and AI hallucination is one of them. Data poisoning, prompt injection. I'm sure that everyone has heard and seen the news, so we also need to protect against those.
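The layered approach can be pictured as two gates running the same kinds of checks: one where the AI suggestion is generated, and the pipeline gate you already had. Below is a toy sketch; the regex "scanner" is deliberately naive and stands in for whatever real SAST tooling you use, and the function names are invented for illustration.

```python
import re

# Toy "scanner" used by both layers; a real setup would call actual SAST tooling here.
RISKY_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "subprocess with shell=True": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
}

def scan_code(code: str) -> list[str]:
    return [name for name, pattern in RISKY_PATTERNS.items() if pattern.search(code)]

def accept_ai_suggestion(snippet: str) -> bool:
    """Layer 1: check the suggestion where it is generated, before it lands in the repo."""
    findings = scan_code(snippet)
    if findings:
        print("Rejecting AI suggestion:", findings)
        return False
    return True  # a human should still review it either way

def gate_pull_request(diff_text: str) -> bool:
    """Layer 2: the DevSecOps gate you already had. Keep it; don't drop it for AI code."""
    return not scan_code(diff_text)

if __name__ == "__main__":
    suggestion = "result = eval(user_input)  # AI-suggested shortcut"
    print(accept_ai_suggestion(suggestion))  # False: caught at the source layer
```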

John Richards:
Yeah. Now, anybody who hasn't seen that report, we'll make sure to link it in the show notes if you want to check it out as well. Something you mentioned there, Ori, reminded me: you're talking about how you can't trust just one layer of security here. And what I found fascinating is that as this AI adoption begins to happen, it's brand new and teams are scrambling. How do we handle this? How do we use it to get benefits, and how do we secure it, of course?
But it's interesting to see a lot of tried and true practices for security being used on AI effectively. So you're talking about how you can't stop it at just one area. Well, this is defense in depth kind of coming back and saying, hey, we've been using this in other areas, but we need to do it with AI too. We can't trust just one area to stop this. We need to be layering on different gates to check and see, hey, is it still secure at this step of the process? I've been hearing a lot of folks talk about identity and ownership being really critical around this: who do you trace this up to, to say, I own this, I looked at it, I verified this work? Do you see any other core cybersecurity principles that people should be thinking about as they adopt new tech? Like, hey, how do we bring back some old best practices that still apply in this kind of new space?

Ori Bendet:
Sorry for the cliche, but AI is a great use case of "with great power comes great responsibility," for the Spider-Man [inaudible 00:17:30] that are listening. But it is, because it's a great piece of technology, and the adoption is breaking any record that we had over the last, I don't know, ever, okay, probably from OpenAI to any other. And it's kind of an arms race right now between the big tech companies, from Google to AWS, to Amazon, to OpenAI, to Microsoft, everyone. And it's great, because it means that the solutions will continue to improve. But I think that we as security professionals should look at it just like any other piece of technology. And it's a great piece of technology, okay, don't get me wrong. We need to first of all understand that there is no one size fits all, okay? You can't force your developers to work with GPT, just like you can't force, I don't know, your HR personnel to work with Copilot, okay? And the different variations of Copilot.
So you need to assess, and you need to find the best AI solution, or GenAI solution, for every use case. And I'm not talking about embedding it inside your product, which is another discussion, because that also unravels new types of risks, including [inaudible 00:18:47] LLMs and other things that we are just now beginning to see. And once you identify the use case and the tool that you want to use, then you need to identify and understand the risks, okay? Unlike other technologies that were mostly creating risk for the technology side of the house, engineering, product management, QA and so on, GenAI creates a risk for everyone in the organization, which means that it can tie all the way up to the CEO. Because if my finance team has uploaded an Excel file that has sensitive customer information, or HR has uploaded sensitive employee information, and it got leaked...
That's a risk at the CEO level. So this is another thing. So assess the risk, and then take any defensive or protective measures that you have, whether it's the tools that are starting to appear; the executive order from the White House is one of the nice, let's say, starting points to look at. Also define your internal policy. And one thing that I can tell you is: don't try to block it. Because by trying to block it, you are closing the front door, but people will get in through the windows, right? That's probably the worst thing that any organization can do, because people want to use this technology; it'll make them better at doing their work. And this is something that we all need to do. So understand the use case, assess the risk, and define the policies and tools that you want to have on top of that.

John Richards:
What's your thought on the current debate, almost more at the national level, where folks are trying to decide: do we need to start adding restrictions or policies around AI right now? Europe is being a faster adopter of the idea of "let's have some regulation, this can get out of control very quickly." There's a little bit more of a "let's see what happens and go" over in the United States. And I've seen discussion around this topic; it's back to that risk-reward piece. What are your thoughts on where we should be leaning? Do you think most folks are leaning a little too far toward "let's trust this and just use it," and they need a little more regulation or to slow that pace down? Or should we just run with it, deal with those problems as they come up, and say, hey, we've got to first see how it gets used to be able to properly start adding restrictions around it?

Ori Bendet:
The best answer I can give is: it depends, right? It depends on the industry, it depends on the application. I've heard of insurance companies already embedding full AI bots that have learned everything, up to LLMs being fully embedded into solutions, all the way to companies that have decided to take it a bit slower. What always happens is that the early adopters make the mistakes for the entire industry, and then everyone can come back with the set of lessons learned on what to do and, most importantly, what not to do. Unfortunately, it means that we are now in a very risky place. And this is why I go back to my previous point of: let's try to take it one step, or one use case, at a time. I think that we are still unraveling it; it's very early in the technology cycle, if you want. Just like we are sitting here and we have security conferences, hackers have their own, and they will always be one step ahead of us.
So we need to kind of think together about how to do this, but I would argue that it makes no sense to wait for regulations or policies or restrictions from central bodies or central organizations. Decide what's best for you. You as an organization have your own business demands, and based on that you can define: okay, do I want to take a bigger leap, take a larger risk? And then decide how you can move forward. At the end of the day, I think that the business need for speed today will almost always win. So taking that into account, you can decide how you want to proceed.

John Richards:
All right, so let's bring AI back to Checkmarx and where you are with this. You talked about scanning AI-generated code, and you also mentioned you've got some modules here. How are you using AI within Checkmarx to help with AppSec or help developers in their process?

Ori Bendet:
At Checkmarx, we look at AI as threefold. First of all, the fact that the developer workflow is now changing, which means that if in the past it was mainly manual coding and some copy-paste from Stack Overflow or other resources, and you can argue how much of your code was copy-pasted, but let's leave that aside, this is changing now. You have copilots that are generating code. You have GPT that developers are using. So we believe that you need to protect right at the source, where the code is being generated. The second is the new types of AI attacks. The first one, and one of the most elegant that I've seen, is around AI hallucination. There is research from one of the Checkmarx partners, called Vulcan Security, and they were able to prove that if you ask GPT for recommendations of open source packages that you should use...
At some point, it'll hallucinate and suggest nonexistent open source packages, which of course hackers can spot, and then create those open source packages with malicious code in them. Which means that by using GPT, in this case, you are opening the door to a supply chain attack and letting malicious code into your development environment. So this is one of them. There is code leakage and prompt injection and data poisoning and many, many others. So this is the second aspect that we look at. And the third one, where the majority of the industry started, is how we can embed AI solutions inside the current portfolio so we can help our customers accelerate. The best example that we have in Checkmarx is the ability to have automatic remediation for vulnerabilities. We take the snippets of code, send them to an LLM, and it can actually give back the full fix for the vulnerability, with very high confidence.
The only thing that at this point I like to mention to our customers is that it's very important to keep the human in the loop, because, first of all, the technology is not perfect, and I hope... I believe it'll get there in a couple of years, but right now it's not there. So I wouldn't completely trust the output, no matter how good the model is. So it's always important to keep the human in the loop: the developer can take the suggestion, go over it, review it, and only then merge it back. Although I have seen customers go to fully automated remediation, and this is kind of the holy grail. So at some point, the AI will find the issues, the AI will fix the issues, and we as developers will be able to focus more on the business logic of the business and less on the rest of the [inaudible 00:26:41]. Hopefully we'll get there one day, but we'll need to see.
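One simple defense against the hallucinated-package attack Ori describes is refusing to trust a suggested dependency until you have confirmed it exists and vetted it. Here is a minimal sketch, using Python and PyPI purely as an example ecosystem; the same idea applies to npm, Maven, and others, and existence alone does not prove a package is safe.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Check the public PyPI JSON API; a name that was never published returns 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def review_suggested_packages(names: list[str]) -> None:
    """Flag AI-suggested dependencies before anyone installs them."""
    for name in names:
        if not exists_on_pypi(name):
            print(f"'{name}': not on PyPI. Possible hallucination; do not install or publish it blindly.")
        else:
            # Existing is not the same as safe: attackers may already have registered
            # the hallucinated name, so still vet the maintainer and the source.
            print(f"'{name}': exists. Check the maintainer, release history, and source before adding it.")

if __name__ == "__main__":
    # The second name is invented here to stand in for a hallucinated suggestion.
    review_suggested_packages(["requests", "totally-made-up-http-helper"])
```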

John Richards:
Yeah, and I think that last category is so important with this idea of shifting left. So many developers do want to be focused... You've got the security folks, and they are deep in this, they know this stuff, but so many developers are thinking, I've got a deadline to get my app built by X date, or my fourth revision pushed out to production today; I'm trying to churn out these tickets as fast as I can. And it's easy to start to forget about some of that. So where we can assist them with, hey, here's a quick remediation, or here's a faster way to scan this, or here's a recommendation... I loved your example of telling folks, hey, you're including these open source libraries, but maybe this one has a vulnerability. Did you know about that? And are you actually using it? Because that's less weight on them, less they need to be processing.
Oh, do I have to analyze my whole chain? No, I've got this assistance, which is why they're using AI in the first place: hey, can I offload this? A lot of folks I talk to who are using Copilot are not using it to write massive swaths of code, though there are folks doing that. They're like, it's so much faster to do autocomplete. Our team demoed it and now nobody's willing to give it back. We're like, we're going to keep this. It speeds us up.

Ori Bendet:
I think one of the things, and it's a nice loop that we're closing back to the first point we made: the most important thing that you can do for your development team from a security perspective is make sure that whatever you give them to remediate is what matters most. And this is where context is critical. And this is why at Checkmarx we're investing a lot in code to cloud, which means that everything we do is being fed, or being driven, by context that we get from the cloud.
It can mean the containers that are running. It can mean the application, the repos that we see are being called in production. And by that, it means that all the noise, all the vulnerabilities that we discussed in the beginning, won't simply go away, but a lot of it will be reduced, because we make sure that if this SQL injection or this open source library is getting to development, it's because we already checked, and it's being run on this container image, in this application, and it's calling this service. So whatever we do makes the most out of the remediation effort. And this is also what builds the trust between security and developers. Developers know that once a vulnerability gets into the system, it probably means something to the business and it has a criticality, so: I'd better fix it.
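As a rough illustration of that cloud-context filtering (a hypothetical sketch, not how Checkmarx implements it), the core move is an intersection between what the scanners found and what is actually running and being called:

```python
from dataclasses import dataclass

@dataclass
class CodeFinding:
    title: str
    image: str     # container image the vulnerable code ships in
    service: str   # service that would exercise the code path

def filter_by_runtime_context(findings, deployed_images, called_services):
    """Surface only findings whose image is deployed and whose service is actually called.

    In a real code-to-cloud setup the two sets would come from cloud inventory
    and runtime traffic data; here they are plain sets for illustration.
    """
    return [
        f for f in findings
        if f.image in deployed_images and f.service in called_services
    ]

if __name__ == "__main__":
    findings = [
        CodeFinding("SQL injection", image="shop-api:1.4", service="checkout"),
        CodeFinding("Vulnerable library", image="legacy-batch:0.9", service="nightly-report"),
    ]
    deployed = {"shop-api:1.4"}           # what is actually running
    called = {"checkout", "payments"}     # what production traffic actually hits
    for f in filter_by_runtime_context(findings, deployed, called):
        print("Send to developers:", f.title)
```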

John Richards:
I love anything that can help build trust between those teams, because there can sometimes be an antagonistic relationship. So not only is this making life easier, it's building that trust; that's a win-win for everybody. I love it. Ori, I really appreciate you coming on here and giving your time today. I found it so enlightening. Thanks for running through this and sharing your experience. Before we let you go, we would love to hear anything you want to promote, or where folks can find you out there.

Ori Bendet:
So first of all, thank you. It was a great conversation. I really enjoyed it. The last thing I can say is that we just launched our new website, so you can check it out, checkmarx.com. Completely rebranded, redesigned. Happy to get the feedback. Happy to connect on Twitter, on LinkedIn, and thank you very much for having me, John.

John Richards:
Awesome. Well thank you so much, Ori. I'll make sure to have the website and your LinkedIn in the show notes for anybody that does want to follow up and connect with you or check out the new site and the great service that Checkmarx is adding. Thank you so much, Ori, have a great day and we'll talk to you later.

Ori Bendet:
Thank you very much.

John Richards:
This podcast is made possible by Paladin Cloud, an AI-powered prioritization engine for cloud security. DevOps and security teams often struggle under the massive amount of notifications they receive. Reduce alert fatigue with Paladin Cloud: using generative AI, the model risk-scores and correlates findings across your existing tools, empowering teams to identify, prioritize, and remediate the most important security risks. If you'd like to know more, visit paladincloud.io. Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of True Story FM. Audio engineering by Annie Nelson. Music by [inaudible 00:31:26]. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Take a moment and leave a like and a review; it helps us get the word out. We'll be back August 14th, right here on Cyber Sentries.