Cyber Sentries: AI Insight to Cloud Security

AI's Evolution in Security Operations: From Support to Collaboration
Doron Davidson, Managing Director of Security Operations at Cyberproof, joins John Richards to explore how AI is transforming security operations and pushing SOC maturity models into new territory. With extensive experience from telecom security to founding SecBI, Doron brings unique insights into the evolution of managed security services.
Rethinking Security Maturity Models
Doron outlines CyberProof's innovative approach to service maturity, moving beyond traditional frameworks to incorporate AI capabilities. He describes how they've developed a 1-5 scale measuring people, processes, and technology, with quantifiable 20% year-over-year improvement targets.
Questions We Answer:
  • How are security operations teams evolving with AI integration?
  • What does SOC maturity look like in an AI-enabled world?
  • How can organizations measure and verify AI's security impact?
  • What role will agentic AI play in future security operations?
Key Takeaways:
  • AI is enabling better collaboration between security services
  • Automated investigation has significantly reduced L1 analyst workload
  • Well-defined processes and KPIs are crucial for successful AI implementation
  • New maturity levels (6-8) are emerging focused on agentic AI capabilities
The Future of Security Operations
The conversation explores how CyberProof is moving beyond traditional AI integration toward agentic models where multiple AI agents collaborate across security functions. This shift enables enhanced threat intelligence sharing, automated investigations, and improved visualization of security metrics for stakeholders.
Practical Applications and Impact
From automating email attack investigations to improving cross-team collaboration, Doron shares concrete examples of how AI is reducing human error while expanding security capabilities. He emphasizes the importance of maintaining human oversight while leveraging AI's strengths.
Resources
  • (00:04) - Welcome to Cyber Sentries
  • (00:33) - Today’s Guest: Doron Davidson
  • (01:32) - Doron’s Background
  • (03:27) - Latest Challenges
  • (05:57) - Honesty Mindset
  • (09:16) - Current AI Strategy
  • (12:23) - Focus Areas
  • (19:43) - Shifting Models
  • (25:04) - What He’s Excited About
  • (26:40) - Maturity Model
  • (28:49) - Wrap Up

Creators and Guests

Host
John Richards II
Head of Developer Relations @ Paladin Cloud The avatar of non sequiturs. Passions: WordPress 🧑‍💻, cats 🐈‍⬛, food 🍱, boardgames ♟, a Jewish rabbi ✝️.

What is Cyber Sentries: AI Insight to Cloud Security?

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

John Richards:
Welcome to Cyber Sentries from CyberProof on TruStory FM. I'm your host, John Richards. Here we explore the transformative potential of AI for cloud security. This episode is brought to you by CyberProof, a leading managed security services provider. Paladin Cloud is now part of CyberProof's expanded cloud security capabilities. Learn more at cyberproof.com.
On this episode, I'm joined by Doron Davidson, managing director of Security Operations at CyberProof. We explore how security operations teams are evolving, especially through the lens of AI. Doron shares insights on how CyberProof is transitioning from traditional AI integration towards more agentic models and what that shift could mean for the future of SOC maturity frameworks, as the industry rethinks what mature even looks like. We talk about pushing past the classic level five models and into new territory where intelligent agents not only support but collaborate.
I'm super excited to be talking to Doron Davidson today, managing director of security operations at CyberProof. Doron, how are you?

Doron Davidson:
Very well, John. Thank you for having me.

John Richards:
Well, thanks for being on here. I'm excited to talk to you. I know you guys are so deep into security and doing a lot of cutting edge stuff. But before we dive into that, I'd really love to hear a little bit about how you got started, what led you to get interested into security, and how did you end up in the role that you're in now?

Doron Davidson:
Wow, you're taking me back like 20 years. I guess I started my security background back when I was still in telecommunication. We were helping set up the right altitude and placement of the different cellular network receptors. And back then there were already some interesting attacks, and I thought, "Oh, that's an interesting area." And therefore in my next role, I moved into RSA Security, and at RSA Security I started getting deeper and deeper into financial institution threats and anti-money laundering and things of that sort.
Over time, I got more and more into cybersecurity until at some point I actually started my own company. We did automated threat hunting at SecBI. And when SecBI was finally acquired by LogPoint, a SIEM company, I moved on to what I used to do before at RSA, managing professional services and customer success. So I moved to CyberProof to help rebuild and reshape how an MSSP is supposed to provide services to very large enterprises. I bought into this idea and joined CyberProof two years ago, and it has been an amazing journey ever since.

John Richards:
It's amazing how many security stories start in telecom. I read about both on the exploitation side and even the user journeys of folks who got into this because it was really the kind of original heartbeat where a lot of this stuff started. So I love hearing that. It's a whole different scale though now that you're working at trying to work on this. What have been the latest challenges that you're keen to tackle?

Doron Davidson:
When I just joined, I was kind of looking at how I could improve the services and mainly the value that we provide our customers. So I started by implementing the services maturity model. We built a dedicated services maturity model that would fit our type of services. For those who are not familiar with CyberProof, we are an MSSP, a managed security service provider. We provide everything from a managed SOC through MDR, managed detection and response, and then all the different services around it that customers require in order to have the full scale of protection. It can be anything from penetration testing to vulnerability management down to tailored threat intelligence, threat hunting, and so on.
So we provide the full breadth of the services, but really when you start digging into each and every one of those services, there's a lot you can always improve. And the team was asking me, "Okay, but improve up to where? What's the end goal?" So we put this service maturity model in place, and I asked every one of my managers and team leads where they see themselves on a scale from one to five, from a maturity perspective, on all of the relevant scales, which are people, processes, and technology.
Level one covers activities that are uncoordinated within the organization, up to level four, where teams work closely with the sales team and have clear roles and responsibilities between the different teams and so on. Level five is when I have a well-defined service with a well-defined service lead who even has a successor, where we have a culture of a constant 20% improvement year over year. And improving can be anything. It can be providing 20% more value to customers, or being able to provide the same service but 20% cheaper or 20% faster, something that we can actually put a KPI towards, that we can measure, and we can see where we were January 1st and where we are December 31st. If we've improved by 20%, Mr. Service Lead, you've done your work well.
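The 20% year-over-year target Doron describes is concrete enough to sketch in code. This is a minimal illustration of how such a KPI check could work; the KPI names, values, and simple averaging scheme are invented for the example, not CyberProof's actual measurements.

```python
from dataclasses import dataclass

@dataclass
class KpiSnapshot:
    name: str
    jan_value: float   # measured January 1st
    dec_value: float   # measured December 31st
    higher_is_better: bool = True

def yoy_improvement(k: KpiSnapshot) -> float:
    """Fractional year-over-year improvement for one KPI."""
    if k.higher_is_better:
        return (k.dec_value - k.jan_value) / k.jan_value
    # For cost- or latency-style KPIs, a decrease is an improvement.
    return (k.jan_value - k.dec_value) / k.jan_value

def met_target(kpis, target=0.20) -> bool:
    """The service lead hits the target if average improvement >= 20%."""
    return sum(yoy_improvement(k) for k in kpis) / len(kpis) >= target

# Hypothetical measurements for one service over a year.
kpis = [
    KpiSnapshot("customer_satisfaction", 3.5, 4.3),
    KpiSnapshot("mean_response_minutes", 30.0, 24.0, higher_is_better=False),
]
print(met_target(kpis))  # satisfaction +23%, response time -20% -> True
```

The point of the sketch is the shape of the measurement, not the numbers: each KPI has a direction, a start-of-year baseline, and an end-of-year reading, so "20% better" becomes something a service lead can actually be held to.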

John Richards:
Yeah. How do you encourage folks to be honest in this? In security in general, and in any kind of self-assessment like this, it can be a challenge to be fair, and it can be challenging when you find out stuff you don't want to know. So how do you approach getting folks in that mindset of being like, "No, actually we're only a two, we got a lot of work to do"?

Doron Davidson:
Very interesting question. First of all, I trust my team. But it's not only about trusting the team itself, it's also about defining KPIs that are not only qualitative, they are quantitative as well. Putting in place KPIs that we can measure, whether it's the response time with the customer, whether it's the customer's satisfaction level. So yeah, of course it will change from one customer to another, but when you look at the breadth of all of our customers and build averages and so on, eventually you do get to some kind of indicative number.
It doesn't matter if you start from a 2.4 or a 3.7; I want to see constant improvement over time. So you can start from 2.5, that's fine, as long as at the end of the year you're around 3 if not 3.5. And if you are only a 3, just tell me, what do we need? What kind of investment do we need to put in place to grow to a 3.5 or a 4? It might be just a new technology. We're now in the era of AI. We are going to talk a lot about AI in this session, but there are tools out there such that if we look into implementing those tools, capabilities, methodologies, and so on, we'll automatically improve ourselves. So let's get to that point in time where we can really grow. And that's on the maturity level.
When we do get to 5 on the technology piece, for example, number 5 means that we are constantly reviewing new technologies. So I do expect every service, every service lead, to review at least three new technologies every year to see whether they improve what we have to date. Now, maybe it's the existing technology that is just improving, and then that's great. But if we don't keep testing and keep knowing what's out there, we'll just stay behind.
And by the way, a lot of our contracts with customers have an innovation clause. They expect us to bring innovation to them, for them to buy and use and for us to be able to use, because this is where they expect to see innovation at scale. They cannot test all 500 new cybersecurity products every year. We can test those that seem most promising. We can even test with specific customer data, with the customer's and CISO's permission, and there is a big process around that as well. But then once we prove it on one customer, it's easy to also show it to other customers. So we have that process in place as well as part of the maturity process.

John Richards:
I like that. And getting at real data, like you said, versus conjecture, and then also focusing on trust as a way to move into this. I feel like both of those are also important elements as you start to look at AI. I'm curious, how has that fit into the current strategy? And are there key things you're looking to monitor to see how AI is performing and what trust levels are needed as you bring that into a product or a service?

Doron Davidson:
I think that on our journey to maturity, before we got to AI and implementing AI and agentic and so on, one of the missing pieces, and it also connects to the KPIs and measurement, was collaboration. We were looking to build better collaboration around the different services. For example, we have a CTI service, and as part of our CTI service, they research IOCs for specific attack vectors, build threat profiles, and so on. And those IOCs are then shared amongst many customers. Now, some customers say, "Well, we don't really want to see data that is not relevant to my organization, not relevant to my segment or my technologies." If I'm a pure Mac shop, don't send me IOCs that relate to Microsoft; it has nothing to do with me. To some extent that may be true, maybe not, but this is their expectation, and you can't fight with customers' expectations.
So we started building collaborations between all of these teams, and everything eventually built up around threat intelligence. So CTI shares data with our threat hunters, and CTI shares data with the SOC, obviously. And mainly they were sharing data with our use case managers, and the use case managers were looking at this data that comes from CTI. We're mapping our customers' existing detections onto the MITRE framework in order to find where we might have gaps based on threat intelligence of specific threat actors, what those threat actors are using as tactics and techniques, and whether the customer is already protected against those techniques.
Kind of putting all of that together built a really well-defined solution for our customers with collaboration around all of our services together. So CTI sharing data with use case management, use case management building all of those use cases in rules and queries and putting that into a catalog that our threat hunters can use. Either that, or the IOAs that, again, come from CTI. If something is detected, it is shared automatically with the FIR. That also has visibility to all of that data and the red [inaudible 00:12:03] and the managed EDR and managed platforms and automation and so on. There were so many different elements within CyberProof that it just became a very big problem to keep on solving time after time. And then came the era of AI and it made it much easier.

John Richards:
I like hearing that. What was the first area you brought this into, where you said, "Let's try and see if this can make our life easier"?

Doron Davidson:
Yes, so CTI was definitely one of the main focus areas; one, because it became very central to all of our services. Even though our biggest service is our MDR, and everything there has very well-defined processes, so it would maybe be easy to build AI there, we were looking for specific problems or specific tasks we could actually solve. And this is where we started looking at threat intelligence: one, in order to gather a lot of information from many different sources. We are using at least five different products; each of them has over 40 different sources. And we need to collect all of this information. So one, the collection became significantly simpler. But what's more important is not only the collection, it's also finding where they have an overlap of information, and then mapping that to specific customers. And for specific customers, as I alluded to before, it's what kind of technologies our customers have, what kind of processes, and what segments these customers are in. So that's kind of where we started with AI capabilities.
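The collect, find-overlap, and map-to-customer pipeline described above can be sketched roughly like this. The feed names, indicators, and technology tags are all hypothetical; real CTI platforms use far richer indicator models, but the two core steps (corroboration across sources, relevance filtering per customer) look the same.

```python
from collections import Counter

# Hypothetical feed output: each feed yields (ioc, affected_tech) pairs.
feeds = {
    "feed_a": {("evil.example.com", "windows"), ("10.0.0.5", "macos")},
    "feed_b": {("evil.example.com", "windows"), ("bad.example.net", "linux")},
    "feed_c": {("evil.example.com", "windows")},
}

# Overlap: IOCs reported by more than one source carry higher confidence.
counts = Counter(ioc for pairs in feeds.values() for ioc, _ in pairs)
corroborated = {ioc for ioc, n in counts.items() if n > 1}

# Map IOCs to customers by the technologies they actually run,
# so a pure Mac shop never receives Windows-only indicators.
customers = {"acme": {"windows", "linux"}, "mac_shop": {"macos"}}

def relevant_iocs(customer_tech):
    """All IOCs whose affected technology the customer actually runs."""
    return sorted({ioc for pairs in feeds.values()
                   for ioc, tech in pairs if tech in customer_tech})

print(corroborated)                          # {'evil.example.com'}
print(relevant_iocs(customers["mac_shop"]))  # ['10.0.0.5']
```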
The other area was really two specific areas; one, summarization of incidents. Today we have a couple of hundred L1 analysts that do the initial triaging of incidents. And there are a lot of different things and steps that they need to go through in order to investigate or initially review a certain incident before they can determine whether this is a clear false positive, or this does look suspicious and we need to pass it to an L2.
Eventually the L2 that gets that incident needs a summary of all the investigation, which could be anywhere between five minutes and an hour of a person's work. We used to have our L1s summarizing everything that they've done and putting in all the information and so on. We started utilizing automation in order to add the data that is required, and AI to summarize everything that the analyst has done as part of his investigation process, including all the data that supports his finding and his decision. That was the initial part, and we've done that in collaboration both with Gemini as well as Copilot, so both with Google and Microsoft.
The initial testing, and that was over a year ago, was a tough start to say the least, but the improvement was so fast. Within weeks you could see how the models that they were using, both in Gemini and Copilot, were improving. And today, this summary has a much, much higher precision than an analyst would have, with only the relevant data. There are still mistakes from time to time, dare I say even hallucinations, but in significantly lower numbers. What analysts are doing today is actually reviewing that summary, making sure that it actually fits what they've done, and if so, then they escalate the incident to an L2. That was the second area.
The third was tools that were built, again, by Gemini that can help you build automation. So if you have a certain detection that you would now like to build an automated process for, for investigation, for initial triaging, for enrichment, you can actually utilize AI to build that kind of automation process, all the way through to the actual building blocks of the automation itself that you can afterwards run in the system. Again, we still test everything. There's still some area of, I don't want to say distrust, because AI listens to me and I don't want it to shut down my computer. But we verify everything that we're doing today. I think it's like moving to the cloud 15 years ago, where we were hesitant. Today it's part of the transformation projects that we're doing. Every customer wants to have everything in the cloud without asking twice. And they know that there are other solutions out there that will have the right detections, and that our company will be able to do something about it.
Today it's the same with AI and agents. We'll talk in a second about agentic AI. But today, customers are hesitant to allow an AI to make a decision. But to give a recommendation that will be verified by a cyber expert? Of course. No problem with that. So that's where we started.
And then in the past six months or so, just like all other advanced companies, we have started playing with agentic AI and seeing those capabilities. And with both Google and Microsoft also pushing some of the functionality that they have today into agentic, it just means that we have to have our own agents that will be able to communicate with and consume the data and the capabilities that those hyperscalers are offering today. Because we support customers on every technology that they have, we believe in meeting customers where they are. If you're running Microsoft, we'll help you on Microsoft. If you're running Google, we'll help you on Google. Any other technology, we'll help you on that as well. It means that we have to have very robust AI capabilities that will be able to do the translation.
So instead of having a use case management team where today I have a sub-team that is responsible for QRadar, a sub-team that is responsible for Sentinel, for SecOps, et cetera, now what we are building is an agent where any of these teams can build a detection, and then the agent will take that and be able to translate it to every possible query language across the different SIEM technologies. And the same goes for EDR technologies. The same goes for firewalls, et cetera. Which means that with the same size of team, today I can provide the same level of service, maybe even a better one, definitely faster, to all of my customers. So instead of having knowledge just about a specific technology, now everyone can contribute to all of our customers, and that's a huge change.
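The write-once, translate-everywhere idea described above can be sketched as follows. The detection schema and the target query dialects here are heavily simplified placeholders, not CyberProof's actual agent or any vendor's real query API; the point is only the shape of the fan-out from one canonical detection.

```python
# One canonical detection, written once by any sub-team.
DETECTION = {
    "name": "suspicious_powershell",
    "field": "process_name",
    "value": "powershell.exe",
}

def to_kql(d):    # Microsoft Sentinel (KQL-flavored, simplified)
    return f'SecurityEvent | where {d["field"]} == "{d["value"]}"'

def to_spl(d):    # Splunk (SPL-flavored, simplified)
    return f'search index=main {d["field"]}="{d["value"]}"'

def to_yaral(d):  # Google SecOps (YARA-L-flavored, heavily simplified)
    return f'$e.{d["field"]} = "{d["value"]}"'

TRANSLATORS = {"sentinel": to_kql, "splunk": to_spl, "secops": to_yaral}

def translate_all(detection):
    """Fan one canonical detection out to every supported SIEM dialect."""
    return {siem: fn(detection) for siem, fn in TRANSLATORS.items()}

for siem, query in translate_all(DETECTION).items():
    print(siem, "->", query)
```

The design gain Doron points at falls out of this structure: adding support for a new SIEM means adding one translator, not re-staffing a dedicated sub-team, and every existing detection immediately reaches the new platform.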

John Richards:
What are the important things you're keeping in mind as you shift to agentic? Is it more of the same? Because you're talking about this expanded capability here. Do you think from a different perspective, "Okay, I'm solving a different kind of problem as I start to shift over to this agentic model"?

Doron Davidson:
At the stage where we are, and I'm not sure if it's we as CyberProof or we as humanity, we are still trying to agentize our current tasks. I know that part of what my CTI team is doing is building a threat profile related to a specific customer. So obviously we now have an agent that goes, gathers information, and builds a threat profile per customer. Not only that, it has already taken it to the next stage of what the use case management team used to do along with CTI, which is to map: take the MITRE mapping based on the rules that the customer currently has, find from the CTI threat profile what a specific customer is more prone to, who the specific threat actors are, and what their capabilities are around specific tactics and techniques. And then it already suggests where we might be missing a specific rule.
So yes, it has kind of already taken it to the next step, but it's not something that we didn't do before. These are all things around the existing MDR capabilities. But I do see now that we are thinking already of all of these agents working together with different user stories, whether that user story is around exposure management, or around risk management, or around threat profiling and so on. We're starting to think, "What are the capabilities that we couldn't do before, whether because we didn't have enough manpower, or because we just couldn't do it with the existing technology, and now we can?"
I do believe that we will have another call just eight weeks from now, and I'll be able to tell you, "Listen, we actually have two new services now that we did not think about before, and now we are able to serve our customers with an agent that does it automatically with very low touch." One of those things, just as an example: we're thinking all the time, "How can we convey the value of our service to a customer?" For example, I had a CISO a few days ago calling me, stating, "Hey, I'm not sure that I see enough value from the threat intelligence team." And I'm like, "Okay, let's see what you got," and so on. And then we found out that a lot of the things the team was doing for him, some of it already automated through an agent, went through automated discovery after mapping onto MITRE and so on. Those IOCs were shared automatically with our L1 team, into specific detections that were detected and automatically blocked, and he didn't even know that that's thanks to our CTI.

John Richards:
That's the security challenge, right? It's like, you do your job too well, and people are like, "Oh, are you doing anything? We're not seeing anything happen."

Doron Davidson:
Yeah, exactly. So now what I'm actually thinking is, beyond solving all of these issues, how can the agentic framework also portray this to the CISO in a much simpler way? So maybe some kind of visualization that will show him how the risk has shrunk thanks to specific work that was done by the CTI agent. And behind that CTI agent, obviously, there is the verification of the CTI experts that are setting it all up and so on. So from the CTI service as well as the use case management service and maybe some other services, that contributed to a reduction of risk by 3%. And maybe we took your MITRE mapping from 89% to 92% in the same breath. Maybe you don't see it, but when you suddenly see the numbers and you see the map becoming greener and so on, then...
So those kinds of things, an agent would be able, one, to recommend: I believe that this specific CISO's main interest, his main questions, are around A, B, and C. Let's visualize that for him. Because if you visualize everything, obviously you don't see anything. So we need to customize the visualization to a specific customer, things that you could not have done before. Now with agentic, the sky is the limit.
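The MITRE figures mentioned above (89% to 92%) suggest a simple underlying metric: the fraction of the techniques in a customer's threat profile that are covered by at least one active detection rule. This toy sketch shows how such a before/after number could be computed; the technique IDs are real ATT&CK identifiers, but the rule-to-technique mapping is invented for illustration.

```python
# Techniques the CTI threat profile flags as relevant for this customer.
relevant_techniques = {"T1059", "T1566", "T1078", "T1486"}

# Each detection rule is mapped to the ATT&CK techniques it covers.
rules_before = {"rule_1": {"T1059"}, "rule_2": {"T1566", "T1078"}}
rules_after = {**rules_before, "rule_3": {"T1486"}}  # gap closed by new rule

def coverage(rules, techniques):
    """Percent of relevant techniques covered by at least one rule."""
    covered = set().union(*rules.values()) & techniques
    return 100 * len(covered) / len(techniques)

before = coverage(rules_before, relevant_techniques)  # 75.0
after = coverage(rules_after, relevant_techniques)    # 100.0
print(f"MITRE coverage: {before:.0f}% -> {after:.0f}%")
```

A number like this is exactly what the visualization agent Doron imagines could surface: the map "becoming greener" is the covered set growing against a fixed, customer-specific denominator.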

John Richards:
That is so cool. I mean, you're getting at a problem that AI raises, which is it can be a bit of a black box and we don't know what's going on. So using other agents to be able to create visualization around that so that folks can follow the value it brings, track the real progress back to what you said earlier about the importance of having real numbers to be able to show that value. You're bringing that all together here. So I think it's a great way to show value from something that can feel very abstract.

Doron Davidson:
Yeah.

John Richards:
Anything out there in the AI space that you're most excited about tackling right now? Are there certain threats or things that you're seeing AI really good at being able to help handle?

Doron Davidson:
I'm not sure about specific threats because, again, AI is intrinsic to all of our investigations. I think I'm more around what kind of processes AI is helping me solve. One of the biggest successes that we've seen is around automated investigation of email attacks and phishing attacks, which has reduced workload significantly. And we keep on improving it, so it keeps on reducing more and more of our L1 efforts. Because suddenly, things that were kind of obvious you can clean out as false positives, and you can concentrate more on the true positives. So those kinds of attacks, I think, were easier to solve using AI. But from a process perspective, I think it helps the whole organization adhere much better to our MDR processes. It makes the collaboration of the different services much better. Sharing information became significantly faster and simpler. And it reduces a lot of human errors, as much as it hurts to say.

John Richards:
I make a lot of mistakes, so yes, I agree. I'm glad for fewer human errors, for sure. I guess to maybe tie this up here, we started talking a little bit about the maturity model. How have you seen AI fit into that? Is it something where you're like, "You really need to be a stage 4 or 5 before you start bringing this in so you understand your own processes well enough"? Or is it, "Bring it in as early as possible. It's just another tool that will help move you to maturity," assuming you're using it correctly?

Doron Davidson:
I do think that you need to have well-defined processes in order to build automation. In order to build AI, you don't have to be as mature, but the more mature you are, the better you'll be able to build the KPIs and verify that the output you are getting is the output that you expect, and what's an improvement, and how much better you are today than you were before. If you didn't have those kinds of KPIs, you wouldn't necessarily know whether an agent is doing better work tomorrow. So those kinds of things and measurements are critical. You do need to have those.
That said, I can also tell you that we are still building and rethinking new measurements that we need to take for agents and agentic systems, measurements that we did not take before. And specifically, if we're talking about the service maturity, as I told you before, I had it between 1 and 5. I am thinking that number 6 is going to be around agentic, but more around specific agents that can make specific tasks simpler and faster. Numbers 7 and 8 are really going to be when you have several agents already communicating over a communication layer, and that's where I do want to see the organization grow and mature.

John Richards:
It blows my mind that AI is expanding our capabilities such that there are now almost new realms of maturity possible as we collaborate in there. But I can totally see it. Doron, this has been so fascinating and informative. Thanks for walking us through both the maturity of services and also thinking about that from AI into the agentic space; it has been super helpful. Before I let you go, anything you want to shout out? How can folks learn more about what you all are doing? What's the best place to check out?

Doron Davidson:
First of all, everyone is more than welcome to check out cyberproof.com and see the different services and capabilities that we have today; maybe some of those can help them. But just as important, please do connect with me, especially if you have an idea of how agents, agentic AI, or even simple automation can help. I do believe that we, and again, we as humanity, not just CyberProof, are in a process of maturing ourselves. And any idea that can help us build better services, better cybersecurity protection for our customers, I'd love to hear. Building an MDR is not a task for one person or even one company. I think that we are agentizing our MSSP with the community. So do contribute. Do share your ideas. I'd love to hear them.

John Richards:
I love hearing you thinking at the scale of humanity. Thank you so much for being a guest here, Doron. I appreciate it. I hope you have a wonderful rest of your day.

Doron Davidson:
Thank you very much, John. You too.

John Richards:
This podcast is made possible by CyberProof, a leading managed security services provider, helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. Paladin Cloud is now part of CyberProof's portfolio of solutions, extending their capabilities in cloud security posture management and risk prioritization. From proactive threat hunting to managed detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com.
Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio engineering by Andy Nelson. Music by [inaudible 00:31:11]. You could find all the links in the show notes. We appreciate you downloading and listening to the show. Take a moment and leave a like and review. It helps us get the word out. We'll be back right here on Cyber Sentries.