Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.
John Richards:
Welcome to Cyber Sentries from Paladin Cloud on TruStory FM. I'm your host, John Richards. Here, we explore the transformative potential of AI for cloud security. Our sponsor, Paladin Cloud, is an AI-powered prioritization engine for cloud security. Check them out at paladincloud.io. Today, I'm joined by Daniel Deeney, a seasoned enterprise software CEO, founder, and venture capitalist, with a proven track record of building and scaling cybersecurity companies. He's currently the CEO and co-founder of Paladin Cloud.
On this episode, we explore how data fabric is transforming cybersecurity: unifying risk across tools, leveraging gen AI for threat intelligence, and redefining how teams prioritize and respond to threats. Welcome, Daniel Deeney. I know you as Dan. We've worked together in the past, so I'm so excited to get to talk to you here now. You're the co-founder and CEO of Paladin Cloud. Thank you so much for coming on the show today.
Daniel Deeney:
John, great to see you again, and thanks for having me. Super excited about the episode today.
John Richards:
Well, I'd love to kick off by asking you, how in the world did you decide to start Paladin Cloud? What was it that you saw in the market? How did you get to that spot to be a founder? Those are challenges that a lot of people face, and I want to know how others tackle that and get to where they're going.
Daniel Deeney:
Yeah. Well, look, I mean, my background is in both technology and venture capital. On the venture side, I was investing in cybersecurity and enterprise startups at an institutional fund for many years, sitting on boards and working with founders. I always saw this best-of-breed landscape of tools, and clouds, and the problems and challenges CISOs, security teams, and SecOps teams were having. Then, at my last startup in the secure remote access space, AetherPal, we saw that as well. VMware ended up buying our company in a successful exit in 2019.
Then, I decided to launch my second startup here, Paladin Cloud, really to tackle the problem in the cybersecurity landscape of having multiple tools across the stack: cloud security, vulnerability management, data security, application security. You've got these different domains and silos, and we had some thoughts around, well, how do you provide more of a unified, holistic view to help think through the risks, provide business context, and ultimately drive prioritization? AI plays a huge role in that, and that's what got us excited to launch the company three years ago.
John Richards:
There was a world where the challenge was not having enough information, so everybody started to create tooling. Now we're in a world where there's so much information. How do you handle that as it comes in? How does Paladin Cloud take an approach here to help folks that have all this information flowing in?
Daniel Deeney:
Yeah. Well, that's it. I mean, this tool proliferation over the last four to five years really started with the transformation from on-prem to multi-cloud environments, and then all the different tools that you need to layer on top to manage risk across those clouds. As you said, it's resulted in a massive explosion of data, and this is different than the SIEM and the SOC. We're not talking about real-time alerting. We're really talking about findings, policies, vulnerabilities, CVEs, that type of thing.
And so the vision was really to create a data fabric. Think about building third-party plugins to hundreds of tools, platforms, clouds, and enterprise systems to be able to bring in data. That starts with asset discovery and asset information, then policies, which are the inspectors, then vulnerabilities, findings, policy violations. All of those sorts of things come into the data fabric. That data fabric provides this foundational layer, if you will, the consolidation and correlation of data, on top of which we apply generative AI. I can talk more about how we're doing that.
But that, ultimately, helps customers solve problems around prioritization and risk, and to do that, you need business context. A lot of times today, you're stuck in the different silos of the different tools, and some of that work is manual. These teams are short-staffed, and they're busy. So it's like, "Okay. Well, how do you really synthesize what's the most important thing based on business context?" That's really the problem we're addressing.
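To make the data fabric concept concrete, here's a minimal sketch of a plugin-based ingestion layer in the spirit of what Deeney describes. The `Plugin` protocol, the `CSPMPlugin` adapter, and every field name are hypothetical, not Paladin Cloud's actual interfaces.

```python
# A minimal sketch: each third-party tool gets a small plugin that emits
# assets and findings in a common shape, which the fabric consolidates.
from typing import Iterable, Protocol

class Plugin(Protocol):
    def assets(self) -> Iterable[dict]: ...
    def findings(self) -> Iterable[dict]: ...

class CSPMPlugin:
    """Illustrative adapter for a cloud security posture tool."""
    def assets(self) -> Iterable[dict]:
        yield {"asset_id": "i-0abc123", "type": "vm", "cloud": "aws"}
    def findings(self) -> Iterable[dict]:
        yield {"asset_id": "i-0abc123", "policy_id": "port-22-open"}

def ingest(plugins: list[Plugin]) -> tuple[list[dict], list[dict]]:
    """Pull every plugin's output into the fabric's consolidated stores."""
    all_assets, all_findings = [], []
    for p in plugins:
        all_assets.extend(p.assets())
        all_findings.extend(p.findings())
    return all_assets, all_findings

assets, findings = ingest([CSPMPlugin()])
```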
John Richards:
Well, with this data fabric layer you're talking about, sometimes it feels like looking for a needle in a haystack. There's so much data there. How should people be thinking about this data fabric layer and what they should be focusing on? What I'm hearing is it's so large. How do you know where to spend your time and effort? Because you can't address every single alert and finding that comes through. It's just impractical.
Daniel Deeney:
Right. Well, exactly. Yeah. And so the data fabric we're exploring in cybersecurity is really focused more around ... Think about it as the assets or cloud assets and the tool findings, and being able to use common identifiers, like asset IDs, IP addresses, container images, those types of things, to really connect the dots across tools and findings. So, for example, a customer might have a CSPM with several findings. They might have a vulnerability scanner with a bunch of CVEs, and they might have an application security tool, like a DAST tool, scanning the code as it's running in the cloud.
So you might have three different tools, but each of these tools will report an asset ID, or a URL, or an IP address that we can use to kind of match and connect the dots. That's a way that the data fabric is functioning to provide value, because otherwise, customers are looking at three different dashboards. They'll look at a CSPM dashboard, a vulnerability management dashboard, and an application security dashboard in isolation. But by creating the data fabric and using common identifiers, you're piecing these things or chaining them together to be able to have a more unified view of exposures.
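A minimal sketch of that correlation idea: findings from three tools get chained on a shared asset ID instead of living on three separate dashboards. All tool names, field names, and values here are hypothetical.

```python
# Correlate findings from multiple tools on a common identifier.
from collections import defaultdict

cspm_findings = [{"asset_id": "i-0abc123", "issue": "port 22 open to public"}]
vuln_findings = [{"asset_id": "i-0abc123", "cve": "CVE-2024-12345"}]
dast_findings = [{"asset_id": "i-0abc123", "url": "https://app.example.com",
                  "issue": "reflected XSS"}]

exposures: dict[str, list[dict]] = defaultdict(list)
for source, tool_findings in [("cspm", cspm_findings),
                              ("vuln", vuln_findings),
                              ("dast", dast_findings)]:
    for f in tool_findings:
        # The shared identifier is what "chains" findings across tools.
        exposures[f["asset_id"]].append({"source": source, **f})

# One asset, three correlated findings: a unified exposure view.
print(exposures["i-0abc123"])
```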
Then, once you apply generative AI, and we can talk more about how we're doing that and what we're seeing in the market, you really have more of an informed view around potential threats or other risks that might be out there. So it's not all about just a single pane of glass. Yes, that's part of it, but it goes well beyond that. It's correlating, consolidating, and applying risk techniques and threat intel information to bring it all together, to ultimately help make better business decisions to prioritize risk and lower your risk profile. I mean, that's ultimately what it's about.
John Richards:
So what I'm hearing is if you just have the data, it's kind of like a data mess. The thing that makes this a data fabric is that you've woven together that data using these unique identifiers. So now things are linked together. There's a chain, if you will, between different devices and different IPs, URLs, all this different data that you're getting. So it's not just a single report in isolation. You now have context about where that's at, so you can understand, "Oh, this is really important," or, "This is something I don't have to worry about." Does that sound correct? The idea is that by linking it, it's not just a bunch of individual data points. It's that each piece gives context for another piece.
Daniel Deeney:
Right. That's exactly it. That's exactly it. Yeah, and an important element of that is this tagging capability, which is another part of the data fabric. Whether you're tagging all your applications and cloud assets by application ownership tags, asset owners, these different ways to tag, that provides a layer of intelligence to the data fabric. So think of the idea of bringing in, let's say, assets and findings from multiple tools across the stack, and you're correlating them using asset IDs or IPs. That's great, because that helps you prioritize asset-level risk, for example. Then, once you layer in the tags, the tags provide business context. So you now know, "Okay. Is it my business-critical, customer-facing application that's at risk, or is it some sandbox that a developer spun up that maybe I don't care so much about right now?" So you need to know the context around the application as it relates to the asset risk.
Then, the other interesting thing is, okay, well, what environment is it in? Is it a live production environment, or is it dev or test? That matters in terms of prioritization and remediation. Then, also, who are the asset owners? One of the challenges is, who owns this asset, and how do we drive that workflow remediation? How do we automate that process? So there are a number of different variables that go into the data fabric and how everything comes together to ultimately prioritize and then kick off the workflow to remediate.
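Here's a minimal sketch of how tag-derived business context might scale asset-level risk, in the spirit of this description. The tag names and weights are hypothetical, not Paladin Cloud's model.

```python
# Scale raw asset risk by tag-derived business context.
ASSET_TAGS = {
    "i-0abc123": {"application": "checkout", "environment": "production",
                  "owner": "payments-team", "criticality": "business-critical"},
    "i-0def456": {"application": "demo", "environment": "sandbox",
                  "owner": "dev-jane", "criticality": "low"},
}

ENV_WEIGHT = {"production": 1.0, "staging": 0.6, "sandbox": 0.2}
CRIT_WEIGHT = {"business-critical": 1.0, "standard": 0.6, "low": 0.3}

def contextual_risk(asset_id: str, base_risk: float) -> float:
    """Weight raw risk by environment and application criticality tags."""
    tags = ASSET_TAGS.get(asset_id, {})
    return (base_risk
            * ENV_WEIGHT.get(tags.get("environment", ""), 0.5)
            * CRIT_WEIGHT.get(tags.get("criticality", ""), 0.5))

print(contextual_risk("i-0abc123", 500))  # production + critical: 500.0
print(contextual_risk("i-0def456", 500))  # developer sandbox: 30.0
```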
John Richards:
This is so much data, and you've kind of teased at this already, that AI plays a critical role in trying to understand what's going on here. So can you share a little bit about how you're using AI to approach this, and how does that make people's lives easier? What are the spots where this comes in to address pain points?
Daniel Deeney:
Yeah. Great. I mean, one of the things that we noticed early on in building out the platform and talking to customers is that customers said, "Look, we're really interested in generative AI's capabilities to be able to, let's say, crowdsource threat intelligence information that's out there, or other things that gen AI can do. But we don't want any customer data running through any of these models, so how do we solve that problem?" So that was kind of one of the-
John Richards:
Yeah, because this data is very important. You can't have it exposed. I wasn't thinking about that, but there's a big challenge here: private data that you don't want to expose, while still needing to gain learnings from the AI.
Daniel Deeney:
Right. Yeah. That's it. Yeah. So the customers said, "Look, yeah. We would love to have generative AI in the product. We love the concept of the data fabric. We love where this is going. It could definitely help us better prioritize and streamline the workflow and remediation, but we need to make sure that the customer data is air-gapped. It can't go through any of the gen AI models." So how do we accomplish that? Right? Because that's a deal killer with customers.
So what we did was think about it like this: "Okay. Well, let's think about these security tools, whether it's a CSPM or an application code scanner, for example. These different tools have policies, and these policies are checking or inspecting for certain things. Vulnerability scanners check for the presence of CVEs, known vulnerabilities. So each of these tools has its own set of inspectors, or policies, and those policies or inspectors are standardized for the vendor tools." After looking at a bunch of these tools, we felt the depth and breadth of the policies or the inspectors was definitely there. These market-leading tools really provide best-in-class coverage in their categories, whether it's vulnerability management or misconfigurations on cloud resources, that type of thing.
But one of the things that we felt was missing was that those inspectors or policies were not necessarily threat-informed. Out in the wild, you've got the bad guys constantly running campaigns that are exploiting different things. These campaigns might be exploiting a known vulnerability. They might be exploiting a port open to the public, or encryption or MFA not being turned on. There's a bunch of these misconfigurations, security risks, or vulnerabilities that are being exploited.
But the policies from the leading security vendors didn't necessarily reflect the current campaigns out there and whether something might be exploitable. So there are different ways to do that. You can bring in premium threat intelligence feeds and try to connect all the dots through a Mandiant feed, or that type of thing, or you can use generative AI. We felt that by using generative AI, we could effectively build a risk-scoring model that's heavily weighted to gen AI, with other variables that we layer on top, to be able to assess the risk of a given policy or inspector, even asset-level risk, bringing in the cloud asset tags, for example. Now you've got some business context to really help drive the prioritization.
So, for example, you could take a security policy, let's say for cloud security, and you could run it through one of the generative AI engines. We use ChatGPT-4, but we're exploring Grok and a couple of others. You can ask, "What are the top five attack patterns for this given policy?" Then, you can prompt it and have a conversation. You have to critique it, and then, ultimately, you're going to get a risk score. So think of gen AI as effectively crowdsourcing threat intelligence information and then tying it back to the policy or the inspector that's continuously running on an asset, for example. That way, the policies or inspectors are threat-informed. Because gen AI, we found, does a really good job of picking up signals and patterns on what's out there.
So, to give you an example, let's say port 22 was open to the public, and that's a policy that gets inspected for cloud security. If there was a recent breach at some company where the breach occurred because port 22 was open, gen AI is going to pick up that signal: "Oh, this breach that just happened, the root cause of it was that port 22 was open." If that's the case, and there was a campaign targeting port 22 being open, and it actually succeeded in a breach, that means it's exploitable. Gen AI is going to pick that up through the prompt engineering and queries. That's how the risk score gets developed and tied back to the policy that's actually checking for the condition.
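As a rough illustration of this kind of prompting, here's a minimal sketch using the OpenAI Python client. Paladin Cloud's actual prompts, critique loop, and scoring rubric aren't public, so the system prompt, the model choice, and the 0-600 scale shown here are assumptions drawn from the conversation, not their implementation.

```python
# Illustrative only: query a general-purpose model about a vendor policy
# (never customer data) for threat-informed context and a risk score.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

POLICY = "Security group allows inbound traffic on port 22 from 0.0.0.0/0"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a threat-intelligence analyst. Answer concisely."},
        {"role": "user",
         "content": (
             f"Policy under review: {POLICY}\n"
             "List the top five attack patterns associated with this "
             "misconfiguration, note any recent campaigns exploiting it, "
             "and end with a risk score from 0 to 600."
         )},
    ],
)
print(response.choices[0].message.content)
```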
So the way we provide this air gap between the customer data and gen AI is that we're effectively leveraging gen AI to risk score all the policies and inspectors from the vendors' security tools. When a customer connects their accounts into our product, all of that gets applied to the customer data, but the customer data never leaves the secure tenant in the ... It never moves. So it's an offline, air-gapped model that's based on the security vendor policies. It's a pretty interesting way of preserving data privacy while still using gen AI to inform threat intelligence and the prioritization of risks.
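A minimal sketch of that air-gapped pattern: risk scores are computed ahead of time against vendor policy definitions only, then joined to customer findings inside the customer's tenant. The policy IDs and scores are hypothetical.

```python
# Precomputed offline: policy ID -> gen-AI-derived risk score.
# No customer data is involved in producing this table.
POLICY_RISK_SCORES: dict[str, int] = {
    "cspm.port-22-open-to-public": 560,
    "vuln.cve-unpatched-critical": 540,
    "appsec.sql-injection": 575,
}

def score_finding(finding: dict) -> int:
    """Look up the offline score for the policy this finding violates.

    The join happens inside the customer's tenant; nothing leaves it.
    """
    return POLICY_RISK_SCORES.get(finding["policy_id"], 0)

findings = [{"asset_id": "i-0abc123",
             "policy_id": "cspm.port-22-open-to-public"}]
print(max(score_finding(f) for f in findings))  # 560
```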
John Richards:
Yeah, and it sounds like this also tackles another problem, which is you've got all these different tools that you're bringing in, all those different data sources, and everybody has their own definition of what's urgent, or a high, or a medium. You know? Or they might have a number system, or a red/green, and you need a way to say, "How do I bring these together and normalize them in some way?" So does this do that, as well?
Daniel Deeney:
Well, yeah. That's right. It does bring that in. It normalizes the data. Because the problem is with the traditional severity levels, the critical, high, medium, and low, and we hear this all the time from customers. We've talked to analysts, as well. They might log into a security tool dashboard and look at their criticals, and they might have tens of thousands of critical findings. It's like, "Oh. Well, where do I start? I've got tens of thousands." Then, it becomes a difficult task, because they only have limited cycles to work on certain high-value problems.
Then, what these teams tend to do is they'll look at two or three different dashboards. Sometimes, they're dumping data into CSVs or Excel spreadsheets. Some customers have said, "Oh. I tried to use a SIEM, but that doesn't work. That's really not designed for this, because that's more real-time alerts. I'm really thinking about findings on tool dashboards. I'm not really doing real-time alerts." Right?
So there are these different kinds of challenges and problems when you have multiple tools and multiple dashboards. You've got these broad-based severity-level categories, critical, high, medium, and low, and it's just not fine-grained enough for the problems. It doesn't provide the consolidation, the correlation, the context, the threat-informed views. All of those things are key variables in prioritizing, because we hear customers say all the time, "Look, the CISO team made a list of a thousand things to go fix," and then they gave it to the security operations team. The operations team says, "Look, I only have bandwidth to fix 50 things this week. What 50 things do you want me to fix? I can't handle this list of a thousand." Right?
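One piece of this is normalizing each vendor's severity vocabulary onto a shared scale before correlating anything. A minimal sketch, with hypothetical vendor names and mappings:

```python
# Map each vendor's severity labels onto one shared 0-100 scale.
SEVERITY_MAPS = {
    "cspm_vendor":   {"critical": 90, "high": 70, "medium": 40, "low": 10},
    "vuln_vendor":   {"P1": 95, "P2": 75, "P3": 45, "P4": 15},
    "appsec_vendor": {"red": 85, "amber": 50, "green": 5},
}

def normalize(source: str, severity: str) -> int:
    """Translate a vendor-specific severity label to the shared scale."""
    return SEVERITY_MAPS[source].get(severity, 0)

print(normalize("vuln_vendor", "P1"))       # 95
print(normalize("appsec_vendor", "amber"))  # 50
```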
So this is pretty common, and we also hear stories from customers that say, "Look ..." They might manually go through, let's say, a CSPM dashboard and a vulnerability scanner dashboard, and they might piece together what they think is a critical issue. They might go over to security and say, "Look, we need to fix this, remediate this. We believe this is a serious issue." Then, the security ops team does some research, and they figure out that it's a sandbox that a developer spun up two weeks ago in a test environment. It's behind a firewall, and so, no, it's secure. We're not going to spend any time on this.
But the CISO team that developed the list or the priority, they didn't have the proper business context. Right? They didn't know it was a test environment. They didn't know it was a developer sandbox. Right? So these are some of the missing pieces that we hear out there from customers when we talk about some of the challenges.
John Richards:
Who do you see using this? Is this for the C-suite? Do they say, "Oh. Hey. I found what this is, and I'll send a report over to the developers and say, 'Fix these. These are what were rated the highest'"? Or do the developers use this to try and remediate what they're working on?
Daniel Deeney:
Yeah, it's really both: the CISO, security teams, and security engineering teams, and then also the DevOps, security ops, and cloud architect teams. A lot of times, the workflow is distributed depending on the team or the nature of the issues, whether it's cloud security, application security, or vulnerability management. So really, we try, with our customers, to get deep adoption of the product across multiple teams. It might start with the CISO team, but then they give access to the security ops teams, because the priorities are constantly shifting.
There might be an initial top-ten list or top-twenty-five list, and DevOps starts working on it. But there's constant scanning, monitoring, updates to the data fabric, AI updates. Right? So it's important to have broad adoption of the tool for these distributed teams, especially on the developer and security ops side. So that's kind of how we see it with our customers. There's an interest and a need to kind of access that information, because it helps streamline the whole process. It makes it a lot more efficient with that broad adoption.
John Richards:
Yeah. Well, you've got shared context. You both know what environment this is. It's a lot easier to get these things to the end of the line. I know in the past, I've worked without that shared context, and so much time was wasted going back and forth: "Well, what environment is that? Okay. No. I want test. This doesn't ..." So yeah. Very helpful.
Now, you mentioned exploring some other models. What are you finding is best in the cybersecurity space if you're trying to understand these threats? You mentioned using ChatGPT right now, but is anything exciting coming from the up-and-comers?
Daniel Deeney:
Yeah, I mean, our technology team has explored several different models: ChatGPT, Grok, Gemini, and a couple of others. We found ChatGPT is fairly reliable with the security prompting that we're doing. Grok is also very good. We sometimes supplement with Gemini. So yeah, these models are continually improving, and when we got started 12 months ago on this, we were just exploring: "Okay. Let's try to get a better sense of the data, the results, and the hallucinations, and to what extent they occur."
And so we started trialing multiple models over multiple months, running them in parallel, doing the risk scores on these security vendor policies or inspectors. Then, over time, we felt the consistency was there with ChatGPT. Grok, over the last few months, is getting pretty close. But when you look at the types of queries and prompts that we're doing, it really relates specifically to these inspectors or policies. What we're looking for in return, ultimately, is a risk score based on a scale that has some context around current campaigns or threats that are out there.
So that's really the basis: what are the top five attack patterns? We run that through ChatGPT-4, we run it through Grok, and then we start to compare the results. We consistently do that across thousands of policies in an automated way, and then we figure out, "Okay. Well, where are the gaps? To what extent are hallucinations appearing?" So it's part automated, part manual when you start to go through the results, and ultimately, you want to get to a point where you can refine the risk scoring so the scale is relevant to the actual level of risk.
It's got a number of dimensions around it. It has to do with the business risk, like the application or the environment, for example. It has to do with validation: is it exploitable? That type of thing. Then there's the threat-informed piece, which is: are there current campaigns running that might be targeting this technique, or this asset, or this vulnerability, or this misconfiguration? So you look at the different AI models across those dimensions, and then at the consistency over time across all those models. That's how we're looking at it. We've ultimately gotten to the point where it's working pretty well.
What's interesting with customers, when we go through design partnerships or co-innovation partnerships, and even with a couple of our existing customers that have deployed and are using the AI, is that they don't necessarily need a fine-grained answer. More of a coarse-grained approach, in many cases, will do: "Okay. So port 22 is open. How high of a risk is that? Is that a top-25 risk that I need to remediate in the next few days or this week, or is it something that could wait a couple weeks?" Customers are thinking of it, a lot of times, in those dimensions. Because these DevOps or developer teams, tech ops teams, their hair's on fire. They're busy context switching all day long, trying to figure out all these things. Security's in there looking for remediation of high-priority risks, and they try to plan out their week: "Okay. What are we working on this week?" That type of thing.
Usually, the way coarse grain works in DevOps teams, it's like, "Okay. Do I need to remediate this today, in the next couple of days, or this week? Is it that urgent, or is it something that could wait a couple weeks?" That's usually how they do their planning, how they think about it, and that's really where the risk scoring comes in. The risk scoring, I mean, it's not perfect. We're always fine-tuning the risk scores, but they've been fine-tuned enough with the gen AI engines we've been using to address how these developer teams manage their workflow.
So when we present risk scores, if we present a risk score of 587 on a scale of zero to 600, that tells the developer or SecOps team, "Okay. Look, this is something that probably should be remediated this week, with a score of 587 out of 600." It doesn't really matter if the score is 585 or 590. The fact that it's 587, getting close to the top of the scale, is good enough to set the prioritization in the queue that they have to work on. When I say coarse grain, that's really what I mean.
If there's another risk that we've risk scored at, let's say, a 413, okay, with a 413 risk score, I can probably wait a week or two on that one. This is how customers interact with the risk scores coming through gen AI, which is part of a multivariate model, and that seems to be working pretty well. So that's how we think about it.
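A minimal sketch of that coarse-grained triage: map a 0-600 risk score onto a remediation window. The 587 and 413 examples come from the conversation; the thresholds themselves are hypothetical.

```python
# Bucket a fine-grained risk score into a coarse remediation window.
def remediation_window(score: int) -> str:
    if score >= 550:
        return "remediate this week"
    if score >= 450:
        return "remediate within two weeks"
    return "can wait a couple of weeks"

print(remediation_window(587))  # remediate this week
print(remediation_window(413))  # can wait a couple of weeks
```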
John Richards:
Well, it sounds like having a number, versus just a high, medium, or low, gives a clear direction on what to tackle next and where you can have the most impact. "Oh, I've got two hours here to go work on this. I can look at this list and see what the top ones are." I know if I take care of those, I'm having the most impact on keeping my environment safe. Is that right?
Daniel Deeney:
Well, that's right. Yeah. Exactly. So once the security risks are prioritized, you think about the workflow and remediation: "Okay. What needs to get done here?" Let's take this example where, let's say, you had three tools. You had a CSPM, you had a vulnerability scanner, and you had an application security tool, let's say a DAST scanner. They've all reported findings, and the findings correlate to, let's say, a virtual machine in the cloud. That gets risk scored, and it comes up with, let's say, this high risk score of 587 we're talking about. So a customer picks that up on the top-25 list and says, "Okay. I need to remediate this issue this week, because we have three tools reporting findings related to this asset." Maybe the cloud asset tag says that it's the customer-facing application, and it's in the production environment. Right?
So you've got all of these elements coming together: "Okay. This is something I need to really remediate this week." Then, it becomes, "Okay. Well, I only have two hours," like you said. "What do I work on first?" Because there are these three things. Let's say there are three findings across the three tools. You can say, "Okay. What does the kill chain concept look like here? Well, the misconfiguration from the CSPM, that'll take me two hours to remediate, because I need to go into my AWS portal and go through the configuration, or I can work on an auto fix or something. So that takes me two hours to fix the misconfiguration."
The CVE, let's say, requires some patching of a vulnerability. Okay, well, that's going to take one to two weeks to get the vulnerability patched. Then, the application security DAST finding, okay, well, I've got to go back to the developers for that one. So that might take two or three weeks to remediate, because they've got to touch the code and QA it. There's a sprint involved. Right? So what you do is come up with these timelines to remediate across the tools based on the findings, and then say, "Okay. What can I kill first to reduce risk? Let's say the CSPM finding was a port that was open to the public. Okay. Let me shut that down. I know I can do that within two hours." Right?
Then, I'll kick off a ticket for my vulnerability management team to get the patching done, and they'll do that by next Tuesday. Okay. Great. So within another week, I'll have the vulnerability patched from my vulnerability scanner, and then I'll send another ticket to the developer team, through the CI/CD pipeline, to fix this code vulnerability. So these vulnerabilities or misconfigurations have different timelines to remediate, and what you want to do to reduce risk quickly is shut down the one that you can handle in two hours. Then, you kick off the workflow on the others, and that's how we see customers sometimes tackle these problems where you've got multiple tools reporting a risk around an asset, for example.
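A minimal sketch of that "kill the fastest fix first" ordering: sort correlated findings on an asset by estimated time to remediate, so risk drops as quickly as possible. The findings and time estimates are hypothetical.

```python
# Order remediation work by estimated time-to-fix, fastest first.
findings = [
    {"source": "cspm", "issue": "port 22 open to public", "hours_to_fix": 2},
    {"source": "vuln", "issue": "CVE needs a patch", "hours_to_fix": 24 * 10},
    {"source": "dast", "issue": "code fix plus QA sprint", "hours_to_fix": 24 * 18},
]

for f in sorted(findings, key=lambda f: f["hours_to_fix"]):
    print(f"{f['issue']}: start now, est. {f['hours_to_fix']}h to remediate")
```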
John Richards:
Yeah. That makes sense. What do you do for customers that come in and ... Of course, we all want to believe every company out there is staying on top of this, and they pull it up, and there are only a few problems, and they're going to deal with them. But they come in, and they really didn't have any visibility. Now, they're overwhelmed. They've set up Paladin Cloud, and they can see there's so much out there. How do they start? Are there services or things they could turn to for help when maybe the scale is just so much that they're like, "Oh. We didn't know this. We need to do something quick, and I don't know how to get started"?
Daniel Deeney:
Yeah. That's a great question, and customers bring that up in most cases. There are services partners out there, one that we've worked with pretty closely named CyberProof. They're a leader in cybersecurity services and solutions, and they're part of UST Global. These service providers, like CyberProof, will wrap services around a technology or product. We're seeing this with some large enterprise customers we're working on with CyberProof, where they'll look to bring in different services, whether it's a vulnerability management service, a tagging service, those types of things, to help accelerate that workflow and remediation. Because you're right, John. A lot of these teams, whether it's CISO teams or SOC ops teams, are generally short-staffed. There are budget constraints and a cap on the FTEs they can hire.
They're generally overloaded with work just from the different tools, the clouds, getting the next application release out the door. They've got all these different workflows, and security can sometimes take a backseat to the business when the business teams are pushing the next release and the CISO teams are pushing remediation on some key issues. They just don't have enough cycles, and that's where I think the services, or even managed services, around it can really help and add value. We're starting to see customers increasingly bring in the service layer around a platform to drive that value and accelerate the time to remediation.
John Richards:
Yeah. That makes sense. Thank you, Dan. This has been super informative. Thank you so much for coming on the podcast. Now, before we wrap up, is there anything you want to shout out? Where can people find out more about Paladin Cloud or follow up with you?
Daniel Deeney:
Yeah. I mean, we're at paladincloud.io. Feel free to reach out and check us out. We're expanding the platform, working with partners like CyberProof, and really looking to drive that value for customers. This is kind of a new segment in the market. Gartner's now following it; it's really a component within the broader CTEM space. So I think there's a handful of startups emerging in the space now, and I think this is definitely a category to watch.
John Richards:
Yeah. So if you're out there looking to understand your data fabric, get a handle on that, definitely check out Paladin Cloud. We'll have a link in the show notes. Thank you so much, Dan Deeney, for coming on here. It's been such a pleasure. Appreciate you sharing your knowledge here.
Daniel Deeney:
Thanks, John. I enjoyed it. Take care.
John Richards:
This podcast is made possible by Paladin Cloud, an AI-powered prioritization engine for cloud security. DevOps and security teams often struggle under the massive amount of notifications they receive. Reduce alert fatigue with Paladin Cloud. Using generative AI, the model risk scores and correlates findings across your existing tools, empowering teams to identify, prioritize, and remediate the most important security risks. If you'd like to know more, visit paladincloud.io.
Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio engineering by Andy Nelson. Music by Amit Sagi. You can find all the links in the show notes. We appreciate you downloading and listening to the show. Take a moment, and leave a like and review. It helps us to get the word out. We'll be back May 14th, right here on Cyber Sentries.