Cyber Sentries: AI Insight to Cloud Security

AI-Powered Compliance: Transforming Enterprise Security
In this episode of Cyber Sentries, John Richards speaks with Richa Kaul, CEO and founder of Complyance. Richa shares insights on using modular AI systems for enterprise security compliance and discusses the critical balance between automation and human oversight in cybersecurity.
Why Enterprise Security Compliance Matters Now
The conversation explores how enterprises struggle with increasing cyber threats and complex third-party vendor networks. Richa explains how moving from reactive to proactive compliance monitoring can transform security posture, sharing real examples from Fortune 100 companies and major sports organizations.
AI Implementation That Prioritizes Security
Richa details their approach to implementing AI in compliance, emphasizing their commitment to data privacy and security. The company uses a modular AI infrastructure with opt-in features and minimal data access principles, demonstrating how AI can enhance security without compromising privacy.
Questions We Answer:
  • How can enterprises shift from reactive to proactive compliance monitoring?
  • What are the key considerations for implementing AI in security compliance?
  • How should companies manage third-party vendor risks in the AI era?
  • What role does employee education play in maintaining security compliance?
Key Takeaways:
  • Continuous monitoring beats point-in-time compliance checks
  • Modular AI systems offer better security control than all-in-one solutions
  • Third-party vendor risk requires automated, continuous assessment
  • Human elements like training and culture can't be fully automated
Looking Ahead: Security Challenges
The discussion concludes with insights into future challenges, including quantum computing's impact on security and the growing complexity of AI-related risks. Richa emphasizes the importance of building nimble, configurable systems to address emerging threats.
Links & Notes
  • (00:04) - Welcome to Cyber Sentries
  • (01:13) - Meet Richa Kaul from Complyance
  • (02:32) - Areas Needing Security
  • (04:19) - Reactive vs. Proactive
  • (06:17) - Integrating AI
  • (07:59) - AI Compliance Challenges
  • (10:48) - Training Their Models
  • (12:16) - Evaluating Third Parties
  • (15:49) - The Team
  • (19:04) - Looking to the Future
  • (20:44) - How Others Are Implementing AI
  • (24:04) - Creating Capacity
  • (25:44) - Companies Doing It Well
  • (27:25) - When They Don’t Have the Resources
  • (28:50) - Wrap Up

Creators and Guests

Host
John Richards II
Head of Developer Relations @ Paladin Cloud The avatar of non sequiturs. Passions: WordPress πŸ§‘β€πŸ’», cats πŸˆβ€β¬›, food 🍱, boardgames β™Ÿ, a Jewish rabbi ✝️.

What is Cyber Sentries: AI Insight to Cloud Security?

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

John Richards:
Welcome to Cyber Sentries from CyberProof on TruStory FM. I'm your host, John Richards. Here, we explore the transformative potential of AI for cloud security. This episode is brought to you by CyberProof, a leading managed security services provider. Learn more at cyberproof.com.
On this episode, I'm joined by Richa Kaul, founder and CEO of Complyance. That's Complyance with a Y. We cover really interesting ground around the risks that come with third-party AI tools and why being proactive, not reactive, is key for both security and compliance. Richa talks about the balance between what we can automate and what we shouldn't, like education and training, and how freeing teams from the tedious parts of compliance can make massive differences. We also chat about the growing importance of modular AI systems versus massive all-in-one models, and what that means for the future of enterprise AI.
Hello, everyone. Welcome to Cyber Sentries. I'm delighted to be speaking to Richa Kaul today, CEO and founder of Complyance with a Y. Richa, thank you for joining us. How are you doing today?

Richa Kaul:
Thanks so much for having me, John. I am really excited to be here today.

John Richards:
Well, I would love for folks to get to know you a little better and understand how you founded Complyance and why. So what led you on the journey to where you're at right now?

Richa Kaul:
Great question. So I will admit that my reason for founding Complyance is very personal. I am a bit of a data privacy nut in my personal life. I am the one who rejects cookies on every site that I go to. I give the least amount of information possible every time I'm asked. And I kind of realized that in order to really protect personal data for myself, my friends, family, loved ones, it is actually most efficient to help the enterprises who hold our data be cybersecure and really thoughtful about their security and privacy. So we actually help enterprises across the world basically automate the information security and data privacy compliance in their organizations.

John Richards:
Well, I'm glad you're doing that, helping. This is a huge area. So what are the big areas that you see groups struggling with where they're like, "Hey, we need to bring somebody in to help with this?"

Richa Kaul:
Great question. So when it comes to enterprises, it's a very complex landscape. Right now, just as everybody who's listening to this already knows, data breaches are on the rise because cyber attacks are on the rise. Our third party networks of vendors are more complex than ever before, and we are only as strong as our weakest third party vendor, to be honest with you. And so it's a very complex time for enterprises trying to manage all of the increasing risks that they're dealing with.
A lot of them are defaulting to what has traditionally been the way to manage compliance: much more manual, point-in-time audits. And they're looking, really, really looking, for a way to go from being reactive to being proactive in their compliance management around security and privacy. But it feels unattainable because they have such tailored, complex environments, they have legacy systems in play and siloed tools, and they feel like it's not possible to bring everything under one roof and have that true visibility of their compliance health.
And what we do is show them how our custom GRC platform, fully configurable, including custom integrations and custom AI agents, can actually monitor their compliance in a way that works for them. And that's just been an amazing journey to watch that unfold and see the impact for these companies, Fortune 100s, major league sports organizations, and so on.

John Richards:
Now, what's it look like, the difference between being reactive versus being proactive? I mean, I assume reactive, you had a breach and now you're trying to figure it out. But where do these companies want to be?

Richa Kaul:
For enterprises, being reactive is not so much waiting for a breach, because we hope that they're at least more prepared than that. It's more like being reactive to audits or being reactive to requests from either customers or others, versus being proactively in control year round. It's so funny to think about, John, the products and the companies that we trust every day. When they say that they're audited and certified, we're really talking about point-in-time checks once a year. And it's almost laughable to think about a once-a-year, point-in-time check being a true measure of a company's security.
Just take a simple example. Let's say that you want to make sure that your service provider, on all of their laptops or endpoints, has anti-malware and basic protections turned on. This is just a very simple thing. If you check it one time in the year, you may not really be capturing whether that is true or false. And so, as simple as that, just getting to continuous monitoring and being able to say we actually go from reactive point-in-time checks, responding to problems when they present themselves, to proactively monitoring the controls that matter to us with custom checks, and then being able to actually know when something is going off and proactively address it. That's kind of the difference. Does that make sense?

John Richards:
It's like the digital version of a restaurant with a health inspector coming and they say, oh, we're going to clean everything deep before they get here so we can pass this, versus every night we're on top of this, cleaning everything and every day this is a priority for us.

Richa Kaul:
Exactly right.

John Richards:
So you're doing this, and obviously the world's been changing with the advent of AI and what it lets us do. So how are you bringing AI into this space to help with compliance? There's a challenge in the amount of data people have. I know it can be a really helpful tool in that. So what's that look like for you all as you use AI to kind of help in this area?

Richa Kaul:
Counterintuitively, AI is most important for GRC when you go up market. So the more complex the landscape, the more important AI becomes. And that's because relying purely on automation and integration can only take you so far when you have such siloed systems and legacy tools in place, because they have limited API connectors and limited integration checks for continuous monitoring in a connected way. So you can actually use AI to help bridge the gap. Part of that is MCP, or Model Context Protocol, but a lot of enterprises aren't there yet. We have built that capability for when they are.
But we also use AI to review manually collected evidence at scale. And so you're actually able to connect in, even with a frequent feed or with actually manually provided evidence. We're not relying on API connection points or specific built-in integration checks. We're able to use AI to produce completely custom checks. So that means you can decide how you want to check your compliance. You can build a custom test for every single control and use AI to actually run that test every day, hour, week, quarter, however you want to do it.
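To make the idea above concrete, here is a minimal sketch of what a user-defined, scheduled compliance check might look like. This is purely illustrative, not Complyance's actual implementation; every name here (`CustomCheck`, `run_check`, the toy keyword evaluator standing in for an AI reviewer) is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CustomCheck:
    """A user-defined compliance test for a single control."""
    control_id: str
    prompt: str    # what the reviewer (human or AI) should verify
    schedule: str  # e.g. "daily", "hourly", "weekly", "quarterly"

def run_check(check: CustomCheck, evidence: str,
              evaluate: Callable[[str, str], bool]) -> dict:
    """Run one custom check against manually collected evidence.

    `evaluate` stands in for the AI reviewer: it receives the check's
    prompt and the evidence text and returns pass/fail.
    """
    passed = evaluate(check.prompt, evidence)
    return {"control": check.control_id,
            "schedule": check.schedule,
            "status": "pass" if passed else "fail"}

# Toy evaluator: a real system would call an LLM here instead of
# doing a plain substring match.
def keyword_evaluator(prompt: str, evidence: str) -> bool:
    return "anti-malware: enabled" in evidence.lower()

check = CustomCheck(
    control_id="EP-01",
    prompt="Confirm every endpoint reports anti-malware enabled.",
    schedule="daily",
)
result = run_check(
    check,
    "Endpoint fleet report. Anti-malware: ENABLED on 412/412 devices.",
    keyword_evaluator,
)
print(result["status"])  # pass
```

The point of the sketch is the shape, not the evaluator: the check definition carries its own prompt and cadence, so a scheduler can rerun the same test daily or hourly, matching the continuous-monitoring model described above.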

John Richards:
That seems great. But I also know that you're in the compliance space, and AI has its own compliance challenges. How do you get around those and get companies to say, oh, this is an acceptable and safe way to use AI? What are the things you're looking at to make sure that that's not the risk?

Richa Kaul:
We've nailed it and I think we've nailed it because we really care. It is important to us as an information security company to be thinking about exactly the highest bar for security and privacy that we can put in place with our AI. And we hold ourselves to the highest bar because we know that our customers value trust above all else. So there are certain guardrails that we have built in and rules that we have made ourselves follow. Just for what it's worth, none of these have impacted our pace of innovation. These are just built in constraints. And as we know, constraints actually breed innovation. And so here we are on the other side.
A few things. The first is that we have refused to allow client data to train LLMs or our internal models. That is table stakes, part one. The second is that client data doesn't even train the model for our own benefit, to be used internally or for resources or anything. We have really strict guardrails up within the code itself to actually prevent that from happening. The third thing is that AI features are all opt-in, not opt-out. That seems maybe small, but as we all know, I remember last year there was this big issue when Slack announced that AI was turned on for all of its users. And oh, you can opt out of it, but we were all wondering, well, we didn't opt into it. It's just crazy. I mean, that happens all the time around us, and so that's really important.
The fourth thing is that the AI features only access the minimum data that they need, which ties into a different point. Each AI feature is built discretely, so it's independent. It's not a web of AI operating across the platform, but rather discrete agents and discrete features that have a start point and an endpoint. Each is only given the minimum context that it needs. That actually produces better results and reduces hallucination, which is a separate benefit. But it also creates basically minimum context on every AI run, so that you can be more in control of turning features on or off independently, still getting the value where you do want it but turning it off where you don't. So those are some of the principles that we follow very, very tightly internally to make sure that the AI is meeting security and privacy standards.

John Richards:
I like that you brought up even the small things, like opt-in versus opt-out. Because it really shows a posture that you're taking of being aware of the choices that you make and the impact that they have. Now, I did want to know, you mentioned not using client data and such to train. So how do you train these models? Are you using a lot of synthesized data, or how do you get around that challenge if you're not using client data to help build those out?

Richa Kaul:
Two inputs. We use publicly available data of which there is a lot. And the second is that we actually produce our own kind of examples of good and bad for the fine tuning.

John Richards:
Awesome. That makes sense. And you can get them trained. Now you also mentioned using lots of different little individual ones. Are you using one kind of model behind the scenes? Are you using a wide range of models for each operation? What's it look like to really break down and get to those discrete points?

Richa Kaul:
We have built our AI infrastructure in a modular way so that we can actually swap out which LLM we use, and including for larger enterprise clients, even swap out with their internal LLM when the time comes. That's part of our future proofing approach. For now, we have found that Microsoft Azure's enterprise deployment of OpenAI has the best balance between innovation and security that our clients expect. And so that's been the primary kind of backdrop of our AI.
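The swappable-backend idea Richa describes can be sketched as a small interface that feature code depends on, with each LLM provider as an interchangeable implementation behind it. This is a rough illustration of the pattern only; the class names and the placeholder backends below are invented for the example, not Complyance's real architecture or Azure's API.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface every swappable model backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

class HostedEnterpriseBackend:
    """Placeholder standing in for a hosted enterprise LLM deployment."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] answer to: {prompt}"

class ClientInternalBackend:
    """Placeholder standing in for a client's own in-house LLM."""
    def complete(self, prompt: str) -> str:
        return f"[internal] answer to: {prompt}"

class ComplianceAssistant:
    """Feature code depends only on the ChatModel interface, so the
    backend can be swapped per client without changing feature code."""
    def __init__(self, model: ChatModel):
        self.model = model

    def summarize_evidence(self, evidence: str) -> str:
        return self.model.complete(f"Summarize this evidence: {evidence}")

# Swapping providers is a one-line change at construction time.
assistant = ComplianceAssistant(HostedEnterpriseBackend())
print(assistant.summarize_evidence("SOC 2 report, section 3"))
```

Because the feature layer never names a concrete provider, pointing a large client at their internal LLM later means adding one new backend class, which is the future-proofing benefit described above.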

John Richards:
Well, I'm glad that's working well. I like the idea of being able to let people swap in their own. It gives a lot of freedom; of course, then they have to deal with whether theirs is good enough to keep up or give the same quality of results.
So, something else. This actually came up a little bit in our pre-talk, where you were talking about how you help evaluate risks around third parties. I'd love for you to explain a little about what you're doing there, on the danger side. You could be very secure, but as you start to partner and connect with folks, this opens up new avenues. So how are you approaching compliance and risk in those areas?

Richa Kaul:
There are two answers. One is through just the platform's baseline functionality, which is very automation-enabled; we have full third party risk management capabilities in the platform. So it's things like keeping your vendor registry there from a security perspective, monitoring security reviews for all of your vendors, sending out those tailored questionnaires, and so on. You can do that entire end-to-end process, from vendor diligence to vendor onboarding to ongoing vendor security reviews, all on the platform.
But we also have two agents live already to support third party risk, and more on the way, which is exciting. The first agent is actually a vendor risk score, and we just recently released it. So it does objective, outside-in analysis of vendor risk based on negative news about them, news of data breaches, news of even sometimes things that are adjacent to security risks. So even if they have a big layoff, sometimes that's a concern because there might be continuity or other issues internally that could put your own data at risk as a client. And so we've put together a really strong third party risk methodology and a scanner that runs to address those. And that links in directly to our platform.
The second agent has now become our highest growth agent; it had, I think, 172% month-on-month growth in AI calls last month. Yeah. Crazy, crazy. It's everyone's favorite agent. It doesn't sound very sexy, but it is sexy, I think, because it actually runs on your tailored vendor questionnaires. So in the platform, you can create your own questionnaire that goes out to vendors to check for anything that you want to check for, whether it's obviously SOC 2 reports, but also, where will my data live? Where does the team live that supports me? What are your response times around incidents reported? All that.
You can actually tailor your own agent. So it's a customizable agent that clients can configure in their environment for every question. They can say what their risk threshold is for each, and what the signals of a good answer or a risky answer are, and so on. And basically as soon as a vendor submits their response, the AI runs immediately, and you get an email that says they've submitted, here are the risks. For each of those findings, you can create a risk that you tag to the vendor, you can follow up with them, or you can auto-create a task for yourself internally. It's a very high-ROI, high-leverage type of agent.
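The questionnaire agent just described, per-question risky signals, weights, and automatic findings on submission, can be sketched roughly like this. The structure and names are hypothetical, and the substring matcher here is a stand-in for the LLM judgment a real agent would apply.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    risky_signals: list = field(default_factory=list)  # phrases treated as red flags
    weight: int = 1                                    # how much this question matters

def score_response(questions: list, answers: list) -> dict:
    """Flag risky answers and produce a simple weighted risk score.

    A real agent would use an LLM to judge each answer against the
    configured signals; this stand-in does plain substring matching.
    """
    findings, score = [], 0
    for q, a in zip(questions, answers):
        hits = [s for s in q.risky_signals if s.lower() in a.lower()]
        if hits:
            findings.append({"question": q.text, "signals": hits})
            score += q.weight
    return {"risk_score": score, "findings": findings}

questionnaire = [
    Question("Where will my data live?",
             risky_signals=["outside the EU", "unknown"], weight=2),
    Question("What is your incident response time?",
             risky_signals=["no SLA", "best effort"], weight=1),
]
answers = [
    "Primary storage is in Frankfurt; backups may be outside the EU.",
    "We respond within 4 hours under a contractual SLA.",
]
report = score_response(questionnaire, answers)
print(report["risk_score"])  # 2
```

The key design point the sketch captures is that the risk criteria live in the questionnaire configuration, not in the scoring code, so each client can tune signals and weights per question without touching the agent itself.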

John Richards:
Yeah. No, as somebody who's been a bit on both sides of receiving these, the amount of work to review and go through, and you want to make sure you didn't accidentally... because these could be multiple pages and long answers, and you're like I hope I didn't miss a red flag somewhere in that, and having a way to just know right away, hey, here's a couple areas I should focus in on before reviewing the whole thing. I can see where the value comes there.
Now, hearing this makes me think you must have a very smart AI team that's working on this. So what's your kind of internal philosophy around building that team and what they're focused on? Because you've got all these different agents, and you're really looking to drive business impact. So in a way, you're doing a lot on how you optimize in this area. Are there any kind of priorities or leading methods that you're kind of like, hey, this is what drives the team?

Richa Kaul:
Recently, there's been a name and philosophy put to it. Actually, perhaps it's not recent, but I recently read, I should say, about a philosophy called tiny teams, specifically empowered actually by AI to work internally. But forgetting the empowerment by AI for a second, I think the entire philosophy of tiny teams, which was also reflected in a book I read about how Netflix manages their culture, is about really top talent density. And we have always been a lean and mean team with the best of the best talent internally.
That is how I think we've been able to produce to the degree that we have for these enterprises. They are always shocked when they see our platform. Sometimes they'll actually ask us, "Is this a proprietary platform or is this someone else's tech?" And we're like, "No, no, this is our tech. This is our platform. We built this baby." But I think it all comes down to the talent density on our engineering team. And of course, our amazing CTO, who's been with us from day one and sticks by the principles of innovation while also adhering to security and high UX. Those are hard goals to achieve all in one platform.

John Richards:
Yeah. That's a lot of tension there that you're trying to manage. And I've seen folks do that really well. It sounds like they must be on top of it. So do you kind of have one main team that builds out each piece kind of almost sequentially, they move on to the next? Or do you have lots or multiple of these kind of tiny teams that are individually working on different pieces that you're building out?

Richa Kaul:
On the engineering side, we split it by platform and AI. So from our perspective, I truly believe the platform is the moat long term. The AI is a differentiator in the near to medium term. And I think we are kicking butt on making sure that we're way ahead of the curve on that front. Truly right now, we can't see our competitors in the rear view, but we're hoping to keep that trend up. But in reality, it's only going to go for so long, and I'm very aware of that.
Long term, the platform is the moat. And as much as I'm talking about AI on this call, we are really invested, actually, 60% of our team is fully invested on platform. So a majority of the team is still spending every single day focused on platform innovation, workflow automation, making sure the platform looks and feels beautiful and works perfectly every single day. And that is actually, I think, a big reason that clients stick with us.

John Richards:
It's a good point of, it's easy to get distracted by the shiny and new, but you've still got all of this stuff that still needs to happen. So it seems like the wise choice there. What are you seeing as we're kind of, put on your looking to the future, see into what's coming up, and what do you think are going to be the big challenges for this area, for GRC that's going to be coming? Do you think it's going to be just more of the same? Or are there areas that you see that like, oh, this is going to be a hot button area that we're going to need to address in the near future?

Richa Kaul:
GRC and security compliance management, it's getting so difficult. And the rise of AI is only making it more and more and more complex. As an example, even the rise of quantum computing, for example, is going to make the landscape and the tools that are available to bad actors just so much more powerful. And we are not ready right now. And it's going to be something where we have to react in very quick cycles because the future is not yet built for us to secure and we cannot secure something that we do not yet know the shape of.
And as things evolve, it's going to be speed. Speed is going to be the name of the game. As innovations come out, how quickly can we protect and play defense, frankly, on what is coming down the pipe? And I think that using AI and having your kind of foundations built in a way that's nimble, configurable, custom is going to be the right launching pad to be able to respond to those new threats.

John Richards:
The unknown that's sitting out there as we're waiting to see is a little bit terrifying, but I'm glad we've got very smart people on our side as well that are looking at that and saying how can we handle this quickly? Are you seeing anything in the AI space, not using AI for that, but as you're working with these top companies, is there anything around how they're implementing AI that you're really focused on in the risk compliance kind of area?

Richa Kaul:
So I think actually a lot of AI vendors out there are not necessarily using security best practices in building their AI tools, and it's creating quite risky situations, I think, for a lot of clients that are out there. And unfortunately, it's a bit unwitting, because AI privacy and security practices are not yet part of everyday language and folks are not fluent in it. And so we are seeing an increase in risk, things like privacy risk from these AI features taking way more data than they're either saying or than they even necessarily sometimes realize that they are.
We're seeing a lot of issues related to, again, more third parties having data, because a lot of AI vendors are relying on other parties not only to run the LLMs, but also to run all the infrastructure related to the AI. And that means more sub-processors that have direct access to your data, and then you have to worry about the security of those sub-processors. So there's kind of this web effect that's happening even more so now; it's complex. What we know right now, though, is that the risks that are present today, I really believe we can handle. I think we can handle them with, again, custom compliance automation like what we and other folks have actually created.

John Richards:
I'm going to just repeat that back, make sure I'm understanding, is that for a lot of these enterprises, the AI risk of compliance that they're running into is less around specifically their own internal usage of it. But as they adopt third party tools and things where they don't know, like maybe they fill out a form, but how do you know they're following the same level of data integrity and governance and compliance that you are? And so as they bring them on, they might be consuming more data into their models and opening up avenues of risk that you just don't know about. And so finding ways to monitor that and ensure that's not happening is really important. Does that kind of capture what you were saying?

Richa Kaul:
Yeah, it does, John. And again, that's because from what I have seen, a lot of enterprises are setting up pretty strong contractual relationships with the enterprise LLMs that they're using. So some of those risks are actually taken care of. Of course, there's always the human risk, which I can come back to in a minute, but that's why I kind of pivoted over to the vendor and third party side, because that's where the level of diligence just isn't as concentrated. And so it, of course, has more holes in it.
Going back to the human risk, that's a big one. We have a lot of employees across the world right now putting their company's confidential data into ChatGPT, without even thinking about the fact that it's not the right thing to do. And that's where I go back to that fluency around AI security hasn't yet sunk into everyday conversation. You wouldn't put your social security number, for example, into ChatGPT, I hope, but you might put your corporate documents thinking, no big deal, it'll help me improve my productivity here, but you're still exposing some of that. So I think we've got some work to do there.

John Richards:
In my mind, that gets back to the education level at these companies. Do you do anything on the front of assisting with education for risk and compliance and governance within a group? Or is it mainly monitoring, and then they take those best practices and educate their team: here's what we've put in place, here's how you follow it?

Richa Kaul:
The way I like to frame it is that we create capacity for them to do strategic work because our platform does so much of the manual work that they've been otherwise spending their time on. And to me, training and building compliance culture is not something that you can automate. It needs strategic time from these folks. And time is really well spent on that. So if we can relieve the time that they're spending chasing control owners, following up on audit prep, reviewing vendor documentation, great, let us handle that. But you guys handle building compliance culture, implementing a real security understanding across your team, that's going to be so much more valuable.

John Richards:
I feel like that goes back to earlier when you were talking about that posture of your organization and how you choose: hey, you opt in, you choose individually which pieces you want. It's not all or nothing. It's a culture that's been created with the idea that security matters and is important, and that we're going to live that out, whether it's internally or for our clients as well.
So I like, too, the idea that this isn't just something you automate, where everybody takes a single test or does a little training and now you've really gotten what it means to be compliant and we just move on, but rather actually putting in the effort needed, from the top down, to show that's the direction you're going. Are there companies that you see doing this well, where you're like, here's what they're doing, this is a good example of what companies should be looking to do?

Richa Kaul:
We have some clients who are amazing, and they're really doing a great job around being really custom and really proactive in their compliance management. One of our clients, household tech name, you and I both probably use their product every day, they are kind of the beacon of, I think, where a lot of our other clients want to get to. So they've actually set up an entire trust framework internally where they have set up a set of fully custom internal controls that satisfy all of the different compliance standards that they need to meet, but are written and are tested in a way that is really custom to their organization. That means that they have peace of mind every day, that things that they care about are being tested and basically pressure tested, and made sure that compliance is active and is working. And by doing that in a really automated way, they freed up their own time.
So what they've told us is that rather than checking once a month or once a quarter, I believe, they've been able to check once a week. And so now they have much more peace of mind around their baseline compliance health, and they're spending more time addressing the gaps that are being surfaced rather than doing the can-you-please-give-me-some-evidence, let-me-review-it-manually, then-I-follow-up-to-make-sure-I-didn't-get-something-wrong routine. That whole cycle is washed out. And so now they have a proactive compliance posture, I guess, as you said before. And I see that as kind of the gold standard.

John Richards:
That is a great example of that shift from reactive to proactive. Now, what do you say to, if you're talking to these enterprises and they push back and say, "We don't have the resources," whether that's money or time, "to really invest in this," how do you let people know the importance of it anyway?

Richa Kaul:
It's expensive to not do compliance, period. It is very expensive to be reactive. It is very expensive to not be prepared. It's very expensive to not have visibility. Because when those data breaches happen, it's not only the stock price that goes down, the amount of resources internally that are thrown at a data breach is immeasurable. Not only are you actually dealing with the breach, your marketing team is under fire, your customer team is under fire, your executive team is under fire. So there's a lot of folks that are involved. It's an expensive proposition. So do you want to spend money on that, which also has a lot more stress, or do you want to be in control? And I think a lot of companies want to be in control, and we're seeing that. We have incredible growth in our company and also in the space overall.

John Richards:
Yeah. I think, sadly almost, the number of breaches, and whether it's the legal fees or paying bounties or these different things that have come with them, has helped wake up folks that this isn't something you can just sit on and wait for. It is a real threat. And there's the rise of larger actors out there as well that are so focused on targeting. Richa, this has been incredibly fascinating. I appreciate you coming here on the podcast. Before we close out, anything you would like to pitch? What's going on? What should people be paying attention to? Maybe where can they find you on LinkedIn?

Richa Kaul:
Absolutely. Yeah, you can absolutely find us on LinkedIn. Also just on complyance.com. That's Complyance with a Y. So C-O-M-P-L-Y-A-N-C-E dot com. And we really look forward to having these types of conversations with enterprises who just want to get more in control of their information security compliance.

John Richards:
Awesome. Well, I will make sure that is in the show notes so you, our listener, don't have to remember to drop the Y into Complyance; we'll have a link down there for you. So absolutely check it out. Richa, this has been informative. Thanks again for coming on here. It's been wonderful. Thank you.

Richa Kaul:
It's been a pleasure, John. Thank you.

John Richards:
This podcast is made possible by CyberProof, a leading managed security services provider helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. From proactive threat hunting to manage detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com.
Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM, audio engineering by Andy Nelson, music by Ahmed Seghi. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Take a moment and leave a like and review. It helps us get the word out. We'll be back December 10th right here on Cyber Sentries.