Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.
John Richards:
Welcome to Cyber Sentries from CyberProof on True Story FM. I'm your host, John Richards. Here, we explore the transformative potential of AI for cloud security. This episode is brought to you by CyberProof, a leading managed security services provider. Learn more at cyberproof.com.
On this episode, I'm joined by Idan Gour, co-founder and president at Astrix Security. We get into the challenge of credential leakage as everyone rushes to stand up MCP servers, and a new open source MCP wrapper designed to help prevent those leaks. We talk about how the security concepts we want to follow often clash with the reality of fast-moving AI adoption. We also take a step back to look at where enterprise teams actually are in their AI journey: from early experimentation, to embedding LLMs in internal products, to deciding whether it's too soon to trust AI in core business workflows. We look at how the risks differ between workforce tools and customer-facing applications. Lots of good insight in this one. Let's get into it.
Hello, everyone. Thank you for joining us here on Cyber Sentries. We are fortunate to be joined by Idan Gour, co-founder and president of Astrix Security. Welcome, Idan.
Idan Gour:
Hey, John. Thank you. Thank you for having me.
John Richards:
Oh, thank you so much for coming on here. I've heard exciting things about what you all are doing. I'll ask you the question we ask everybody who comes on here: how did you end up at Astrix Security doing what you're doing? I mean, you co-founded it. What led you into starting in this area?
Idan Gour:
Actually, we started this journey in 2020, turning 2021. We left our previous jobs, myself and my co-founder and CEO, Alon Jackson. We're good friends; we spent a lot of time together before that. But we were trying to find a problem that organizations are actually having and nobody wants to solve for them. And we realized what would later become the non-human identity security space: the opportunity that automation and connectivity and productivity provided to organizations, bringing in more technologies, accessing more data and resources and actions, everything on our behalf, is a huge opportunity. It's a huge gain for enterprises, and for our personal lives as well, of course. But on the other side, when we decided to connect everything to everything else, we forgot that there is something that allows us to connect these things. And these are tokens and credentials and identities and so on and so forth.
Later on, as I said, we called these non-humans. But back then, we spent almost eight months talking with almost 150 professionals, experts, leaders, practitioners, researchers, and so on to get a really deep understanding of that specific problem. And we understood that there is enormous friction happening between the business value and the security challenges it creates. That allowed us to see a great opportunity for a security company to be established to try and bridge these gaps, while still providing the ability for organizations to enjoy the productivity and the automation, but in a more secure way. So the question was always how to unlock these gains in a more secure fashion for companies.
John Richards:
Now, you already mentioned the kind of non-human access. What did it look like as that vertical really started to grow? I know it's always been around for a bit. Did you guys go back to the drawing board with that research and try to implement in that sector? Or was it a logical extension of what you were already doing? What are you doing and what did you do to prepare for the big shift that happened?
Idan Gour:
So for many years, we had this problem. It's not something new. We connected things and we created identities and created credentials and so on. So the problem very much existed and everybody is familiar with it under different names: technical accounts, service accounts, virtual accounts, and so on. But the solution space never really existed for organizations. It was a lot of manual labor, if anything at all, and that manual labor never actually scaled up with the distribution of businesses owning their own IT systems. It never scaled up with the number and the size of SaaS applications or cloud infrastructure and so on. So it was almost irrelevant, and that created this huge attack surface. And when we started, we talked a lot about third-party risk. It was 2020, 2021. Supply chain attacks were a big thing. They are still a big thing; just last week we had a new one with Gainsight. But they were important back then, and third-party risk was an important priority, I would say, across organizations.
But as we were working on this and providing value to our customers, we saw that this value extended beyond third parties, and that we were becoming more and more focused on solving identity challenges for organizations. Until at one point, one of our early design partners, a CISO, told us, "Listen, you're not capturing the value. You're not telling the value of what Astrix is providing. I don't know what you should call it, but maybe call it non-person, maybe call it non-human. You have to find a way." He was upset to some extent. And non-human, that was interesting. And in 2023, we were privileged enough to participate in the RSAC Innovation Sandbox and to present what Astrix is doing.
And in startups, it's a lot about differentiation. What is the different value that you are able to provide? And we wanted to make sure we were putting on stage our best show. And that was the first time that we actually introduced the idea of securing non-human identities. It's recorded; it's possible to watch it on YouTube as well. But it was the first time that we painted that picture of how the future looks with non-human identities. It was right after the introduction of generative AI drawing people as kings and queens back then. So it was a window to the future to some extent, but that was the opportunity to put this front and center and then move from there to where we are today.
John Richards:
Wow. I love that story. Okay. So what has that journey looked like for a lot of the folks that you're working with as they've gone from that early version to where we're at today? What are the concerns? Are you starting to see enterprises start to implement? Are they still too afraid? What's the landscape that you're seeing out there as you're working with these groups?
Idan Gour:
So I would say that in the last couple of years, especially since the introduction of agentic AI, a lot of the focus we are putting in is securing non-human identities, but also securing AI agents: a type of non-human identity that itself uses non-human identities and tokens and credentials. Each company puts a somewhat different perspective on that. But what we have seen is that organizations are moving from the exploration phase, which is where most of them are, for sure, to a more mature state: trying to understand what they have and how people in the organization are using the technologies they have already been provided with, and the technologies they learned of and are bringing in, just like shadow IT, shadow SaaS, shadow AI, shadow AI agents, with bottom-up adoption. But the question is very often not how to reject it, as it was in security for a long period of time, but how to embrace it.
So you have decided that you are adopting, let's say, three enterprise platforms for building AI agents, but then all of a sudden you understand that there's another one gaining traction across the organization. So maybe something is happening over there, and maybe the right decision is actually to have a good conversation with the person who brought it in, who is now adopting it and managing to build things they haven't been able to build in the past. So that's moving a little bit from exploration to implementation and adoption, and putting in the policies that you want to have. So for instance, you decide that agents may have access to certain resources, but only read-only access, and they are not provided with management access; or you're providing access that is scoped to the right purposes and the right level.
But what we also see is that companies are starting to deploy and to think more about these implementations and this adoption across their digital estates, their real products, outside of the exploration phase or outside of the workforce. And then they're trying to be even more purposeful about the architecture, about the way they perceive the potential value gains, and about the potential risks. How is the AI center of excellence on one side helping to bring in the technology, while on the other side not risking the business too much from a security standpoint? It's a lot of collaboration in some areas; in [inaudible 00:09:16] some areas it seems it can improve. But this is what we see from an adoption standpoint.
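The read-only-versus-management-access policy Idan describes can be sketched as a deny-by-default scope check. This is a hypothetical illustration (the agent name and scope strings are invented), not Astrix's actual enforcement mechanism:

```python
# Hypothetical least-privilege policy for agent identities: each agent
# is granted an explicit allow-list of scopes, and anything not listed
# (e.g. management or write scopes) is denied by default.
AGENT_POLICIES = {
    "reporting-agent": {"crm:read", "metrics:read"},  # read-only by design
}

def authorize(agent: str, requested_scope: str) -> bool:
    """Deny by default: unknown agents and unlisted scopes are rejected."""
    return requested_scope in AGENT_POLICIES.get(agent, set())

print(authorize("reporting-agent", "crm:read"))   # True: read access allowed
print(authorize("reporting-agent", "crm:admin"))  # False: management access denied
print(authorize("unknown-agent", "crm:read"))     # False: unregistered agent
```

The point of the deny-by-default shape is that an agent someone spins up bottom-up has no access at all until it is registered with an explicit, auditable scope list.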
John Richards:
Yeah. What are the big mistakes or the big areas of concern right now for folks doing that? Where are they like, "Okay, here's what I'm worried about, this is what keeps me up at night, and this is what I'm trying to solve"?
Idan Gour:
So I think that one of the areas we have the opportunity to see organizations struggling with is really what these agents can do in the organization and how these agents are being granted access to systems. So, a couple of things. For instance, if you are a publicly traded company and you are bringing agents into your workforce, you have the responsibility to maintain the integrity of your financial reporting, right? It means that these agents, which need to be audited just as humans are, should not be able to change your financial reporting. They should not be able to change the numbers because they think the numbers look better a different way or anything like that. Even though sometimes we may want that to happen, it's completely forbidden, right? But if you are not managing the access of the agents, if you're not managing the projects, if you're not managing and understanding what people are building and what kinds of things they are trying to incorporate into the organization, you actually live without any visibility or understanding of potential risk that may even be materializing in your organization right now.
So that's one aspect that relates more to discovery and visibility and continuous policy enforcement, for instance. Other areas come up as you try to understand: okay, so how are agents being provided with access? There are challenges that happen with the introduction of technologies that were supposed to stay in the lab but were adopted so quickly, without security frameworks or security controls, and now there is a challenge there. So let's take MCP servers, right? We did some research a couple of weeks ago. We published that research, and what we saw is that about 90% of MCP servers require some kind of credentials in order to access other services. The others access things like US weather data; you don't need credentials for that, you just do it. But the rest do need credentials, and the majority of these MCP servers that need credentials are using hard-coded credentials.
So you need to put your secrets in a JSON file, in an environment file, and so on. All of the work that we have done in the last 10 years to move to more modern credentials, to put everything in the vaults and in the secret managers: now, with the gravitational force of AI agents, credentials are just being pulled outside of these vaults and secret managers into the usage of these MCPs. And it's a big gap, because these MCP servers sit on endpoints, on employees' laptops, on their own servers, and so on. And that's the biggest leakage point for credentials, because attackers are trying to get to these endpoints anyway as initial access vectors in order to infiltrate the organization. This is supported by the Verizon DBIR report and others.
So we are helping organizations to understand what they have, to recognize these MCP servers and the way they're using their credentials. We actually even released an open source project, which is a kind of MCP wrapper that allows the MCPs not to have these hard-coded credentials in their files, but to fetch them just in time when they are needed, kind of reducing that risk. That's open source that can be used by everybody. We are using it for our own purposes internally as well, of course.
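The pattern Idan describes, replacing hard-coded credentials in an MCP server's config with a just-in-time fetch, might look roughly like this. This is a hedged sketch, not the actual Astrix open source wrapper; the `vault://` placeholder scheme and the injected fetch function are assumptions made for illustration:

```python
from typing import Callable, Dict

class JustInTimeCredentials:
    """Resolve credential placeholders at call time instead of storing
    raw secrets in .env or JSON config files on the endpoint."""

    def __init__(self, fetch: Callable[[str], str]):
        # `fetch` would talk to a real secret manager (Vault, AWS Secrets
        # Manager, etc.); it is injected here so the sketch stays
        # self-contained.
        self._fetch = fetch

    def resolve(self, env: Dict[str, str]) -> Dict[str, str]:
        """Return a copy of `env` with vault:// placeholders replaced by
        freshly fetched secrets. Nothing is ever written to disk."""
        resolved = {}
        for key, value in env.items():
            if value.startswith("vault://"):
                resolved[key] = self._fetch(value[len("vault://"):])
            else:
                resolved[key] = value
        return resolved

# Illustration only: a dict standing in for a real secret-manager client.
fake_vault = {"github/token": "ghp_fetched_at_runtime"}
creds = JustInTimeCredentials(fetch=fake_vault.__getitem__)

# The MCP server's config now ships placeholders, not secrets.
mcp_env = {"GITHUB_TOKEN": "vault://github/token", "LOG_LEVEL": "info"}
print(creds.resolve(mcp_env)["GITHUB_TOKEN"])  # ghp_fetched_at_runtime
```

The design point matches the episode's concern: the secret exists only in the process's memory at the moment the MCP server needs it, so a stolen config file or environment file leaks a placeholder, not a credential.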
John Richards:
I didn't know MCPs were doing that; I'll have to send them a link. We'll make sure we put it in the show notes. I love that you guys are sharing that out more broadly to help everybody solve that problem. So is the risk higher as folks move from using mainstream LLMs run by Google or Microsoft or ChatGPT, over to third-party implementations, especially as you're talking about MCP servers and things like that? Do you see that level of risk change, or is it different kinds of risks that you're looking at?
Idan Gour:
I think that it depends. When you're in a more protected environment, if the environment is built correctly, there are fewer chances of you making mistakes: connecting to something you should not have, choosing the wrong LLM model, or, with the versioning of MCP servers, one of them having some kind of vulnerability and never being updated. That's the whole idea of SaaS, right? You're getting the latest and greatest, and somebody is taking responsibility for all the areas you would otherwise need to maintain, making sure they're up to date and secure and so on.
On the other side, in many of these things, and that's part of the problem we see with SaaS in general, you have very limited options in terms of controls. So if the SaaS provider that is providing you an AI agent builder platform, or a company like Salesforce that provides you a CRM, hasn't given you the capabilities or the technology to better control what you are building and what you are doing, you're stuck. So I think there is an interesting balance between what you can be provided with by a third-party vendor versus the things that you can build on your own in cloud environments.
Our perspective currently is that, with the level of support you have for different controls, there are many, many more things you can do and instrument while you are building on your own, in order to control the current adoption and the risks that come with the rapid adoption of technology. On the other hand, to some extent, sometimes it doesn't matter: the business will still do whatever it needs to do and will enjoy the technology it wants to have, and you have to find a way, which is also what we are doing. We need to find a way to help them, to provide the best visibility, but also in some cases to provide some advice and to help our customers go and have a conversation with a specific third-party vendor that they like in their organization. To tell them, "Hey, we like you. We adopted you on an enterprise level, but you are lacking a lot from a security standpoint. And these are the items that we think you should do."
And we regularly provide advisory to our customers. Some of them are the companies behind these AI agent platforms, so we're hoping to improve our industry as well.
John Richards:
Now, are you seeing a rise in folks targeting this? Is it new enough that you can get away without worrying too much about security yet? Or are you already like, "Oh, this is a big risk if I'm not ahead of the game here?"
Idan Gour:
So I think it depends on the size of the company. Some companies are able to get away with this for a longer period of time. But when you look at the leading vendors, the ones that really want to serve the enterprise as well, they recognize the fact that they need to be able to support their customers. That's why OpenAI has their compliance API. That's why Anthropic lists their compliance API; others have some kind of version of this. There is no standardization, but I think it will happen eventually, because it's needed. It unlocks the ability for the enterprise to adopt purposefully. And on another level, this has always been a conversation with single sign-on, for instance, and the adoption of single sign-on: it's kind of this tax that you pay for security features as part of the enterprise license.
So many of the security features across SaaS solutions were pushed into the enterprise license. There is a famous webpage called, I think, the SSO Wall of Shame, which shows the cost difference between having SSO and not having SSO, that is, between the commercial license and the enterprise license. I think it's going to be the same over here. It will allow vendors to charge a premium in order to secure the enterprise. And we'll see the friction that it may or may not create over time.
John Richards:
Is there a certain framework or maybe philosophy that you feel folks should have as they approach securing these agents? It's a lot to tackle. So what should they be thinking about as they go in there to implement one?
Idan Gour:
First of all, there isn't one framework to think about it through. There's some work around that: others are building frameworks meant to provide a better basis for analysis and understanding, and so on. But we try to think about it more from first principles. So recently we introduced a kind of tagline: discover, secure, and deploy. So start with discovery: you start by understanding what you have, what you may have, how it looks, what the context is, and so on. Then you identify the risks you may have, and you start to secure those specific risks and to put policy and enforcement in place. And the last aspect is to think about the deployment phase, about how to do things right from the start as you implement in specific areas, and also to be mindful of the fact that maybe not all agents are the same.
There are agents that are risky and there are agents that are not so risky. And maybe at the beginning of the journey, something can live with a specific level of control, because that's the level of risk that you have. But once it moves outside of that, once it becomes more privileged, once it's doing something that is more important to the business, you want to increase the level of controls. You want to increase the level of monitoring; you want to instrument it in a different way, because it's becoming critical for your business operations, right? So move it up to a level where you can deploy it in a proper way, control it in a proper way, and monitor it, security-wise and operations-wise. Treat the agent as it should be treated, based on the importance it brings to your organization and the risk it puts on your organization.
So these are the two principles we think about: discover, secure, deploy, and treating agents as they should be treated, rather than treating all of them the same. We also don't believe it's practical to treat everything the same with, let's say, the utmost controls that you could have.
John Richards:
Makes sense. Now, are there areas where you either recommend, or you see a lot of folks implementing this first, where they can get familiar with it and feel comfortable before they go to broader adoption?
Idan Gour:
So what we see is that organizations are definitely adopting earlier across the workforce, with technologies that carry lower levels of risk. I think that with greater bottom-up adoption, the organization is also willing to take more risk in some of these workforce areas. Believe it or not, that is what we see. And I think it helps people become more oriented around the gains, around the operational way of working, around the security challenges as well, and around the potential value that is there. So we definitely see organizations adopting faster in the workforce. And then we see it starting to transition into the products, the real digital estate of these organizations, what they provide to their own customers, whether it's a B2B or a B2C business.
And then it's more of an architecture discussion. There is more conversation about the different change management processes that need to happen. There are more controls that are expected. There are more concepts from identity, for instance just-in-time access or just-enough access. More and more controls, more thoughtfulness around these types of areas, when you are moving from the workforce to more of the products, I would say, of the organization.
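The just-in-time and just-enough access concepts mentioned above can be sketched as short-lived, narrowly scoped tokens. A minimal illustration, assuming invented scope names and an arbitrary TTL, not any specific vendor's API:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ShortLivedToken:
    value: str
    scopes: frozenset
    expires_at: float

    def valid_for(self, scope: str) -> bool:
        # Just enough: only granted scopes pass. Just in time: the
        # token stops working once the TTL elapses.
        return scope in self.scopes and time.time() < self.expires_at

def issue(scopes: set, ttl_seconds: float = 300.0) -> ShortLivedToken:
    """Mint a token on demand with only the requested scopes and a
    short TTL, limiting the blast radius if it ever leaks."""
    return ShortLivedToken(
        value=secrets.token_urlsafe(16),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

tok = issue({"crm:read"}, ttl_seconds=60)
print(tok.valid_for("crm:read"))   # True while unexpired
print(tok.valid_for("crm:write"))  # False: scope was never granted
```

The contrast with a long-lived hard-coded credential is the whole point: a leaked short-lived token expires on its own, and it never carried scopes the agent did not need.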
John Richards:
Now, when we were talking a little before this, you mentioned how Astrix is really focused on securing the backend of these agents and their access. What's the danger, or what are the threats you see out there, for people who aren't taking care of that security?
Idan Gour:
The most concerning thing, potentially, and again, not to chase ambulances, is that you really don't know. If you have no idea what interface people have with the agents, if you have no idea how agents may be accessing your resources, how they're accessing different memories, how agents are interacting with other agents or other services within your organization; if you really have no idea what's happening with what they're accessing and what they're able to do, then you are no longer familiar with the infrastructure you have in your organization, what kind of gains you are getting, or what kind of security challenges you're creating by granting access to the different organizational resources that you may have.
And I think that not knowing is probably the biggest risk, because if something happens, then it really comes out of the blue; it's not managed. You are not seeing the stack that you may need in order to secure these technologies. You're not choosing the technologies. And also, from an efficiency standpoint, you don't know how much of your organization's resources are actually being utilized to unlock the potential of agentic AI. So from all of those perspectives, especially when you're thinking about it at a CISO level, a board level, or the level of the chief digital officer or the chief privacy officer on the other side, it's really the ability to see across the organization and to know: what do we have? What do we do about what we have? How do we make sure that people are enjoying it? And how do we make sure that we are protecting it while it's happening as well? Not seeing it is a big basic gap, in my view.
John Richards:
Securing all this often makes sense in concept, but reality can clash with that. What are you seeing as ways to bridge the gap between those two things? Say you go into a group and they don't have this in place. What does that maturity path look like to get closer to a secure place? How do you move from having nothing to a mature, safe position?
Idan Gour:
So first of all, I think that in this case, maybe not all roads lead to wrong, but many roads could lead to wrong. But I think a good starting point is, as I mentioned, this discovery, right? It's this exercise of getting to know your stack, getting to know your infrastructure, getting to know the new infrastructure that is being used by people, getting to know the people who are leading these types of technologies across the organization and how they're leveraging them. Having a broader, more contextual understanding of how things are being used across the organization, what the purposes are, and which business units are forward-leaning versus others. If you think about lines of business, for instance, some take things faster than others, sometimes just because of their culture and the type of business they are responsible for.
So this exercise of discovering, of having a first-level understanding of what you have, allows you to understand and prioritize what you need to do looking forward. And then it starts with basic enforcement capabilities and different types of organizational processes. At the end of the day, why do we buy technology at the enterprise? We buy technology at the enterprise because we want to make a change happen in the organization. That's the really meaningful thing that should be happening. So the ability to drive these organizational changes, whether for security or for productivity, against the real opportunity revealed by the discovery exercise, is what we believe is important for the right adoption of agentic AI.
John Richards:
When should they be thinking about engaging Astrix? From the very beginning, or is there a certain level you want to get to and you're like, "Oh, hey, we're now here. This is the right time to really bring this in." How does that look when you're working with different enterprises and companies?
Idan Gour:
We have the opportunity to interact with organizations that, for instance, told us, "Hey, we don't adopt AI agents, no AI agents here." And then we said, "All right. So let's check, right? We're happy to check." And all of a sudden there are hundreds of thousands-
John Richards:
I know.
Idan Gour:
... of them running around starting to do things. "Oh, okay." It's not AI agents making your omelet in the morning or anything like that, but it's definitely LLMs working with access to different resources, and they are there. And another aspect is that organizations sometimes say, "We will get to securing AI agents when we get to the more mature state, the product state." And this is very wrong, actually, in my opinion, because the riskier phase is potentially the phase that nobody is watching, right? The phase where people are exploring. It's great that people are bringing their creativity and imagination into the organization, but they are still exploring, and they don't have training to make sure they do it safely, the kind we built for developers when we were trying to improve application security.
We worked a lot; we put champions in different places, in different teams, in order for them to understand the risks and how to think about them. We invested a ton to make these types of groups, these units in the organization, much more security-oriented. And this exploration phase is happening without oversight, without training, without conversation about the potential risks, without an understanding of what you have. That's why it's so risky: it's really a fire burning as is. Again, sometimes great gains, but potentially a lot of risk comes with it. So our approach is to start early, start from the basics, and build it alongside the accelerated adoption that's happening. There's a saying that "If you're on time, you are late," about many, many topics.
But it's the same thing over here. When you finally get to it, when you feel that you're ready, that you really need security now, it takes time, right? It takes time to choose the security vendor. It takes time to deploy. It takes time to write organizational policy. It takes time to onboard people in the organization, to have the discussions about the architecture, about deployment, about operations, to onboard the stack, and so on. Everything takes time in an enterprise. So if everything is moving at an accelerated pace, and there's a lot of pressure from the board to adopt AI agents, which is probably not going to disappear, and it's a transformational change happening to organizations, we believe it's right to think about it from a security standpoint earlier.
John Richards:
I like how you frame that. And my takeaway is: if you're not being proactive, you're already behind, because you don't know who else in the company has already started to try this stuff out. As in your example, where they discovered, "Oh, we've actually got hundreds of these out here already in use." It's not surprising to hear that, but it was shocking in a way, the size of the scale. I expected some, because I know so many people are trying it, but it does make you sit up and go, "Oh, we'd better get on top of this or else it's going to be too late. We're going to be playing catch-up." And as you mentioned, if you don't know what's there, who knows what's at risk.
So, man, Idan, you've left us with some really challenging facts and situations. I'm glad you all are out there working to help make this easier, because it is very complex. You mentioned the tie-in between what you were doing before, ahead of time, with non-human identities, and where we are now; it feels like some of the same challenges, just a lot harder to solve, a lot more complex. So I'm glad you all are doing that. With that, tell folks a little bit about how to get hold of you, what's next for Astrix, and what they should be paying attention to. Definitely check out what they're doing.
Idan Gour:
We are everywhere that you could possibly try to find us. We have our website, astrix.security. We have great blogs and resources over there that we are happy to share with the community, along with some explanation of the product. We're on LinkedIn; we try to share the latest and greatest over there. I'm on LinkedIn too. Anyone can find me, Idan Gour, and connect with me over there. And we are available. That's true for our customers, it's true for people who just want to understand and learn more about this space, and it's true for how we think about our community in general.
John Richards:
Well, thank you so much, Idan. I appreciate it. This has been informative. We'll have links in the show notes. Definitely also check out their open source wrapper for MCP to help with credentials; it's a huge issue. So again, thank you all for that. And thanks, everybody, for watching. Appreciate you coming on here, Idan.
Idan Gour:
Thank you. Thank you, John. Thank you for having me.
John Richards:
This podcast is made possible by CyberProof, a leading managed security services provider, helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. From proactive threat hunting to manage detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com.
Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of True Story FM, audio engineering by Andy Nelson, music by Ahmed Seghi. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Take a moment and leave a like and review; it helps us get the word out. We'll be back January 14th right here on Cyber Sentries.