Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.
John Richards:
Welcome to Cyber Sentries from CyberProof on TruStory FM. I'm your host, John Richards. Here, we explore the transformative potential of AI for cloud security. This episode is brought to you by CyberProof, a leading managed security services provider. Learn more at cyberproof.com. On this episode, I'm joined by Walter Haydock, founder of StackAware, and a leading voice in AI governance, risk, and compliance. We dive into the fragmented landscape of today's AI legislation and the real challenges organizations face in trying to keep up. Walter shares perspectives on solutions that balance protection with innovation, and we unpack the risks for organizations without clear policies in place.
We also examine the growing problem of shadow AI, where employees adopt AI tools outside official channels, and what enterprises can do to get ahead of it. Let's get into the conversation. Hello everyone, and welcome to Cyber Sentries. I'm excited today to be speaking to Walter Haydock, founder and CEO of StackAware. Welcome, Walter.
Walter Haydock:
John, thanks a lot for having me on the show.
John Richards:
Well, I'm so glad to have you here. Now, my understanding is you've been doing a ton in the AI cybersecurity and compliance arena. I'd love to hear how you got into that. You were a founder of StackAware. What was your journey to get to that kind of spot?
Walter Haydock:
So, I spent the bulk of my career in the US government. I was a Marine Corps intelligence and reconnaissance officer. I served on Capitol Hill, I worked in the intelligence community, and then I also worked in the private sector for a bunch of other companies after my time in government. And I thought I had completely understood chaos and uncertainty. But with the advent of new generative AI technologies, I've seen whole industries upset. I've seen risk paradigms being challenged and questioned, and folks feeling like they might be starting from scratch when it comes to cybersecurity, compliance, and privacy.
I founded StackAware because I think we can apply those time-honored tools of risk management to new and emerging technologies, to help harness the benefits while at the same time managing the risk.
John Richards:
Now you mentioned seeing whole industries upset. What are the areas where you see the most chaos, the biggest changes and impact of AI right now?
Walter Haydock:
I work a lot in the healthcare space, and that's one area where there is both high risk and high reward for deploying artificial intelligence. So, this has created a weird situation where companies, on the one hand, are enthusiastic about deploying AI to reduce costs and to improve care for patients. But on the other hand, they're very worried about things like data leakage. They're worried about offending people, about delivering incorrect diagnoses. So, that is one industry where I've seen the impacts of AI be the most severe, or most pressing.
John Richards:
So, how do you go about helping folks in that spot? What does it look like to bring some order to that chaos, or is it just a wait and see, or are there things right now that people can do to begin to try and utilize this without crossing either security or ethical boundaries?
Walter Haydock:
There are definitely things that companies can do right now, whether they work with StackAware or not, to help manage the risk. I recommend three steps to begin with. First of all, develop an AI policy that clearly states what's acceptable and what's not. Once that's in place, then do a thorough inventory of all the tools, systems, and models that your company is using and determine whether those fit within the risk appetite established by your policy or not. And then the third step, which is related to the inventory, is looking at the cybersecurity, compliance, and privacy implications of those systems.
Making sure that you are protecting data in a way that your customers or your other stakeholders expect, ensuring that you're complying with applicable regulations and that you're treating people's data with the respect that they anticipate.
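To make that second step concrete, here is a minimal sketch of what checking an AI inventory against a policy's risk appetite might look like; the field names, data classifications, and policy rules are illustrative assumptions, not StackAware's actual methodology.

```python
# Hypothetical sketch of an AI asset inventory checked against a policy's
# risk appetite. Field names, data classifications, and rules are assumptions
# for illustration only.
from dataclasses import dataclass

# What the (hypothetical) AI policy allows employees to send to third-party AI.
POLICY_ALLOWS = {
    "public": True,
    "internal": True,
    "confidential": False,
    "regulated": False,  # e.g., PHI in a healthcare context
}

@dataclass
class AIAsset:
    name: str                # tool, system, or model in use
    vendor: str
    data_classes: list[str]  # classifications of data it touches
    trains_on_inputs: bool   # does the vendor train models on submitted data?

def within_risk_appetite(asset: AIAsset) -> bool:
    """Return True if the asset fits the policy's (hypothetical) risk appetite."""
    for dc in asset.data_classes:
        if not POLICY_ALLOWS.get(dc, False):
            return False  # this data class is not allowed in third-party AI at all
        if dc != "public" and asset.trains_on_inputs:
            return False  # non-public data should not train a vendor's models
    return True

inventory = [
    AIAsset("Enterprise chatbot", "ExampleVendorA", ["internal"], trains_on_inputs=False),
    AIAsset("Free note summarizer", "ExampleVendorB", ["confidential"], trains_on_inputs=True),
]

for asset in inventory:
    status = "OK" if within_risk_appetite(asset) else "REVIEW"
    print(f"{status}: {asset.name} ({asset.vendor})")
```

The point is simply that each tool, system, or model gets recorded alongside the data it touches and compared against what the policy allows.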
John Richards:
So, it sounds like important fundamentals. How does StackAware fit into that puzzle? Is it assisting with making those choices, or is it another layer on top?
Walter Haydock:
StackAware is a very high-touch services firm. We go into our customer environments and work very closely with them, the security team, the compliance team, the legal team, the business teams, to identify their business priorities, and then contrast those against their limitations when it comes to data security, regulatory obligations, and any customer contractual requirements, and then help those customers achieve their business goals in a way that optimizes their risk. So, I didn't say eliminate risk, because that's not possible unless you shut down your business, but we help them optimize risk based on the constraints that they have.
John Richards:
So, if you're an enterprise out there, you should always be taking these steps, but if you're confused or unsure of the best way to tackle this, then you might have a group like StackAware come in and assist you in that journey of putting together the policy, reviewing your business, and putting all those pieces together. Is that a fair assessment?
Walter Haydock:
Exactly. StackAware works with clients in three distinct sprints. The first one is an assessment of their systems and models and the impacts and risks resulting from those. The next step is building a governance program, using things like a policy, developing actionable procedures, and looking at other parts of the business to make sure they're integrated correctly. And then the third phase, for those who are interested, would be pursuing ISO 42001 readiness. That's a globally recognized AI governance standard that helps companies manage risk in a systematic manner, and we assist organizations preparing for certification under that standard.
John Richards:
Can you share a little bit about that standard and what are the big challenges that organizations tend to hit as they try to adopt this and have this readiness?
Walter Haydock:
ISO 42001 is a standard published by the International Organization for Standardization, and it provides a roadmap for building what it describes as an AI management system. That is a method for managing risk and developing policies and procedures for using AI securely. It's quite similar in structure to ISO 27001, which gives you a roadmap for building an information security management system, and it integrates nicely with that standard. So, ISO 42001 is a key way to develop a responsible AI governance program. Happy to talk about the challenges that companies face when they're trying to implement it as well.
So, some of the key challenges that companies face when they're implementing ISO 42001 would be, first of all, not aligning with the business when they're trying to roll out an AI governance program. I think the ISO standard is good in that it acknowledges business considerations as being very important to developing your AI management system. So, the security or compliance team should not try to develop metrics, for example, that are completely separate from the goals of the organization. They should develop metrics that they're tracking that are aligned to the business objectives. Using AI to reduce manual tasks, for example, is something that benefits both the business and the governance program.
That could certainly reduce error rates, and it could reduce the expense incurred by certain tasks. So, that's the type of metric that I've found helpful to index on, whereas indexing on other metrics that might not be as business-oriented could potentially make it difficult to convince the key stakeholders who need to take action.
John Richards:
That makes sense. Yeah, you want to align it to something that brings value. And I have talked to companies that are like, "We just want to build general AI awareness, so we're just going to open this up and run it, and everybody just try it out, see what happens." What are your thoughts on folks who say, "Let's try it out and maybe we'll put policies in later. For now, everybody go use this, do what you want"? I mean, I say that, but the people making that choice obviously aren't in healthcare. So, the answer is probably very different if you were in healthcare, where that's actually a huge risk.
Walter Haydock:
I think you hit on an important point. Understanding your use case is key for driving or setting your risk appetite. If you were building an AI-powered meme generator that's accessible to the public and doesn't have any sensitive data, then you might reasonably have a higher risk appetite than a healthcare organization, as you also mentioned. So, understanding your use case is key to setting an appropriate risk appetite. Nonetheless, irrespective of circumstance, having a clear set of acceptable and unacceptable things that employees can do is really critical in any company. And you might have different restrictions in different scenarios, but making those clear is really important.
John Richards:
And what usually drives that? Is it looking at existing regulation like HIPAA or FERPA or things like this? Or are there other things out there that are driving the choices on what you allow employees to do, and where you're saying, "Hey, we don't want to use this for that, or even build an application that does this"?
Walter Haydock:
There are a couple of sources of guidance when it comes to setting the limits on what employees can and can't do. In one bucket, I would put the traditional compliance and regulatory frameworks, whether those are voluntary standards like the NIST Cybersecurity Framework, certifiable frameworks like ISO 27001, or mandatory government regulations like HIPAA, which you mentioned. Those are all applicable and will drive the development of an AI governance program. At the same time, there's another bucket of AI-specific guidance and regulations that companies also need to look at. So, in the AI space, even jurisdictions like New York City have put in place AI-specific regulations.
California has another regulation that's coming into effect in October. Colorado passed a groundbreaking law in 2024 that they have unfortunately been trying, unsuccessfully, to amend several times since then. And there's quite a bit of change when it comes to the AI regulatory landscape.
John Richards:
So, you mentioned Colorado. What can we learn from the folks doing this, that moment of, "Oh, hey, maybe this isn't the way to do it, because we need to change it for these reasons"? What are the sticking points that we should be watching out for as organizations figure out how to navigate that?
Walter Haydock:
Colorado has taken a first-in-the-nation approach to regulating artificial intelligence comprehensively at the state level. And it's pretty clear that they took a lot of inspiration from the European Union AI Act in creating the Colorado Artificial Intelligence Act, or SB205. It is primarily an anti-discrimination law, which I think everyone can get on board with, but it puts in place a set of very heavyweight requirements on companies that are using artificial intelligence, and it doesn't carve out smaller companies. It does have some exceptions, but it is a pretty broadly applicable regulation.
So, it can touch a lot of different companies in a lot of different areas, and some of the wording is a little bit unclear, which was actually raised by a Colorado state force. So, the combination of the heavy touch plus the uncertainty is what's caused most of the problems here.
John Richards:
You mentioned unclear. There's an important piece in this regulatory story, if you will: if you don't have anything, people are worried, because you never know when regulation will come in. So, you want some level of regulation, and you want it to be clear, but you also don't want it so onerous that it stops innovation and moving forward. And I know this is part of the push between maybe the United States' more lax regulatory environment versus the European one that you've mentioned. I know your focus is primarily here in the US market, but are you seeing any differences in how this gets tackled by different groups based on the regulatory environments they're in?
Walter Haydock:
I think Europe has taken an approach of trying to become a regulatory superpower, which has some benefits, but it also has some drawbacks. We're seeing European economic growth slowing compared to the US over the past couple of decades, and they have created almost a monster that they can't control in the EU AI Act. And I mean, you can laugh. It's not funny for companies that need to comply, but the government has failed to meet its own deadlines for providing guidance on certain topics. So, for example, there's a general-purpose AI code of practice that the European Commission was supposed to approve earlier in the summer, and they still haven't approved it.
So, there's not even a formally approved set of guidelines for those general-purpose AI companies out there. And then we're also seeing that the harmonized standards, a European Union term for essentially a set of rules that, if you follow them, give you a presumption of conformity, or a safe harbor in essence, are not going to be finalized until the end of this year, which is already a delay. And the high-risk requirements of the law come into effect in August of 2026. So, companies will have seven months from when these standards get dropped at the end of the year, and in Europe, yeah, good luck getting anything done in the last two weeks of the year.
So, they'll have seven months to essentially develop a conformance system for a standard that no one even knows what it looks like yet.
John Richards:
Yeah, that's scary, to get back to that point of if you don't know, or it's confusing, you're like, "Well, I don't know if my model will hold up under this or not." On the other side, are there things that aren't regulated yet but that you find yourself telling these enterprises, "Hey, this is a best practice that you should be adopting here"? Mostly security concerns, I guess. What are the areas that we need to be most aware of right now when we're considering the security ramifications of bringing on AI?
Walter Haydock:
One thing that is not traditionally thought of as a security team responsibility, but is necessarily becoming one because of the nature of AI as a technology, is intellectual property risk. And there are many dimensions to this issue. There's the trade secret issue, whereby companies could potentially undermine their ability to protect their trade secrets, or even expose them outright, by allowing their employees to train third-party AI systems using that information. And Amazon, for example, likely got burned by this in the early days of ChatGPT, where ChatGPT was reproducing a lot of very accurate information about Amazon's internal infrastructure. So, trade secret protection is one big aspect.
Another big aspect, and I'm not an attorney so don't look to me for legal advice, is the copyright side of things. We saw recently, earlier this week, a settlement between Anthropic and a group of authors. The authors were alleging that Anthropic was infringing on their copyrights by training its models on their works, which in some cases, allegedly according to the suit, were also illegally downloaded. So, the landscape for intellectual property risk is very unclear right now, and it doesn't seem like Congress is going to step in and say yea or nay one way or the other. So, unfortunately, it's going to get battled out in the courts.
So, companies can be in a strange position where they don't actually know what they're supposed to be doing. My recommendation is: go back to your use case, go back to your risk appetite, look at your business requirements. Is what we're doing here in accordance with our risk appetite? And I actually suggest five different levels of risk when it comes to IP that companies can use as guidance for telling their employees what to do.
John Richards:
Oh, wow. Yeah, thank you for bringing up that topic, because that is a huge new area. Which makes me want to ask: as folks are tackling this, how do they handle their risk appetite? How do they manage all this risk? Do they need to throw out everything they've been doing before and start new, or is there something we can learn from the past, from how we've handled new technological developments, and still apply those lessons?
Walter Haydock:
The classic four risk management techniques of accept, avoid, mitigate, and transfer still apply, and those are the only four ways that you can deal with any sort of risk, including AI. Using those as your foundation for risk management is an effective way to stay on top of emerging technologies like AI, but also pre-existing risks as well. So, my recommendation to my clients is: don't create a separate AI risk register. Don't create separate AI policies and procedures. Integrate those into your existing systems, into your existing processes.
The only exception would be an AI policy itself, but don't create something that stands alone and has a different scope or different applicability than your existing governance infrastructure.
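As a rough illustration of that advice, here is a minimal sketch of AI risks folded into an existing risk register and each assigned one of the four classic treatments; the entries, owners, and structure are hypothetical examples, not any client's actual register.

```python
# Hypothetical sketch: AI risks recorded in the same risk register as everything
# else, each assigned one of the four classic treatments. Entries and owners are
# illustrative only.
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    AVOID = "avoid"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"

risk_register = [
    # Pre-existing, non-AI risk stays where it always was
    {"id": "R-014", "risk": "Ransomware on file servers",
     "treatment": Treatment.MITIGATE, "owner": "CISO"},
    # AI risks live in the same register, same process, same accountability
    {"id": "R-057", "risk": "Employee pastes trade secrets into a public chatbot",
     "treatment": Treatment.MITIGATE, "owner": "CISO"},
    {"id": "R-058", "risk": "Copyright claims over AI-generated marketing copy",
     "treatment": Treatment.TRANSFER, "owner": "Legal"},  # e.g., via vendor indemnification
    {"id": "R-059", "risk": "Public-data-only internal experimentation",
     "treatment": Treatment.ACCEPT, "owner": "IT"},
]

for entry in risk_register:
    print(f'{entry["id"]}: {entry["risk"]} -> {entry["treatment"].value} ({entry["owner"]})')
```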
John Richards:
And are you seeing most or all of this still falling under the existing security suites that are out there, or do you see new groups forming that are helming the AI wing, or the AI security wing, at organizations?
Walter Haydock:
Who should own AI governance is a very interesting question. By default, it's almost always CISOs who have been getting tasked with that responsibility. That doesn't mean it necessarily has to be that way. It could be legal, compliance, privacy, or data science, or in some organizations I have seen dedicated AI governance personnel. Those are at bigger organizations with more resources, but it might make sense to have someone who is a specialist in this area look cross-functionally across all of the risks. The key thing is to have designated accountability for every risk and also for managing the risk process. Making clear who's in charge of what is critical, no matter who you assign it to.
John Richards:
Are employees, I guess, or the people at these organizations, going along with these policies, or do you see a lot of pushback? I've talked to folks that have tried to do some level of this, and people are like, "Well, I'm just accessing AI on my own devices now." Almost like shadow IT, but it's shadow AI, if you will.
Walter Haydock:
Shadow AI is a major risk. There's one survey I saw that said 8% of employees at organizations banning the use of ChatGPT still used it anyway. So, if 8% of people are telling a survey administrator that they're using it anyway, the real number is likely far higher than that.
John Richards:
Yeah, no kidding.
Walter Haydock:
Shadow AI is a major risk. My recommendations would be, first of all, to have a flexible process for approving new tools, so that people who feel under the gun aren't just using new tools to get their job done without being sure about the security implications. Having a process that is responsive for approving or denying new tools is really key there. And then the second piece is education. Explaining to people why they should or should not do certain things really helps justify it. And if people understand why they should be doing something, it makes it a lot easier to enforce that policy.
John Richards:
Yeah, that makes sense. Well, education is always a key component of security. Around that, how do you see successful enterprises out there educating and informing large groups of people downstream about the proper way to do this? Have you seen any techniques that are working really well?
Walter Haydock:
Using case studies and specific examples for specific tools and systems, I think, is really helpful. So, instead of saying "don't infringe on intellectual property rights" or "don't provide sensitive data to AI," which is very difficult to action in real life, say: this classification of data is authorized in this system; this classification of data is not authorized for this model. Being clear about that makes it much more concrete for people, and then they won't have this gray area they need to contend with when they're about to hit enter on their desktop.
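Here is a minimal sketch of how that kind of concrete guidance could be expressed as a simple classification-to-tool matrix; the tool names and data classifications are hypothetical examples, not a real policy.

```python
# Hypothetical sketch of a data-classification matrix for AI tools: which
# classifications of data are authorized in which system. Tool names and
# classes are illustrative, not a real policy.
AUTHORIZED = {
    "Enterprise copilot (no training on inputs)": {"public", "internal", "confidential"},
    "Public free-tier chatbot": {"public"},
    "In-house fine-tuned model": {"public", "internal", "confidential", "regulated"},
}

def may_submit(tool: str, data_class: str) -> bool:
    """The question an employee faces right before hitting enter."""
    return data_class in AUTHORIZED.get(tool, set())

print(may_submit("Public free-tier chatbot", "confidential"))  # False
print(may_submit("In-house fine-tuned model", "regulated"))    # True
```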
John Richards:
Yeah, I mean, it does feel like, I guess partly because AI feels like it's touching every part of our lives, that it's touching every part within an organization: your data is impacted, your output is impacted, the network is impacted as these agents maybe start to communicate with one another. So, it is a really broad, branching thing. Are most organizations tackling this at a company-wide level, or do you see folks trying to take a more siloed approach, like "this is the only group that's going to use this, so we're just going to tackle it here within IT," for instance?
Walter Haydock:
Ninety percent of organizations are deer in the headlights right now when it comes to AI. They have no idea how to apply it from a business perspective, let alone manage the risk of using it. The top 10% in terms of sophistication, those are the people that I generally work with, because those are the ones who understand they have an issue to contend with. There's a business need to use AI. At the same time, there are internal stakeholders who understand that there are risks which need management. So, that is generally how I'm seeing the landscape. Eventually, AI will become like cloud technology, where basically everybody's using it to one extent or another, but we are not there yet.
So, there is quite a divide between the most and least sophisticated.
John Richards:
Yeah, I guess it makes sense that it's the folks already dealing with heavy regulatory environments and large risk. Anytime you look at the reports on security breaches, the most costly tend to be in spaces like healthcare, government, or banking, so they're the first early adopters because they already know and have an understanding of the risk. What do you think for smaller groups, or people in other industries and sectors? Where do you think they'll get most impacted if they're not putting in these policies or paying attention to the risk? Where do you think the folks who say, "Okay, I don't have to worry about HIPAA compliance, I'm not in healthcare," are going to end up going astray if they're in that 90-some percent that you're talking about?
Walter Haydock:
The three main areas companies are concerned about with AI use are: one, data leakage and potential trade secret loss; two, intellectual property infringement; and three, which is applicable to the broadest number of companies, reputation management. So, if you're using a chatbot to help customers, that is a pretty big reputation risk surface, because prompt engineering or prompt injection against those bots could potentially reveal sensitive data if you've not constructed your system correctly. And jailbreaking of those applications could lead to embarrassing screenshots being taken.
It could even lead to people asserting that the chatbot entered into a contract that you didn't necessarily want to enter into. In Canada, Air Canada was in a lawsuit with a customer who said, "Your chatbot told me this, but you denied my claim." And Air Canada said, "Oh yeah, don't worry about the chatbot. That doesn't speak for us." But a judge said, "Yes, it does speak for you." So, there is big reputation and contractual risk for a lot of those companies out there.
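As one illustration of reducing that risk surface, here is a minimal sketch of a guardrail that screens chatbot replies for language that could read as a commitment before a customer sees it; the patterns and escalation path are assumptions, and a check like this is only one small layer, not a complete defense against prompt injection or jailbreaking.

```python
# Illustrative guardrail: screen chatbot replies for language that could read
# as a binding commitment before showing them to a customer. The patterns and
# escalation path are assumptions, and this is one small layer, not a complete
# defense against prompt injection or jailbreaking.
import re

COMMITMENT_PATTERNS = [
    r"\bwe (guarantee|promise)\b",
    r"\byou (are|will be) (refunded|reimbursed)\b",
    r"\bdiscount of \d+%",
]

def screen_reply(reply: str) -> str:
    """Route replies that sound like commitments to a human agent for review."""
    for pattern in COMMITMENT_PATTERNS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return "This request has been escalated to a human agent for review."
    return reply

print(screen_reply("You will be refunded under our bereavement policy."))
```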
John Richards:
Yeah, I guess I'm reminded of the Grok incidents not too long ago, where things went awry and it got hijacked, almost, if you will. So, yeah, there is a huge reputation impact, and you can't assume that if you're not paying attention to this thing, it'll all just work out. There are plenty of folks out there trying to... I mean, there are folks just trying to play around with it. And if you're not paying attention or not managing that risk, you'll run into issues there for sure. Let's say you were put in a position of, "Hey, here's a magic wand. You get to create the next piece of regulation around this," let's say at the US level. What would you be thinking of there?
I know that's a heavy ask, but is there anything that you would like to see that you think, "Hey, this is what we would need to both make sure we're safe, but also maybe promote innovation"? What do you think is maybe coming down next?
Walter Haydock:
I would love to see a nationwide AI governance law that preempts the states and applies a light touch from a regulatory perspective, banning certain practices that most people would agree are not acceptable, such as anything to do with child abuse or any malicious misrepresentation of people. But aside from that, I would recommend a lighter-touch approach at the federal level. And then over time, the government or Congress could increase the level of regulation if there are certain use cases that are especially damaging.
So, that would allow for an incremental approach and prevent the problem that once you create a regulation, it's just so difficult to unwind it, as Colorado found out in August when they were trying to essentially undo parts of SB205, which were already going to come into effect a few months down the line. They only succeeded in extending the implementation deadline; they did not manage to amend the bill in any substantive way.
John Richards:
Yeah, I like that approach too, the value of handling it at the national level. I hadn't really thought about that, but I'm in some AI-related groups that are already like, "Hey, we have to ban this." In those cases, it's more that countries have passed things, but in my mind now, I'm thinking, "Oh, what do you do if you start at the state level and suddenly this application can only be used in these seven places and not in these other states?" However that lands, having some certainty that we're all going to be adhering to the same premise would be helpful, versus each state having its own individual interpretation of what it means there. So, I really like that approach. Thank you.
Walter Haydock:
We're already seeing exactly what you mentioned. For example, there's an HR tool that I have evaluated which disables its AI functionality for anyone in the five boroughs of New York City specifically-
John Richards:
Oh, wow.
Walter Haydock:
... in order to avoid being regulated under New York City Local Law 144, and that is only going to increase.
John Richards:
Yeah. And that will become a nightmare, and that's at the city level, let alone the state level. Wow. Thank you so much, Walter, for walking through this. This has been so informative. I appreciate it. I would love for you to give a shout-out here about StackAware. For folks that want to know more about this, or maybe want to enlist your organization to come in and help them, what's the best way they can learn more and connect with you?
Walter Haydock:
Absolutely. Thanks, John. So, StackAware helps AI-powered companies measure and manage cybersecurity, compliance, and privacy risk, primarily through ISO 42001 readiness and implementation. If you're interested in learning more, please look me up on LinkedIn. I'm the only Walter Haydock out there. My father is not on LinkedIn to my knowledge, and I post once or twice a day on AI governance and security topics. So, give me a follow, and I look forward to seeing you out there.
John Richards:
We'll definitely make sure we have that link in the show notes. Thank you for making yourself available. Folks, check out StackAware, check out Walter and what he's doing. This has been really informative. Thank you so much for your time. I appreciate you coming on the podcast.
Walter Haydock:
Thank you, John. Thanks for having me.
John Richards:
This podcast is made possible by CyberProof, a leading managed security services provider, helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. From proactive threat hunting to managed detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com. Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio engineering by Andy Nelson. Music by Amit Sagi. You can find all the links in the show notes.
We appreciate you downloading and listening to this show. Take a moment and leave a like and review. It really helps us to get the word out. We'll be back November 12th right here on Cyber Sentries.