Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.
John Richards:
Welcome to Cyber Sentries, from CyberProof on TruStory FM. I'm your host, John Richards. Here we explore the transformative potential of AI for cloud security. This episode is brought to you by CyberProof, a leading managed security services provider. Paladin Cloud is now part of CyberProof's expanded cloud security capabilities. Learn more at cyberproof.com.
On this episode, Sherman Williams, managing partner at AIN Ventures, joins me. We discuss the importance of identifying the right problems to solve with AI, how using AI as a tool should be top of mind for any industry, and new AI threats on the rise. Let's dive in.
Sherman, thank you so much for coming on the episode today, it's so nice to have you here.
Sherman Williams:
Yeah, thank you. Happy to be here.
John Richards:
Before we dive in, I would love to hear about your background. I know you're investing and reviewing a lot of companies in the AI space now, so how did you get to that spot? I saw on your profile you started out as a veteran, and now you're tackling this area and looking into such a new technology. What's your journey been to get here?
Sherman Williams:
I would say, bottom line up front, my journey has been circuitous. I think it's been one that's been led by intellectual curiosity, backed up by just a ton of hard work. So I'm a graduate of the US Naval Academy, and my co-founder is a graduate of the US Military Academy at West Point. We came together about five years ago with an idea that in the world of dual-use technology, technology [inaudible 00:01:51] commercial application, it made sense to source the thoughts of a lot of the end users of that equipment, which are typically officers or folks in the military, the intelligence community, or law enforcement.
And so, as graduates of these unique institutions, the academies, we brought together graduates from the other three academies (there are a total of five US military service academies, 100% funded by Congress) to source their thoughts when it comes to performing diligence on companies, but also sourcing companies and providing post-investment support to those companies. So if you think about that through line of sourcing, diligence, post-investment support, the thought process was that a group of academy grads would have a unique perspective.
So we started off with that and did syndicate deals, mainly one-off deals, for a couple of years. And then in 2022, we set out to raise a dedicated investment fund to invest in these dual-use technologies, technologies with both government and commercial application. All of those technologies fall within the critical and emerging technologies list that was put out by the White House several years ago. There have been some slight modifications, but it's largely the same as it originally was. And anyone can google that, the Critical and Emerging Technologies List.
We finished fundraising for our fund in 2024. It took a little bit of time for the first fund, it was a journey, but it was good. And we have been investing out of that fund now for some time; we were investing as we were going. We've got about 20-some-odd companies in the portfolio to date. They range across multiple of the critical and emerging technology areas, from life sciences to space technology to artificial intelligence and machine learning development tools to sustainability tech. We've got a pure cyber company in the portfolio. We look across multiple technologies within the realm of that CET list.
And the key to that list is, those are technologies that the US government, for various purposes, has determined it wants to see. The US government wants to be able to use those technologies, basically. And the US government understands that to best use those technologies, it needs to do so in the context of a company, and a company can deliver that service to the US government. And so the US government will give companies that are building technologies in those specific critical and emerging technology areas non-dilutive funding, and will also potentially be an initial customer of those companies' technological products.
And so what the US government does is look for private investors at some point to come in alongside it, in order to give the company the financial wherewithal to continue to grow so that it can, in a sustained way, provide a service to the government. And so we are that kind of private [inaudible 00:04:56] capital. That's how we think about it. And we invest in companies that are actively engaging with the government at the time of our investment, or we've invested in companies that have no dealings with the government, and then we work to help them have dealings with the government eventually. So we attack it from both sides.
So we have several companies in the portfolio [inaudible 00:05:15] hardly any dealings with the government, but we say, "Hey, eventually [inaudible 00:05:18]." We put our former government hat on and say, "Hey, the government's eventually going to want this if you can actually do it."
John Richards:
That's a cool thing about being in the spot that you are: you're almost like a leading indicator of where the market trends are going. As a consumer, I can see a lot of AI technology coming out, but since you're investing before that really gets fully productized, you're seeing this ahead of time. And obviously as a consumer, I'm not on the government or federal side to see that at all. So I've seen a big proliferation of everybody looking to figure out how to do AI, but what does that look like from the investment space? Does everybody need an AI angle right now? Are you seeing some really unique applications showing up for this? What are the trends you're seeing there?
Sherman Williams:
Technology is a tool. It's an amazing tool in your toolkit for something that you're trying to achieve. AI is a subsegment of technology writ large, and it is simply a tool in a toolkit. And so if you ask somebody, "Do you want to run a more efficient organization that's much more productive and makes more money and has more net profit than you would with some other process?" I think everyone would say yes. "Hey, in order to do that, here's a tool that you need to use. That tool is called AI."
Sometimes people over-hype things or get too excited about things, but I really do think that AI should be seen as another tool in a larger toolkit. And it's a big tool. It's a tool that you may not know how to fully use yet, and there are other iterations of the tool that the company is going to be sending along later. It's almost like a subscription tool, and they're going to send you an improved tool. Next thing you know, you're using one kind of screwdriver to do everything in the house, and it also has a hammer on it, and you can pull out nails, and it can [inaudible 00:07:16]. It also has a remote, I don't know, to turn on the lights. The world's greatest multifunctional tool, Swiss army knife, whatever.
And so AI to me just brings about efficiency and productivity. It increases productivity dramatically, and that's what it's good for, to me. Again, I try not to over-hype it, and there are certain instances where it's not necessary, but I think those areas, depending on what you do, are few and far between, because AI is a tool. To keep with my metaphor, it's a situation where, instead of the company sending you a new tool every year, the company's going to send you a new tool once a month. The improvements are that rapid. And so that's how I treat AI. Should everyone use AI? I think that everyone should think about some form of artificial intelligence if they want to run a more efficient organization, be more productive, and potentially produce greater profit. If you frame it in that way, everyone would agree with that.
John Richards:
Yeah. I feel like I've seen a few things where folks are saying, "My business model is the tool," versus maybe solving a problem that you use that tool to solve. Are you seeing that, where there's a flood of folks that are just like, "Here's AI, let's use that," or are entrepreneurs smart enough to say, "No, I'm still going for the problem, and here's how I use AI in that space"?
Sherman Williams:
So what you're asking is two different things. The people who are just flooding the market with AI are flooding the market with two things. Either AI [inaudible 00:08:57] dev tools, where we mainly invest, which are basically tools for software engineers, or even to bring a technical element to a non-technical team. And when I say for software engineers, it's to help them go home earlier. That's what I think about with AI [inaudible 00:09:10] dev tools: it is a tool to help people go home early. So there's dev tools, and then there's application layer AI.
Application layer AI is, if you're working in a specific industry, I have an AI tool to help you be more efficient, be more productive, and produce greater profits. And so that is more of an application layer tool that's meant to go to an actual company. So are there people who are just producing AI, flooding the market with these application layer tools? I don't necessarily see a flood in the market of dev tools. There are a lot of dev tools out there, and more are coming, but before this there was DevSecOps, and a lot of people were building those kinds of companies. Have I seen an uptick? Yes, but has it been overwhelming for me? Not really. Now, for application layer AI, yes, I've been seeing a ton there.
The problem with application layer AI is that a lot of the large language model companies are constantly building out their portfolio suite, or to keep the metaphor, are continuously adding elements to their tool, and within three to six months of when you start your company, they're able to do what your application layer AI company is able to do. I won't say they destroy your business model, but what they do is bring a tremendous amount of competition to the market. Everyone knows perfect competition creates zero profits. And that large language model company may have an easier way to distribute and get to the potential customer than your smaller application layer AI company.
That's the taxonomy and the mental framework that I use when looking at these companies. With respect to the entrepreneur out in the wild that's building and just trying to solve a problem, it's up to them what they choose. Do they want some dev tool products for their technical team to help them move faster? Do they want an application layer AI product to help their company move faster? They will vote with their feet. I think that because AI brings about a certain level of efficiency and productivity and can increase profits, every single entrepreneur should be looking at embedding AI into their processes, for sure.
But with that being said, when I think about an entrepreneur, I don't think about building a company. I think about an entrepreneur as solving a problem. Those are two different things, actually. And the best companies are solving an acute problem. So you have an acute problem that you're solving for, and per that, you should be bringing in a variety of different AI tools to help you move faster, operate more efficiently, and do so with greater net profits in order to solve that problem. And again, that's how I think about this.
John Richards:
That's a helpful perspective there. I like this analogy. So you also mentioned some DevSecOps there as the previous trend. As you're looking at specifically AI, but you're considering it from a security perspective, I assume there's even more that you have to think about when you're thinking about government contracts, things like that. And there's already this push-pull of public models versus a private or a local model or an open source model. Does that matter when you're looking at what's going to have longevity, especially in these important security areas, or are there key things where you're like, "Hey, if this is a local open source model, we've got a lot more space to move because we don't have to worry about data leakage or things like that"? Is that a factor when you're assessing these companies?
Sherman Williams:
Yeah, so from an AI and cyber standpoint, when I think about what the tool of AI can do, it can take in voluminous amounts of information and start to find normal distribution curves with respect to that large dataset, those data lakes. It can find patterns there, and based off those patterns it can develop confidence intervals and predict what may happen next. If you think about a large language model, what is it fundamentally? A generative pre-trained transformer does next-token prediction in the form of words. And so, from first principles, that's fundamentally what it does.
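To make the next-token idea concrete, here is a minimal sketch of the generation loop. It assumes a toy bigram lookup table standing in for a real transformer, with a made-up vocabulary and made-up logits; the shape of the loop, softmax over a vocabulary, pick the likeliest token, append, repeat, is the same.

```python
# Minimal sketch of greedy next-token prediction, the core loop a GPT-style
# model runs. The "model" is a toy bigram logits table, not a transformer.
import numpy as np

VOCAB = ["the", "threat", "actor", "moved", "laterally", "<eos>"]

# Hypothetical bigram logits: rows = current token, columns = next-token scores.
LOGITS = np.array([
    [0.1, 2.0, 0.3, 0.1, 0.1, 0.0],   # after "the"
    [0.2, 0.1, 3.0, 0.2, 0.1, 0.1],   # after "threat"
    [0.1, 0.1, 0.1, 2.5, 0.3, 0.2],   # after "actor"
    [0.3, 0.1, 0.1, 0.1, 2.8, 0.4],   # after "moved"
    [0.1, 0.1, 0.1, 0.1, 0.1, 3.0],   # after "laterally"
    [0.0, 0.0, 0.0, 0.0, 0.0, 5.0],   # after "<eos>"
])

def softmax(x):
    e = np.exp(x - x.max())           # subtract max for numerical stability
    return e / e.sum()

def generate(start: str, max_tokens: int = 10) -> list[str]:
    tokens = [start]
    while len(tokens) < max_tokens:
        probs = softmax(LOGITS[VOCAB.index(tokens[-1])])
        nxt = VOCAB[int(np.argmax(probs))]   # greedy: take the likeliest token
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate("the")))  # -> "the threat actor moved laterally"
```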
That paper from Google in 2017 that talked about transformers ["Attention Is All You Need"], I can't remember the name of it, was one of the seminal papers in the large language model space. And so is there a place for them, open source versus closed source? Let me hop to that portion of the question, since I've set the scene on what we're thinking through now. As far as how to go about doing this, open source versus closed source, it depends. I think that there are elements, you have RAG, you have other things, where you can leverage a large language model but have your own instance, and have a lot of information that's protected and doesn't necessarily go out into the wild.
Yeah, certain industries, healthcare, government, insurance, anything where you have information you don't necessarily want out in the wild or trained on, like personally identifiable information and all that kind of stuff, or for government [inaudible 00:15:08], things that they deem to be classified. They're going to want something either on-prem or in a secure cloud, which the government has been doing for quite some time. And they're going to want their models to run in that way. So yes, you're going to have that element. But then you're going to have people who just don't need that; it really doesn't matter. Because the government and insurance and healthcare and all that, they're going to have to pay more money for that too. There's going to be a [inaudible 00:15:38]. Now, that cost is coming down, but it's still going to be marginally more than someone who's just using a standard model, for sure.
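For readers who want the mechanics, here is a minimal sketch of the retrieval step behind the RAG pattern described above: keep sensitive documents in your own store, embed them locally, and hand only the most relevant snippets to a model running in your own instance. The embed() function and the document strings are hypothetical stand-ins; a real deployment would use a locally hosted embedding model.

```python
# Minimal RAG retrieval sketch: private documents never leave your own store;
# only retrieved snippets are placed into the prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in: hash characters into a fixed-size unit vector.
    A real system would call a locally hosted embedding model instead."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

PRIVATE_DOCS = [
    "Patient records are stored in the on-prem cluster.",
    "Quarterly claims data must not leave the secure enclave.",
    "Incident 4182: anomalous logins from a retired service account.",
]
DOC_VECS = np.stack([embed(d) for d in PRIVATE_DOCS])

def retrieve(query: str, k: int = 2) -> list[str]:
    sims = DOC_VECS @ embed(query)        # cosine similarity (vectors are unit-norm)
    top = np.argsort(sims)[::-1][:k]      # indices of the k most similar docs
    return [PRIVATE_DOCS[i] for i in top]

context = "\n".join(retrieve("suspicious login activity"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what happened?"
# `prompt` would now go to a model in your own instance or secure cloud.
print(prompt)
```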
Now, where do I fall on open source versus closed source models? That's another question. If you really think about it, you asked several questions in one. Open source versus closed source models, that's a great question. Bottom line up front, I'm more of a believer in open source models. So I'm more of a Yann LeCun believer, what Llama is doing at Meta, than where OpenAI was probably six months or a year ago. OpenAI has since opened up, and I think they're moving to be a little bit more open source. If I'm not mistaken, I think they talked about revealing their model weights. Obviously Llama clearly does that. Famously, DeepSeek in China does that.
I'm much more of an open source person from a model standpoint, but I do think that there are certain industries that are going to want some sort of walled garden for their information, with an entrance to let in information from others. You want to take in information from others and incorporate it, but keep your stuff inside so it can't be trained on. That's how I think about this playing out. And companies that are working on that, enabling you to keep your walled garden with an entrance for other people's information while keeping yours in that garden, and that's how the model will work, I think those companies stand to make a good amount of money for the time being. So that's where I stand on all of that.
But at the end of the day, I do want to say something: if you're an entrepreneur solving a problem, an acute problem ideally, even though it's going to look dire right now, I wouldn't get [inaudible 00:17:45] caught up in what the big language model companies are doing. People get disrupted all the time. Big companies get disrupted all the time. And fundamentally, if you as an entrepreneur understand the problem extremely well, better than anyone else, better than someone sitting in San Francisco... Remember, most of the large language model companies are within 50 square miles of each other in the Bay Area.
John Richards:
That's a good point. I hadn't thought of that.
Sherman Williams:
If you understand what... I don't know. If you have a software company for cattle ranchers or something like that and you sit in Texas, or you have a software company for, I don't know, dock workers, some sort of problem dealing with shipping or whatever, and you happen to sit in the port of Newark or Baltimore or Long Beach or LA or Houston and you understand the problem better than anyone, I think you're going to be able to move faster and be more specific and particular in your product delivery. That means those people are going to be willing to pay you, versus being willing to pay the large language model companies.
And that's what folks need to focus on. Don't get so enamored by the technology; get enamored by "I know this problem better than anyone, and I'm pulling together an amalgamation, a whole range of tools, to answer this problem." Because what you'll see, for a time, is the large language model companies building out tools that subsequently disrupt people's potential companies. But where I think the market will eventually shift, and this is my opinion, is you'll see the large language model companies almost being like an app store, where an entrepreneur that's solving an acute problem in a specific vertical can take a tool from the large language model company and add on some things.
With an app store you can't really do this, but this is a little bit different, and that's how I think open source will win. Entrepreneurs will be able to add on tools that help to specifically meet that problem. They'll take the tools from the large language model companies and add to them in a way that specifically solves the problem in that vertical. And those folks in that vertical, those customers, are willing to pay for that, because otherwise, when they grab the stuff from the large language model company, they've got to adapt it themselves, and they're like, "This is dumb. I'd rather give someone 15, 20 bucks." And that's how my mind thinks.
I'll give you a quick analogy. I like steak, I like to shop, and I like to cook. Sometimes meat prices go up and I'm just like, "Shoot. If I'm paying $30 to $35 a pound for a bone-in ribeye, I might as well go to a restaurant." The steak costs me $50, $60; I'll give you $15 to $20 to cook the steak for me. Okay, that's just like high school, you're giving your buddy some money for gas, whatever. Now, if the steak is on sale and it's $18 to $20 a pound, yeah, I don't want to pay anyone $30 to $40. I'll cook it myself.
And that's the psychological thought process for that customer. So I can go get OpenAI or whatever for $20, or use [inaudible 00:21:17] for $200, but I've got to adapt some stuff, things are always changing, I don't understand what's going on, I've got to have someone full-time to do this. Or I can pay this other company $30 to $40, $50 a month, and they do it for me. I'm going to pay $20 anyway, so what's another $10 or whatever. And maybe the price point is even higher.
John Richards:
It depends on how acute that pain is that you're talking about solving.
Sherman Williams:
I tell everyone, and I'm going off on a tangent here, but pricing is the first-order derivative of the pain that you're solving for. So if you have cancer and I have a drug that can cure your cancer, you're willing to pay all the money that you have in the world, and all the money you have access to, to cure that cancer and stay alive, probably. Some people may not be willing to do that, but probably. Now, if you've got a [inaudible 00:22:04] and you just need to let it... It'll be all right. Then you're like, "I'm not willing to spend a million dollars on this thing. I'll figure this out. Put a Band-Aid on it, rub something on it, and I'll be all right, whatever." And so pricing is always the first derivative of the pain that you're dealing with.
John Richards:
Yeah, no, I talked to somebody once, back to your earlier discussion about dev tools, about this idea that a product will save your developers time so they can go home earlier. And they're like, "Great, but I'm not willing to pay for that, because I'm already paying them." That wasn't a strong enough problem to motivate them. So I see that.
Sherman Williams:
Yeah, exactly.
John Richards:
Now, speaking of that acute pain and the problem: in the security space, what are you seeing people trying to solve with AI that you think is really promising right now?
Sherman Williams:
In the security space, what are people trying to solve with AI? I mean, there's a range of things. Yeah, that's such a large question.
John Richards:
But I don't know if there's something maybe in your portfolio or something that you've seen and you're like, "These guys-"
Sherman Williams:
So we have a company in our portfolio, Phalanx, that is helping people operate in a zero trust environment. And I'm big on that, in the sense that your information has likely already been hacked, and you want to be able to operate in that kind of world, where people already have access to your systems. So they encrypt pretty much everything, and there's a key-matching scheme in order to decrypt things. So even if someone breaks into your stuff, and you assume that people have already broken into your stuff, it doesn't matter. You already have everything encrypted.
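Phalanx's actual design isn't detailed here, but as a generic illustration of the "assume breach, encrypt everything" posture, here is a minimal sketch using the widely used Python cryptography library. The key handling is deliberately simplified; in practice keys live in a KMS or HSM, never beside the data they protect.

```python
# Generic illustration of an "assume breach" posture: every record is encrypted
# at rest, and data is only readable when the matching key is presented.
# This is NOT Phalanx's implementation, just the idea.
# Requires: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # in reality, fetched from a KMS/HSM
vault = Fernet(key)

record = b"customer_id=4471, ssn=***-**-1234"
ciphertext = vault.encrypt(record)   # what an intruder actually finds on disk

# An attacker who exfiltrates ciphertext without the key gets nothing useful:
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: ciphertext stays opaque")

# The legitimate holder of the matching key recovers the plaintext:
assert vault.decrypt(ciphertext) == record
```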
I'll just throw out a couple of areas. The whole deepfake area is going to increasingly become a problem. It's going to enhance the quality of the phishing that folks do over email: setting up a fake company with a fake website very quickly, AI-generated images of humans, AI-generated voices. That's going to be a massive issue when it comes to security, from a problem standpoint. Those are, I think, some of the key issues.
John Richards:
And do you think to combat that that's going to require another tier of AI that can identify that and try to stop it?
Sherman Williams:
Yeah, 100%. You build a bigger wall, I'm going to dig a tunnel under it. So it's always going to be an offense-defense thing. It is normal for humans to try to steal and do bad things. There are bad people that exist in the world, there are bad actors, and those bad actors are doing what they're supposed to do: they're acting bad. It is your job to ensure those bad actors do not win, and to use every tool at your disposal to keep those bad actors from winning, 100%. So I don't get upset that the bad actors are doing what they do. My job is to protect myself, and our government's job is to take offensive action to make those bad actors go away and deter future bad actors from acting. And so that's my thought process there.
Other than the deepfake world, I think that from the good side of the house, you're seeing AI used a lot for anomaly detection. It can take in [inaudible 00:25:37] amounts of information, it can sense when there's a problem or anomaly, and it can act accordingly. That's a great thing where you're seeing AI being used.
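As a first-principles illustration of what that means, fit a distribution to a large volume of telemetry, then flag what falls far outside it, here is a minimal z-score sketch. Production systems use far richer models, and the login-count telemetry here is invented for the example.

```python
# Minimal anomaly-detection sketch: learn "normal" from baseline telemetry,
# then flag observations far outside the fitted distribution.
import numpy as np

rng = np.random.default_rng(7)

# Baseline: e.g., daily login counts for a service account (normal behavior).
baseline = rng.normal(loc=120, scale=15, size=10_000)
mu, sigma = baseline.mean(), baseline.std()

def anomaly_score(x: float) -> float:
    """How many standard deviations from normal is this observation?"""
    return abs(x - mu) / sigma

# New observations stream in; flag anything beyond ~3 sigma (roughly the
# 99.7% band of a normal distribution, the "confidence interval" idea above).
for event in [118.0, 131.0, 240.0]:
    z = anomaly_score(event)
    flag = "ANOMALY" if z > 3.0 else "ok"
    print(f"logins={event:6.1f}  z={z:4.1f}  {flag}")
```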
Where else is AI being used? From a bad standpoint, one interesting thing that I've been seeing is bad actors basically harvesting a ton of information. So here's the issue. Even if your files are encrypted, what the bad actors will do is harvest information with the idea that eventually they can use quantum computing to decrypt your information and have it. And so you're seeing a race within quantum-resistant encryption to encrypt files in such a way that even if they do eventually have that capability, they can't get your stuff. And so that's yet another thing that I'm seeing, and that's an area I'm heavily focused on and have been looking at a lot recently.
Another big thing I think about is, with the quote-unquote "vibe coding," you're going to see an explosion of people that are able to code. And the way I think about it is, there's a good analogy here, so let me go through this. Gutenberg had the printing press, and I think it was the 1400s. There were a lot of things that occurred after that which actually shook society to its core. You started to see religious wars, et cetera, because you had Martin Luther posting his theses on the church door because he wanted reforms in the Catholic Church.
You have to remember, back in the day, heck, even after that, there were a lot of people that couldn't read. Society was relying on a king, a feudal lord, or the church, where people could read, as some sort of educated aristocracy to inform them. And you saw society get shaken when people could read and could digest information. That's a rough analogy for what you're going to see with vibe coding and these coding tools, because you're going to see people able to build their own stuff. And I really think about when the internet was really getting going, I'm too young, and you're probably too young too, when it was an ARPA project back in the 70s.
The idea of internet 1.0 was that you actually controlled your own information; you controlled your own fate. What happened was the internet became very difficult to navigate for your common person. So you have companies that built these walled gardens, to borrow a term from earlier, that made it a lot easier to use, but they keep your information. I love anything that brings about a return back to internet 1.0. That's why I'm fascinated by the crypto world. Do I believe in it completely? Meh. But the concept of blockchain and the ethos of controlling your own information and being able to monetize it, I love that.
So let me bring that to my point here with vibe coding. The code editors, Copilot, Replit, et cetera, are going to be amazing, and they're going to open up the ability for people to code writ large and create their own apps. It'll be amazing what humans come up with. You're creating a platform. But the problem is that a lot of times, the code editor is used in the context of a large language model, so what the person is trying to do is going out into the wild. And a lot of times they'll be taking existing functions from open source libraries, and how secure are those open source libraries? Are there holes in those libraries?
I know for a fact there are efforts within the United States government to sign off on some of those libraries, even though there's been a Cambrian explosion of them. So it's going to be really difficult. But what are the potential holes in those libraries? And that's something that I do worry about from a cyber standpoint. You're constantly coding and taking from these open source libraries, from GitHub or what have you. If you're not that sophisticated, you don't have a clue.
And maybe the large language model is not able to spot that. It may need to be trained, and there's a business opportunity there for anyone listening. It may need to be trained in such a way that it tells you, "Hey, this is actually not going to cause a problem," or, "Nope, the way you built this, these functions, this class in Python, [inaudible 00:30:39]." We may need to build yet another AI agent to constantly check this and make sure [inaudible 00:30:44], because there's no good way to do it.
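One concrete, existing piece of that puzzle is automated advisory lookup. Here is a minimal sketch that queries the public OSV.dev vulnerability database for known advisories against pinned dependencies; the package pins are illustrative examples, and a real pipeline would parse your lockfile and run on every commit.

```python
# Minimal sketch of automated dependency checking: ask the public OSV.dev
# database whether a pinned package version has known advisories.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Example pins chosen for illustration only.
for pkg, ver in [("requests", "2.19.0"), ("numpy", "1.26.4")]:
    vulns = known_vulns(pkg, ver)
    status = f"{len(vulns)} known advisories" if vulns else "no known advisories"
    print(f"{pkg}=={ver}: {status}")
    for v in vulns[:3]:
        print(f"  {v['id']}: {v.get('summary', '(no summary)')}")
```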
And so in that world you're opening up... The way this analogy compares with the printing press is that while you're opening up the world for a lot of people to code, you're also opening up the world for a lot more exploitation and vulnerabilities, because you're going to have bad actors. And that's going to shake the coding community, society, the people who build things online, similar to how the Gutenberg printing press shook society: I can read now, and what my priest is telling me, what my feudal lord or this king is telling me, this doesn't make any sense, I don't agree, because I can [inaudible 00:31:31] too. And so there are going to be some reverberations.
Eventually, I think it was a good thing that people could read, and I think it's a good thing that you open up coding for more and more people. That's a great thing, actually. There's going to be a whole world of cyber [inaudible 00:31:50] where you can vibe code, but you can also ensure that the code you're laying down is secure, and/or you know what those potential vulnerabilities are, considering you coded this way, and I just want to make you aware. And then you can build some other security tools on top of that to constantly check your processes and ensure things are secure.
I gave you a stream of consciousness, but those are some of the key areas that I'm seeing in AI and things that are interesting.
John Richards:
Yeah, no, that's so helpful. And what I'm hearing too is we can expect to see a spike in Log4j-style errors down the road as everybody builds on this. So as you said, how do we get ahead of that?
Sherman Williams:
100%. And then as everyone operates online and you move to digital currencies and you move to... Dude, I go to an ATM, I don't know, a couple times a year, if that. Probably two or three times a year. I just don't have to. So there's a company called Cloaked, C-L-O-A-K-E-D, that I use. It just ensures that my information is not out there with all these data brokers; it removes it. I can use fake phone numbers and fake emails when I sign up for stuff, so it can protect me at the core, at the root. And so I'm really excited about things like that as we consistently operate online and a lot of commerce, et cetera, is done online. Because if you think about it, fundamentally, commerce done online has a bit less friction than a lot of things done in person, in certain instances.
John Richards:
Yeah. A little one click easy button to just check out immediately.
Sherman Williams:
Yeah, exactly. Versus me walking around a store. And so that's something that I think about, and those are just some range of areas where I think AI could be used.
John Richards:
This has been really enlightening. Thank you so much, Sherman, for coming on here, sharing your expertise. Before we wrap up, anything you want to shout out or anything you want to promote? What's going on? Where can folks find you?
Sherman Williams:
Absolutely. Our fund is AIN Ventures. Our website is ainventures.com. You can find me on LinkedIn; I'm Sherman Williams II. Among the millions upon millions of people on the social graph that is LinkedIn, the world's largest social [inaudible 00:34:33], I think I might be the only Sherman Williams II. I could be mistaken now.
John Richards:
You were very easy to find. So yes, just search for Sherman-
Sherman Williams:
Yeah. And then look, I'll give folks my email. It's sherman@ainventures.com. Do not ping me if you're a service provider; you can shoot your shot, but it's going to get deleted. But if you have a company that you're building in the cyberspace, I'm really, really interested. Another thing I didn't mention is [inaudible 00:35:01] able to do cyber on the edge and not in the cloud, on edge devices. That's a big thing that we've been looking at, and so I'm happy to take a look at the company and assess whether or not it makes sense for us. So yeah.
John Richards:
Awesome. Thank you so much, Sherman. Definitely check out AIN Ventures, we'll have some links in the show notes for you guys that want to jump over there and check it out. Thanks again for being on here, for sharing your expertise. We're so glad to have you. Thank you.
Sherman Williams:
Thank you.
John Richards:
This podcast is made possible by CyberProof, a leading managed security services provider helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. Paladin Cloud is now part of CyberProof's portfolio of solutions, extending their capabilities in cloud security posture management and risk prioritization. From proactive threat hunting to managed detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com.
Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio Engineering by Andy Nelson. Music by [inaudible 00:36:25]. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Take a moment and leave a like and review; it helps us get the word out. We'll be back right here on Cyber Sentries.