John Richards:
Welcome to Cyber Sentries from Paladin Cloud on TruStory FM. I'm your host, John Richards. Here we explore the transformative potential of AI for cloud security. Our sponsor is Paladin Cloud, an AI-powered prioritization engine for cloud security. Check them out at paladincloud.io. In this episode, I'm joined by Jim Wilt, Distinguished Chief Architect at Weave, longtime security practitioner, and consultant for AI adoption. Jim shares the mindset organizations must adopt to onboard AI successfully, and why they need to focus on replaceable architectures over reusable ones. Let's dive in. Hello everyone. Welcome to today's episode of Cyber Sentries. With us today we have Jim Wilt, Distinguished Chief Architect at Weave. Jim, thank you so much for coming on the show. You've been doing a lot of talking around AI, how to adopt it into organizations, and the importance of how you put policies around protecting that. So, I'm excited to dig into that today. Thank you for coming on the show.
Jim Wilt:
Great. Yeah, thank you for having me, John, and I'm excited to share what little knowledge I have. I think there's one thing I can assure you is there are more AI experts out there today than there are grains of sand on the beach. So, I'm really no AI expert. I am definitely curious, even though I've been doing it since the eighties.
John Richards:
Well, I'm excited to learn more. But before we do dig into that, though, I'd love to hear a little bit about how you got into the spot where you're at. You're talking about this, I know you've been working with a lot of high-profile organizations on how do they adopt the technology when it's changing so quickly. So, what led you to where you're at today?
Jim Wilt:
I guess I've been a heat-seeking missile for bleeding-edge technologies ever since I started in this world. And my story begins running operating systems for Burroughs in Pasadena, California, and doing a lot of innovation around there as mainframes were peaking. But then these things called PCs were coming into being, and there's your first disruptive technology, if you will; mainframes were a disruptive technology too. And I've always followed the wave from PC to local area network to internet to the cloud to now what I would say is AI, and generative AI, as being one of the major impactors in our existence. But there's a big difference with generative AI over even traditional AI, because I was doing traditional AI when I was in aerospace building aircraft. We used neural networks for the lasers we were using for composites, and this is different. And all of those things, you name it: PCs, internet, cloud.
It took between five, and 15 years to get to 50 million users. Generative AI got to 100 million users in two months, okay? I've heard it mentioned before that generative AI is the new space race. Back in the sixties when we were in the space race, it was who's going to get to the moon first? Now it's generative AI: who's going to get to AGI first? And that could be 30 years away, but that's really the momentum. So, I love that I've been able to be part of the bleeding edge of everything in technology. However, it's never been as exciting as it is right now, and you can't turn your brain off to that. It's just too exciting.
John Richards:
Yeah, no doubt. I mean, it's really captured the attention, as you said, the amount of people coming on checking this out because it is drawing everyone's attention, but at the same time, it's so new, and that's quite scary sometimes. So, organizations are out there saying, "Well, how do I understand how to use this? What should be my first use case? When should I do this? Everybody's looking at this." So, how do you look at that for organizations that say, "Okay, well how do I tackle this to begin with? What should be my first use case here?"
Jim Wilt:
Yeah, kind of where's the start here button kind of a thing. And that's a great question. I get quite a bit of people that I'm going to say want to start, but they're afraid to start. I think the best thing is just to dive into it, but you need to dive into it sensibly. And I say that because the hype is just out there so strong. What are the things that you're certain that it's going to do? You're certain it'll solve all your problems. You're certain it'll take away everybody's job, and nobody will work again. You're certain of all these things, bad things that are going to happen if I even open it up on my phone, it's going to take my bank accounts away. I mean, all of these, what I would say, fears, and certainties, and deceptions that are out there, we need to be sensible.
And so what I recommend to a lot of organizations, but even individuals, is based on an article, and we'll put it in the links, on [inaudible 00:05:28].com, on how to adopt emerging technologies as they're coming out. Two things you have to know. One, there are no experts on an emerging technology yet; everybody's learning it, so you're in the same boat as everyone else. The second one is, it's an emerging technology, which means it's going to change over time, continuously, for the next two to five years. So, you can't just become a certified expert today, and have that last tomorrow, because it just changes too fast. So, the approach that I really encourage is that people take a technology like generative AI, and large language models, and go through a three-phase approach, a roadmap, if you will, of adoption. And the reason I say that is you need to know what you can do with the technology before you commit to doing something with the technology.
That's a big problem. If I'm asked right now, before I give you access, "Jim, assure me, what's your use case?" And I start thinking about that, "Oh, what's my use case? I'm going to solve people's medical problems with AI." Okay, great. Here's your money, right? And then you go in, and start building, solving people's medical problems, and you realize, "Holy crap, it really doesn't do a good job at this." And I made a promise, right? So, you don't want to get caught in that trap. It's dangerous. So, the approach I recommend is learn. First phase: learn. How do you learn? Well, do you have kids, John?
John Richards:
I don't. Just cats.
Jim Wilt:
Okay. Well, your cats still can learn. They learn by playing.
John Richards:
Yeah, that's true.
Jim Wilt:
And so I really recommend that you play with the technology. So, the biggest concern is security. Oh, if you want to shut down any conversation, right? You just say, "Well, that's not secure." I've been talking to my wife about wanting to buy a really beautiful Italian sports car, and she goes, "Well, it's not secure. You're shut down." "Oh, okay." So, the reality is, when you don't even know how to have a conversation on something, security gets used as the shutdown. So, take security out of the mix in the learning phase, dive into the tools, and not just one tool, at least four tools. Try Anthropic. Try Copilot. Try ChatGPT. Try Llama. Play with them all. Why? Because they're not all going to be around, for one thing, potentially. The other is they'll each do better than the others at something, especially right now.
So, you don't want to just say, "I'm a one tool person", because have you ever tried to put a screw in with a socket wrench? It really kind of sucks.
John Richards:
So, yeah, wrong tool for the job.
Jim Wilt:
Exactly. So, know when to use which, and understand which to use. But you play with the technologies, build some no-code RAGs. Regenerative, what is it called? RAG... retrieval-augmented generation, I guess, is the acronym for that. So, build some RAGs that are no code about things. I created one for myself just for fun. It's called Jim the Architect chatbot. You can ask it any question. It probably answers better than I do.
John Richards:
I'll have to invite him on the podcast next, right.
Jim Wilt:
Exactly. I did that just to understand, what are the limitations? What can I do? How well does it work? It doesn't totally suck. It actually is pretty good. And so in the learning phase, you're not going to worry about security. You're not going to use anything private. You're going to use only public data. You can take your company's SEC 10-K statement, a public statement, put it into your chat, and understand what you can ask about it, and learn more about your company. These are things that you do at the learning phase to get to understand, "What can I do with the technology?" So, once you get through that phase, I'm going to say your fear is abated quite a bit. You have this, I'm going to say, comfort level that, "Oh, it's not going to take over the world." It's still not going to get me that cool Italian sports car that I want, but it's still going to make things better for me.
I really like what I'm getting out of it, and the benefit, and the value I'm starting to see. I'm starting to see the real value here. And it's not going to solve the medical problems I need it to, but it could tell me about them, because generative AI, and LLMs, aren't necessarily good because they've got such wonderful algorithms. They do have wonderful algorithms. They're good because they've read everything that's out there. It's like, I've got a bookcase full of programming books that are awesome, and I've read maybe 20 books that are all just about writing better code, and solid code, and things like that, but gen AI has read every book out there, so it can find blind spots in my thinking.
That's the learning phase, the play phase. Now let's get out of the play phase, and go to the growing phase. This is now where I'm going to actually take it beyond my individual acceptance, my coming to terms with it, and build, what I'm going to say, purposeful usage inside my organization. And generally you're going to do it with the teams that you have control over. And for me, that would be like an IT team, or an engineering team, or a product development team. The idea here is I'm not building a use case for my organization yet, but I'm building use cases for our own internal use of it. So, one of the things we found the large language models do well is code generation, or augmentation. It doesn't replace the engineer, but what it does do is it allows them to see different ways of writing the code.
Another set of eyes on your code. This is one of those things where I actually, I've got it in a GitHub repository, I wrote some code that had an error in it on purpose. When people are looking at you, and they say, "Well, we want you to write some code for us," I have this in my back pocket. "Well, I've got some code I want you to look at. It's like 30 lines. It's got a fatal error in it." And then, "I want you to look at my code, because I want to know that you know how to read code." Well, nobody can find the error, because the code executes, it runs fine, but it's got a fatal error in it. ChatGPT found the error, and it corrected it. Okay? There's some value there, okay? So, that's what you're doing in the growth phase. But here's the funny thing. In the growth phase, you're going to use it for internal consumption only. Nothing external. You're not going to tell anybody that you're using it. You're going to use it for internal consumption, but it has to be secure. Playtime is over.
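Jim's actual repository isn't linked here, so as a stand-in, here's a short hypothetical Python snippet in the same spirit: it runs cleanly and prints plausible output, but it hides a fatal defect (a mutable default argument shared across calls) of the kind LLM code reviewers tend to flag. The function and variable names are illustrative only.

```python
def record_login(user, history=[]):
    """Track login events per user.

    Looks harmless, and it 'runs fine', but the default list is created once
    and shared across every call, so unrelated users silently accumulate
    each other's events. That is the planted fatal error.
    An LLM reviewer will typically suggest: default to None, then create a
    fresh list inside the function.
    """
    history.append(user)
    return history


if __name__ == "__main__":
    print(record_login("alice"))  # ['alice']
    print(record_login("bob"))    # ['alice', 'bob']  <- bob now carries alice's event
```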
So, anything that you're doing, if you're building a RAG, or you're building anything with AutoGen, or LangChain, or whatever, you've got to have the security practices in place that you do for anything you build for internal production use. So, when playtime is over, you need to put on, what I'm going to say, your fiduciary responsibility hat, and make sure that we are being secure. But what's really great about these tools today is that these RAGs being made with agents, or agentic AI, allow me to create an agent that is going to access my internal data. So, I don't put my internal data into the model, but I can get results from my internal data, and supply those to the model. So, I'm creating augmentation to the augmentation, so to speak. And that's good, because then I have the security practices in place for me to access my data programmatically anyway, the best practices.
John Richards:
Does that work the same with public models versus private models? As long as you're using this agentic access, you've kind of protected your data. Or do you need to think about those two things differently? Because I've heard a lot of folks looking at using private models to avoid some of those security concerns.
Jim Wilt:
Boy, you actually brought up a very good point. What I recommend in the first phase is that you're only using public data, but I look to use these tools this way: use them like you drank the Kool-Aid, you love them, they're going to work great, and then break them. So, taking the public data you're using, make erroneous calls, try to break the APIs, try to access the public data externally through the models in ways that you shouldn't, to find out if there are areas where it's not doing well. So, you break it, and you break it, and you break it. Just like a kid playing with a toy. Airplane flies, airplane's great, I'm going to fly it into the wall. Boom, it's broken, right? Now I've got to glue it back together. That's what you want to do. So, actually, in the growth phase, you want to take the learnings from the earlier phase around where it does well, and where it doesn't do well, and apply them moving forward.
But one of the key things is the models, and the tools, and the APIs are changing continuously. So, I always say, "Don't be a victim of the API change when it goes from 3.5 to 4.0, or whatever. Be in charge, realize that it's going to change. Don't think it could change, it will change. Plan on it. It will change, and be purposeful about it." Getting back to your data, if I want to access public data, I'm going to access public data through public channels, and that would be data I'd maybe inject into the model, because the model probably already has levels of access to public data. If I'm taking the SEC 10-K statement of my organization, and I'm putting it into the RAG that I'm making, it's already read the SEC 10-K statement of my organization.
What I'm doing is asking it to focus on this subset, the SEC 10-K statement specifically, for this RAG. And that's a good point. So, public data is part of the equation. Private data goes through my agent, then: I want to use access to my private systems, but I don't want it going out publicly. So, maybe an example would be, I want to know the demographics of all my employees, what cities they live in, what countries they live in. I'm not going to take all my employee data, and addresses, and put it into the RAG, into the LLM. That's not going to happen, okay?
John Richards:
Good, good.
Jim Wilt:
That's the test, John's testing me on this.
John Richards:
Yeah.
Jim Wilt:
What I'm going to do is have an agent that will summarize that information, and then it'll give me all the countries, and cities, and the headcounts for each. So, it'll give me the results of my query. My query will be done in my own private area. The agent will do that in my own private area, but it'll return the results back in a summary form that the model I'm building in the RAG can then use to effectively create an output that's better for what the consumer wants to have. So, let's just say I want to send a holiday roast to everybody. Now I know what countries I'm sending them to, kind of a thing. So, these are just things that you need to build in this growth phase to create that muscle memory, and that skill set. But now security is a very important part of it.
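A minimal sketch of the pattern described here, with hypothetical names throughout: the agent-side step runs against internal HR records entirely inside your own environment, and only an aggregated, anonymized summary is handed to the language model. `EMPLOYEES`, `summarize_locations`, and the commented-out `llm_client` call are illustrative placeholders, not any specific product's API.

```python
from collections import Counter

# Hypothetical in-house records; in practice this would be a query against
# an internal HR system, executed in your own private environment.
EMPLOYEES = [
    {"name": "A. Ndiaye", "city": "Toronto", "country": "Canada"},
    {"name": "B. Olsen", "city": "Oslo", "country": "Norway"},
    {"name": "C. Patel", "city": "Toronto", "country": "Canada"},
]

def summarize_locations(records):
    """Agent-side step: reduce private rows to anonymous counts."""
    by_country = Counter(r["country"] for r in records)
    by_city = Counter(r["city"] for r in records)
    return {"countries": dict(by_country), "cities": dict(by_city)}

def build_prompt(summary):
    """Only the aggregate summary, never names or addresses, reaches the model."""
    return (
        "You are helping plan a holiday shipment. Employee headcount by "
        f"country: {summary['countries']}; by city: {summary['cities']}. "
        "Suggest shipping considerations per country."
    )

summary = summarize_locations(EMPLOYEES)
prompt = build_prompt(summary)
# llm_client.complete(prompt)  <- stand-in for whichever model API you use;
# the raw employee records never leave the private environment.
print(prompt)
```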
You would follow all your policies. You might have to harden security, because as you're doing this, you might find flaws, failures, and vulnerabilities in the security that you've got in place today. And guess what? These tools do a pretty good job of looking at your architectures, and looking at your security models, and they can highlight for you areas where you might have blind spots, where you're not as secure as you might think you are. So, it's kind of funny: as I'm building the security for my AI, I'm using AI to validate the security that I'm building to secure my AI. It's circular.
John Richards:
Yeah, there's so many layers to that. How long do you see folks usually spend in this growth period? What's a good amount of time to make sure you've got your hands around this middle phase?
Jim Wilt:
That's a great question. What's the time commitment? And the reason that's a great question is because you've got to have funding to do this, and people want to know, "When am I going to start seeing results?" So, generally it's anywhere from nine months to 18 months, basically. And the reason is you're building up your skills, you're breaking things, you're learning things, and you're advancing. But as you're finding it, and you're doing it for your own internal consumption, your own utilization, you're going to find out what it can, and can't do. And in the process of that nine-to-18-month period, you can start identifying use cases for the organization that make sense, where you're going to get a payout, a big bang for the buck, so to speak, as opposed to creating yet another chatbot that nobody likes.
John Richards:
Unlike my first question, which was starting with what's our use case, you really need that exploration, you're saying. And then, now that you're in the growth phase, you've got a reasonable understanding of what the capabilities are, and you can actually build real use cases, instead of this pie-in-the-sky, fingers-crossed, I hope this AI solves all these problems for me.
Jim Wilt:
Exactly. Spot on. And see, that's where a lot of, what I say, organizations are lax. They want to just be the first one out there with something. And how did that work for Air Canada? The famous news article where someone wanted bereavement fares, and the AI bot said, "Well, don't worry about it. Go on your trip, we'll reimburse you when you get back." And it was 800 Canadian dollars that the guy was trying to get back, and the corporation said no. So, he took them to court, and got millions of dollars back. And so yeah, you're the first one out, but you pay the price.
John Richards:
Yeah, I am.
Jim Wilt:
So, you want to be careful about these things. There are use cases that are emerging right now that are really quite enticing, and exciting. I would say FPT Software is an organization I work with, and they do a lot with traditional AI. They have audio AI, basically, that listens, and they can do really neat things with that. But they also have what's called an insights platform concept, where you're taking your data, and you're taking external data, and you're putting the two together to get a better global view, if you will, of your situation, be it for better operations, or more efficient operations, or for creating, what I'm going to say, better solutions for your customers, because you're looking not only at what you know about them, but what the world knows about them as well. Sometimes I call that... I coined a term. You talk about the 360 view of your customer.
There's such a thing that I call a 720 view of your customer: the 360 of the data that you have, and the 360 of the data that's out there in the public domain that can give you information about them, or the situations they're in. So, if they're in Texas, and there's a hurricane coming, I can make calls to other AI systems through the agentic process to find out, is their power grid at risk? And if it is, then I know that there may be a period of time where they have no access to anything. So, start throwing things into their cache, so that they can have a cache of things to keep themselves running until the power comes back on. These are just ways of using, I'm going to say, AI in ways that we've never thought of before, but you've got to be proactively thinking about it. You can't be... well, you can be reactive, and then after the disaster occurs, boy, I wish I would've done that.
John Richards:
Yeah, exactly. Better to learn eventually, but okay, let's say somebody's following these steps. So, they've gone through the learning phase, they're through the growth phase. How do you transition then into this third phase, as you start to look towards external use, and maintaining security, and maybe even increasing it, now that you're like, "A disaster, or a problem, isn't just internal anymore... this is going to be customer facing," or whatever that looks like for their organization.
Jim Wilt:
Yeah, so great lead-in to what I call the landing phase, and that's where you land it in a publicly accessible kind of situation. There are companies out there that are doing really well in the landing phase right now. I've got colleagues at Thomson Reuters here in Minneapolis that are building out their own AI models, and one of my colleagues, the CTO, when I asked, "What model are you using?", he goes, "Well, we've talked about this. We don't use a model. We use the right model for the right purpose. There are certain things we can use a cheaper model for that give us good enough answers, but there are other things where we'll choose between these two models, and compare, to get the right data out." The use case then is built based on your own growth phase, understanding your own organization, the business model of your organization, the business of your business, and then applying it to something you put out there externally that adds value to the organization.
Now, adds value. What does that mean? That's a tough one. I just sat through two days of an AI summit called AI for HR, a Human Resources summit. The summit had 10,000 attendees the first day, and 11,000 attendees the second day, and it was staggering. What I learned is that, if I were to summarize the value, it's not in automation, it's not in replacement. It's not going to take everybody's job like we all think it is, and it's not going to take over the world. It's not that good yet. But what it will do is make your job better. It'll look at, "John Richards, how can I make him better? Jim Wilt, how can I make the code he's running better?" And what you do is you use it to create a higher amplitude, if you will, of the goodness you're creating anyway, but you're doing it with, I'm going to say, more assurance, more confidence, and better clarity, because it's helping you find your blind spots, and errors.
It's helping you optimize. I've done code tests where it can actually take some really readable code, but optimize it into really fast code, and it does a really good job of that. But sometimes it'll use syntax I've never seen before, and I can't just accept that that's okay. I got to be smart as an engineer. And I've got to look at it, "Okay, wow, I learned something here. Wow, I learned something here. Not a bad thing." So, take it to the corporate level. What are the use cases at the corporate level then that are going to make sense, and add value to your customers? And the value might be in getting them better results faster, or it could be getting them, how would I put this, the right context, or results that they need kind of a thing. So, maybe they're asking the wrong question.
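Circling back to the "right model for the right purpose" point from the Thomson Reuters example a moment ago, here's a minimal, hypothetical routing sketch: a cheap model for routine requests, stronger models (compared side by side) for the cases worth the extra cost. The model names and the `call_model` helper are placeholders, not any specific vendor's API.

```python
def call_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real provider call; returns a canned string here."""
    return f"[{model_name}] response to: {prompt[:40]}..."

def route_request(task_type: str, prompt: str) -> str:
    """Pick a model per task instead of standardizing on a single one.

    'cheap-small-model' and 'strong-large-model-a/b' are placeholders for
    whichever providers you vetted during the growth phase.
    """
    if task_type in {"summarize", "classify", "extract"}:
        # Good enough answers at low cost.
        return call_model("cheap-small-model", prompt)
    if task_type == "high-stakes-analysis":
        # Compare two strong models and keep both answers for human review.
        a = call_model("strong-large-model-a", prompt)
        b = call_model("strong-large-model-b", prompt)
        return f"Model A:\n{a}\n\nModel B:\n{b}"
    return call_model("strong-large-model-a", prompt)

print(route_request("summarize", "Summarize this quarterly filing."))
```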
I was at this Applied AI conference recently, and I did a talk, and I can give you the deck to put in the notes too. One of the talks that they had was college students, and the college students were saying how they're using generative AI to help them with, what I would say, any kind of learning that they're trying to do. So, one of them, John, you would love. This guy does podcasts like you, but he's camera shy. So, he loads his generative AI model with all the points he wants to make, and asks it to ask him questions to draw out those points. And then he records with the camera to the side, and he's talking to his generative AI model, and it's asking him questions, and he's answering them. And it's a different way of doing a podcast.
John Richards:
So clever.
Jim Wilt:
So, people are realizing the value is not going to be in, what I would say, the traditional avenues that we typically look at from a corporate perspective. Another gal has dyslexia. She doesn't read well. I can relate to that myself. So, she'll dump a book into her generative AI, and then she'll ask for a summary, and then she'll ask it questions like, "Who is Ishmael in Moby Dick? And what was he thinking when he did this?" But then, with the model that she built, and we go back to the learning phase with the type of work she was doing, she asks it questions like, "What do you think about this decision that Ishmael made? Would you make the same decision?" And she has to think about it, kind of a thing. And she's now in an interactive, immersive conversation with the data of reading a book. Now think of taking that to our customers. It's a whole new game. Nobody's really created, I'd say, that immersive customer experience that we talk about, where it's asking you questions back, kind of a thing.
I'm going to go buy a pair of shoes. Okay, I want to buy these. Well, I guess I should buy these lime green Nikes. And it'll basically come back, and say, "Okay, what are you going to do in these shoes?" "I'm going to go hiking in the mountains." "Hey, you know what? These boots might be better for hiking in the mountains than these Nike shoes. Can I show them to you?" You see, now it's getting to where it's an interactive experience. But how do we do it securely? Meaning, I want to capture that this guy just tried to buy shoes for hiking in the mountains, so that's the use case it should record, but it's got to be anonymized, and it's got to be in a situation where it takes the metadata, if you will, of the situation.
But not, "This guy in Minneapolis, Minnesota, thinks he can hike in the mountains in tennis shoes. There are no mountains in Minnesota," that kind of a thing. Ha-ha, and the computers are laughing in the server room. But it's really not like that. You want it to be a value that you bring to the end user, and customer. And a lot of times you hear hype about what it's doing. It's going to diagnose medicine, it's going to do all these things. At best today, and I have colleagues working on this, at best today, it's about 80% confidence on the medical side, which is great for augmentation, but I wouldn't go into surgery based on 80% confidence.
John Richards:
Right, right. So, it sounds a little bit like, on the personal level, it's not that AI is going to take human jobs. It's maybe that humans partnered up with AI will take those roles, and folks who aren't partnering will start to fall behind. And does that scale then to the organizational level, where you're saying, "Okay, it's not that AI solves your medical problems, but if you're partnering up with it, and getting this 80%, and using that wisely, you're going to outperform competitors who aren't looking at that kind of extra recommendation."
Jim Wilt:
One of the use cases that's making a lot of sense is call centers. And it's not about throwing your phone call to an AI bot. It's about the person on the call getting more information about you from the AI, and the situations from the AI. And so one of my colleagues, I do a lot of authoring with, his name's Andy Ruth, he's just brilliant. There was a situation in the family where someone had a fall, and they had to go into rehab kind of a thing. And so as an experiment, I asked AI basically, how do I approach asking my relative to use a walker when they get out of rehab? How do I approach that? And this is why you want more than one model, right? One model just went, "Here's all the statistics about why you want to do that, and blah, blah, blah."
And it's like, "I can't hear you." So, the other three were basically came up, and said, "Wow, this is a really delicate situation." And the reality is the data shows that no matter how hard you ask, until they have what I would say, another fall, you're really not going to be able to land them using the walker, which sounds horrible, I know, but it's just statistically that's kind of how it works, so maybe this isn't a conversation you really want to have kind of a thing. And so this is now augmentation. This is what I would say, "How did it know to say that?" Well, it's read all of the psychology books that I've never heard of, and it's looking at all of the published reports, and studies, and papers written that basically have identified how many falls you have before you use a walker. It basically is giving, what I would say a sense of humanity for me as a person when I'm relating to another human. It's not trying to be human, it's just basically saying, you need to be human.
John Richards:
Yeah, helpful.
Jim Wilt:
That's something hard to hear.
John Richards:
Yeah. It's a reminder I need more often than I like to think.
Jim Wilt:
Yeah.
John Richards:
So, going that route though, how do you avoid that Air Canada example you're talking about? Those safeguards, and policies you put in place, is that only from the growth phase of exploring that, or are there some general policies that apply across everything? Or is it just repurposing existing policies? How do you guide teams as they say, "Okay, what am I looking for in this policy space?"
Jim Wilt:
That's a really hard question, and the reason it's hard is because you could take the consultant answer, and basically say, "Well, it depends, John." The reality is this is an enigma technology. It's continually changing. It may not stabilize for three more years, or even more, so it's going to change over time. And this is where I really believe that, in code, replaceable is the new reusable, in that you've got to be able to change your policies, your code, your APIs on a moment's notice. There are no victims in a proactive approach to emerging technologies. And I say no victims in the sense that if you're really focused on getting something out fast, but then one of your dependencies' API calls changes completely, or the laws behind it, the regulations, change completely, you have to abandon that, and build it new again. And being able to do that in a week is not a hard thing to do when you're in the practice of doing that.
But if you're not in the practice of doing that, which is 99.999% of us, then you can play the victim card, but that doesn't help your customer, though, does it? So, you really have got to be proactive about it. So, from testing on, you can even use AI to help you build your test scenarios, and simulations, and things along those lines. You need to create, what I'm going to say, the muscle to be able to respond appropriately to these situations as they change. And you'll find that the first time you put something out, that's your first throwaway; you'll realize the second time you do it, you're going to do it so much better. You need to be able to go back now, and say, okay, I've got to throw that one away, because I didn't know then what I know now. And it's all about learning, and continuous learning, kind of a thing. But as far as regulation, the government doesn't have laws yet on a lot of this stuff. There are-
John Richards:
Especially here in the United States.
Jim Wilt:
Right. And the AI Act in Europe isn't in place yet. And when it is, how are we going to enforce it? How are we going to measure it? And now you got to start thinking about, well, okay, spot audits of my systems, and leveraging the technology. One of the things AI can do is look at something, and find vulnerabilities, and let it try to help you find vulnerabilities within itself. One of the really unique things about a new, and emerging technology is right now, and you've probably heard this before, it's at the worst it will ever be. It's only going to get better from here.
John Richards:
No, I haven't heard that yet, but that makes a lot of sense as we scale here, and you're learning from your existing scenarios.
Jim Wilt:
Exactly. So, just within two years, what I can do with the generative AI LLMs that are out there today is so much better than what I could do two years ago, and it's getting even better, kind of a thing. One of the cool things about cloud when it came out was the flexibility that it gave you, and the freedoms that it gave you, but there was no good user interface for it, just command line, and it just didn't have the dashboards, and the monitoring, the telemetry. I don't know, 10 years later it finally did, and that's when it really took off in the hands of your average developer. With generative AI, a lot of that does exist, but if you're building it in your organization, you need to build in the telemetry, and the monitoring right now, from the get-go. This is part of the... these policies are in place for my other code.
It's no different for your generative AI RAGs that you're building, and your agentic AI. Put it all in place so when the audit comes, you have the telemetry to say, "Well, here it is. I've got it here." That's one of the neat things about, and I know this is Paladin Cloud sponsored, but it's one of the neat things about Paladin Cloud. Back in the day, in its early origins, when Steve, and I were at T-Mobile, he made dashboarding such a big deal that we could see on a screen all of our security, not vulnerabilities, but areas that needed improvement, and that type of dashboarding, and telemetry, and things like that. If you don't build it in now, you'll have to add it later, which is a lot harder. And the audits will come eventually, government audits will come, and fines will be levied. Think about GDPR, and all the disruption that created around the world. It's going to be no different than GDPR.
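A minimal sketch of what "build the telemetry in from the get-go" can look like, with hypothetical names: every model call is wrapped so a request ID, model version, latency, and payload sizes are logged from day one, ready for whenever an audit asks. The `call_model` helper and model name are placeholders, not a specific vendor's SDK.

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.telemetry")

def call_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real provider call."""
    return f"[{model_name}] answer"

def observed_call(model_name: str, prompt: str) -> str:
    """Wrap every LLM call with telemetry before it ever reaches production."""
    request_id = str(uuid.uuid4())
    start = time.monotonic()
    response = call_model(model_name, prompt)
    log.info(
        "genai_call id=%s model=%s latency_ms=%.1f prompt_chars=%d response_chars=%d",
        request_id, model_name, (time.monotonic() - start) * 1000,
        len(prompt), len(response),
    )
    return response

observed_call("placeholder-model-v1", "Summarize our Q3 incident reports.")
```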
John Richards:
I mean, it seems like the real skill set that's needed at this point is this flexibility, this agility to shift, because the landscape's so chaotic. So, anything you can do to make your organization... I like your terminology. I'm going to be thinking about this for weeks after this discussion, replaceability versus reusability, because as a developer, reusability was so ingrained. But this idea of moving fast, and some have already moved there with the CI/CD culture, and stuff like that, but what your organization focuses on, that flexibility, is going to be more important now than it maybe used to be two years ago, when things were a little more settled down, and we hadn't yet had this huge disruption.
Jim Wilt:
Exactly. And the key that you're bringing to fruition here is the fact that the world's changing faster, not slower. The reality is you're going to... To survive, you're going to need to be able to respond to these things. You're not going to have the "Oh, that'll get in next year's release."
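One way to read "replaceable is the new reusable" in code, as a hypothetical sketch: the application depends on a small interface it owns, and each provider (and each breaking API version) lives behind a thin adapter, so a forced change becomes a one-file swap rather than a rewrite. `TextModel`, the adapter classes, and their canned responses are all assumed names for illustration, not real SDKs.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only contract the application code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class ProviderAAdapter:
    """Thin adapter over one vendor's API (placeholder, not a real SDK)."""
    def complete(self, prompt: str) -> str:
        return f"provider-a says: {prompt[:30]}..."

class ProviderBAdapter:
    """Swap-in replacement when the old API breaks or a better model ships."""
    def complete(self, prompt: str) -> str:
        return f"provider-b says: {prompt[:30]}..."

def answer_customer(model: TextModel, question: str) -> str:
    # Application code never touches a vendor SDK directly, so replacing the
    # provider is contained to one adapter instead of scattered call sites.
    return model.complete(f"Answer politely: {question}")

print(answer_customer(ProviderAAdapter(), "Can I get a bereavement fare refund?"))
print(answer_customer(ProviderBAdapter(), "Can I get a bereavement fare refund?"))
```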
John Richards:
Yeah. Say that to the auditor when they're knocking on the door, yeah.
Jim Wilt:
Exactly.
John Richards:
Wow. Thank you so much, Jim, for coming on here. This has been so enlightening. I really appreciate it. Before I let you go, I'd love to give you a chance to share, for folks who have questions, or want to reach out to you, the best way to contact you, and if there's anything you'd like to promote. And of course, we'll drop links to all the different things you mentioned in the show notes for anybody who wants to check those out as well.
Jim Wilt:
Absolutely. I'll give you all those links. Yeah. So, if you're local to the Minneapolis area, Applied AI is a consortium group that Justin Grammens has put together that has, what I would say, one of the best think tanks of AI, for anybody from someone who's never heard of it before to seasoned professionals, presenting on best practices, and ways of working with it. And it meets every other Thursday, and it's just amazing to get 80 people in a room, and then we go to Monster Brewing afterwards. So, it's kind of fun. The other thing would be, I'll send you some links to some articles that I find very helpful in, I'm going to say, selling these concepts to your stakeholders, and your leaders, because a lot of times it's difficult for them until they can really see it spelled out, and played out.
And the last thing I would say, it's really exciting being in the industry as it is today. It's gotten really, really exciting. And if you're not at least playing with the simple tools, start playing with the simple tools right away. Go to whatever tool you can get access to, and just start immersing yourself in it. I mean, can you imagine a world today without Google search, kind of a thing, right? So, it's not going to replace search. It's going to create a different type of interaction. Immerse yourself in the technology. And so these are things that I think all of us say we want to do. You've just got to do it. You've just got to jump in, and do it.
John Richards:
Wow, that is great advice there, Jim. Thank you for that. And thanks again for coming on the show. We were so fortunate to have you. Thank you. And have a great rest of your day.
Jim Wilt:
You too, John. See you.
John Richards:
Thanks, Jim, for the kind words about our sponsor, Paladin Cloud, during the show, and about the importance of telemetry. Paladin Cloud is an AI-powered prioritization engine for cloud security, and their sponsorship makes this episode possible. DevOps, and security teams often struggle under the massive amount of notifications they receive. Reduce alert fatigue with Paladin Cloud: using generative AI, the model risk-scores, and correlates findings across your existing tools, empowering teams to identify, prioritize, and remediate the most important risks. If you'd like to know more, visit paladincloud.io. Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio engineering by Andy Nelson. Music by Amit Sege. You can find all the links in the show notes. We appreciate you downloading, and listening to this show. Take a moment, and leave a like, and review. It really helps us get the word out. We'll be back February 12th, right here on Cyber Sentries.