IT Matters | Tech Solutions and Strategies for Every Industry


Show Notes

Matthew Martin is a seasoned cybersecurity leader with over a decade of experience in the financial services industry. Matthew is currently focused on advising and consulting as a Fractional CISO. Most recently, he was Deputy CISO at LPL Financial. He holds advisory positions on boards of Ironscales, Trustwise, Stealth and Surge Ventures. Today we discuss AI hallucinations and the practical steps any organization could take before going down the path of AI adoption.

Conversation Highlights:
[01:04] Discussing AI article
[03:28] Introducing our guest, Matthew Martin
[03:58] Martin's introduction to security
[05:11] Why Martin chose security
[07:09] How the tech landscape has changed
[09:33] Discussing Generative AI
[11:55] The sunk cost fallacy and AI
[13:47] Generative AI use cases
[16:49] Ensuring responsible use of AI
[18:25] Defining hallucinations
[20:26] Risk-averse AI technology
[25:52] AI advice for students
[28:21] Discussing how Gen AI works
[35:13] Martin's message to the world

Notable Quotes:
"Don't chase the hype cycle, it all changes so fast." Matthew Martin [27:35]

"Figure out what you want to do, what you want to be, and don't stop until you get it." Matthew Martin [35:53]

Connect With Matthew Martin
LinkedIn: https://www.linkedin.com/in/mattmartin/

The IT Matters Podcast is about IT matters and matters pertaining to IT.

What is IT Matters | Tech Solutions and Strategies for Every Industry?

Welcome to the Opkalla IT Matters Podcast, where we discuss the important matters within IT as well as the importance of IT across different industries and responsibilities.

About Opkalla:
Opkalla helps its clients navigate the confusion in the technology marketplace and choose the technology solutions that are right for their business. Opkalla works alongside IT teams to design, procure, implement and support the most complex IT solutions without an agenda or technology bias. Opkalla was founded around the belief that IT professionals deserve better, and is guided by its core values: trust, transparency and speed. For more information, visit https://opkalla.com/ or follow Opkalla on LinkedIn.

Welcome to the IT Matters podcast, where we explore why IT matters and matters pertaining to IT. Here's your host, Aaron Bock.

Welcome back to the show. Thank you for listening, as always. Keith, how are we doing today?

Fantastic, Aaron.

I was reminded, my six-year-old began first grade yesterday, and his first comment to me was "Dad, my alarm makes me tired." And I said, "Welcome to the real world, son." This is the first day of first grade, and you're going to have many days in the future where your alarm increasingly makes you tired. So, welcome to Earth.

Yeah, it's a great lesson for a first grader.

That's great and I'm glad to hear it. I'm wearing my Virginia Tech polo for the start of college football. This episode will probably come out in a couple of weeks, but hopeful for the Hokies to have a more positive season than last year.

Keith, you know, something's been on my mind that's been in the news the last couple of weeks. There was an article that came out, I think it was last week or the week before, from AMD. AMD had taken a poll of IT leaders around AI, which we're going to talk a lot about today. The headline was "IT leaders want AI, but accelerating the development stirs fear," and I think this is something that interests me, because we've heard this on our podcast, we've heard this from our customers, we've heard this at a lot of the industry events. People want AI and they think it's cool, but they don't really know how to implement it. They're not sure how good it can be or how bad it can be. It's just an interesting topic, and I think we're going to see more articles like this. I'm sure we'll talk about it today.

But any headlines, any funny stories that you've seen over the last couple of weeks?

I was gonna comment about the AI intro there. Yeah, a lot of the conversations I'm having are with leaders of companies who are led to believe that they are behind if they are not full-throatedly embracing some kind of AI strategy or tactics. So that leads to a lot of rush, a lot of hype in the market. Of course, we're in the beginning phases of the AI hype curve. But as far as AI goes, especially with generative AI, there are a lot of new players. It's difficult to understand who's who and who does what the best, there's a lot of noise in the marketplace, and we don't have long-term trends to go off of. So there's a whole lot of excitement and not a lot of direction or tried-and-true strategy for how to implement it and how to make it profitable.

Yeah, and this kind of leads us in, so I get the pleasure of introducing our guest today. But before I do introduce Matt, just so that people understand how AI can be great but also a little dangerous: before this podcast today I asked Bard, which is Google's AI that a lot of people are familiar with, "Give me a funny story about technology." The headline story that Bard gave me was a man in Australia who called the police because he thought his computer was possessed by the devil. And I said, "Okay, that's pretty funny. Give me the link where this is actually a real news story." It couldn't give me the link. It's a made-up story. It wrote a whole paragraph about this story that's totally made up, with no real-life story behind it. So be careful when you ask for stories, and we're going to talk about this. So today on the podcast, we've got Matt Martin. Matt is an experienced Fortune 500 IT leader, and Matt is on a number of advisory boards: Surge Ventures, Stealth, Trustwise, and he was newly appointed as a board advisor for Ironscales. Matt, welcome to the show. How are you doing today?

I'm doing great, man. Glad to be here.

We're excited to have you on. For our listeners out there, why don't you tell them a little bit about yourself, you know, feel free to share personal, professional, etc., and then we'll kind of get into it.

Yeah, absolutely. So I've been in security now for about 15 years, started off actually in logistics and made my way over.

So a little different backstory from most people. And so I've pretty much done everything in security you can think of.

Building SOCs and tech risk, and IAM, and everything in between. And in the last probably two years I've spent a lot of time really diving into startups, venture capital, private equity. I just get really excited seeing new innovative things coming along.

One of the fun things about it is, I've noticed that when you see the big news story about some new thing that's come out, that's about two years old in the real world, right, because they've been sitting in stealth for a year or two. So it's really cool now being on the other side of it, where I can see stuff before the rest of the world sees it. It's just really exciting.

You're gonna need a clone of yourself one day.

Personally, I live outside of Charlotte and I have 11-year-old triplets that keep me quite busy with the various sporting activities they get into, so it's pretty fun to take them to all the practices that they've got.

I already need it.

Exactly. So Matt, one thing we talk a lot about on the podcast before we get into the specifics: you said you've been in security for a long time now, over a decade and a half. What interested you in getting into security? Like, why did you know that was right for you? And why did you go down that path?

Yeah. So it starts with complete luck and accident, how I got into security. I was doing my MBA at UNC Charlotte, and Bank of America was building out an insider threat team at the time, and they were looking for people with business experience that they could teach security.

I had no clue what cybersecurity was at that point; it was a job and it sounded pretty fun. It was during that time that I got to see what it was like. I was doing email DLP. So anybody listening to this now that's in security is like, wow, that's pretty basic stuff. But for me at the time, it was like, wow, we can see this, and this is what people are trying to do.

And I just fell in love with it.

And then from there, I didn't go to school for it, so it was a lot of self-teaching. It was a lot of carrying around a notepad with me and writing down everything I heard that I had no idea about, and then going home and YouTubing it or reading about it or whatever. But I just fell in love with it from the beginning. It was just something different, something you could see having a direct impact on the company. And you know, being able to protect people, especially when you're in financial services, it was easy for me to see: hey, this is how people pay their bills, how they live, these are their retirement accounts, and being able to protect those was a pretty cool thing.

So 15 years ago, you mentioned email DLP, and you didn't start in it. Okay, so let's fast forward to today. It's a two-part question, right? So last year alone, 1,200 or more new security products came to the market, which is crazy compared to what it used to be. So I guess the question is, as a security leader, how has it changed from when you started to where it is now? I mean, obviously the tech has changed, but the landscape has changed, the risk has changed. How do you even compare and contrast security now versus then?

I mean, I probably shouldn't call them kids, but the people coming out of college now and coming into security are unbelievable. The people we've hired, you know, at my last company and this company, you kind of just go, wow, the skills they come out with, what they can do, are unbelievable. So I'd say the general population of security professionals is way better today than it was 15, 20 years ago. The amount of tools that are out there is insane. There are way too many; there are a lot of tools that just don't need to exist, they're very niche and could easily be incorporated into something else.

But for whatever reason they're there. So it becomes really important to have ways to filter through that noise, and that's where it becomes really important to have good partners that can help you with that. Because they're going to see everything; they can look at it and say, yeah, this is legit, this is not. And so as an executive, or a CISO, or whatever your role is, it's important to have somebody that's helping you filter through that so you can look at the real stuff and what you need. But I always go back to: it's changed in a lot of ways, but in a lot of ways it's the same thing. The fundamentals are still important, you know: understand your environment, patch your environment, make sure you understand the risk scenarios that are important to you, your threat landscape. The things that haven't changed aren't going to change. It's just different ways that we address them.

Yeah. And so, you know, we're going to kind of go back and forth between security and AI in this interview, and I think our listeners are going to enjoy getting that perspective. So you've been an experienced CISO in Fortune 500 companies, you're a board member of a number of security companies, and also, the first time I met you, I remember within 10 minutes we started talking about generative AI. And I said, Oh, it's so good. It's great.

It's so amazing. Like, this is what I was able to do at work today. And you were like, be careful. And I'm like, what are you talking about? And you were the one that first told me about hallucinations. So maybe help people understand: there's AI, and now there's generative AI. What is generative AI?

What do people need to know about it at a high level before we dive into details?

Yeah, I think it's important for people to understand AI isn't new. It's been around for a long time.

Every major corporation has been using some version of AI or ML for a long, long, long time.

What we're talking about now is really large language models and being able to talk to it, right.

An easy way, just to make it simple, right: it becomes a chatbot that you can just kind of talk to. You can build solutions into Teams or Slack or whatever it is that you use, and it becomes really easy.

OpenAI with ChatGPT is what sort of opened it up to the masses, where it's really easy for you to go and look at it. And what was really interesting and eye-opening to me, right, when I first started to look at how people were using it: we sent out a survey to people that we knew were using it, and we're like, hey, what are you using it for?

Like, why do you do this, right? And a lot of it, I was thinking it was going to be getting help with marketing, or getting help with writing an email or code or something like that. People were using it to replace Google. And that was the vast majority of how people were using it. It didn't dawn on me at first that you could just use it as a search replacement. So I think it's important to understand the different ways that people are going to use it, but also that it's not necessarily something new. It's just new to a lot of people. And I think in this particular case, with Gen AI, it's now businesses trying to figure out: how do we use this version of AI? How do we take advantage of it? How do we protect ourselves from it, understand what's possible, and where should we be? I think that's what people are trying to figure out right now: what is it? And is this something that's a business differentiator, or is it just a really cool thing that people can use to make their day-to-day more efficient?

Yeah, that's a great point; it is very new. And as Aaron alluded to earlier, replacing Google with Bard might not be the best fact-rich move quite yet. I mean, there are going to be a significant number of IT leaders taking on implementations and AI-oriented projects, particularly around the large language model nuance that you mentioned. And that also reminds me of a dilemma we run into when we speak to a lot of IT leaders: the perceived value of a project is sometimes mistakenly associated with the amount of effort, the time dedicated, and the sunk cost of that project. Since we're at the beginning of a hype curve, I expect a lot of hours and a lot of time spent trying to implement AI solutions, and a stubbornness to switch gears or change direction because of that sunk cost fallacy. Have you experienced this type of mentality or problems like that in your leadership career? I imagine AI would be a hotbed for that kind of fallacy.

It's not just AI, right. You see that in various things. And I think what you call out is important: leaders have to be aware of why. Why are we doing this? And really be clear up front: what is our end goal in doing this, right?

So if you're looking at an AI project, I think a lot of people skip some of the initial part, which is really understanding what your current landscape is. Like, you know, when people start looking at Gen AI, for example, a lot of times they don't really know what they want to do with it.

They're just trying to figure out what it can do and how do we want to manage it as a company.

But the first thing they've got to really understand is: what's your risk tolerance around this, right? Like, do you want to open this up to everybody? Do you want to keep it to a smaller group? Do you want to make it a part of an application? Do you want to do whatever, right? And they have to understand how much risk they're willing to live with before they even start that sort of conversation. And then it's more about figuring out: all right, if we're going to use it for this specific use case... one example I've worked on in the past is around level one SOC sort of work, right? So say we want to put a chatbot out that an employee can use to report something, whether they clicked on a bad link, or they got a malicious email, or whatever; you can make up whatever you want to.

And they have the ability to then create a chat session in Slack or Teams or whatever you use, and then say, hey, I did this.

That Gen AI bot can come back and say, Okay, what did you do, and ask all the contextual questions to find out: is this real or not? If it is real, then it gets all the context, can automatically create a ticket in your ticketing system, and then your SOC analysts can go from there, right? So you're skipping that initial part where your SOC analysts have to go do that investigation work. So if that's your use case, then you know what your benefits are, right? You're going to have increased accuracy, and you're going to make them more efficient because your resources can now go do some other things.

You're going to have more customer satisfaction because they have a higher-quality interaction. So when you have those things laid out, now you can measure it, right?

And you also know: what's the risk of this thing? Well, one of the risks is it hallucinates and says this isn't a real thing when it is, right?

But you have to know that. And I think without clearly talking about what your end state is, and in this case your end state is creating that sort of level one chatbot, you have a hard time understanding what your risks and rewards are when you're just doing this sort of overall.

And so I say all that to come back to: if companies don't have a defined sense of why they're doing it, then the sunk cost fallacy can become a real problem. Because you don't know: is it worth it or not?

Or are we just chasing bad money? And that's, that's the thing you have to sort of work out beforehand.
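To make that intake flow concrete, here is a minimal sketch of the level one SOC bot Matt describes. Everything in it (`ask_llm`, `create_ticket`, the hardwired verdict) is a hypothetical stand-in so the sketch runs on its own, not any specific product's API:

```python
# Toy sketch of the level-one SOC intake bot described above.
# ask_llm and create_ticket are hypothetical stand-ins; swap in your
# chat-model client and ticketing-system API.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; hardwired here so the sketch runs."""
    return "REAL"

def create_ticket(summary: str, context: dict) -> str:
    """Hypothetical ticketing call; returns a fake ticket ID."""
    print(f"Ticket created: {summary} | {context}")
    return "INC-0001"

def triage_report(report: str, answers: dict) -> str | None:
    """Gather context, let the model judge plausibility, file a ticket if real."""
    context = {"report": report, **answers}
    verdict = ask_llm(
        "An employee reported a possible security incident.\n"
        f"Details: {context}\n"
        "Reply with exactly REAL or NOT_REAL."
    )
    # The risk Matt flags: the model can hallucinate NOT_REAL for a real
    # incident, so keep every verdict logged for human review.
    if verdict.strip().upper().startswith("REAL"):
        return create_ticket("Employee-reported incident", context)
    return None

triage_report(
    "I clicked a link in a suspicious email",
    {"when": "this morning", "entered_credentials": "no"},
)
```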

You're saying that AI has been around for decades, but it's really the ability for it to be so conversational now, with these new tools, OpenAI and ChatGPT and Bard, that's really what's changed? Is that true?

And art too, generative art. Yeah, it's the generative part. It's almost like you have to be an expert in prompting now. The prompting is the key here.

You see prompt jobs coming up now, where you're hiring people that are prompt engineers, people who can just write better prompts. And yeah, what it really is, is making it accessible and easy to interact with for a normal person. I think that's what's completely changed it, and that at least got the hype going crazy, right? But the power of it, man, is being able to do all sorts of things behind the scenes that automate processes or do research or all kinds of stuff.

Like, I guess, if I'm a company leader, whether I'm a director, you know, head of security, or I'm a CEO or CFO or someone: there are articles out there that say if your company's not using AI in five years, your company's going to fall behind. I think it's true, but that's the buzzy article that gets the click. How do you ensure responsible use of AI? How do you make sure that it's not being used like I said in the beginning, with the story that was literally just made up?

How do I limit that within a company?

That's not a singular case, either. I mean, it was a couple of months ago that a lawyer presented a judge with precedent cases that never existed, right?

Because ChatGPT made them up, I think it was ChatGPT. Yeah. So it happens all the time, and I think it's important to understand hallucinations will happen. And I also think it's important to realize not all hallucinations are bad, right?

And let me explain what I mean by that. The way I think of it, there are three types of hallucinations. There's a benign hallucination, which just doesn't matter; it's an insignificant thing. It's not what you would expect to see, it's not right, but who cares. Then there are the bad hallucinations, which we all know about, right? That's when it creates a court case that never existed, and you give it to a judge, and you get in all kinds of trouble. Or it can be any kind of thing. But there are also what I would say are good hallucinations. And what that can mean is, if you ask it a question about your processes, right, it has access to your data, and you say, if there were a problem with this data, where would you look for it, or something like that; it's going to come up with things that it's making up, but they could be really cool new ways to do something, and cause you to see a different way to do a process. It's a hallucination, but it's not necessarily a bad one, because you're sort of prompting it to give you a hallucination.

Go back a second, just so our listeners all understand: when you say hallucination, maybe define that. What do you mean by hallucination with AI?

Yeah, so the way I'm talking about it here, a hallucination is when it gives you an unexpected result, right? Something that you wouldn't want to see. So, from a security perspective, say there's a password policy document out there and I ask, how do I change my password, or what are the password requirements, and it comes back and gives me a different thing than what's in there. My password policy is 10 characters, and it comes back and says it has to be 12.

That's a hallucination, right?

But you can use that in a way where you can ask it to think of completely different things that aren't in your processes. So if your current process says do A, B, C, D, and you ask it to think of a better way or a different way to do that, it might say A, D, E, B, right? As far as your workflow goes, it may give you a different way to look at how you do things, which can be a good hallucination. Not always a good hallucination, but it can be. It just gives you a different way to see a problem. And so a hallucination is just sort of an unexpected result, what you don't expect to see out of it.
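Matt's password example also points at the simplest defense: check the generated answer against the source document instead of trusting it. A toy illustration of that check, where the policy text, the model answer, and the regex are all made up for the example:

```python
import re

# Toy check for the password-policy hallucination described above:
# compare the number the model claims against the number actually in
# the policy document, rather than trusting the generated answer.

POLICY_DOC = "Passwords must be at least 10 characters long."
MODEL_ANSWER = "Your password has to be at least 12 characters."  # hallucinated

def min_length(text: str) -> int | None:
    """Pull the 'N characters' figure out of a sentence, if present."""
    match = re.search(r"(\d+)\s+characters", text)
    return int(match.group(1)) if match else None

claimed, actual = min_length(MODEL_ANSWER), min_length(POLICY_DOC)
if claimed != actual:
    print(f"Hallucination: model said {claimed}, policy says {actual}")
```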

Yeah, that makes sense. For the first couple of sentences I was imagining myself hallucinating and coming up with benign hallucinations, so I'm glad you clarified that. I have a question as well. Let's say that you're talking to the CIO of a financial institution who is risk averse, of course, but is receiving pressure, perhaps from the board, perhaps from colleagues at other organizations, to look into AI to increase optimization and add value to the business. What would you tell someone new to generative AI? What is the safe first landing spot? What are some platforms or technologies that would be the best first step in an AI journey? Because there are a lot of places to land. What would be the most risk-averse, safe, mature technology, perhaps? What would you tell that leader?

Yeah, so I'd probably plug one of my favorite nonprofits. There's a nonprofit called the Responsible AI Institute, and they do a really good job of helping companies understand how to do AI in a responsible way. And it goes back to Aaron's question earlier: how do you do it in a way where it's ethical, it's unbiased, it's explainable, it's auditable? They just do a really great job with that. So I would point them in that direction first, probably; they have a really good group of people there that can help with how you would set it up and things like that. There are so many different ways to go about it, you know; Microsoft and Google and everybody's got their own version of it. I would say, for somebody that's looking to use Gen AI, start with a very simplistic process that doesn't involve a lot of sensitive data. Whether it's trying to automate an intake process or something like that, like I said before, a level one SOC analyst type thing, or password resets for a help desk, something where it's impactful, it can have a positive impact on your company, but if it does have an issue, it's not going to all of a sudden release a bunch of private information. But I think it's important to really think through where you can jump in that way.

But there are also other parts of your company I think you have to be aware of, like: what does your data look like?

All right, if you in general have a really good data program, it's gonna be a lot easier. If your data is, eh... if you have a hard time getting an answer, and I think everybody listening is going to know that about their own companies, right? Like, if you're trying to find something out and it's a little bit of a pain to get accurate data, it's going to be hard to implement Gen AI at a larger scale until you fix that. You've really got to have a pretty good data program in place to really go full out into Gen AI.

Sounds like that's step one: connecting with the right partners to help you get your data in order, data in house. Or else it's probably going to be a step backwards in a lot of ways, and a sunk cost: it takes a lot of effort but doesn't yield the results, because the data is not where it needs to be.

Yeah, and you're gonna get tons and tons of returns on fixing your data program outside of AI, right.

There's a lot of other benefits to having a good data program.

So I would say that's a big part of it. But I don't want to make it sound like you've got to do that first. Before you go full scale you do, but you can get into it without that, right? You can build a small document repository of policies and just make it a bot where you can ask it questions about your internal IT policies, right? And you don't have to go fix your data program for that. That's an example of a way you could do it. But there are a few different ways. I think it's important to find a good partner, somebody who's kind of done it before, that you can just talk through it with and figure out, for example: do you want to go private or public? Are you going to use your own data? Then you want to build a private one, right? And if you're going to do private, who do you partner with?

Well, it depends on what you already have. If you have, you know, an E5 license from Microsoft, it probably makes sense to work with Microsoft, right? Those are the kinds of things a good partner will help you think through, the questions to ask yourself.

But don't overthink it would be the bigger thing I would say. Just get started. There are ways to do it safely without exposing a bunch of data. I just probably wouldn't throw a bunch of documents up into OpenAI, but there are plenty of people out there that can help you with a private one, Microsoft or a bunch of local places that can do that, too.

Yeah, I always tell people, it amazes me sometimes when I read articles and I listen to what's going on: we're over here so far down the AI path, but then I meet people who still have never really heard of it, never used it. I sit on a nonprofit board, and I said, have we considered using AI to, like, write emails and stuff? And they were like, no, what do we do? And I just said, why don't you just try asking it to write an email to your donors. And they were like, oh my gosh, this saves us so much time. So I always tell people: just download one or go to the webpage and test one of them. Try ChatGPT or try Bard and just play around with it, ask some high-level questions to get to know it. I want to go back, and this is maybe a much more generational question about what's going to happen. So, you mentioned you have 11-year-old triplets, right? Something that I'm curious about: you know generative AI, you know what it can do, you know the power, but you also know the risks. For students, for young minds, high school, middle school, college, I think there's a little bit of a risk out there now that traditional industries are going to be widely disrupted. If you think about repetitive industries: I used to be an auditor, and I think about some of the things I used to test; a lot of that can be done with AI a lot faster, and much more accurately, than I could have done it. So if you're a student and you want to make sure that you go into a field that is not going to get replaced by this, how would you advise a group of students you're talking to about that? And is this something that you think we're educating young minds enough on yet?

So I look at Gen AI, ChatGPT, all these things as just tools, right? And so I think what I would do is just teach them how to utilize it for what it is. I would imagine, in college, if somebody's not using it to do basic research to get started on a paper, they're crazy. I mean, that's exactly what I would be doing. I'd go and say, hey, tell me five books I need to read about this, or, I want to make this point, give me some citations that would help me strengthen this argument. Now, the important thing to teach them, and I tell my kids now too, is: go check it, right? When it tells you something, go validate that it is accurate.

And I think that's probably the biggest thing right now. By the time they get to the career field, I hope that somebody has developed a way to detect hallucinations so that validation isn't as imperative as it is today. I think the part that's really holding back Gen AI from just completely changing the world is that you still have hallucinations that go undetected, and there's not a really good solution for identifying and catching them.

Once we do that, then people can fully trust it, and it's gonna go crazy. But back to what you're asking, I always tell students that I talk to today the same thing: don't worry about what's cool, what technology's going crazy, or whatever. Just find something you're passionate about and you'll figure it out from there.

Like, don't chase the hype cycle, it all changes so fast.

You know, the people that are going into careers as prompt engineers... you look back, when I first got into security, being an MCSE was like the greatest thing ever, and now you can hire a ton of them for peanuts, right? So I just think: don't chase it. Figure out what you're passionate about. If you're passionate about AI, cool, learn how it works, so you can learn how to make it better. And that can be a really cool career path, right? How can we integrate it into different applications and have it do different things? Don't just be really great at using the tool; learn how it works.

So, Keith, I know you've got a question coming up, but I want to ask a follow-up right there. You mentioned knowing how it works. We kind of jumped in, but how does generative AI work? Like, how is it actually working? What do people need to know about that?

Yes, at a high level, right, without going way too much into it: you basically ask it the question, then it vectorizes it. The best way to describe it: when it's trying to figure out how to respond back and how to interpret, it's sort of guessing what the next word is based on everything it knows. And so when you're building a solution, what you're really trying to do is figure out, when an employee asks the bot a question, for example, where does it get that answer from, right? And so what you're training it on is: do you want it to go out into the world, to the public, and get the answer?

Or do you want it to go internally to get an answer, right, whatever. So it goes and does that. Say it goes internally: it goes and finds the documents that are going to support that answer, then it converts it back, goes into the large language model, and kicks it back out as the answer in English, right, or whatever language you're using.

And there are some solutions out there that are really cool in that they'll give you back the source citation, the documents it pulled the answer from; the other ones are just gonna give you the answer. But that's a super high level version of how the flow goes. To go into much more detail, we'll probably need another episode.
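For readers who want the shape of that retrieve-then-generate flow in code, here is a rough sketch. The word-overlap scoring is a toy stand-in for the vectorize-and-search step Matt mentions (real systems embed the question and documents and use similarity search), and `ask_llm` is a hypothetical model call, not any particular vendor's API:

```python
import re

# Toy retrieve-then-generate (RAG) flow: find supporting documents,
# then have the model answer from them with source citations.

DOCUMENTS = {
    "password-policy.md": "The password policy: passwords must be at least 10 characters long.",
    "vpn-guide.md": "Connect to the VPN before accessing internal tools.",
}

def ask_llm(prompt: str) -> str:
    """Hypothetical chat-model call; hardwired so the sketch runs."""
    return "Passwords must be at least 10 characters. [password-policy.md]"

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by shared words with the question (toy retrieval)."""
    q = tokens(question)
    ranked = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(q & tokens(item[1])),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    # Constraining the model to the retrieved documents is what makes
    # the source citations Matt mentions possible.
    return ask_llm(f"Using only these documents:\n{context}\n\nQ: {question}")

print(answer("What are the password requirements?"))
```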

One of the ways I've heard it described is, effectively, you have a bot that's in charge of creating bots and a bot that's in charge of the tests. The creating bot makes variations of different bots; let's say you have 100 of them. They all take the same test, and typically it's a very simple test, like: do you recognize a strawberry in this image? And 0.11% of the bots recognize that there's a strawberry in the image. Then the testing bot says, hey, this is everyone who passed, this is everyone who failed, and the bot that is creating new versions looks at the ones that succeeded and makes slight variations, much more similar to the ones that passed the test. Then they take the test again, and you move from, like, 0.0001% to 0.0002%, and eventually you get up to an acceptable level. And a lot of these AI companies, from the experience that I've had, are very singular about the problem they're trying to solve. It's not broad. When it comes to school systems, I'm working with a school system that would like to recognize intruders with weapons entering the schools. But a lot of times these cameras will mistake an umbrella for a rifle, or they're really great at recognizing if there's a fight, an escalation going on in school, but they're not great at recognizing what a weapon looks like, or if a student is in an area they're not supposed to be.
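What Keith describes is essentially evolutionary search: score a population, keep the winner, and mutate it into the next generation. A toy version of that loop, where the target vector and all the numbers are arbitrary, purely for illustration:

```python
import random

# Toy select-and-mutate loop: each "bot" is a list of numbers, the
# "test" scores how close its behavior is to a target, and every
# generation clones the best performer with small random variations.

TARGET = [0.9, 0.1, 0.5]  # stands in for "recognizes the strawberry"

def score(bot: list[float]) -> float:
    """Higher is better: negative squared distance from the target."""
    return -sum((b - t) ** 2 for b, t in zip(bot, TARGET))

def mutate(bot: list[float]) -> list[float]:
    """Slight variation of a bot, per the 'much more similar' step."""
    return [b + random.gauss(0, 0.05) for b in bot]

population = [[random.random() for _ in range(3)] for _ in range(100)]
for _ in range(50):
    best = max(population, key=score)                # the testing bot's verdict
    population = [mutate(best) for _ in range(100)]  # breed from the winner

best = max(population, key=score)
print(f"best bot after 50 generations: {best} -> score {score(best):.5f}")
```

One caveat worth noting: this selection-and-mutation loop matches evolutionary approaches, whereas the large language models discussed elsewhere in this episode are trained with gradient descent, so treat it as an analogy rather than a description of how ChatGPT is built.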

So they're very focused in their initiatives. I see the future as a variety of AI companies that focus on particular issues, which is where a lot of generative AI companies are today, and whoever is able to come up over the top, make sense of all these point solutions, and pull them into a singular platform to solve a broader business problem is where I see it going. Do you have any comments on that, Matt?

Yeah, I think you're right. There are lots of improvements to be made across the board. I think what's important to realize is how much it's improved in six months, right? You look at OpenAI, and ChatGPT just gets better and better and better, and I think all of it will continue down that trend; they'll continue to get better. What's interesting to watch, for example in security, I don't know, it's a bold prediction, but my prediction is, within two or three years, if you're a security tool and you don't have AI as a part of your solution, you're going to be obsolete.

They're all going to have to. You look at the space where I started, email DLP: if you're not using AI there, it's not gonna last very long. I think in two or three years we'll look back on what it is today and be mind-blown by the improvements in being able to detect things and see things. I think what you're hitting on is the overall stuff. What I would really like to see, from an overarching perspective, is the governance stuff around it, and what shows up on that platform: making sure that people are actually using it correctly, that it's not going off on its own and doing some crazy stuff, and that we get the bias out. I think we see a lot of that right now.

And in history we've seen plenty of examples: financial institutions that ended up giving higher credit limits to men than women because the AI was built with bias, or image recognition that was trained on white guys and has a hard time with minorities, right? So it's historically a problem, but I'm hopeful that shortly we'll see more tools come up to help us with those things and sort of pull it all together.

Speaking of tools, you're an advisor to the boards of a couple of AI companies. First of all, you're an adviser to the board of a company named Trustwave, a generative AI company. You are also...

Trustwise, Trustwise.

Excuse me. Yeah, Trustwise. And then another one named Stealth, that is an AI building platform. I think it helps organizations build their own AI initiatives or processes.

Can you talk a little bit about what these companies do?

So the company in Stealth I can't really talk about, but yeah, it's coming.

Trustwise is one that is working on governance over AI, sort of figuring out how to help executives and business leaders understand: are their AI solutions putting bias in there?

Are they having hallucinations?

Are they ethical? Is it explainable? Is it auditable?

Those sorts of things. So that's what Trustwise is built to do. And I think it's an amazing group of people; they're one of the more fun groups I've worked with as a team.

And I'm pretty passionate about solving that problem overall.

Because I think there's a lot of, I don't want to say AI running wild, but there's a lot of AI out there that could use some help in just being able to articulate where it is on that spectrum, right, and be able to prove it out: hey, this is good AI, we're doing it responsibly.

Matt, we're coming up on time, so here's one of the final questions we ask all of our guests, and feel free to share any other closing or final thoughts. Say you could speak to the whole world: you've got 10 million people listening to you all at once, and you get to give a piece of advice, or share some piece of knowledge with them that you really want them to hear. It can be related to Gen AI, or to anything else you've learned in your career.

What is that advice and what is that knowledge that you would share?

I would say, if I only have one thing to say, it's not going to be about Gen AI. It's gonna be: dream big and don't give up on it. Figure out what you want to do, what you want to be, and don't stop until you get it.

I love it. That's great advice in anything, not just what we're talking about today. Matt, thank you so much for joining us on the show. I think our listeners will enjoy this one greatly. I think there's probably enough here for a part two at some point; maybe we can do it a year from now and look back at how young and dumb we were with AI at this point in '23. But once again, thanks for joining. For all our listeners out there, if you are just listening to us for the first time, thank you. Subscribe to us on your favorite podcasting station, give us feedback, or reach out to Keith and me on LinkedIn; we're always happy to entertain episode ideas or guests out there. So thanks for listening. Matt, thanks for joining. Keith, have a great day. And IT does matter, so thanks everyone.

Thanks for listening.

The IT Matters podcast is produced by Opkalla, an IT advisory firm that helps businesses navigate the vast and complex IT marketplace. Learn more about Opkalla at opkalla.com.