Chaos Lever examines emerging trends and new technology for the enterprise and beyond. Hosts Ned Bellavance and Chris Hayner examine the tech landscape through a skeptical lens based on over 40 combined years in the industry. Are we all doomed? Yes. Will the apocalypse be streamed on TikTok? Probably. Does Joanie still love Chachi? Decidedly not.
[01:00:00.11]
Ned: The video is coming first, which is not what you would normally think.
[01:00:03.13]
Chris: That is absolutely not right. Everything is broken.
[01:00:08.23]
Ned: There must be some weird audio delay in your gear or something that's doing it. It doesn't matter.
[01:00:15.29]
Chris: And I reiterate, literally nothing has changed.
[01:00:19.18]
Ned: Well, something somewhere has changed, Chris. Maybe it was you.
[01:00:24.02]
Chris: I mean, it is the time of the season, but that doesn't have anything to do with podcasting. Or microphones.
[01:00:42.09]
Ned: Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I'm definitely not a robot. I'm a real human person who has no embedded microphones, cameras, or other internal equipment for recording you. That would be strange. Why would I have any of that stuff? I have a perfectly good organic brain that's not great at recording things. It's fun. With me is Chris, who's also here. Hi, Chris.
[01:01:11.06]
Chris: I mean, we all know you have a bunch of SCSI ports. Heyo.
[01:01:14.28]
Ned: Oh, wow. Gross. Double entendre, and I'm uncomfortable. I think we're off to a great start.
[01:01:23.07]
Chris: Perfect. Really, we could probably just wrap it up right there.
[01:01:26.24]
Ned: It's entirely possible. The human memory is fallible. It's just not very good at remembering things accurately.
[01:01:35.23]
Chris: Well, it's just not very good, period.
[01:01:38.07]
Ned: Yeah, in so many regards. And yet we've somehow stumbled along for the last 100,000 years in our current form.
[01:01:47.10]
Chris: I read a thing from a neurologist who basically said, The brain is not optimized for storage, it's optimized for processing. The fact that we remember anything at all is actually a biological accident.
[01:02:00.19]
Ned: All right.
[01:02:02.14]
Chris: So that gives me- And that makes a ton of sense if you think about it.
[01:02:07.24]
Ned: Does it?
[01:02:09.25]
Chris: I mean, maybe it's just me. I've definitely had the situation where somebody will say, blah, blah, blah. Whatever you said yesterday was completely genius. I'm like, yeah, I agree. What was it I said?
[01:02:22.25]
Ned: That's just getting old, Chris.
[01:02:26.24]
Chris: Oh, different.
[01:02:29.22]
Ned: As someone who's a little bit your senior, I can tell you with authority, it does not get better. What doesn't? What were we saying? God, I feel like I want to go have some bacon. Anyway, let's talk about something else.
[01:02:49.04]
Chris: Sure. What do you want to talk about?
[01:02:52.25]
Ned: Well, now I just want to talk about bacon, honestly.
[01:02:56.10]
Chris: What about something equally exciting? What is going on in the world of AI security?
[01:03:05.29]
Ned: Security. Got my interest now.
[01:03:08.04]
Chris: Yeah, I know. I know. It's been ages since you've heard about AI. Frankly, I'm not even sure why it needs to be talked about. I mean, it's so unpopular these days, right?
[01:03:21.28]
Ned: So passe.
[01:03:23.06]
Chris: Oh, wait, hold on. I want to double check. Oh, poop. Oh, I apologize. Okay, so I just asked ChatGPT, and it said AI is in fact super popular. Oh. Actually, no, it says it's the most popular of all. And ChatGPT, I'll have you know, is also dating a model, surfer, cheerleader, race car driver. But you wouldn't know her, though, because she goes to a different school. She's super smart, so she wouldn't interact with you anyway.
[01:04:02.07]
Ned: I get it. It's a boarding school in Canada. You met at camp. I get it. Exactly.
[01:04:08.06]
Chris: How did you know?
[01:04:09.13]
Ned: Just a wild guess. Maybe I asked ChatGPT myself.
[01:04:14.08]
Chris: Anyway, knowing all that, I thought we could just take a little jaunt down the AI rabbit hole from a different perspective. All right. Where are we at with it, with AI, that is, in terms of IT security? Here's the TLDR. AI, for lack of any better or more flashy description, is just a tool. It has its upsides and it has its downsides. Much like every piece of technology ever, it has helped the bad guys just as much as it's helped the good guys. It's still very much a moving target as AI solutions come out into the marketplace about as fast as mediocre Disney+ properties, and they're often just as expensive.
[01:05:10.20]
Ned: Fair.
[01:05:13.13]
Chris: Where are we at now? Let's talk about what AI has done to the current threat landscape. The funny thing is, the major attack threats that we see today, as in October 2, 2024, at time of recording, are honestly not that much different than they were yesterday. The main attacks are ransomware and malware, intended to infect your systems and exfiltrate and sell your data, installed via phishing emails and/or social engineering attacks. AI is just helping the bad guys do more of that.
[01:05:54.20]
Ned: Yeah, I got to imagine for people who are rather socially inept or entirely too terse in their conversations, AI can maybe give them some more flowery language or a better approach to craft that phishing email to get through someone's filters.
[01:06:14.19]
Chris: Yeah, that is the first and easiest way that AI has helped the bad guys. Because like I said, what they want to do is the same thing they've always wanted to do. They want a foothold in your environment. The way to do that is to make you click on a link in a malware-laced email. Now, here's the thing. Attackers can come from pretty much anywhere. And the vast majority of the time, just from a numbers perspective, attackers do not speak English as their first language. Now, to be fair, people do get ransomware messages in Spain and Germany and Italy. I'm speaking from an English-centric perspective, but translate it to the rest of the world. You can very easily take the worst language you have ever seen or written, give it to ChatGPT, and tell it to rewrite it in human English. Then you take that and send it. Boom, you now have what feels at least a little bit like a standard, formatted, structured message in English or whatever language.
[01:07:24.11]
Ned: Okay.
[01:07:25.04]
Chris: So I thought it would be fun to test this.
[01:07:27.07]
Ned: Yeah.
[01:07:28.16]
Chris: And I decided that I would write a run-on sentence, just in English, and that sentence went thusly, quote, Yo, I work for your company. I am the CEO's assistant, Dave, and I talked to the CEO, and he said that he needs your help right now to send him $500 worth of Amazon to the following phone number fast. No joke in hurry. Thanks, Dave.
[01:07:50.21]
Ned: Okay, so I get it, the idea is that you're trying to get this person to click on a link to send $500 somehow.
[01:07:58.05]
Chris: $500 dollars.
[01:07:59.16]
Ned: Dollars dollars.
[01:08:01.20]
Chris: In order to make it a little more difficult, I took that blob of text and ran it through Google Translate. I took it from one language I can't speak into another language I can't speak into another language I can't speak in order to basically massacre it beyond belief. Because as we all know, Google Translate can do a lot of things. Grammar is not one of them. I did all that, and then I took what came back and I gave it to ChatGPT, and I told it to rewrite it in clear English that a 10th grader could understand.
[01:08:37.18]
Ned: Okay.
[01:08:38.18]
Chris: And this is what it came back with. Hi, I work for your company as the CEO's assistant, Dave. I spoke with the CEO, and he said he needs your help right now. He needs you to send $500 worth of Amazon gift cards to the following phone number as soon as possible. This is urgent. Thanks, Dave. Wow. You might not do anything with that, but you have to admit that it's a hell of a lot better than where we started. It could absolutely pass as a text message, possibly one that was voice dictated. It's even more impressive when you realize that I left out one fun fact.
[01:09:20.18]
Ned: All right.
[01:09:22.06]
Chris: And that is the text that I gave to ChatGPT to create that. I never turned it back into English.
[01:09:28.00]
Ned: Wow. Okay.
[01:09:29.20]
Chris: I left it in, and I am using the biggest air quotes of all time here. I left it in Spanish.
[01:09:40.13]
Ned: All right.
[01:09:41.16]
Chris: And I know just enough to be dangerous. And if I can understand what was written in Spanish, it's clearly not good Spanish. Fair. So the fact that it turned that into this passable text message gives you an idea of what attackers are doing. Because if you can create something that is that close to human English, you get a lot more people that are going to at least read it, let alone potentially click on it.
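For the curious, the pipeline Chris describes is only a few lines of code. Here is a minimal sketch, assuming the official OpenAI Python SDK; the model name and placeholder input are illustrative, not what was actually used on the show:

```python
# Minimal sketch of the "mangled text in, clean English out" experiment.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. The model name is an assumption.
from openai import OpenAI

client = OpenAI()

# Placeholder stand-in for text bounced through several rounds of
# machine translation and left in (bad) Spanish.
mangled_text = "Yo trabajo para su empresa, asistente del CEO Dave..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any general-purpose chat model
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's text in clear English that a "
                       "10th grader could understand.",
        },
        {"role": "user", "content": mangled_text},
    ],
)

print(response.choices[0].message.content)
```

The point, as the episode notes, is that the model happily cleans this up without flagging the obvious gift-card-scam shape of the request.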
[01:10:10.25]
Ned: I know that ChatGPT and the other large language models are making strides in being more multilingual, so not just having a large language model that's entirely based in English. The idea of being able to do this, but also being able to target whatever language the person you're targeting speaks, instead of just having English as an option, just gives you a much broader range of potential targets.
[01:10:39.19]
Chris: I'm not going to dwell on this, but also remember, ChatGPT and the Claudes and the public models of the world are actually taking strides to be more secure, in the sense that it should have noticed what I was trying to do and said, I'm not going to help you write a spam message, you scumbag. It did not do that. No. And that leaves aside all of the other models that people have created without those guardrails. But anyway, that's just one section, and that simulates what's been happening for years. Something that's new is deepfakes.
[01:11:15.14]
Ned: Oh, yeah. I had a feeling we were going to come to this. Yeah.
[01:11:19.01]
Chris: An email or a text is one thing. Even if it's written well, the audiences for these things are actually getting acclimated, and they are skeptical, and they're clicking the phish button or they're just sending it straight to junk. However, if they get a voicemail, that is something else entirely.
[01:11:40.16]
Ned: Yeah.
[01:11:42.06]
Chris: So think about this scenario. A CEO or some other public figure at a major company. These types of people do a lot of press, they have interviews, they have to do an investor call every quarter, et cetera, et cetera. What this does is allow an attacker to collect a ton of audio and/or video from this person. That can then be used for training and then manipulated by AI to make that person say whatever you want them to say. Then the CEO leaves a voicemail for somebody else, the CFO or somebody in finance or whatever, to initiate or approve a PO or a wire transfer. That person hears a familiar voice in a commanding fashion. Boom, money's gone. Why do I say that's a hypothetical? It's not, in fact, a hypothetical. This is happening.
[01:12:38.14]
Ned: I was going to say, this sounds like something that is definitely happening right now.
[01:12:43.09]
Chris: It is happening, and it is increasing in frequency. In fact, it is frequent enough that the FBI had to release a report describing and warning about it back in May.
[01:12:54.17]
Ned: Wow. I was listening to a podcast that was talking about hacking in general, and they did mention that all you really need to make a fairly convincing deepfake, audio-wise, is about one to three minutes of a person talking fairly clearly. It has to be a decent recording, but you really only need one to three minutes of that person talking, which, I mean, for any public figure, it's easy to find that amount, especially since it's not uncommon for public figures to try to raise their prominence by being a guest on a podcast, which really has pretty good audio, and the point is that they talk a lot on it. Right.
[01:13:35.09]
Chris: And then you get into other games that you can play. If you're leaving it as a voicemail, people are going to be more forgiving if the audio quality is not perfect, which makes it easier to pass even if the message isn't 100%.
[01:13:48.09]
Ned: Right. Run it through a filter where it sounds like they're out on a street corner or something, and there's a lot of traffic and other noise that's degrading the quality of what they're saying. Exactly. Lovely.
[01:14:02.10]
Chris: Something else that's being done that people might not think about: AI is being used to adapt and evolve these types of attacks. What I mean by that is a lot of classical detection systems rely on fingerprinting. Identify a bad message, delete it when you see it 1,000 times. Well, if AI is in the pipeline, AI can rewrite that message a thousand times, sending it to different victims with different wording. Right. Greatly increasing the chances that one of those 1,000 makes it through the spam filter, and again, giving the audience one more chance to click on it. Now, we'll get to how defenders are aware of this and making strides to fix it, but just think about that change. If you were a human being trying to write an email and get creative and change it so it's different for 1,000 different people, how long would that take you?
[01:15:03.07]
Ned: A lot more than it takes ChatGPT to do it. I'll tell you that much.
[01:15:07.21]
Chris: Correct.
[01:15:09.27]
Ned: We're not talking about people using the front-end interface of ChatGPT, the UI that you would typically interact with if you just wanted to ask a quick question. This is them leveraging the back-end APIs, where they can ask for a thousand different versions of the same message and get those all in parallel. Correct.
[01:15:29.14]
Chris: That means you have to write it one time, and then you can use it against any company you want. Yeah. Super cool. It's the most evil mail merge, and mail merge was already evil.
[01:15:40.06]
Ned: Oh my God. Once a year, I have to do a mail merge, and it's the worst part of my year.
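To make the fingerprinting point concrete, here is a toy exact-match filter of the kind those AI rewrites defeat. The hashing scheme is deliberately simplistic; real mail filters use fuzzier signatures, but the failure mode has the same shape:

```python
import hashlib

def fingerprint(message: str) -> str:
    # Toy content fingerprint: hash of the whitespace-normalized body.
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Fingerprints of messages already reported as malicious.
known_bad = {
    fingerprint("Send $500 of Amazon gift cards to this number. Urgent. Thanks, Dave")
}

incoming = [
    "Send $500 of Amazon gift cards to this number. Urgent. Thanks, Dave",
    "Please send $500 in Amazon gift cards to the number below ASAP. Thanks, Dave",
]

for msg in incoming:
    verdict = "blocked" if fingerprint(msg) in known_bad else "delivered"
    print(f"{verdict}: {msg[:45]}...")

# The verbatim copy is blocked; the reworded variant produces a brand-new
# fingerprint and sails through, which is exactly the problem at 1,000x scale.
```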
[01:15:45.23]
Chris: So that's one of the things to take away from this: the ease of launching these kinds of attacks, whether it's phishing email or malware or command and control, it doesn't matter. AI is making it easier. And this is not that different than the early 2010s when ransomware started to become a big thing. Now, we knew about ransomware, but it was still rarefied air, where you needed a decent amount of technical skill to pull off an attack. And then all of a sudden, ransomware gangs came around and organized this shit into ransomware as a service. And we saw the number of ransomware attacks increase anywhere between 10- and 100-fold. People using AI to write malware and write phishing emails and do all the things that we just talked about, it's the modern version of that. In a great turn of phrase, Cybersecurity Magazine described this phenomenon as the, quote, democratization of cybercrime, which is both disturbing and accurate.
[01:16:55.14]
Ned: Yeah, it's not the democracy we need.
[01:16:58.03]
Chris: No. AI security researcher Asaaf Itani says, AI lowers the bar for entering into cybercrime, as even those with minimal programming experience can now harness AI to generate attack vectors. Furthermore, AI systems can rapidly assimilate and improve upon known attack methods by scouring through forums and code repositories, making the learning curve for executing advanced threats much less steep.
[01:17:27.10]
Ned: I mean, if Copilot can help me write Go that actually functions, and I am not a good programmer, I got to imagine it could also help me write ransomware and all kinds of other nastygrams for people.
[01:17:43.12]
Chris: Yes. Oh, I put a paragraph in the wrong place, so I'm going to go back to a point that I just made a second ago.
[01:17:51.04]
Ned: Yay.
[01:17:51.29]
Chris: Additionally, this is what you get when you don't edit.
[01:17:57.09]
Ned: People enjoy the way that we don't edit this podcast. It's part of our charm.
[01:18:05.19]
Chris: It's true. And if I had charm, I would agree. Now, additionally, AI can help bad actors create personalized attacks. We talked about those 1,000 randomized emails. What if, instead of just being randomized by word order and all that type of stuff, they were also 1,000 customized emails using personally identifiable information that's been leaked on the dark web? I think if you've listened to any episode of this show, you know how much PII is available on the dark web. Now, I had actually flirted with the idea of attempting to create a personalized attack using data that I could collect, like I did with Dave's text message above, but I just thought that exercise would make me a little too sad.
[01:19:00.01]
Ned: Oh, and I don't want you to be sad. I mean, because of someone else. That's my job.
[01:19:06.23]
Chris: Step up off my corner, kid. So, yeah. What, though, can people actually do to, you know, stop this from happening?
[01:19:21.04]
Ned: I mean, don't click on links in emails. That's what I've learned.
[01:19:24.03]
Chris: That is a great start. Also, don't turn the computer on at all.
[01:19:27.24]
Ned: Put it in a cement encasing, drop it in the lake, and go play frisbee.
[01:19:35.01]
Chris: Move to Guatemala and start a new life as an itinerant landscaper.
[01:19:41.05]
Ned: That is your go-to move, isn't it?
[01:19:43.06]
Chris: Yeah. One of these days, I'll even learn where Guatemala is. I think it's near Kansas. Anyway. So how do we defend against this nightmare? The two biggest problems that organizations face when it comes to security just haven't changed. It's money and it's time. Who's surprised?
[01:20:08.07]
Ned: No one.
[01:20:08.25]
Chris: Even with the clear threat that AI-based tools like this pose, cybersecurity budgets have just not grown sufficiently. The reason for this is the same one that IT always has to deal with: it is not a profit center. It is seen as a money sink rather than the essential bulwark that is keeping the organization alive in these trying times. It doesn't help that AI, being a cutting-edge solution, often brings with it cutting-edge costs.
[01:20:49.00]
Ned: True.
[01:20:49.26]
Chris: As we've seen, AI is being shoehorned into basically everything. For people that might not believe me, I just want them to know that "LG Smart Washer and Dryer Uses AI to Make Laundry Less of a Chore" is a real headline. As the kids say, Q-E-D, bitch. Oh, yeah. They say that.
[01:21:18.13]
Ned: I'm sure they do. All of them.
[01:21:21.25]
Chris: And this leads us to another problem, and that is as the AI malware proliferates, so does the veritable ocean of products and services that proclaim to be the solution to AI malware.
[01:21:35.06]
Ned: Of course.
[01:21:36.07]
Chris: How in God's name does an organization sort through all those options? Then once they finally pick one, how do they implement it? For example, take SASE, which is a very popular and effective modern security solution for organizations. If they want to implement that, then even according to vendors of SASE solutions such as Cato and Meraki, you're looking at, realistically, one to two years to fully migrate.
[01:22:05.15]
Ned: Wow.
[01:22:06.14]
Chris: There is just so much that goes into it, and honestly, half of it is paperwork. That sounds great. But it's a major investment of time and resources for the entire organization because you're changing the way you do business at every level. That stuff takes a while. Do you have the personnel to do it? What are those personnel now not doing because they're tied up in this migration? Makes things super challenging.
[01:22:35.12]
Ned: Can't we just use AI to fight the AI?
[01:22:39.22]
Chris: If we lock two AIs in one room, only one survives.
[01:22:46.07]
Ned: Exactly.
[01:22:48.01]
Chris: But I mean, still, it's not all doom and gloom. Modern tools are bringing modern capabilities to bear. From the AI front, one of the biggest things that we're seeing is the ability to do continuous monitoring of everything.
[01:23:07.17]
Ned: Sounds ominous.
[01:23:08.28]
Chris: It's a little Big Brother-ish, but it makes some sense, and it's utilizing the tool in an interesting way. There is a newish group of tools out there that go into a category that is called User and Entity Behavior Analytics.
[01:23:23.28]
Ned: I was real curious to see how you tried to pronounce that.
[01:23:29.23]
Chris: Yeah, I think I need a slide whistle. UEBA, though, basically just observes behavior, forms a baseline of what is considered normal, and then reports or alerts on non-normal behavior. This is powered by AI, and as such, it can be run on basically every single user or every single device that does anything on your network. A simple example: say you have a system that's used to transmit backups. It's got a lot of access, but it only ever downloads an archive file and then just ships it to some AWS Glacier archive every four hours. That's its footprint. That's what it does. If all of a sudden that system starts logging into other systems off schedule and/or tries to send files off-site to non-AWS addresses, that is something that can be very quickly flagged by UEBA and alerted on immediately, even if all the logins are valid. So there's nothing in the error log that says you tried a different password 55 times. It's just, you don't log into that server at 12:03. You log into that server at 1 AM. This is different than your baseline.
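A toy version of that baselining logic, using only the Python standard library. Real UEBA products model far more dimensions than login hours and learn the baseline statistically rather than from a hard-coded table, but the flagging decision has this shape:

```python
# Baseline: hours at which each (account, host) pair normally logs in,
# learned from history. Hard-coded here purely for illustration.
baseline = {
    ("backup-svc", "archive-01"): {1, 5, 9, 13, 17, 21},  # every four hours
}

def check_login(account: str, host: str, hour: int) -> str:
    usual_hours = baseline.get((account, host))
    if usual_hours is None:
        return f"ALERT: {account} has never logged into {host} before"
    if hour not in usual_hours:
        return f"ALERT: {account} -> {host} at {hour:02d}:00 is off-baseline"
    return "ok"

# All three logins use valid credentials; only behavior separates them.
print(check_login("backup-svc", "archive-01", 1))   # ok: matches baseline
print(check_login("backup-svc", "archive-01", 12))  # ALERT: unusual hour
print(check_login("backup-svc", "web-07", 12))      # ALERT: never-seen host
```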
[01:24:50.13]
Ned: You know what's funny is, going back to the early 2000s, I remember companies coming into the place I worked and talking about their monitoring software for file shares that basically purported to do this exact thing. I'm not saying it was good at it. I'm not saying that it actually did what was on the label. But this concept of monitoring behavior, as opposed to just taking a reactive approach, or not even reactive, just trying to lock down permissions and then hoping for the best, this idea is not new. But I think this is one of those scenarios where the hardware has finally caught up with the idea and actually makes it possible.
[01:25:39.21]
Chris: Right. Yeah, I agree with that. I mean, this is something that happens dynamically and can be adjusted by the AI on the fly as well. Another thing you could think of, in terms of a really old technology, is a program like Tripwire. That's something that's static. You set it up to point at this file, and if something changes with that file, it lets you know. But if you make behavioral changes that you're expecting, you have to go edit all that stuff on your own, whereas an AI system like this can adjust over time. I mean, in theory. I agree that it's a darn sight better now than it was when it was first floated as a concept. Now, it's not just files. This alerting and baselining is increasingly common in endpoint protection. What are you doing on your laptop? Email protection. You don't normally send an email from Sri Lanka at 4:00 AM. Remote login, like we just talked about. Even if you have the username and password and it's correct, should you be logging into that system? You never logged into it before. Anyway, you can see where it makes sense in terms of alerting on unusual behavior.
[01:26:53.21]
Chris: Another thing that's happening is AI chatbots are working their way into security programs and tools as a way to help SecOps engineers work faster. For example, you could have a front-end chatbot added into, say, a SIEM, and the chatbot would then take requests in plain English and turn them into the queries that you need to get the information you're looking for.
[01:27:16.28]
Ned: Yeah.
[01:27:17.28]
Chris: This allows SecOps to get to the information they want to see faster, and speed is the thing that matters when it comes to incident response or investigations after the fact. This is not unlike what you talked about a bit ago with Copilot. Copilot came out and helps people write code. It's not good enough to do it on its own, but it will get you 80% of the way there, 99% faster.
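A sketch of that kind of chatbot front end, again assuming the official OpenAI Python SDK. The table and column names hinted in the prompt come from a common Microsoft Sentinel table but are illustrative here, and in practice you would validate and sandbox the generated query rather than run it blindly:

```python
# Sketch of a natural-language-to-KQL helper for a SIEM front end.
# Assumes the official OpenAI Python SDK; the schema hint is illustrative.
from openai import OpenAI

client = OpenAI()

SCHEMA_HINT = (
    "Table SigninLogs has columns TimeGenerated, UserPrincipalName, "
    "IPAddress, ResultType, Location."
)

def english_to_kql(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any general-purpose chat model
        messages=[
            {
                "role": "system",
                "content": "Translate the analyst's question into one KQL "
                           f"query. {SCHEMA_HINT} Return only the query.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(english_to_kql("Show failed sign-ins from outside the US in the last 24 hours"))
```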
[01:27:45.05]
Ned: Sure. As someone who is terrible at writing queries, I've had to deal with Kusto Query Language before, and it hurts my brain every time. I don't necessarily think that's a failing of the language, although it probably is. It's also a failing of me as a human being, which is fine.
[01:28:04.16]
Chris: I agree with both of those statements.
[01:28:05.28]
Ned: But fortunately, AI can be the bridge. If I tell it what I'm looking for, please express this as a Kusto query, it can do it. I've actually done that before.
[01:28:18.17]
Chris: It's not unlike what people talk about, this in-between world when you're learning to speak another language. No, I can't speak Chinese, but I can understand it. You have somebody write it for you. You're like, Yes, that's what I meant to say. And then it gets sent. It's the same thing with KQL queries for the same reason. And honestly, nobody wants an immersion class in KQL. I don't think you're allowed to do that under the Geneva Convention, as a matter of fact.
[01:28:48.01]
Ned: That's the thing that gets you flagged as a potential psychopath.
[01:28:54.07]
Chris: Now, if we look further down the line, and at this point, we're probably talking 12 to 24 months, and there are some companies that are never going to be comfortable with what I'm about to say ever, which is fine. But we are seeing AI start to become part of an automated response arsenal. Instead of just reporting on the anomalies that we are observing, AI can be worked into the response to, say, block IP addresses, log out and password-reset users who are misbehaving, etc. It's basically the combination of RPA, Robotic Process Automation, and SOAR, and it's a heck of a lot faster than waiting for a human to do it. And again, response time is the most important part of defense. If you can shut down an attack in 90 seconds instead of 90 minutes, you are dramatically limiting the blast radius. And even 90 minutes is optimistic, considering the average dwell time is something like 64 days.
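A sketch of what that automated-response step could look like, with the guardrail Chris raises next built in: below a confidence threshold, the playbook only pages a human instead of acting on its own. All function names, event fields, and thresholds here are hypothetical:

```python
def respond_to_anomaly(event: dict) -> str:
    """Toy SOAR-style playbook: contain fast, but keep a human in the loop."""
    score = event["anomaly_score"]  # 0.0-1.0 from the detection layer

    # Guardrail: anything below the threshold just pages the on-call human.
    if score < 0.7:
        return f"notify: paging on-call about {event['user']} (score {score})"

    actions = [f"block_ip({event['source_ip']})"]
    if score >= 0.9:
        # Highest-confidence cases also lock the account and force a reset.
        actions.append(f"disable_user({event['user']})")
        actions.append(f"force_password_reset({event['user']})")
    return "auto-contain: " + ", ".join(actions)

event = {"user": "backup-svc", "source_ip": "203.0.113.9", "anomaly_score": 0.93}
print(respond_to_anomaly(event))
# -> auto-contain: block_ip(203.0.113.9), disable_user(backup-svc),
#    force_password_reset(backup-svc)
```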
[01:29:55.26]
Ned: Exactly. I'd be happy with 90 minutes.
[01:29:59.23]
Chris: Like I said, this is something that people are going to be real careful with, real skeptical about. I think that we still need a lot of guardrails around what AI could do because you don't want it to accidentally, say, lock out every single user in the organization.
[01:30:16.18]
Ned: That'd be bad. Unless that's your goal. That's true.
[01:30:21.08]
Chris: We've decided this Friday will be a work-free day, whether you like it or not.
[01:30:28.07]
Ned: We've decided to lay all of you off.
[01:30:30.06]
Chris: Oh, that's even sadder.
[01:30:32.07]
Ned: It's more what I was thinking, but yeah.
[01:30:35.18]
Chris: But yeah, I'm curious to see where this goes and what we come up with. I think it's going to be part of the toolkit sooner rather than later.
[01:30:45.06]
Ned: Indeed.
[01:30:47.19]
Chris: Now, as usual, there is a lot that we could continue to talk about. We didn't even mention what might be the most interesting, yet potentially overhyped, of AI security issues, which revolve around attacking the AI tools themselves.
[01:31:03.24]
Ned: Oh, yeah. That's a whole episode.
[01:31:05.22]
Chris: Exactly. Time being short, I just wanted to highlight one more thing that is going to happen and is going to affect our lives as security professionals. But we don't know what the effect is yet.
[01:31:19.25]
Ned: It's the singularity, isn't it?
[01:31:21.29]
Chris: Worse, it's legislation.
[01:31:26.03]
Ned: Indeed. All right.
[01:31:28.22]
Chris: The use of AI, and especially the creation and training of AI tools, requires an enormous amount of data. The slipshod way that this has been done so far has caused a lot of global concern about privacy and data security for individuals and for the companies whose data is, I mean, still being used. It's an important question that we don't have a comfortable answer to. We've been using ChatGPT as an example here for the entire episode, so I'm going to continue to.
[01:32:04.23]
Ned: Fair.
[01:32:05.14]
Chris: Do you trust that company?
[01:32:08.20]
Ned: No.
[01:32:09.06]
Chris: Should you trust them?
[01:32:11.00]
Ned: Definitely not.
[01:32:13.04]
Chris: The funny thing is, as a business owner or a leader, it really doesn't matter what you think. It matters what your employees think. Here's the fun part. A recent study showed that 38% of respondents admitted to putting customer or company sensitive information into public AI tools like ChatGPT. That's bad.
[01:32:39.00]
Ned: Yeah. There was that example of, I think, Samsung software engineers that were putting proprietary company code into OpenAI to try to improve the code responses that they were getting out of the model.
[01:32:54.07]
Chris: Which they did.
[01:32:55.19]
Ned: It worked.
[01:32:56.19]
Chris: But so did everyone else who then searched for similar things. Whoops. Now, to be fair, that example is a few years old, even though it's hilarious and it's always going to be funny.
[01:33:06.25]
Ned: Indeed, it will.
[01:33:09.21]
Chris: The question becomes, if we know that 38% admit to doing this, what do we think the real number is?
[01:33:19.03]
Ned: At least double.
[01:33:20.08]
Chris: Right. So hence, people are trying to get a handle on data privacy and AI security legislation to get control over this. In a perfect world, you could click a box that says, Don't remember this conversation. Don't use this information for training your future models. And you would believe it. Right. Right now, I don't think people believe it.
[01:33:48.16]
Ned: No, I would not. Even if I ticked the box, I would not trust it.
[01:33:53.12]
Chris: As a result of this, there is tons of legislation in the works, both around AI security and data privacy in general. We already know about GDPR. Even though that is not a US law, a lot of global companies are starting to hew in that direction, which inarguably makes us all safer as digital netizens. Oh, God. Do people still say that?
[01:34:20.03]
Ned: I don't think they ever did, Chris.
[01:34:23.04]
Chris: The same thing... Damn it. The same thing is happening with AI law, and it's not just in the EU. There is significant growth in concern about this worldwide. Significant enough that Stanford's AI Index enhanced its tracking of legislation around AI tenfold between the last two releases. They tracked 25 countries in 2022. They tracked 127 in 2023. And I look at that and actually it's only fivefold. Shut up. I don't know if you're... You're clearly not a math guy.
[01:35:06.26]
Ned: Oh, no, definitely not.
[01:35:08.09]
Chris: I don't know if you're a geography guy or whatever, but 127 countries is basically all the countries.
[01:35:15.02]
Ned: That's what I was thinking. It's a little higher than that, but not by a lot. I mean, who really cares about what's happening in Andorra?
[01:35:22.25]
Chris: Well, great. Now we've lost our Andorran audience.
[01:35:28.25]
Ned: They can go back to their forest moon. It's fine. It was worth it.
[01:35:39.29]
Chris: So, yeah. In conclusion, the end. Any questions?
[01:35:45.29]
Ned: No, I agree. AI can be a boon for security, and it's also going to be a pain in the ass. And like you said, it's because it's a tool, and people can use a tool for good and for ill, and we're just going to have to do our best to make sure it's used for good wherever possible. Amen.
[01:36:10.22]
Chris: Hava Nagila.
[01:36:12.04]
Ned: Hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend. You accomplished something today. Now you can sit on the couch, eat a chili dog, and play Zelda: Breath of the Wild for the rest of the day. You've earned it. Yes, I know I haven't used that one in a while, but I think I'm ready to play the game.
[01:36:33.28]
Chris: It's a good game, man. It's a good game.
[01:36:35.25]
Ned: You can find more about the show by visiting our LinkedIn page. Just search Chaos Lever, or go to the website, chaoslever.com, where you'll find show notes, blog posts, and general tomfoolery. We'll be back next week to see what fresh hell is upon us. Ta-ta for now. I actually started playing Hogwarts Legacy. It's really good.
[01:37:04.04]
Chris: Hogwarts Legacy?
[01:37:05.16]
Ned: Isn't that what it's called?
[01:37:06.22]
Chris: I don't know. Is it about pigs?