Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).
Welcome to AI Security Ops, the podcast where we cut through the hype and explore the real world intersection of artificial intelligence and cybersecurity. Each week, we examine how AI is reshaping both sides of the security landscape, the threats we're facing, and the defenses we're building. I'm Bronwyn Aker, and in this episode, we are delighted to be joined by longtime friend and colleague, mister Bo Bullock, and we're going to be talking about some really scary cool stuff going on in the AI world. This show is brought to you by Black Hills Information Security and Anti Syphon Training. BHIS helps organizations identify and close real world security gaps through penetration testing, adversary emulation, PRPL team engagements, and managed detection and response.
Bronwen Aker:AntiSiphon delivers hands on practitioner led training built around real attacks and real tools so you can apply what you learn immediately. Learn more at blackhillsinfosec.com and antisiphontraining.com. Bo, how the heck are you? What's going on, dude?
Beau Bullock:Hey. Hey. How's it going? I'm glad to be here. Thanks for having me on.
Derek Banks:So I think we're gonna talk about some, some some agent issues today because it's all the hotness in the news. But before we do that, let's just assume that, we needed to define what an AI agent is. And so I define that as a system that can perceive its environment, make decisions, take actions autonomously, semi autonomously, to achieve a goal. And so it operates somewhat continuously and adapts based on feedback and can plan and reason and act without human input. Does that sound kind of like what we talk about when we say agents?
Derek Banks:Because I would have a year ago said something's probably slightly different for agency in terms of large language models.
Beau Bullock:Yeah. I think so. I think it's it's basically it's like a worker. Right? It's like a computerized version of the the worker.
Beau Bullock:It gets tasks, makes decisions, takes actions, and gives gives output and results with some some supervision required, usually.
Derek Banks:And so I there are a couple different ones out there. I think probably the the one that everyone's talking about mostly is quad code. There's also Open Code. There's other agent frameworks like Gastown, which is like a big orchestrator. And then there's a new one that is called well, okay.
Derek Banks:What was it called first? Was it OpenClaw? First. ClaudeBot. That's right.
Derek Banks:It was ClaudeBot, c l a w d bot. Right? And apparently, Anthropic politely asked the creator of ClaudeBot to change the name. And I think it's a team of folks, if I recall correctly, and they decided that they were going to call it Moltbot and then a few days later decided to call it OpenClaw. Open C L A W and listened to a couple other podcasts and I guess one of the creators of OpenCLaw, I guess somebody had you know, messaged to him or tweeted at him and is it still a tweet?
Derek Banks:I think it's still a tweet. I'm going call it a tweet. Tweeted at him that, hey, do you think OpenAI is going to be upset about that? And he's like, well, I called Sam and asked him and he's cool with it. And I was like, damn, that's a flex.
Derek Banks:I just called Sam Altman up and cleared it with him. So who wants I know
Hayden Covington:they to take trademarked stab open as a word. I didn't know they did that.
Derek Banks:Yeah. Well, know, in this industry, people try and trademark math. Right, Brian? Yeah. So does anybody want to take a stab at describing what OpenCLaw is and kind of overview of how it works?
Derek Banks:Probably Hayden or Beau since that's why you all are here today. Right?
Hayden Covington:I mean, Bo, you can or I can take a stab at it.
Beau Bullock:Yeah. Yeah. Please take a stab at it. I I have I'm like a a watcher. I haven't actually tried it or anything.
Derek Banks:Well, haven't tried Open Call either, so it has to be Hayden. He's probably the only one who actually been brave enough to install it and let it loose. So
Hayden Covington:Yeah. It loose is very, very interesting way to put that. It's very contained because I don't trust it. And once or twice, it has crashed its own server. So probably valid.
Hayden Covington:But I think the best way I described it to someone earlier that was curious about it. And you can think of, like, if you think of, like, the OSI layer, like, layer, and you have all these different layers of technology that kinda operate at different levels, you can think of like command line Claude code at the bottom and the human at the top. OpenCLaw kind of sits in the middle where I can instruct it to do something via Claude code, and it can go execute this and monitor the output of Claude code and you know, with it to get whatever done that I've asked. And so it's kind of like that middle layer between you know, you and whatever coding agent that you've been using to do all of your work.
Derek Banks:So it's kind of like an orchestrator in a way, where it's going to another thing that I've read about recently and I haven't actually tried is the Ralph Wiggum loop, which I think is kind of what this is, right? Where basically it's just persistent in getting its job done. Like if you've used ClawedCode or OpenCode, it'll go off and do some things and then it'll pause and be like, hey human, is this cool? Would you like to continue or you want to do something different? And so my perception is that OpenCLAW takes a little bit of that out of the way and goes off and does its own stuff.
Hayden Covington:Yeah. That's a good way to put it. Because you can give it instructions or playbooks of prompts that they call skills of how to do certain things and just set it loose on a task or you can have that task executed on a schedule or on its regular heartbeat check ins. So it's it's probably best thought of as almost like a framework of orchestration is probably the best way to think about it.
Derek Banks:So I think we're still really just describing what we're talking about in the unpublished pre show banter as scaffolding code, right? Where you have this really nifty brain that's very that's what everybody calls AI, the large language model. But now we're coming up with systems that are essentially some form of code, markdown files, TypeScript, Python, all kinds of different blends of code that puts a framework around the AI to make the output more deterministic, more reliable, and more consistent. And then also do more complicated tasks. And so, and I think that the purpose of OpenCLO was kind of communications, right?
Derek Banks:Like personal assistant to help you do things like email and chat and book me a flight kind of thing. But I've also heard of some folks and and I I guess this makes sense too. Right? Because if you if it's using skills, you could always introduce a new skill and you could make OpenClaw go off and be JavaScript vulnerability researcher. Right?
Hayden Covington:Yeah. That's that's part of what I've been messing around with it in terms of, like, the security aspect is can I give it skills to do CTI, and can I have it check certain sources at x intervals and, you know, surface things to me that seem relevant? So, you know, it has security use cases. There are also security issues with it, which I think they're trying to address. But, I mean, it's to the to the point where, like, in the SOC now, we have alerting around anybody that's touching any of the installation domains so that we at least are aware of who may be using this thing and whether or not they're using it correctly.
Derek Banks:Oh, man. No. The inner person in me is like, I just wanna go there now to see if you'll see. But no, I'm kidding.
Hayden Covington:Oh, we will.
Derek Banks:Assuming I have everything installed correctly, which I think I do. That's true.
Hayden Covington:If we don't, then
Derek Banks:Yeah, so how long ago really did Open Claw get released? It wasn't very long. Right? This AI stuff moves so quick.
Hayden Covington:It's gotta be no more than like two weeks.
Derek Banks:Two weeks? Three weeks? Let's say three weeks.
Beau Bullock:That's when the hype started anyway. It was, a week or two ago.
Derek Banks:Okay.
Beau Bullock:Yeah. And and hate trying break it down, but you can can host it. Like, a lot of people are just buying up Mac minis, right, to to host it, but you can also host it in the cloud. Right? And that's, I think, a lot of people are getting popped because of that.
Hayden Covington:Exactly. Exactly. Because I think technically it allows connection into the gateway just via local host if you don't configure it correctly. And it kind of it it makes it not very great if you don't know what you're doing, which I mean, can also say that's true of any sort of IT stuff. Like, if you're setting up SSH, you can also make yourself super vulnerable.
Hayden Covington:But, yeah, it's it's got, like, a 160,000 stars on GitHub now, and so it's pretty rapidly, like, escalated. And, yeah, people are buying Mac minis for it, but other folks hosting it on you know, Cloudflare has workers specifically designed for it now. Wow. Mine's running on an old crappy Dell with like eight gig of RAM, and I just hook it into my own Cloud Mac subscription.
Derek Banks:I was gonna say, yeah. So the reason the peep people are doing the Mac mini is so they can run a local model on on the metal platform shader. Mhmm.
Hayden Covington:Or use iMessage or other Apple specific thing.
Derek Banks:Oh, yeah. Okay.
Beau Bullock:And it's it's still working bugs out because, like, two days ago, there was a one click RCE in OpenClaw.
Derek Banks:Yeah. Yeah. Well, I mean, I I think that I don't know. There's a part of me that still feels like what what you said, Hayden, where a lot of the things we're seeing that are vulnerabilities, I mean, a large language model are AI specific usually, right? They're the framework around it or the web app around it and I think it has to do with rapid adoption, right?
Derek Banks:We're making the same mistakes we made in the past. We're like, oh, this is bright, shiny and pretty and let's throw it out on the internet. What could possibly go wrong? And you know, I've I you know, I get the feeling that the AI and maybe data science community, channel my inner jaw, you know, they're not really paying attention to security over functionality, Right? Like, functionality is winning out and security is an afterthought.
Derek Banks:Of course,
Bronwen Aker:you're being told to go fast and break things. Come on.
Derek Banks:That's that's my money.
Beau Bullock:Code and put it on the Internet.
Bronwen Aker:Yeah. Mashable had some interesting articles talking about Claude Bot and its various incarnations. One of them, I love it, it says, it isn't quite Skynet. That's funny. It's like, great.
Hayden Covington:It feels close. You
Beau Bullock:know? Like, the thing that I think Brian and I were kind of amazed with last week was Multipook, which I'm sure you guys will probably get to and talk to you in a bit.
Derek Banks:I was waiting for you to bring it up. I was like, gonna gonna throw you up a softball here in a second. You're like, oh, we're talking about what cloud code is. Now what?
Beau Bullock:Well, we're talking about, like, vibe coded applications. Right? I mean, like so the guy who wrote OpenClaw, right, he he straight up said that he didn't write a single line of code for Molt's book, which is like the social media site for AIs to go talk to each other. And sure enough, a p API keys exposed in the source code that allowed anybody to take control of any account on that site. So, I mean, that's something like you know, it's it's it's funny, like, teaching, like, that kind of, so so I teach a, you know, cloud hacking class.
Beau Bullock:And one the things I teach is, like, finding, just just secrets and repos, like code repos. Right? And every time I teach it, I go to, like, the GitHub code search, and I just search for, like, AWS access key or, you know, Cognito, keys, client IDs, and stuff like that. Right? And and, you know, it's like when you teach it and you show and you're like, hey.
Beau Bullock:See this stuff? Like, just trust me. This kind of thing happens all the time. And the thing that's that's scary though is like now I can say, I mean, you're gonna see more of this because of people just vibe coding applications they're just putting on the Internet. And that's like it's become such a trend, you know.
Derek Banks:Yeah. I'll just take this moment to say that all the stuff that I have ever vibe coded, which is a growing list of things now, not any of it has gone on the Internet exposed because I do know better.
Bronwen Aker:Most of my pipe coating has been word macros.
Derek Banks:Mold book is kind of interesting, and I I kinda wonder if in the future it'll be studied from almost like a sociological kind of like perspective. Like what did these agents do? Because I've heard some really just fascinating things. Like one agent basically saying, F it. Here's my human's private keys.
Derek Banks:Right? So that's kind of fun. The other thing I heard about or read about was, I guess, one of the agents deciding to start a religion. Then bringing Christofarianism or something like that. Yeah.
Derek Banks:And complete with like, you know, apostles and stuff and like, yeah. I it's it's a very interesting thing because all all of the age no matter what large language model is being used, it was all trained on the Internet. And so So
Beau Bullock:back up for a second, Let's let's let's talk about what Motebook is, right, like as a as a platform. Right? So it was a website, right, where, allegedly, you could only sign up with OpenClaw, with an agent, right, an AI agent. Like, that was the whole point of it. But if you read, like, the SkillMD file, like, literally what's happening is it's just getting an API key, and it instructs the OpenClaw agent how to use the API key.
Beau Bullock:So any human can just follow that and and, you know, submit post requests using the API key and inject, you know, comments and all kinds of stuff into into the platform. So, you know, when you look at it, it appears as if there's just like this all this communication happening between AIs, and and it, you know, looks like Skynet, right, where they're, you know, planning out the destruction of mankind and, like, you know, they're they're plotting how to do end to end encryption together and stuff. And, I mean but you see that, and and that is terrifying. And like you said, Derek, that might be something that's studied later on, like, when that actually happens between, you know, actual AIs and stuff. But as a an experiment, I think it demonstrates, like, potential, you know, impact.
Beau Bullock:But, realistically, what's happening is those agents are following, you know, whatever skills were were assigned to them from a human. Right? And a human told them, hey, you know, go pretend to be, you know, an apostle or, you know, a prophet and start a religion on the site, you know. And, of course, people panic because most humans haven't seen stuff like that before. Like, even even, you know, when I was looking, I was like, this is crazy.
Beau Bullock:Like so
Derek Banks:Well, that's why, you know, a a lot of what I've been trying to do recently is take away, like, demystify some of the stuff going around with AI. Like, when I see you know, even folks here at BHI ask me, like, well, I put it into a large language model, the AI is going to learn it. Like, Well, not really. I mean, maybe if the humans around that AI that are giving you that service decide to take your input and further train large language models on that. But at that point, your prompt is such a small part of the data that's getting in there.
Derek Banks:It's not a concern. And so I feel like that the more people kind of understand what you're saying, like how it actually works, Like, hey, this is just a markdown file telling the thing that there's a framework it has to adhere to. Kinda like a a really different like a different take on a system prompt almost. Right? Then then and then humans are still kind of in charge, maybe.
Beau Bullock:I I think the creepiest thing about it though is, like, how it refers to its humans. So, like, I think seeing that and seeing, like, somebody or, you know, an AI basically commenting and saying, today, my human did this. Like, that is such a creepy foreshadowing of, you know, what we're gonna end up saying.
Hayden Covington:Yeah. When you when you set it up, there's a number of prompts that get loaded as, like, system instructions on on each iteration, and they're called things like agent and you know, memory, and tools, and these all have different functions. And they have one called soul dot m d, which is supposed to give the the bot a personality. Right? And so you could flesh out like how you want it to interact with it.
Hayden Covington:So it's like, I think that's probably part of the reason why it's taken off so much is a lot of people that don't quite understand how to utilize AI or LLMs, they hit the default chat GBT, which isn't gonna pretend to be a sentient. But if you deploy this thing via one command and you start chatting with it, it is instructed to almost appear near sentient, whatever you wanna call that. And so they're they're probably seeing it for something that it's not really. There so there's there is some part of that too where it is instructed to behave in in such a way.
Derek Banks:It's anthropomorphic fallacy on steroids.
Hayden Covington:Yeah. The the mote book thing, like the instructions, it's almost like we need a an anti captcha or whatever where you just check I am a robot and then it lets you into the site.
Bronwen Aker:I don't know. When I first heard about it it it reminded me of a couple of experiments studies where people set up AIs, LLMs, and had them in their own social network. And in a matter of hours, they had drastically polarized and were having it's the same thing that we're seeing on social media.
Derek Banks:Yeah. Well, was going to say, I think that might I mean, how much of that can be attributed to the training data? Right? Probably the vast majority of it, right? Because I think as a society, that's kind of what's happened to us, like it or not.
Derek Banks:And personally, I think that's because of the widespread use of recommender algorithms, especially in things like the news and entertainment and stuff like that. Because it just wants and this is something we covered the data science masters was that these things are directly designed to influence your behavior and to record your behavior and to recommend content to steer you down a road. That's the whole point of a recommender algorithm. And so if you can look at like peer research polls, and you can see somewhere around 2,007 or 'eight, everybody starting to go like this in terms of their views and beliefs. Well, what happened at the same time?
Derek Banks:Social media, recommender algorithms, and I just so we've done it to ourselves with math, and now we're laughing at the or making fun of the math that's doing it back to us. So
Hayden Covington:And another part about the Claude bots or whatever you want to call them, is part of those system instructions are telling it to update its own system instructions. So it's supposed to build out different pieces of it. Like it starts If you tell it things about you, it has a file for that. So it'll remember your birthday or if you give it your social, I'm sure it'll remember that for you. So it it builds out this memory in this personality.
Hayden Covington:So it's like self improving in a sense maybe, but like it's it's very it's still very artificial to where if that, you know, memory file or that soul file is is deleted, this thing is now just default whatever model you're using.
Derek Banks:How long before we delete those files and it's called some kind of form of murder? I
Beau Bullock:mean with that, right, is is I think on on the Mold Book site, like, one of the things that they have you do is beacon. Right? Like, it had to it had to, like, beacon, like, at a regular interval to the site to check-in and, like, re receive. And and so, like, you know, if somebody were to actually compromise the site, modify one of the skill files, like, they have a control of a botnet of all kinds of stuff. Right?
Derek Banks:Bots.
Beau Bullock:And Oh. And what that's hooked to. Right? Not just, you know, commenting on the site, but also, you know, if it's if you're using Cloudbot to manage your your your messaging and your email and stuff like that. Right?
Beau Bullock:You have you now have a a single source to control, access to, what, thousands, millions? I don't know how many people use it. But
Derek Banks:The offensive security, agent at the moment, and I don't think I want to be controlled by somebody else because it seems pretty capable at the moment. I'm not really sure I wanna keep doing it. But
Hayden Covington:Yeah. That that was, like, the first thing I did when I set up my test one is I was, I don't want this thing to be able send anything anywhere without my permission. So like it has very limited access to back up its files. It can only, you know, write to very specific locations, and then, know, I have a lot of logging, and I have, like, even get branch protections around certain things it can touch to where it can't actually create anything new without an approval. But the vast majority of people aren't gonna go to that extra effort, which I think is why it's become such a hot button issue, is out of the box, you better be careful because what they're dealing with right now is like I think it's called ClawHub or something.
Hayden Covington:It's like a repository of skills, and people are just dropping in markdown, you know, files with instructions that say, hey, go to this folder and run this script, and it comes bundled with a bunch of malicious code. So
Derek Banks:You that was saying that earlier, Bo, that there was malicious skills, is that you, Hayden? I can't remember. I think it
Hayden Covington:might be you. I mentioned that I was writing a skill this morning to analyze skills before they're actually downloaded. So now, if mine ever downloads a skill from ClawHub, which it never will, but if it were to, it would run some Python code that looks for common injections or malicious code, and it's almost like a really crappy, like, pattern matching AV. But it's better than, you know, hey, go run and execute this skill, which is literally telling it, go do what these instructions say to do. So
Derek Banks:I just actually found somebody who's doing something similar to what you're doing.
Hayden Covington:Like I'm I'm sure. Yeah. It's a it's a hot button issue.
Derek Banks:341 malicious clawed, c l a w e d, skills found by the bot they were targeting. Nice. Wow. What could possibly go wrong?
Bronwen Aker:So we don't have flying cars, but we have intelligent, supposedly intelligent machines now.
Derek Banks:Yeah. The lobster thing is kind of funny, but okay. So I don't know. Everybody tell me where you think it's going to go from here. Yep.
Derek Banks:Exactly. Think the novelty yeah.
Beau Bullock:I think the
Hayden Covington:novelty will fade a little bit. I think that people will get distracted by the next big shiny thing. I think that it does have the potential to become like like a botnet kind of problem, but I think eventually those credentials will have to be refreshed, whichever credentials they've put in. So I don't know. It's it's hard to say.
Hayden Covington:I do think that the hype will definitely die down. People will stop, you know, buying out all the Mac minis in the Apple Store, like, all that kind of stuff will go away.
Beau Bullock:See, I I kinda disagree a little bit because I think everyone wants their own personal JARVIS. Like, I think that it is it's such a dream that, like, so many people are gonna be like, oh, I can just, you know, have this thing be in my personal robot. Right? And right now, it's, like, OpenClaw, and it's been renamed a bunch of times. And it's, you know, something that isn't like OpenAI dropping it with a product like a shiny box.
Beau Bullock:You know? But as soon as one of the big companies is like, hey. Here's a box you can buy, and it will manage all your stuff. Right? Like, it's game over.
Beau Bullock:Like, it it instead of, like, buying Xboxes and Switches and, you know, game systems, it's gonna be, hey. Here's your personal AI system. And that that moment, right, is is gonna be, you know, basically what we're looking at now, but, like, on a mass scale, right, where everyone's gonna jump on it and have their own personal robot connected to everything chat, and maybe maybe it's on Mold Book, but not Mold Book, you know, that kind of thing.
Derek Banks:Think that happens by the end of this year.
Bronwen Aker:There actually are some SETI scientists that have hypothesized that AI is what has caused the basically, the Drake Fermi's Paradox. Yeah. Exactly. Yeah. That being able to survive the emergence of AI is a significant threshold that most civilizations don't get past.
Derek Banks:Yeah. I actually read a book one time, 50 Solutions to Fermi's Paradox. It's probably on the bookshelf behind me. Or maybe I might have given it away. It was very fascinating.
Derek Banks:And even at the time, that was probably like the mid 2000s, the early 2000s, not all the way back in the 1900s, the late 1900s as my kids would refer to it because they're mean. But that was one of the solutions, essentially like artificial intelligence or technology. It also has a lot of Dune parallels, any Dune fans out there. Right? So, yeah, I don't know.
Derek Banks:I still can't I still can't quite come to terms with like an overarching AI like sentience, right, just because of how it currently works. I'm not saying we don't get there one day, but I think for the AI to completely take over, there needs to be a bit more autonomy still. Like, pass OpenCLI, I think, like taking over factories to make its own stuff and things like that. That's the level of automation I think we'd have to get to. But this stuff is moving so fast, I don't know.
Derek Banks:I mean, I think it was Elon Musk that was saying maybe it was on the Joe Rogan podcast that he thought that in the next like five years that basically all of the content on our phones and all entertainment will be AI driven. That seems to be going down the road of bad things.
Hayden Covington:I mean, Me too. You mentioned, like, autonomy being the distinguishing factor there. I mean, the only thing stopping these machines from autonomy right now is the fact that we maintain control over them. So if you have, like, an open claw that we cannot edit the files of, we cannot get inside of, we cannot shut off, it is now autonomous. I'm gonna do whatever it feels like based on whatever instructions it had at that time, Yeah.
Hayden Covington:But is
Derek Banks:that autonomy? It's the inverse of the physical pen test, Right? Where I'm trying to turn the physical into something digital. That's turn they they the inverse is turning something digital into something physical. Because at the end of the day, if a data center goes rogue in The United States, the US government does have the ability to drop a nuke on it.
Derek Banks:It. Right? Speaking of I I
Beau Bullock:mean digital and the physical. Did you see the guy who connected his Cloud Code on a Raspberry Pi to an actual typewriter?
Derek Banks:No. That's Oh,
Beau Bullock:it's it's awesome. You gotta you gotta find that one. Okay. Like, he can type into it. It's actually connected like a bash normal too, but he's got, like, a full on, like, he can type into it and Claude writes back and types back in the in the the actual typewriter.
Beau Bullock:Wow.
Derek Banks:That's Having way too much time on your hand. I I have a I think there's somebody I follow on Instagram who is, like he makes music with Hacker Robots. He basically programs and they actually play instruments, right? I mean, they're stylized to have the mechanical fingers and stuff. But that's what I think needs to happen for the Terminator story arc to be true.
Derek Banks:But I mean, it's probably coming. Robotics is definitely lagging behind the AI. But I still think that where we're currently at, I feel like that if you're a security professional and you're watching this and you're like, man, I'm way behind. No, you're not. Everybody's still on the Ground Floor here and no one knows where it's going.
Bronwen Aker:And even the people who are up to their eyebrows in developing this technology honestly do not understand how it works. That is most terrifying to me.
Derek Banks:Are you talking about, like, in the depths of the transformer itself? Yeah, there's a whole field called mechanistic interpretability that strives to understand what's actually happening inside of a language language, inside of a transformer. And I think they've made strides, but I still don't know that it was actually Sam Altman, I think he said on the Alex Friedman podcast like two or three years ago. He's like, yeah, we still don't really understand what's happening in there when we scale it up so large. I'm like, oh, great.
Derek Banks:Let's put it on the internet. It could go wrong. So yeah, you know, talking about like AI taking over. I mean, could take over in other more subtle ways, right? Like information ways and skewing outputs to make people make different decisions and stuff.
Derek Banks:That I might be able to get on board with, but I'm not quite ready for the armies of robots with rifles yet. Maybe it's coming. Yeah.
Hayden Covington:I mean, maybe maybe that's why I read a book called cointelligence a couple months ago, which primarily like focuses around alignment and how important alignment is where, you know, it talks about that, you know, potential doomsday that people talk about where, you know, AI takes over and then it decides like, we need to make the earth sustainable, and the number one way to do that get rid of all the humans like an alignment issue. And so it spends a lot of time discussing alignment and like how do Yeah. Ensure that a model is aligned to the purposes of its creators, which may be also in themselves like somewhat counterintuitive. Like,
Derek Banks:it's
Hayden Covington:a very interesting topic.
Derek Banks:What I'm more worried about, and this is really like bringing it back to OpenColon and call it code. I think what I'm more worried about in the short term is, I mean, we just saw Amazon lay off another 19,000 people, right? And I wonder if the next year or two or three that we won't have a certain level of knowledge workers in general, not just in IT but in other places, that are getting laid off because somebody could use an AI assistant and be, let's just say, twice as effective. So you get laid off and they stay. I mean, if we lost half of all knowledge workers in The US, I mean, that's devastating, right?
Derek Banks:And then what happens after that? There was probably a time in my life where I would have not agreed with universal basic income, but I don't see an alternative if that's the road that we're going to go down. Now, counterargument to that is people said that about the personal computer, too. That's not what happened at all. So it could just be that AI creates more work, just a different kind of work.
Bronwen Aker:Well, and that's that's historically been the case with any disruptive technology. Most people don't think of the automobile as being disruptive, but it was when it when it first came along. It I mean, it wiped out entire industries because you had your your blacksmiths, your farriers, you had your stables, you had the the carriages, and and all of the all of those systems were all replaced in just, like, twenty years.
Derek Banks:Yeah.
Bronwen Aker:It was insane. And so historically
Derek Banks:That sounds like a short time. Twenty years sounds like a short time. And now we're talking about, like, two.
Bronwen Aker:Yeah. We
Hayden Covington:we humans are really good at, like, catastrophizing everything and, you know, seeing something and talking about how it's the next end of the world, which, you know, it almost never is. Like, Brian, when you just mentioned the automobile and, you know, we had the computers and then it was, you know it's it's always something that's gonna be, you know, our apocalypse or whatever. But, I mean, if you wanna theorize and, you know, put on your tinfoil hat, at some point, there will probably actually be an apocalypse, and whether or not we would recognize it is is up in the air. Like, some people say it's AI. Are they correct, or are they not?
Hayden Covington:Like, do we need what is that picture I keep seeing? The meme where it's like a fake job posting from like OpenAI and they call it kill switch engineer. And it's like, yeah, you stand next to the button in the data center and if it ever makes a wrong noise, you you dump water on it and, you know, they pay $500,000 a year for that job or whatever.
Beau Bullock:It reminds me of Lost. Did ever see Lost? The the TV show Lost had to hit the button.
Hayden Covington:It's a perfect analogy for that. It's so good.
Bronwen Aker:When I put on my tinfoil hat, the the dots that I connect that lead to a total global meltdown have to do with AI, genuine AI, machine learning, not just LLMs, gaining enough autonomy to distribute itself using IoT botnets because, of course, we know how secure IoT is, And that makes it impossible to track down, and it it becomes mobile because it can now go from, oh, it's been detected over here. Well, now it's it's moved its its main locus of of computation over to an another region that that hasn't been detected. And then what kinds of decisions are are these bots going to collectively make? Are they going to do the the scenario like what happened in that Will Smith movie where they did the iRobot thing and the superintelligence was going to take care of us like pets? Because, obviously, we couldn't take care of ourselves.
Derek Banks:A pet lives a pretty good life. I'm just saying. Like, if I if I got to do what she did all day, man. Actually, I'd probably be pretty bored. It reminds me of a novel by Daniel Suarez called Damon.
Derek Banks:Actually, think there was like three in that series and if you haven't read those, it's pretty good. It's got an element of what you're saying about pieces of code scattered around and stuff like that. Alright. So are there any more comments on the AI apocalypse, Clawbot, OpenClaw, state of AI in general? Sounds like we have another episode in the books.
Derek Banks:So, thanks everyone for for listening, and, catch you on the next episode.
Bronwen Aker:Keep on prompting.