Practical AI

In this Fully Connected episode, Dan and Chris break down the Anthropic Claude Code leak: what went wrong and what it reveals about agentic systems, AI architecture, and AI safety. They also explore how the open source community is responding and why this moment could reshape how AI systems are built and secured.


Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Host
Daniel Whitenack
CEO @ Prediction Guard & cohost @ Practical AI Podcast

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Narrator:

Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.

Narrator:

Now onto the show.

Daniel:

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI and autonomy research engineer. How you doing, Chris?

Chris:

Doing good. How's it going today? Lots of cool stuff out there, isn't it?

Daniel:

Oh my gosh. Lots of interesting, scary, intriguing, malicious things. So for context, if you're joining us at another point in time, listening to this in the future, we are recording on 04/01/2026, which is interesting. It's April Fools' Day. So what we're about to describe is not an April Fools' joke.

Daniel:

Although I think there were a number of AI-related April Fools' posts; most tech companies post something here and there. But this was very much not a joke. As of today, 04/01/2026, last night into today there was this perfect storm: a leak of the Anthropic Claude Code code base, and related vulnerabilities in the Claude Code toolchain. So, yeah, this is the most timely thing for us to talk about, Chris. Before we get into it, I want to go through the timeline and all the dynamics here.

Daniel:

I mean, Anthropic was already dealing with some rather difficult things in relation to being identified as a supply chain risk by the US government. But coming into this, how did this hit you? How did you learn about it?

Daniel:

Do you have your copy of Claude Code? I guess we shouldn't admit that one way or the other on this podcast.

Chris:

Yeah, never speak. You know, the monkeys: see no evil, hear no evil, speak no evil.

Daniel:

Exactly.

Chris:

So if I did, I would never admit it. Well, first of all, as background: Anthropic's had a few interesting weeks here. Challenges. Yeah.

Chris:

And they got a judge on their side. So if you're joining this later or haven't followed: the United States Department of Defense, which also likes to call itself the Department of War, though that's not been approved by Congress, had this thing where they said, because you're not doing exactly what we want, we're no longer going to let Anthropic be part of our supply chain. Anthropic appropriately went to a judge and got some relief on that legal situation, with a whole lot more to be played out that hasn't happened yet. So that alone had been kind of an extraordinary story. Then, apparently, yesterday... I woke up this morning ready for April Fools' jokes, because as you pointed out, there are a lot of those, especially in this AI world. When I first saw the headlines in the newsreader, I was like, there's got to be no way. And then as I started reading it, it started to feel like it might really have happened.

Chris:

Maybe this is just a coincidence, so, yeah.

Daniel:

Yeah. Many people have heard of Anthropic and Claude, but just to set the stage here: Anthropic is an AI company founded in 2021 by former OpenAI executives. Some of their focus as a company, interestingly enough given we're talking about security, has been around AI safety.

Daniel:

Framed more as AI safety, not necessarily security for AI or AI for security, but AI safety. So constitutional AI, enterprise focus, publishing their system prompts, and doing a lot of good research. They have the Claude family of models, which there are now even TV ads about. Hopefully many people in our audience are, of course, aware of this.

Daniel:

So Anthropic, 2021, the Claude family of models released. I don't know the timeline of the tool's actual release, but they released Claude Code, which is the topic of this discussion, and I have to say it's a spectacular tool and product that has enjoyed a wonderful reception in the software development world. It's basically an agentic, terminal-based coding assistant, automation tool, whatever you want to call it. You're at your computer, in your terminal, and you can spin up Claude. It works in your code base, and you can have it do all sorts of things: running your tests, figuring out which tests fail, making the changes to fix them.

Daniel:

It can run bash commands. It can write whole software projects from scratch. This idea is very much the agentic, autonomy-forward view of software development. Very much not the traditional GitHub Copilot model, although they've also implemented agentic things now, so maybe that's not the greatest comparison, but not the assistant-in-your-IDE model that helps you autocomplete things or maybe answer questions.

Daniel:

This is much more autonomous, agent-driven development, and it has very much taken the software development world by storm. I hardly talk to any developer who is not using Claude Code, I guess is the way to put it.

Chris:

No, I don't think that's an overstatement at all. Claude Code came out, I believe from memory, in May, and then in late November, Opus 4.5 was released. So going into the December holiday season is when jaws were hitting the ground, including ours. It was the very first thing we were talking about with engineers, obviously, and in a very short amount of time it has completely changed how people productively develop software and the workflows they use to do it.

Chris:

So I think it's one of those moments history will look back on and go, that was when it really took off. It's the leader, certainly.

Daniel:

Yeah. And just to clear up the timeline, well, I guess maybe we should first say what happened. Basically, if you downloaded Claude Code during a roughly three-hour window in the past day or so, as we're recording this...

Chris:

Right.

Daniel:

Then two things happened. One, you downloaded a bunch of proprietary IP from Anthropic, the agent harness and all the IP around Claude Code, revealing how it works. And two, you downloaded a malicious version of a JavaScript package called Axios, which created a vulnerability on your computer. Both things happened at basically the same time. But there was a lot leading up to this.

Daniel:

So timeline-wise, it's worth mentioning a few things. You talked about the adoption of Claude Code and the release of the model. In terms of the problems, we mentioned Anthropic's kind of been through the wringer recently.

Chris:

Mhmm.

Daniel:

In late 2025, Anthropic acquired the Bun JavaScript runtime. This was something they were using, I think, within the project, and then integrated further. That's relevant because that JavaScript runtime is a key piece of why this leak happened. So that was late 2025, not that long ago. Obviously, things move fast.

Daniel:

Early March, 03/03/2026, about a month ago, was when the Department of War, the Department of Defense, designated Anthropic as a supply chain risk. That's been a topic of conversation, with Anthropic going back and forth with the government. Then not too long ago, on 03/26/2026, as you mentioned, Chris, Anthropic got a judge to grant a preliminary injunction temporarily freezing this supply chain risk label. March 27, the next day, there was a first leak, which wasn't the code leak, but a leaked blog post about the Claude mythos, the sort of thing where it's like, oh, the model is too dangerous to release because it's so powerful. It seems like we hear that every couple of months, and then it's released and we deal with it. But that was something that was leaked on March 27.

Daniel:

So those are all the lead-up to now. I guess it's interesting: this whole supply chain designation, overlapping with Anthropic primarily positioning itself almost as an AI safety company, there's kind of a dichotomy there. But then obviously, they had integrated tools within their widely distributed project that had vulnerabilities in them, at least on the security side. So it's a weird dichotomy of what in reality is the supply chain risk, and there are layers upon layers of this: Anthropic was identified as one, and certainly had a supply chain risk internally, but their position as a company was more on the safety side.

Daniel:

I don't know. There's a lot of safety and security being thrown around, and supply chain risk. So that was the lead-up, I guess, Chris, to where we're at. I don't know what the discussion's been like with the practitioners you've talked to. I would say from my perspective, just in talking with people, a lot of the technical community is...

Daniel:

...kind of like, oh, this is ridiculous, Anthropic being identified as a supply chain risk. On the customer side, our customers, who often work in regulated industries, sometimes with some relation to the government, are very much like, oh, the rug's been pulled out from under us. We were not expecting Anthropic to be identified in this way.

Daniel:

And they're rethinking this sort of model vendor lock-in, the risk to themselves if they build everything on Anthropic. If one day the government can just say this is a supply chain risk and they have no way to pivot, that creates liability on their end.

Chris:

Yeah. And noting up front, since I work in the defense industry, that I'm only speaking for myself and no other organization. To your point there, it seemed very kind of malicious: the government said, you're going to do what we want whether you like it or not, this particular vendor said, no, we're not, and so this particular thing happened. I think that has created the awareness you spoke of, not only throughout the AI industry; many industries are recognizing that the rug can get pulled out very quickly, and there's been a lot of risk mitigation in recent weeks from many, many organizations along those lines. Now in this particular case, as we pointed out, a judge came in and intervened, that process is still playing out, and my sense is that the government is kind of backing down from that anyway a little bit, which is probably good in the long term. It's not the kind of situation that is beneficial for anybody, I think.

Chris:

So, kind of rolling through this: when I talk to people about Anthropic, there's a mixture of support and frustration. We tend to pick on OpenAI more often, historically, and I'll acknowledge I'm the first person to have made some comments about them on the show in past episodes, but their Codex, the competing tool, is open source, and Claude's gotten a fair amount of criticism for not open sourcing. I've talked to a lot of developers in groups today, and in some cases just read what other developers are saying, and there's a certain amount of, well, they could have open sourced it up front, and this is kind of what you get. My suspicion is this will probably lead to open sourcing, because now the cat's out of the bag architecturally. And there have been a number of efforts. There's one developer in particular who had already been working on trying to reverse engineer Claude Code before this came out, and had done some work along those lines, and that was one of the people who got ahold of the repo last night. What they did this time, instead of keeping the leaked code out there, and that code was shut down very rapidly through a legal request, a DMCA takedown...

Chris:

So this developer rapidly organized an effort to do a clean-room rewrite of Claude Code, initially in Python, and there's also a concurrent effort to rewrite it in Rust. The repo that holds both of these efforts became the fastest repo in history to surpass 100,000 stars on GitHub; it surpassed 50,000 stars in the first two hours it existed. So there's been a lot of attention here and a lot of people jumping in. They're trying to get a Python version up and running immediately, with a rapid follow-up on a Rust version.

Chris:

All of this would essentially redo the original TypeScript that was leaked, which is what Claude Code was written in. So with the architecture out of the bag, my expectation is that Anthropic will probably end up just open sourcing this, because at this point, why not? You're not really losing anything in terms of IP, because the IP's already out there, whether that's appropriate or not. OpenAI already did that. It gets rid of a criticism without losing anything, given what happened. So it's quite the soap opera in the AI world today, and I'm just kind of sitting back and watching what people are saying.

Daniel:

Yeah, and I guess to circle back and dig into the details of what actually happened; then I think there are some things to learn from what was released. There are things to learn in terms of what not to do, cybersecurity-wise, but there are also interesting things to learn agentic-development-wise, from what we know. So, the specifics: basically two things happened simultaneously. A malicious version of the Axios library was published to npm. Axios is a third-party helper library used to make web requests.

Daniel:

Claude Code depends on Axios, so that dependence created the second problem, which we'll come back to. But the other thing that happened, in addition to the malicious Axios version being published at the same time, is that Anthropic accidentally left a .map file in the Claude Code package. A .map file, a source map, generally helps debuggers map between non-human-readable JavaScript files and the human-readable TypeScript they were compiled from. By leaving this .map file in the package, they shipped enough information that, were you to want to, you could reconstruct something like half a million lines of Anthropic's private, closed-source proprietary code, which is the main guts and brain of the Claude Code package.
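[Show notes: To make the .map file point concrete, here is a minimal sketch of why shipping a source map alongside a bundle is so dangerous. The file names are illustrative, not from the leaked package; the `sourcesContent` field is part of the standard source map format and can embed the full original source text.]

```typescript
// recover-sources.ts -- illustrative sketch only, not the actual leak tooling.
// If a published bundle ships its .map file, and that map embeds the original
// text in "sourcesContent", anyone can write the sources back out to disk.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMap {
  sources: string[];          // original file paths, e.g. "src/harness/memory.ts"
  sourcesContent?: string[];  // full original text, parallel to `sources`
}

const map: SourceMap = JSON.parse(readFileSync("cli.js.map", "utf8"));

map.sources.forEach((src, i) => {
  const content = map.sourcesContent?.[i];
  if (!content) return; // nothing embedded for this entry
  const out = join("recovered", src.replace(/^(\.\.\/)+/, ""));
  mkdirSync(dirname(out), { recursive: true });
  writeFileSync(out, content);
  console.log(`recovered ${out} (${content.length} chars)`);
});
```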

Daniel:

Basically, if you were in that three-hour window and downloaded or updated Claude Code, you got the files from which you could reconstruct those 500,000 lines of proprietary code, and you downloaded a malicious version of Axios containing a remote access Trojan that actually compromised your local machine. Kind of a perfect storm. There's a security researcher, Chao Feng Shuo, @friedrice on X, who announced that he had reconstructed the source code. And as you mentioned, Chris, there was then an open source repo that reconstructed this Claude Code repository, and there are tens of thousands of forks of that repo, way more now, I'm sure. That's what we came into today.
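[Show notes: On the Axios side, one standard class of mitigation is to avoid loose semver ranges, so a freshly published malicious patch release can't slip in at install time. A minimal sketch, with illustrative names; real projects would also lean on lockfiles and `npm ci`.]

```typescript
// pin-check.ts -- flag dependencies declared with loose (^ or ~) ranges.
// "^1.2.3" or "~1.2.3" will happily pull a newer, possibly poisoned, patch.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = {
  ...(pkg.dependencies ?? {}),
  ...(pkg.devDependencies ?? {}),
};

for (const [name, range] of Object.entries(deps)) {
  if (/^[~^]/.test(range)) {
    console.warn(`loose range: ${name}@${range} -- consider pinning exactly`);
  }
}
```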

Daniel:

So we haven't talked about, like, the guts of that and what we discovered, but that was essentially the timeline of what happened. So, I did I I can at least say on this podcast that I did not update my my Claude Code or download it overnight. So unfortunately, I didn't get the 380,000,000,000 valuation proprietary code. But, still interesting to learn learn many things from from those that have dug in and and certainly still a lot going on on GitHub as we speak to reconstruct and and, leverage some of these ideas.

Chris:

Yeah. You may not have gotten it, but the code is out there many, many times over. Back in the window, people were not leaving it on GitHub; I think everyone recognized it was a big moment for Anthropic in a negative way, and so a lot of folks saved it offline.

Chris:

So it's out there, and that's why I said I think Anthropic's best move would just be to go, we're open sourcing Claude Code now, and we're looking forward to community feedback to make it better.

Daniel:

Yeah, I guess there's an overall thing we learn technically from this, and then there are specific things that are interesting to talk about. The overall thing to emphasize, and we've suspected this for some time, and if you're a practitioner you kind of know it by intuition, is that the model itself is not the component that drives performance for systems like Claude Code or OpenClaw. There does need to be a model in these agentic systems.

Daniel:

However, the real IP in these systems is not the model. It's what's called the agent harness around the model. Right? It's that orchestration: how is memory handled? How do you connect to tools?

Daniel:

How do you persist things over sessions? How do you wake the agent up? How do you point to certain information? How do you give context? All of that is what we would call the agent harness.

Daniel:

And it's really those lines of code that were released by Anthropic, even though they didn't release the weights of their model. One of the reasons I think this is so interesting, Chris, is that a year ago we would have been shocked and amazed if someone leaked model weights, and that would be like, that gives us everything we need to know about their IP. Right? We have the model weights. We can reconstruct it.

Daniel:

We have their model. Here, it doesn't really matter if we have the weights of the actual Opus model or whatever Anthropic model, because all of the IP is in this agent harness around the model. That means I don't have to use an Anthropic model. I could use whatever model I want, and if I put the right agent harness around it, I can do extremely powerful things. That's why this is such a leak and why it's so impactful: the agent harness is where the IP is.

Chris:

Yeah, and in a broader context, we have actually been saying what you just said in different words for a long time now. We've pointed out that while the functionality of different models has been increasing steadily, and we've been reporting on that as we go, we've said many times: it's still software. It's still software architecture, and the model is one component in a larger architecture. And to your point, much of the rest of the architecture is in this harness we're talking about, and that's why the IP is so critical.

Chris:

Especially when you consider that we crossed that threshold with Opus 4.5, getting to a point where it was really flipping the entire developer world over on its side in terms of how people productively create software. At this point, 4.6 came out early in the year, OpenAI has pushed forward with new models, and there will be many open source models coming out as well that can do every bit of that. So as that progression goes, the models are becoming less and less important, because there are going to be many of them with the same capabilities. And these harnesses, where they're at now, and as they evolve into edge harnesses and cloud harnesses and all sorts of different agent capabilities, this is huge, and this is a turning point. If folks are not paying attention to that, I think they're missing the story. I think reporting on the model is, to some degree, a thing of the past at this point.

Chris:

Yeah.

Daniel:

Yeah, and if we look into Anthropic's specific agent harness, there are a few high-level things which I don't think are problematic for us to talk about here, because everyone's talking about them everywhere; essentially this is widely known now, even though we're only a day in. There are a few key points of what makes the agent harness of Claude Code particularly powerful, the first being how it manages memory. Many AI agents, if you're not careful about how you write them, struggle with a kind of context entropy, or memory drift, where the more you add into the memory of the agent as it operates, the more junk is in there, the noisier it is, and the less effective the agent becomes. And this is something Claude Code seems less prone to in many ways.

Daniel:

Revealed in the agent harness are three levels or layers of memory management within Claude. I think this is interesting to talk about because it's practical for agent developers out there, for all of us. The first is that they have this memory.md, which is constantly fed to the agent, but it's not all the memory of the agent.

Daniel:

Basically, it holds only pointers to where certain information is held. It's like an index, a pointer system to where information is, so you're not always loading all information into the agent; you have these pointers. It's an index to certain context information.

Daniel:

Then there's sharded topical information. Rather than keeping everything together, there's the index, and then there are these shards, discrete files that hold certain types of information. This prevents that noisy element of loading all memory into the agent; you only load topic-specific shards when they're relevant. The last piece is a self-healing search mechanism. If you're familiar with Linux, grep is a way to search and scan files and logs, and the agent is configured so it can verify information against the actual logs using an optimized grep-style search rather than relying on its own generated summary. So it self-searches via this grep-like process.
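[Show notes: A minimal sketch of the three-layer pattern Daniel describes: index, topic shards, and grep-based verification. All file names and helpers here are illustrative, reconstructed only from this high-level description, not from the leaked code.]

```typescript
// memory-harness.ts -- sketch of: (1) an always-loaded index, (2) on-demand
// topic shards, (3) grep verification against raw logs instead of summaries.
import { readFileSync, existsSync } from "node:fs";
import { execFileSync } from "node:child_process";

// Layer 1: memory.md as a small index mapping topics to shard files,
// e.g. a line like "deploy-notes: shards/deploy.md".
function loadIndex(path = "memory.md"): Map<string, string> {
  const index = new Map<string, string>();
  for (const line of readFileSync(path, "utf8").split("\n")) {
    const m = line.match(/^(\S+):\s+(\S+)\s*$/);
    if (m) index.set(m[1], m[2]);
  }
  return index;
}

// Layer 2: load only the shard relevant to the current task, instead of
// stuffing all accumulated memory into the context window.
function loadShard(index: Map<string, string>, topic: string): string {
  const path = index.get(topic);
  return path && existsSync(path) ? readFileSync(path, "utf8") : "";
}

// Layer 3: verify a claim against the actual logs with grep, rather than
// trusting the agent's own generated summary of what happened.
function verifyInLogs(pattern: string, logFile: string): boolean {
  try {
    execFileSync("grep", ["-q", pattern, logFile]); // exits 0 on a match
    return true;
  } catch {
    return false; // non-zero exit: pattern not found
  }
}
```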

Daniel:

So it's this searching, the topic-related shards, and the topical or contextual index in the memory.md that make up that memory hierarchy, which I do think a lot of people struggle with. It's one of those points of disillusionment: I create an agent, I just keep loading it with more and more stuff, and then it gets worse over time, which is counterintuitive and also sad.

Chris:

Yeah, and I think the learning, a big part of it, is that aside from what the future of Claude Code is going to be from a licensing standpoint and in terms of access to the code, these architectural ideas about memory management and other innovations in how they approach the various problems of agentic development will rapidly become standard libraries across many languages, where folks can start implementing them. So we're seeing what is likely a turning point in mature agent development going forward.

Chris:

And you'll see the other players reacting to that. It'll be interesting to see what kind of changes come up in the industry in the weeks to follow.

Daniel:

Yeah. There are also a couple of other general principles, and then one thing that's created a good bit of pushback against Anthropic from the open source community. To your point, Chris, about what we can practically apply from this leak: Claude Code also uses this strict write discipline principle, which is a kind of hallucination prevention. The idea is your agent could say, oh, you've asked me to run the tests. Okay.

Daniel:

I am running the tests. And in the memory, it's represented that the tests ran. Right? But under the hood in the actual system, maybe something errored out and the tests never actually ran.

Daniel:

Or maybe a file wasn't created, or whatever it was, it didn't actually happen on the system, even though the agent said, I'm going to do this now and I did it. So they have this strict write discipline idea: as you're developing your agent, you should only record to the agent's memory that something happened if you can verify against the environment, the terminal, the API you're connecting to, the file system, that the thing actually happened. Not that the agent tried to do the thing, but that the thing actually happened, you verify it, and then write it back to the memory. The other thing they have, which I think is part of this memory management, is this idea of auto-dream: for agents that run for very long periods of time, even days or weeks, roughly every twenty-four hours you review observations and insights and consolidate them into the permanent facts of the memory, so that you're not just continually increasing its size and leaking all sorts of noisy things into the memory.
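[Show notes: A minimal sketch of the strict write discipline idea, verify against the environment before anything is recorded. The helper names and the append-only memory file are assumptions for illustration, not Anthropic's actual implementation.]

```typescript
// write-discipline.ts -- record a fact to memory ONLY after a check against
// the real environment confirms it; the agent's claim alone is never enough.
import { appendFileSync, existsSync } from "node:fs";
import { execFileSync } from "node:child_process";

function recordVerified(
  fact: string,
  verify: () => boolean,
  memoryFile = "memory.md"
): boolean {
  if (!verify()) {
    console.warn(`unverified, not recorded: ${fact}`);
    return false;
  }
  appendFileSync(memoryFile, `- ${fact}\n`);
  return true;
}

// Only record "tests passed" if the test runner actually exits 0.
recordVerified("tests passed", () => {
  try {
    execFileSync("npm", ["test"], { stdio: "ignore" });
    return true;
  } catch {
    return false;
  }
});

// Only record "config written" if the file really exists on disk.
recordVerified("config written to app.config.json", () =>
  existsSync("app.config.json")
);
```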

Daniel:

So you can tell that the memory management, the harness, these layers, this architecture around the model is very much the IP. The thing people have pushed back on quite a bit is that there's actually an anti-distillation flag within Claude Code. They have functionality in Claude Code to, number one, prevent people from reverse engineering their harness via this anti-distillation, meaning they actually put fake stuff, fake tools, into the chain of thought of the agent to throw you off the scent of recreating what's actually going on. It's a decoy fake-tool-injection, reasoning-masking ploy, which, fair enough, you've got a proprietary thing, go for it. The thing that people were less happy about is a file, uncover.ts, which is basically meant to hide Claude's, the AI's, identity when it contributes to open source repos.

Daniel:

So basically avoiding watermarking or any identification that things are AI generated. And the open source community has, let's say, had a bit of backlash against this, because it's an explicit attempt to hide AI-generated code within open source contributions. Open source likes transparency, and this is strictly nontransparent. Right?

Chris:

Well, when you really get to the heart of it, Anthropic has built its brand on safety and transparency, as noted at the top of the episode. When you're differentiating yourself against the other major players and then something like that is found, it's one of those things. This is purely speculative, but had we found that in OpenAI, people probably would have been like, yeah, that's what I would have expected from them, just because of the general attitudes. Whereas Anthropic, people were holding to a higher standard based on that branding, and this is a moment where they fall flat on their face based on that discovery. So it's not just an IP issue for the company. It's also a brand perception and trust issue within the larger developer community.

Chris:

So they have some fixing to do to put things right with the people that they are trying to serve.

Daniel:

Yeah. And it's interesting to place this within the wider ecosystem and how agentic and autonomous systems are developing. You've got Claude Code, which is a proprietary agentic development tool, but still a reactive tool in the sense that it responds to your queries or specs or issues on GitHub and does things. Right?

Daniel:

It appears, also as part of this leak and what we learned about Anthropic, that they have a product roadmap moving away from the current reactive version of Claude Code toward a running-all-the-time model: background maintenance, cron scheduling, refresh, etcetera, which is much more OpenClaw-like. What's interesting is that OpenClaw, an open source agentic framework, is also primarily interesting because of the agent harness around it, just like the agent harness around Claude Code makes it so interesting. But OpenClaw has caused a lot of stir because it's always running in the background, listening, and has this heartbeat mechanism to wake up and do things. It does seem like Claude Code is moving in that direction as well.

Daniel:

So if we look at the comparisons here, Claude Code is currently reactive in similar ways to other assistants out there, but moving to the more proactive model, maybe not in the exact same way as OpenClaw, but more like OpenClaw, where it's running twenty-four seven, as a daemon, with a kind of heartbeat or wake-up mechanism. Also interestingly, Claude Code and OpenClaw are both locally driven agents: one with maybe more sovereignty in terms of your own control of it, OpenClaw, and one with the proprietary element pushed off to the vendor. Regardless, if we're looking at the direction, both from what we now know about Claude Code and what we've seen with OpenClaw, get ready for more proactive background agents that are running all the time, waking up on a heartbeat, doing things for you. And to your point, Chris, now that things have been leaked with Claude Code, there are going to be a million open-Claude-this and Claude-Code-that projects as this sort of Pandora's box opens.
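[Show notes: A minimal sketch of the heartbeat-style proactive loop discussed here, assuming a simple in-process scheduler. Intervals and task names are illustrative, not taken from OpenClaw or Claude Code.]

```typescript
// heartbeat.ts -- wake up periodically, run whatever background work is due,
// then go back to sleep; one way a long-running proactive agent can operate.
type Task = {
  name: string;
  due: (now: Date) => boolean;
  run: () => Promise<void>;
};

const tasks: Task[] = [
  {
    // nightly "auto dream"-style consolidation pass
    name: "consolidate-memory",
    due: (now) => now.getHours() === 3 && now.getMinutes() < 5,
    run: async () => console.log("reviewing observations, consolidating facts..."),
  },
  {
    name: "background-maintenance",
    due: () => true, // every heartbeat
    run: async () => console.log("checking repos, refreshing context..."),
  },
];

const HEARTBEAT_MS = 5 * 60 * 1000; // wake every five minutes

setInterval(async () => {
  const now = new Date();
  for (const task of tasks) {
    if (!task.due(now)) continue;
    console.log(`[heartbeat] running ${task.name}`);
    await task.run().catch((err) => console.error(`${task.name} failed:`, err));
  }
}, HEARTBEAT_MS);
```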

Chris:

Yeah. It'll be interesting, because as noted, we're already seeing that even today, on day one, or day two, I guess, coming out of this. Your point earlier: it's all about the harness at this point. Once upon a time, we were reporting on the models as they were coming out, and if you think about it, we've been talking about these harnesses and the infrastructure around them a lot more, as is everyone. I think it really is a sign of the maturity coming to the industry, and I think this big oops from Anthropic will drive a lot of innovation out there in the open source community, and maybe some closed source where people are taking ideas and trying to build their own companies off of that. So I think we're seeing a bit of acceleration happening right now as people write clean-room code based on what we've learned today. It's an interesting moment, and I suspect as we work through this in the weeks to come, there will be some very interesting things popping out on GitHub and other places that we're going to want to address as well.

Chris:

I know for myself, I am keenly interested in going back to that implementation we talked about a little while ago. Anyone who's listened knows I'm into Rust, especially for edge environments, so I'm interested in how that Rust line of development matures, as well as others that may be out there. It's an interesting moment to spectate on these things.

Daniel:

Yeah. And maybe as we close out here, it's good to encourage people to get hands-on. It's never been easier to get hands-on with these tools and build intuition about how they work. More is available in the open source world right now and can be under your control; if you're worried about security, create a sandbox environment, add in one of these agents, and try some things. And if you're building agents out there, if you're AI practitioners, some of the very clear guidance we learn from all of this is, number one, think about how you manage memory in your agents and be smart about it in that harness, using sharded memory and lookups.

Daniel:

Maybe think about moving to a proactive strategy rather than a reactive one, where you clean up memory every so often, every night or whatever it is, and have something working in the background; that seems to be where things are headed. And also, as you're building this harness, there is very much the potential of supply chain risk within that agent harness, whether it's in the open source world or the closed source world. That supply chain has risk associated with it, which is very much separate from the model risk. Certainly there are things related to model risk and bias and all those things, but the agent harness now has this kind of supply chain risk associated with it. All good things to keep in mind as we interact with these tools.

Chris:

Well said. That's a good point to wrap up on. Looking forward to hearing from folks out there on our social media channels with a bit of feedback. Let us know what you're doing and how you're thinking about this as you bring it into your own development cycle and your own ideas. And if there's any really cool open source you see developing out of this, we'd love to hear about it and go take a look.

Daniel:

Yeah. Let us know. Thanks for clawing out all of the good topics, Chris. It was fun to have this discussion, and hopefully we'll leak it as soon as we can to the...

Chris:

There you go. We're gonna leak it within days here.

Daniel:

Alright. Hey. We'll talk soon.

Chris:

Take care.

Narrator:

All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner, Prediction Guard, for providing operational support for the show.

Narrator:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.