Practical AI

In this Fully-Connected episode, Dan and Chris start with Anthropic's Mythos frontier model, parsing what is publicly known about its cybersecurity capabilities and projecting its possible implications from "We've been here before. 🙄" to "See ya, cybersecurity! 😱"  It's the end of the world as we know it, and I feel fine. 🙃

Then they have fun with the craziest AI announcement of the year (except for the Mythos one of course).  Allbirds pivots from shoe manufacturing 👟 to neocloud provider ☁️. No, we didn't see that one coming either! 🙈

They finish with the rise of “tokenmaxxing” - the gamification 🎮 of writing code with maximum LLM usage.  Incredibly profitable 💰 for commercial frontier model providers and insanely expensive 🤑 for the gamers.  Better have 10X productivity just to avoid bankruptcy! 


Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Host
Daniel Whitenack
CEO @Prediction Guard & cohost @Practical AI podcast

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Narrator:

Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Blue Sky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Narrator:

Now onto the show.

Daniel:

Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at Prediction Guard. I'm joined as always by my cohost, Chris Benson, who is a principal AI and autonomy research engineer. How are you doing, Chris?

Chris:

Hey. I'm doing great today, Daniel. Looking forward to catching up on one of these fully connected episodes where we get to talk about kinda whatever we wanna talk about.

Daniel:

Whatever we wanna talk about. I mean, I guess normally we just talk about what we wanna talk about, but when we have a guest, we at least try to center the conversation on a few things they wanna talk about. I'm pretty excited, Chris, because I just got a brand new pair of shoes, and I've been wearing my new shoes all week. I didn't think that would be a relevant topic to bring up with you on the Practical AI podcast, because I thought shoes really didn't have any overlap with the AI world. Although, I guess this is not the topic we're gonna talk about, but I did see a company where, like, you take a picture of your foot and the AI figures out the shape of your foot and then, I guess, could advise on shoes or something. Anyway, speaking of shoes, today, I didn't even see this, but folks in the office here at Prediction Guard were like, hey.

Daniel:

Did you hear about Allbirds? And I had not heard about Allbirds, but apparently, Allbirds is now an AI company, which is quite interesting. So, Chris, do you have a pair of Allbirds? I guess they're AI Allbirds now?

Chris:

I actually don't, but I gotta say, that's a terribly interesting way of retreading your business model, you know.

Daniel:

Yes. Yeah. They really kicked it to the curb, I guess.

Chris:

So, from shoes to AI data centers?

Daniel:

Yeah. Well, I guess some background information for people here. I sort of only know this because my wife was really into Allbirds. She had a few pairs. But it seems like from around 2016 to 2021, just after COVID, there was this, you know, huge rise of the Allbirds brand, which was a favorite in terms of shoes that you would order online.

Daniel:

I think eventually they did have retail locations, that sort of thing. But from 2022 to 2025, so through this last year, they pretty consistently had a decline: growth stalled, margins compressed, and their stock price fell, making the business distressed. And so, as we're recording this, we're in April 2026. In March of this year, Allbirds exited the actual footwear part of their business, selling off all of those assets to American Exchange Group.

Daniel:

Don't know a whole lot about them. But, basically, that ends the shoe operation portion of Allbirds. They still had the shell of a company, which had a name and an entity and a stock ticker, etcetera, and they had a bunch of cash. Right? So what do you do with a bunch of cash?

Daniel:

And I guess they did also raise additional cash. What do you do with a bunch of cash but buy GPUs, which is apparently what happened?

Chris:

So that's what you do with all your cash?

Daniel:

Yes. Rebranding as AI compute infrastructure. I'm wondering if they'll give me AI compute infrastructure for cheaper. That would be kinda nice.

Chris:

You know, it's amazing. Like, we're joking about this being kind of the pivot you didn't see coming. Quite a pivot, too, but their shares jumped at least 700% based on what I'm looking at here, which is quite a jump in terms of the market not only accepting but endorsing that kind of a decision. You gotta be wondering if there aren't many, many boards and CEOs out there kinda going, we're in kind of a tough spot in our business.

Chris:

Things have been struggling recently. You know, maybe we go buy GPUs and go into the AI business. I mean, it apparently is a perfectly legit business plan now.

Daniel:

Yeah. I guess on the positive side, this could seem to be a rational allocation of capital. Right? So I have a bunch of capital. Well, I don't have that much capital.

Daniel:

I wish I had that much capital. But a party has that much capital. And, you know, if your core business is dying and you're able to sell that off, and you have a company, a stock ticker, then what's the hot thing? And obviously, compute is a core part of the expansion of AI everywhere, the running of these models at scale. Many might not be self-hosting models, but they're certainly consuming models that are running on infrastructure somewhere.

Daniel:

Right? And so I guess from that perspective, it could be seen as a very positive and useful kind of pivot. What's your thought?

Chris:

Well, apparently so. I mean, the market is endorsing it, and I think, prior to this announcement, this is the kind of thing nobody would have bought into. It would have been seen as a joke, you know? But the fact that the market's doing that, at least at this point, really does make such a pivot into something that companies may be evaluating. And in these articles, they talk about Allbirds once upon a time being the next Nike, and I've seen that bandied about in some of the articles, and it got me thinking for just a second there: what if Nike were to do the same thing? What if Nike were to pivot from shoes? Yeah, just do it. But I'm wondering, would they brand themselves as AI-R?

Daniel:

Sorry. Yeah. AI Jordans.

Chris:

There you go.

Daniel:

Yeah. Speaking of terms, I was running across this term... you know, sometimes we try to clear up jargon on the Practical AI podcast, and sometimes jargon doesn't make any sense to me. But this term, neocloud, is this something that you've run across, or is this new to you?

Chris:

This is new to me, so you'll have to take us into neocloud.

Daniel:

So apparently, and this is related to the Allbirds thing, a neocloud, sometimes referred to as AI-native cloud, is kind of a shift that we've seen recently. A neocloud is cloud infrastructure that's built specifically for AI workloads, not general computing. So in that way, Allbirds would potentially be putting together a neocloud. The kind of old cloud model is, you know, you have your web app infrastructure, you have managed databases, you have managed storage of some type, you have some IT or logging and monitoring services. It's kind of general purpose, flexible, lots of different services. The idea of this neocloud or AI-native cloud, think of other companies like CoreWeave or Together AI or Lambda Labs, right, is infrastructure that's built either for AI training, inference, or both: massive GPU workloads, kind of GPU-first, not CPU-first. And this exists because GPUs are scarce, including in the hyperscalers, the general cloud platforms.

Daniel:

The workloads are different, right, because maybe you are running a large model across many nodes, with lots of movement of data often, and you're kind of supply-chain constrained in terms of what you need to support. So that's the idea and kind of how it intersects here. This was a new one for me as I was looking a little bit at this story.

Chris:

I'm curious. Do you have any insight into, like, if you're looking at neocloud companies and you're comparing them against kind of the traditional cloud players, you know, the Alphabets, the Apples, the Microsofts, the Metas? How is the business model changing, and how much is neocloud eating into that? I mean, are we seeing it stay very specialized, or is it making kind of general market traction?

Daniel:

Well, I don't have exact numbers right in front of me. If our listeners do have that, let us know. Point us to those on social somewhere. But I do know that I'm seeing a lot of CoreWeave and other neocloud types of companies being talked about quite a bit.

Daniel:

And I think that is partially because there can be this specialization toward the AI workloads and the specific compute there. As you know, going into a hyperscaler, if I go into AWS or some of these platforms, you can do just about anything. And there isn't that focus, which is good in one sense because you can support a lot of different types of things. But if you're an AI-native, AI-forward company, and maybe you're quickly spinning up no-code applications and not doing a lot of that hosting and management in a traditional way, then maybe it makes sense for you to run a lot of that stuff serverless or otherwise, and kinda have this pay-as-you-go in AI-specialized clouds, which is kind of interesting. I guess that's one of the things about the Allbirds case that you could talk about on, maybe, the negative side.

Daniel:

My hot take on this is, like, really, what Allbirds is bringing here is a company shell and capital. Right? They're not bringing any domain expertise that I'm aware of. Maybe there's some domain expertise around supply chain and manufacturing or industrial settings that they're bringing, but they're not bringing AI-specific expertise in terms of building this kind of neocloud.

Chris:

That's true.

Daniel:

The other thing is, like, approximately $50,000,000, although that's much more money than I can imagine generally, is very much a drop in the bucket in terms of the AI data center market. So part of my question is, like, okay, create your little data center. It is very much a drop in the bucket whether you look at what China is spending on data centers or companies in The US investing billions of dollars in AI data centers. That maybe is the cynical take on this: okay, you have a little bit of this capital.

Daniel:

You don't have the domain expertise in AI, and you're gonna, what, spend $50,000,000 on a little data center? How is that gonna make a mark? And maybe part of it is, like, this is the foothold, and more capital will be infused, and they'll figure it out. I don't wish them bad or anything. It's just more of a skeptical take.

Chris:

Yeah. I mean, in another industry, you know, that money would seem like quite a starter. But in this industry, the availability of GPUs in the ecosystem is already quite strained based on the demand. And if you look at the fact that, kinda globally, there's basically half a dozen key players in the GPU ecosystem in terms of supply, which is really NVIDIA, TSMC, AMD, Intel, and Qualcomm for the most part, and each of those is cranking out the types of chips they make in this capacity for AI purposes. So I can't help but wonder, if this becomes a trend where you see a lot of struggling companies pivoting into that, what does that chip supply chain start looking like? It gets even more strained going forward.

Chris:

So this will be really interesting to watch, to see if this turns into a trend and what happens with that.

Daniel:

Yeah. And what do you think, Chris? There's kind of two elements happening here. One is the centralization and expansion of these very much centralized compute resources and data centers, which will grow. But there's also this push toward... I don't know if this was one of the trends that we talked about at the beginning of the year for 2026.

Daniel:

It's certainly one of the trends that I'm thinking about in terms of the market in general: the shift toward kind of physical or embedded AI, where AI is living everywhere in a bunch of environments, whether that's kiosks in a retail environment, or actually on the manufacturing floor for a manufacturer, not in a data center. Of course, in phones, or, we just had the conversation with Comma AI, who has AI in these devices that they're putting in cars to make them self-driving. So, yeah, what is your take on this, and how could people think about it? Are both increasing simultaneously? Like, we'll just see more data centers and we'll see more physical AI? Or is there a shift more toward that embedded, edge-centric model versus everything being centralized in data centers?

Chris:

I mean, I think it'll be all of the above, in my view, but I think the giant growth area is going to be in what you might call far edge, because people define edge differently. You know, some people would say the edge of the data center or edge of the cloud is edge. But if you're talking about AI embedded in physical devices out there that are not directly cloud connected, or are, but are not relying on that for all of their functionality, then there's huge, huge growth potential in that across so many different industries, and that's still in its infancy. But yes, I do think that your notion of neocloud, as you instructed us a few minutes ago, is an opportunity that many, many companies will go at. My gut is that in the long run, that is not as profitable, just because there are already huge players dominating that, and as others fill in the niche, there'll be many, many players there. So it'll be interesting to see if that continues to be an amazing strategic opportunity versus specializing out in various embedded devices.

Daniel:

Yeah, well, I guess that is your set of beliefs or assumptions about something, or shall I say your mythos about that, which brings us to an interesting topic: Mythos. What's the right way to say it?

Chris:

I'm actually not a hundred percent sure. I've heard people say it both ways, so either one is fine for today. Okay. Unless folks have been under a rock the last week, they hopefully have heard a bit about this already.

Daniel:

Yeah, I'll maybe switch between the two; that way, at least for part of the time, I can seem smart. But the Mythos model from Anthropic has been in the news. Or, I guess, the supposed Mythos model that Anthropic has somewhere, not yet seen by people, is in the news, I should say.

Chris:

Correct. So the short of it: there is a new frontier model that I think logically you would say is kind of the next thing past the Opus model, which has been, you know, the powerhouse driving Claude Code. We've talked a lot about Opus and Claude Code on the show, and so the next generation from Anthropic, being Mythos, is a powerful model. But back before any of us had heard of it, I think what's been reported is that Anthropic had it in a sandbox environment. They discovered it was particularly adept at uncovering security vulnerabilities in just about every meaningful software package or arena you could imagine. They claimed many thousands of vulnerabilities in every operating system and every browser, and they realized that it could have profound effects out there on its own. So instead of releasing it, as I think they had been planning, they kept it close hold, and they started a new project called Project Glasswing, which is a security project that is kind of closed. They brought in apparently 40 companies, but only about a dozen of those companies are public; a number of them are not.

Chris:

And those companies are being invited to use Mythos to make sure that their various systems are not exposed, or to give them time to fix those. So there's not a lot of information, as you would expect, about the specifics of that process, and that is ongoing right now. We don't really know what the future of Mythos is, but I would finish by saying, if you just look at it the same way that you and I are often talking among ourselves and with guests about these types of models, the fact that people know it's possible now means that it is probable that other people will be developing such models, as we've seen in every case ever on frontier models. And so it might bespeak a very interesting future where we are seeing some tremendous capabilities from frontier models that are, once again, a significant step beyond the generation that's public.

Chris:

So, who knows? But in the months ahead, I suspect this is a topic we will end up revisiting from time to time.

Daniel:

Haven't we been here before, Chris? Sure, you and I have been doing this for a while, and it seems like this is the same conversation we've had with respect to some OpenAI model releases, gated releases of this or that because it's gonna end the world or something?

Chris:

It was GPT-3, I believe. Am I remembering right?

Daniel:

I believe it was an earlier one, actually. Yeah. We talked about the whole gated release thing, right?

Chris:

Yes. And they were holding it, and then finally it was out, and they didn't even try that on GPT-4.

Daniel:

And now, like, you look at, you talk about GPT-3, even GPT-4...

Chris:

Yeah. Like, oh...

Daniel:

That thing sucks.

Chris:

Yeah. You think about it...

Daniel:

It's like, at the time, it was gonna end the world, but it kinda sucks.

Chris:

I was talking to somebody the other day, and they were using GPT-4o in their business, and I literally said, why are you using that dinosaur? You know, why would you do that to yourself? So yes, this is definitely coming around to the same story.

Daniel:

I'm not saying it's not better. I think my point is just, hey, I don't think it's world-ending. People can probably rest a little bit at night. I do think, and this is, I think, from Reuters and a couple other places, that they emphasize it's especially strong at discovering vulnerabilities and exploiting those vulnerabilities. And so this does expand things.

Daniel:

I mean, even already, right, I can use whatever models and agentic coding techniques to create malware, just like I can use them to create great software, and I can exploit systems. So there is definitely this narrowing, which maybe has always been the case in the cybersecurity world, where the threat actors get better and there's better availability of tools and that sort of thing. This is certainly a different level of that. I'm not saying it's equivalent. I'm sure it's a different level of that, right?

Chris:

You know, when I look at this, regardless of what Mythos's capabilities really are, whether they're really high, low, whatever, I think that Anthropic has historically been a little bit lower key and kind of safety oriented, not quite as flamboyant and over the top as the OpenAI folks have been. I mean, Sam Altman's known for the kind of statements he makes all the time, and people over time have kind of learned to take those with a grain of salt. But starting with some of the Claude Code stuff and getting into this, maybe Anthropic has started taking a page from that playbook at OpenAI in terms of the marketing aspect of this. Because regardless of whether Mythos's capabilities are amazing or less or just whatever, this is still an amazing amount of attention. I mean, you and I are sitting here talking about it. We're contributing to that.

Chris:

They were all over the news, and so it's a fantastic marketing strategy on their part, no matter what the reality is.

Daniel:

So who knows how much is tied to recent problems in terms of interactions with the government. I would be lying if I said I did not have a personal bias and hope for this, but I do think that this emphasizes a tailwind for governance and control capabilities within the AI world, which is, of course, an area where I work. But also, letting people know that there is a risk is very different from controlling that risk. And so whether it's, like, an AI SOC that's using AI within security operations, on the offensive side or defensive side, I think that it ushers in a tailwind for those companies. But also, hey, if companies actually want to use a model like this, there are bad things that can happen as well as good things, which emphasizes a push toward governance and control regardless of what model stack you're using.

Daniel:

And, you know, shout out if you're out there, we'd love to have you on the podcast, but there are things like the AI Underwriting Company and others that have received funding recently, where they are actually trying to establish some of those auditable certifications for companies in terms of how they institute governance and what evidence there is for that. That's a very different thing than saying there is a risk. Right? It is. It is.

Daniel:

Yeah. But, interesting. I look forward to trying it out whenever I get my hands on it. I'm not part of the... I don't have a golden ticket, so I'll have to wait with the others, I guess.

Chris:

Yeah. Likewise.

Daniel:

Until I go for token maxing my Mythos endpoint.

Chris:

Oh, you threw that out. Now we gotta talk about that. I mean, token maxing is the hottest new term over the last few weeks.

Daniel:

Well, first off, I feel really old, because I just don't like the whole whatever-maxxing thing. I feel old talking about anything with that sort of term. But yeah, I guess there is the token maxing thing.

Chris:

Oh, okay. So for those who have not heard the term, again, it's been all over the place recently. Stepping back a little, because I think this really ties back, ironically, into the Anthropic conversation that we had: point back to us talking about Opus as the greatest thing since sliced bread, and the fact that, as we have discussed, Opus combined with Claude Code made a substantial change in how people were approaching coding. You know, me coding last year with AI assistants in various forms versus me coding this year, the workflow is quite different. And it really has accelerated in a lot of ways, aside from where the models are and stuff like that.

Chris:

So the toolset has been great, and we've talked about this on the show fairly recently as well. So acknowledging this process, we've had a number of the big traditional AI companies, especially Meta... you know, I can't imagine the Meta culture embracing this. Yeah, meaning, of course it has, if you look at who Meta's CEO is. They have gamified usage among their developers, and they're trying to basically get them to spend as much as they possibly can on Claude Code and other competing development tools to try to accelerate what any given developer can do.

Chris:

And to the point, like, levels that the rest of us look at and go, that's insane. You know, where it's like, you, a developer, go spend hundreds of thousands of dollars on tokens to accelerate your capability. And I guess this is trying to 10x, if you will, to use another buzzword, what any given developer is able to do in terms of producing work, and they're orchestrating teams around token maxing and stuff like that. And I know at Meta, I don't know if it's still up or not, but they had a scoreboard in one of their main areas where everybody could see who was token maxing the most, and then people were gaming the token maxing system. So they would actually spend tokens on kind of trivial things just to make sure that they were showing up on the scoreboard. So, yeah, in those kinds of stories, absolutely doing it to excess. But that's trickled down, and so there are many other organizations that may not have the budgets of some of these top AI companies, but they're trying to figure out, what can we afford for our developers to do in terms of spending money on tokens, and what will that get us in terms of production capability in our own businesses?

Chris:

So that's now another big thing that's out there in business right now.

Daniel:

Yeah. I would say, just anecdotally, I very much think that we, we meaning the company that I'm leading, are not spending enough. We're not token maxing. We have no leaderboard. But I also don't think we are spending enough on AI usage.

Daniel:

It's certainly one of the things as a founder that I think about and am pushing. There are all sorts of parallels you could draw, obviously, with everything in moderation. Right? But certainly, people have to push the boundary for us to know where the boundary really is.

Daniel:

Right? So I think there are probably abuses within that, and there are inefficiencies within that, and things that don't make sense. But it also makes sense to me that there would be a push toward figuring out where the proper boundary is, because I don't know if we totally know that yet. I think it was Jensen from NVIDIA, who obviously has a horse in the race in terms of token maxing. Right?

Daniel:

Maybe. Maybe. As we talk about neocloud and GPUs. But I think he was saying he would be very alarmed if there's an engineer making 500k who wasn't spending 250k, like half their salary, on tokens. I don't know if that's where I land.
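As a rough sanity check on that anecdote, here's a back-of-envelope sketch of what spending half a $500k salary on tokens would actually buy. The blended price per million tokens is a purely hypothetical assumption, not a quote from any real provider:

```python
# Back-of-envelope sketch of the "half your salary on tokens" anecdote.
# PRICE_PER_MTOK is a hypothetical blended rate, not a real price.

PRICE_PER_MTOK = 15.00    # assumed blended $ per million tokens
ANNUAL_BUDGET = 250_000   # half of a $500k salary, per the anecdote
WORKDAYS = 250            # rough working days per year

tokens_per_year = ANNUAL_BUDGET / PRICE_PER_MTOK * 1_000_000
tokens_per_day = tokens_per_year / WORKDAYS

print(f"{tokens_per_year / 1e9:.1f}B tokens per year")
print(f"{tokens_per_day / 1e6:.1f}M tokens per workday")
```

Under those assumptions, that's roughly 16.7 billion tokens a year, or about 67 million tokens every working day, which gives a feel for how extreme the spending levels being described are; a different assumed price scales the numbers proportionally.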

Daniel:

But like I say, I think we're not spending enough on tokens. And this is where, in one conversation, we were talking about how you always have an infinite engineering roadmap. Right? So on the negative side, certainly, you can just spend tokens on dumb things. I'm willing to say that.

Daniel:

But, also, I think it is some indicator of how effectively you're running an AI-driven engineering team in today's world.

Chris:

I think that is a very sensible approach, in the sense that what is unknown now is what the price-to-productivity translation really is. If you look across many organizations, I suspect you'd find a very significant standard deviation in terms of that variability. And so, going forward, as this matures, we're gonna see best practices; there will be books being written that you'll start seeing in all the developer areas about how to do it efficiently. I think we're just not there yet. There'll probably be guidance developing over time about how to do it without just being Jensen's spend-all-the-money-on-GPUs-you-possibly-can approach, you know, which you would expect.

Daniel:

Yeah. And I think maybe it's because we don't totally know the right metrics to optimize and max out right now. Right? Token usage is probably a vanity metric, in the same way that clicks to your website are a vanity metric in terms of your top-of-funnel go-to-market activities. Right?

Daniel:

Like, you can put a lot of money into, let's say, ads or something like that and get a lot of noise in your traffic, or maybe bot traffic. You have a lot of traffic on your website; it doesn't mean that you are doing a great job in terms of your organic discoverability, SEO, etcetera. We have developed other metrics to judge that over time. Right? And there have been certain metrics around velocity, etcetera, for engineering teams over time, and now all of those things are kind of... the rules are broken, to your point.

Daniel:

So, like, what metrics do we use? Sure, token usage is a vanity metric. I think it probably is. But, yeah, correlation, causation, all of that stuff.

Daniel:

Right? If you are doing well, you probably are using a lot of tokens, but it doesn't mean that if you're using a lot of tokens, you're doing well, sort of thing.
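One hypothetical way to get past raw token counts, sketched here with invented numbers and an invented "merged PRs" denominator, is to normalize spend by some measure of shipped work rather than ranking raw usage on a leaderboard:

```python
# Hypothetical sketch: normalizing token spend by shipped work,
# since raw usage alone is a vanity metric. The weekly figures and
# the "prs_merged" denominator are made up for illustration.

team_weeks = [
    {"week": 1, "tokens_used": 40_000_000, "prs_merged": 25},
    {"week": 2, "tokens_used": 90_000_000, "prs_merged": 30},
]

for w in team_weeks:
    # Tokens consumed per merged pull request that week
    per_pr = w["tokens_used"] / w["prs_merged"]
    print(f"week {w['week']}: {per_pr / 1e6:.1f}M tokens per merged PR")
```

A ratio like this still inherits all the usual problems with velocity metrics; it just makes the correlation-versus-causation question a bit more visible than a scoreboard of raw token usage.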

Chris:

And I think one other thing I'll just throw in as we close out on this one: you mentioned just a moment ago the kind of infinite roadmap that a company might have, but it is possible to potentially outrun what your organization can manage in its other capacities. So even if you are able to produce a lot more productive code that drives the capability of whatever your company does, if you're outpacing what the rest of the organization can absorb, manage, and modify, then that's another form of potentially losing efficiency, where pure token maxing may not get you what you want. You can kinda pull it back a little so that your organization doesn't die under the weight of it. So just one last thought to keep in mind. It's a business.

Chris:

It's not just programming.

Daniel:

Well, I would be curious, if someone was token maxing, to take a look at a few of their chat logs to see how they were doing that token maxing, which it appears could actually be discoverable information in a court of law. So transitioning a little bit, one interesting thing that I saw this last week was a ruling by a federal judge in a case where the judge actually forced a defendant to hand over chat outputs, I think in particular from Claude in this case, that they had used to prep some legal materials. The overall idea here being that AI systems like this are not lawyers. They are just what they are: tools.

Daniel:

And so conversations with these systems, even if you wanna think of it as talking to your AI legal assistant, are not conversations with a lawyer, and so they aren't protected by attorney-client privilege even if you're talking about legal matters. Essentially, the court treated an AI system like this like a third party, meaning confidentiality was effectively waived. So it's kind of disturbing in some ways. Maybe even, if you're listening out there, you're probably thinking of that one conversation you had with ChatGPT or Claude that is like, oh man, I hope I never go to court, because they're gonna find out about that chat log.

Chris:

I'm not at all surprised about this, and I think there's been a lot of foreshadowing that such things would happen. Most of these companies, probably all of them, have long since said this is not protected. I'm thinking of ChatGPT, you know, when the voice capability came out way back and people were having live conversations, and that evolved into people kind of treating it as a confidant or friend or that special someone who understands me. And I think these are all kind of different flavors of the same thing.

Chris:

So I don't think it's strictly a legal field issue only. It's also a medical field thing, a psychiatric field thing, and I think they got bitten on that one, you know, when they went to court. But I think it's important to remember that no matter what it feels like to you personally and how you're interpreting these things, it doesn't have special protection. The courts may ask to see that, and that's an easy thing

Daniel:

to get. And I'm just thinking through all the implications of this. Certainly there is the general public, who might be dealing with, whatever it is, divorce cases or criminal sorts of things or whatever they're dealing with in their own life. But for me as a founder, like with Prediction Guard, there's a lot of information that I process, and we have a lawyer. Right? I think like most startups do, or should.

Daniel:

Right? And we're processing agreements or contracts or updates to license agreements. Right? It's so tempting to just say, well, I got this from my lawyer. Let me pop it into x AI.

Daniel:

Yeah. Well, I guess I shouldn't use x as, like, a general variable now, because X is not a general variable anymore, is it? It's a social media company run by Elon Musk.

Daniel:

But if I pop that into a random AI system, then essentially I have moved something that was confidential into something that is explicitly not confidential. And so if those are in draft form, or if we're talking about things that are just contemplated within the company, or dealing with a problematic customer, that sort of thing, all of that essentially is then moved into what is discoverable. And, you know, I'm hopefully not getting sued for anything, but yeah, it does make you think about the ways that you're using these systems. It does seem, in some of the response to this, and even before this, that some law firms are putting language into the contracts that they're using, and maybe even people should think about this in their own license agreements and other things they're doing for their products, along the lines of: hey, if you're putting information into an AI system, you're essentially waiving the privilege of confidentiality. Right? Because that is explicitly discoverable, unless it is explicitly private, you know, like you're running a private model locally.

Daniel:

But yeah, there are, you know, warnings that need to be sent out to people not to do this. There are implications, and maybe contracts that need to be updated, all of that stuff.

Chris:

Yeah. I'd like to kinda wind up a little bit with a question, and maybe some of our folks in the audience can educate us on our social media channels. If you look at communication tools like Signal and ProtonMail, which cater to users who explicitly don't want there to be a record, there's literally nothing there that a government, a court, or whatever could access. And I'm wondering whether there will be those kinds of systems for AI chat, and what the legalities of those are in various jurisdictions, so that, you know, you can have your chatbot, but you're literally able to honestly tell the court, well, no such log exists. So if anyone has any insight into some of that, like the juxtaposition of AI chat and the kind of no-record systems we're seeing pop up, I'd love to hear about it.

Chris:

So, Daniel, any thoughts about that yourself?

Daniel:

Yeah. I think it's really interesting. I also think it brings it back to something that companies have been forced to deal with through multiple transitions of technology: there were rules about what you could send or not send confidentially in a physical letter, and then rules about what you could send and not send confidentially in email. There are different intuitions that we've built up around those, and we just don't have the intuition here yet, and I think that will develop. But also, to your point, it does present maybe some market opportunity as well, or some process sorts of things that everyone needs to think about.

Daniel:

So, it was fun to talk today, Chris. Go ahead and kick your shoes off and relax for the evening. You know, invest in AI data centers. I guess that's what we should do.

Chris:

There we go. That's my pastime going forward, I guess.

Daniel:

Yeah. Awesome. Thank you, Chris. Take care.

Narrator:

Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Narrator:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.