Practical AI

In this fully connected episode, Dan and Chris break down one of the biggest questions in AI today: do open vs. closed models still matter? From the rise of physical AI and edge devices to the shifting landscape of open-source models like LLaMA, they explore whether the “model wars” are becoming irrelevant. The conversation then dives into a bigger transformation, the rise of agentic systems, workflows, and AI-driven infrastructure.

Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Host
Daniel Whitenack
CEO @Prediction Guard & cohost @Practical AI podcast

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Narrator:

Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Blue Sky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Narrator:

Now onto the show.

Daniel:

Welcome to another fully connected episode of the Practical AI Podcast. Sometimes we do these fully connected episodes where it's just Chris and I, no guest. We talk about what we want to talk about and hopefully keep you up to date with some of the things we've heard in relation to news and trends with AI, but also talk through some things that even we're learning and trying to parse our way through on topics related to AI, machine learning, and data science. So I'm here with Chris, my cohost, who is principal AI and autonomy research engineer, and I am Daniel Whitenack, CEO at Prediction Guard. How are you doing, Chris?

Chris:

Doing good. How's it going?

Daniel:

It's going good. It's really interesting that some of the questions you brought up to me over this last week seem to have cropped up in a lot of the conversations that I've been having. So I'm excited to chat through those. But, yeah, getting into summer, I feel like I've had some creative space at the end of this week to think through some interesting things related to my company and work. That's been really good.

Daniel:

And, yeah, coming into the summer with some good energy. What about on your end?

Chris:

Same here. So much has happened both in my own life and just out there in the news. We'll talk about some of that stuff today. The general autonomy space is just exploding in a good way, not exploding in a bad way.

Daniel:

Hey, this is an off-the-cuff question, which I didn't plan on asking, but I was curious as to your take. One of the things that I've sometimes been saying, while also wondering whether it's true, is that this whole world of physical AI is very much a trend that we're seeing this year, something that's kind of going from so much AI being centrally located in people's cloud environments to it being embedded around us in our daily lives. You're much closer to that space than I am and probably following more things in terms of the market and how you see it. How do you think about that physical AI, embedded AI world? I guess what I'm referring to here is AI in retail kiosks and stuff, or on the manufacturing floor, in our glasses, or in other devices, cars, etcetera.

Chris:

I think it's a fascinating space to be in right now. It is so much in its infancy, because there are whole industry segments being developed for every existing industry, and so you have so many organizations out there. In the scheme of things, I see a fairly small sliver, in that I'm in defense and intelligence and national security use cases. But that's only one little tiny place.

Chris:

It's exploding in retail. It's exploding in marketing. Our local Walmart is partnering with a company, which I forget the name of, that's doing drone deliveries. I know that you have the same where you're at, and robots going around. Two years ago, it was surprising to walk into a retail establishment, maybe a restaurant or something, and have a robot go by you. It was a real novel thing.

Chris:

And these days, maybe it's just where we're at or something, and I don't know how widespread it is, but it doesn't faze me at all, and I doubt it fazes you, to see these moving around. So it's coming on really fast, and we have just barely scratched the surface on where that's gonna go. I keep telling friends and family that they still think of it largely as very futuristic, but it's now here. And it's now beyond the vacuums that we have running around our houses.

Chris:

And so I think over the next year or two, you will genuinely see so many offerings. It will be common to go into Walmart or Costco, you can tell where I shop, or lots of retail establishments and have these both built into products and helping customers get the experience that they need. So there are just so many avenues. We have talked on the show to people in autonomous vehicles and things like that, but I think the sheer number of possibilities is the thing that we'll see dramatically change over the next year or two as a lot of companies get into it. So I'm really excited about it.

Chris:

I love looking forward and seeing really great use cases, and I get really inspired as we, like everyone else, go through the AI news that's out there and see some of the cool things that people are thinking of. So I'm very optimistic about it. I'm looking forward to it. I love the work that I do. I don't talk about that very much, especially on the show, but it's just really fascinating.

Chris:

I love doing this kind of work and being part of that. So, back to you.

Daniel:

Well, I appreciate the update on that space. It's something I'm excited about as well, and I know it's developing, but it isn't always what's in people's view in terms of the mainstream stories.

Chris:

It's really democratizing AI for so many people, because, just as a quick follow-up, the models that you're using for these things are much smaller, to be able to fit on hardware, and we've mentioned before that there is a microelectronics revolution going on side by side. The general public, I think, mostly sees AI because that's what the news coverage is, but there's also this massive revolution in microelectronics happening, and the distinctions between different types of computing capabilities are blurring a lot. What's a GPU? What's a CPU? There's a whole array of other names that you can call different types of chips, and those are all kinda merging.

Chris:

And that ability to do things in a smaller context, a low-power context, with smaller models that are designed for very specific use cases is really opening that up. It now means that anybody out there that has an entrepreneurial bent can, without having to invest in massive cloud resources, kind of tinker. They can say, well, I'm gonna go spend maybe a few hundred dollars, buy a few things here and there, download a model, do some work on it, and see if I can make something that nobody else has done, and there are so many opportunities for that. So, to go straight to the learning thing, I really encourage people to explore their passion in the area of their own interest and see what they might be able to do, because this is the moment. We're definitely in the Wild West of physical AI.

Daniel:

Yeah. And you mentioned something which is kind of the topic of what we've been discussing over the past week and going back and forth on, which sometimes corresponds to those smaller models that you were talking about. Sometimes maybe they are large models, but this is a topic which we haven't updated in a while: the state of open, or open weight, or open source models, depending on what you call them, versus closed, or proprietary, or productized third-party models. These are called different things, but we've been going back and forth because the gap in performance between open models and closed models was closing for some time.

Daniel:

And I think there's an open question now: hey, is that still the case? Has that changed? What's been updated there? Maybe just to start out, though, for listeners that aren't as familiar with this space: you might have heard the distinction, but you might not practically understand what it means. And I find this to be continually confusing for many, many people.

Daniel:

And let's just take an example that I often use, which is a DeepSeek model. So let's say that you have a DeepSeek model. It's an LLM or a vision model or whatever, a model that was built or trained by a particular vendor. DeepSeek in this case is the vendor that creates the DeepSeek models, OpenAI creates GPT models, Anthropic creates Claude models, etcetera. That DeepSeek model goes through a sort of offline training process, which is the creation process of the model.

Daniel:

And what pops out the other end of that process is not data, not code, but kind of a combination of both. You have a set of what's called weights or parameters of the model, which parameterize how the model behaves. It's just a set of numbers, essentially. And those numbers are then loaded into code, which runs the model architecture, the structure of the model, how those numbers fit in, how those parameters operate. So you need both that set of parameters and the code to run the model.
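That weights-plus-code split can be illustrated with a toy sketch. Everything here is made up for illustration: real model weights are billions of numbers and the architecture code is far more complex, but the division of labor is the same.

```python
# Toy illustration of the "weights + code" split.
# 1) The "weights": just numbers, the output of a training process.
weights = {"w": [0.5, -1.2, 2.0], "b": 0.1}

# 2) The "code": the architecture, which knows how to use those numbers.
def run_model(params, inputs):
    """Inference: combine inputs with the stored parameters to get an output."""
    total = params["b"]
    for w_i, x_i in zip(params["w"], inputs):
        total += w_i * x_i
    return total

# With both pieces you can map inputs to outputs; with only one, you can't.
output = run_model(weights, [1.0, 0.0, 1.0])
```

Releasing "open weights" means publishing the `weights` part (and usually the architecture code) so anyone can call the equivalent of `run_model` on their own hardware.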

Daniel:

And if you have both of those things, then you can put input in one side and get output out the other side. You put text in one side and generate text out the other side, or whatever your model does. So let's say a vendor like DeepSeek creates a model through a training process, which results in this set of parameters and a set of code that runs those parameters to actually operate the model. Now from there, a few things could happen. That vendor could decide to release the model in a productized, closed way.

Daniel:

So in other words, they could create a SaaS product that you just access over the Internet. You go to a website, deepseek.com or chat.deepseek.com, I forget what their website is. You can create an account. You can log in and interact with a SaaS product that somewhere under the hood calls that code that's parameterized by those parameters and runs the model.

Daniel:

Now it also runs all sorts of other things related to the actual product, but somewhere in there is the model that you're accessing. That's thing one that could happen: the SaaS, productized version of the model. Thing two that could happen is they could release more direct access to the model via an API, but it's still productized access to the model. So you could connect to, whatever it is, api.deepseek.com and say, hey, run this input through the model, and then that API gives you back the output. That's running through the vendor's infrastructure.
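That API-style access typically looks like the sketch below. The endpoint URL, model name, and request shape follow the common OpenAI-style chat convention; the exact details for any given vendor may differ, so treat them as illustrative.

```python
import json

# Illustrative endpoint and model name; check the vendor's docs for the real ones.
API_URL = "https://api.deepseek.com/chat/completions"

def build_chat_request(model: str, user_message: str) -> str:
    """Build the JSON body for an OpenAI-style chat completion call.

    The vendor's servers hold the weights and run the inference code;
    the caller only sees the request go in and the completion come out.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })

body = build_chat_request("deepseek-chat", "Summarize this contract.")
```

You would then POST `body` to `API_URL` with any HTTP client, adding an `Authorization: Bearer <api key>` header; the model itself never leaves the vendor's infrastructure.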

Daniel:

Right? They're running that code on their side with those parameters, and you're getting the output. So all of that is happening on the vendor side, the model builder side. Another thing that could happen is that DeepSeek releases the model weights and the model code in some way publicly, open sources it, or makes the weights open, open weights or parameters. And most of the time that happens on a website called Hugging Face, which is a repository of models.

Daniel:

They could release this under some license such that you could, on your computer or on your server or in your cloud environment, actually spin up the code and run the model yourself in your environment. So that's the next thing that could happen. Then finally, if that happens and DeepSeek releases their model code somehow, other vendors might choose to run that model and offer their own kind of productized access to it. This would be like AWS Bedrock running a DeepSeek model on their servers: you connect through an AWS endpoint in your AWS VPC to access this model, which you are not running manually on servers anywhere; AWS is managing that on your behalf as a managed service. Or this would be like together.ai or whoever's running the model. Right?

Daniel:

So just in summary, there are kind of those possibilities. Now when we talk about an open or a closed model, the open model at some point goes through that process of having the model weights and code opened to the public so that they can run it, sometimes under a permissive license, sometimes a non-permissive license. That kind of opens up this wider range of possibilities of how you can use the model. With a closed model, those weights and that model code, that inference code, would never leave the vendor's infrastructure, on purpose, because they consider that their IP.

Daniel:

So you could connect to their SaaS product and interact with the model, maybe in a no-code way, or you could interact with their REST API to use the model, but that model interaction is always happening kind of behind the curtain in a productized thing. Okay. I feel like I rambled there for a bit, Chris.

Chris:

No, it's a great explanation. I think you nailed it. And I think the relevance of going through that is that we are potentially at a bit of a junction right now.

Chris:

One of the things that has happened in the news that is directly relevant to that explanation is the fact that Meta, the parent company of Facebook and Instagram and WhatsApp, has long been, at least within Western countries like the United States, kind of the champion of open source models, in terms of both the software and the weights being available, so you could download them and run them in your own infrastructure. They were in use by a lot of other developers out there and the organizations they represented. That was one of the go-tos in terms of being able to drive a lot of that. There was a bit of drama in Meta, there's been lots of drama in Meta, actually, over time, but one of the big dramas was several months ago: Yann LeCun, who is famously one of the three godfathers of AI and a major luminary in the field, had spent a decade at Meta doing research.

Chris:

And part of the agreement on him staying at Meta during that time was that the models that they had would be open sourced. That is kind of the underlying reason, as I understand it, why the Llama family has always been open sourced and available. And for a long time, that was the company's position. So a bunch of drama happened a while back, Yann LeCun left Meta, and a lot of change ensued in terms of how Meta was approaching AI.

Chris:

And as part of that, during that time period, especially more recently, Llama has started trailing farther and farther behind other frontier models in terms of performance. It used to be right up there. It wasn't the top, but it was within striking distance. And being open source with a major company behind it was one of the things that allowed anyone to download Llama, so it also provided good learning for other companies that were doing open source. And as we talked about earlier in this show, for a long time, open source frontier models were kind of closing in on those closed source ones.

Chris:

And so what's happened is Meta has basically abandoned Llama. What is already there will remain open source. What kind of updates it gets, who knows, if there are any. But they have turned to a closed source model family now.

Chris:

It's called MuseSpark, and that's where all of their effort will be going forward. So essentially, that puts it back into the same closed source space that we see with OpenAI and Anthropic and the Gemini models from Google, among others. From the Western countries at least, that's a bit of a blow. And so people right now are looking around at what they can do in the open source context, going back to all this physical AI that we were talking about, and how that will affect a lot of the things people wanna do there. And I will finish, without diving into it, by noting that, as someone in the industry I'm in, there is at least a perceived national security interest in having Western-created models that are open source out there, and that are kind of preferred over models from China specifically.

Chris:

Though at this point, if you look at leading open source AI models, China is definitely taking the lead in that space.

Daniel:

By a good ways.

Chris:

By a good ways. And so that comes down to: there is concern in Western countries about potential security issues associated with those models. That's outside my specific expertise, so I don't have any comment or thoughts about that right now, but it presents a bit of a conundrum. So that's kinda where we are right now: what shall open source AI developers do going forward, and where are they going to turn?

Chris:

And also, what will be acceptable to their customers? In this particular country, in the US, you're not gonna create a solution based on an open source model from China and expect to sell it to the US government. It's very unlikely to happen. So it definitely has some fairly big business ramifications in terms of the options available.

Daniel:

Yeah. And if we were to try to define how it is that we measure this gap, are open models as good as closed models? For those out there that aren't familiar with this space, generally that has been done in the past via benchmarks. These are datasets where there's an example input.

Daniel:

There's some expected output. You run your model with the input, see what the model produces as the output, compare the two, and then there is some scoring metric to tell you how well you did. So these are benchmarks: MMLU, SWE-bench for the coding side. There are scoring arenas that are interesting if you look them up, graduate-level reasoning, math benchmarks, all of these sorts of things.
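At its simplest, a benchmark score is exactly that compare-and-count loop. A minimal sketch, with a made-up three-example dataset and a stubbed model standing in for a real one (real benchmarks have thousands of examples and often fancier scoring than exact match):

```python
# Minimal exact-match benchmark scoring; dataset and model are invented.
dataset = [
    {"input": "2 + 2 = ?", "expected": "4"},
    {"input": "Capital of France?", "expected": "Paris"},
    {"input": "5 * 3 = ?", "expected": "15"},
]

def toy_model(prompt: str) -> str:
    """Stub standing in for a real model; it gets one question wrong."""
    answers = {"2 + 2 = ?": "4", "Capital of France?": "Paris", "5 * 3 = ?": "16"}
    return answers[prompt]

def exact_match_score(model, examples) -> float:
    """Fraction of examples where the model's output matches exactly."""
    correct = sum(1 for ex in examples if model(ex["input"]) == ex["expected"])
    return correct / len(examples)

score = exact_match_score(toy_model, dataset)  # 2 of 3 correct
```

The "few percentage points" gap discussed below is a difference in exactly this kind of fraction, computed over much larger datasets.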

Daniel:

And generally, if we look at the more recent closed models, you know, Opus, GPT 5.5, blah, blah, blah, and then the more recent open models, things like the Kimi models and Qwen 3.5, generally there is still a gap: those open models score a few percentage points, maybe a little more, lower on these benchmarks than the state-of-the-art closed source models. I guess my, I don't know if it's a hot take, Chris, but maybe my divisive question would be: who cares about benchmarks, and why does this even matter? Or maybe another way to put it is, it's fun to think about all of these benchmarks, but it has nothing to do with the real world.

Chris:

True. That's a good point. I think there are almost two considerations there. One is, how much do frontier models matter for most people in most use cases? Are many of us using them for our chats and things like that?

Chris:

Yeah. But I think one of the points, and I'm certainly coming at this from that physical AI side, is that it's not these giant frontier models that people are investing in for things like wearables and other small devices, for things that are away from data centers, where you don't have the power, and away from reliable comms and stuff like that. There's a whole world of models that are geared towards specific use cases, and therefore some of these big benchmarks that we're focusing on in the news so often, you know, it comes up: hey, the new model is out, these are the benchmarks, look what it did against the existing incumbent. There's a bit of showmanship there as well. But I will say this.

Chris:

For companies that have been relying on some of these larger open source frontier models, like the position Llama was in for a while, you have a lot of startups that are basing their companies on that access. And if you believe that your company's products and services are dependent upon a frontier model, and you think the gap starts to grow again between closed source frontier and open source, then you have a big risk in your business in terms of hoping that those companies, their APIs, and the capabilities that those companies are choosing not to get into still suit your business. As an example, and I'm not particularly picking on anyone, this happens across all of them, two weeks ago Claude Design came out from Anthropic, and we've had the same with OpenAI, which also has some new image capabilities. So if you've been building a company on their closed source API, you've just kinda gotten eaten if you were trying to provide those specific functions.

Chris:

So there's a lot of risk in building a business entirely and exclusively on somebody else's business that may have an interest in taking over your thing. So I would be very, very cautious myself before diving into that territory.

Daniel:

Yeah. I love where you're going with this, because you've mentioned some things like Claude Design, and we talked about Claude Code in a previous episode, which people can listen to; it was super interesting to get into the innards of that. But I think where I'm at on this whole question is: probably a year ago, I would have really been rooting for the open model to win, and we're gonna get there, and it's gonna be all open models forever. I think now I'm at a point where my mind isn't really thinking about models, and the question of what model you're using is kind of irrelevant.

Daniel:

It's not totally irrelevant, and the reason why is the reasons that you talked about. There are very specific cases where an open model is going to be your only choice, and many cases where it might be preferable, whether licensing-wise or privacy-wise or whatever. So if you're working in an air-gapped environment or something like that, you need an open model. In certain industries, you'll need an open model. For certain latency or large-scale processing cases, you'll want an open model because it's gonna be way, way cheaper.

Daniel:

Right? So there are these cases where the open model clearly wins, but ultimately I think the model is now a complete commodity. And so, to me, it's like you look at another commodity. I'm in the Midwest. We have a lot of soy around us.

Daniel:

Right? For the average person, what does it matter that their dish at a restaurant includes some sort of tofu or soy? Or take another commodity, corn: they don't care whether they got the premium corn or the mid-level corn, for the most part. They care how the dish was prepared, which involves a system of things that has led to the presentation of a great-tasting dish for them, which had way more to do with a bunch of other things than the commodity itself. And I think that's the situation we're in.

Daniel:

The model, although it is a necessary piece of the puzzle, feels like such a small piece of the puzzle at this point. And this is what I keep coming back to on the Methos thing, the Anthropic thing. The cybersecurity world is up in arms about Methos, and I think they should be to some degree, because it's powerful. It's gonna discover all these vulnerabilities. It's gonna totally disturb how we need to do cybersecurity.

Daniel:

And I think my response would be: whether Methos is released or not, there's more than enough AI and agentic harness capability out there to already totally disrupt and transform how you need to do cybersecurity. We're already there. It may have a small bit to do with the model, but I think it has more to do with how people are plugging these models into agentic systems, how those are operated, and the transformational capability of those systems, which, whether it's an open or closed model, is gonna be transformative even if another model is never released.

Chris:

You know, a good illustration of the fact that that's happening right now: even if you look at these big marquee names in the closed source model world that we all follow, yes, Methos came out recently, well, it's not out yet, but Methos is there, and there is a certain amount of news about that. But the place where they're really focusing, and the place where, if you're following AI news, you're seeing it, is how people are imagining new implementations that address business needs with these models. I just talked about Claude Design a second ago, and it's not that they need a whole new model just for that. It's that they're taking existing models and building the infrastructure and workflows around those so that you can productively use those models for the thing that you need.

Chris:

And going back to your point, it kind of points to that commoditization you're talking about when you recognize where the value is. Whatever model you're talking about, whether it's a big frontier model or a much smaller model for a specific purpose, there are models to work from that are out there and that are continually developed. But the value that people are focusing on in 2026, that I'm observing, is in developing the infrastructure and workflow around them, so that instead of it being just a chatbot, it's now a hundred different products and services that they can leverage. And I think that's where the smart thinking is right now, in terms of what can we do with what we have.

Daniel:

Yeah. And I've been really wrestling with this, Chris, because to your point, there are a lot of companies out there, and I've seen it, even some of my friends building companies that are incredibly innovative and get sort of knocked out as one of the model vendors releases X new capability that's built into their stack, or it just becomes easier to do that with some other tool, or to build it yourself with Claude Code or something like that. And so I wanna make sure that we are providing value to the industry. So I've also had to wrestle with this: what is gonna survive in this world? Right?

Daniel:

And what is the value that people could build as they build ventures in this space? Someone I was talking to yesterday made a good point and kind of an illustration for me, which is: go back to the world, Chris, that you and I both went through, of everything as microservices. We went into this world of, okay, now everybody's gonna have microservices. And at first you have four or five, and then you have 10, and then 50, and then 100 microservices, and thousands of microservices. And then, if you're Uber or someone, you just have an innumerable amount of microservices.

Daniel:

There are problems related to the complexity of operating in that environment which are really profound, which is why a product like Datadog or Splunk, that actually ties into all those endpoints and helps you monitor, do root cause analysis, and all those things, works so well. Number one, you would never think of building your own thing like that. And number two, once you adopt that product, it's very, very sticky. There's no way that you can operate hundreds or thousands of microservices without this product. Right?

Daniel:

It's super sticky. So I think if we draw that parallel to the agentic world, a lot of people might be creating their very first agent right now. Like, oh, I created an automation that does X. Right?

Daniel:

And if we look around that automation, there's kind of a harness around it. There are connections to systems. So that agent actually has a number of things in and of itself. It's multiple models, maybe an LLM, embeddings, a reranker. It's a connection to an MCP server, or more than one.

Daniel:

It's API calls. It's some workflow code. It's a user interface that people interact with, and so that's agent one. And then you have to imagine that as these agents proliferate, you have tens of agents. You have hundreds of agents.
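The harness being described, a model plus tool connections plus workflow code, can be sketched as a loop. Everything here is a stub: `stub_model` stands in for an LLM call, and `TOOLS` stands in for MCP servers or API connections. A real harness would swap in real calls but keep the same shape.

```python
# Minimal agent-harness sketch: model + tools + a bounded workflow loop.
def stub_model(prompt):
    """Stand-in for an LLM: first asks for a tool, then answers from the result."""
    if "result:" not in prompt:
        return {"tool": "add", "args": [2, 3]}          # request a tool call
    return {"answer": prompt.split("result:")[1].strip()}  # final answer

# Stand-in for MCP servers / API connections the agent can reach.
TOOLS = {"add": lambda a, b: a + b}

def run_agent(task, model):
    """Drive the model in a loop, executing tool calls until it answers."""
    prompt = task
    for _ in range(5):  # cap the loop so a confused model can't run forever
        step = model(prompt)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](*step["args"])
        prompt = f"{task}\nresult: {result}"
    return None

answer = run_agent("What is 2 + 3?", stub_model)
```

Multiply this harness by hundreds or thousands of agents, each with its own tools and loops, and the monitoring and governance problems discussed next start to look a lot like the microservices story.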

Daniel:

You have thousands of agents that are all operating within your operational environment, your enterprise. And if you want to do that and manage that complexity, I think those are some of the problems that are really going to be high value in this space. We're thinking about some of those as related to governance and policy enforcement and monitoring, but there are innumerable other problems there: how do you manage all those MCP servers? How do you handle agent-to-agent communication? How do you do goal tracking?

Daniel:

You know, all of those sorts of things. And if you put it in that context, yes, there are models operating in that environment, but they're operating as this embedded thing in such a distributed system that, yes, they have influence, but they have influence in a similar way to a dependency in a software product. Right? Yes, certainly a dependency in a software product has an impact and should be tracked and might have vulnerabilities with it, etcetera, etcetera.

Daniel:

But the overall project is much more important than that individual dependency, and that individual dependency could be swapped out for any number of things in reality.

Chris:

I'd like to expand a little bit on that last point that you made there, and that is the ability to swap it out. I think that is one of the things that we're seeing now. You know, we've talked about 2026 being, you know, with Claude Code and others, Codex from OpenAI, and then as well as even open source models that are available for coding, and there's some really good ones still. One of the things that I'm seeing is the ability to kinda get through some of the crud work or make your systems more dynamic, because you can take major components and redo interfaces and stuff like that. I've seen value added there, but it really calls out the fact that while that has changed in terms of how you might execute on something, you still have to produce value for what you're trying to accomplish, what your organization is trying to accomplish.

Chris:

And I think while some of the tools that we have now are speeding things up, it still takes getting maybe a novel idea, or maybe just an iteratively better idea than what's already out there, and being able to bring that to fruition, maybe a little bit quicker these days than we used to. Something that might have taken months might only take a few days or weeks at this point with the new tooling, and I've been having a good time exploring that. But, yeah, I mean, to your point, some things are changing, but a lot of things are still the same, and a lot of those fundamentals still apply. And I think early in the year, everybody was wondering, things were moving so fast. As Opus came out and kind of, you know, changed the way people were thinking about coding and producing products with whatever models they were using, open source or closed source, I think the conclusion I've reached as we hit May at this point is that not everything has changed. We got some cool new things to play with, but at the end of the day, you still have to work on something novel.

Chris:

I know, without going into detail, the thing I'm working on is not something that you could just go and prompt Claude Code to generate in a couple of prompts. There are still some novel ideas in it that will change the business that it's trying to change. So the tooling has accelerated, but it hasn't really changed that fundamental, and as I've worked more and more on this, that's really been drilled into me.

Daniel:

Yeah. Yeah. So maybe a good way to put it for people, kind of coming down to this, is maybe we should be talking not so much about the open versus closed model gap, but about the development of this sort of agentic workforce and where the value lies. Now I do like to acknowledge where it matters, as you did. There are clear cases where open models clearly win: I think in terms of scale economics, when you have really high-volume workloads, in places where you need data sovereignty or control, or some sort of infrastructure alignment, like air-gapped scenarios. And there are places, I think, where closed models make more sense, especially where you don't have some of those infrastructure constraints, maybe it's not a high trust environment in any way. They certainly offer products.

Daniel:

Right? These are closed products. And in the same way as for any other closed product or managed service, you get a high level of reliability. You get SLAs, etcetera, around things that make them highly reliable. Right?

Daniel:

And there are certainly trade-offs with that, but it's worth acknowledging. But then I think you look at really the set of other things that happen in agents, you know, RAG and automation and MCP tool calling and agent-to-agent communication and code generation. All of those things can reasonably be done with a whole host of models, depending on the kind of agent harnesses around them.

Chris:

Yeah. I think that is the focus. If we could steer people in the right direction to be highly productive and not get caught up in the hype as we kinda wind up here, it would be to focus on what your business and what your needs are around these agentic harnesses and how they can solve your business problems in novel ways, maybe iterative ways people haven't thought about, and worry a little bit less about the AI hype out there, even though I'm guilty of opening it up talking about Llama.

Daniel:

Well, it's good. It sparked a good conversation. It's good to chat about it, Chris. And, you know, who knows? Maybe we'll be wrong and Mythos will release, and then we're back next week talking about how models matter so much and how could we have ever thought otherwise?

Daniel:

I don't think we'll be there, but who knows?

Chris:

But it's always fun. Things are moving so fast, and I hope folks are as inspired as we are about finding these new tools and new capabilities and going and doing something really cool with them. Let us know on social media. We're out there on all the usual platforms. So looking forward to hearing from you. I'm on Blue Sky quite a lot, LinkedIn as well.

Chris:

And I'm really enjoying engaging in conversations to find out what people are doing, so share.

Daniel:

Yeah. Yeah. And also a reminder for folks: we have set a date for the Midwest AI Summit, which Chris and I were both at last year, and which at least I will be at in the fall in Indianapolis. It's October 15. And if you just search Midwest AI Summit or go to midwestaisummit.com, there's some early bird pricing right now.

Daniel:

And, yeah, we'd love to see you in person as well. It's gonna be a great set of practical discussions happening there.

Chris:

Last year was a lot of fun. I encourage people to show up.

Daniel:

Alright. Talk to you soon, Chris.

Chris:

Take care. Bye bye.

Narrator:

Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Narrator:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the Beats and to you for listening. That's all for now, but you'll hear from us again next week.