What if we could? A podcast exploring this question that drives us.


Episode Overview:

In this fascinating episode of the "What If We Could" podcast, host David DeVore and the MeshMesh team - Bob, Calvin, and Kevin - delve into the dynamic world of artificial intelligence and its evolving impact on technology and society.

Highlights:
  • AI's Energy Consumption: A deep dive into a Time Magazine article predicting AI trends for 2024, discussing the significant energy demands of AI data centers, and contrasting it with the environmental impact of Bitcoin.
  • The Trough of Disillusionment: An exploration of the expected downturn in AI and blockchain enthusiasm, followed by potential mainstream adoption.
  • Hyper-Automation in Business: Insights into how businesses leveraging AI can achieve significant reductions in time and costs, setting the stage for advancements in 2025.
  • Computational Ownership: A debate on who holds the reins in computing power, focusing on the roles of Nvidia and other key players in surviving the trough of disillusionment.
  • Evolution of Language Models: A discussion on the advancements in large language models, open-source developments, and their implications for future technology.
  • Democratizing AI: Examining the release of serverless models by Pinecone and its impact on making AI more accessible, including the role of retrieval augmented generation (RAG) in transforming data management.
  • Breakthroughs in Image Generation: Unveiling the potential of stream diffusion and its revolutionary impact on real-time image generation and its applications.
  • AI in Customer Service: Analyzing the role of AI in automating customer service and sales processes, and the emotional intelligence integrated into AI interactions.
  • AI, Vector Databases, and Blockchain: Discussing the intersection of AI with vector databases and blockchain technology, and its implications for businesses and legal frameworks.
  • AI and Copyright Laws: A thought-provoking conversation on the complex legal and ethical aspects of AI in the context of copyright laws.
Join us for an episode packed with expert insights, where we explore the latest trends, challenges, and breakthroughs in AI, offering a glimpse into what the future holds in this rapidly advancing field.

What is "What If We Could?"

"What If We Could?" A podcast exploring this question that drives us. We explore the practical application of artificial intelligence, product design, blockchain & AR/VR, and tech alpha in the service of humans.

David DeVore (00:02.244)
Hello, what's going on? This is David DeVore with MeshMesh. This is the What If We Could Show, where every week we ask that burning question, what if we could, and we start to break down assorted topics around artificial intelligence and tech alpha. And so here this week, we've got all four MeshMesh partners. We've got Bob, we got Calvin, we got Kevin, and we got myself. And so real quickly, just to kick it off, what we're gonna do is...

take a spin through things that we found interesting across the week. And I'll just start, I mean, I found this article from Time Magazine and it's what to expect from AI in 2024. Time always does great work. One of the first things that they call out is really electricity-hungry data centers. It's funny, I find it somewhat...

hilarious that Bitcoin always got this massive, you know, bad name, like Bitcoin is just burning the environment to the ground. But Bitcoin is really a fraction of what it takes to run AI. I heard that it took the equivalent of...

David DeVore (01:26.841)
multiple bottles of water were needed to run one prompt through OpenAI, or something like that. So I found it striking that data centers are roughly 1% of the world's electricity usage right now, and about 20% of that is actually being used for AI. So I find all of that fascinating. The other thing that they call out is that

there is a coming trough of disillusionment, which we know; we've talked about it. We've seen it in blockchain and web3. It'll be interesting how this hits in 2024 and what it looks like. And I think that right now there's a sentiment out there that AI is like a magic pill.

Right? And, you know, at some point people are going to come down to earth and realize, yeah, it's good for some things, not good for other things. It won't solve, you know, world hunger tomorrow. And the nice thing about the trough of disillusionment is that on the other side, mainstream adoption starts. So we hope that you all will be around with us when that comes along.


Kevin Nuest (02:51.134)
Yeah, that's also an interesting one too, right? The companies that are taking action and building the muscle of hyper-automation with AI are going to compound 26, you know, 40, 60% reductions in time and cost in the things that they're doing, opening up way more things that they'll be able to chase in 2025. That'll end up being the quiet sleeper, right? The organizations that just got started and did that. So it'll be interesting to see how the trough plays out,

and the haves and the have-nots, in January 2025.

David DeVore (03:25.348)
Yeah, really well said. And then the last thing: who owns the compute? Yeah, and you know, Nvidia, chip makers. I mean, we know this, right? The correct word is not protocol, but really the layer zeros and layer ones are the ones that are going to survive the trough of disillusionment for sure and thrive on the other side.

Calvin Hoenes (03:26.92)
Yeah, who owns the compute?

Yeah.

David DeVore (03:55.404)
The last thing that they point out is bigger and bigger models. We've already seen this starting to take shape, and it's going to be really interesting. In terms of the science around large language models and so forth, we're just getting started. It'll be really interesting to see how open source starts to take hold and so forth.

David DeVore (04:24.52)
Yeah, so the first one on my list I was excited about was, you know, really the predictions for 2024, looking into the crystal ball.

Kevin Nuest (04:32.958)
Yeah. And the other thing that'll shake out of the large language models getting larger, and new ones coming, we're seeing it now: they are creating grants to help find the places where these can be applied, right? So it'll be a little bit of a slower burn, but the positive effects are coming out of that. There'll be a lot of people chasing a lot of great use cases in the next 24 months.

Bob Ullery (05:00.946)
Yeah, in other news, I think what I'm excited about too is just democratization, more accessibility towards AI for devs and companies and organizations. An example of that is Pinecone releasing their serverless model this week. And at face value, that's nothing necessarily new, but I do think it aids in the sense of helping people access these tools faster. Newer devs may not have deep expertise in

RAG, retrieval-augmented generation that is. They still have a bit of a barrier thinking about, like, how do I configure a node for vectorization and embeddings on Pinecone? What is the size? How much is this going to cost? Hard to forecast. Serverless is nice because you can get down to unit economics, test it out. You kind of know what a call might cost you. It's easier to forecast out.
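Bob's point about serverless unit economics can be sketched in a few lines. The prices and usage numbers below are invented placeholders for illustration, not Pinecone's actual rates:

```python
# Toy per-call cost forecast for a serverless vector database.
# READ_UNIT_PRICE and WRITE_UNIT_PRICE are assumed, made-up rates.
READ_UNIT_PRICE = 0.000004   # assumed $ per read unit
WRITE_UNIT_PRICE = 0.000002  # assumed $ per write unit

def query_cost(read_units_per_query: int, queries: int) -> float:
    """Forecast query spend from per-unit pricing."""
    return read_units_per_query * queries * READ_UNIT_PRICE

def upsert_cost(write_units: int) -> float:
    """Forecast one-time ingestion spend."""
    return write_units * WRITE_UNIT_PRICE

# e.g. 10 read units per query, 100k queries a month
monthly_query_spend = query_cost(10, 100_000)
```

The point is not the specific numbers but that cost scales linearly with calls, which is what makes a serverless model easy to forecast compared to sizing a dedicated node up front.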

And then just in general, I think with their announcement, they released some really interesting information too that just sort of underscores the importance of RAG, retrieval-augmented generation, right? And the notion of RAG is taking your data, embedding it into a vector database, and your agent is able to sort of traverse a different model of data, right? We sort of grew up on 2D

data structures, tables, right? Rows and columns, two dimensions, right? Up and down, left and right. And here comes vector, which is many, many dimensions. So this notion of storing data in a big cloud and being able to sort of traverse that and find things that are neighboring and similar. Back to the Pinecone announcement, a really great chart that they included with their post is just how resilient and how faithful results are

given RAG tied to different models. So they compared GPT-3.5 Turbo, GPT-4, Llama 2 70B, and Mixtral 8x7B; Llama and Mixtral being the open-source models that are most prevalent in the space today, GPT-3.5 and 4 being OpenAI's. Turns out, with well-architected vector databases and RAG retrieval on these agents,

Bob Ullery (07:21.522)
GPT-3.5 Turbo is at parity with GPT-4. Mixtral 8x7B is at parity with both of those, and all three outperform Llama 2 70B. They're all above their benchmarks without RAG. With RAG, the exciting part of this, for me anyways, is that it's another proof point that even lower-size models like 3.5 Turbo can outperform the big models. And it's also really exciting

around the open-source side specifically: Mixtral 8x7B, being significantly smaller than Llama 2 70B, outperforms it, and is on parity with the high-horsepower, also high-cost, hosted versions coming from OpenAI. So I think this opens up a huge world of applications in terms of costing for orgs that are looking to implement.
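The vector similarity search underneath RAG, as Bob describes it, can be sketched minimally. The 3-dimensional embeddings below are invented for illustration; real embedding models emit hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, corpus):
    """Return the corpus key whose embedding is most similar to the query."""
    return max(corpus, key=lambda key: cosine_similarity(query, corpus[key]))

# Toy corpus: each item has a made-up 3-d embedding.
corpus = {
    "alternator": [0.9, 0.1, 0.0],
    "teapot":     [0.0, 0.2, 0.9],
}

# A query embedding "near" the alternator retrieves the alternator.
match = nearest([0.8, 0.2, 0.1], corpus)
```

A production vector database does this over millions of vectors with approximate nearest-neighbor indexes rather than a brute-force scan, but the retrieval step an agent performs is conceptually this lookup.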

David DeVore (08:12.164)
I think the kids are blowing up Roblox.

Calvin Hoenes (08:18.049)
Yeah, that's exciting. No, that's all good. Yeah, that's very exciting. I mean, it's fascinating to see how the front on the vector database side keeps on pushing. And it's interesting because I think they're also really aiming at cost reduction. They've been handing out more free credits now, and I think getting people on board. It's interesting because vector embedding is still such a hard topic to grasp. Like you said, going from tables to vectors and

Bob Ullery (08:18.902)
Yeah.

Calvin Hoenes (08:46.329)
changing your thinking, unless you've played with similarity search and you're like a Netflix or so, you know, it's kind of a new world for you to dive into. And it's great to see, you know, how they're trying to push adoption on that end. And speaking of, sort of, you know, the cost-saving adoption, another thing that I stumbled on this week is StreamDiffusion. So more in sort of the image generation camp.

I think we all know, you know, it's kind of hard when you use image generation models like Stable Diffusion, right? It just takes a very, very long time. And suddenly, with new algorithms like LCM, we started seeing real-time things. And then up pops this thing on my list, which is called StreamDiffusion. And StreamDiffusion claims, and this is absolutely insane if you look at it, they claim that

using an RTX 4090 and a Stable Diffusion Turbo model, they can get text-to-image at 106 frames per second. So 106 frames per second image generation. And we've already seen tons of things in the space, like, you know, you can see a lot of TouchDesigner artists coming out of the woodwork and using this for real-time webcam, you know,

artistry. But you can also think of it as real-time conversion, right? Like in a security space, if you think of, you know, you can be real-time live on a stream and you can look like Biden, going back to what happened with the audio recording that, you know, went through the country this week. And if you think of all the possibilities of what you can do now in terms of absolute real-time transformation, because with 106 frames per second for text-to-image and 93 with image-to-image, this is now actual

real time without distinction. So I think that's a pretty intense development that they've put out with StreamDiffusion, and I'm excited to see where that goes from here.
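For a sense of what those throughput claims imply, the per-frame latency budget is simple arithmetic:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given throughput."""
    return 1000.0 / fps

# Figures quoted in the conversation (StreamDiffusion's claimed numbers):
txt2img_budget = frame_budget_ms(106)  # under 10 ms per generated frame
img2img_budget = frame_budget_ms(93)   # a little under 11 ms per frame
```

Anything under roughly 33 ms per frame already reads as smooth 30 fps video to the eye, which is why single-digit-millisecond generation makes "real time without distinction" plausible.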

Bob Ullery (10:47.494)
Yeah, me too. Like, yeah, it's going to get...

David DeVore (10:47.874)
Right in time for election season.

Calvin Hoenes (10:51.353)
Yeah, right in time for election season. That's right.

Bob Ullery (10:54.582)
It's gonna get weird. The nefarious stuff aside, though, it's really interesting in the sense of scale, right? So maybe you're not doing 106 frames per second to create video, but you're serving 106 simultaneous consumers on a consumer app for generation. I think that's really interesting. You can go lower-grade hardware, lower costs, more accessibility, more outreach. And then the other side, immersion, is where my mind goes. So think of, there's another great startup called Blockade Labs.

Calvin Hoenes (11:10.521)
Great point.

Bob Ullery (11:24.286)
They've been around for a little while, but they do generation of 3D panoramas, right? So seamless stitching is kind of their ace, right? As those wide pano generative images come around in a 3D panorama, they connect seamlessly, right? And so you think about what you could do with that, right? AR, VR, and not just a one-time generation of an environment, but that environment changing as you're doing stuff in it. So sort of a new take on:

what are the primitives required to create a really immersive experience? It used to be 3D objects, right? High-horsepower renders, so it felt real in 3D space. But now you can sort of trick the brain, I think, even more cost-effectively by having that environment, sort of the outside container, constantly evolving as you move through 3D space or whatever the experience might be. So that's also really exciting.

Calvin Hoenes (12:21.254)
That's a great aspect.

Kevin Nuest (12:21.342)
Absolutely. Yeah. You know, I came across another tool this week too, in a similar vein: Air AI. What they're going after is customer service automation, I think phone calls, right? And they have this little demo up on their site. It's an imaginary demo of selling, going back to AR/VR, in the demo they're featuring selling the new Apple headset, right? And as you can

imagine, a device like that, that's very expensive, it can be helpful to have someone answering questions throughout that decisioning process to spend, what was it, $3,000, and before you pull the trigger on that, help objection-handle, help see the value in it. And so this demo goes back and forth where this, you know, synthetic audio voice is on a call, and there's a real person on the other side.

It's asking them questions about, you know, why didn't they purchase, you know, pull the trigger on the page when they were viewing it online, and then responding to their objections. And so you could feel a bit of the, if you've gone back and forth with ChatGPT, for example, especially the audio version of talking to ChatGPT and having it talk back to you, you start to feel a bit of a similar back and forth, but with a really tight cadence, like you would want someone to respond back to you

on a call, and not a giant robust answer like you might get out of the text form of ChatGPT. And so, you know, it was feeling pretty honed in that way. And it was interesting to hear it go back and forth and think about the different applications in customer service and the sales funnel process it could be applied to. And then I also put myself in the seat of: wait, what if I received one of these calls, right? Would I be upset about that? You know, this robot is calling me and I have, you know,

another thing to deal with, right? And then I started thinking about it. The real person takes this tone a bit in the demo, I think for the demo, to show that it overcomes that person's disinterest and objection at the beginning. But I also started thinking, what if I actually appreciate just talking to a robot, and knowing that I'm talking to a robot, but having a productive conversation, right? Not the

Kevin Nuest (14:41.838)
calling in to, okay, dial, you know, press one, or say your phone number, that we've all done in dial-in menus. And then I thought, you know, it would be nice to know that I could drop, and this is me now putting my introvert hat on at the end of a day, burnt out, having used up all my extrovert, I could drop the performance that you do in a conversation, right? And the cordialness. Not to be necessarily rude, but just knowing that this other

"person," air quotes, on the other end of this conversation isn't gonna take offense that I'm not performing in that way for a relationship conversation, but at the same time I can move forward in the discovery of whatever product or service it wants to talk about, and I wanted to talk about, because I sought it out. So I thought it was really interesting. I think, without a doubt, we're gonna see a proliferation of this supply into sales funnels and customer service. I'm excited to...

get past those really bad automated customer service experiences though. So let the robots come.

Calvin Hoenes (15:47.169)
And it's crazy. Yeah, it's crazy when you look at their demo. And I still really want to actually use the tool. I tried to sign up, and I haven't gotten in. I really hope that they actually deliver as they promise. Because in their demo, they included so many things that we've seen in research papers recently, but not in full-blown action mode, specifically emotional inference, which I find is

super well played in that demo, right? You can sense that they've managed to get emotional tone into the conversation. And then the other part is really, obviously, back to what we said before, real time, right? The fact that there's not a 10-second delay, but a, you know, constructed pause of a second or two, and then it comes in, but it feels human. And it's so fascinating that just these two things unlock

David DeVore (16:18.169)
Hmm.

Calvin Hoenes (16:43.809)
you know, that universe. And like you said, right, would you feel comfortable talking to it? Probably, you know. And it also asks a lot of interesting questions for sales, back to what you said before, Dave, right, or I think Bob: what does it look like when the second conversation comes, right? When we get into, going back to RAG and vector embeddings, how does it go from conversation one to three, four, five, six, ten, twenty?

David DeVore (17:14.053)
Yeah. Yeah, I'm really excited. I mean, the combination of those technologies, but especially vector, right? I mean, there are so many businesses that are really outdated. You know, think of shopping for a car part on this, like, website that was built in, you know,

1999 or whatever, and you just search and filter and can't find this part, and it doesn't match what you need and whatnot. It's just the worst experience. And so you imagine taking all those parts and putting them into a vector database, and now all of a sudden, the whole experience around finding something that you need, or making a complex problem come together, it just cracks it wide open. So, you know, and then you put

a person in front of it, you know, hey, I've got a 1994 Toyota Corolla and I need an alternator, you know, and off to the races you go. So that, and they'll never give you any attitude. Like, that's the best part of the whole thing, 'cause there's no bad day for the AI, they're always cheerful.

Calvin Hoenes (18:22.521)
Ha ha

Bob Ullery (18:25.534)
But maybe not. I saw an article yesterday: they trained an AI to be evil, and it turns out they couldn't get it to be not evil once it decided to be evil. So it's just, yeah, it's on that feedback loop. I wouldn't commit everything in the conversation to vector, because who knows?

Calvin Hoenes (18:26.149)
Hehehehe

Kevin Nuest (18:36.391)
Hehehe

David DeVore (18:36.526)
It's always evil.

Calvin Hoenes (18:39.249)
That's it.

Calvin Hoenes (18:45.449)
That's a great point. I want to touch on one thing that you said there, Dave, which is interesting, because you said those 1990s websites, right, they still don't have a new look to them and they're hard to traverse. And we've had similarity search by way of vector, like what I said with Netflix before. What has changed now with the wave of AI is that these things have become, like with the Pinecone announcement, a million times more accessible and a million times more cost-effective.

Kevin Nuest (19:11.49)
Mm-hmm.

Calvin Hoenes (19:13.281)
Right. So now these things are accessible to the broader masses. And like I said, we're not just talking AI LLM use cases; just the fact of storing your data in a multidimensional space has become, well, is becoming more commonplace now.

David DeVore (19:28.852)
Absolutely. Yeah, so that way the, you know, the dude who owns the shop can afford it. He might not quite know how to get there, but at least it'll be more affordable.

Calvin Hoenes (19:38.125)
Yeah, that's right.

Bob Ullery (19:47.442)
Yeah, and that aligns really well with the notion of autonomy, and what are the things that AI can replicate very well as compared to human counterparts, right? And to break it down to a word, I think that word is inference, right? I think the special sauce for humans

since the dawn of work has been: can you take two dissimilar concepts, or two unconnected things, and can you infer what they have in common, or how they're dissimilar, why they're important to think about together? And to your example of the 1990s catalytic converter catalog: we were always relegated to technology that required connections between objects, right? This alternator goes with

this car; this person owns that car. Here's our lineage, right? It could never infer the car owner to the catalytic converter without knowing that connection to the car as a bridge, right? And now we can, and AI is able to infer those gaps without the connections being pre-established. I think that is the ultimate unlock here.

David DeVore (21:08.624)
It's fun.

Calvin Hoenes (21:13.897)
Awesome. What else do you guys have in terms of finds this week that excited you?

David DeVore (21:26.828)
You know, I mean, one of the things that we've talked about extensively is autonomous systems, right? And how do you make autonomous systems improve themselves, right? So I think Meta came out with, you know, basically the self-rewarding language model, on PyTorch. It's on GitHub.

And, you know, Bob, I'd love to hear your thoughts on that, because it's fascinating when you think about, like, how do we reward models for doing the right thing, you know, and self-improving, right?

Bob Ullery (22:15.538)
Yeah, so Meta actually wrote the paper on an approach for self-rewarding language models. And then, of course, the nerds pick it up right away. That seems to be the theme in the space, right? A new paper comes out: who can implement a framework around it, or at least an example of it? And that's what we're looking at here. This is an implementation based off Meta's DPO paper around the concept. And in general,

one thing that's really interesting about this is just how simple it is to get going. It's all Python-based. You need CUDA hardware, not a big deal; most GPUs should work there. And then in terms of actually doing it, there's not much to it. They're essentially doing trials, benchmarking against anticipated outcomes,

and then using that to create training data, creating a flywheel around fine-tuning based on basically large-scale tests. And then PyTorch just makes that sing as they're building a neural network of training against the LLM runs. So just a really cool, novel approach. And the thing I like most about it is, again, how accessible it is to use. It seems like a lot of the benchmarking, a lot of the internal

optimization techniques have been guarded, kept behind closed doors as industry trade secrets and special sauce. And here we are seeing a rapid flurry of drops around how to fine-tune and train in much more interesting ways than just using hosted solutions to do it.
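The trial-benchmark-fine-tune flywheel Bob describes can be sketched as a loop of stand-in functions. None of this is the actual implementation; every function here is a placeholder, and it only shows the shape of the self-rewarding cycle (generate candidates, self-score, keep the best and worst as a preference pair, fine-tune on the pairs):

```python
import random

def generate(prompt, n=4):
    """Stand-in for the model sampling n candidate responses."""
    return [f"{prompt}-draft{i}" for i in range(n)]

def self_score(response):
    """Stand-in for the model judging its own output (LLM-as-a-judge)."""
    return len(response) + random.random()

def build_preference_pair(prompt):
    """Rank candidates by self-assigned score; best vs. worst becomes training data."""
    ranked = sorted(generate(prompt), key=self_score, reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

def fine_tune(pairs):
    """Stand-in for a preference-optimization update on the collected pairs."""
    return f"model-tuned-on-{len(pairs)}-pairs"

# One turn of the flywheel over a small batch of prompts.
pairs = [build_preference_pair(p) for p in ["fix bug", "write docs"]]
model = fine_tune(pairs)
```

In the real approach, the "judge" and the "generator" are the same LLM, and each fine-tuned model generates and scores the next round's data, which is what makes it a flywheel rather than a one-shot fine-tune.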

David DeVore (24:00.332)
When I first saw it, the first thing that came to mind was tokenized economies, right? And, I don't know, you know, we've talked a little bit about blockchain and where blockchain and AI meet as technologies, but I'm interested in, like, am I imagining that there's something here? Like, is there a tie-in for

real rewards in the form of tokens for completing tasks. And I'm thinking not just training models, I'm thinking real world implementation, right? So complete the task, I'll pay you a fraction of a Bitcoin, fraction of ETH, and then over time that actually improves the model.

Bob Ullery (25:01.854)
Yeah, I mean, people always seem to do a better job when they get paid to do it. We'll start there. It's called incentive. And then from the baseline here, you know, rewards being kind of a catch-all for model training, right? It's a positive outcome, right? And that's what the model training seeks, obviously: success.

Calvin Hoenes (25:01.897)
I think that's a fascinating use case. Go ahead.

Kevin Nuest (25:11.234)
Imagine that.

Bob Ullery (25:31.498)
Failures are, I think, just as valuable as the successes. It'd be interesting, in your concept here, to not just pay for the successes, but also pay for the failures.

David DeVore (25:43.328)
Yeah, and...

David DeVore (25:47.504)
And I think the opportunity around, well, you know, I think we're probably moving more and more into, you know, world economies, right? And microtasking, like Mechanical Turk-type tasking, both for people, but also potentially between machines. And we've sort of talked about this before: a

machine is not going to pay another machine with a credit card, right? A machine really needs digital currency to transact with another digital machine, right? And so it's going to be interesting to see how that starts to play out, as different models and different AIs need to work with each other, and with people,

to complete tasks. And we've seen that ourselves, where we're coming up against tasking that AI just can't do, right? And so in our workflow, we then assign it to a person. Of course, those people are on payroll, but you could imagine easily assigning it to somebody who's not on payroll via a crypto bounty or crypto reward.

So it'll be really interesting to start to see how these two technologies come together in that way.

Kevin Nuest (27:21.01)
Yeah, for sure. The other thing the self-rewarding paper and, you know, first implementation here made me think of: it's coming off the back of the end of last year, when in 2023 people were starting to complain about ChatGPT getting kind of lazy, and doing some prompt engineering to try to incentivize ChatGPT to give them answers, or give them better answers, right? And so they're paying

fake credits, in the form of "I'll give you $100 if you give me the right answer here," to ChatGPT, to do some prompt engineering. And I thought that was funny. Obviously the self-rewarding training model is way upstream of that; talk about prompt engineering at the last mile with that example. But it came up last night. I was trying to get something from ChatGPT. I was being lazy and broad with my ask, and just hoping it would infer

what I was going after and go out to the internet and get it for me. And it didn't; it tried and it said it couldn't. And I said, what if I give you a cookie? So it tried again, and it still couldn't. And then I said, no cookie for you. And it said, I'm sorry, but also I don't need cookies. It said that in a much more robust way: yeah, I have no need for cookies as an AI, with a lot more words to say that. And I was like, all right, we're done here. This is...

David DeVore (28:35.152)
I'll need to get...

Calvin Hoenes (28:37.651)
Yeah, that's funny. It kind of reminds me, did you see that KitKat advertising that came out two days ago? It was an ad campaign by KitKat where they were saying, you know, hey, if you want to have GPT perform better, right, then ask it to have a break, and then ask it the question. And they did this whole marketing campaign about have a break first and then do it. You know, maybe you give the AI a bit of a pause.

But it's really straight out of, you know, sort of Douglas Adams, like The Hitchhiker's Guide to the Galaxy. You have an AI that just says, nah, like, what can you offer me that would make me do it? You know, I don't need to eat, but, you know, can you think of something else? Find me some extra compute, you know, go dig up some GPUs.

David DeVore (29:29.588)
Yeah. Right. Yeah. So that's why I say tokenized GPUs.

Bob Ullery (29:34.762)
The answer is 42. 42? What kind of answer is that? Look, I thought really hard about this.

Kevin Nuest (29:42.894)
I mean, what kind of prompt engineering is gonna come out of this? What will Super Bowl ads look like this year? Just prompt engineering, like: put this into your ChatGPT and see what comes back out. Post it here.

Calvin Hoenes (29:55.85)
Yeah, good luck kind of trying to convince your AGI. It's like an unfair battle between the two, you know.

Kevin Nuest (30:06.802)
Yeah, I mean, speaking of the alignment problem: OpenAI has announced a crowdsourcing team for governance input to its models. And so I feel this is really interesting. It feels like a really smart move by them to say, hey, we are going to get the public involved in this alignment issue and in creating AI for all.

Calvin Hoenes (30:10.231)
Yeah.

Kevin Nuest (30:36.79)
Of course, I want that, they want that. At the same time, they're taking, like, a ton of heat right now. And you've seen Sam Altman do the global circuit, right? Was it Davos the other week? He is doing everything he can to reassure literally everyone on planet Earth that they are benevolent.

Calvin Hoenes (30:57.505)
I mean, they have a potential antitrust coming up, right? So they better do that.

Kevin Nuest (31:02.366)
Yeah. I mean, when you go that hard, and you're just around the world stumping for your benevolence, there's definitely a lot of heat directed at them. I like it. And, you know, it sounds like funding is coming out of this, that they're looking to others to help bring ideas to the table for governance. It does remind me of both the pros and cons of, you know, what we've experienced in the web3 DAO world

and shared governance. It's not exactly the same as what they're doing here, right? But, you know, there are downsides of trying to coordinate a lot of people together, and there are upsides where you get ideas and concepts and proposals out of nowhere that you would not have been exposed to, that are sometimes really robust and very thoughtful. So, you know, hopefully this brings some really interesting things to the table that they otherwise wouldn't be talking about inside of the global four walls of OpenAI,

or maybe reinforces some lesser-known proposals inside of their teams that then get more interest and a closer look. So it remains to be seen what comes out of this, but it's very politically interesting as well.

David DeVore (32:19.052)
Yeah, so the title of the podcast is, you know, what the fuck is Nightshade? So what is Nightshade? Let's unpack this thing.

Bob Ullery (32:31.626)
Yeah, I came across Nightshade earlier in the week. It came out of a team at the University of Chicago. It's their second foray into tooling to essentially fight back against unauthorized use of work, specifically graphical work used in model training. So think LoRA models, think Stable Diffusion checkpoints, and so on. And it's off the back of

another tool they had created called Glaze. Nightshade is a processing engine that puts an essentially invisible marker over top of an image, through various shades of gray, as I understand it. I looked at some examples from the paper, and to my eyes, I could not tell the difference between a Nightshade-affected image and the original. But when it goes through model training,

the models are looking at very, very tight packs of pixels to create the connections within the model itself. So the model sees these human-invisible, manipulated pixels and doesn't know what to do with them, insofar as the Nightshade layer will encode other images invisibly, right? So you think you're training on a bicycle, but it's actually a teapot. So it's more of a...

a nuisance creator for folks who might try to use images protected by Nightshade in their models. The question I have for everybody is: where do we think this is valuable, and where is it not? I posed the question on Twitter when I came across it and got a lot of mixed reactions. I feel like it depends on the use case you have in mind.
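The poisoning mechanism Bob describes, an imperceptible pixel-level change that a human can't see but a training pipeline ingests, can be sketched as a toy example. This is not the actual Nightshade algorithm (which optimizes feature-space similarity against a surrogate extractor); the `poison_image` function, the epsilon budget, and the stand-in images here are all illustrative assumptions.

```python
import numpy as np

def poison_image(image: np.ndarray, decoy: np.ndarray,
                 epsilon: float = 2.0 / 255.0, steps: int = 100) -> np.ndarray:
    """Toy data-poisoning sketch: nudge `image` toward `decoy` in pixel
    space while keeping the change imperceptible (L-infinity bounded).
    Real Nightshade perturbs feature-space similarity instead; this only
    illustrates the bounded-perturbation idea."""
    perturbed = image.copy()
    step = epsilon / steps
    for _ in range(steps):
        # Move each pixel a tiny amount toward the decoy ("teapot") image.
        perturbed += step * np.sign(decoy - perturbed)
        # Project back into the imperceptibility budget around the original.
        perturbed = np.clip(perturbed, image - epsilon, image + epsilon)
    return np.clip(perturbed, 0.0, 1.0)

rng = np.random.default_rng(0)
bicycle = rng.random((64, 64, 3))   # stand-in for the artist's image
teapot = rng.random((64, 64, 3))    # stand-in for the decoy concept
shaded = poison_image(bicycle, teapot)
# The change stays within the epsilon budget, i.e. invisible to a human:
print(float(np.max(np.abs(shaded - bicycle))) <= 2.0 / 255.0 + 1e-9)  # True
```

The key property mirrored here is that the perturbation never leaves a tiny per-pixel budget, so a person sees the bicycle while the training data is statistically nudged toward the decoy.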

Bob Ullery (34:29.242)
My thought was maybe it's not advantageous for, let's say, up-and-coming artists to apply Nightshade. If you were relatively obscure and trying to build a network and a follower group of fans, wouldn't it make sense for your work to be recognizable in potentially any model? Wouldn't that help proliferate your style

and actually open up doors to monetize it later? Or is it more appropriate for brands to protect their brand? I don't know. What do you guys think?

Calvin Hoenes (35:07.417)
That's an interesting question. One of the fascinating things: I saw an interview, I think it was last week, mainly about how copyright is handled today, and that one focused on the US specifically. There was this case with a children's book author, I forgot the name, who used AI-generated images for the illustrations in the children's book

and tried to get a copyright on it, right? They came back and said: because those images are AI-generated, you cannot copyright anything in that book. So it just becomes straight-out copyable. And we've seen many cases where, if something is not copyrighted and becomes successful, people will just go ahead and

do a copy on Amazon, publish it, and basically push you out of the market. If that is your primary source of revenue, I can totally see why to some people that is challenging. The point I'm trying to make is that while there's currently no legal backing to deal with scenarios where AI is involved and the output is still seen as copyrightable material, there are not

great ways for you to monetize what you said before, Bob, in terms of: I would love to put my stuff into an AI model, but I would also love to find a way to monetize this. And there's not a great incentive yet for companies like Midjourney, or DALL-E from OpenAI, or whoever, to build that revenue split in. But I do think that given enough public pressure from creators,

this could become feasible. And then we're sitting at the other side of the spectrum, where this could actually pose a great opportunity, like you said. What if your style influences the model in a way where the model becomes big and so do you? It's like how social selling on social media has become a big trend, right? I as a creator can use social media and social selling to bring my music, my whatever, to the masses.

Calvin Hoenes (37:22.725)
Could a similar motion happen through AI and through models? We will see.

David DeVore (37:28.468)
Yeah, I mean, a US Circuit Court ruled that only humans can create copyrighted material, and only humans can create patents. So an AI can't create something that's patentable, which is pretty interesting. And on one level, I agree.

On another level, I sort of look at copyright law as being kind of archaic, and same with patent law, right? Not really grounded in reality in terms of how people create, what they're creating, the technology that's creating this stuff, and the speed at which it's happening. You know, I thought

Kevin Nuest (37:59.315)
Very.

David DeVore (38:22.4)
you know, our friend Mel there at Fox led the creation of... is it Vault? Is that right? Am I saying that correctly? To basically log media on chain and create a pipeline for media companies to both monetize their work and

create authenticity on the blockchain. We've talked a number of times about blockchain being the perfect tool for creating a record of provenance, and I think we're going to see more of that. At the end of the day, that is probably the only good way to say: hey, this is a work that we own;

you can use this work to train a model, and you're going to pay us for it, without endless lines of lawyers and Dropboxes and so forth. So I was excited to see them take those steps, and really excited to see it hopefully start to take hold. Hopefully we can get them on this podcast to talk about it, because it's really future-thinking and exciting,

this motion where AI and blockchain meet and are complementary.

Kevin Nuest (39:55.434)
Yeah. It's called Verify. Fox released Verify as an open source protocol, right? So they released it, and it's there for everyone to use as a standard, like you're saying: the ability to work across borders here very quickly. Imagine some of the great artists we work with going from trading a Dropbox with 10 images in it to put into a model, right, to try to get

David DeVore (39:57.08)
Verify, that's what I was trying to say. Verify. Yeah.

Bob Ullery (39:57.109)
Thank you.

Kevin Nuest (40:24.382)
a couple of dollars, but there's a massive opportunity. So Bob, I want to revisit the question you posed: when does it help, and when does it hurt, to implement something like Nightshade that prevents derivatives of your work? I think there's a flip side to this that I'd like to see, that would be productive. And Dave, this is something you've been working on in a few different ways too,

and will probably have something to show sometime soon. So imagine an artist applies Nightshade to their work and then publishes it out there, puts it on Twitter, Instagram, makes it super simple. Somebody grabs it, starts grabbing the daily or weekly posts from this artist, and starts putting them in a model. Maybe it's a model specifically for that artist's style, or it's put into a larger model collecting types of artists.

And then, because it's got Nightshade implemented on it, they think they're training on one thing, this artist's work and the graphics in that art, but instead they get handbags and tea kettles and other weird objects in their model as it infers, right? Then, and I'm making a leap of logic here, is there a process where the person who tried to take that work and put it in their model

can effectively receive a prompt, or, as they try to debug what is going on, be led to: hey, this has Nightshade, and here's where you can go. One, license this artist's stuff, like you're talking about, Dave, on a very programmatic basis, super simple. Or two, here's their trained model that you can use to create derivatives, and through using that trained model, you'll pay for it, right?

Let's call it a dollar per image that you want to get out of there at high res. And that person is willing; they want derivatives of their work, but they also want to be paid something. So maybe they could have the best of both worlds like that.
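The flow Kevin is imagining, where a confused trainer debugging a poisoned dataset gets routed to a programmatic license or a pay-per-image hosted model, could be sketched like this. Everything here is hypothetical: no such Nightshade metadata registry or resolution API exists, and the names, price, and URL are made up.

```python
from dataclasses import dataclass

@dataclass
class LicenseOffer:
    artist: str
    per_image_usd: float
    model_url: str  # hypothetical model hosted by the artist

# Hypothetical registry mapping a detected protection marker to the
# artist's licensing terms; in Kevin's scenario this is what a confused
# model trainer would be pointed to while debugging.
REGISTRY = {
    "nightshade:artist-123": LicenseOffer(
        artist="artist-123",
        per_image_usd=1.00,  # "a dollar per image at high res"
        model_url="https://example.com/models/artist-123",
    ),
}

def resolve_protection(marker: str) -> str:
    """Route a detected protection marker to a licensing option."""
    offer = REGISTRY.get(marker)
    if offer is None:
        return "No licensing route found; image may corrupt your training run."
    return (f"Protected work by {offer.artist}: license it programmatically, "
            f"or generate derivatives at ${offer.per_image_usd:.2f}/image "
            f"via {offer.model_url}")

print(resolve_protection("nightshade:artist-123"))
```

The design choice being sketched is simply that detection resolves to an offer rather than a dead end, which is the "best of both worlds" Kevin describes.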

Calvin Hoenes (42:32.01)
I think you...

Yeah, that's actually a great balance. The one thing you wouldn't get with that is the discovery through the AI model, but you're probably not getting that directly anyway, right? I think the artists, the creators, still need some exposure outside, so they have an audience that is aware they're here and can use their work. But once they have that, what you're saying is a great solution. Absolutely.

David DeVore (43:02.984)
Am I correct in understanding that Nightshade actually will, in some ways, poison the model, right? It's not just that it will consume this image; forevermore, the model is going to be tainted

Kevin Nuest (43:12.031)
Mm-hmm. Yeah.

David DeVore (43:25.856)
with these additional images and stuff that it doesn't understand, so anybody who uses the model in the future potentially comes across some sort of insanity they don't understand, right?

Calvin Hoenes (43:39.169)
Yeah. Well, I think it's gonna be less of a bad effect on the actual user and more of a negative reinforcement on the model in general. It's less of a poisoning per se: the model sees these images, gets negative feedback from them, and so the model kind of learns to avoid images

David DeVore (44:05.912)
Don't you talk for them.

Calvin Hoenes (44:07.029)
like that. So basically, if it comes across the same image somewhere else, it would, to put it in human terms, spook and run away.

David DeVore (44:13.326)
Alright.

David DeVore (44:17.081)
Yeah.

Bob Ullery (44:17.266)
It's like buying a knockoff part for your car, right? You go to eBay and buy an almost-OEM part. It works for a while and then it breaks, and the quick fix is to replace that part. All to say, there could be some poison in a model and you may not realize it for a while, because you're not actually traversing that part of the space, right? You didn't ask for the bike yet.

Kevin Nuest (44:17.502)
Ye- Yeah.

David DeVore (44:44.227)
Right.

Bob Ullery (44:47.286)
So you didn't know it was actually a teapot until you do. And at that point, you might swap that model out for something else with a higher grade of credibility. So Nightshade is not a permanent solve, because I think almost instantaneously people are going to be working to circumvent it, folks who want to use unlicensed work in their models, and there are plenty. The University of Chicago calls that out as well. It's just an interesting take on

what they're calling an offensive measure against unlicensed use, where Glaze is a defensive mechanism. Folks who go on the offensive are always circumvented by the defense at some point in the future, right? So it's this cat-and-mouse game going back and forth.

Kevin Nuest (45:34.242)
There you go. Let's define that really quick. Glaze is defensive: it gives you the ability to say, that is my work that you used over there. So now I have to go monitor and try to make a claim, and maybe that claim involves courts and lawyers, right? That's pretty hard. Offensive, with Nightshade, is: I'm going to muck up your model when you try to rip my data, and it's going to make your life pretty hard. Really quick, take the even broader example:

Calvin Hoenes (45:34.605)
Yeah that's basically it.

Kevin Nuest (46:04.098)
take Midjourney. And Midjourney isn't... not saying they're a bad actor; they're a large actor. So Midjourney says: we're training the next version, and we've got all these images from lots of different places, and we're going to go do a training run, right? Then of course they're going to come back and run that against some baselines. And all of a sudden their baselines aren't passing like they were before, but they've just thrown in a whole clump of data, which maybe included some of these Nightshaded images. So some of the

standards pass and some don't. And then what do you do? You throw out the whole set of data and then go try to find the problem. So it really is, I think, poisoning in that way, right? When you start thinking about training runs, I wouldn't want to be the data scientist on the other side having to figure out what is going on with my training runs.
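The failure mode Kevin walks through, a training run whose baseline evaluations silently regress after ingesting a batch that includes poisoned images, is essentially a regression gate on benchmark scores. A minimal sketch, with assumed benchmark names, baseline numbers, and tolerance:

```python
# Hypothetical baseline gate for a training pipeline: compare a freshly
# trained model's scores against known-good baselines and flag any
# benchmark that regressed beyond a tolerance.
BASELINES = {"fidelity_eval": 0.81, "style_probe": 0.90}  # assumed numbers
TOLERANCE = 0.02

def gate_training_run(new_scores: dict[str, float]) -> list[str]:
    """Return the names of benchmarks that regressed past tolerance."""
    return [name for name, base in BASELINES.items()
            if new_scores.get(name, 0.0) < base - TOLERANCE]

# A run trained on a clean batch passes; one that ingested poisoned
# images (bicycles secretly encoding teapots) fails the style probe.
clean_run = {"fidelity_eval": 0.82, "style_probe": 0.91}
poisoned_run = {"fidelity_eval": 0.81, "style_probe": 0.74}
print(gate_training_run(clean_run))     # []
print(gate_training_run(poisoned_run))  # ['style_probe']
```

In practice the hard part is exactly what Kevin points at: a gate like this tells you that a run regressed, not which images in the new batch caused it.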

Bob Ullery (46:51.51)
That'd be really tough.

David DeVore (46:51.62)
There's a science fiction novel here, right? In 2052, all of the models have been combined and are running everything, and Nightshade shows up and just destroys us. Lurking in the system.

Bob Ullery (47:17.005)
I think you're onto something there. I mean, if you do game that out...

David DeVore (47:17.388)
That's a New York Times bestseller; here I come.

Calvin Hoenes (47:20.824)
Maybe we should get into writing fiction books.

Bob Ullery (47:24.95)
If you game it out, and you have that perspective in mind as a model builder, Nightshade has probably already done its job. Even if very few people ever apply it, just the notion that it's there would yield the desire to use only great inputs in your model, because you could totally kneecap yourself down the road, realizing later that you have Nightshade in your model, which

David DeVore (47:44.995)
Mm-hmm.

Bob Ullery (47:53.598)
devalues it for later consolidation. There's going to be a massive economy here, a super massive economy, around trusted, highly credible, well-curated data to model against. I think there's something there. I'd love to go around the horn on this notion of what is copyrightable and what is patentable. I kind of take the opposite view here in terms of

David DeVore (47:56.621)
Alright.

Bob Ullery (48:24.21)
what the legal structure is saying. I'd point to music as a great corollary for precedent, like Marvin Gaye versus Robin Thicke and Pharrell recently, around "Blurred Lines," right? The songs are not directly similar. There's no plagiarism at all, but the suit claimed that "Blurred Lines" used the same instrumentation and ultimately had the same feel, and the jury sided with

Bob Ullery (48:53.01)
the Gaye estate, right? And I think they won seven or eight million bucks in that case. So that's an interesting corollary: it was not plagiarized, it wasn't based on it at all, save for the creator subconsciously being influenced by something. You know, if you built a mechanism where AI spit out something patentable on the other end, I would argue that that

process itself is patentable. Right?

Bob Ullery (49:28.414)
We've always used tools to create the next layer of tooling. Why would we prevent the patentability of an outcome because we used a specific type of tool in the mix? That seems highly political to me, very political. And you should be able to protect that process. The only difference here is the acceleration and speed of innovation. So is that the real purpose behind such a law:

to pump the brakes, right? Those who have access can now rip out patents much faster than those who don't understand it. That seems to me like competition.

Calvin Hoenes (50:13.27)
I mean, it's interesting, right? Just to go back to the Nightshade case before I answer your question: Nightshade is kind of a delaying factor, buying time for regulatory frameworks to make sense of it all and put something in place. In that sense, it might just give a bit more time so that the copyright framework can be updated. Whether that update will be a good one is a totally different question.

But to your question on what's copyrightable: I tend to agree with you that hindering technical evolution is not always a great idea. So we need to find a balance; we want the evolution. On the other hand, if you're on the other side of it, if you are the artist, and you find yourself in a position where you've been trying to make your livelihood,

and now your entire body of work has been taken without your consent, and someone else is making money with it and not compensating you for it, I also completely empathize with that position. Because what alternatives are you given as an artist or a creator right now to continue down a path that allows you to make money, other than, say, going back to creating physical work, going back to galleries, and finding other channels of distribution and sales?

Bob Ullery (51:41.17)
Yeah, I mean, I think there's a misnomer here where folks think the model provides a legal proxy, a protection buffer, against what could be argued as copyright infringement, right? Marvin Gaye and Robin Thicke: not the same song, but close enough. It's on the creator to ensure that they are not violating

copyright, no matter the tool that's sitting in between, right? Pharrell and Robin Thicke used soundboards, and Procreate or Pro Tools or whatever; you're not blaming Pro Tools for that. Same with AI. And to your point around the children's book, there are a couple of things that may have happened there. I have a hunch what happened, which is they said, probably programmatically: create me a picture of a forest in the style of this person, right?

That is not something you should do as a creator. That is copyright infringement; that is clearly a tort. Worse than the Robin Thicke case, really, because I don't know whether that was intentional or not; it was just close enough that it violated the copyright. If you're intentionally doing it, I think the laws we have on the books today probably handle this already. I don't know that we need a new framework. Exactly.

Calvin Hoenes (52:58.113)
Yeah, it's actually the same thing with the New York Times case, right? Like what you're saying, essentially: the tech side is making the case that this tool is not intended for this use. The Times is arguing that they have artificially constructed cases where using the tool produced the original, or writing similar to the New York Times article. And like you said, the tech side is always like, well,

you're not supposed to do that; that's what all the terms of service say, it's a violation. That's your point, basically.

David DeVore (53:30.348)
Yeah. Oh, and OpenAI said it was a bug.

Bob Ullery (53:34.142)
They said it was a bug.

David DeVore (53:35.284)
Yeah, they said there was a bug that they're working on, and it shouldn't happen.

Calvin Hoenes (53:40.705)
Yeah, and after that, I think the lawyers took the side of saying it's basically unfair use of the tool.

Bob Ullery (53:48.402)
It's kind of like suing the library to me, right? It's like: hey, this architect stole my design, and they got it from the library, so you shut the library down? They're peddling copyrightable information, right? I just think

every case is unique, and intent is clear in a lot of these cases. But ultimately I think you need to put the onus on the creator: you have to double-check your work to make sure it's not infringing on copyright, just like you would have done anyway if AI didn't exist.

Kevin Nuest (54:19.842)
How far can parody get you, right? Because you can create a parody of something in the style of it, and that's copyrightable, right? SNL is copyrighted.

Calvin Hoenes (54:20.261)
It's an interesting... Go ahead.

David DeVore (54:31.2)
As long as it's funny, it's okay.

Bob Ullery (54:35.578)
Look, as AI accelerates everything in our lives, one of the things it'll also accelerate is the ability to sue somebody, right? So if you fly too close to the sun, you're going to fall into it. I think you've just got to be ultra careful and still act with intent of credibility and benevolence, and create using

Kevin Nuest (54:35.658)
It's all for the lulz.

Bob Ullery (55:01.914)
a positive view of the law instead of trying to circumvent it.

Calvin Hoenes (55:05.801)
Yeah, and I really like your library example. I want to touch on that for one second. If OpenAI were a public good, theoretically accessible to anyone for free, that might change the scenario a little bit, right? Because going back to your library case, I can go in.

Wikipedia is an odd example, but it's something that's publicly accessible: you can go, you can take, you can use it, right? And then it's on you. If it's a company that obviously has a financial interest in selling this back to their customers, I think the case is slightly different. In that case, you might argue they need to take a bit of a different stance.

Kevin Nuest (55:45.326)
But OpenAI is the foundation for the good of all humankind.

Calvin Hoenes (55:49.283)
Right.

Bob Ullery (55:50.642)
And so what if we make a little money along the way?

Calvin Hoenes (55:54.433)
It's just a hundred million here and there.

David DeVore (55:54.444)
Well, and... You didn't-

Kevin Nuest (55:54.542)
So Dave, I got a question for you. You're rocking your Adobe hat there, if I'm not mistaken. So I create an image: I use Photoshop, I create an image, I copyright it, even as part of a company. Why is that allowed? That's a tool, right? Why are we cracking down on tools all of a sudden?

David DeVore (56:19.917)
Well, I mean, it's...

Kevin Nuest (56:20.522)
I didn't hand-paint it or put it together or rotoscope it or whatever. I used Photoshop to do it. Why was that allowed?

David DeVore (56:26.96)
Because Adobe is not going to sue you. At the end of the day, it's really: do you have a shotgun to stand on the hill and protect whatever you own? In the Marvin Gaye case, the estate had a shotgun. They're like, oh, let's go sue Robin Thicke, right? But if it were a jingle from, like, our friend Domino,

Domino doesn't have a big enough shotgun. He just doesn't. And that's why I think a lot of this stuff is kind of archaic: it really does come down to whether you have enough money and a lawyer who will take it, to go get into it with somebody. And that's kind of arbitrary.

And that's why I'm a firm believer all provenance should be on chain. Because it shouldn't be arbitrary; it should be binary, right? It should just be yes or no, not: oh, I've got enough money to get a lawyer and go sue you. So I agree. But at the end of the day, Adobe's not going to sue their creators.

David DeVore (57:52.016)
It's bad for business.

Bob Ullery (57:53.661)
Here's a question back to you, Kevin. We're going after OpenAI for torts on copyright. So why doesn't Adobe own the copyright to the image you created in Photoshop?

Kevin Nuest (58:08.194)
Yeah, because.

Bob Ullery (58:09.818)
Right? It doesn't work both ways. If you're going to blame the tool for a copyright tort, it would make sense that the tool would also own the copyright, since it's clearly the central entity here. And what's obvious is that it doesn't. The creator used the tool, so we can't blame or credit the tool for any of it. It's the human using it.

Calvin Hoenes (58:28.493)
The difference is that Adobe hasn't used other creators' work for their tools to exist. That's the argument. But I get your point. And I think the argument is on the right side of history, potentially.

Bob Ullery (58:41.01)
I mean, I don't know, though; they have background fill. I don't know if you've used the AI fill tool in beta.

Calvin Hoenes (58:46.009)
True. Yeah, how does Adobe deal with that in terms of their policy? I'm actually really curious. Has Adobe put out a stance on that?

David DeVore (58:51.636)
Well, they also have the whole world of Adobe Stock images, right? So they already have licenses for lots and lots of creative work that they can apply. And I'd have to go back and see who it was that did a deal with Getty and others. But at the end of the day, it's

Kevin Nuest (58:52.576)
P.

David DeVore (59:21.208)
back to the conversation of having good models and good training. I think that's where a lot of this is going: the open source models are the Wild West, but big tech is going to be forced into a position of using licensed content, right? So...

Kevin Nuest (59:47.81)
Talking about having a big enough shotgun to defend your hill, Calvin: Adobe came out and said, hey, if you're an enterprise customer and you use our generative tools and you get sued, we've got you, we'll foot your legal bill. I'm sure there are some limits on that, but they already told their most valuable enterprise customers: we've got you, don't worry. Get in there and start making rainbows and unicorns and castles, like in those really good

video spots they're running on national TV, right? So they're ready to put up the dukes, and that's how they're moving forward. They're ready to be litigious; it's not like they haven't done that before. The thing I wanted to bring up, to answer the underlying question, Bob, is that one flavor of what I think we'll see will be a lot of people arguing to prove that

Calvin Hoenes (01:00:24.427)
Mm-hmm.

Kevin Nuest (01:00:45.93)
the tools used to create something were not AI. When they go to patent, they're going to say: that's not AI, no, no. That's just like the predictive text inside Google Docs or Word docs; it's been around for a couple of decades. That's not AI. Would you consider that AI, jury? No, of course not. That's basic. So are the tools we used; they're just basic.

Calvin Hoenes (01:00:52.377)
Mm-hmm.

Kevin Nuest (01:01:12.97)
Now let me show you what AI really looks like. And then they point to something heading towards AGI. That's what AI is, jury. That's what it's going to be. That's what I would do; that's how I would argue it: I didn't use AI tools, because they're not AI tools.

Bob Ullery (01:01:25.65)
Yeah, it wasn't AI, I just used math. It's totally different.

Kevin Nuest (01:01:28.994)
It's math. Yep. Probability.

Calvin Hoenes (01:01:29.845)
Yeah, it's fascinating. What kind of process are they going to put in place for you to prove that? Do you need to record yourself while doing it for at least half an hour? How is it ever going to work in practice, at scale? Who's the human reviewer? Yeah.

Bob Ullery (01:01:39.038)
Yeah.

Bob Ullery (01:01:43.026)
Where does it end? And just to kind of wrap a bow on this, maybe: take Adobe generative fill, right? The argument I'm going to present is that they don't need compromised or shady models. If you had a picture of Mickey Mouse in Photoshop and you deleted one of his eyeballs and then did a generative fill in Adobe, it would fill it in,

right? Because it's using the context of the image you already put into the tool, which was itself a copyright tort, it will fill it in for you. That becomes really mucky. Adobe didn't train on unauthorized data, but it's smart enough to know what was missing in the image, and it filled it in. So are they liable? Right.

Calvin Hoenes (01:02:32.617)
It's interesting. I've been reading through their terms of service, and it's kind of interesting what Adobe says themselves, right? One, on Firefly, they say it's trained only on Adobe Stock images, openly licensed content, and public domain content. And that's why they say

David DeVore (01:02:32.921)
and just.

Calvin Hoenes (01:02:51.417)
Sorry, my Siri just jumped in. Did you hear me talking? Yeah. They say that it's designed to be safe for commercial use. And then, on the other side, in their end-consumer license agreements, they say: be respectful of third-party rights. Using Adobe's generative AI features to create content that violates third-party copyright, trademark, privacy, or other rights is prohibited, obviously.

And this may include, but is not limited to, entering text prompts to generate a third party's brand logo, uploading an input or reference image that includes a third party's copyrighted content, and so on. So they've protected themselves from both angles, right? It's interesting. The first statement is: okay, we only used our own stock to train, so it's safe for commercial use, speaking to the enterprise customers we mentioned before. And on the consumer side: sorry, just don't produce copyrighted stuff with it.

David DeVore (01:03:43.696)
Don't do it.

Calvin Hoenes (01:03:44.609)
And I'd say Adobe did a pretty good job of protecting themselves that way and putting themselves in a very strong position, probably stronger than someone like Midjourney. I think Midjourney is in a much weaker position if it ever gets to court, even just to survive that in the short term. What do you think?

Kevin Nuest (01:03:44.695)
Yep.

Bob Ullery (01:04:03.742)
What's old is new again. It's the old "guns don't kill people, people kill people" argument. It works; it has worked. So I don't know what else they can do. Clearly they're acting with good intent.

David DeVore (01:04:10.844)
Uh. The.

David DeVore (01:04:20.388)
So we are at the top of the hour. We didn't actually get into the conversation around

employees, or companies without employees, right, which I think is a great topic for another day. But this has been fun, really great going through the news and the hot stuff from the past week or so. Thank you all very much, and have a great night.

Bob Ullery (01:04:53.214)
Thank you. Have a good day. See you guys.

Kevin Nuest (01:04:54.179)
Thanks.

David DeVore (01:04:54.884)
Talk soon.

Calvin Hoenes (01:04:55.429)
Thanks. Bye.