[00:00:00] Jacob Haimes: This episode was recorded on December 14th, 2025. Hey everyone. In our most recent episode, we talked about bubble dynamics and the vibe shift and a couple other things, and we still have episodes planned for that. But this episode is gonna be a little different and focus on an update that Igor had regarding AI. We'll also cover how this changes and impacts the analysis that we've done on this show. Welcome to muckrAIkers, where we dig through the latest happenings around so-called AI. In each episode, we highlight recent events, contextualize the most important ones, and try to separate muck from meaning. I'm your host, Jacob Haimes, and joining me is my co-host Igor Krawczuk. [00:00:42] Igor Krawczuk: Thanks, Jacob. This week we're talking about how I became an AI booster. [00:00:46] Jacob Haimes: Can you elaborate on that just a little bit? [00:00:49] Igor Krawczuk: I didn't become a full-on AI booster, of course, but I was a bit shook by the latest Claude Code release and by a thing we ran at our company. And I figured it would be good to talk about both: my updates on this, just to remain credible, because I've been the stronger AI hater of the two of us, I think, and also about how it doesn't change the points we are making if AI is not completely useless. Because that's an argument that's gonna be made: you know, oh, it's good at coding now, so all of the critiques are washed away. [00:01:30] Jacob Haimes: Sure. So I guess before we get into it, just to provide context as to why you want to do this: why is it valuable that we talk about this and put it in an episode? What are we actually adding here? [00:01:46] Igor Krawczuk: Yeah, so one thing that I think we're trying to do as a general rule is to give a technically grounded, realistic assessment of a situation. And then, because the situation is in our assessment objectively horrible, we critique it and we point to what should be done to make it not horrible. In order for that to be credible, you can't strawman. So one of the critiques that is often, I think, maybe correct in its emotional core but misguided, is pointing at generative AI models and saying, oh, but they can't create graphic designs the way that humans can, which is true, but it misses the point of why people are adopting them. Or saying, these things will never be able to solve this puzzle that I give them, therefore they're not useful at all. You see these types of arguments sometimes in the AI critic space, and they don't really grasp the core utility, why people adopt these systems. The reason why I was a very strong critic of vibe coding and all of these things, even when I was hedging it a bit, saying there's some place for it, but overall I was a big hater and didn't like using it, was because I genuinely didn't see the total net time saving. And we had talked about these studies, the one by the Danish researcher, showing that this is not only my anecdotal evidence, it's also a general thing: whatever time you save in getting your one-shot vibe-coded output, you need to spend integrating it and debugging it, because now you don't have a mental model for it. [00:03:45] Jacob Haimes: And I think that's not just in the study. There have also been people, even relatively recently, saying, you know,
the shift to vibe coding just moves that extra work: you then have to work backwards and understand the code from a different perspective, that of someone who didn't write it, and that just takes a lot more time. And also, if you actually care about how it works, it's more likely that there's an error you don't catch on a first pass, because you don't understand how something is happening, or don't have as much context as to what triggers something else. So I think that is meaningful, and I would say that is still hampering the vast majority of the utility there. But there is a slice, and maybe this is what you're talking about, where if you just want a relatively simple piece of functionality or output, like a dashboard kind of thing, you can actually get that faster compared to how much work you would've put in if you were coding it from the ground up. Is that sort of what you're talking about? And maybe you can give a little bit more context as to what the catalyst was for this update for you. [00:05:07] Igor Krawczuk: Yeah, so the way I would phrase it: there has always been a little bubble of tasks where you could get value out of these LLMs. In the very beginning it was basically autocomplete, which is why Cursor and so on became a big thing, and I think that's still one of the ways people really like using it, because you're constantly mini-correcting. I'm even skeptical about how much time you save there, to be honest, but it's one of the things that felt the most useful to people already. The next step up, requiring more long-context cohesion, was demos or mockups or single-shot, let-me-try-to-hack-something-up mini tools. And that's been the thing that basically drives the actual vibe coding boom: you have a problem, you want to quickly have a solution, you sit down, you don't care about the code, you just want to upload a CSV and get it formatted in a different way, or whatever your task is. And for that, sometimes they could already do it. Sometimes you got it in one shot, sometimes you got it after 10 hours of prompting, when it would've been faster to do it by hand. But that existed already, and it existed more for web stuff, the more common tasks with a lot of boilerplate, and with every model update they've been pushing that boundary out. And also various different trends converge, you know: the inference gets faster so they can do more chain-of-thought tokens or more rollouts, they integrate better tooling into the Claude Code client, they have the prompts better optimized, and a lot of different little knobs get tuned to make it better and better at every stage. And what caused my shift is that with the latest version of Claude Code and the Opus 4.5 release, it got actually useful for me on things that are still basically boilerplate, but complicated boilerplate, things that before I could still do with LLMs, but it was an engineering effort, basically just a way of compiling a bunch of scripts into a particular output. Now you actually describe what you want to the LLM and, with very few shots, it gets quite close. And the thing that fully knocked me out of my previous position was at my work: we gave it a text file, a CSV containing a particular instrument readout, without any headers and without any indication, except the file name, of what might be inside.
And we just gave it a prompt: what can you do with this? Make something with this. It one-shot a very reasonable viewer tool for this kind of data. Which isn't to say it has reasoned it all out, because this kind of data exists on the internet, it's discussed, and the same type of tool exists already. So all of the components exist, I checked afterwards, but it's a very impressive feat of recall, of sampling from the distribution, to phrase it correctly. The amount of useful work that is now in distribution for these generative models has reached a tipping point where I can genuinely see how, even if it becomes much more expensive, even if it stops now, there's a core of utility that I think will be hard to go back from. If this vanishes, it's gonna come again; someone will build it again, someone will make it happen again, maybe with fresh funding and without the research debt. And that kind of shifted my position, and it has a bunch of downstream effects on our analysis, which we're gonna talk about. [00:09:11] Jacob Haimes: But does it, right? That's sort of where I was at already. I agree with what you're saying. The amount of effort and capital at this point that has been put into these systems being able to do these tasks, making a little demo based on a CSV or things like that, I don't know, to me that has always been something where I felt like eventually we'd get to a point where that would happen, at least for certain data types and stuff. That's sort of what I mean when I say I'm not saying this technology isn't useful. It is useful, and that is a great example. The issue for me is much more about what it represents and how it changes the structure of thinking about these problems, because you go from building from the ground up, which requires a certain level of understanding of the thing that you're working with, to not necessarily doing that, and therefore not necessarily understanding the risks. And maybe that's true of the first way as well, but the likelihood is much higher that you are missing aspects of it; you don't realize that there's an edge case that is not accounted for, things like that. And yeah, I don't know, I totally get what you're saying, but I think that's where I was already at. [00:10:40] Igor Krawczuk: So maybe you're just smarter and more professional than me. So if anybody wants to take advice, ask Jacob, not me. [00:10:48] Jacob Haimes: Definitely not true. Just want to clarify that one. [00:10:52] Igor Krawczuk: An alternative explanation, or maybe a complementary one, is that for me there is a bit of a difference, in the sense that it's about two things: it's about reliability, and it's about how far the effort put into making the model better actually gets you toward useful work. Basically, I previously had a mental model that no matter how hard you push, a certain number of steps, not even that complex, just a certain volume of work, is gonna be out of reach, because of the argument I made in one of the previous episodes. It's a combinatorial explosion of multiple steps, where every step you go further along and every additional bit of context you have to keep in mind puts a larger burden on the model to stay cohesive and is another source of error. And because there's nothing that does
an actual cohesion check inside the model. Humans have this: the symbols we get from the outside, that we ground against, force us back into a cohesion check, and the whole point of notation and math and external tooling is to help us with that. But at a very fundamental level, most humans don't spiral off into random output, because we have an actual internal process. I said most humans; that's a very safe statement. And I thought we wouldn't get this, and now this has kind of shifted for me, because we've gotten much higher up the capability tree of just recall. They still make mistakes, like I said, but it's now at the tipping point where I am able to, without a lot of effort, create systems that deal with this stochasticity, and I can make almost deterministically good output for a particular task happen without needing to manually supervise and without an insane amount of cost. And that shifts this thing from a casino, which was my previous mental model of why people use this stuff, where you pull the arm on the vibe coding multi-armed bandit, you get your reward, and people don't do the math on what it's actually costing them, to an actually useful tool, consistently, as long as the domain is worth the effort of making it work, which will not be everything. But it has kind of shifted the regime I'm thinking about, because we got higher up that capability tree than I thought was possible in this investment space. Sorry, but now I'm done with my spiel. [00:13:55] Jacob Haimes: So I think the part that I'd highlight there is that you said "without a lot of effort": effort put into developing the prompt, or the way that you're feeding in data, or the instructions that are being provided, or the tooling that you're using and how it's being called. So essentially, without designing this system for your specific use case, it would be much more similar to what you had said initially, but now that isn't the case. And I would disagree with that. I would say it's just that all of that work has been done for you by the language model providers. They have spent tons of money, tons of resources, tons of time on developing tools for this task, not just developing tools for the systems to use, but also augmenting the training data and adjusting some of the other parameters they're using to get this system to be what it is when you see it on the user-facing end. And just because they've done that with one, like I've said before, it is valuable for this thing, but that doesn't mean it's valuable for anything right now, or even that it's possible to do these other things without having to essentially start from square one, or square zero, or whatever, and restructure things, because a lot of the data that's been provided and a lot of how the system has been orchestrated is specifically there to be able to do these tasks with higher confidence. [00:15:36] Igor Krawczuk: Yeah, totally. Just to be clear, I'm not an AI booster in the strong sense, and this doesn't change the AGI argument, for example. We're gonna get to that. For me it's quite a big shift because it's actually a regime shift, not a phase transition. Jacob has a blog post about that. This is a different thing.
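To make the "wrangling stochastic systems into almost deterministic output" point concrete, here is a minimal sketch of the kind of harness Igor is describing, in Python: sample the model, run a deterministic validator, and retry until it passes or you give up. Everything in it is hypothetical; call_model stands in for whatever LLM API is being used, and the JSON check is just an illustrative stand-in for a real validator.

```python
import json
from typing import Callable

def validate(raw: str) -> dict | None:
    """Deterministic check: the output must be JSON with the fields we expect."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(data, dict) and {"timestamp", "value"} <= data.keys():
        return data
    return None

def reliable_extract(call_model: Callable[[str], str], prompt: str, max_attempts: int = 5) -> dict:
    """Retry the stochastic model until the deterministic validator accepts its output."""
    for attempt in range(max_attempts):
        raw = call_model(prompt)
        parsed = validate(raw)
        if parsed is not None:
            return parsed
        # Feed the rejection back so the next sample is conditioned on the failure.
        prompt += f"\n\nAttempt {attempt + 1} was rejected; return valid JSON with 'timestamp' and 'value' only."
    raise RuntimeError("Model never produced valid output; fall back to a human.")
```

The reliability lives in the validator and the retry budget, not in the model itself, which is why this only pays off when the task is worth building that scaffolding for.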
But yeah, what you're saying is basically one of the reasons why I wanted to talk about this, because the thing being useful now doesn't validate the claims of the people building it, who have been saying it was useful all along. Because the story, and I'm basically paraphrasing what you said, the story was that we train this thing on data and RL the hell out of it, and then, just by scaling it up, or because AI recursively self-improves, we're gonna get general usefulness. And we don't have that. What we have is that Anthropic, and only Anthropic, has optimized the fuck out of Claude Code, the software piece, not only the model. The combination of local indexing, an efficient local interpreter, a bunch of other stuff, combined with prompt engineering, combined with all of their little hacks that we talked about, like skills and plugins and subagents and tool calling, all these different ways of humans engineering a system and fine-tuning it, that stuff together has barely, after billions in investment, yielded a tool now where I say: yeah, pretty useful, I can make this work. I am good at wrangling stochastic systems, so I can make this work now, because I'm an expert in what I'm trying to do, or I could do it by hand, it would just be much, much harder. But all of that stuff is still required for it to be useful. It's not at all the generally useful case, but they're gonna sell it as such, and people should pay attention to what actually happened. So that's one of the ways where it does change the analysis: what we're talking about matters for extrapolation and for questions like, was this a grift or not, did it deliver or not, can we trust what they're saying, is their assessment of the technology they're pushing the right one? Those are epistemic and decision-making questions. From the economic side, it doesn't matter. What does change there is: if the economics work out and the task you're trying to do can be transformed into a rote task, it doesn't need to be a formalized task, it doesn't even need to be a verifiable task, it just needs to be a rote task. [00:18:27] Jacob Haimes: And can you just clarify for me what you mean by rote? [00:18:31] Igor Krawczuk: I was discussing this a bit with a friend over the weekend. A rote task is a task that might change every time you do it, so it's never exactly the same task, but it stays very close to a single mode, a single trajectory. In the sense that if you squint, and if you are able to see the high-level patterns, it is very clear that it is a repeatable task where, given the same input, you would do the same thing. It's just about having that recall, and you don't need to think once you've started it. If you have to look at what you're doing and adjust, and actually pay attention, it's not a rote task. A rote task is something where you can safely turn your brain off and just execute the thing. [00:19:26] Jacob Haimes: Can you give me an example of a rote task and a non-rote task, just to, again, make sure that I am fully understanding where you're coming from here? [00:19:37] Igor Krawczuk: One rote task would be: given a set of objects, like a taxonomy of things. So imagine a blog management system, given that we want to have
pages and blog posts and comments and users and avatars and trackbacks and images, all of these kinds of objects that we need to keep track of. I give you that list: make me a CRUD app, create, read, update, delete, to implement that content management system. At this point there exists a kind of default way of doing this, where once you have defined what you want to manage, and a bit of what each of these things does, it's what is also called making widgets. It is a technical task, because you need to be able to understand the specification and then execute it and do things in a certain way, but it will come together safely. Without additional requirements like high performance, it's not a thinking task, because you just do the standard thing, you have a lot of margin, and you're done. [00:20:51] Jacob Haimes: Sure. Okay. And then the non-rote task example? [00:20:56] Igor Krawczuk: Would be talking to the customer and making the taxonomy. Because, sure, most customers will want something very similar, and you can probably work off a similar one. And if you constrain and engineer your system enough, if you, for example, say you only accept customers that fit a certain bucketing system you have, you might turn it into a rote task, because then you can guarantee that it'll cover everything. But without further constraints, the customer will have some small tweak that maybe isn't covered by your distribution, and then you're back to one of two things: either it's slop and the customer gets slop, so they're not expecting quality and the stuff just gets accepted or ignored, or, if you go to a higher-stakes setting, they'll be pissed at you because your system doesn't work. [00:21:50] Jacob Haimes: Okay. So then the next sort of jump here is, you're saying, based on your update, that a rote task can be automated when there is enough data and enough effort is put into the tooling behind it. Is that correct? [00:22:11] Igor Krawczuk: Yeah. And also tasks that don't appear to be rote tasks right now. If people want to predict what AI can do in the future, the question is: what tasks can you transform into rote tasks? Where can you extract the decision-making from the task? If you can extract all of the real-time, high-stakes decision-making from it, then it becomes LLMable, basically, given enough effort. [00:22:38] Jacob Haimes: I would also say that comes with a caveat: if you actually care about the output, and/or you're integrating it with something, you as the person would still need to validate it, right? And at that point it becomes questionable again. So it holds if you don't need to have high confidence in it, I guess is what I'm saying. That's also a caveat, at least from how I see it. [00:23:02] Igor Krawczuk: Yeah, so this is also an important thing for the impact of this. This is not "oh my God, LLMs will replace programmers." This is: programmers will be expected to orchestrate LLMs and to basically become very good at giving specs and validating specs, and maybe reaching in and hotfixing something. Basically, everyone needs to be a senior programmer who is wrangling interns. That's the shift that comes out of it. [00:23:31] Jacob Haimes: Yeah. Okay. So that's actually very helpful for me in terms of understanding what your update consists of.
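For a sense of what the "making widgets" boilerplate in Igor's CRUD example looks like, here is a minimal sketch of a single resource from the hypothetical blog management system, as a plain in-memory Python class; a real implementation would repeat the same pattern for pages, comments, users and the rest, usually on top of a web framework and a database.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    title: str
    body: str

class PostStore:
    """In-memory CRUD for one content type; every other type repeats the same shape."""

    def __init__(self) -> None:
        self._posts: dict[int, Post] = {}
        self._next_id = 1

    def create(self, title: str, body: str) -> Post:
        post = Post(id=self._next_id, title=title, body=body)
        self._posts[post.id] = post
        self._next_id += 1
        return post

    def read(self, post_id: int) -> Post | None:
        return self._posts.get(post_id)

    def update(self, post_id: int, title: str | None = None, body: str | None = None) -> Post:
        post = self._posts[post_id]
        if title is not None:
            post.title = title
        if body is not None:
            post.body = body
        return post

    def delete(self, post_id: int) -> None:
        self._posts.pop(post_id, None)
```

It is rote in exactly the sense defined above: the shape is identical for every content type, only the field names change, which is also why this kind of code sits comfortably inside the training distribution.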
I do feel like it actually ends up making you more aligned with how I was thinking, which is funny. So I think we just touched on the financial side-ish: it required a massive amount of investment in order to get to this point, it also required the right kind of data, and it also is still a lie, right? It still isn't what was promised; it isn't what people were saying it was going to be. So I think there is a small update in terms of the finances, but it still, to me, looks like a bubble. Like, the internet is also a great technology, right? No, I don't think the internet is garbage, I love the internet, I think it's awesome. And yet there was a... [00:24:21] Igor Krawczuk: I mean, there is a lot of garbage, but we're all garbage gremlins. [00:24:25] Jacob Haimes: Yeah, yeah. But also, the core functionality, the internet part of the internet, is not garbage. It's the stuff that's on the internet. [00:24:35] Igor Krawczuk: So I think the update is: before, and we talked about this a bit in some of the episodes where we covered the financing, it was "this is a demo technology, the payoff for serious engineers is not really there, it's probably not worth it, and if you have to run at cost, there's no way this could ever pay itself back." And the update I would make is: it's still a bubble, it's still gonna cause a crash and so on, but there's a chance that all of the current contenders might survive, if they raise enough money, if they go public early enough, if they raise enough cash, basically, to survive the restructuring from a massive growth-and-research grift to extremely specialized and extremely powerful tooling for making software systems or data manipulation systems. [00:25:37] Jacob Haimes: Yeah. [00:25:39] Igor Krawczuk: And that is a small update, right? That is from "OpenAI, Anthropic and so on are fucked" to "they might have a chance." [00:25:47] Jacob Haimes: Okay, I see. No, that does help me. I guess I'm still a little more skeptical, although my model of the financial side is probably much worse than yours as well. So, [00:25:57] Igor Krawczuk: I mean, [00:25:57] Jacob Haimes: I, [00:25:58] Igor Krawczuk: I think it's more like, [00:26:00] Jacob Haimes: I guess that does also inform the OpenAI and Anthropic rumors of having an IPO in the near future. And also, with the most recent release of Gemini's model, OpenAI lost a significant chunk of its user base, and they're in "code red" mode or something like that, some dumb way to describe it. And it even, to some extent, makes the Trump executive order make sense, right? If you can prolong the bubble long enough to pass it off to someone else, essentially, to cash out early and make the company profitable. [00:26:37] Igor Krawczuk: There was also a rumor that OpenAI hasn't finished a pre-training run since late 2024; everything since then has been basically post-training. This came out of a SemiAnalysis side comment, and it wasn't denied as of 11 days ago. And it basically means, okay, they have a kind of proven product-market fit now that they can sell to people, they can sell to decision makers. Whether they can sell it to the public, that's a separate thing, but the coding part, they can sell that angle. [00:27:15] Jacob Haimes: Sure.
[00:27:16] Igor Krawczuk: And there's still this idea of "oh, this is gonna take over the whole economy," because the disappointment hasn't spread wide enough yet. But like I said, people are abandoning pilots, people are shrinking their investments, people are getting nervous. And it also becomes clear that the winner-take-all market isn't happening, because, like I said, Google is back. People, in my opinion, were stupidly dismissing them, because they don't realize that Google makes more profit per year than OpenAI has ever fundraised and earned combined. [00:27:50] Jacob Haimes: Yeah. [00:27:51] Igor Krawczuk: They were always gonna come back from this. [00:27:53] Jacob Haimes: The number of people from Google at NeurIPS this year, it is just wild. Actually, Anthropic and OpenAI both had papers that they participated on that were peer reviewed at the conference, which I was very pleasantly surprised to see. But it just pales in comparison to what Google is putting out, and they're not putting out most of their good stuff. [00:28:19] Igor Krawczuk: And then on top of that you have Mistral, who just released their latest desktop thing, which is very competitive, and people are happy with it because it's much cheaper. So the winner-take-all market story is kind of running away from them. They still have the story that OpenAI branding is the best branding, but we talked about the Pew research in the last episode or the one before: people don't like OpenAI that much. So they're definitely trying, [00:28:44] Jacob Haimes: Most people don't like AI that much. [00:28:47] Igor Krawczuk: They're definitely trying to cash in as much as they can. And the thing you said about the ads is part of the same thing: they really need to make that happen to have a story. But at the same time, people just finding strings related to ads in the APK, the Android binaries, in the latest update triggered such a shitstorm that they rolled it back, and people getting app suggestions in the app triggered such a shitstorm. People really don't want ads in their ChatGPT. It's not the same as Google, at least seemingly, as things stand now. [00:29:21] Jacob Haimes: Yet. To me, that just implies they haven't figured out the way to do it yet. And I know for a fact that they are working on it; I've talked to someone who is on a team for personalization evaluations at Google. Of course he isn't the one doing this for ads, but the reason they are doing that is for ads. [00:29:47] Igor Krawczuk: I meant that people accept it with Google because they're used to it from them. People will accept this also [00:29:54] Jacob Haimes: Yeah, I know. [00:29:55] Igor Krawczuk: from the Google AI. In ChatGPT, it's basically: yes, they might have a better brand and people like it more than Gemini for now, but the second their AI boyfriend suggests products to them, in a way that is legally doable, clearly marked as ads, they're gonna fucking hate it, at least as things stand now. And so OpenAI is basically in a hard spot, where maybe they will make it, they have billions of dollars, who knows. But the update we were talking about doesn't change that.
The current bubble, to my assessment, is still a bubble, because the data center issues and the externalities that we talked about, and are gonna talk about, are still the same. The copyright infringement and intellectual property violations, and the lawsuits related to that, are still the same. [00:30:48] Jacob Haimes: So before we get to externalities, 'cause that is something I definitely want to hit before we end our discussion today, I think more related to the bubble is that the idea of AGI is still bullshit, right? [00:31:01] Igor Krawczuk: Yeah. [00:31:02] Jacob Haimes: This is a tool that has been optimized for a specific use, and the idea that this technology could be general, I don't know, it gets me irritated just thinking about it, right? Because it's all a marketing scheme. [00:31:20] Igor Krawczuk: It's worth saying what happened here. The cosmic shift was not "oh my God, it's generalizing now, AGI might be real." The cosmic shift was "oh my God, they managed to put enough of the manifold that I care about into the model that I can use this now." And they would have to do the same amount of effort that they did for coding in other domains, which might not be possible, because one big thing is that computer scientists love to think that they're special and unique, but coding and computer science is not that difficult compared to lab work, because if something breaks, you have a chance of finding the unique, deterministic root cause, which just doesn't exist in the real world. So you literally cannot put in the same type of effort that was done for coding and fixed benchmarks; you would need to do it for the real world. [00:32:18] Jacob Haimes: And there's also just a lot more data on the coding stuff, because it is in a format that can go online very easily. [00:32:28] Igor Krawczuk: There was actually a good discussion on this by some Substack person: you need pre-training data to get the representations of the thing that you're trying to learn, and then you can try to do RL if you have basically free trial evaluations. For code, you can do this. You can give it a coding prompt, you can have the model, through some clever prompt harnessing, be set up to write a test for the problem, and then a human reads the test and says, yes, this is a valid test. And then you let the LLM write code until it passes the test, and you use the whole sample of feedback that you have from the human and from the process to train the next iteration, and you repeat this. It is a way of quite scalably getting a lot of signal on coding, because you can have a machine that was made by humans deterministically check: does this work or not? Basically, the computer runs the tests and so on. For real-world stuff, you always have to build this massive harness, at which point you're no longer working on an LLM, you're working on the formalization of the physical world. And that's not impossible. Again, anything that can be made a rote task, that can be broken down into brainless components, can probably be turned into that kind of thing. Honda is doing it in their robotics facilities right now, and there's a whole batch of humanoid robot startups trying to do this, because by having the robot look like a human, they can reuse YouTube videos of humans as pre-training.
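A rough sketch, in Python, of the test-gated loop Igor just described for code. The generate, run_tests, and human_approves_test callables are hypothetical placeholders; the point is the shape: a human-approved test acts as the deterministic verifier, and every attempt plus its pass/fail outcome becomes training signal.

```python
from typing import Callable

def collect_coding_signal(
    task: str,
    generate: Callable[[str], str],              # hypothetical LLM call
    run_tests: Callable[[str, str], bool],       # hypothetical sandboxed test runner
    human_approves_test: Callable[[str], bool],  # hypothetical human review step
    max_attempts: int = 8,
) -> list[dict]:
    """Gather (attempt, passed) records for one task, gated by a human-approved test."""
    test_code = generate(f"Write a unit test for this problem:\n{task}")
    if not human_approves_test(test_code):
        return []  # bad test, so no usable signal for this task

    records = []
    for _ in range(max_attempts):
        solution = generate(f"Write code that passes this test:\n{test_code}\n\nTask:\n{task}")
        passed = run_tests(solution, test_code)
        # Both passing and failing attempts are signal for the next training iteration.
        records.append({"task": task, "test": test_code, "solution": solution, "passed": passed})
        if passed:
            break
    return records
```

For code, the verifier (compiler, interpreter, tests) already exists; for lab work or other real-world tasks you would first have to build that entire harness, which is the formalization effort Igor is pointing at.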
But it still is a whole separate effort, a whole separate machinery, and the AGI part gets completely lost. [00:34:07] Jacob Haimes: Yeah. [00:34:07] Igor Krawczuk: Externalities, to cap us off. Those all stay the same. [00:34:11] Jacob Haimes: Yeah. Is that all we have to say? Because, yeah, it just doesn't affect externalities at all. Any of the analysis that we've done regarding artists and the loss of jobs that paid the bills for them, or journalists, the copyright infringement, the increase in data centers and the massive energy consumption in the creation and use of these systems, all of that doesn't change at all. [00:34:39] Igor Krawczuk: The only thing that changes is the ROI calculation for some people, because they're not the ones facing the externalities, or maybe for some municipalities. And one other thing, not a change because of an update, but just because it happened: the German collecting society that handles music copyright won a court case against, I think it was OpenAI? Yeah, it was OpenAI. Which might set a precedent in the EU about what they can and cannot do and claim as the equivalent of fair use. So [00:35:11] Jacob Haimes: Interesting. [00:35:12] Igor Krawczuk: they found that the models [00:35:13] Jacob Haimes: What's that actually gonna do? Can they actually make anything change because of that? Can they make people take down their models and things? [00:35:21] Igor Krawczuk: They basically need to show, with court-order-level certainty, that they can do unlearning, or that they have excluded the copyrighted material from the training data. [00:35:33] Jacob Haimes: Gotcha. Unlearning is super fragile though. Why'd they go with that? [00:35:38] Igor Krawczuk: And there might be damages as well, which will be negotiated later. But specifically, they found that lyrics are stored in the model, and that the model outputs those lyrics on user prompts, and that the text and data mining exception does not apply, because substantial parts of the works appear in the outputs. So basically, normally they would be able to claim that exception, they were banking on that, but that doesn't fly. And that means they basically need to show that they don't train the latest model on copyrighted texts if they want to deploy it in Germany. And if this catches on, it gives copyright holders leverage, at least in Europe, to either force a licensing deal out of OpenAI or degrade the capability of ChatGPT to interact with copyrighted material, which I think is important because it diminishes the commercial utility. [00:36:41] Jacob Haimes: Yeah, absolutely. We all know that these systems are trained on copyrighted data, and for that reason, I think it's probably not feasible that they could create similar systems if they were to actually remove the copyrighted data. So they'll go for the unlearning approach. Now, I have my issues with that, because I don't think it is actually sufficient. But it'll be interesting to see, I guess. [00:37:08] Igor Krawczuk: A person from my old lab actually has a paper on this, that gradient ascent fails to forget, which we can put in the show notes. So that is all I have on this topic. I would be curious if anyone who's listening wants to send us their take on whether this is an actual update for them, or whether they thought this was trivial,
and, if it was an update, what their personal takeaway would be from the implications that we discussed. But for me this was mainly a personal update, a bit of transparency, and a chance to discuss current events. [00:37:49] Jacob Haimes: Yeah, I think it'd be great to hear that as well. I'm just thinking about, what is the takeaway here? [00:37:56] Igor Krawczuk: The takeaway is that not all that glitters is gold, and not all that looks useful is actually useful in the way that is claimed. AGI is still bullshit, and people should keep pushing back against the storytelling and grifting that is enabled by the AGI narrative, as tech encircles the commons and tries to pass that off as building some grand future of superintelligence that's gonna save us all, when really they're just trying to devalue our labor as much as possible. [00:38:29] Jacob Haimes: And that's all the muck that we have for this week. If you like the show, please share it and give us a review. A written review is actually the best thing you can do, because the algorithms on the podcast listening platforms really like written reviews. So please leave us one, if you don't mind, and we will see you next time.