[00:00:00] Jacob Haimes: Welcome to muckrAIkers, where we dig through the latest happenings around so-called AI. In each episode, we highlight recent events, contextualize the most important ones, and try to separate muck from meaning. I'm your host, Jacob Haimes, and joining me is my co-host Igor Krawczuk. [00:00:17] Igor Krawczuk: Thank you, Jacob. This week we're talking about the mythical AI bears. [00:00:22] Jacob Haimes: Ooh, that sounds fun. There is something I think is worth mentioning first, though. We're recording this on Super Bowl Sunday, which to a majority of the world doesn't mean anything, but in the US it's a time when people get together and watch football, you know, American football. [00:00:42] Igor Krawczuk: I thought it was mainly about the ads, and the football is at this point just the additional thing everybody is willingly tuning in around. Watching the halftime ads, I thought that was the core of American culture now. [00:00:53] Jacob Haimes: No, no. Well, there's been a push for sports betting, and those companies have been running a lot of ads, which has gathered some attention. Anthropic has, I think, four ads, and I don't know which one they're going to run for the Super Bowl. Companies pay tons of money to make bombastic ads for the Super Bowl because it's a viewing event with a lot of different people tuned in. Anthropic made a bunch of ads about how it will never have ads in its AI systems, clearly poking fun at the fact that OpenAI is integrating ads into ChatGPT in the near future. [00:01:41] Jacob Haimes: Sam Altman got pissed about it. Sam Altman was not happy. [00:01:47] Igor Krawczuk: I mean, he wrote, I think, over ten paragraphs about how he isn't bothered by the ads at all and finds them quite funny.
He finds it ironic that Anthropic is so draconian and authoritarian with their terms of service, where you can't do everything with their AI systems, unlike OpenAI, which has roughly the same restrictions in its terms of service but would just allow you to put ads into users' streams and at some point make porn with it. [00:02:23] Jacob Haimes: To be fair, the argument is that the way it is represented in the commercials is not what's actually happening. And while that may be true... [00:02:36] Igor Krawczuk: I would push back against that. It's not at all untrue. It's very simple: OpenAI is going to put ads into ChatGPT. They did announce that, and that changed the incentive structure, and Sam desperately wants to spin it a different way. In German there's a saying that if you hit a dog, it will yelp. [00:03:00] Igor Krawczuk: And I think this is a dog yelping. [00:03:02] Jacob Haimes: Well, that's very German. [00:03:05] Igor Krawczuk: We did make the Grimm fairy tales. [00:03:07] Jacob Haimes: Speaking of fairy tales, back to the mythical AI bear, which is not what you want it to be, which is, you know, a friendly, maybe sort of cyborg-esque bear wearing a little shirt, maybe with some shifting colors in parts of its fur. [00:03:36] Igor Krawczuk: That would be nice, but there are no bears in this story. We're talking about the strawmen constructed by, let's say, AI boosters, and we don't mean that as an insult, just people who are very hyped about AI. They sometimes struggle to get why other people aren't as hyped as they are, and some of them, to varying degrees, construct what we call the mythical AI bear [00:04:02] Igor Krawczuk: as a strawman of AI skeptics' positions.
And we thought it was important to discuss and address these, and to complete the framing with what we think is a fairer representation. [00:04:22] Jacob Haimes: Yeah. And I guess, to give the TL;DR of our take: it is a strawman. These are strawman arguments being presented as what people who don't like AI are saying, and they really misrepresent what those people are actually saying and try to erase their actual critiques from the discourse. As a result, people have a misunderstanding of what the critiques actually are, and it serves the boosters more. Mm-hmm. [00:05:07] Igor Krawczuk: Yeah. And as our starting point, I think we should either look into how people have been framing Emily Bender and Gary Marcus and Ed Zitron and so on, or we could go into the "all my AI skeptic friends are nuts" blog post. What do you think is better here? [00:05:26] Jacob Haimes: Yeah, so I guess maybe we will just say that there have been quite a lot of articles. There is a recent one titled "All My AI Skeptic Friends Are Nuts," or something close to that; I might have gotten the name slightly wrong. It'll be in the show notes. But this has gotten a lot of traction on the Y Combinator Hacker News forum. I think it starts by providing a really strong outline, and then we can dig into more aspects. [00:06:00] Igor Krawczuk: And I think it's important to say why we're using this. This is mainly a vibe-driven show, you know, as is the zeitgeist of our times, but I would say this is a good representation of AI-positive opinions and tech opinions. These are not some fringe takes on how people respond to criticism.
[00:06:25] Igor Krawczuk: This is a reasonable summary of how at least a chunk of the optimist crowd thinks. That's where we're starting with this, kind of as a prototype or archetype to frame the discussion. [00:06:36] Jacob Haimes: Yeah. So the first one, "but you have no idea what the code is," essentially says that if you don't know what the code is, you should be reading it, right? So it sort of externalizes the consequence of not writing it yourself. It's blaming you as the person for not fully reading your code, code that everyone has told you you don't necessarily need to read in full. [00:07:05] Jacob Haimes: And, you know, incentives and pressures from higher-ups at your company are telling you to move faster and faster, so you're being forced to ship code that you understand less and less, but it's actually your fault you aren't reading that code, [00:07:28] Jacob Haimes: and so you should feel bad, right? [00:07:28] Igor Krawczuk: And let's just continue before we give our takes. The next two are "but hallucination" and "but the code is shitty, like that of a junior developer." His retort to the first is basically: there are linters, there are compilers, there is tooling; if hallucination matters, your programming language is bad. [00:07:49] Igor Krawczuk: And to "but the code is shitty," he responds that it's very cheap, and that we shouldn't kid ourselves about how good the human first cut is, the initial draft of a piece of code. [00:08:03] Jacob Haimes: And I did mention "but it's bad at Rust," because I skipped the third headline, which is "but the code is shitty, like that of a junior developer." So let's lump "but it's bad at Rust" in here too. [00:08:15] Igor Krawczuk: Yeah, where he basically says you should choose better languages. [00:08:21] Jacob Haimes: Like, "suck it, Rust" is basically what he says.
[00:08:25] Igor Krawczuk: And if we step back and go to the archetype this follows, it's basically people saying, "oh, but my workflow works, so you just need to get better at it." That's one of the common framings people give in response to criticism of AI workflows not delivering the goods: "but it works for me, [00:08:47] Igor Krawczuk: so you just need to try hard enough." [00:08:49] Igor Krawczuk: And this misses the criticism, because the criticism people actually bring is: hey, you're promising [00:09:00] Jacob Haimes: Okay. [00:09:01] Igor Krawczuk: all of these world-changing things and it's not working. And instead of acknowledging that, the retort is, "oh, it'll work soon. [00:09:12] Igor Krawczuk: You just need to change the way you work, or get good, nerds, or stop using the stuff that doesn't work." And without debating the merits, I think it's important to note that these are not actually answers to the critique that it isn't working as advertised. [00:09:28] Jacob Haimes: Going back to the first one, the thing that really bugs me here is people saying "but you have no idea what the code is." An argument you could respond with is, "oh, well, you should be reading the code." However, that makes a lot of assumptions. It assumes that you fully understand the code and have the time to do that. And when you have something that works, and your manager or whoever says, "oh, that works, let's ship it," you don't necessarily have as much ground to stand on to say, "actually, I don't know if it's secure enough; I don't know if it's robust to all of these edge cases," because they're seeing it and it works.
[00:10:10] Jacob Haimes: And so they're going to force you to ship things faster and faster and faster, until you don't have that luxury of knowing what is written, either because you just can't spend the time to understand what's actually happening within the processes that are written, or because you aren't even allowed to do that. [00:10:31] Jacob Haimes: You aren't even doing that, right? You're just skipping that portion of reading. And that also goes to the "but hallucination" issue. If there's even a slight concern that you need to go back in and manually check everything, then that is not as much value as what it is being sold as. [00:10:56] Jacob Haimes: The other thing is that I'm sure a lot of people are able to do this to a certain extent, right? They write up some vibe-coded tool or whatever, and then pass it on to their manager, or a coworker, or someone else who is lead on a specific portion of the project, and they think, "yep, I did that without having to do much work at all. [00:11:25] Jacob Haimes: That was fantastic." And then you go to the person they passed it off to, and either that person also doesn't care, or, more likely, eventually you get to someone who actually does care, and then that person's work is, you know, two or three X'd, because now they not only have to do the thing they were supposed to do themselves, they also have to go back and make sense of, and close all the gaps in, the work that was done by your vibe coding. So you've just externalized all of your work onto someone else that you work with. And I think there was a study, a little old now, like a year or so,
showing that there was a negative sentiment toward using AI for certain tasks in certain vocations, because it was externalizing the work. You're giving it to other people and saying, "well, I don't have to do it, so I guess I saved time," but that's not looking at the picture holistically. So that part irritates me a lot, because it's just a very self-centered way to think about things. [00:12:43] Igor Krawczuk: Yeah. From experience, if you're in an AI-happy, leading-edge environment, you might be lucky enough that people know you're vibe coding and there's a bit of awareness around that. But what still remains is that the value is greatly diminished if you have to review the code manually, and you can only get around it if you set up massive systems to validate the code. [00:13:12] Igor Krawczuk: That's what I meant: the points that these three or four sections are responding to are basically, "hey, this value proposition that was being touted is not there yet, and this massively changes the way I work, and therefore I don't see the value of this tool." And the response is basically "get good": lean into it, just adapt to the tool, and [00:13:41] Igor Krawczuk: manifest; you just need to try hard enough and it will work. [00:13:45] Jacob Haimes: And it also requires, at least the way I model it, someone to not do that. Because you need the manual stopgap of a human who actually cares about the output to be in the system, so that they actually go in and verify things. As soon as there is full adoption of this mentality, its validity goes away, right? [00:14:23] Igor Krawczuk: Well, the stable point, the steelman version, would be that it will just be part of your job to take care of that.
But this is also kind of a shitty deal. Cory Doctorow coined the term "reverse centaur" for this, where humans become the bitches of robots and AI systems, instead of AI systems being our tools and taking care of the boring shit. [00:14:50] Igor Krawczuk: And so instead, in the best-case scenario of at least the type of rebuttal in these three paragraphs, humans spend a lot of time reading and correcting code, back-fixing large amounts of generated code, because the whole point is that you can, you know, churn out features faster. And that is the worst part of programming. [00:15:16] Igor Krawczuk: The worst part of software engineering is having to wade through other people's code and fix it up. At least with humans and interns, they learn something and they try not to give you slop after a while. But with LLMs, you basically need to put in that effort knowing the learning will not occur. But let's go to the next batch, I would say. [00:15:40] Jacob Haimes: Sure. So the next batch is "but the craft," "but the mediocrity," and "but it'll never be AGI." [00:15:52] Igor Krawczuk: I would bundle these as trying to frame critics as just bleeding hearts who care too much, or who have some weird moralistic or philosophical sentiments and vague thoughts. If I summarize the paragraphs a bit, and I'll just read from them now: for "but the craft," he says something like, "Do you like fine Japanese woodworking? All hand tools and sashimono joinery? Me too. Do it on your own time." [00:16:21] Igor Krawczuk: For "but the mediocrity," it's, "as a late-career coder, I've come to appreciate mediocrity. You should be so lucky as to have it flowing almost effortlessly from a tap." And for "but it'll never be AGI," he just says, "I don't give a shit." And I think it's important to say that [00:16:43] Igor Krawczuk: the message of this article is trying to convince people to adopt the tools.
So the self-image, I think, is a pragmatist's stance: somebody who just wants to ship things and get things done. And I would say that part of the points here are correct readings of things, where [00:17:07] Igor Krawczuk: the tension between the love of the craft and what is good for a business is an eternal one. And there's, I would argue, a genuine argument to make that, yes, wonderful artisanal benches and tables crafted from sustainably grown trees are wonderful, but they're also super expensive, and people need benches and tables, and Ikea actually has a lot of value. [00:17:37] Igor Krawczuk: And if they could also be sustainable, that would be awesome. So the industrialization of code is something you can't just slap away with "oh, but the craft." That much is valid. However, I still think this is a dismissal, because when people say "oh, but the craft," it's also about all of the other externalities that come with it. [00:18:05] Igor Krawczuk: We already talked about the fact that, de facto, there will be a reduction of quality in an industry where there are already low quality standards, because you can't review all of the code. People also won't learn the craft anymore, because they can't make their own mistakes anymore; the LLM makes them for them. [00:18:26] Igor Krawczuk: And so it disrupts the pipeline. When people talk about "oh, but the craft," it's not just some vague bleeding-heart whining; there are concrete critiques in there about the systemic effects of AI systems. And in the section that responds to this, he writes something like, "nobody cares if the logic board traces are pleasingly routed. If anything we build endures, it won't be because the codebase was beautiful. If you're taking time carefully golfing functions down into graceful, fluent, minimal expressions,
alarm bells should ring: you're yak-shaving. Real work has real focus. You're not building, you're self-soothing." [00:19:09] Igor Krawczuk: And his point is basically that if you make beautiful code, you are not producing value. And I disagree with that, because it misses the value of maintainability, on top of everything else I said here. [00:19:29] Jacob Haimes: At least for me, in how I understand it, the "but the craft" argument is more about the learning process, right? And, to be fair, entry-level jobs have been going away for a long time, so that's not something brand new with AI. [00:19:53] Jacob Haimes: However, this is a trend that is seen in software largely as a result of AI within the recent past, right? Entry-level positions are much less common and much harder to get, because companies just aren't hiring for them as much. And so the craft is necessarily hurt, because you're not getting people up to the level where they can actually do this. And, you know, that's sort of an enshittification argument, and there are other dynamics there as well. But the argument is treated as, "sometimes we like artisanal stuff, and that's nice, but in general you don't get those things for most use cases," which just very much misses the point [00:20:54] Jacob Haimes: and misrepresents, I think, a large portion of what's being stated here. [00:20:59] Igor Krawczuk: The point being that it's about the learning pipeline of the underlying discipline being disrupted, and an erosion of quality. [00:21:11] Jacob Haimes: It's like what I think you've mentioned before, the gentrification of steel, how it's now very expensive and much more difficult to come by high-quality steel.
[00:21:25] Igor Krawczuk: Yeah, because the story there was that low-quality steel was much more scalable to produce, and so it got used in more and more applications, which then destroyed the scaling economics and unit economics of high-quality steel. And now, for the few parts where you really have to use high-quality steel, it's much more expensive. [00:21:51] Jacob Haimes: Right. And I see a perfect parallel to this. If we're not careful, and, you know, why would we be, we very easily get into a situation where, ten or fifteen years down the line, there are only a handful of people who can actually understand and maintain your code base in some area, because everyone else just uses language models to do it. So that's sort of how I see it. And I think "but the mediocrity" is just a continuation of this argument; I don't see anything new there. To me, a lot of this AI stuff is about marketing, right? It's being marketed as an everything tool. And so I critique the companies and say they shouldn't be doing this; they shouldn't be marketing a tool as an everything tool when, yes, it has specific use cases that are valuable, but it should not be used more generally, because it just won't be good for doing those things. This person specifically says at the beginning, "I'm only talking about language models and agentic coding tools for the use case of coding; I'm not talking about all of these other domains where people have critiques." And again, that's part of missing the point, because it's not being sold as "this is specifically useful as a coding tool." It is being sold as something you can replace anything with.
So when you provide legitimacy to a portion of it, you're actually boosting this false narrative. I feel like it almost intentionally reduces the scope by saying, "oh, I'm not gonna worry about all these other things" that the tool clearly is being claimed to be able to do, right? [00:24:15] Igor Krawczuk: Yep. So, do you want to go through the next five points in one go? Because then we'll have summarized both: we have "but they take our jobs," "but the plagiarism," the positive case redux, and "but I'm tired of hearing about it." Then we will have jumped through the whole article and commented on it. [00:24:39] Jacob Haimes: Yeah. So the main two headings that are left, as what I would call strawman critiques, are "but they take our jobs" and "but the plagiarism." "But they take our jobs" is answered with basically: yeah, so does everything; that's what capitalism does; that's what software does. We already did it to other people, so we shouldn't feel bad, or we should be okay with it being done to ourselves. Which really just amounts to, "yeah, but I got rich when we could do it, so it's okay," right? [00:25:21] Igor Krawczuk: Yeah, there are many ways of reading this. I'm not sure if people know Terry Pratchett, the Discworld novels, but there's a wonderful concept in there called the traitor goat, a goat that knows nothing will happen to it, so it is calm as it leads the other animals into the butchery. [00:25:40] Igor Krawczuk: The other animals don't come out, but the traitor goat will. And old-school leftists have a concept called the worker aristocracy, highly paid specialist workers who were basically living in golden cages,
which temporarily aligned their interests with the capital class against the other workers. [00:26:03] Jacob Haimes: Right. I see. I mean, yeah. [00:26:06] Igor Krawczuk: Because they will basically be stomped on less in exchange. [00:26:16] Igor Krawczuk: And to the last point: he might already have his sheep in the dry, as the German saying goes, so that might be a part of it as well. Basically telling people not to worry about their jobs being taken, because surely at some point the wealth will trickle down, is a very disingenuous and removed-from-reality argument. [00:26:38] Igor Krawczuk: And I don't think it's at all unreasonable for people to not want to feed a coding machine that will train on their code to replace them, or to point out the impact these systems have on the wider job market, which is widely reported now. At least as an excuse: I'm still skeptical about the actual replacement happening, but it is at the very least aimed at replacing people with lower-quality things, if they can get away with it. [00:27:12] Igor Krawczuk: And it's being used as a justification for hiring freezes and layoff waves and so on, and the whole value proposition is exactly that. So answering it with "so does open source," which is the leading line, is very disingenuous, because open source was freely given by people and has created a lot of value and jobs. [00:27:36] Igor Krawczuk: This is the exact opposite; this is an enclosure of the commons. And just because you might make it big, because you're part of the tech-worker capitalist class, does not mean you get to dismiss people pointing that out. [00:27:51] Igor Krawczuk: And as for "but the plagiarism," it's just [00:27:54] Igor Krawczuk: absolute, uh...
[00:27:56] Jacob Haimes: Well, to be fair, the "but the plagiarism" section starts by saying that artists who make this claim are reasonable, and so are lawyers, but not coders. Like, what? I can't even understand what the stance is here. It's the same construct that is being discussed; I don't understand why it's valid for artists when it's not valid for other professions. [00:28:39] Igor Krawczuk: It's because he is doing motivated reasoning. The reason I was struggling to express it [00:28:46] Igor Krawczuk: is that his line is something like, "because no profession has demonstrated more contempt for intellectual property, it's okay now." That's the best reading I can get from this, and this kind of standing back, this non-engagement, is very similar to how people during the NFT craze [00:29:09] Igor Krawczuk: responded to artists and copyright holders about unauthorized NFTs being sold against their will, which was basically, "eh, but I really wanted to." That's the only take I can give, both on this article and on the question generally: people just want to deny the validity of an argument with some vague hand-waving of "IP rights bad [00:29:38] Igor Krawczuk: and everything should be free," but not my stuff, of course, which you have to pay for. The last points he makes are the positive case redux, where he hypes it up a bit, and "but I'm tired of hearing about it," under which he actually rejoins the choir. [00:30:03] Igor Krawczuk: But he does so in a pretty hyped way. He says, okay, LLMs are being hyped to death and it's annoying that there's so much marketing, and then he goes, "oh, but the AI is also incredibly important," and he compares it to smartphones in 2008.
And he calls critiques like "stochastic parrot," and vibe coding being called problematic, a kind of cool-kid contrarianism that will not survive much more contact with reality, and then, quote, "when we get over this affectation, we're going to make coding agents perform more effectively than we do today," giving a backhanded compliment to the people who make those points. [00:30:58] Igor Krawczuk: But I don't think that's genuine; people don't use the stochastic parrot argument as just a catchphrase to dismiss things. [00:31:10] Jacob Haimes: Well, I think he mentions stochastic parrots in the last paragraph, right? And I think that's, so the other thing we wanted to talk about was critiques of Emily Bender, basically, and also Timnit Gebru, and generally people saying, "oh, you know, stochastic parrots doesn't hold because blah, blah, blah." [00:31:35] Jacob Haimes: And then they try to say, "oh, well, we're all stochastic parrots," and all that sort of stuff. And I think that misses entirely the point of what "stochastic parrots" is actually referring to. And that is: what we put in is what we get out. What we're putting [00:31:51] Igor Krawczuk: Yep. [00:31:52] Jacob Haimes: in is the world, essentially, as it is, and that's full of bias, right? The text that we're putting in is full of bias, not just in what it says, but also in how and by whom it was written. And so if you're saying the stochastic parrots thing isn't a valid critique because actually they can do these other things now, [00:32:18] Jacob Haimes: I feel like that's a big mischaracterization of the argument itself.
[00:32:25] Igor Krawczuk: Yeah. I went a bit rambling, and I will try to restate it concisely. You often see the phrase "stochastic parrots" being thrown about as a "haha, see, not such a parrot now." And you see critics making the argument that we are actually still dealing with stochastic parrots, that these LLMs are not exhibiting actual cognition and reasoning but are still just regurgitating what is being put into them, [00:32:57] Igor Krawczuk: and being dismissed with "but look at what it does," or with some fancy-schmancy semantic argument. People say, "oh, but, you know, I make mistakes like that, therefore it is thinking like me," or, "if you claim this is not a real human, show me what a real human is; I will make an LLM do the same thing." And that whole style of argument is basically [00:33:23] Igor Krawczuk: the attempt to reduce a valid point via mockery, without engaging with the actual point behind it, which is what you just said: we put in everything, and a lot of effort, and then the information retrieval system we built can retrieve all of the skills that were put in, all of the patterns that were put in. [00:33:47] Igor Krawczuk: So it is exactly a stochastic parrot, even if it is very impressive. [00:33:50] Jacob Haimes: Yeah. And this also goes to what you were saying: it's trying to reduce the argument by making a mockery of it, and representing it as a strawman of what it actually is. And this is very clear. I think Emily Bender recently wrote an article directly addressing someone else's writing; its point is that resistance isn't denialism. Basically, the first article, written by someone else, essentially says, "oh, now all the AI skeptics are just in denial that this technology is gonna change everything, that it's really good."
And, you know, we should sort of see them as sympathetic; they're just in grief, right? And this is very intentionally trying to belittle the opposition. What Bender says is: resistance isn't denialism. We're not saying that nothing will stay changed because of these systems; we're saying we don't think it's right. And that's a very different argument. It's really convenient to frame it as denialism, and doing so just misses the point. And, you know, I think Gary Marcus has one of these too; he responded to some other guy back in, I don't know, 2024 or something, where a similar thing happened. [00:35:43] Jacob Haimes: And regardless of what you and I, Igor, think about Gary Marcus's claims in general, it's just another example of skeptics being painted in a light that is not actually representative of what they think. And then there was even another one [00:36:02] Igor Krawczuk: If I can quickly, before we go into that one, on what you said about both Bender and Marcus: this happened with the NFT craze as well, and the crypto craze. One of the points that people made was, "oh, but you haven't tried it, so you don't know. Your opinion only counts if you have tried it enough [00:36:20] Igor Krawczuk: that you agree with me." That was the construction being used, and it's a rhetorical technique that tries to capture the framing: these things are so novel that you need to deeply engage with them, and then it is very obvious that they're useful.
You will obviously agree with me, and the fact that you, that you disagree with me is a sign that your opinion shouldn't be taken seriously, because you obviously haven't done that work yet. [00:36:44] Igor Krawczuk: That is, like, the technique that Bender calls out there. And the stochastic parrot one is kind of like, uh, a, like a twist on that, where people just act as if somebody who is actually more familiar with the technique than them is missing the, the forest for the trees. Like, an expert tells you, "this isn't what you think it is," [00:37:08] Igor Krawczuk: and on top of gaslighting the other expert and saying, "oh, but they say it is," you also say, "nah, because here," and you just reverse who is missing, uh, missing the point, and then pile on, uh, like, uh, the mockery. And I think it's important to track these, like, rhetorical techniques that are not about the content of what is being said. [00:37:31] Igor Krawczuk: Like, nobody engages with the actual argument, to try to, kind of, like, give their own line on when they would accept the claim of a stochastic parrot, or engage with the points that, um, uh, Emily, like, that she raised. You know, all of the points about, um, labor, um, uh, laws being violated, or, um, questionable labor practices in, like, labeling factories, all, all of the ghost worker harm, all of these things are not being engaged with. [00:38:08] Igor Krawczuk: Instead people go, "oh, but you're denying the good that could come from, uh, from this, you're just a Luddite," because it sidesteps a very uncomfortable discussion that they don't want us to have, because it will slow down the rollout. [00:38:20] Jacob Haimes: And, and so, I mean, the mental health issues, you know, um, are, are an example of an issue that people are having a hard time doing that with, because it's so salient.
Uh, and because it impacts, uh, children, which is, uh, you know, historically a very good way to get, uh, action to happen on a political level: to say, "but think about the children." But really it's, uh, it's shameful that this sidestepping has been allowed for as long as it has been. Uh, and, yeah, I, I think, going back to the idea that, um, what's being presented, um, is not what people are actually thinking, or, or what the arguments against, uh, AI are. Um, there was a, a recent article, um, that was a critique of The AI Con, which was, uh, Bender and Alex Hanna's, uh, more recent, uh, book; it came out in 2025. And there was one line that I thought was, like, particularly funny. So, um, it mentions that the authors, like, discuss AI being used to produce slop articles and fill in for, you know, book writing, and that this seems odd because it contradicts their core thesis, which is that the systems are, are not good. [00:39:55] Jacob Haimes: Right. Um, and specifically the paragraph ends with, "if AI produces useless slop, then how is it replacing writers?" And to me this just, like, really misses the point. Um, the fact that, you know, slop can be used to replace, um, you know, blog post writers, or, or the copywriting on websites in certain cases, um, to me is, is not an indication that there is value in what the LLMs are, are outputting, but that, um, the work wasn't valued to begin with, which is something that I've said, uh, before. And if it wasn't valued to begin with, then, you know, a company is going to do whatever they can to reduce costs, and using a language model is less costly than using a contractor. So, um, yeah, I don't know. It just, it feels very disingenuous, uh, to present these arguments in this way, to me. [00:41:03] Igor Krawczuk: Yeah.
And I think, uh, if we summarize it, like, the mythical AI bear is gonna be characterized as, like, a bleeding-heart, like, woke academic, uh, who cares about some semantic difference about a, uh, special human quality, or spark, in the output, uh, who's in denial of the productivity gains. [00:41:32] Igor Krawczuk: And, uh, Zvi likes to use this, uh, phrase, "mundane utility," and the bear just doesn't see it. There was another blog post, um, uh, called "Don't Fall for the Anti-AI Hype," basically trying to, like, caution people not to fall for the idea that these tools can be ignored, uh, and framing them as inevitable. And all of these things, these, like, base motives, mainly out of ignorance and unfamiliarity, and just not having engaged with it, or being emotional, leading to something like an AI denialism, or unreasonable AI skepticism, that also kind of invents the idea of an AI bubble, which is completely unfounded because it's very obvious that there's, uh, like, uh, value to, to be had. [00:42:26] Igor Krawczuk: Yeah, and this is, like, the mythical AI bear. [00:42:30] Jacob Haimes: And, you know, with that, that's all the muck that we have for today. If you thought this was interesting, it would be great to, to share it. [00:42:37] Jacob Haimes: Um, we really appreciate any and all feedback that we get, so leave a, leave a review on whatever podcast platform you listen on. They really help with, uh, the algorithm, you know, even, even if it's just, you know, a comment that says, like, "this good," um, or, or a, a review that says something like that. Uh, it actually, it actually helps a lot. [00:43:01] Jacob Haimes: So, um, yeah. Thanks so much, and we'll see you next time.