[00:00:00] Jacob Haimes: This episode was recorded on April 7th, 2026. Welcome to muckrAIkers, where we dig through the latest happenings around so-called AI. In each episode, we highlight recent events, contextualize the most important ones, and try to separate muck from meaning. I'm your host, Jacob Haimes, and joining me is my co-host, Igor Krawczuk.

Igor: Thanks, Jacob. This week we talk about the mythical AI booster. However, before we go into that, to talk about what the world's two most prominent AI booster companies have been up to: it's been a pretty bad week for OpenAI. There's been an exposé on its CEO, dips in secondary-market interest in the company, everyone switching to Anthropic, and the CFO publicly disagreeing with the CEO on the IPO timing. [00:01:00] As for Anthropic, their whole crown jewel, Claude Code, got leaked, and everybody's making fun of it. So we'll put some good old snarky analysis into the show notes for you guys to enjoy.

Jacob Haimes: Yeah, and does anyone have, as far as you've seen, a good hypothesis as to how that happened? Is it just that they're running Claude Code and not paying attention to the outputs? Like, what is going on there?

Igor: We'll get into how earnest that is. I think we talked about this in our models and inferring intent episode: Anthropic is drinking the Kool-Aid. They be vibe coding, and they vibe code the vibe code. I just finished a thing that had me realize that at least my Claude Code sends a refresh request for [00:02:00] OAuth authentication every 300 milliseconds, and I'm still not sure if that is because I fucked up a configuration or it's just the vibe coding. But after seeing the code dump, I think it's the second one.

Jacob Haimes: Yeah, there can't be any consequences to that, right?

Igor: I'm sure. I'm sure.
Jacob Haimes: Well, only the one that we saw just this past week, which open sourced an industry lead. Right?

Igor: You know who also keeps saying that things are fine?

Jacob Haimes: Who are you referring to this time?

Igor: AI boosters.

Jacob Haimes: Yes. Okay, I was still on the fun stuff. Today we're gonna be doing something slightly different than normal, which is a little bit more, I guess, opinion forward, based on our sort of taxonomy [00:03:00] of the types of boosters. Like, you know...

Igor: I will...

Jacob Haimes: Fantastic AI Boosters and Where to Find Them. And that's the thing to know here: Igor's only presenting facts. I'm only presenting, I don't know, ill-informed questions or whatever. But the core is going to be, obviously, grounded in our experiences. And we'll just sort of walk through it, because we have found it particularly helpful to be able to distinguish between these different types of boosters.

Igor: I think it's maybe worth calling back to our mythical AI bear episodes.

Jacob Haimes: Yeah. So with the mythical AI bear, the idea is: this doesn't actually exist. [00:04:00] With the mythical AI boosters, there are some real AI boosters, but the vast majority of them are much lower down the booster tier list. The ones that you think of when you hear "booster", the people who are really aggressively pursuing AI and saying, oh, you gotta try it, it's gonna change everything tomorrow, are not common. It is a very loud, very small group of individuals. So yeah, I guess the important part is dealing with those people. Not all people who are pro AI, or who think AI has positive impacts, and may not even necessarily be pro AI, [00:05:00] are AI boosters.
And I think that is missed oftentimes. So it's worth calling out, and that's sort of what we're doing.

Igor: It's missed in our bubble. And it makes a difference in the sense that, as AI skeptics, AI critics, AI haters, enthusiasts of AI who actually wish we could talk more rationally about it, however you want to call us: in our bubble there is sometimes the temptation to treat everyone as one homogenized mass. You're either against AI or pro AI. Depending on what you want to achieve, that's not helpful, because if you want to, for example, diminish the impact of AI on the environment, then [00:06:00] talking to the first specimen on our tier list, the genuine enthusiast, the same way you talk to the ones that come later will basically leave potential for convincing that person on the table. That's why it's worth distinguishing, and this might be very obvious for some people, but I've learned that spelling things out explicitly can be helpful.

Jacob Haimes: Yeah, I mean, even for us as well, right? It's useful to be able to point to this and say, oh, you wanna know my opinion about X, Y, Z? Well, go check out this podcast episode. It's gonna be more direct and clean than what I could reiterate to you off the top of my head right now.

Igor: The most important question before we start is, of course: if this is a tier list, do we start at S tier and go down, or do we start at F tier and go up?

Jacob Haimes: Okay. Well, I feel like you definitely start at F tier, which then makes [00:07:00] me think that we need to start the opposite way that we initially planned. But maybe just because I don't like the extremely bought-in side of the tier list; the further bought in you go, the worse you get in my eyes. But, you know, I think you could frame it either way here. So...
Igor: Let's say we start at A tier, with S tier being, like, the pure, rational thinker and moral...

Jacob Haimes: S tier is us. Let's be real. You know...

Igor: No, no...

Jacob Haimes: S tier is me. You know, I'm the best.

Igor: Tsk, Jacob. While I am dwelling somewhere with the first specimen, the genuine enthusiast, hopefully in A tier. A genuine enthusiast would be, say, a [00:08:00] non-technical user, somebody who is just using AI and happy about it. Maybe somebody who always thought, oh, computers aren't for me, and now they have Claude Code Web for 20 bucks a month, for now. And they can do stuff with it, and it can chain together the five APIs that they need to use for work and do the data entry stuff that they always had to do, and they can just do whatever actually useful job they're doing, like being a plumber. You know, unlike us, who just sit in front of computers and type stuff or talk into microphones, some people actually have shit to do, and for them AI is pretty great. Or, like, blind people who can suddenly see.

Jacob Haimes: Well, "see", asterisk. You know, have an easier time getting around, at least.

Igor: Easier.

Jacob Haimes: Yes, yes, I just mean it's not... [00:09:00] Anyways, this group of people is doing this engagement in good faith, which I think is an important thing to say. And all the downsides are indirect, right? All of the major downsides of use in this way are relatively minimal, not extensive by any means. They are, you know, the fact that it's normalizing plagiarism and sort of implicitly allowing these companies to get away with that. But literally everyone is doing that. And it's similar to pollution, for example: it's great to try to reduce your own pollution where you can.
The problem is not the [00:10:00] consumer's direct consumption; it's the consumption of the corporations that are doing that consuming on behalf of the consumer. So, yeah.

Igor: And a lot of people at this tier will be down with regulation to price in externalities, in theory. Then, depending on where you are in the gradient towards the next tier, they might balk at the price, or they might push back against the legislation if it takes away the toys. Another thing that is worth pointing out as a negative effect for this type of user is that they're implicitly engaging in devaluing their own craft by giving the AI companies training data. And there's a real effect of [00:11:00] de-skilling. There are studies suggesting that AI use will make you less sharp and will stop you learning if you don't treat it as a very carefully managed tool and carve out explicit practice time.

Jacob Haimes: And then just complacency risk, the potential to not realize that something bad is happening when it is. That sort of thing. So this is the A tier of the booster list. But moving down to the next tier: this is the denial-prone booster. A key difference between the genuine enthusiast and the denial-prone booster is that when you push the genuine enthusiast on their claims, you say, oh, but it's normalizing plagiarism, [00:12:00] oh, but there are environmental concerns, oh, but it is bad for the data workers and is exploiting already marginalized groups, all these things, the genuine enthusiast is likely to agree with you and consider that meaningfully. While the denial-prone booster is more likely to deny, push back against, or deflect these sorts of arguments, and say, oh, actually it isn't that bad, because of X, Y, Z.
Igor: And so the main distinction is actually just: do you take responsibility, do you engage with it at all? And as I said, it's a gradient. Maybe some people will be happy, in theory, to acknowledge the externalities, but then they don't want to pay for them anyway. And some people just [00:13:00] think of themselves as very, very moral people, and they need to deal with the cognitive dissonance of exploiting ghost workers or contributing to environmental destruction by just denying it. But the main difference here is not one of immediate consequences; it's one of the relationship to the actual thing that is happening.

Jacob Haimes: What I just heard there was you said "meaty consequences"...

Igor: Immediate consequences.

Jacob Haimes: ...which I understand is not what you said. But I do think there's a strong parallel here with people who are not vegan or not vegetarian and say, oh, well, factory farming isn't that bad. Like, factory farming is that bad.

Igor: And it is one thing to [00:14:00] say: I really like cheese, I struggle with giving up cheese, I know it's bad, but I can't give it up, for whatever reason. It's another thing to say: no, it's not bad, it's actually fine.

Jacob Haimes: So just sort of acknowledging that, I think, is... And even, to some extent, being antagonistic towards the people who do acknowledge it, in some cases. And maybe that's sliding along the...

Igor: Ladder.

Jacob Haimes: Yeah, sliding down the ladder. As you approach criticizing others for their habits or beliefs that are not the same as your AI-boosting ones, you slide towards the next category, which is the financially adjacent booster. So this is, like...
[00:15:00] stockholders, or, I mean, even AI safety researchers who depend on the fact that AI is hot, at least to some extent, who are made more relevant because of the place that AI currently has in society.

Igor: Which, you could argue, includes us, except...

Jacob Haimes: Remember, I'm S tier...

Igor: Yeah...

Jacob Haimes: Remember, I'm S tier, so I can't be here. Don't forget about that.

Igor: If Jacob wasn't, then you could argue this includes us, except that we are completely failing at the grift and are not tapping into the potential of AI at all.

Jacob Haimes: Yeah. Also, we are speaking out against it actively. Right?

Igor: I mean, there's actually a [00:16:00] good point here. So is MIRI, so are all of the AI safety institutes who keep saying, oh, this is so dangerous, this will destroy the planet, whose whole relevance also depends on AI being, and we'll get into this in depth later, pretty hyped. If the only reason why people are paying attention to you is because you're somehow related to AI, and you don't have a fallback... Like, I think you're a mechanical engineer originally; I'm an electrical engineer. If AI goes away, I'm gonna do, like, computer shit. It's gonna be a nice-to-have minority playground for me and my minority friends again. But some people, when AI goes away, they have nothing, because they're, like, prompt fondlers.

Jacob Haimes: That's a great name.

Igor: At some point the derogatory name was [00:17:00] "people with a degree in gradient descent studies", back when you were still training models; now we're just prompting them. And I think it's important to also say that the financially adjacent booster and the denial-prone booster are not necessarily the same.
There will be an overlap, but some people are very much just saying out loud that they don't care about externalities, and just pointing at, like, oh, this will make us all so rich, you should get on board. Which is a slightly different thing.

Jacob Haimes: Do you have a concrete example of that that you're thinking of?

Igor: Some of the more edgy e/acc Twitter. I don't have a concrete person, but I would bet you 20 bucks that if I trawl Twitter, I will find somebody who goes, like, oh, boo-hoo, cry me a river, with your...

Jacob Haimes: But, like, [00:18:00] they're literally trolling. I don't think that you can assume good faith for those individuals. So, like...

Igor: That's also true. But then, maybe, to keep things focused: the distinction here is that they have a financial interest in AI staying relevant. And this can also be if you're a consultant selling the services, or you're just using it and you want it to still be legitimate as a user. So there was a thing a while ago of people complaining about the stigma of AI usage: that we should de-stigmatize AI usage, and we shouldn't shame people for using AI for coding and contributing their slop to open source. And of course we should. We should all be ashamed of doing sloppy things. You can do things because they're economically useful and still be ashamed of them. It's not art that we're doing here.

Jacob Haimes: Yeah.

Igor: We are doing art, because there's this S-tier [00:19:00] podcast, but, like...

Jacob Haimes: Thank you. I agree. And as you slide further down the ladder: at least to me, the difference between this layer and the next layer, the corporate booster, is that the financially adjacent booster is not directly gaining from promotion or use.
It is a secondary effect, right? If AI were to become less relevant, then that would have a negative consequence on their job prospects, basically. However, the corporate boosters, these are people like CEOs that are [00:20:00] using AI as a layoff cudgel and saying, oh, well, we can use AI, and...

Igor: Actually, I would say the difference is power, because, like, what I said is: if AI becomes irrelevant, they all have to basically pivot into completely new jobs. Like, I don't...

Jacob Haimes: Yeah. So it's the ability to impose your AI use onto other people.

Igor: Using AI as a tool for power as well, like, both of these things. The previous tiers are all recipients of AI in one way or another. At the corporate booster tier, you're talking about people who could choose otherwise. Like, okay, we could use an ethically sourced AI model; we choose not to, despite having the resources, despite having enough power to actually make a difference. Here [00:21:00] you're leaving the "consumer choice at the individual level doesn't move the needle" tier, and you're talking about, okay, even small companies can sustain whole dev teams if they fully commit to a given product.

Jacob Haimes: Yeah, I think that is definitely true to some extent, especially at the larger end. In a small company scenario, I do think that it's still a little bit murky. But then when you, excuse me, when you approach actual corporations, then it becomes... I'm trying to think of a way to synthesize it and summarize it. Maybe it's: taking advantage
of AI at the direct expense of others, [00:22:00] and/or trying to... Well, I guess, yeah, that would also capture trying to get other people to use it even if they don't want to. So...

Igor: There's old lefty theory that can also summarize this well, where the difference between, say, a capitalist and the petite bourgeoisie, or the working class, is that even the petite bourgeoisie, when they stop working, don't have enough capital to cruise by just on the returns on their capital. So your relationship to work and your relationship to the means of production is a big thing in old-school Marxist analysis. And the more you are, if the AI vision comes to pass, fully in the clear, because you own it and you will just be the one who profits off all of the plagiarizing [00:23:00] and devaluation of craft, the more you fall into what I would call the pure corporate booster. Whereas Epoch, think tank people, researchers, all of these people, there's still a labor component to their relationship with AI. They do work, they do stuff, and they get money for that; it's not that they own stuff and get money for that. And I think the distinction here might be that you're still not actively building it, necessarily. In the next tier we have the product or platform booster, who is shilling AI stuff, who is maybe trying to become a corporate booster. So these do coexist, and it's not a strict tier list at this point. But if you are actively building something that normalizes AI systems, that tries to argue that this is totally fine: you can just Lovable-prompt your [00:24:00] product, and you don't need any security knowledge, and you don't need any database engineer, the AI will take care of it, look at our interface, it's all fine...
You are actively contributing to the devaluation, despite maybe also having more choices, and maybe a dual role as a corporate booster within your own company. But you can also be a small dev who is just really promoting shit, contributing to the tragedy of the commons that way.

Jacob Haimes: So what do you think the hallmarks of this category of booster are?

Igor: I think at this point you are really, really, really tightly coupled to sentiment shifts.

Jacob Haimes: Okay. [00:25:00]

Igor: You know, with the corporate booster, it's all about the actual return on investment. At a certain point, there's hype and they get a return on investment from that, but at some point they look at the accounting book, and if it ain't working, they're not gonna use it. It's a very pragmatic thing. But here it becomes: if people don't like AI, your product gets useless. You are now actively coupled to the fate of AI. And this is beyond the financially adjacent booster, where it was, like, if AI gets more relevant, you get more jobs, or you become more relevant. Here there's a direct connection to your economic survival and your own ability to exert power. That's the transition that happens at this level.

Jacob Haimes: And I mean, that [00:26:00] doesn't exclude a corporate booster either, right?

Igor: Oh no, they can play the dual role, right?

Jacob Haimes: So before we get to the last one, I'd be curious if you think, like, Jensen Huang, I believe, the CEO of Nvidia, would you put him as doing double duty here, or would you put him in the next category up?
Igor: Is that all, top... bottom...

Jacob Haimes: Three of the final ones. Yeah.

Igor: Yeah.

Jacob Haimes: Okay.

Igor: Jensen...

Jacob Haimes: That last one is the model builders, right? So this is Anthropic and OpenAI. Well, specifically, like, Dario Amodei, Sam Altman. Although I guess Dario is sort of his own category, 'cause he also acknowledges [00:27:00] things in a different way, but still not good. And yeah, Sam Altman. These are the kinds of people...

Igor: Is Google part of this?

Jacob Haimes: Well, I guess I'm talking more about individuals than entities, because... I don't know, I feel like we can't treat the corporations... The corporations are gonna do what corporations are gonna do, which is maximize profits. And so if we treat them as having morals to appeal to, or even put them along the same spectrum as people that we're saying have morals to appeal to, that is a mischaracterization to a certain extent. So...

Igor: I would agree, to an extent. I think there is [00:28:00] something to the idea that you can apply this framing also to companies, or entities, to a certain extent, where you just need to swap out the enthusiast and the denial-prone booster for, you know, genuine NGOs excited about being able to do their work for less. So, like, more human entities will be able to do that, whereas I don't think you can be a genuine enthusiast if you have a market cap of a billion dollars or more, or you're more than 20 people, because you're just too much of a system at that point. But yeah, I would also agree that, like, Jeff Dean, for example, I don't think...

Jacob Haimes: I don't know who that is.

Igor: ...fits the model builder category.
He's, like, one of the lead engineers, a very senior AI guy at Google, not DeepMind, but, [00:29:00] amongst other things, he led the chip effort at Google, making their own hardware.

Jacob Haimes: Gotcha.

Igor: And I'm not sure you can put Demis, uh...

Jacob Haimes: Stop this.

Igor: ...the name, yes, the DeepMind guy, into this category per se. But what would you say is the distinguishing mark on this level?

Jacob Haimes: I mean, I think it's: this is why they are where they are, and without it they wouldn't be something.

Igor: Like, at the billion-dollar scale, right? In German there's an expression that translates as "the height from which you can fall". Like, the AI bubble pops, the sentiment teeters over, and people just think of AI as stochastic parrots, let's say, or...

Jacob Haimes: Well, I mean... [00:30:00]

Igor: ...probabilistic heuristics.

Jacob Haimes: I don't know, 'cause there's still so much power in that anyways, right? So saying that, oh, people only think of this as a stochastic parrot is, like, okay...

Igor: But does it have enough power to sustain an $800 billion valuation, pre-profit?

Jacob Haimes: That I don't know. But yeah, I guess, to me, the important part is that their continued relevance at the level that they are currently at is strictly tied to how well AI does, right? That's sort of how I think about it.

Igor: Which is why we both exclude Google a little bit, and we both exclude Nvidia a little bit, even though Nvidia is probably tied in more than Google. Like, Google will just keep showing you ads without AI. They're fine.

Jacob Haimes: Yeah, I'd put them more in the [00:31:00] product and platform boosters and corporate boosters categories than the model builders category, I think.
Igor: Yeah.

Jacob Haimes: But yeah, so that's sort of the tier list. But there is...

Igor: A special case.

Jacob Haimes: Yeah, a branch of the tier list, a shiny, which Igor mentioned briefly earlier: this criti-hype thing. Which is something that has been brought up for a long time: that the way people criticize these systems amplifies the systems in public discourse as well. You say, we can't let this company do this thing, it's so powerful, we can't let them make that decision, their technology is too good. [00:32:00] That is, in a way, really boosting public perception of what the technology can do.

Igor: In the words that you used just now, a case study: literally boosting the user base of Anthropic, who got, I think we talked about it in the last episode already, or one of the last episodes, millions in free advertising by David Sacks literally tweeting that the technology was so powerful and so necessary that it was critical for the US government to have an unconstrained effort, and that Anthropic was too moral and was refusing to make their models available.

Jacob Haimes: Which, of course, is BS. But...

Igor: But it's the opposite of damning with faint praise.

Jacob Haimes: Yeah. And I think one of the other things [00:33:00] that is salient to me, or at least a parallel, a similar case that is kind of interesting: at least where I grew up, the super strict parents are the ones whose children are the craziest. There's almost a forcing function where, when the family dynamics are very constraining, there's this desire to rebel. And I see a parallel there where, if people are saying, oh, it's so dangerous, you can't use it,
Um, and they're just putting it out in front, um, that could have that effect in some ways. Um. Igor: I mean, Jacob Haimes: it's not to say that that's true for everyone, but I, I do think that Igor: I [00:34:00] think it is true for everyone to a, to a degree. Like, like, uh, one, there's a thing called like a me exposure effect. Like if, if I just tell you thing is bad and you, you didn't know thing existed before when I've. than likely like increase your chance that you're gonna like, uh, interact with ADV event decreased. Jacob Haimes: I, I do get what you're saying, but also like. Having grown, my parents were, I, I wouldn't say strict necessarily, but were, uh, like shared, you know, about like, you know, this is dangerous because of X, Y, Z, um, and. I did, I did not do a bunch of like, uh, typical like crazy teen kind of stuff, uh, when I was growing up. Igor: That's, that's just 'cause you're busy, uh, to become STM like, uh, Jacob Haimes: That's true. Uh, but I, I guess what I'm saying is like, it's not, it doesn't [00:35:00] necessitate, pushback, but any anyways. Igor: the, the multi causal thing again, right? Like, um, like I, I think, uh. People underestimate how much this plays into, uh, effect and, and I will bet money, but like the only reason why Tropic is talking about mid miers right now the corporate level, where I think the individuals involved, they all have like good faith things. But the reason why tropic at the corporate level is basically like teasing us with a preview on the latest AI model where they're basically saying, this is so good, we're not gonna make it available to you guys. Is the same playbook that OP AI did. Uh. When they started exactly this and it works. PE people are talking about it. People are like chomping at a bit to, uh, to get access to it, to already. Jacob Haimes: Yeah, I mean they, they literally announced it like a couple hours ago and you're talking about it right now. So, Igor: yes, it works. It works. 
Jacob Haimes: See, I'm out here [00:36:00] not knowing that happened until you brought it up, and that's why I am S tier. So, getting back to this mythical framing: the reason that we wanted to bring up the mythical AI booster, in line with the previous episode on the mythical AI bear, is because these top, or I guess final, three tiers of booster are what people think of when they're thinking about an AI booster. Someone who's all in, really dedicated, and bought into the idea that this is going to be not just transformational, but imminently transformational, in a way that no one could possibly predict. [00:37:00]

Igor: I would include the loud and annoying ones of the first three tiers as well, the proletariat boosters.

Jacob Haimes: Okay, but then...

Igor: The tech bros, the Twitter bros, the people who keep going on about it.

Jacob Haimes: Yeah, I guess, if we're including portions of each of the people on our tier list, then it's like, why do we have the tier list, right?

Igor: It's a framing device. This is where we break the fourth wall and admit that this is a podcast, and we're trying to do educational things here.

Jacob Haimes: And here I was thinking that we were serious about the tier list the whole time, because I'm S tier.

Igor: Well, you have a vested interest in materialism. I think a distinction between the mythical booster and the mythical bear is that, you know, bears actually try to avoid you, and they're kind of hard to catch and hard to find. [00:38:00] Whereas the thing with the booster is that the true booster is very rare; the vast minority of people are the annoying, loud ones, the people that people think of as boosters. But do you know why people think of them so much?
Jacob Haimes: Because they're loud.

Igor: They're loud as fuck, and they will make sure that you will hear of them. So the bear is the gentle creature that tries to hide in the forest and can't be seen. The mythical aspect of the booster is more like the mythical Pokémon that is very flashy and showy and has a shiny hologram card. And I think it still carries. It's weird, but that still means that with most people you encounter, you should presume good faith, despite what Jacob said. If somebody [00:39:00] seems like a denial-prone booster, then, as much as you can, having patience and trying to find ways of onboarding them onto actually acknowledging the externalities works out much more often than you would think, if you're able to tolerate it. And the main point, I would say, is: depending on what your goals are and who you're talking to, change your tactics.

Jacob Haimes: Okay, so how does that actually apply, concretely, in this taxonomy?

Igor: For the enthusiast: if they're genuine, genuinely talk about stuff that you can back up as being bad, and they will probably connect with it. One thing that is insidious is that there's propaganda around the water usage right now, where there might have been a miscalculation in, [00:40:00] I think, Karen Hao's book, and people are using that to basically deflect the whole topic of the water usage of data centers, despite it being a thing, even if it might not be as bad as it was initially reported. But if you keep it grounded and you keep it genuine, the genuine users will engage. The denial-prone booster? Probably not. You need to first lead them out of whatever makes them be in denial.
Maybe they're scared. Maybe they're, like, a senior dev who's trying to do a career switch, and ageism is a thing in software, and they're worried about their job before retiring, but they still have a kid to feed. So you can't be as genuine with them; you need to understand why they are denial-prone. Same thing for a financially adjacent [00:41:00] booster: they're more on the human side, so understanding why they're coupled to that, and maybe trying to find ways out of the financial trap, or talking to them about how they are becoming serfs to a certain degree, or losing their editorial freedom, or whatever, can work. For the corporate booster, maybe you need to go hard on the business talk yourself and talk about alternative funding mechanisms that are actually much more likely to go over well in your market. For example, in Germany we have a thing that could be a viable alternative model for funding or licensing copyrighted materials, where every USB stick that is sold in Germany has a small, [00:42:00] not a tax, but a fee on it that is collected by the various copyright associations that you can sign up with. Then, every time one of your copyrighted works gets used, you get a small piece of a pot made up of all of those USB sticks and the printers and the copiers and drives that are sold. There are millions of dollars every year that are redistributed through the market like this. And we could do this for AI, and maybe there's a way for you, who's hearing this right now, to raise venture capital to make it happen. You can get the biggest piece of that pie if you switch sides and become more critical of the plagiarism, because you just need to get enough political buy-in, and then you can become filthy rich by being a good person. So that's how you talk to those guys.
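[Editor's note: a back-of-the-envelope sketch of the levy mechanism Igor describes, with entirely made-up numbers and creator names (the real German system, administered by collecting societies, is far more involved): a small per-device fee is pooled, and rights holders are then paid out in proportion to recorded usage of their works.]

```python
# Hypothetical sketch of a levy-style redistribution pool, loosely modeled
# on the German private-copying levy described above. All figures, names,
# and usage counts are invented for illustration.

def distribute_pool(pool: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split the pooled levy money pro rata by recorded usage of each work."""
    total_uses = sum(usage_counts.values())
    if total_uses == 0:
        return {creator: 0.0 for creator in usage_counts}
    return {
        creator: pool * count / total_uses
        for creator, count in usage_counts.items()
    }

# Say 1,000,000 USB sticks are sold with a 0.50 fee each: a 500,000 pot.
pool = 1_000_000 * 0.50
usage = {"alice": 600, "bob": 300, "carol": 100}  # hypothetical usage reports
payouts = distribute_pool(pool, usage)
print(payouts)  # alice gets 60% of the pot, bob 30%, carol 10%
```

The same pro-rata split would apply however the pot is funded, whether by device fees or, as suggested here, by a levy on AI model providers.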
And then for a platform booster, [00:43:00] it's getting difficult. Maybe make them aware that product-market fit might mean that people will hate your product if it's tied too closely to AI, so you might want to hedge your bets. I think a positive example there is an editor called Zed, which is both all in on AI but even more all in on user choice. They genuinely have a single toggle that makes it super easy to turn off all of the AI stuff, and also, like, gradually opt

Jacob Haimes: Yeah, that's what I use. It's great.

Igor: And it's a great editor. It's super snappy, it's really good for typing stuff by hand, and then you can write code with it if you want. It's amazing. And for the model builders, I don't know, what's your idea of talking to model builders? How would you, Jacob Haimes, convince Dario Amodei to stop?

Jacob Haimes: Well, I don't know if I could, um. [00:44:00] Yeah, but just with regards to this tier list, not with regards to being able to convince people of things: I don't know if there is a way to convince these people in a conversation, right? Most of the people in the last couple of tiers are not gonna be convinced in a single conversation. So if you really want to convince them, it's unlikely, one might argue impossible, that that's gonna happen unless it's a much longer effort, which basically no one has the access to make.

Igor: And

Jacob Haimes: Yeah.

Igor: The last thing to say is that, like, at that level, and maybe also at, um,
[00:45:00] lower levels, where if you're talking to, like, a think tank person, or anyone whose material welfare or power or position is too closely tied to boosting AI, you might want to be more adversarial and actually try to debunk them, or destroy their credibility, or even just make fun of them. I'm pretty sure that making OpenAI look creepy was one of the best things that Anthropic did for overall AI safety, by reminding people of how dangerous it can be in this creepy way. And I would also give them credit for genuinely believing in that stuff; I think there's observed evidence of preferences towards this. Yeah, this is how I answer your [00:46:00] question of how, concretely, to use the tier list to guide decision-making.

Jacob Haimes: And like we said at the beginning, there wasn't much muck in that, because it was all our opinions, which, as we know, are S tier. If you enjoyed this episode, if you found it informative or helpful, or funny, or you disagree with us, please give us a share on whatever platform; it really helps get the show out to more people, and we really appreciate it.

Igor: If you write an analysis of this episode, I will react to it in a dedicated video.

Jacob Haimes: Wow. What a way to end it.

Igor: Tell Jacob, so he can, like, hunt me down for it, because I'm very busy. But this is on the record.

Jacob Haimes: Uh, yeah, what a way to end this episode. We'll see you next time.