[00:00:00] Jacob Haimes: This episode was recorded on March 17th, 2026. Welcome to muckrAIkers, where we dig through the latest happenings around so-called AI. In each episode, we highlight recent events, contextualize the most important ones, and try to separate muck from meaning. I'm your host, Jacob Haimes, and joining me is my co-host, Igor Krawczuk. [00:00:22] Igor: I wish I had a funny tidbit or neat little story for you, but the world is depressing right now. So if you have a funny story that made you chuckle this week, please leave it in the comments. [00:00:34] Jacob Haimes: Yeah. And on that, you know, sort of depressing note, we're gonna be talking about AI and warfare. Yay. [00:00:43] Igor: Yay. [00:00:44] DOD-Anthropic Standoff --- [00:00:44] Jacob Haimes: But specifically, we're gonna be looking at the DOD-Anthropic standoff, spat, whatever you wanna call it, which has resulted in Anthropic being designated a supply chain security risk by the US government. [00:01:02] Igor: Oh wow. How did that happen? [00:01:04] Jacob Haimes: They just said, you're a supply chain security risk, because that's all it takes these days. And one thing we did want to mention here as well is that while we want to talk about this, it's not just a rehash of the news. We want to take this more as a case study in an approach that we use when thinking about how actors at this level, so governments or, you know, super-rich, powerful people or very large corporations, conduct themselves, and, like, how to make sense of it, with this as an example. [00:01:45] Igor: Yes. And you did say they just declared it, but there was a little bit of, let's say, foreplay before they declared it, right? [00:01:56] Jacob Haimes: Well, yeah, I just meant in terms of why it has happened, why the designation exists. Like, I guess the US government can just do that, once they decide that that's how it goes.
But in terms of the timeline, I guess the first thing to note would be, if you want a more in-depth version, there's a really good article, [00:02:19] Jacob Haimes: which we'll link in the show notes, from Tech Policy Press, where they go through this in much more extensive detail, including citations and all that stuff. So check that out if you're actually interested, but we'll give the very high-level Cliff notes version. [00:02:37] Igor: That Cliff notes version starts with the Maduro raid. Was that the earliest event? [00:02:43] Jacob Haimes: Yeah. So this is also based on that article, but it seems like that was really the catalyst for this: there were rumors and then confirmation that Claude, or some systems leveraging Claude, was used for aspects of the Maduro raid operation. And following that, Hegseth also made some claims about how the military is only gonna use AI that lets them fight wars, and referenced the "No Woke AI" thing, which is just, I guess, a real name for the thing that was put out by the US government. [00:03:31] Jacob Haimes: Um, but yeah, so he got a little bit upset in the aftermath. And since then, it seems clear that it was because there were some issues with Claude. [00:03:45] Igor: Just being, probably, some refusals or some limitations that were actually hit. [00:03:51] Jacob Haimes: Right. And so, you know, my assumption is that this was also, like, the first larger-scale example of Project Maven, which is the software-as-a-service platform that is provided by Palantir to the Department of Defense, and that is running Claude as part of its workflow. [00:04:22] Jacob Haimes: So my guess is that that is sort of where this came from.
But anyways, about a month and a half after that, on February 16th, the Pentagon threatens to designate Anthropic a supply chain risk. So this is just them saying, hey, you better comply. [00:04:40] Jacob Haimes: But then. [00:04:40] Igor: Uh, Anthropic complies? [00:04:43] Jacob Haimes: No. So then another eight days or so passed, and it was February 24th. At that point, I think it was either the day of or the day after, Dario Amodei actually went to meet with them. And at that point, Hegseth threatens either the Defense Production Act to compel Anthropic to comply, [00:05:07] Jacob Haimes: so this is, we're gonna nationalize your technology, you're not gonna have a say, or to designate them a supply chain risk, and then they wouldn't be allowed to be involved, if they didn't. [00:05:21] Igor: Even at this moment, I think it's important to note that threatening both of these at the same time is something to put a pin in. [00:05:30] Jacob Haimes: Yeah. [00:05:30] Igor: We will not get into it right now, but put a pin in that fact. [00:05:34] Jacob Haimes: And so they give 'em an ultimatum, the ultimatum rolls around, Amodei rejects it. Then announcing them as a supply chain risk is the one they went with. And then the lawsuit began on March 9th. So this is all very much a developing story, but those are the important beats. [00:05:53] Igor: Okay. So this leaves us here now, with Anthropic officially being a supply chain risk and presumably other stuff still ongoing in the background. [00:06:04] Jacob Haimes: Yeah. [00:06:04] Igor: So, like, what was the public reaction? And maybe, before we get to that, what was your reaction? [00:06:10] Jacob Haimes: Yeah. So I was actually at IASEAI when this was unfolding.
And so that was an interesting place to be, because it was one of the more policy-centric conferences within this sort of AI space at all. And so we had people from all over the world. Some people knew about it, some people didn't; some people were talking about it, some people weren't. [00:06:37] Jacob Haimes: But that was an interesting place to be. And when I learned about it, my first thought was just like, huh, I'm surprised. I really did not expect Anthropic to hold. [00:06:51] Igor: Before we go into that, what is IASEAI? [00:06:53] Jacob Haimes: The International Association for Safe and Ethical AI. It's a conference; this past year it was in Paris, France. I was just presenting some work there. But the important part here is that, yeah, I was surprised. Maybe you can just share what your thoughts were, and then we can talk about it. [00:07:14] Igor: For me, I was, like, genuinely, very happily surprised, because I'm a very cynical person in regards to, like, megacorporations. [00:07:24] Jacob Haimes: Mm-hmm. [00:07:25] Igor: I wouldn't call myself cynical, actually. There's a difference between being cynical and being realistic, and you should try [00:07:29] Jacob Haimes: Yeah. [00:07:29] Igor: to be realistic and not cynical. We will get to that as well. But normally you don't see genuine, full-on resistance like this unless forced, and Anthropic did already collaborate with the US government on warfare and related things. So this was a genuine surprise. Even if it's a late line, it is a welcome line. [00:07:58] Jacob Haimes: Yeah, I would say the same thing. The reason that, at least, my prior, I guess, was that they would've caved is that it's pretty well established that Palantir is a completely unethical operation.
Not just the people in charge, but also the products they provide and the way that they do business. [00:08:27] Jacob Haimes: Like, there's so many issues. And [00:08:30] Igor: Proudly enabling warfare and domestic surveillance. [00:08:36] Jacob Haimes: Yes. And I mean, even recently, I think within the past week or so of recording this, the CEO mentioned that, like, expansive use of AI would delegitimize educated individuals, especially women, or something like that. Which is like, okay, great. Thanks for sharing that. I guess we already knew, but now the quiet part's been said out loud. [00:09:03] Jacob Haimes: Anyways, the line that Anthropic has drawn has come after saying that Claude is allowed to be used in Palantir's products, and after making agreements with the Department of Defense. And again, since this started, the war in Iran began, and it has become apparent that there was a strike on a girls' school in Iran. [00:09:37] Jacob Haimes: And based on the evidence we have, I think it is highly likely that Project Maven was used in the selection of that target. And so, effectively, Anthropic's line comes after "tightening the kill chain," in the words of Palantir, that's what they would say, loosening oversight and, as a result, bombing a girls' school. [00:10:04] Igor: But it's not as simple as just saying, oh, Anthropic is good now. For example, Timnit Gebru pointed out in a LinkedIn post [00:10:17] Jacob Haimes: Yeah, I thought that was interesting.
Um, because obviously you're seeing this sort of outpouring of support, because [00:10:27] Igor: Anthropic put up the bare minimum of resistance and drew a red line. The specific red lines that we know about were: we want you to not do human-out-of-the-loop, fully autonomous weapons, [00:10:45] Jacob Haimes: mm-hmm. [00:10:46] Igor: and you can't use our systems for mass surveillance of American citizens. [00:10:52] Jacob Haimes: Yes. And then. [00:10:54] Igor: Which means that I am okay to mass surveil, but Jacob is not. [00:10:58] Jacob Haimes: Yeah, that is correct. Although, arguably, then, you know, your government would have something to say about it, and if their security was good enough, they would just prevent it, you know? But anyways, [00:11:10] Jacob Haimes: the response that the DOD then has, that Hegseth has, is like, oh, we didn't want to do anything illegal with it. It's like, uh-huh, okay, you're not gonna define the things that you want to do with it as illegal. But there are so many things that you've done that are illegal, that you've changed the rules for, or just ignored the rules for, that, [00:11:33] Jacob Haimes: like, actually, we should trust that you would do something illegal, based on the precedent set. Regardless, though, Timnit's post was referencing when Elon Musk and Trump had their little spat, and how people also, back then, sort of jumped on the pro-Elon trend. To less of an extent, I would say, than people jumped on the pro-Anthropic trend in this case, [00:12:05] Jacob Haimes: but it still holds. And the thing that she was pointing out, which I think is very valid, is that we shouldn't be lauding a corporation or individual or actor that is behaving in their own self-interest and, because it's convenient for them, sort of plays it off as being part of their values.
[00:12:37] Jacob Haimes: Um, but really they're just taking advantage of, like, public sentiment, I guess, and passing off their behavior in that moment as more general than it actually is. Mm-hmm. [00:12:54] Igor: And I think we should link, um, Timnit's, or DAIR's, let's say, bundle, and, uh, Mystery AI Hype Theater 3000, as resources there to explain where Timnit is coming from. But I would summarize the post as, like: the enemy of my enemy is the enemy of my enemy, nothing more. And criticizing people who go, oh, now that they're fighting, this is another good guy, we need to support them. And morally, I fully agree with her, basically. But there's also a way of looking at it which is a more pragmatist viewpoint, and that's kind of what we're gonna be doing. Like, people use "pragmatic" to mean "reasonable"; that's not how we're using it here. The philosophical thing of pragmatism is just looking at things as: you have a goal, there are actions that you're trying to take, and whatever you're doing is serving a specific goal. And your goal is whatever, and for that you now want to figure out how to assess Anthropic and what they're doing and how to interpret it. That's the playbook, the stuff we want to break down in this episode. And not only for Anthropic; also for any other multinational corporation, any large entity where you don't get to take things at face value, because they lie. They have an interest in behaving in a certain way. And so even though they themselves will fully believe the story that they're selling, that story can change on a dime, as is convenient. For individuals, this would be, like, narcissistic behavior; for a corporation, it's just the way they are. But, you know, systems will change. [00:15:08] 3 Buckets of Motivation --- [00:15:08] Jacob Haimes: So that would mean, then, that the first thing we're thinking about is motivations.
I guess that's sort of a way to put this. And you can't ever think about all potential motivations an organization or actor could have, but we can think about them in buckets. [00:15:32] Jacob Haimes: Um. [00:15:33] Igor: Yeah, it's kind of like, if you want to figure out, I want to prepare for whatever happens with AI now, or, I want to pick where I direct my employer's AI budget as a technical person. One thing you can use to make the choice, like, the actions that you're trying to take, is to understand the motivation of the actor and see how it aligns with your goals, your motivations, 'cause then it's safer to use them. Where, like, Ukraine famously used SpaceX for communications, and the motivation of a US-based entity is gonna kind of clash with Ukraine's right now. So that's a thing to avoid, and figuring out why Anthropic is behaving the way it is, is this first step. [00:16:30] Jacob Haimes: So what would you say are the most interesting or valuable buckets to start with, at least in this case? [00:16:41] Igor: The big buckets I would point out are, like, intrinsic motivation, rational self-interest, and, like, realpolitik. Where intrinsic motivation is often a bit underestimated, but a lot of people's behavior is because of what they actually believe in, or what genuinely motivates them. [00:17:05] Jacob Haimes: Mm-hmm. [00:17:06] Igor: So often you can't... earlier I said you can't take them at face value, but sometimes you can. [00:17:13] Jacob Haimes: Yeah. You couldn't take Igor at face value when he said that. [00:17:16] Igor: But sometimes you can, because there's nuance to that statement, right? Like, you shouldn't take people literally and stop thinking. But, for example, Trump: a lot of the problems we're having are because people don't take him literally. Where, like, if he says [00:17:33] Jacob Haimes: Yeah.
[00:17:33] Igor: he will do a crazy thing, people go, like, oh no, he's just bluffing. It's like, no, he will do the crazy thing. [00:17:39] Jacob Haimes: Yeah. At this point, it's been very well established: nah, he'll just do the crazy thing. And so that's why it's so crazy that people tolerate it when he says, maybe we should run for a third term, you know, especially when he's said it multiple times. He's not being facetious. [00:17:59] Jacob Haimes: He is seriously considering that. [00:18:04] Igor: We're gonna link to a post by a, like, minor analyst that I like, that makes the point that people can't lie. By which it doesn't mean literally that people can't lie, but that people really struggle to lie about their core values or their intrinsic motivations. And so if we look at what Anthropic the entity, and also the people at Anthropic, might actually believe: they split off from OpenAI specifically because, to some extent, they were unhappy with the AI safety there. [00:18:34] Jacob Haimes: Yeah. [00:18:36] Igor: Their whole advertising, branding, like, everything is aligned with caring to some degree. So it still is a bit surprising that that's not all bullshit, but it's not completely unthinkable. But yeah, we found the red line. This is what we meant with, like, ethical AI. [00:18:58] Jacob Haimes: Yeah, I think that's important to say, too: they are about safety, but the safety is then defined in the way that they define it, which, of course it is. But in this case, I would say it's very explicitly about safety from the existential-risk, Skynet-style scenarios. [00:19:18] Igor: I think it's important to point out that the two red lines, mass surveillance within America and autonomous weapons, are literally just "don't do Terminator." [00:19:29] Jacob Haimes: Yeah. So I think that's worth acknowledging.
Another thing that I think is interesting as an example of this "they just might mean it," in this case study, is Dario's leaked memo. 'Cause this is kind of, like, a sneak peek into what he's actually thinking. [00:19:57] Jacob Haimes: Because obviously this situation has pissed him off, right? It is costing money; it is costing way more time than what he allotted for it, for certain. And it's irritating to have to deal with incompetence as well, which is how I would describe the Department of Defense, at least at the level that he is dealing with it, which is Hegseth. [00:20:25] Jacob Haimes: No statements on the rest of the Department of Defense, but I will say that Hegseth is incompetent. And so he said something along the lines of Trump, or the government, being, what was the word, do you remember the word? Mendacious. Which is a word, is a choice, that Dario Amodei would make. [00:20:52] Jacob Haimes: But that is what Dario said when he believed there was a little bit more safety in being able to say that, right? 'Cause it's an internal memo; he doesn't expect it to go public. And he's not trying to lie there. But then publicly, he walks it back. Still, that's an example of... [00:21:16] Jacob Haimes: I think we can be relatively confident that Dario probably does think that Trump is being mendacious. [00:21:24] Igor: We will get deeper into this later as well, when we go into how you can figure this out, like, systematically. Um, but the next thing, which is maybe what people are more easily going to believe when you point to it for a company's behavior, is rational self-interest. Like, if you're a nerd, and, you know, you listen to this podcast, so you probably are one, [00:21:54] Jacob Haimes: Mm-hmm.
[00:21:55] Igor: large corporations are interesting because they're the closest thing we have to, like, fully rational actors; countries are, like, another candidate. So they will actually do the game theory shit. And that means you get to apply all of the different game theory techniques, and, like, overthinking things, to them. And if you do that, you actually see there's a bunch of good reasons why you might want to do what Anthropic the entity did. Number one, connecting a bit to the original, like, intrinsic motivation: if you have spent your whole existence hiring people with "we are the good AI company." We might pay less than OpenAI, which I'm not even sure is true, but let's [00:22:42] Jacob Haimes: I don't think that's true. [00:22:44] Igor: like, assume. Or, like, we might have less money, so it's a bit less certain; I think that is an actual thing. [00:22:51] Jacob Haimes: Yeah, yeah, yeah. [00:22:52] Igor: At least in the beginning, they were the underdog. They don't have 80% of the market share. Like, people don't say "I asked Claude" when they talk about this stuff; people don't know Claude outside of coding. [00:23:03] Jacob Haimes: You mean people don't say, I'll ask Claude. People say, I'll ask ChatGPT. [00:23:09] Igor: Yeah. And all of that stuff is a thing they had to, or still have to, fight against. And they use this "we are the good guys" heavily. So [00:23:22] Jacob Haimes: Mm-hmm. [00:23:23] Igor: the thing is, the people that they have hired probably, to a good degree, go there because of the culture, from the very beginning. [00:23:33] Jacob Haimes: Yeah, absolutely. [00:23:35] Igor: And [00:23:35] Jacob Haimes: And they also very intentionally recruit out of that culture as well. So there's a whole [00:23:39] Igor: yeah [00:23:40] Jacob Haimes: other sort of can of worms there.
But it does mean that if they were to make a decision on a corporate level that very blatantly goes against those stated shared values, they could have a legitimate issue in terms of people leaving. And given the salaries that they're paying, the time it takes to train people up, and the importance of time, especially in their framing of the world, it's likely that that is actually worth far more [00:24:22] Jacob Haimes: than the amount that they could possibly lose as a result of this decision. [00:24:28] Igor: Yeah. Also in terms of opportunity cost: if there's one team that is well tuned and they're working on a thing that is gonna work well, or, like, that can make Claude dance, then replacing that costs a lot. And they are already struggling with this. Like, one of their researchers quit in a very public letter. It was, like, Marina Shama, I [00:24:56] Jacob Haimes: I, [00:24:56] Igor: apologize for the name mangling, who had led the research team. He quit and said [00:25:05] Jacob Haimes: Hmm. [00:25:05] Igor: he's gonna, like, study poetry now, and he hopes to be braver than he was before. [00:25:14] Jacob Haimes: Every time I hear about that, I always feel so bad. Like, [00:25:20] Jacob Haimes: the world that they live in is [00:25:22] Jacob Haimes: fraught. Yeah. [00:25:24] Igor: When [00:25:25] Jacob Haimes: But anyway, sorry. Please continue. [00:25:26] Igor: I remind myself how much money they make. And, kind of, if anybody from Anthropic or OpenAI is watching this and is very anxious about AGI, talk to me. My rates are very affordable. Like, I can try to reason you out of your anxiety. There's hope that I can sell you, for a very reasonable price. [00:25:49] Jacob Haimes: Yeah. [00:25:50] Igor: But the other aspect is that all of these guys have a lot of money, like, after a while. So, like, [00:25:55] Jacob Haimes: Yeah.
[00:25:55] Igor: to actually keep a story up, you need to at least be able to plausibly sell that you're still the good guys. And if you cross all of your red lines in very quick succession, you don't have time to, you know, boil the frog and point at, like, oh, we had to do this. So even if you assume that they are all, like, snakes at the C-suite, at some point they have to draw, like, a token line. And you could hope that, okay, maybe these two lines are small enough that they're not gonna explode at us for this. So that could be the calculus that is being done. [00:26:28] Jacob Haimes: But at the same time, they did, I think on, like, the 23rd or the 24th of February, also say that they were removing some of the safeguards in their, uh, responsible scaling policy. I think specifically the commitment to have safeguards in place prior to training the model is something that they removed. [00:26:49] Jacob Haimes: So again, I'm sure there was a longer process involved; that wasn't a decision that was made that day. So that was in the works for, you know, a month or so at least, meaning they can't really stop it at that point. And now they've been given this other very critical decision on a similar topic with a short deadline, meaning it has to happen around the same time. [00:27:17] Jacob Haimes: That could also be involved, although there were also people saying that potentially that was in response to this scuffle and was trying to appease them, which maybe is the case as well.
[00:27:30] Igor: I don't know. Um, yeah, for me that's a bit, like, orthogonal to the overall topic there. [00:27:39] Jacob Haimes: It was just a side note. [00:27:40] Igor: Another aspect, which I don't think they saw coming but might have fed into their calculus for the first point: we will put up a screenshot of a status page I grabbed a few days ago. You can see the reliability of Claude Code and, uh, all the Anthropic pages and APIs. The US government chose to give Anthropic billions of dollars of advertising-spend equivalent of marketing. [00:28:15] Jacob Haimes: Yeah. [00:28:16] Igor: A massive inflow of people who reacted to the tweets of Hegseth, who said, and I'm paraphrasing a bit, and we'll try to find the tweets, but, like, there are two things. One: how dare you be so ethical; we must be able to spy on Americans, and Anthropic refusing to allow us this is horrible. And two: this is critical technology; we cannot act at our best capacity, like, we need to have this. It's kind of this pin we put in earlier, about the supply chain risk and the Defense Production Act; this is, like, half the [00:28:58] Jacob Haimes: Hmm. [00:28:58] Igor: pin. And, like, saying that you're so critical that you're necessary for the war effort, and you're so ethical that you refuse to let us spy on Americans: you couldn't have made a better gift to anyone that wrangles people's private data if you tried. [00:29:16] Jacob Haimes: Yeah, like, standing up to the DOD: the best advertising money can't buy, right? You need to get in some sort of aggressive confrontation in order to get that kind of advertising, and it is worth a lot of money.
[00:29:31] Igor: This is, like, after OpenAI set the previous record by announcing that they would be adding ads, and Anthropic, like, capitalizing on that. They didn't even need to do a Super Bowl ad for this one. But I don't think they were gambling on that. I think this was, like, a nice softening of the damage that this took. I think they were hoping that the DOD wouldn't do this, and this is just, like, a nice free win for them. [00:30:02] Jacob Haimes: And then the last area also sort of, at least in how I'm thinking about this, motivation also blends with the realpolitik, although correct me if I'm misunderstanding here. But essentially, at a certain point, you need to be able to demonstrate that there is a line, and that you won't cross it. And that line should probably be informed by something that you actually think is bad. But then simply having that line and standing firm at it is what allows you to play ball with the likes of Trump, because it's been very well established at this point: [00:31:03] Jacob Haimes: if you give him an inch, he will take a mile. And so at a certain point you have to stop giving him an inch.
[00:31:10] Igor: So this is the realpolitik one. Connected a little bit to this: establishing yourself as the responsible frontier lab, kind of, like, committing to your brand. Like, actually showing people, okay, what we're doing actually means something, is a bit like this. At some point you need to put up, right? You need to show that this has some meat behind it. And the last one, which is more on, like, rational self-interest: one of the ways autocrats commonly try to get leverage on you is by making you commit a crime together. Because then, you know, you've invested together, and if people snitch on you, or if it goes down, they're also next, so they're now invested in keeping, you know, the con going. And it might be fully rational if Anthropic has, like, an assessment that, you know, just bombing Iran without Congress's approval might be a tad bit illegal, and participating in things like mass surveillance of Americans and autonomous weapons without oversight might also be illegal. And if you are not credibly putting up resistance, you are co-liable for that if there's ever an investigation. And the US government is gonna be fine, and probably the president has immunity, but you don't. So there's a reason to, at some point, say, no, we're not gonna do that, actually. Like, we're sitting this one out. [00:32:48] Jacob Haimes: And I see that. So that was the one that I was tying to realpolitik, in a way. It's a slightly different motivation there. You know, it's protecting your own skin, as opposed to trying to establish that you can essentially have a seat at the table. But [00:33:10] Jacob Haimes: they're very intertwined in this case.
Um, [00:33:15] Igor: Yeah, like [00:33:16] Jacob Haimes: I think that, [00:33:18] Igor: realpolitik is basically rational self-interest aimed at power plays. It means that you're, like, very pragmatic, ruthless, beyond even just normal behavior. You take it to the extreme, basically. [00:33:31] Jacob Haimes: So are you saying that you would disagree with what I'm saying? [00:33:35] Igor: No, I'm agreeing. I'm just, like, giving my connection of [00:33:38] Jacob Haimes: Okay. [00:33:39] Igor: how these things tie together. And I just wanted to give people a concrete example of a past company that really wishes it had acted like this, which is Arthur Andersen, which some people [00:33:53] Jacob Haimes: Yeah. [00:33:53] Igor: are too young to know about. Right now there's the Big Four auditing and accounting companies. It used to be the Big Five, and it's no longer the Big Five because Arthur Andersen went down with Enron, because they didn't have enough internal resistance against going along with a con, and they were held liable. So that could be part of why either individuals or Anthropic the entity did what they did. [00:34:27] Jacob Haimes: And then also, looking at their reaction, it's not just how they behaved or their track record here; it's also how the environment around them behaved, how their peers behaved. So, like, at this point, I believe all of the remaining frontier labs, so to speak, have backed them up, including OpenAI, which opportunistically took the DOD deal as soon as Anthropic was designated a supply chain risk. [00:35:01] Jacob Haimes: So including the one that's ready and, I'm sure, foaming at the mouth to get one up on their competitor: they also threw in with them and backed this sort of, oh yeah, we don't think that this is lawful.
So I think that's telling in itself as well, right? They took advantage of the situation, but it is still not in OpenAI's best interest for Anthropic to lose this lawsuit. [00:35:39] How to Read What They're Actually Doing --- [00:35:39] Jacob Haimes: So, given these motivations, though, it's important to think about why, like, what is informing our understanding of these motivations? Why are we saying, yeah, we think these things that we just listed are playing a role in their decision? Like, where do you go to look at that? [00:36:07] Igor: Yeah. And, like, how do you trust certain things? That's maybe the other question that's implicitly there, right? Because I don't think people emotionally have problems with the idea that corporations are self-interested and after power; but whether they actually believe it might be the thing that people want to, like, know: okay, how do you know? [00:36:30] Jacob Haimes: Sure. And I also think that, I mean, the goals that are being pursued are not necessarily what's being stated, even if you are willing to treat a company or a corporation as a rational actor. A rational actor with a different goal might not behave in the same way. So how do you suss out what those goals are, or what those, [00:36:59] Jacob Haimes: I guess, like, belief states are, if that makes sense. [00:37:04] Igor: Yes. Um, so the usual kind of thing to point at there is what's called credible commitments, and this comes from
[00:37:16] Igor: a game theorist and economist called Schelling, and the idea, basically, from the branch called signaling theory, is that talk is cheap, but you can invest talk with credibility by making it costly for you. And it doesn't necessarily need to be literal money costs; it can also be irreversible PR damage. That's how it works [00:37:48] Jacob Haimes: Y [00:37:48] Igor: with humans. [00:37:49] Jacob Haimes: Yeah. And I think this is intuited relatively well by humans, just in our day-to-day lives. The person that you trust, you do not trust because they said that they would do something. You trust them because there is an established history, whether that is a small subset of consequential moments or a very, very long history of smaller instances; they have accrued this value of signal that they should be trusted. [00:38:27] Jacob Haimes: And it's the same way with corporations, except the accrual doesn't make as much sense here. Instead, what makes more sense is: what are the things that are actually binding? What are the things that we can tell are the case? So, Anthropic said they lost 10 billion dollars in deals, I think. [00:39:02] Jacob Haimes: And that's a lot of money. So this is actually costly for them [00:39:06] Igor: And this [00:39:07] Jacob Haimes: that, [00:39:07] Igor: predicted [00:39:09] Jacob Haimes: Yes. And we're also not just taking that at face value, because that itself was in a court document, which necessarily has more weight to it, right? If they lied in that, it could completely destroy their case. So we can be relatively certain that that is actually representative of [00:39:41] Jacob Haimes: what the case is.
[00:39:43] Igor: In German there's a nice imagery there: we say that you can put weight on a statement. In English it just comes out as "reliable," like, the statement holds up under pressure. And if your statement can get you into jail, or can terminate a lawsuit that you're trying to use to stave off a possibly fatal designation as a supply chain risk, then that is a reliable statement, and it's more reliable than a tweet that doesn't have any legal force behind it. [00:40:23] Jacob Haimes: Yeah. [00:40:24] Igor: And a worked example of credible statements is... what? [00:40:34] Jacob Haimes: Uh, yeah, I mean, I guess the fact that I'm sharing my opinion. It makes me harder to employ, at least I would say within this space, because it does not coincide with the opinion of many of the people hiring. So, uh, I promise it's a good thing that I'm harder to employ, and I'm not just... [00:41:02] Igor: It's about your credibility. It's also important to say that this does not mean the statement on its own is a quality or accuracy judgment, right? This is more like: you actually believe in what you're saying. Independently of that, you're very smart and educated and you know what you're doing, but that is a separate evaluation. This is about: does Jacob actually believe the things we say on this podcast? And also me. And another person to point to is Timnit. The reason why I respect Timnit a lot, and trust her in the sense that she seems to genuinely mean the things she says, and I put weight on what she says, is that she killed a very lucrative career by being very annoying about actually having ethics. [00:41:55] Jacob Haimes: Yeah, [00:41:56] Jacob Haimes: and that,
[00:41:57] Igor: yeah [00:41:57] Jacob Haimes: I think, leads relatively well into the next idea, which is: look at whether the organization is walking the walk, and not just walking the walk once, but walking the walk as a pattern. [00:42:16] Igor: This is a thing for organizations and for people, but we split them a bit because you need to weight things a bit differently; it's the same idea, though. [00:42:26] Igor: And for abstract systems or entities, the principle is called POSIWID, or "the purpose of a system is what it does." [00:42:38] Igor: This is from the seventies, so this is like an OG acronym. [00:42:43] Igor: It's a principle formulated by Stafford Beer, who was a systems theorist, a cyberneticist. And it's really just: okay, if you want to understand why a system exists, how it keeps existing, and what its purpose is, then you should look at what it actually achieves in the world and how it keeps existing. So if you have a system that officially is meant to create wealth for everyone, but it actually creates wealth for only, you know, a few, then the actual purpose is not what is stated but what you observe. And this is a bit different from credible commitments, where you really sacrifice things in order to send the signal. Here it's just about: okay, if there's a continuous pattern that points to one thing being the goal, then you can just assume that it's the goal until you get any evidence to the contrary. And now is the time to pick up a pin from earlier, which is: is the supply chain risk designation an actual belief of the US government, or of Hegseth, of Trump? Do they really think that it is dangerous to the US? [00:44:04] Is This Designation Even Real? --- [00:44:04] Jacob Haimes: Right. And what I believe is: of course not. It is being used as a punishment. It is being used as a sort of cudgel in order to try to get them to stay in line.
And the reason that I believe that is because there is an abundance of evidence that that is the case. The fact that they opened with "either we're going to designate you as a supply chain risk and remove you from all of our workflows, all of our technologies, or we're going to say that your technology is so critical to the defense of our nation that we're going to nationalize you and take it." [00:44:50] Jacob Haimes: Those are not statements that can live together. Those don't make sense as two suggestions, or two threats, as to what will happen, for the same reason, unless the whole point is the punishment. [00:45:12] Igor: Small nitpick: I don't think the Defense Production Act is technically nationalization. It's more like the government tells you what to produce, but because this is freedom country, it's still yours and you will get it back after the war. [00:45:26] Jacob Haimes: Okay, soft nationalization. [00:45:29] Igor: I mean, this distinction does matter for this point, right? Because it would be nationalization if you swapped out all of the people, and then you could actually defuse the supply chain risk that was there before. But that's not how this works. There's no inherent step where they swap out all of the decision makers, all of the people who might have constituted the supply chain risk previously. It's just telling you what to do. That's what makes this a hint that this is bullshit, that it's a cudgel; at least one aspect of it is this. [00:46:07] Jacob Haimes: And then another aspect is, I don't know if it was a leaked memo or if it was just put out publicly, I forget, but the way that they are rolling out this removal of Anthropic products from their workflows is that the entire military is given six months to remove it, and they can apply for an exemption within that period.
[00:46:35] Jacob Haimes: Um, and [00:46:37] Jacob Haimes: I don't think that that is the kind of behavior that we would see if there was actually a concern about it being a national security risk. [00:46:50] Igor: One would hope so. [00:46:51] Jacob Haimes: Well, yeah, I'd hope so. I'd be curious to know, maybe I'll look this up later, but I feel like the ban on TikTok on government people's phones had a shorter turnaround than that. Which, to be fair, is an easier thing to remove, but still. [00:47:14] Jacob Haimes: Yeah, the level of urgency that is brought to the table does not indicate that it actually matters that much. [00:47:24] Jacob Haimes: Another thing that I just wanted to bring up around this idea is that, [00:47:31] Jacob Haimes: Anthropic says it's worried about safety and then partners with Palantir, right? And they pioneered computer use, and they've quietly walked back safety pledges. These are examples of what they've done. And [00:47:48] Jacob Haimes: I think that should also be taken into account here. [00:47:53] Jacob Haimes: So, you mentioned earlier that there is another framing for people.
[00:48:01] Igor: Yeah, so for systems it's about purpose and dynamics and how things are structured. No matter how you try to work a system, it can't really go against its purpose. A knife that remains a good knife is not a good hammer or a good bandage; there are intrinsically opposing properties there. With people it's really more about what people want to do. And part of that is just about the consistency in the way people act, but it's also about, you know, the POSIWID thing just on a human scale. And one direct example: Sam jumping at the opportunity of cutting a deal is exactly what you would expect, but it's also exactly what you would expect that he will lie and misrepresent and try to have it both ways in the announcement. So after they failed to cut a deal with Anthropic, OpenAI jumped into the breach and announced that they were able to cut a deal despite having the exact same safeguards as Anthropic. And then people were like, what? How are you sure you have the same safeguards? And then OpenAI said, yes, we also want only lawful behavior, which is the same thing as Anthropic. And we were like, no, Anthropic was a bit more specific, because this is the government; they can just claim it's lawful. What do you do then? And then there was awkward silence, and then a little bit later they amended the deal, or they claimed to have amended it. But the actual observed behavior is: OpenAI cuts the deal and then walks it back, while Anthropic says no and then tries to make that workable. These are two different behaviors. And also on the level of the people: this is also a testament to Sam being a better player at this game. I don't think Sam wrote a memo like that even when he was very pissed; I have never heard from any rumor mill anything like that about Sam. And stuff like this shows you, indirectly
and with wide confidence intervals, what people actually care about. [00:50:52] Jacob Haimes: Yeah. [00:50:53] Igor: Also, Sam being extremely triggered by the Anthropic counter ads is one of those examples. And you really need to integrate always, and try to look at things not, like you do in the POSIWID case on the systems level, as rational actors, but as boundedly rational. With humans you're trying to see what makes them tick on an emotional level, because that's usually [00:51:21] Jacob Haimes: Mm-hmm. [00:51:21] Igor: what people underestimate; it drives decision making. There's also a last way of getting a better read on a situation and understanding motivations, which is kind of related to both of the things we just talked about, and that's if you can get inside info. [00:51:42] Jacob Haimes: So I sort of see this as a special case of both of these, the revealed preferences and POSIWID. Basically it's extra information that indicates something about the actor. And so maybe an employee has insider information about the dynamics or the rules or the stated goals internally but not externally, and that in itself is valuable. [00:52:16] Jacob Haimes: It's valuable because you are opening up a channel to someone who has a better understanding of the system, and they're saying, oh no, this is actually how it behaves, why we're doing this thing, et cetera. And then the other is for people; I mean, insider knowledge there is really just how they react, right? [00:52:44] Jacob Haimes: Like intuitions. And obviously this is a noisy process, but it can still be valuable here.
[00:52:54] Igor: And, at least to the extent of my network, I don't know anybody who works at Anthropic, but I know people who know people, or who talk to people, and consistently I hear the same secondhand things: that people are genuinely anxious about AGI, Skynet, and shit like that; that people genuinely care about not being evil; and that they are very internally torn, which is also why they need so much money to keep them rooted in one place. So that's part of why I buy it, to a large degree, when something like this is partially intrinsic motivation. [00:53:39] Recap (Pragmatist's Playbook) --- [00:53:39] Jacob Haimes: Okay, so, yeah, before we close out, I think it would be helpful to just do a lightning round of what we've said and roughly what this sort of approach is. And it starts with naming your lens, or your framework of analysis. [00:53:57] Igor: Yeah. And we picked this pragmatic lens, where we just look at things in terms of what effect they will have. And it's maybe informative to contrast this with Timnit's lens, which was, "moral" is a loaded word, but it is very much a moral lens, where she looks at the situation and she doesn't see what we see, at least in her post, because she doesn't care about it. Her analysis focuses on different things in the way it is stated. And by making it explicit you can keep yourself honest about the possible blind spots that you might be having. [00:54:38] Jacob Haimes: And then the first thing to do after establishing that is: look at binding documents. So this is essentially, where are the credible commitments? In court documents, in case filings, in financial disclosures, in use agreements. What are the ways that people are actually making something binding, saying something binding? [00:55:05] Jacob Haimes: Because those are going to be the most valuable and the most trustworthy signals.
[00:55:12] Jacob Haimes: And along those lines, following the money is also valuable, right? Because the money is a proxy for power, I guess. So who is paying whom? Why are they paying them? If you can figure that out, and if not, who precisely is paying whom? Because oftentimes that can give a lot more insight into how a person got to where they are, or why an organization is behaving a certain way, than one might initially expect. [00:55:47] Igor: And then all of these things give you individual data points, but the important thing is whether the stuff is consistent over time. Everyone can do something once, but if people are continuously drawing lines, even if they're small lines, that gives you a direction to go from. [00:56:10] Igor: And then another consistency check there is: does it make sense how their peers or foes react? So in Anthropic's case, them being backed up by all of their peers is partially self-interest, but it's also because nobody genuinely thinks that this is a supply chain risk. They all know it's a cudgel, and they want to avoid the cudgel. [00:56:35] Jacob Haimes: Yeah. And then, going back to the beginning, when sharing, name your lens again, because this establishes credibility and a little bit of understanding as to how you're approaching this problem, and it lets people know why you're making a claim. So for all of these things that we've said, we've been giving the reasoning behind it. [00:57:04] Jacob Haimes: And I think that that is quite valuable, just going through the exercise of making yourself have a why there.
So hopefully that will give you something else to make sense of all the AI muck out there. [00:57:22] Igor: And I think it's important to say that this is basically just a small part of the investigative journalism, critical analysis, and media studies toolkit. There are a lot of different traditions of how you do assessments like this, and they all follow roughly the patterns that we outlined. We'll put up some additional reading links for this stuff. [00:57:52] Jacob Haimes: Yeah. Okay, cool. So hopefully that will give you something else to help make sense of all this AI muck that's out there. But that's what we've got for this week. If you thought this was valuable, please subscribe so you don't miss any future episodes. And with that, we'll see you next time.