TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.
Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.
You're watching TBPN. Today is Monday, 03/02/2026. We are live from the TBPN UltraDome, the temple of technology, the fortress of finance, the
Speaker 2:capital of capital.
Speaker 1:Let me tell you about ramp.com. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. It was a massive weekend.
Speaker 1:So much news. We are very fortunate to be joined by Ben Thompson at noon. Let's pull up the linear lineup and show you the run of today. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents.
Speaker 1:We got Ben Thompson, James Bashar, John Quinn's coming back in person again. We're very excited to
Speaker 2:be joined by him. We'll be talking about tariffs.
Speaker 1:A monster lightning round with five different guests joining. We got some acquisition news. We got some funding news. We got some takes on tech and AI and media. We're going all over the place.
Speaker 1:It's going to be a fun, fun show. But we missed you. We missed you on Friday. We were traveling. We went to Montana.
Speaker 2:Terrible day to be out.
Speaker 1:Terrible day to be out because it was
Speaker 2:Every single time
Speaker 1:we've
Speaker 2:an off day. Yep. It ended up being a massive news day. So lesson Yeah. Never take a day off.
Speaker 1:Yes. Never take a day off. Truly. What an absolutely crazy weekend. Of course, there's the war with Iran.
Speaker 1:The big news in tech was the U.S. halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon.
Speaker 1:Quote, I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it. We don't want it, and we will not do business with them again, Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six-month phase-out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition.
Speaker 1:Six months to switch from one LLM to another feels like a long time. But I guess a lot of this has to do with like FedRAMP and actually getting
Speaker 2:new But this is a lot more switching to a new model to run deep research reports
Speaker 1:Yep.
Speaker 2:We're talking about classified systems here. Sure. The context that people didn't have Yeah. Last week
Speaker 1:Yeah.
Speaker 2:Was that The United States was headed to war. Yeah. Right? And so even having that context, feel like, is is pretty important. Right?
Speaker 2:It sort of explains the 5PM deadline. Urgency. Anthropic had taken issue with how their products were used in the Maduro raid. Yep. There's a new conflict that's unfolding.
Speaker 2:Yep. And so that makes the the aggressive timeline make a lot more sense. It also makes the six month
Speaker 1:Phase out make more sense.
Speaker 2:Phase out make more sense because national security is on the line. Yeah. This morning, Scott Bessent said, at the direction of the president, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. Yeah. The American people deserve confidence that every tool in the government serves the public interest.
Speaker 2:And under President Trump, no private company will ever dictate the terms of our national security. Yeah. The US federal housing agencies, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products, which was announced this morning.
Speaker 1:Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of anthropic technology. So you would expect to see these statements come out from sort of every different federal agency as they sort of get their transition plan together, figure out what are the requirements for their particular agency. Because I imagine some agencies aren't operating in classified environments.
Speaker 1:It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. Some of them, it's going to be a longer plan. But they're all getting on board, and there's been a big debate over how Dario has handled this. Is he in the right? Where is he in the wrong?
Speaker 1:Where has the government potentially overstepped? Have they been too aggressive? Or are they doing everything appropriately? Everyone is weighing in. And we're going to take you on a whirlwind tour of everyone's opinions, share some extra context, and try and dig into what's actually at stake, what's actually going on.
Speaker 1:In many ways, Ben Thompson does a great job sort of painting the broadest picture around, like, what if this is really nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is, you know, you're talking about a $200,000,000 contract for a company that does $10,000,000,000 in ARR. This is 2% of revenue. In many ways, it's a bump in the road.
Speaker 1:And so I think a lot of people will be squaring, how serious is this for Anthropic? What does this mean for the other Foundation Model companies? What does this mean for the future of the relationship between tech and Washington, D. C? But there's a lot more context.
Speaker 1:So the way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just, how should a private company interface with the government? Like, I am an American. I've run businesses.
Speaker 1:I've never actually sold anything to the government. But hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TBPN or Lucy products or any other consumer packaged goods product that I've made. And my assumption is that the private companies should have very little say in how the government uses those products. And I was trying to zoom out and think about like, AI is so complicated because it could be superintelligence, could be autocomplete, it could be coding help, could be knowledge retrieval. There's a lot of different things that AI means.
Speaker 1:And in some scenarios, it's, like, super critical, really complex. And in other ways, it's just a product. It's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so I was thinking, like, if I was the CEO of Ford, and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer.
Speaker 1:I probably shouldn't say, no, no, no, I don't approve of what the government's doing, so I'm just not gonna sell you any Mustangs to drive around on the military bases because I don't like the military. But then if they ask me, hey.
Speaker 1:We love the Ford Mustang. We love the f one fifty. We love the Explorer, but we're going to war, and we want you to put bulletproof glass on there and armor. That seems like a different discussion. That seems like a that that seems like I might need to, you know, set up a different manufacturing line.
Speaker 1:I might need a different assembly line. Like, the car is gonna be heavier. And if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want an armored car.
Speaker 2:It's going to hurt my business.
Speaker 1:Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government that's asking for that particular contract. And there's actually a history of this. Like the Humvee. Of course, the Hummer brand is owned by General Motors, and that brand has since been separated.
Speaker 1:And now most military vehicles are made by defense contractors. But there is some bleed-over, and there are some times when private companies do dual sourcing or dual-use technologies. But all of that is just a discussion. And that cost should be part of a new contract, effectively, in my view.
Speaker 1:And this was loosely what was happening. But
Speaker 2:Yeah. And Dario, in the CBS interview, quote, We are a private company. We can choose to sell or not sell whatever we want. There are other providers. Yes.
Speaker 2:Which feels like
Speaker 1:Yes. Like, I'm dipping out of it. Now, it is weird, because at the same time, and we'll get to the actual CBS interview, he said, Anthropic has been one of the most proactive AI companies in working with the US government. We were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone into the AI community broadly, like what happens at the edge.
Speaker 1:And so this was sort of predictable that you would get to
Speaker 2:Yeah, this is the moment he had been waiting for.
Speaker 1:In many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used. And you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go in the lion's den, because I don't want to be in that scenario. Instead, it was like, we're leaning in with the government. We're deploying on classified clouds, training custom models, but we still want authority over the final, last sticking point on how these models are deployed, what they're used for.
Speaker 1:And that feels a little odd. Like, in the Ford example, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that. It's not armored. You shouldn't do that. But if they do it, then it's kind of on them.
Speaker 1:I should be clear about the capabilities of the vehicle and how much you know, how bad it would be in that situation, but it's on them to go retrofit it, figure out what's what's, you know, legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty, right, based on what you know about the capabilities of the model. And so I thought it was totally reasonable for Dario to say that anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now it's bad salesmanship.
Speaker 1:Most salespeople would just be like, Yeah, everything's great. You can use it for anything. They overpromise and then underdeliver. He's doing the opposite. But it's certainly responsible if that's his true belief.
Speaker 1:Like, if he believes that these models are not good for a particular use case, telling your customer that, Hey, it's just not ready for that. You're just going to have a bad time. It's not going to work. That's a fine thing to communicate as the CEO of a company who's selling a product. But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly.
Speaker 1:So he's saying like, right now, it's not good for X, Y, or Z. Well, what about in two months? Like, it might be better. And then I think the government should be able to determine when and where they're effective. Now, they can't break the law, and Congress and the American people by extension are free to create new laws to restrict or encourage the use of technology in all sorts of ways.
Speaker 1:And that's like the way America works. That's the American project. But it's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on. No mass domestic surveillance and no fully autonomous lethal weapons.
Speaker 1:And there's been a question as to why OpenAI was allowed to include that language in their contract and say, like, hey, we don't think our technology is ready for that either. Let's do a deal that says that.
Speaker 3:Yeah.
Speaker 1:And people are like, oh, like like like, what's different here? Why could OpenAI
Speaker 4:Well, here's the
Speaker 5:thing, though.
Speaker 2:So we we know that Anthropic took issue with the way that Claude was used in Venezuela.
Speaker 1:Yeah.
Speaker 2:Yeah. And the Department of War Yeah. Would have known Yeah. That, hey, we're going to war. Yeah.
Speaker 2:Right? Yeah. You can imagine that Anthropic, a private company, does not know that. So they have this deadline. There's this information asymmetry. Yeah.
Speaker 2:This information asymmetry. Yeah. They have this deadline. Mhmm. The Department of War knows that they're going to war.
Speaker 2:Yep. They're like, we need reliable AI systems Yep. For this conflict. We now know the president said this morning the war is gonna stretch four to five weeks. Mhmm.
Speaker 2:Right? I think on Friday, we all assumed that it was gonna be in and out super quickly. Yep. So the timeline is extending. Yep.
Speaker 2:And the Department of War is sitting there being like, we need to know that the provider of these AI systems is gonna be reliable.
Speaker 1:Yeah.
Speaker 2:Just a little bit ago Totally. They took issue with it.
Speaker 1:Yep.
Speaker 2:Right? Yep. Can we count on them? Yes. They start this kind of renegotiation process, Yep.
Speaker 2:And to try to build up confidence that, hey, we can rely on these systems in an active conflict. Yep. In a conflict that already feels much more serious and will have much greater implications than the Venezuela conflict, right? No, totally. And so Anthropic is looking at this in a different way.
Speaker 2:Mhmm. And clearly is leaning in, and in some ways it felt like they were kind of stirring things up, really not respecting the process, or even the deadline, right? So Emil Michael came out Friday night and said, it was 5:13, thirteen minutes past the deadline, I'm trying to get in touch with Anthropic. I try to get on the phone with Dario, Dario says he's in a meeting.
Speaker 2:Mhmm. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate on whether or not the war is justified, should we go. Yep. But the Department of War is sitting there being like, you won't even jump on the phone. You're telling me there's a meeting that you're in that's more important than this? And that just screams to me, like, hey, we can't count on this. We can't count on this provider.
Speaker 2:Like, we need to take drastic action. Yep. Now, this whole supply chain risk designation, we'll get into that Yep. Later. That's a whole other thing.
Speaker 2:Yep. But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider. Yeah. We need alternative solutions.
Speaker 1:Yeah. Yeah. If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not gonna put the cars on the transport. Like, that's an odd scenario to be in.
Speaker 1:There's also this question of, like, a lot of people were really, really keen on boiling down the terms to these two buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?
Speaker 1:And that's where you get into the ideas of deals that stick, basically. Like, you can have the same exact contract line item, or terms of a deal, a signed agreement, with two different people. And it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah.
Speaker 1:I had a handshake deal with one VC. It was 20% and a board seat. And I had another deal with another VC, 20% and a board seat. And the one VC was, like, suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows that there's some trust, reliability that when the hard decisions come, that they will be made in a legal, logical, consistent with American values way is, I think, what you need to put forward if you want to work with the government effectively.
Speaker 1:So if you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions like, who is Nicolás Maduro? But I don't know how much of a joke that is, and I also don't know how bad of a thing that is. I actually think yeah.
Speaker 1:Tyler, what do you have?
Speaker 6:I do have some context on that. On the context of Venezuela specifically, what's actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid
Speaker 7:Yeah.
Speaker 6:A Palantir senior executive notified the Pentagon. Yeah. So I think it is kind of blowing it out of proportion to say that, like, Anthropic is against using Claude in Venezuela. Right? It's an employee.
Speaker 6:It's not an executive.
Speaker 1:Article about that too, though?
Speaker 6:Maybe it's, like, Dario telling an employee to go check on that.
Speaker 2:But, like,
Speaker 6:we don't know.
Speaker 7:It could just be,
Speaker 6:like, a random employee.
Speaker 1:Yep. Yep. Yep.
Speaker 6:So I think it's probably unfair to say that Anthropic as a whole is, like, we are firmly against Claude being used in
Speaker 1:What actually happened during the Maduro raid? We don't even know. And of course, it's classified. So, like, we don't know if we will ever know, because, like, should we know?
Speaker 1:I don't know. Like, if it's an important capability, you don't necessarily want that to be public knowledge that the adversary is instantly aware of. And so I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, like, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, like, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population?
Speaker 1:And that's, like, somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. What is reasonable is to, you know, expect civil servants, elected officials, military officials to be knowledgeable about the countries that they are operating in, and LLMs can help with that. And so I feel like that's just a good thing.
Speaker 1:Like, if you just zoom out and ask, do we want a more knowledgeable and educated government workforce across everything that they do? Like, it seems like absolutely yes. And so I just think that that's something that is maybe lost as people go into more of the sci-fi, more of the frontier stuff, for which there isn't a lot of evidence yet. And on the supply chain risk, Ben Thompson, who's coming on at noon, makes a really strong argument for why government pressure like this is actually reasonable in this situation. He takes it a lot further, plays it out, and lays out a scenario that seems somewhat inevitable.
Speaker 1:But what I'm still wrestling with is just how real the supply chain risk designation is. Like many reports are treating the supply chain risk label as like an established fact.
Speaker 2:Yeah. Which all it is, is a tweet from Hegseth.
Speaker 1:It's a tweet from Hegseth right now. Dario went on CBS and said that he has not received a letter, that there's no definitive ruling yet. Kalshi has the odds that this actually happens by April 1 at forty-two percent, so a full month for the DoD to actually roll this out. And then there's other nuance in the law. There was a perception that this was going to kill Anthropic, because if NVIDIA has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true, apparently.
Speaker 1:The supply chain risk designation specifically means that if you are a company working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract, but you could use that product in a different piece of your business. And so it's still dramatic. Still, I think, as Dario said, it's unprecedented. It's only been used for foreign companies. Kaspersky Labs was a Russian cybersecurity company that was deemed to be a supply chain threat.
Speaker 1:Huawei is a supply chain risk because of the 5G towers that potentially have backdoors.
Speaker 2:Somehow DJI still is not.
Speaker 1:Crazy that DJI isn't. And I think that a lot of people would be very upset if Anthropic got a supply chain risk designation before DJI based on just what we talked about last week, where DJI was found to have a whole bunch of backdoors on robot vacuum cleaners and whatnot. So lots of nuance there. But we'll see where the supply chain risk discussion actually goes. It feels like the pressure's on, and there's probably more negotiations happening as we speak.
Speaker 1:And so we'll be following the story.
Speaker 2:Yeah. Emil Michael was going through the timeline. He said, Today at 9:04 p.m., no response yet to messages to Dario. Today at 8:25
Speaker 1:Mhmm.
Speaker 2:Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of war. Today, 5:14, Secretary of War tweets the supply chain risk designation. Today, I called Dario's business partner at 5:02 asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me. That's a guess.
Speaker 2:I called Dario at 5:01, no answer. I messaged Dario asking to talk as well. And anyways, he's just arguing that they're not negotiating in good faith.
Speaker 1:Yeah. Let me continue. But first, let me tell you about Figma. Ship the best version, not the first one. With Figma, explore more options, push ideas further.
Speaker 1:And let me also tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. So speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts.
Speaker 1:There were a lot of, you know, anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. He was basically arguing that LLMs, as a class of technology, hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have been better, much stronger communication, for him to say, hey, look. We're Anthropic.
Speaker 1:We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that, but we don't make a product that we'd recommend using for autonomous weapons. And it's tricky for him to try and twist arms here and, because he's in a leadership position, act as the steward of all this. He is an expert in LLM capabilities, but he's not necessarily an expert in DoD capabilities. And so it was odd to hear him painting with a broad brush. He clearly believes, which is fair, it's his belief, that the Department of War should not be using AI broadly. And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems.
Speaker 1:So I thought that was just sort of a missed comms opportunity there. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally, the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated. I think people maybe forgot about that, but there is obviously a lot of nuance and different things.
Speaker 1:Like, if information is public, does that count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There are a lot of things where surveillance is broadly popular. There are other things where it's massively unpopular.
Speaker 1:And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in the court. There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable, but the court gave notice that going forward this should not be used and that the laws need to change. And the judge was like, this is technically legal, but it's not in the spirit, and so we need to revisit this as a country. And that's a lot of what's coming away from this. There's a view of Dario as sort of making this last stand, which in the best case actually just kicks it back to the American people. Because the whole debate right now is, is Dario, like, the god-king corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, government.
Speaker 1:Right? And the good case is probably that, you know, he makes this stink, and his deal sort of falls apart, but then America responds. And the populace votes for what they think responsible use of artificial intelligence technology broadly is. And that would be something that I would certainly stand by as a fan of American democracy. Let me tell you about Okta. Okta helps you assign every AI agent a trusted identity so you get the power of AI without the risk. Secure every agent with Okta. And let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands.
Speaker 1:Let's go back to the timeline. We have Ben Thompson joining us in about thirty minutes. There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece, because I think Dan, IRL underscore Dan B, summed it up pretty well. Do you want
Speaker 2:You can, go for it.
Speaker 1:I'll take a crack at it. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extra-legal, might-makes-right maneuvering, I will ask you again, what did you expect?
Speaker 1:Vibes? Essays? This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb.
Speaker 1:And it tells the story of the scientists that built the atom bomb, and then eventually, that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as a road map for what might happen with AI. And I was struggling with it, because I was like, is it a cautionary tale? Like, we haven't had nuclear war in seventy years. Like, the outcome seemed pretty good.
Speaker 1:It's maybe controversial to say, but I feel like we built the nuclear bomb, which is, like, probably not the best technology, pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been, knock on wood, successful in my entire life and my parents' lives. The bombs haven't fallen since the forties. And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke?
Speaker 2:I don't know. Imagine a different scenario where you have a bunch of private companies that have nukes and there's this constant ongoing
Speaker 1:Seems crazy to me. Don't know. Debate. Yeah. Defend McNukes.
Speaker 6:Well, no. I think it's kind of this, like, weird contrast, because, like Yeah. Like, basically, until, like, last week
Speaker 1:Yeah.
Speaker 6:Dario has been, like, the AI CEO that's been, like, we need government regulation. Totally. He said this again and again Totally. On on on whatever. Yeah.
Speaker 6:But then it's, okay. How do you square that with him saying, we're gonna take this stand against the DoD? Like, it seems kind of like
Speaker 1:It is a little odd. Totally.
Speaker 6:It's it's in contrast somehow. Right?
Speaker 1:Yeah. Yeah. It's like, I don't know. There's just a much better way to handle it, which is, you know, put up billboards. I don't know.
Speaker 1:Like, fund a PAC. Like, do more stuff to actually make the law happen.
Speaker 2:Yeah. And the way that I was personally processing it was Yeah. I saw that the CBS interview had happened. Yeah. This was Friday night.
Speaker 8:Yeah.
Speaker 2:Right? I went to the Paramount app to try to find the interview. I couldn't find it.
Speaker 1:I went to the RSS feed
Speaker 2:and couldn't
Speaker 1:find it either. On YouTube it has a 0.3 views.
Speaker 2:Yeah. So it went out over the weekend. Yeah. Yeah. And then almost in the same session, I'm seeing that we are now at war as a country.
Speaker 2:And so all the kind of blowback against OpenAI, I was processing that of, like, this technology is critical. The government clearly needs it. And now we want the labs leaning into working with the Department of War Yep. At this critical moment in time. Continue on with this post.
Speaker 1:Yeah. One last thing on the nuclear weapons thing. It is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Like, yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say. But, basically, the IP is owned by the Department of Energy.
Speaker 1:The warheads are manufactured at at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and then they answer to the government directly. So these are companies like Bechtel, BWX Technologies, Honeywell and Battelle. And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U. S.
Speaker 1:Government. And so they wound up in this, like, you know, hybrid public private partnership. And I don't know. Maybe I'm left-curving this, but it feels like it's good. It feels like it worked out.
Speaker 1:It feels like like, the nuclear weapons thing is the correct formulation. And I don't know that I would be like, yes. Boeing needs nukes. Like, let's give Boeing nukes. That's great.
Speaker 1:If I have a problem with how nukes are rolled out, I'll buy shares in Boeing and sue them and join the board and try and get the CEO fired if he fires off nukes. Like, that feels weird.
Speaker 2:Continue continue with this. Continue with this.
Speaker 1:Okay. Yeah. We'll close with this. Even now, I hear many of you say something akin to, if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves.
Speaker 1:This is a declaration of war. Given this, of course, Hegseth is taking the action he is now. Did you think I was joking when I referred to this situation as a Thucydides trap? Anthropic is a rising power by your own belief system. While I may share your preference in the abstract, I disdain your faux surprise that this is the resulting trajectory.
Speaker 1:And if the surprise is genuine, I ask you to dig deeper and reconsider the actual consequences of your worldview about what it means for a private company to build ASI.
Speaker 2:Heading over to Palmer, he says, this gets to the core of the issue more than any debate about specific terms. Emil is sharing, prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non western cultural tradition of any sort. Palmer says, this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy?
Speaker 2:Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that leverage differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.
Speaker 2:Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also run into questions like, what level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?
Speaker 2:What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of these determinations vary if the current corporate executive happens to like the dictator or dislike the president? At what level of confidence does the cutoff trigger, both in writing and in reality?
Speaker 2:The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, but they will have carve-outs to operate with autonomous systems for defensive use. But you immediately get to the same issue and more. What is autonomous?
Speaker 2:What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisors. I still believe. And that is why, bro, just agree the AI won't be involved in autonomous weapons or mass surveillance. Why can't you agree?
Speaker 2:It is so simple. Please, bro, is an untenable position that The United States cannot possibly accept. And Emil Michael had said that Anthropic wanted to block searching over public databases as well. Like you might wanna search over LinkedIn to look at recruiting. Right?
Speaker 2:So it's
Speaker 1:like Yeah.
Speaker 2:These sort of like blanket bans Yeah. Are gonna make the product like functionally
Speaker 1:Yeah. It's not really like a blanket ban. It's more just that the discretion lives with the private company. And so you always have that ability to change the terms of use, which is just tricky. It's just tricky.
Speaker 1:Well, at least some people are having fun with it. Roman Helmut Guy says, hi. I'm a private citizen who developed a super weapon potentially a thousand times more powerful than nukes, and now I'm selling it to the government. But I get to choose who they fire it at and how. Everyone, please respect my decision.
Speaker 1:People are all over the place with this. Well
Speaker 2:There was also coverage. David Sacks had shared a clip alongside Beth. We can pull up Mark Andreessen talking about his experience with the Biden administration.
Speaker 1:People are going really, really hard.
Speaker 2:Let me pull this up.
Speaker 1:Iran, the bombing, AWS data centers. Lots of stuff going on.
Speaker 2:I just dropped you guys a link.
Speaker 1:Keith Rabois said, imagine Apple sold computers or iPads to the DOD and tried to tell the Pentagon what missions could be planned on their computers. A lot of people are upset about this.
Speaker 9:Meetings in DC in May where we we talked to them about this, and the meetings were absolutely horrifying. We came out, basically, deciding we had to endorse Trump.
Speaker 2:Mark, Mark,
Speaker 10:Can you add just a little color to absolutely horrifying? What did you hear in those meetings?
Speaker 9:They said, look. AI is a technology, basically, that the government is gonna completely control. This is not gonna be a start up thing. They actually said flat out to us, don't do AI start ups. Like, don't fund AI start ups.
Speaker 9:It's not something that we're gonna allow to happen. They're not gonna be allowed to exist. There's no point. They basically said AI is gonna be a game of two or three big companies working closely with the government. And, you know, I'm paraphrasing, but we're gonna basically wrap them in a cocoon. We're gonna protect them from competition.
Speaker 9:We're gonna control them, and we're gonna dictate what they do. And then I said, well, I don't understand how you're gonna lock this down so much because, like, the math for AI is, like, out there and being taught everywhere. And, you know, they literally said, well, you know, during the Cold War, we classified entire areas of physics and took them out of the research community, and, like, entire branches of physics basically went dark and didn't proceed. And that if we decide we need to, we're gonna do the same thing to the math underneath AI. Wow.
Speaker 9:And I said, I've just learned two very important things because I wasn't aware of the former, and I wasn't aware that you were, you know, even conceiving of doing it to the latter. And so they basically just said, yeah. We're gonna look. We're gonna take total control of the entire thing and just don't don't
Speaker 10:And, Mark, what was their argument? Steelman it for the listener. Why was
Speaker 9:Well, so this gets into all these debates around, like, AI safety, AI policy. There are sort of several dimensions to it, and I'll do my best to steelman it. So one is just, to the extent that this stuff is relevant to the military, which it is, like, if you draw an analogy between AI and autonomous weapons being, like, the new thing that's gonna determine who wins and loses wars, then the analogy in the Cold War was nuclear power, the atomic bomb. And the steelman would be that the federal government didn't let start ups go out and build atomic bombs. Right?
Speaker 9:You had, you know, the Manhattan Project, and everything was classified. And, you know, at least according to them, they classified down to the level of actual mathematics. And, you know, they tightly controlled everything. And look, that determined a lot of the shape of the world.
Speaker 9:Right? And so there's that. That's part one. And then, look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back. It's the exact same dynamic we've had with social media censorship, how it's basically been weaponized, and how the government became entwined with social media censorship, which is one of the real scandals of the last decade and a real problem, like a real constitutional problem. That is happening at, like, hyper speed in AI.
Speaker 9:And, you know, these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And I think they want to use AI the same way. And then, look, I think the third is this generation of Democrats, the ones in the White House under Biden. They became very anti capitalist, and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But I think, quite frankly, the idea that the private sector plays an important role is not high up on their priority list.
Speaker 9:And they think generally companies are bad and capitalism is bad and entrepreneurs are bad, and they've said that a thousand different ways. And, you know, they demonize entrepreneurs as much as they can.
Speaker 2:It's interesting. A Canadian publication, The Globe and Mail, came out yesterday and says, Canada needs nationalized public AI. And Tobi, the greatest Canadian entrepreneur in history, says, deranged drivel in response. Yeah. But, yeah, Elon also piled on to Sacks's take, which, you know, centered around a lot of those staffers allegedly going over to Anthropic.
Speaker 1:It's interesting. We were talking about these alliances that happen. Like, there's the anti Netflix alliance, the anti YouTube alliance. There's, like, a little bit of an odd alliance happening against Anthropic right now. Let's move on over to Netflix and Paramount because there's news in the bidding war. First, I'll tell you about Graphite, code review for the age of AI.
Speaker 1:Graphite helps teams on GitHub ship higher quality software faster. And I will also tell you about Railway. Railway is the all in one intelligent cloud provider. Use your favorite agent to deploy web app servers, databases, and more. Railway automatically takes care of scaling Yeah.
Speaker 1:Monitoring and security.
Speaker 2:We will come back to this story
Speaker 1:Yeah. On Ben Thompson.
Speaker 2:With none other than Ben Thompson in twenty minutes.
Speaker 1:In The Wall Street Journal, in the exchange section this weekend, they have a full bleed article. How David Ellison finally got what he wanted. And I love the subhead. No. No.
Speaker 1:No. No. No. No. No.
Speaker 1:Okay. Yes. He got 10 no's and then finally got it done. And
Speaker 2:Never give up.
Speaker 1:Never, never give up. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word: no. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Brothers Discovery that would give the Ellison family full control of a sprawling media empire. So he came in with an offer of $19 per share and finally got it done at $31 a share. The final Paramount winning offer: $81,000,000,000.
Speaker 2:And again, as we are covering this live, every time that Paramount made an offer, they were very clear that it wasn't their best and final.
Speaker 1:Yep.
Speaker 2:It makes sense that it kept getting ratcheted up even though Netflix obviously played a pretty big role in it ultimately getting priced where it did. Sleep Well says, let me get this straight. Paramount approaches Warner Brothers for acquisition. Netflix puts a higher offer for Warner Brothers. Paramount puts an even higher offer at seven x leverage.
Speaker 2:Netflix declines to match offer. Now, Paramount and Warner Brothers will have to license all their content to Netflix to pay off all that debt. Three d chess. A lot of people were throwing around the succession
Speaker 1:Oh, yeah.
Speaker 2:Moment. Congratulations on saying the biggest number.
Speaker 1:So Paramount will be footing the $2,800,000,000 breakup fee paid from Warner to Netflix.
Speaker 2:Which was paid Friday.
Speaker 1:Oh, it was paid already? Yeah. Yeah. And Netflix stock is up. Paramount stock's also up.
Speaker 1:David Zaslav has to be one of the greatest deal makers in history now. Yeah. Got the absolute
Speaker 2:maximum price Dan Fifer says, so somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them into a messy, long process of unification, and got paid $2,800,000,000 for it.
Speaker 1:Yeah. So I feel like the idea of Warner Brothers licensing content to Netflix to pay off the debt is one possibility. But there are other streamers that they could license to. And so I'm not entirely sure. Like, you could just put The Dark Knight on Apple TV. Like, you could license it to Apple.
Speaker 1:You could license it to Prime Video. There's a whole bunch of different buyers of that content. So it's not like, oh, now because of the economics, Netflix gets the whole library for sure at a good price, licensed forever. And also, there were more synergies for Netflix than just, okay, we're going to put Batman on Netflix. There were other things that they were going to be able to do, new series, new spinoffs.
Speaker 1:Like, they would have more creative control, more creative direction. So I'm not sure if that's 100% right, but it does seem like a good outcome for Netflix and one that I did not predict. I really thought Netflix was gonna get this done, but it didn't happen. So, what does Peter Kafka have to say?
Speaker 2:Peter Kafka at Business Insider had some reporting. Zaslav apparently said the deal may not close. If it doesn't close, we get $7,000,000,000 and we get back to work. There we go. He also said if Warner Brothers is going to survive, they needed to be bigger and we need to be global.
Speaker 1:Yeah. Yeah. I mean, there's a lot of opportunity here. Netflix confirms that they received the $2,800,000,000 termination fee. Somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them in a messy, long process of unification, and got paid $2,800,000,000 for it.
Speaker 1:Okay. So we have a new leak. A new leak. The OpenAI device has been spotted in the wild on none other than Joe Gebbia's head, which is amazing. If we can pull this image up, yeah, either the video or zoom in on the image, it looks like what we saw in that leaked preview of that, Super Bowl commercial.
Speaker 1:There's a device on the table that looks sort of like a hockey puck. I am so excited for what this is. I love new hardware even when it's, you know, still languishing in the early adopter world. I'm a big Apple Vision Pro guy, as all of you know, and I will be very, very eager to daily drive a new hardware product for a little bit. Let me tell you about Turbo Puffer, serverless vector and full text search built from first principles on object storage, fast, 10x cheaper and extremely scalable.
Speaker 1:And let me also tell you about the New York Stock Exchange. Wanna change the world? Raise capital at the New York Stock Exchange. Still gearing up for a banger year at the New York Stock Exchange.
Speaker 2:Do you
Speaker 6:guys think this was leaked on purpose? Because if if you watch the video, the second video
Speaker 11:Yeah.
Speaker 6:It looks like the framing of it looks very much like a Oh. Found footage like Yeah. At the end of the video, he pans the camera, and it's like, I gotta hide the camera. Yeah. It's like it looks kind of planted.
Speaker 1:It's possible. It would be a cool rollout to, like, tease little leaks here and there, build a little bit of attention without doing a look at me campaign. You know, you throw leaked in front of something and it just gets more attention. So it's totally possible. This is some more four d
Speaker 11:Yeah.
Speaker 6:Zach is not saying, oh, look at
Speaker 3:there's a new device right there.
Speaker 6:He's like not, you know, saying it directly.
Speaker 2:It's very like
Speaker 1:Okay.
Speaker 5:I like that.
Speaker 2:Yeah. I can see it.
Speaker 1:Milkman says, just shipped a feature to a client thirty minutes after he asked for it. Big mistake.
Speaker 2:Just shipping. That's the new timeline for every project.
Speaker 1:What is going on over at Semi Analysis? Adi says that.
Speaker 2:He said, I asked my wife what she was reading to fall asleep. She said Semi Analysis. Brother, not gonna make it. Dylan says, that's our wife now.
Speaker 1:Dylan was on an absolute tear this week posting on
Speaker 2:says, when your friends go to the Brazilian steakhouse without you, that's fomo de chão.
Speaker 1:It's true. I had some fomo de chão when you banned me from going there, unfortunately. I was so excited when we were in San Francisco to visit the Bain Capital backed steakhouse, but we had to go to some other place. It was good.
Speaker 2:Getting into the Block news, which happened on Yeah. Thursday, and we didn't get to cover it. That
Speaker 1:happened like six months ago. Right?
Speaker 2:Yeah. That was six months ago.
Speaker 1:Okay. Six months ago.
Speaker 2:Months ago. Seems about right in the AGI age. Yep. Daniel says, what we've really learned from the last five years is that Jack Dorsey runs extremely bloated companies.
Speaker 1:There was some there was some other news on this with Block. Someone posted the mix of yes. So Saket. Oh, it was deleted. Interesting.
Speaker 1:Well, we have it saved. We won't fully dox the account. But I don't know if this is real, because it's been deleted, so it might be fake. I don't know. But most of you have heard about Block's 40% layoffs by now, but the numbers are even worse.
Speaker 1:Engineering was hit harder. We've lost close to 70% of our engineers. The company you once knew as a prolific open source software contributor no longer exists. And so I was wondering, like, they're laying off 40%. How will those cuts be distributed?
Speaker 1:Because the AI narrative, the job displacement narrative, that could be back office people that are processing manual workflows, or it could be software engineers, where now there's a smaller team that's getting more leverage out of AI tools, and so you can lay more off. There's also just the world where you're a mature software company and you have lock in, and you're like, yeah, we actually don't need to ship that many more features. We have sown for so long. It is time to reap. But I am still bloat pilled.
Speaker 1:I still believe that this is somewhat of a unique, bloat-driven situation. But it didn't
Speaker 2:stop the market from absolutely puking on Friday. AmEx at one point was down something like 7%. MWT says, I'm fully on board with spiraling into a depressive episode over the rapidly approaching neo feudalist breakdown of society, but I worked at Square in 2017 and my job had no tasks. I sat on the roof eating free snacks all day with a MacBook. Ben Carlson also calling out that he says, maybe block laying off a ton of employees is a sign that AI is gonna destroy everything or maybe the stock is down 80% from the highs and they over hired and AI is a convenient excuse.
Speaker 2:So yeah, we've called this out so many times over the last year as companies did rounds of layoffs and said it was because of AI related efficiency. But again, it is oftentimes the best possible reason Yeah. Better than saying Yeah. You know, we don't know what we're doing and Yeah. We've been running with 4,000 too many Yeah.
Speaker 2:People for a while now.
Speaker 1:Yeah. At the same time, I mean, has the market continued to like it? Because the stock popped a bunch and it felt like that might cause a continuation.
Speaker 2:Yeah. I mean, it's stabilized. Up 28% over the past five
Speaker 1:days. So it does feel like this could have some sort of contagion effect. A lot of other CEOs looking at this and saying, okay, well, like, I'm at least a theoretical victim of the SaaS apocalypse. Need to do something. I'll do it.
Speaker 1:So we should we could see more layoffs from tech firms. It doesn't seem unreasonable. But at the same time,
Speaker 2:the irony is that it's only Dorsey companies that have run these sorts of mass layoffs. Right? Yes. Yes. Yes.
Speaker 2:I ran the numbers, and this was the largest RIF in S and P 500 history. Yep. Somebody in my comments was sharing Lehman Brothers, which was actually interesting because they went bankrupt. They were delisted the same day, and then the RIFs actually happened over time. Mhmm.
Speaker 2:But a lot of people were shifted around Yeah.
Speaker 12:Because you go
Speaker 2:People transferred over to different jobs. Interesting. And ultimately, the company just ceased to be in the S and P 500. Yeah. Buko
Speaker 1:Oh, so it wasn't in the S and P 500 when the
Speaker 2:They were delisted the same day.
Speaker 1:That's a good point.
Speaker 2:Victory for Jordy. Never questioned. Buko says the cuts are mostly about XYZ, which is of course the ticker, being poorly run, not really about AI. But most other small to medium cap tech is also poorly run. Expect many more cuts. Below, I tweeted that they only needed 60% of their company.
Speaker 2:That wasn't a random number. Pull up any FinTech SaaS chart and you can see that employee count exploded Yep. And demand exploded in 2020. But now, these companies are way too bloated. I did not expect them to cut 40% at once.
Speaker 2:I think it's basically impossible to identify the right 40% at one go. Yeah. So a huge operational risk there. Yep. But maybe better for morale than multiple cuts.
Speaker 2:Who knows? Unprecedented. We now have two examples of this happening with Jack, so it's easy to say he runs a bad, bloated business. But I have been vocal about this. Toast and Clover should not be anywhere near the scale they're at.
Speaker 2:Tidal? Afterpay? Come on. Pretty sure he threw a $70,000,000 party for the team last year. I think it was $68,000,000 for some off-site that they did.
Speaker 2:I also think it's a mistake to define this purely as a Jack issue. As I said, pull up the employee charts and the revenue charts. Better yet, pull up the earnings charts, but for many they are negative, as we all know. These companies are way too bloated, and they are having their clocks cleaned by smaller, more nimble start ups.
Speaker 2:They have to get lean to survive. I think the realistic average number is 20 to 25% for many of these companies. Mhmm. But there are plenty that could cut 40 too. I think this basically has nothing to do with AI, but there's some roles they can eliminate and some where they can increase scope.
Speaker 2:Let's call it 5%. So again, if that now deleted post is real Yeah. And 70% of the engineering team, at least in that person's telling, were cut. But you don't know. That could have just been the open source kind of True.
Speaker 2:Like the open source focused team. Yeah. Right? And that's just like a, hey, we don't have time to contribute to open source if our stock's down 80. Yep.
Speaker 2:Yeah. Moving over to OWN.
Speaker 1:I do think that most CEOs will maybe look at the Block news and say, okay, I need to right size the organization. I need to do some layoffs. But not all of them will be convinced that a 40% cut is the correct move. They might say, actually, we think that 20% here and then 5% there and then 10% there is just better for morale because it's more clear who's still on the team.
Speaker 2:OWN says, I think using AI as cover for right sizing your bloated org is pretty unhelpful. Honestly, this false data point will be cited by every anti AI commentator within the next twenty four hours. This is something I've said. I've seen a number of viral Instagram reels from people saying that the AI Mhmm. AI induced job loss is already happening at massive scale Mhmm.
Speaker 2:And they're pulling up quotes from CEOs that conducted layoffs in 2025 as evidence simply because the CEO said that they were getting efficiency
Speaker 1:Yeah.
Speaker 2:Out of AI.
Speaker 1:Well, let me tell you about Eleven Labs, build intelligent real time conversational agents, reimagine human technology interaction with Eleven Labs. And let me also tell you about Console. Console builds AI agents that automate 70% of IT, HR, finance support, giving employees instant resolution for access requests and password resets. TBPN simulator is here.
Speaker 2:Let's do it.
Speaker 1:TBPN simulator is here. You've been asking for it. There's a data center simulator. There's an insider trading simulator. There's a capybara simulator where you just do nothing and you just sit in the forest.
Speaker 1:But now you have TBPN simulator, and we can play it here on the show. You start out outside of the TBPN UltraDome, and then you control a character who can walk inside of our studio. You see our bathrooms on the left, our couches on the right, our American flag up top. And once you get prepared to go into the actual studio, this is a real recreation of our here we go. TBPN is live now.
Speaker 10:Live.
Speaker 1:And you can experience the joy of being an in person guest
Speaker 2:on And if you're coming on the show in person
Speaker 1:This is a good way to prep.
Speaker 2:This is good way to prep.
Speaker 7:It's a
Speaker 9:great way to prep.
Speaker 2:You should put in ten, twenty hours in here.
Speaker 1:For sure.
Speaker 2:Understand the layout.
Speaker 1:Yes. I also love how accurate it is.
Speaker 10:It's
Speaker 1:very accurate. And it's also accurate to the more recent setup. We recently changed the desk setup for where people sit, and this reflects the new setup. So thank you to Ben and our team who put this together. It is fantastic and remarkably
Speaker 2:just a few hours
Speaker 1:Yeah.
Speaker 2:Effectively one shot. Incredible. Incredible. I've never
Speaker 1:Yeah. I love the details, the tracks on the ground, everything. Just fantastic work. TBPN simulator will be available everywhere video games are made possible, aka the Internet. What happened at Little Caesars?
Speaker 1:Little Caesars Arena had a malfunction tonight where their air horn was blaring for over five minutes straight during the Let's pull
Speaker 13:it up.
Speaker 2:Students back. This video.
Speaker 1:I have not seen this video, but I
Speaker 2:This doesn't sound like a malfunction to me. This sounds like exactly what they should be doing.
Speaker 1:Let's hear it.
Speaker 9:Find out what's going on with his horn, George.
Speaker 14:So, guys, I'm here at the scorer's table, and there was a complete electrical malfunction here at the scorer's table. You see this gentleman here working frantically to try
Speaker 1:Live production's hard, folks. Live production's tricky. Went out.
Speaker 14:The score went out across
Speaker 2:Ben, they're saying your simulator is fake because it has no goalpost.
Speaker 1:Oh, no goalpost. Gotta add that.
Speaker 14:Both coaching staffs, JB Bickerstaff And
Speaker 2:no horse. No horse?
Speaker 1:Okay. Back to Square 1. He
Speaker 6:said it's on the way.
Speaker 1:It's on the way. Okay. That's v two. Maybe that should be DLC. You have to pay $50 for that.
Speaker 1:If you get the all access season pass, we'll include that, but getting the horse is definitely some DLC. Leonardo DiCaprio has been quietly funding the Los Angeles Public Library's Los Feliz Branch, a facility located on the site of the actor's childhood home. That's sweet of him. Inside, the computer room, which is named the Leonardo DiCaprio Computer Center, features several signed posters of films the actor starred in, including Titanic and The Great Gatsby. The tribute filled space has become a distinctive feature of the library, offering both technology access and a glimpse into the actor's career.
Speaker 1:And Brooks Otter Lake says, I like that this is the entire article, and it seems to fully negate the quietly part of the headline, quietly funding the branch. This is awesome. And I feel like if I'm a kid and I go to the library and I see Leonardo DiCaprio, that's gonna inspire me.
Speaker 2:This story went out last week. Do you think the posters are still there?
Speaker 1:Probably. I'd say just double down.
Speaker 2:I'd be worried about those posters.
Speaker 1:Anyway, let me tell you about Vanta, automate compliance and security. Vanta is the leading AI trust management platform. And let me also tell you about Cisco. Critical infrastructure for the AI era, unlock seamless real time experiences and new value with Cisco. And without further ado, we have Ben Thompson of Stratechery in the Restream waiting room.
Speaker 1:Welcome to the show, Ben. How are you doing?
Speaker 3:I'm good. Hopefully, I have the right microphone turned on this time.
Speaker 1:You do, and it sounds fantastic. Thank you so much for joining on short notice. Thank you for writing Anthropic and Alignment. It is a fantastic piece that I think covers all of my questions. But I want to start with, like, just how did you process the weekend?
Speaker 1:How did you get to this particular place? And then, like, what is your key thesis with Anthropic and Alignment?
Speaker 3:I mean, this is one of those ones I don't know if it's good or bad that it came out sort of at the end of the week, so I had a lot of time to think about it.
Speaker 1:Yeah.
Speaker 3:Ultimately, I think it was good because I'm not sure anyone else as explicitly made the point I did.
Speaker 1:Yeah.
Speaker 3:And maybe it was bad because I feel like there's a lot of, like, caveats that maybe, in retrospect, I should have put in the article, that would have addressed a lot of the points that people are upset about.
Speaker 1:Yeah.
Speaker 3:Basically, zooming out Mhmm. This was not a normative article where I'm saying what's happening is good or bad. Mhmm. And that's really the one caveat I really wish I had put on there. Mhmm.
Speaker 3:I mean, I'm out there being accused by, like, Nilay Patel of, like, a full throated endorsement of fascism or something like that. And it's like, relax. Okay? Can I get some credit for the last x number of years? Basically, there is a deep rooted concern that I've had for a long time, and I'm now hesitant to even use sort of EA as a term because it's kind of now politicized, thanks Yeah.
Speaker 3:Thanks to the events of the last week. But a failure to grapple with a world of guns is basically the long and short of it. And I actually think Eliezer has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday. Yeah. And that's actually a point worth bringing up, which is all this stuff is right now in the digital realm, and with robotics and potential other applications, it's obviously being used for military operations.
Speaker 3:It's crossing over into the physical realm. But if AI is as powerful as people say it's going to be, then there are going to be real world reactions to that. And if we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. Mhmm. What would the government's response be?
Speaker 3:And that's not to say that the government response in that case is good or bad. Yeah. Or does it follow sort of constitutional principles or whatever it might be? Obviously, I want them to. On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years.
Speaker 3:Like, so many things in our society assumed a certain level of friction in doing things that computers already obviated, and AI is gonna just do that on steroids. Mhmm. I do think we need new laws. I think all this stuff is correct. And I think the idea that AI being applied to these commercially purchased datasets, for example, is a huge problem that I don't want to happen.
Speaker 3:The concern I have is that if this technology is as powerful as it is on pace to be, unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue as far as who rules us, the democracy issue that Palmer Luckey, I think, very eloquently raised. It's inviting very bad outcomes for those asserting that in general. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan China thing. This has been a frustration I've had with Anthropic generally.
Speaker 3:They talk about, you know, Amodei has been very outspoken in terms of opposing selling chips to China for, in a narrow, you know, respect, very, very good reasons. My pushback has always been, what happens if we get super powerful AI and China doesn't? What are they going to do? Sure. Their optimal thing would be to just bomb TSMC out of existence, because suddenly that becomes optimal even with all the cost that that entails.
Speaker 3:And then what? Then what are we gonna do? Yeah. Like, we're entering this like, I don't like getting into political
Speaker 2:Same.
Speaker 3:posts. It's not fun at all. I'm not having fun with this. It's not enjoyable. I can promise you this. Yeah.
Speaker 3:And some people are like, well, you should've just made the post private. I'm like, no. I actually I really want Anthropic and people associated with this to read this because people have theorized for a while about what's going to happen as AI becomes more powerful. And now it's starting to happen for real.
Speaker 2:Mhmm.
Speaker 3:And I guess over the weekend, part of it was just I felt compelled to say this and was girding myself to do so. And even then, I still I hadn't waded into this in a while.
Speaker 14:It's it's no fun, but it is what it is.
Speaker 1:Can you unpack a little bit more that tweet that you posted, where you did the find on the Dario article for Taiwan and saw that it wasn't mentioned?
Speaker 3:I mean, I've just sort of griped about this in general. I think that
Speaker 1:So do you just think he should be talking about the Taiwan issue more deliberately? He should be messaging that? Like, why is it significant that he doesn't mention Taiwan?
Speaker 3:Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument. I could make that argument if I needed to. Like Yeah. I have advocated the opposite.
Speaker 3:That number one, not only should we be selling chips to China a generation or two behind, which has always been sort of our standard practice with chips, we should also be allowing Chinese companies to fab with TSMC. That is a restriction that has come down. Now these Huawei chips are somehow manufactured by TSMC. Let's not look too closely at it, but we should explicitly be allowing it.
Speaker 1:Okay.
Speaker 3:And the reason for that is I think it is a safer equilibrium to have China dependent on Taiwan Mhmm. Than to try to cut them off from Taiwan Yeah. While we are dependent on Taiwan. Mhmm. Taiwan is 70 miles off the coast of China.
Speaker 3:It's not an ideal position in the world for us to have a dependency on it and China to not have a dependency on it.
Speaker 1:Yeah.
Speaker 3:So this and this is the problem. All this stuff has everything going forward has massive trade offs.
Speaker 2:Yeah.
Speaker 3:The implication of letting China fab with TSMC or the implication of letting them buy NVIDIA chips is that they gain these incredibly powerful AI capabilities that is driving this entire debate. That is, in a vacuum, not a good thing. But nothing's in a vacuum.
Speaker 1:Yeah.
Speaker 3:Everything is a trade off. And in that specific area, I think that just it's repeatedly, again and again, being absolutist about the chip issue when I am frustrated to not see any public comment about the that's not quite fair. He has made comments about, oh, yeah. That would slow down sort of the adoption of AIR the long run if Taiwan got got bombed. I'm like, that's my mind, that's an insufficient consideration of the possibility of Taiwan getting bought.
Speaker 3:Now, again, I'm biased in that regard. I lived there for nearly two decades. But the reason I brought it up in this context is, if AI is what it is, the people with guns are going to want to have a say. Yeah. Whether that be domestically, whether that be internationally, that might be in the context of the US government just taking it, trying to kill your company because they feel you're not cooperating, or it might be in the context of China deciding it has to act because The US is becoming too powerful.
Speaker 3:Mhmm. Because, you know? And it's not a fun debate. I do think the nuclear angle is a good one. Mhmm.
Speaker 3:It has echoes of the proliferation question, of mutual assured destruction, all those sorts of things, and that's just gonna be the reality of the debate going forward. And, again, it's not very fun, but I think it's also irresponsible to sort of run away from it.
Speaker 2:How much attention or what kind of factor do you think the information asymmetry between the Department of War and Anthropic played last week? In hindsight, the Department of War knows they're headed into what is now looking like a drawn out conflict. Anthropic is sitting there thinking, hey, we got this, like, arbitrary deadline. Why do we need to renegotiate this now? And then, going off of Emil Michael's timeline, it sounds like they were still in the final hour trying to make a deal happen.
Speaker 2:And according to Emil, Dario was in a meeting and and was busy and wasn't really respecting the deadline, which maybe he felt was kind of artificial, but in hindsight now looks like it was significant because the Department of War was, you know, taking the country into a conflict and wanted to know, hey, can we lean on one of our AI partners?
Speaker 3:I don't know. I mean, I think it seems pretty arbitrary to have cut them off. I mean, I'm hesitant to speculate. I don't know what was going on. I don't know the angles. And that's why I didn't sort of delve too deeply into it.
Speaker 3:And I also think some of the specifics, like this supply chain risk, are probably overbroad Yeah. And almost certainly the way it was stated in the tweet is definitely overbroad if you actually go and read the statute. And again, this is where I wish I had sort of put in more caveats to say, look, I'm not actually talking about all that stuff. I don't really care. I do care, but that's not the point of this article.
Speaker 3:The point of this article is there's all this talk about alignment. That's why I put that in in the headline. And on one hand, alignment is aligning AI with humanity generally. But for the foreseeable future, and you could have a philosophical argument about the long term viability of nation states in the age of the Internet, much less the age of AI and whatever that might be, that certainly is, you know, a more pressing conversation than probably ever before. Anthropic exists in the context of The United States.
Speaker 3:Mhmm. And that's why I put that quote, you may not be interested in politics, but politics has an interest in you. What is politics? War by other means. You might not be interested in that.
Speaker 3:It is going to have an interest in you. And there's, like I said, a certain long standing frustration of not fully grappling with that fact, having dorm room theoretical arguments about AGI. You go back to that post over Christmas about, like, AGI in, like, a hundred years and no one having any jobs or being worthless or pointless or whatever, which included some implicit assumptions around property rights existing in a hundred fifty years as they exist today. News flash, if that happens, property rights as they exist today are going away. And this is philosophical. That's why I started with the international law concept.
Speaker 3:All these rights, all these laws are subject to the agreement of those governed by them to follow them. Mhmm. And the final say is those who successfully inflict violence. Mhmm. And again, this isn't fun to think about.
Speaker 3:It's not pleasant. You would like to assume we operate in a world of laws and that everyone follows them and goes by them. But to the extent AI is as impactful and powerful as it is, the more these fundamental questions that we thought had been settled for hundreds of years, if not thousands of years, are going to be raised. And this is just the first of several episodes where I think that's going to happen.
Speaker 1:I grew up in sort of the post cold war era, no duck and cover, didn't have a lot of fear of nuclear Armageddon. But Dario Amodei is, you know, a fan of this book, The Making of the Atomic Bomb. And it seemed like he sort of predicted that if AI becomes super powerful, The US might take a similar approach to what they did with the regulation of nuclear weapons. And as I was thinking about that, I feel sort of good about the way nuclear weapons are regulated. Like, I feel like it got the good ending, and we haven't had nuclear weapons dropped in seventy years.
Speaker 1:And it seems like things are going as well as they can there, considering there's this amazing, you know, tremendous, dangerous technology that exists. But it hasn't been deployed, and it hasn't actually, you know, bombed anyone. But how do you think he's processing that book? How do you think we should be processing that idea of the government running the same playbook that they did with nuclear weapons?
Speaker 3:It's pretty interesting. I mean, on one hand, just from sort of a physical perspective, dealing with weights and software
Speaker 2:Mhmm.
Speaker 3:Is very different than dealing with fissile material. Or I guess the super bombs are actually, like, fusion
Speaker 1:devices. Right?
Speaker 3:Yeah. And that is trackable. It is Mhmm. Interceptable. You know when Iran, to take a pertinent example
Speaker 2:Yeah.
Speaker 3:Is trying to build enrichment facilities
Speaker 2:Mhmm.
Speaker 3:All of which makes the problem easier to solve. Yeah. So that's difference number one. Difference number two, and I really wish I had included this, but I cut it so that the article would be tighter, is there is a very interesting point in technological history, which was the early days of Intel. Mhmm.
Speaker 3:And Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government. And the distinction there was, you had guaranteed orders, which was great. The government would take your IP. And in his mind, the more important thing is there was limited volume. Mhmm. And what he foresaw, correctly, was that this was going to be a very upfront, capital intensive process of designing chips.
Speaker 3:You have to design them, you have to have equipment, all of which is in the billions of dollars today. Back then, it was in the tens of millions and hundreds of millions. So you need to find the largest possible market, which was the consumer slash business market. You design for that, and that will accelerate your improvement and your capabilities so much that you will end up having better devices than the government could have ever requested or made for itself.
Speaker 2:Yeah.
Speaker 3:That is at stake on steroids with AI. Yeah. Like, I was talking to someone, like, why doesn't the government just get someone to make their own model? It's like, you talk about government contracts, we're talking, like, single digit billions, versus the amount that's going into CapEx, the cost of these models.
Speaker 3:We're talking, you know, hundreds of millions of dollars for the models and hundreds of billions of dollars, approaching a trillion dollars a year Yeah. In CapEx. That is only sustainable and viable if you're selling to everyone. But that introduces entirely new dynamics, whereas the government built nuclear. It started there, and it started with a lot of assumptions because it was a government program.
Speaker 3:We are necessarily, for economic reasons, because of all the upfront costs entailed, starting with private companies, of which the government is one of many customers. And that introduces the assumption that, well, it's a private company with private property rights and all those sorts of things, all of which I want to be true. Again, I don't like how this is going down at all. The point here is to say there's a good reason why it's not going down that way. And there needs to be cognizance that even though this is a private company that is building the model to be general purpose and for very good reasons wants to put restrictions on it.
Speaker 3:Again, I think the surveillance one is a very powerful argument that I agree with. The problem is that you just need to be aware of, yes, the government is a small customer. The government is also the entity, again, not to be blunt, with guns. Like, you know, why do I pay taxes? Because the law says to pay taxes.
Speaker 5:Yep. No. At the end
Speaker 3:of the day, I pay taxes because, you know, if you really wanna distill down, if I don't, someone with guns will come to my house and throw me in jail. Right? Like like, we don't think about that. But at the end of the day, where do these assumptions and laws and rights flow from? And as long as that is still the case, that it needs to be a decision making factor for these companies.
Speaker 1:Mhmm. How do you think this plays out for Anthropic? It's such a small contract, but it's so important in the zeitgeist. There's a lot of people that are rallying around Anthropic because of this. Yep.
Speaker 1:There's a lot of people that are pulling away from Anthropic because of this. It feels like there is a business to be built that doesn't work with the government but delivers coding models and knowledge retrieval systems and a whole bunch of really valuable products and technology, and it winds up being fine. But at the same time, you don't want this, like, hairy, adversarial relationship with the government to go on for a long time.
Speaker 3:I would like them to sell to the government, and I would like Congress to pass a law addressing these digital surveillance issues.
Speaker 2:Yeah.
Speaker 3:And a lot of people are like, that's unrealistic, which I'm amenable to. But at the end of the day, if you don't have it's legal or it's not legal as your guiding standard, the only alternative is someone has to decide. Yeah. And the implication of that not being a sufficient justification is that means a private executive is deciding. Yeah.
Speaker 3:And if AI is what it is, I think that's going to be I used this word, intolerable. I didn't mean intolerable to me. I meant intolerable to those with power, to have a private executive making those decisions or not. And if we're gonna have this very sort of brute analysis that laws flow from power, AI is a source of power.
Speaker 3:Yeah. So it's not just that, and I think this is where the supply chain thing comes in, again, which I'm not endorsing, but I think that's where the motivation is coming from. The goal isn't, fine, we just won't use Anthropic. I do think the goal is to hurt Anthropic.
Speaker 1:Yeah.
Speaker 3:And if you're not going to be subservient to us, you're not gonna be allowed to build a power base. Period. Mhmm. And again, I'm not endorsing all this. Yeah.
Speaker 3:It's just a matter of, it's not a surprise this is happening. Yeah. And this needs to be a real risk factor that has to be considered in all these decisions.
Speaker 1:Putting on my Dario hat, I'm thinking about a different way to achieve the goals with maybe less acrimony. And I threw out this idea that maybe the better solution is, like, work with the government, but then lobby for a surveillance act and actually
Speaker 3:I wish the White House would come out and say, yeah, there's a digital surveillance problem. Let's work on a bill. Like, I don't Yeah. Probably another regret I have is sort of putting this all on Anthropic.
Speaker 3:That was sort of the angle I was concerned about. And that left me, I think, fairly open to the critique that this is just defending the White House's approach. And again, I was trying to be at a higher level, saying, look, this is what's gonna happen. But yeah.
Speaker 1:I'm just thinking from the perspective
Speaker 3:There has to be a way to find a middle ground here.
Speaker 1:I'm just thinking, like, from the perspective of, like, if the White House is this immutable thing. I mean, if you are, you know, involved in Anthropic, like, one piece of advice would be, hey, okay, instead of going in and having this confrontation with the government directly, go and start a political action committee that lobbies for change in the way that you want through the democratic process.
Speaker 3:Yes. That is the ideal process. I understand why people are frustrated and skeptical about this.
Speaker 1:Okay.
Speaker 3:I used to have this debate a lot in the context of antitrust and aggregators. And one of my sort of theses about the aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators, because antitrust laws have historically been about control of supply, and the power of aggregators flows from control of demand. And so you end up with all these solutions that I call pushing on a string. You're just trying to get people to change how they behave Yeah. And that doesn't work very well.
Speaker 3:Like, Google has always been right. Competition has always been just a click away. The problem is people aren't clicking. And so the solutions focused on the supply angle don't work in a world where the supply is there, just no one's choosing it.
Speaker 1:Yeah.
Speaker 3:And therefore, my prescription is you actually need to pass new laws, not try to retrofit these old laws to this new use case where they don't work. And the reaction is always, that's impossible. We can't pass new laws. And okay. But realize the implications of what you're saying.
Speaker 3:I mean, I saw a tweet. Again, I didn't like it, so I lost it forever. It's one of the most infuriating things in the world. But someone was like, I would definitely rather have Dario Amodei make these decisions than, and to this tweeter's credit, he wasn't limiting it to Trump. To me, this isn't a Trump issue.
Speaker 3:This is an any politician issue. Yeah. He said, I would rather have Amodei making these decisions than whoever comes out of our screwed up democratic process.
Speaker 1:Yeah.
Speaker 3:And points for the honesty, because that's the actual choice that is being put forward. Mhmm. And you could say Congress isn't gonna do anything. Therefore, Amodei should. Just appreciate that that is giving up on the democratic process and saying we should have unelected, unaccountable individuals making weighty decisions.
Speaker 3:And, I understand the sentiment. It's hard to imagine Congress passing laws about anything.
Speaker 1:Yeah.
Speaker 3:But just realize that that implication is quite fraught.
Speaker 1:Yeah. It's a huge change. I mean, I just spawned in believing in democracy, came to understand it, studied economics, and had my belief in the American project reinforced throughout my entire career. And now people really are discussing an entirely different world of governance, which has not been something people have talked about publicly for a very long time, but it is here
Speaker 9:for sure.
Speaker 3:Right. And they always come in on these Trojan horses that are eminently defensible. Again, I'm with Anthropic on the digital surveillance point. I've been concerned about it for years. Been writing about it for ages.
Speaker 3:And it's similar, there is an analogy to the monopoly point. Like, you have all these laws that assume someone has to actually physically go somewhere and tap into a phone line. Yeah. But if you can do it with computers at scale, suddenly all these assumptions that limited what the government could do magically disappeared, not because the law changed, but because we got computers that could do the job of an individual at scale Mhmm. Infinitely.
Speaker 3:Mhmm. And AI, again, is going to do that on steroids. The idea that the NSA, by the way, this is my sort of, like, I had to admit this in the article. Yeah. I was so confused why the Pentagon was so obsessed with domestic surveillance. I didn't realize the NSA was part
Speaker 3:of the Pentagon.
Speaker 2:John and I had the same moment.
Speaker 1:Yeah.
Speaker 3:Yeah. Yeah. You sort of thought about it as, like, an independent agency like the CIA. But that makes a lot of the story make more sense. Right.
Speaker 3:No. Exactly.
Speaker 1:I feel like a lot of tech people are, like, reading the Fourth Amendment today and understanding, like, some of these, like, pretty basic processes.
Speaker 3:Well, yeah. But, like, the loopholes are massive. Like, I'm not denying it. And it's similar to the chip thing with China. Like, my prescription, for Anthropic to give in, is to allow these massive loopholes to be exploited, and for the NSA, allegedly in the service of investigating foreign adversaries but, you know, in the process basically surveilling the domestic population, which I think is bad.
Speaker 3:And the reality is the nature of trade offs is you're choosing between multiple bad options. Mhmm. And at some point, it's like, which team are you signing up for? Yeah. They both suck.
Speaker 3:Choose one.
Speaker 1:What do you think of the messaging around, like, the models themselves not being capable enough to be used in the context that the Department of War asked for? Because I felt like Dario was sort of speaking for all frontier labs. He said that these technologies broadly are not suitable for these missions just yet. I'm not sure that he has all of the information on the other side to know about the efficacy. He certainly understands his models and what they're capable of.
Speaker 3:I mean, I think that, yeah, I would assume they're definitely not capable. I think that point is more of a precedent setting one. I think Anthropic's position is significantly weaker on that point.
Speaker 2:Mhmm.
Speaker 3:Like, at the end of the day, we either trust the military or not to make these sorts of decisions. That's why we have a military.
Speaker 1:Yeah.
Speaker 3:And so I just have a harder time with that one. And I think the digital surveillance point is so compelling for them because, and it may be my personal biases
Speaker 2:Totally.
Speaker 3:I think it's a huge problem. Yeah. There are these various anecdotes. Again, I hate the reporting on these because you can tell, like, which side the leaks are coming from for each of these.
Speaker 8:Yep.
Speaker 3:But, you know, this idea of putting forward these hypothetical examples of, like, oh, you could call us and we'll figure it out then. It's like, no. Come on. Let's be serious about this. Like, so, yeah.
Speaker 3:I think that's a weak argument for them. So that's why I focused more on the digital surveillance one, just because I think it is a very compelling argument in favor of the Anthropic position.
Speaker 1:Mhmm. Mhmm. Jordi, anything else?
Speaker 2:Oh, there's a lot more. What are you gonna be tracking going forward? Obviously, the story is
Speaker 1:Good luck. Stay strong.
Speaker 3:No. I mean, I I the OpenAI angle is obviously interesting. I didn't really get into OpenAI. Yeah. It's hard to parse exactly what's going on.
Speaker 3:It seems to me they have agreed with the Pentagon that the Pentagon will be limited by lawful capabilities.
Speaker 1:Yep.
Speaker 3:And they make their own judgments about weapon usage. And as I understand it, OpenAI is like, we will on our side be free to stop the model from doing digital surveillance.
Speaker 2:Mhmm.
Speaker 3:Which sounds like you're in sort of a jailbreak competition. It's like, we're gonna agree to have a jailbreak competition with the US government, which, again, is an example of how fraught this is, that that's probably the good place to come down on. Now there's obviously these dynamics of competing for the same talent base. Being in San Francisco, you know, this is part of, I think, Anthropic's. Anthropic has a local advantage Yep.
Speaker 3:In that most people, I think, in the industry are with them. And they have a national PR problem in that I think a lot of folks outside of tech don't understand why tech companies always try to resist helping the US government. Mhmm. And so it's kind of an interesting dynamic where I think OpenAI is in step with the broader public and very much out of step with sort of their talent base in San Francisco. And so that's gonna be very interesting to see how that plays out.
Speaker 1:Yeah. It's it's remarkable that Google has stayed out of the fray given all the Project Maven background and stuff. Like, they must be so happy. They're just like
Speaker 3:Well, that's the other interesting thing. This actually goes back to Google, I believe, where Google had the project, I think this is right. Yeah. I think Google had Project Maven, which their employees objected to. Yep.
Speaker 3:And therefore, that went to AWS.
Speaker 1:Yep.
Speaker 3:And then, some combination of, I think, the Pentagon is using Anthropic because
Speaker 1:Their AWS is what? A higher FedRAMP designation. That's right. So that's
Speaker 3:why Anthropic was already allowed for classified content and OpenAI wasn't. Again, I don't know the
Speaker 1:I've studied Maven pretty closely. It's a wild story. I mean, it was similar, like, AI for the military, the same killer robot fears. Google was actually a subcontractor on that project. And what they were actually exposing to the government was TensorFlow APIs that would run on Google hardware.
Speaker 1:And so they weren't actually writing any AI software, but they wanted to effectively classify images from drones in The Middle East, to see that's a car, that's a house. And previously, they had Air Force airmen just sitting there, like, clicking, and they were like, okay, we're gonna automate that. Right. But it was still, like, scary.
Speaker 1:Don't be evil, working with the government, the military. And then there was a backlash. They pulled out. Then eventually, they went back in and had a new head of Google Cloud. Yeah.
Speaker 3:I mean, you know, it's hard to and I speak for myself personally. I obviously have the biased angle because of Taiwan. I have the biased angle where I think, just in general, there is this very naive view of the world that doesn't understand why militaries are important and necessary. And I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset
Speaker 9:Yeah.
Speaker 3:That we have no duty to support the military. And this tension has been brewing for years. Yeah. Which is, are you an American company, subject to American law and, even beyond law, just morally compelled to support the US military Yeah. Or not?
Speaker 3:And there's an equally American sort of idea of moral conscience, being able to say no. That's why we have the First Amendment. Right? This goes into, can the government compel a company to do something?
Speaker 3:It goes back to some of the questions that happened, you know, with the first Trump administration. And, you know, I've been on both sides of this, like, which I
Speaker 2:And this is what Dario said in the CBS interview. He said, we are a private company. We can choose to sell or not sell whatever we want. There are other providers. He's already sort of making this case.
Speaker 1:Yeah.
Speaker 3:Which, again, is a case that I support. Yeah. But the point here is, there's always the question, like with a bubble or whatever: is it different this time?
Speaker 9:Sure.
Speaker 3:And I guess that's sort of the question I'm raising.
Speaker 1:Yep.
Speaker 3:Is AI actually comparable to every other technology that's come along?
Speaker 1:Yeah.
Speaker 3:Or, if it has the potential to be a source of power going forward
Speaker 2:Mhmm.
Speaker 3:It's going to be dealt with as such.
Speaker 1:Yeah. That makes sense. Last question, we'll let you go. How happy should Ted Sarandos be right now?
Speaker 3:I mean, I think he had the killer quote in the last couple of days where I think someone was asking him if this is such a jewel and it's so rare. Like, isn't it a problem that you're missing out on it? Yeah. And he's like, well, have you seen the history of Time Warner? Which I think sounds about right.
Speaker 3:I'm not sure I'd want to be the entity with all the debt that Paramount and Warner Brothers is taking on. I think Netflix has always, in the very long run, been positioned to be the final buyer. Mhmm. Like, who else are content companies going to sell to?
Speaker 1:Yeah.
Speaker 3:I feel like they've been spooked by YouTube a little bit, and they felt a need to push forward
Speaker 1:Yeah.
Speaker 3:To bring the future forward. Mhmm. That was not allowed to happen, but that means their original plan, I think, is still in place. So probably pretty happy, all things considered, I'm going to say.
Speaker 1:It's great. Well, I'm excited to get back to Netflix coverage and more anodyne topics. Yeah.
Speaker 2:Remember, it was on Cheeky Pint you were talking about getting sucked into the I know. Social.
Speaker 1:Here we are.
Speaker 3:So I put that quote at the beginning of my article, you know, you may not be interested in politics, but politics is interested in you. That was about Anthropic, and it was also about me. Yes. Yes.
Speaker 3:Yes. Did you do?
Speaker 1:Welcome welcome to 2026. Well, we thank you for taking the time to come chat with us. Yeah. Great to see you. And fantastic article.
Speaker 1:We appreciate you, Ben.
Speaker 2:Oh, yeah.
Speaker 1:Talk to you soon.
Speaker 3:Thank you. Have a great one.
Speaker 1:Let me tell you about Phantom Cash. Fund your wallet without exchanges or middlemen and spend with the Phantom card. And let me also tell you about CrowdStrike, your business's AI. Their business is securing it. CrowdStrike secures AI and stops breaches.
Speaker 1:And our next guest is here live in the TBPN UltraDome. We have James Beshara from Magic Mind coming on down for a very refreshing, very different pace of interview, hopefully.
Speaker 8:Great to meet you, John.
Speaker 1:Great to meet you.
Speaker 8:We we actually met.
Speaker 1:I believe we met briefly in 2013
Speaker 3:No way.
Speaker 1:Because we were using Crowdtilt for Soylent.
Speaker 8:Oh, well, yes. Yes. Of course. Oh, actually, that was a hell of a meeting. Yes.
Speaker 8:We gave you all a huge, like, 800,000 Yes. Dollar check.
Speaker 1:They they gave us a huge check. A physical check this big.
Speaker 8:Yeah. We printed it. Literally, like, an hour before, I was like, do whatever we need to do to get one of those, you know, big TV checks. Totally. Because y'all had one of the biggest crowdfunding campaigns of all time at
Speaker 1:that point.
Speaker 2:Yeah. Yeah.
Speaker 8:That's right. Unremarkable. I was just chatting with Ajay from our team about Rob.
Speaker 1:He was the one that was the leader on that project.
Speaker 8:That's right. Yeah.
Speaker 1:Yeah, that that was that was such a wild thing because we had applied to get on Kickstarter. And at the time, they said, like, no food products, nothing but, like, board games, I guess, or whatever they were doing at the time. And you guys were
Speaker 2:like Smart.
Speaker 1:We'll help you out, another YC company. You guys built it in a weekend, and it worked flawlessly. Also, I found out, like, you guys were basically running digital ads for us, acting as our Facebook, like, promotion engine, and everything cycled back.
Speaker 2:Crowdfunding platforms.
Speaker 1:It was so helpful.
Speaker 2:Yeah. Actively marketing groups.
Speaker 1:We just didn't have any marketing skills or any real, like, just, you know, we were so busy with other things that if we hadn't if you guys hadn't done that, we probably would have ended that campaign, like,
Speaker 8:way lower. We took I still do take customer obsession to the extremes. Yeah. Yeah. And we saw it.
Speaker 8:It was like, hey, well, this helps both sides. We should we should lean into it. Yeah. The but, yeah, the and for for listeners, for viewers, Soylent was Yeah. I mean, that was so game changing.
Speaker 8:The whole Internet Yeah. Was talking about y'all for weeks. Super viral. And and it was for Y Combinator. And by the way, both of y'all, and I'm I don't know if if someone's just tuning in
Speaker 2:Yeah.
Speaker 8:You know, three months ago, and then they're like, hey, it's just the best hair on the Internet, that's why I tune in. These guys are journeymen in tech, and it's so cool to listen to y'all because it's not a journalist that doesn't know Yeah. How to actually build or what goes into creating a company, a startup, or sees a trend from a journalistic point of view. But you guys Men in the arena. The founder perspective.
Speaker 8:Which is why Yeah. Bringing y'all What do you bring? We have
Speaker 1:a Let's see.
Speaker 8:S ton of Magic Mind for y'all.
Speaker 2:Let's do it. Let's do it.
Speaker 8:As soon as I saw yeah. We should do some shots. As soon as I saw I think it was on a Sam Altman interview Yeah. That Jordy was doing, and he was chugging a Magic Mind.
Speaker 1:Oh, yeah?
Speaker 3:I was
Speaker 2:like, dude. I'm going for a max.
Speaker 5:We had a weekend.
Speaker 8:Let's let's magic mind max it today.
Speaker 1:I love it. So yeah. Yeah. Explain explain what this is. What are we consuming here?
Speaker 8:Yeah. I'll get the ten second commercial out of the way, but you just Is this peptides and amphetamines? Yeah. Exactly. Peptides, amphetamines, and steroids all in one, but you shake it, slam it back.
Speaker 8:Jordy's the king of the slams.
Speaker 1:Delicious. Mental performance shot. Oh, it has some caffeine in there.
Speaker 8:And it does so Max is unique.
Speaker 2:Some of that. And some of that.
Speaker 8:Yeah. Max is unique in that it is the world's first time release
Speaker 2:Mhmm.
Speaker 8:energy shot. So Okay. You couldn't do time release in liquid form Mhmm. Until about two years ago, and we were the first to put
Speaker 2:it in
Speaker 8:the shot.
Speaker 2:What did you learn from building software that you applied to CPG? Oh, my God. I think, like, right away, my experience as a customer has been this, like, iterative approach, basically versioning out the product. The product when you released it versus two years later was, like, night and day
Speaker 8:Night and day.
Speaker 2:In my experience, just a much better product, which is something that, unfortunately, I think a lot of CPG brands they kinda make their product, they ship it, they get it into as many stores as they can, they sell as much as they can, and maybe they don't even iterate that much on the product, which is just, like, extremely high risk from my view. But that's sort of
Speaker 8:It is. It's a really high stakes experiment when you start shipping it to stores and you get into a thousand doors, and then you realize, hey, this is kind of a B plus product. We could make it better. And, yeah, we've chatted about this over the years. Basically, since the beginning, with Magic Mind I've taken, excuse me, everything that I could from Silicon Valley and my background in building software into building a dream company, which meant the first three and a half, four years was just improving, probably 150 iterations of making the product better, better, better before we went into stores.
Speaker 8:Because when you are D2C, and every time I say this out loud, I can't believe it's the case, but as the first D2C energy shot, like, I couldn't believe no one had done it before, but one of the huge affordances is you can have version one, version 1.1, version 1.4. And each time that we made the product better, we saw the retention curve go up, up, up, and then we're like, alright, this thing's ready for retail. And now, as of last month, it's the number one health shot in the country in the natural channel.
Speaker 2:That's wild. Yeah. I remember when I when I first tried it, I was like, okay, I like what this gives me Mhmm. But it hurt my stomach. And then you were like, try it again.
Speaker 2:I tried it again. Yeah. Fixed it. And so I think that
Speaker 8:Oh, God bless the early testers. They're Yeah. It was rough early on. And it was like, hey, I'll eat nails if it'll help my productivity, focus, and flow.
Speaker 8:But it turns out most people wouldn't. And, yeah, it was actually Biz Stone huge shout out to Biz Stone, cofounder of Twitter. Yeah. He's the one that put us in touch with them. Yeah.
Speaker 8:He's oh, really? No way. What college y'all
Speaker 1:go to? Northeastern.
Speaker 8:Oh, no way. Yeah. So brilliant mind, investor. He's the one who put us in touch with this oncology manufacturing company. This is the most over-researched beverage on the planet.
Speaker 8:So we worked with them and utilized this technology to make everything so small that you can make it taste better than any of these ingredients have ever tasted before. So my favorite ingredient is called bacopa, bacopa monnieri Yeah. Decreases impulsivity by up to 50%, which is amazing for
Speaker 1:focus if you wanna be risk on.
Speaker 8:That's right. Yeah. Exactly. If you
Speaker 1:if you want it on the track. I mean, performance takes many different times.
Speaker 8:Your advertisers don't want it, with all the logos that are happening right all over here.
Speaker 1:I I was I was skiing and I saw a guy do a backflip on skis with no helmet.
Speaker 8:That's impulsivity in a beautiful way. He was not on Magic Mind.
Speaker 1:He needs a magic mind to put on the helmet because that's actually extremely dangerous. Do not recommend Yeah. Doing a backflip on skis with no helmet.
Speaker 8:That's a non Magic Mind move. Many people, when you don't wanna jump into social media or the
Speaker 1:Yeah.
Speaker 8:Yeah. 50 texts on your phone, actually, you wanna
Speaker 1:Yeah. Exactly. Yeah. You wanna be less impulsive.
Speaker 8:But it tastes terrible until we found this technology.
Speaker 2:So you took the iterative approach, the R&D, actually brought this kind of, like, hardcore R&D to a category where many people would have, again, just shipped a product and been like, it's good for me, let's run it. Yeah. What did you not take? Because I feel like so much of the way that you run the company is incredibly unique and kind of at odds with maybe how you built your last company.
Speaker 8:There are a handful of things. And this is one of my favorite things to talk about, just how we build the company at Magic Mind. You know, there's that line in literature of once you know the rules, then you can break the rules. And so I spent my twenties at three different startups, and a lot of it was just failure after failure after failure, but learning how other companies did it, how the playbook would be done, and then getting to a place, and honestly just being able to fund it with my own capital, I was like, I'm gonna do this my own way.
Speaker 8:Yeah. So one of the things is we don't have any junior employees. Oh, interesting. About 10 senior employees.
Speaker 1:Yeah.
Speaker 8:Everybody's senior. We have plenty of great contractors, plenty of great agencies that we work with. But because of that, one of the things that we do is no meetings. It is all asynchronous. Once a month, we have a team meeting.
Speaker 8:Mhmm. And people can, if you need it. If you need to go to a meeting, go for it. But the default is asynchronous. We love Loom.
Speaker 8:Emails, texts instead of Slack. I hate Slack. Mhmm. And voice notes. But the senior aspect of the team members, no junior team members, means that the investment is very expensive per employee.
Speaker 8:But, man, it's like a hot knife through butter whenever there's a challenge. You have senior people through and through on every aspect of the business. And I think with my earlier startups, and with most startups, a lot of times you're hiring people that are doing something for the first time and saying, hey, we know you've been a great engineer; we need you to PM as well Mhmm. For this thing, because there's three or four of us.
Speaker 8:With Magic Mind, it was like, no. I want someone that's done this two, three times before and is a veteran
Speaker 9:Yeah.
Speaker 8:That knows how to do this.
Speaker 1:It's super key in a retail role. Yeah. Exactly. Because, like, it's just so hard to get on the phone with a buyer and actually build the trust if you haven't already sold them the last well, you were at the last company. You were the one that brought vitaminwater into Costco.
Speaker 1:So the Costco buyer loves you because you brought him a banger product. You're going to try the next thing. Much harder to just take a cold call and break through.
Speaker 8:And I'm sure you know this with Soylent. Yeah. In fact, what are some of the things that you remember from the first stages of the retail world?
Speaker 1:A nightmare. It was more of a nightmare for us at Soylent; it went better with Lucy. But, yeah, very, very driven by trade shows, relationships, just being in some massive conference room, getting to know people over years and years and years, building trust. The only thing that can really accelerate that is that example of bringing in someone who has a relationship on the other side, where they delivered the goods and they're putting their reputation at stake, saying, the last time, when I was with a smaller company, you took a risk on me. You were successful. We both made money because this product, you bought a bunch of it. You took your limited shelf space, and you gave it to my previous company.
Speaker 1:And it moved, and you made money, and you looked good. You got that promotion. Now you're VP. Do that take that risk again with this next thing. That's always been the best.
Speaker 8:Take the risk, and then also do it in the right sequential way. I think one of the biggest cardinal sins in retail, and I didn't realize this coming from software, is you actually don't wanna move fast and break things. Let's say you get what seems like a dream partner in Walmart. If you don't move Yep. If the product doesn't move and I didn't realize this until getting into retail, and my cofounder William is a genius on this stuff of just avoiding it. We say this internally a lot.
Speaker 8:What makes you good is what you do. What makes you great is what you don't do. The trap that is a doom loop in retail is you expand too fast. Let's say you get Walmart. You're like, hell yeah.
Speaker 8:We got Walmart. This is the 800 pound gorilla, and then it doesn't move. Mhmm. And then you have to start putting in
Speaker 2:It's not like Facebook, you launch an ad on Facebook, it doesn't work, you just wind it down, Exactly. Relaunch
Speaker 8:Now it's like, Walmart's like, hey, by the way, if you wanna stay in here, and you got one bite of the apple, over the next ninety days, you've gotta spend 400K in Why not? In promotional spend.
Speaker 1:Promotional
Speaker 8:spend. You're like, 400 I didn't we didn't expect we got $270 in the bank. You you all aren't gonna pay us for six months because of the payment terms you negotiated. And they're like, well, then we're gonna have to take you off the shelves. So then you go and raise some
Speaker 1:Yeah.
Speaker 8:last ditch effort round. That goes towards it; you don't get quite to the 400, you only get, like, 300, and then it starts this slow doom loop. And then you can't take it off the shelves, because the worst graph in the world isn't flat. It's one that goes up and then down. Your revenue is flatlining or going down, and you can't raise any investment. And I would say, conservatively, three out of four CPG brands get into that trap because they expand too fast.
Speaker 1:Everyone starts D2C. The retail transition is the make or break moment for, like, every new brand, basically, in my opinion. And the hardest part for me is always rethinking the CAC to LTV math. Like, it's pretty easy on ecommerce to think about payback period. Maybe you're not doing ROAS.
Speaker 1:Maybe you're doing LTV to CAC, but you're still looking at, like, a one year payback. And when you get into retail, you're looking at much bigger tickets. It's not a $100 on an ad. It's a $100,000 or a million dollars of commitment Mhmm. With some chain.
Speaker 1:And then you might not see ROI on that because your margins are lower. You might not see ROI on
Speaker 8:that because distracted from the customers, so you don't really know what are they responding to, what messaging are they responding to. Yep. There's a This goes back to the two obsessions that that I have are on the product. Yep. Everything in the product, every ingredient, third party tested.
Speaker 8:Yep. Everything's clinically backed, the exact dosage on the product, and that customer experience. And then the other obsession is on the communication. And one of the things that you can do in D2C that you can't do in retail is you can iterate like crazy. Every word on
Speaker 1:here Yeah.
Speaker 2:Yeah.
Speaker 8:Has been tested with
Speaker 1:Yeah. Yeah. Of course.
Speaker 8:Millions of eyeballs. You go into retail without that that layer or you go in too early and it's a
Speaker 1:have it though and they can just they can just like one shot everything and then just go straight to retail, but it is very rare. And usually, have a retail background.
Speaker 8:Yes. They have a retail background. Yeah. And and one of the things that Survivorship bias.
Speaker 1:There's someone
Speaker 8:they got really lucky perhaps.
Speaker 1:Started some company where, like, their whole shtick was the packaging is gonna be gold, and it's just gonna, like, jump off the shelf at you. And then it did, and they sold the company for, like, a couple hundred
Speaker 2:David David is.
Speaker 1:Oh, David's kinda doing that now. But I was thinking But he's
Speaker 8:a total veteran. David's a perfect example where it's, like, every move is
Speaker 2:from
Speaker 1:What do you think about pricing? I was thinking of, was it Coia that was, like, $12 a bottle or something, and they sold for a great outcome, and it just flew off the shelves. And it was almost like a Veblen good, like a Lamborghini or a Rolex or something, where people saw the higher price and immediately thought, this is a better product, so I'll buy it. And that actually drove sales Mhmm. As opposed to duking it out for, oh, well, we're the same price as the Red Bull, so people will comp us to that. Versus, like, no.
Speaker 1:The price tag is marketing. What do you think about that?
Speaker 8:There's so much discussion around that as an investor in a handful of CPG companies. You can get caught in a lot of just, like, triangulation, and you end up with, like, you know, Homeland style triangulation with the red string. Sure. We'll do this in these stores and this in these stores.
Speaker 1:Yeah.
Speaker 8:I will say Erewhon sometimes gets into, like, $8 shots of Magic Mind, which we're not fans of. But it is the number one shot there, so it is working. So maybe they've got a secret to it. But there is a really simple framework that we use, which is, I wanted it to be the best health shot in the world. Yeah.
Speaker 8:So in terms of everything from the vitamins, the D and the C and all of the B complex for immunity, because nothing will sap your productivity like getting sick, all the way to the bacopa, acetylcholine. One of the things that caffeine does is it's a vasoconstrictor. Yeah. So it will restrict the blood flow to the brain, which is terrible for
Speaker 1:Okay.
Speaker 8:Lateral thinking and terrible for switching context. Mhmm. It's not great for a conversation when you're like, oh, I really wanna absorb that new information. Yeah. But it is great for alertness.
Speaker 8:But if you add in something like citicoline, specifically this Yeah. Supplier we have, Cognizin, which took it was like a two
Speaker 1:year yeah.
Speaker 8:That's really popular.
Speaker 1:I've seen this.
Speaker 8:It's amazing. So it's in Magic Mind. And Cognizin improves blood flow to the brain. Yeah. So that improves the academic term for creativity is lateral thinking.
Speaker 8:Improves lateral thinking. Mhmm. All of these things are you stack them up, they're pretty expensive. Mhmm. So I was like, I just don't care what the price is.
Speaker 8:I want it to be the best in the world, and it's exactly what I had been taking for seven years before even thinking of making it into a product. Then we worked backwards, and we're like, alright, it's gotta be this price. Yeah. In the beginning, $5.99. Now, it's down to about $3.99 because of scale.
Speaker 8:Our batches are about 2,000,000, like, we're producing about 2,000,000 bottles a batch.
Speaker 1:Wow.
Speaker 8:So now, with scale, you can get the price way down. But, man, John, it was a frigging it was pushing a boulder uphill to get people comfortable with that price point in the beginning. But I was like, nope. That's what the price is gonna be.
Speaker 2:Makes sense. What's your relationship like with the Internet?
Speaker 8:Personally? Dude, we do get out daily.
Speaker 2:No, no. Specifically, like, walk me through the last three days of your internet usage.
Speaker 8:Oh, great question.
Speaker 2:Because like I wanna I'm trying to gauge how online you are. Because I feel like the the internet is just a a stress machine. Dude. It feels like it works.
Speaker 8:This is my device.
Speaker 2:Oh, interesting.
Speaker 1:You don't carry a phone?
Speaker 8:Yeah. So this is a Monday. I have it right now because of the GPS to get here, and and just being, you know Jordy and I live a few blocks from each
Speaker 12:other. Mhmm.
Speaker 8:And and so I have the phone, but, like, literally, I didn't charge it last night. It's 10% gonna die. Rarely too? It is. Yeah.
Speaker 8:Okay. And I rarely use this thing. Yeah. The my relationship with the Internet is use it when necessary.
Speaker 1:Mhmm.
Speaker 8:And and then don't use it any other time of the day or week. On the weekend, specifically, we have three little girls, and I go watch only and pay the extra whatever, like $14 for it to have cell service, so that it's basically a dumb phone. And Interesting. And I've got text, and and I've got phone calls on it, but outside of that, it is a dumb phone.
Speaker 7:You can't scroll.
Speaker 8:You can't scroll. And it's a I've been I create apps, like with Replit, to where I like creating an app that brings up quotes of the Bhagavad Gita and stuff like that. It's the only thing. When I sit down on the toilet, I might read a little bit of the Bhagavad Gita on my walk, and then I'm like ready to rock. But I'm not where I was ten years ago where it's like every tiny little moment, let me jump into the amusement park that is my phone, and they get lost Yeah.
Speaker 8:For forty five minutes. And it's a Oh. Gods. Screen time's amazing. Yeah.
Speaker 8:Know. Scroll more. I will say, I listen to without meetings, I can deliberately, and I love I will work from my iPad. Try to do 99% of my work from my iPad. Shout out to Replit there because I got a great iPad app.
Speaker 8:And in there, I will listen to y'all, like this morning, while I'm working out
Speaker 1:That's great.
Speaker 8:Listening to y'all either on my you know, the podcast app or on or on YouTube. But, man, like, it's like the iPad is I'm on going into the offensive. I'm gonna actually create something. Mhmm. The watch is like, I'm gonna be on the defensive side if my wife needs me.
Speaker 8:Otherwise, I'm with the kids. But this is this somewhere in between Mhmm. Where I'm like, I can't really create with this phone, but man, do I get sucked into rabbit holes.
Speaker 6:Let's talk and
Speaker 2:and you get into a situation where, you know, I've got 30 tabs open on my device, which is just like pure chaos.
Speaker 5:Right?
Speaker 2:Right. And these devices, sort of just due to their nature, don't work very well for multitasking.
Speaker 8:The two things that I talk about so often with founders, as an investor in about 100 startups, and I talk to them all the time. Again, it's what makes you good is what you do; what makes you great is what you don't do. So deliberately, for us, really mitigating meetings. Or I end up talking about the devices that I love, and the number one is the Apple Watch as a dumb phone. When you're at the gym, don't bring your phone you can.
Speaker 8:Obviously, most people do bring the phones, but you might get sucked into an email. Certainly, with kids, my goal with them is by the time they're 10, spend twenty thousand hours with them. And and that was scrolling when I was I think I was listening to podcast on my watch, actually, so no scrolling. And this this guest said that that was their goal. This is like a year and a half ago.
Speaker 8:I don't remember the guest. Don't even remember anything about the other the other points that they're making in the conversation, but that stuck with me. Twenty thousand hours before their 10, because and his logic was so sound. He said, he could make $50,000,000,000,000 and never get that decade back
Speaker 1:Yeah.
Speaker 8:Yeah. With the the little ones. And so I was like, man, I wanna hit twenty thousand hours with my kids. And that was basically the moment where I was like, the weekend's no phone.
Speaker 1:Yeah.
Speaker 8:And I'm just gonna be with them in those hours. So I'm gonna be completely with them and not sucked in because I just know myself too well.
Speaker 1:I just was using no phone,
Speaker 2:just Apple Vision Pro mostly. I
Speaker 8:was married in them.
Speaker 1:And I and I used I I vibe coded auto scroll, so I don't scroll. It
Speaker 8:just scrolls yeah.
Speaker 1:And it has all the TikTok and the Instagram Reels and the X. Monitoring the situation.
Speaker 8:Dude, you're virtually there with you.
Speaker 2:Exactly. Exactly.
Speaker 1:I agree. I agree.
Speaker 2:How has your approach to angel investing evolved? Because I feel like angel investing can start out as, like, a fun hobby, a way to blow cash. But then you get caught up in this FOMO, and you're on other people's timelines; you're kind of fighting for allocation at different points. What's your approach?
Speaker 8:It used to be more is better, and it's true. So much of the game of investing is you go wide because of the asymmetric returns. As an investor, you
Speaker 2:don't know who the next gusto is.
Speaker 8:Exactly. And that was
Speaker 1:An angel in Gusto?
Speaker 8:I was yeah. That was my first angel check. Sick. They were in our batch of Y Combinator. I know.
Speaker 8:Was nuts. That's amazing. Shout out gusto. Everybody
Speaker 1:The unified platform for payroll benefits and HR built to evolve with modern small and medium sized businesses. Go sign up.
Speaker 8:Wow. That has never happened before where I've been able to mention angel investment and then Get
Speaker 2:a live ad read.
Speaker 8:Live ad read.
Speaker 1:What's your website?
Speaker 8:Mine? Yeah. Jjbeshera.com.
Speaker 1:Yeah. But the but the company magic mind?
Speaker 8:Oh, magicmind.com.
Speaker 1:Is that on Shopify? It is. That's amazing. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, now with AI agents.
Speaker 2:Continue. Continue. Shopify
Speaker 8:is great. And as we get into the angel investing conversation, I bet a lot of ads will pop up because it is it has been I've been fortunate enough to to invest in a handful of great companies. Cool. But I went wide. Honestly, Gusto did so well.
Speaker 8:Yeah. And and then a few others did really well. Mercury did really well. And and and then I'll tell you my biggest miss.
Speaker 1:What's
Speaker 8:that? OpenAI, and I'll tell you the story. It is it is Can I curse on you? I curse on you? Total.
Speaker 8:Total effing miss. And the other day. It's basically missing 20 Googles. Google went public at a $20,000,000,000 market cap.
Speaker 1:Yeah. Yeah. Yeah.
Speaker 8:So it's missing 20 Googles now with their latest funding. Is straight up missing It is missing 30 Googles. Yeah. 35
Speaker 1:Wow.
Speaker 8:40 Googles. So which is cool. Yeah. It's alright.
Speaker 5:It's fine.
Speaker 8:I never calculated the amount.
Speaker 2:You needed to That's why you
Speaker 1:need the magic line. To clear your mind, low stress. I don't need to scroll.
Speaker 8:And it's there's so many things that I will in life where I'm like, like Yeah.
Speaker 11:There are
Speaker 8:no meetings where I'm like, I know I'm leaving money on the table.
Speaker 7:Yeah.
Speaker 8:I we couldn't I couldn't imagine Yeah. Any other life. Any other life. I I feel very fortunate. But Yeah.
Speaker 8:I'll tell you the story. Yeah. So I've only shared this once Yeah. But it it definitely replays in my head pretty often. So Sam and I Sam Altman and I, we were advising and and building out YC's YC Research for universal basic income.
Speaker 2:Mhmm.
Speaker 8:And this was in 2017. Yeah. So every month, we would meet Mhmm. In a conference room
Speaker 1:Yeah.
Speaker 2:In
Speaker 8:this dingy little office.
Speaker 1:Yeah.
Speaker 8:And the dingy little office, the conference room, was where they were doing open air research.
Speaker 2:Yeah.
Speaker 8:So every month, and I was a full time we had sold my last company to Airbnb, and I was a full time angel investor. Yeah. Full time angel investor. Waking up every day
Speaker 1:Working out
Speaker 8:thinking about, yeah, what are the next game changing And I'm like, yeah, Sam and the team are trying to do this AI stuff and it's research and
Speaker 2:That can be a problem when you're too close to to an operation. Whereas, if you just get a pitch once, you're like, oh, this makes so much sense. But when you're getting all the information at all times and they're like, yeah, we don't really know what this thing's gonna be.
Speaker 5:Yeah. Right.
Speaker 2:And we ran into this issue.
Speaker 8:Has that happened? Friends it
Speaker 2:sounds terrible. I ended up creating a rule of like friend starts a company, like a real friend starts a company, just invest. Even no matter if you know like, oh, they've got this issue with how they operate or they've got this blind spot or Mhmm. You know all the problems that they're facing. You just if you're too close to it, you can just overthink it.
Speaker 8:Dude, I told Ahmad at Mercury when he was like because we had built financial technology and he's graduate we're at the Airbnb cafeteria and he was and he told me the idea for Mercury and wanting to start a bank for stripes. And I was like, don't do it, dude. And I spent forty five minutes trying to convince him not to do it. And and then, like, two days later, he was like, I appreciate that because we kept texting about it. He's like, I appreciate the input, but I'm gonna do it.
Speaker 8:And I was like, well, okay, I'll invest.
Speaker 1:There you go.
Speaker 8:This is not a good idea. Yeah. And good Lord, was I a total idiot. But, yeah, the OpenAI, and I was sitting every month saying no again and again and again. And even with the whispers of, yeah, well, I think we're gonna have to spin it out, make it a a for profit, and and would you wanna invest?
Speaker 8:I was like, AI? And, I'll be honest, smart investors got in my head who were like, AI is so far off
Speaker 2:For sure.
Speaker 8:That it's like, that's like a 2035 thing. We're so far off. Mhmm. And I was a total idiot. Well.
Speaker 8:40 Googles later. Still think about it.
Speaker 1:40 Googles later. But plenty of other opportunities, plenty of other investments, plenty of other products.
Speaker 2:Congrats on all the progress. It was good. It was good to be humbled a little bit. You're on too much of a hot streak. It's good to be like, okay.
Speaker 2:I gotta lock in.
Speaker 8:That is the benefit of building, I think, anybody listening. I think it's what makes you guys so good at what you do. It's what
Speaker 5:Thank you.
Speaker 8:I think makes any creator really good at what they do. There are very few things like it. It's like going to the gym. You think you're going there for the gains, but man, what you really get is the ego dissolution. It is so Sure. It's so humbling. Yeah.
Speaker 8:And any time you put yourself out there, you think you're doing it for the gains,
Speaker 3:but Mhmm.
Speaker 8:You fast forward ten years later
Speaker 2:Mhmm.
Speaker 8:And you're... The journey. Yeah. The journey. The ego dissolution, the humility that comes with it, that only comes with doing it. Because, man, if you're just
Speaker 1:Yeah.
Speaker 8:Number two or number 2,000 at a big company, you're just living in your head like, I would have done it this way. I would have done it that way. But, man, by putting yourself out there, it grows your awareness in a pretty, I'd say, invaluable way.
Speaker 1:Well, it's a great journey. Well said. Thank you again for the oversized check, and thank you for the oversized delivery of MagicBind.
Speaker 8:Thank you all for doing what you do in such
Speaker 1:a unique way. We appreciate you.
Speaker 2:Thank you for powering the show.
Speaker 8:Dude, thanks for
Speaker 1:We'll talk to you soon.
Speaker 8:Doing the show for us.
Speaker 1:Let me tell you about Gemini 3.1 Pro. Gemini 3.1 Pro is here with a more capable baseline. It's great for super complex tasks like visualizing difficult concepts, synthesizing data into a single view, or bringing creative projects to life. And let me also tell you about Restream. One livestream, 30 plus destinations.
Speaker 1:If you want to multistream, go to restream.com. And up next, we have John Quinn live in the TVPN UltraDome for the second time. Thank you so much for making the trip down to our studio. Welcome back. Welcome back.
Speaker 1:John Quinn, of course, is the most feared lawyer in America. We are huge fans. Tyler over there wears that hat every day. He is obsessed. You have a lot of fans.
Speaker 5:I got a couple for you guys.
Speaker 1:Yes. And, you know, I'm your neighbor. I live right next to you, very, very close.
Speaker 5:I know you went to school with one of my kids.
Speaker 1:I did. I went to Polytechnic.
Speaker 5:Yeah. I don't know which one. I had five who went there.
Speaker 1:I was in the same class as Jamie.
Speaker 5:Oh, okay. Yeah.
Speaker 1:Yeah. It was a great time. Anyway, we're not here to talk about life in Pasadena. We're here to talk about tariffs. We're here to talk about law.
Speaker 1:What have you been tracking? How have you been processing the back and forth on the tariffs this year?
Speaker 5:Well, ten days ago Yeah. The United States Supreme Court ruled that President Trump didn't have the legal authority to impose these tariffs Mhmm. Which were kind of the cornerstone of his international trade policy, domestic policy maybe also, and provided a lot of funding; he collected $600,000,000,000 worth of tariffs. He imposed those tariffs under something called the International Emergency Economic Powers Act, which, under the administration's interpretation of it, gave him a lot of discretion. I mean, we saw this, you know. One day it was 35%, then it was 50%.
Speaker 1:It's all
Speaker 5:of this. There were two categories of tariffs. There were the fentanyl tariffs that affected some countries, Mexico, Canada, a couple of others. And then there were the liberation day tariffs, which was across the board, different deals for for different countries. It had never that statute had never been invoked before or used to impose tariffs.
Speaker 5:It was challenged in the Court of International Trade in New York. He lost there Mhmm. And ended up going to the United States Supreme Court. The argument was a couple of months ago. The argument didn't go very well for the administration.
Speaker 5:Reading the tea leaves, I think a lot of people thought the administration was gonna lose. And then, sure enough, the decision came out on February 20. Now, a lot of people thought the court would rule that these really weren't emergencies. The emergencies he was relying on were fentanyl and the balance of trade being an economic emergency. Yeah.
Speaker 5:And a lot of people thought the court was gonna say, you haven't shown that that's an emergency. But that's not what the court went off on.
Speaker 1:Oh, okay.
Speaker 5:The court ruled that you simply couldn't impose tariffs under this statute. It had never been done before. There's no reference to tariffs or customs or taxes or anything like that.
Speaker 1:In the emergency economic
Speaker 5:act. It's Got it. And the court said, wrong statute.
Speaker 1:Yeah. And zooming out, where do tariffs typically come from? What's the correct path?
Speaker 5:Well, there's a there's a number of different paths. There's Mhmm. Section one twenty two, which is now invoked. Okay. There's a national security regime where you can impose tariffs, but it requires actually an investigation that's been done by, I think, the Department of Commerce or some other agency.
Speaker 5:A record has to be created. The president can't just, on his own, say, you know, you're gonna pay 35%. You, India, you're gonna pay 50. Yeah. So there is a whole other regime available. And the day he lost, you know, he was ready for this.
Speaker 5:He announced there's another statute, section one twenty two Mhmm. That permits him to impose up to 15% tariffs
Speaker 2:Mhmm.
Speaker 5:For a hundred and fifty days.
Speaker 2:Okay.
Speaker 5:After a hundred and fifty
Speaker 2:And then, theoretically, you could just pause for a second and then start up again. Is that true?
Speaker 5:I don't think so. Okay. I think the courts would see through that. But after a hundred and fifty days, Congress has to
Speaker 1:act.
Speaker 2:Yeah.
Speaker 5:So, he can do it for a hundred and fifty days, and then to extend it, Congress has to
Speaker 2:act.
Speaker 1:And is that passive in the sense that if Congress doesn't act, they don't continue? Or does doing that for a hundred and fifty days force a vote in Congress?
Speaker 5:I don't know that it forces a vote. I think Congress has to take affirmative action.
Speaker 1:They're the ones who have the authority.
Speaker 5:He was ready for this. The day he lost in the Supreme Court, he announced, okay, I'm invoking section one twenty two, 10% tariffs on the world.
Speaker 1:Yep. And he could've gone up to 15.
Speaker 5:Well, he did it the next day. Okay. He said, I thought about it again: 15%. So that's where we are right now, but the clock's running.
Speaker 2:And what about the fact that the Supreme Court didn't give any guidance on how the cash generated from the tariffs should be returned? Right? So there's also kind of a gray area on
Speaker 5:They didn't address that. But, you know, honestly, guys, that's pretty straightforward. These $670,000,000,000 in tariffs Mhmm. There are a lot of claims. Many, many companies have already filed claims.
Speaker 5:And there aren't a whole lot of defenses to those claims. Okay. I mean, he has said he's going to do everything he can essentially to delay it. Yep. Maybe so it's not on his watch, the next administration.
Speaker 9:Yeah.
Speaker 5:But there aren't a whole lot of defenses. These are good claims. There's one issue: in some instances, the tariffs have been passed on to consumers. So the government might presumably have a defense that, hey, wait a second, you don't get all this back.
Speaker 1:Your profits didn't go down. Yeah. There was no damage.
Speaker 5:Passed it on, so you saw the revenue anyway.
Speaker 1:Interesting.
Speaker 5:And that raises a question about whether these secondary parties parties that paid for things that included these tariffs, which have now been ruled illegal, whether they can bring claims. Yeah. That's an interesting question. So I think Okay. I think we're gonna see some of those claims
Speaker 1:as well. So for a company that's trying to get a big refund, maybe in the hundreds of millions of dollars for some of these companies, is the first step just to file a claim, or do they need a lawyer? Is this gonna be a lawsuit on a per company basis? How do you think that
Speaker 2:Yeah.
Speaker 5:I think it's a lawsuit on a per company basis. They have to
Speaker 3:A lot of lawsuits.
Speaker 5:They have to file a claim.
Speaker 1:Okay.
Speaker 5:Lots of lawsuits. A lot have already been filed. We're filing lawsuits.
Speaker 1:Okay.
Speaker 5:I mean, one of our partners, a guy named Dennis Haronitsky
Speaker 1:Okay.
Speaker 5:Is representing a number of clients already and bringing these claims. Yeah. But what folks need to do, if you want to pursue this: get your documents in order. Okay. Get your records in order.
Speaker 5:Yeah. Figure out what you paid Mhmm. When. Yeah. And, yes, you probably need to engage a lawyer.
Speaker 5:I don't think you wanna do this pro se, you know, representing yourself.
Speaker 2:Yeah. Exactly. Yeah. Yeah.
Speaker 5:Yeah. You don't want any hallucinations to screw up what is otherwise a powerful claim. Yeah. The claims are brought in a very obscure court known as the Court of International Trade. Okay.
Speaker 5:It's in New York. Okay. You bring the claim there and you you know, you pursue it.
Speaker 1:Yeah.
Speaker 5:The government has
Speaker 2:How staffed up is that court? How could they process what could end up being a
Speaker 5:it it could be an administrative burden, but the issues are pretty simple here. Okay. I mean, it's you know, he invoked this act. Yeah. They paid the tariffs.
Speaker 5:Yeah. The supreme court says he didn't have authority to do that. Yeah. And, you know, the government refunds tariffs Yeah. All the time.
Speaker 5:Yeah. It's it's kind of a routine thing. Yeah. So, it's not like this hasn't happened Mhmm. Before.
Speaker 1:Can you take me back to the Supreme Court battle and help me understand what's involved in fighting a case in front of the Supreme Court? How much do lawyers know about the positions of the various Supreme Court justices before they walk into the courtroom? How does all of this play out? What goes into a Supreme Court case? How high stakes is it?
Speaker 5:Oh, I think this was super high stakes. I mean, this was a major decision for this administration, maybe the biggest trade tariff decision ever. And I don't know that there were a lot of decisions by these justices before that could help people predict what their reactions would be to this. The majority opinion was by Chief Justice Roberts. There was a dissent by Brett Kavanaugh and some others.
Speaker 1:But,
Speaker 5:you know, in preparing for it, I don't think it was unique to any other Supreme Court argument. Yeah. Obviously, that's the biggest of the big leagues, so prepare carefully for those arguments.
Speaker 1:How quickly oh, sorry.
Speaker 2:Go for it.
Speaker 1:How quickly did the legal community come to a consensus around the impact and longevity of the tariffs? Because from our perspective, the market reacted violently on Liberation Day. It was effectively the worst day in financial markets since COVID. But it feels like you've been able to predict that these tariffs might get struck down sooner than maybe most of the financial community. So what was the process like for the legal community to reach a consensus around what might happen at the Supreme Court level?
Speaker 5:I think it was pretty straightforward, especially after the oral argument before the Supreme Court. Okay. That went so poorly.
Speaker 1:Oh, interesting. So it wasn't that months ago, before oral arguments even started, people were looking at precedent and how the different justices No.
Speaker 5:Sure. Sure. They were. Okay. I mean, you know, the the administration had lost below Sure.
Speaker 5:In the court. Yeah. So that, you know, people knew that this is definitely in play.
Speaker 3:Okay.
Speaker 5:So, I mean, I think the issues were pretty straightforward. Sure. I mean, does this act give him the authority?
Speaker 1:Yeah.
Speaker 5:And as the Supreme Court found, there's no history for it.
Speaker 1:Yep.
Speaker 5:It had never been invoked before for this purpose. Mhmm. There's no reference to tariffs or customs in the act. Yeah. So as legal questions go, you know, I don't think it's that complicated.
Speaker 1:Okay. How do you think about the speed, the time between Liberation Day and the Supreme Court ruling? That felt like almost a year. Is that deliberate by the administration? Is that just a function of how busy our Supreme Court is?
Speaker 1:Like, why does it take so long to actually get a decision from the Supreme Court?
Speaker 5:Well, some decisions take longer Yeah. Than that. Yeah. Look. Everybody knows, you know, the Supreme Court reads the newspapers, as somebody said during the FDR administration.
Speaker 1:Oh, that's funny.
Speaker 5:They read the tea leaves. There's no doubt they knew how important this was.
Speaker 1:So they know what's coming even before the case
Speaker 5:I'm sure they see it coming.
Speaker 1:Okay.
Speaker 5:But in terms of why it took time between the oral argument and the rendering of the decision, it was, you know, people were waiting, wondering. But it wasn't an extraordinary amount of time.
Speaker 2:Yeah,
Speaker 5:yeah. But I think because of the known repercussions of this decision, the court wanted to be really, really careful and straightforward.
Speaker 1:Yeah. Are there any other big new trends in cases that you're tracking that might make it to the Supreme Court this year? I'm thinking particularly of some of the artificial intelligence debates that we're having. We were just talking to Ben Thompson about the relationship between Anthropic and the Department of War. And there's a lot of folks that are asking for clarity on the regulatory side, whether an act of Congress or new laws, new leadership, more messaging.
Speaker 1:But the Supreme Court could play a role there. Is there anything that you're tracking in terms of, like, upcoming cases that will be particularly consequential for business or technology?
Speaker 5:Well, we all know about this issue, about using copyrighted materials to train large language models. Yeah. And we have cases pending, you know, relating to literary works, visual works, musical works. There are dozens of these cases percolating their way through the courts. Yeah.
Speaker 5:There have been a couple of decisions up in San Francisco now, the early decisions at the district court level, the trial court level
Speaker 2:Mhmm.
Speaker 5:Where the courts kind of came out. The decisions were kind of limited because of the unique circumstances of those cases. One was a Meta case, and I'm forgetting what the other case was, where they prevailed at the district court level.
Speaker 2:Mhmm.
Speaker 5:But before these cases can get to the Supreme Court, they have to go through that process. A decision at the district court. Yep. Then you may have a court of appeals. Yep.
Speaker 5:And then to the US Supreme Court. I mean, you know, our Supreme Court doesn't hear very many cases.
Speaker 1:How many do they
Speaker 5:hear? I think they do around 80 cases a year.
Speaker 1:Oh, wow.
Speaker 5:You compare that, like, to India, where the supreme court hears thousands of
Speaker 1:cases a year.
Speaker 5:Yeah. Of course, they have hundreds of judges on their Supreme Court. Yeah.
Speaker 1:Yeah. But,
Speaker 5:yeah, I think there are definitely issues in the AI world that could reach the Supreme Court.
Speaker 1:So walk me through the thought process. If you were advising me, and I believe that an AI lab has trained on my intellectual property, and that lab is well funded and they come to me with a settlement, why would I want to turn the settlement down, fight it out, and go the distance all the way to the Supreme Court? What are the different trade offs that plaintiffs are making right now?
Speaker 5:You need to create a downside
Speaker 6:Okay.
Speaker 5:For the AI company.
Speaker 1:Okay.
Speaker 5:You know, they've got to see that you have the staying power, that you're prepared to litigate this case.
Speaker 1:Yeah. So you need deep pockets. So if you're just a single author of a book and you're not independently wealthy, they're probably gonna say, hey, you'll take the settlement. But if you're a class of
Speaker 5:Well, depends on what the settlement is Sure. I guess. Yeah. But my experience is that defendants don't write checks until they see a downside.
Speaker 7:Got it.
Speaker 5:And so that involves... you really have to establish your credibility Mhmm. That we're serious about this, we're willing to pursue it.
Speaker 1:Mhmm.
Speaker 5:Anthropic, you know, settled the copyright class action for a billion, maybe 2,000,000,000. But those were very good lawyers who had put together a case, and they had gotten Anthropic in a position where they were facing whopping damages. If they lost at the trial court, it could have been much, much more.
Speaker 1:Much more.
Speaker 5:So that's the calculus. You kind of have to create a downside for
Speaker 1:it. Yeah. And what would the economic damage calculation be? Is it that I'd look at the profits that they've made from it, or the revenues? What sort of economic case should I be making if I'm trying to extract the maximum amount of damages?
Speaker 5:Yeah. I mean, under the copyright laws, there are specific rules about what's recoverable
Speaker 1:Sure.
Speaker 5:In a case of copyright infringement.
Speaker 1:Sure.
Speaker 5:And that's the theory here. Okay. That this is copyright infringement: they used the plaintiff's copyrighted materials
Speaker 2:Yep.
Speaker 5:In order to train the large language model. There are statutory damages, which are a fixed amount per infringement. Or there are theories under which you can recover the profits they got from it. But there are various rules about how damages are measured.
Speaker 1:Yeah. Yeah. That makes a lot of sense. Geordie, sorry.
Speaker 2:Give us the one zero one on what it's like to be in an active case or litigation with the federal government. I'm sure we've talked about it over the many years. Was it Palantir or Anduril that both
Speaker 1:Palantir and SpaceX both sued the US government over contracts that, in their minds, were not fairly bid. They didn't have a fair shot at displacing an existing program.
Speaker 2:Yeah. But but but let's say you you have a a client that is considering taking an action against
Speaker 1:suing the government.
Speaker 2:Suing the government. How do you walk them through it? Obviously, there's a bunch of details case by case, but what's the general framework that you operate under?
Speaker 5:Look, there are good things and bad things about litigating with the government. We sue the United States government all the time. We had a case relating to Obamacare. There was something called risk corridors
Speaker 8:Okay.
Speaker 5:Where insurers under Obamacare were assuming liabilities for markets that they had no history with. They didn't have a basis to underwrite it.
Speaker 8:Okay.
Speaker 5:And there was a term put in Obamacare that said, okay, we, the government, will backstop you, the insurance companies.
Speaker 1:Sure.
Speaker 5:So if your you know, if your losses are above a certain amount Yep. You know, we'll fund that. Well, that was never funded. Mhmm. And that was never paid.
Speaker 5:Mhmm.
Speaker 2:Basically, the bill was coming due, and the insurers were saying, hey, mister government?
Speaker 5:Yeah. You know, interestingly, many of the insurers weren't eager to stick their heads above the parapet Yeah. And sue the United States government. Yeah. But we got involved and we brought a class action on behalf of a number of insurers.
Speaker 5:It took some persuading to get insurers to participate.
Speaker 1:Yeah.
Speaker 5:And ultimately, we recovered like 7 and a half billion dollars. Wow. Now, the good thing about suing the U. S. Government is if you get a judgment, it's not like you have to go out and hire investigators and find the assets.
Speaker 9:Oh, sure.
Speaker 5:If you have a judgment, there's a window in Washington you can take your judgment to, and they give you the money.
Speaker 1:You just get a check for 7 and a half billion?
Speaker 2:Is it a drive through? Like a McDonald's? That simple?
Speaker 5:They appealed and they lost. Okay. But once the dust had settled Yeah. Yeah. They give you the check.
Speaker 5:Wow. We have a case now. Sure. I mean, we represent Harvard University. Sure.
Speaker 5:Versus the government. Yeah. We sued Homeland Security on the ban, which I thought was ridiculous, on foreign students coming to Harvard. Yeah. So we sued Homeland Security on that.
Speaker 5:We sued Health and Human Services on the defunding and the cutting off of grants for research
Speaker 2:Yeah.
Speaker 1:We heard about that.
Speaker 5:Yeah. So, both those cases we won at the trial court level, and they're up on appeal now. The government is appealing. So, litigating with the government, you've got to assume they're well resourced. I mean, I don't know how many hundreds, probably thousands, of lawyers there are in the Department of Justice.
Speaker 5:So they're going to show up. They're going to advance the government's position. And so they're not just gonna lay down.
Speaker 1:Can you describe the level of back and forth in career paths between Quinn Emanuel and the Department of Justice or the government? Are there folks that you're recruiting from the government occasionally? Are there folks that leave and do a tour to understand what life is like on the other side? Or are they two separate communities?
Speaker 5:No. It kind of goes both ways. It does. At our firm, we have about 25, 30 lawyers
Speaker 2:who Yeah.
Speaker 5:One time or another were prosecutors, U. S. Attorneys, Assistant U. S. Attorneys, or
Speaker 1:Yeah.
Speaker 5:High ranking officials in the Department of Justice.
Speaker 9:Sure.
Speaker 5:And a youngish lawyer sees a job in the U. S. Attorney's Office as a good job, great experience.
Speaker 1:Yeah.
Speaker 5:And those folks are in demand at firms like ours.
Speaker 1:Sure.
Speaker 5:And then, on another level, senior people, sometimes if they're interested in government service and they're connected, because a lot of this does involve political connections, you can get an appointment. We've had people in our firm who have appointments at very senior positions in the Department of Justice. So, it can go back and forth.
Speaker 1:Yeah.
Speaker 5:I will say another thing: right now, this Department of Justice is really busy. They're facing all kinds of cases. And, you know, based on what I hear and what I read, they're kind of resource strapped.
Speaker 12:Yeah.
Speaker 5:So I'm I'm not suggesting they're not gonna show up and defend, but
Speaker 1:Yeah.
Speaker 5:I mean, like
Speaker 1:Does that mean your associates are getting more calls to come over?
Speaker 5:You know, we've had a couple of associates go to work Okay. Yeah. At the Department of Justice in the recent past. Yeah. But that's always been true.
Speaker 5:Yeah. Yeah. It's a good experience.
Speaker 1:Yeah. Yeah. That makes sense.
Speaker 2:How is AI, in your view, impacting the legal industry today? Not in the future, not how the impact will evolve over time, but what are you seeing and hearing from colleagues or friends at other firms?
Speaker 5:Look, we are wordsmiths. Right? We work with words. We write things. And so obviously, large language models are something that is going to change how we work.
Speaker 5:They may change the whole structure of law firms. Big law firms historically are kind of described as pyramid in structure. You have senior people and then you have a lot of junior people. A lot of what the junior people have done in the past can now be done by large language models. It's not like they're going to give you a work product that you can then file with the court and use.
Speaker 5:I mean, there are hundreds of cases where lawyers have gotten in trouble with courts by filing things that cite cases and laws that don't
Speaker 1:Don't exist.
Speaker 5:So, that is definitely a thing. The hallucinations, that's a huge risk factor, and that's really on the lawyer. I mean, there's no excuse for a lawyer to say, oh, I relied on AI. No. Yeah.
Speaker 5:You sign that thing
Speaker 2:Yeah.
Speaker 5:And that brief, you're responsible. It's your integrity on the line. So, we're nowhere near that, I don't think, at least in our practice. We do complex litigation. There may be practices with a lot of repetitive litigation where you can get output that's ready for filing.
Speaker 5:But we're really not seeing that yet. But we've developed in house at our firm a platform, a methodology for taking the large masses of data that we work with, all the documents that are produced in discovery, all the testimony, all the contracts, and we organize that. It's built on the Claude Enterprise platform. And we've done this as lawyers. It's a system that's proprietary, developed by lawyers for lawyers. I think if you just turn a young associate loose with Claude or ChatGPT, you're not optimizing the technology.
Speaker 5:But we take all the data and we structure it the way lawyers work. Sure. So it creates work streams. Like, what do we need to do? We know what we do in every case.
Speaker 5:We prepare examination outlines. We prepare expert witness reports and the like. We prepare opening statements. So we structure the data in a way that creates these work streams. And I really think that gives us a big advantage.
Speaker 5:It's not something engineers have created. It's lawyers knowing what lawyers need, having designed a way to structure the information.
Speaker 1:Yeah.
Speaker 5:And, you know, we're using that with great success. I mean, in trials now, in the middle of trial, imagine somebody's on the witness stand. Yeah. You can ask the AI, what's the best evidence that so and so just lied about that?
Speaker 1:Wow.
Speaker 5:Press a button
Speaker 1:It just does it.
Speaker 2:And you
Speaker 5:get an answer. So most of those things you will have thought of. Yeah. Some of them make no sense. Sure. But there'll be a couple of gems in there.
Speaker 5:Yeah. Yeah. You know, things you might not have thought of, lines of attack that you hadn't thought of.
Speaker 1:That's great.
Speaker 5:That's extraordinarily powerful.
Speaker 1:Yeah.
Speaker 5:So what we're our goal is to get to a point where the AI yields a work product that's like 80% or 90% there.
Speaker 2:Yeah. Which is what an associate is typically doing today. Right? It's not like the best associate is hitting it out of the park every single time. They're getting a good, solid chunk of the way there.
Speaker 5:You're absolutely right. And so lawyers can focus on what they do best, making sure that last mile, the last 20%, the last 10%, is as good as it can possibly be.
Speaker 2:How do you think this affects the job market for lawyers at the early stage of their career? Because in some ways, yeah, their work might be being replaced. But at the same time, given that AI is very good at generating words and will be able to generate entire lawsuits, you can imagine a dystopian world where the number of cases that get brought is a hundred times higher than today.
Speaker 5:I think that's true and that's something that people don't talk about a lot. There are AI companies, AI native companies out there that essentially identify claims. So, they'll have a database that has information on businesses of all kinds. What licenses do they have? What licenses don't they have that they should have under the law?
Speaker 5:I mean, you can just imagine, if you can boil the ocean. Yeah. These companies, you can subscribe to them Yeah. And they'll serve up: here's a class action for your consideration.
Speaker 5:We've identified the claim and here it is. So, I actually think there's a potential that we may see more litigation as a result of AI. Yeah. On the other hand, I think resolution of cases may be faster. Sure.
Speaker 5:Because both sides can understand quicker
Speaker 1:Yep.
Speaker 5:You know, the merits of the case on each side Yep. And reach a resolution sooner.
Speaker 1:Yeah. Yeah. No, that makes a lot of sense. Last question from my side. Can you tell me a little bit about how you perceive, like, the battle of the forms and the level of detail that contracts might be entered into with?
Speaker 1:I'm thinking of this news with Anthropic and the Department of War about these, like, two lines, domestic mass surveillance, fully autonomous weapons. And in every contract, there's a question of, do you write it in one page or a hundred pages? I think most lawyers can tell you, I can do either. Right. And they have different flavors, different subtexts, and it depends on the type of relationship that you're forming.
Speaker 1:And so I'm wondering how much you think about that when you're communicating in a legal context, in something that's going to be binding. What level of legalese do you want to use, what length of document do you want to use, how important is that in
Speaker 5:I think it's super important. Mhmm. I mean, it doesn't matter. Yeah. Everybody, you close the deal, you have a closing dinner, people shake hands, they have a drink.
Speaker 1:Yep.
Speaker 5:And as long as things are going fine
Speaker 1:Yep.
Speaker 5:Everything's fine, right? Yep. You never hear about it. And the legalese doesn't matter.
Speaker 11:Sure.
Speaker 5:As soon as, you know, a couple years down the road Mhmm. After an acquisition, joint venture, whatever Mhmm. Circumstances have changed.
Speaker 1:Yep.
Speaker 5:One side has a different goal. They wanna go in a different direction.
Speaker 1:Yep.
Speaker 5:Can and can they go in that direction?
Speaker 2:Yep.
Speaker 5:Or they're not performing. They face headwinds. Then, it's only then that people start to scrutinize the language and say, wait a second, Can they
Speaker 2:What did we actually agree to?
Speaker 5:Yeah. Can they do this? Yeah. What are our options on both sides? If you want to get out of a deal or you wanna create some leverage, you're gonna scrutinize that language.
Speaker 1:Of course.
Speaker 5:And that's why, you know, we think it's a good idea. It's going to sound totally self-promoting, but we have clients, major global private equity clients, who ask us to do this. Especially if it's a novel structure that hasn't been tested before, something you haven't done before.
Speaker 2:Sure.
Speaker 5:Take a look at this and pressure test it for me from a legal risk perspective. Sure. You know? And I think it's especially valuable to have a litigator's eyes on the problem. Because look, we live in a world where twenty four seven something's gone wrong.
Speaker 5:Yeah. So, day in day out, we're dealing with the very kinds of situations you're talking about. So, a lot of times, we can, you know, we look at these things with a jaundiced eye. Mhmm. And a lot of times, we can look around corners Mhmm.
Speaker 5:And see how might a devious counterparty
Speaker 2:Mhmm.
Speaker 5:Try to take advantage of this situation. Mhmm. So, I guess the answer to your question is it doesn't matter
Speaker 1:Yeah.
Speaker 5:Until it does.
Speaker 1:Yeah.
Speaker 5:And so, because of our experience as litigators, we can look at things and kind of see, oh, there's a potential loophole here. Yeah. It might be a simple agreement. Like, we had a case, an arbitration in Asia. I won't get into the specifics because it's confidential.
Speaker 9:Sure.
Speaker 5:But, basically, one side had an option, you know? Yeah. And there was a method for determining what the option price would be.
Speaker 7:Sure.
Speaker 5:And then, it was kinda, a bit like a baseball arbitration, where one side would get one number, the other side would come with another, and there would be a neutral that figured it out. The other side quoted an option price that was way out of bounds. Mhmm. It was, like, totally unreasonable. Mhmm.
Speaker 5:And then they said, your turn. And what do you do in that situation? If you respond to that Yeah. And treat it as a good faith offer and you respond, then you've sort of bought in to that going down that road. Yeah.
Speaker 5:And that seemed to be what the language of the contract, on its face, you know, authorized. It allowed it, exactly. And our advice was, no. Yeah.
Speaker 5:Don't respond. Yeah. Don't respond. Yeah. And so they went to the arbitrators and said, look, they defaulted.
Speaker 5:They didn't respond. There's only one price on the table and it's ours. Yeah. Ultimately, the decision of the arbitrators was, no. Yeah.
Speaker 5:You know, you didn't act in good faith.
Speaker 9:Sure.
Speaker 5:So, they lost. And they were stuck in that investment. Ultimately, they got out. They were so eager to get out of the investment. Yeah.
Speaker 5:They had to come to terms with it.
Speaker 1:Yeah. Makes a ton of sense. Well, thank you so much for taking the time to come chat
Speaker 2:with us.
Speaker 5:Thank you. And thanks
Speaker 1:It was fantastic. There. Enjoy the Of course. You all have.
Speaker 2:Thank you for keeping us
Speaker 5:a couple more
Speaker 1:And we will
Speaker 2:keeping us fitted.
Speaker 5:Thank you. Great to see you, Brad.
Speaker 1:See you around.
Speaker 5:Great to see you.
Speaker 1:Let me tell you about public.com investing for those who take it seriously. They got stocks, options, bonds, crypto, treasuries, and more with great customer service. And let me tell you about fin.ai, the number one AI agent for customer service. If you want AI to handle your customer support, go to fin.ai. And without further ado, we will begin the Lambda lightning round.
Speaker 1:Here we go. Pivot that camera over there. It's turning all the way around and followed John Quinn out of the studio. Already. And now you see that the Lambda lightning round has begun and we will We got
Speaker 2:a banger to kick it off.
Speaker 1:Absolute banger. We have Michael from WorkOS.
Speaker 2:There he is.
Speaker 1:Michael from WorkOS. He is the co founder and CEO.
Speaker 2:What's happening?
Speaker 1:How you doing?
Speaker 7:Gentlemen, great to see you. How's it going?
Speaker 1:Great to see you.
Speaker 2:Great to see you. It's been what has it been? A couple
Speaker 1:of months? It's been almost six months. Last time we had you on, we were hanging out with Satya Nadella on a historic day. Also another crazy day to join. But since this is the first time joining remotely, give us the update.
Speaker 1:Kind of reintroduce yourself for everyone.
Speaker 7:Yeah. So I'm Michael Grinich. I'm the founder and CEO of WorkOS, and we are an infrastructure company that helps other software companies with enterprise features in their app. Not exactly the thing people get the most excited about, but it's the underlying infrastructure that allows people to sign in to things like ChatGPT and Anthropic and Perplexity Mhmm. With their company account, with their enterprise account.
Speaker 7:So we often say that we help developers make their app enterprise ready. And today, we're really thrilled to announce our Series C: we've raised a $100,000,000.
Speaker 2:We got a mallet. We got a gong to hit. Let's see. Let's see. Do we have the power?
Speaker 7:It's like Thor reaching
Speaker 2:There it is. There it is.
Speaker 1:There we go. Look at that. Here we go. Here
Speaker 2:we go. A $100 million on $2 billion.
Speaker 7:That's right.
Speaker 2:That's right. That is some great dilution from your side. So walk me through it.
Speaker 7:We haven't raised money, actually, in over four years. I think a lot of folks kind of forgot about us, to be honest, as a company. We were started back in 2019, so this is over seven years we've been building the company. Really, the news today, what's new with us, is the last year or two has been all about AI. And we have found ourselves supporting all these AI companies as they're rapidly, just explosively, growing.
Speaker 7:So whether it's, like, OpenAI selling into the enterprise or, you know, Claude growing like crazy over the last few months, or even last year, which, you know, like, kinda came out from nowhere and took over how people write software. WorkOS is powering all of these, and we're helping all of them take, you know, the functionality that they've landed in AI and actually enable them to go sell out into these big, you know, customers that otherwise they wouldn't be able to get, and essentially unlock revenue.
Speaker 2:So over the last few years, you joked that people forgot about you because you weren't raising. What were you doing? I'm assuming you were, at different points, probably turning down investor interest. Like, what was your kind of mindset in leading the team through that period?
Speaker 7:Yeah. We were just building. I mean, I think at the end of the day, to build infrastructure like we've created, there's no real shortcut for it. You just have to spend day after day, week after week, month after month solving tens of thousands of small edge cases. Sometimes I say WorkOS, it's kind of like a ball of edges.
Speaker 7:Like, the whole thing is made of edge cases. There's not really just one way to, you know, quickly ship it. It's, in some ways, like the anti-YC company. It's something you can't build in, like, a couple months and launch. It does take months and months and years and years to develop it.
Speaker 7:Early on, we had a lot of amazing customers. I mean, we, you know, early on powered auth for folks like Vercel and Webflow and Carta and, you know, kind of the previous era of cloud SaaS we all know about. It's just, what we've seen with the AI stuff is it grows faster. The companies are adopted sooner within the enterprise. Yeah.
Speaker 7:And all of this AI functionality, it's actually scrutinized by security IT people, you know, a lot more heavily. Like, the IT people say, no way we can use your AI unless it fits these, you know, security policies. And so despite being started in the pre-AI era, WorkOS is actually kind of perfectly positioned. Well, we are an AI business today without having been started that way.
Speaker 2:Yeah. How have you processed the overall SaaSpocalypse narrative, or just vibe coding in general? In many ways, WorkOS can, I feel like, deliver kind of the dream of vibe coding, which is, like, maybe there's some specific functionality you wanna build. You wanna be able to sell it into the enterprise. But pre-WorkOS, maybe that wouldn't have been possible because you have all these edge cases that enterprises are gonna care about.
Speaker 2:But how have you processed the entire narrative?
Speaker 7:Yeah. When people are vibe coding stuff, you know, you typically don't want to vibe code the security part of your application. You might vibe code the features of the UI, like, the AI codegen is so good for doing UI engineering. But when it comes to stuff like authentication or permissions or things around compliance or auditing, that's maybe not the place you want to apply AI. And we've seen that actually from a lot of our customers, including our customers being the AI labs that are building the models to do this stuff.
Speaker 7:They're even using WorkOS. What we've seen from a lot of other businesses is there's no time to build it in house. So in the previous era, you might have had, you know, like, a few years to figure out enterprise. If you go back and look at, like, Dropbox or something, you know, they built for many, many years before they did enterprise. Today, what's happening is these AI companies, you know, powered by the functionality, they go after enterprise pretty much immediately.
Speaker 7:Like, within the first year, they have to go upmarket. And so there's no time to build it. They just turn to us and we ship it for them.
Speaker 1:So, I mean, truly, how low is the bar? Like, if I'm in YC and I vibe code a piece of software that's, like, MVP but it works, is WorkOS, like, accessible to the point where it's self-serve, free? Like, how low is the bar?
Speaker 7:Absolutely. So we have an amazing free tier. It's free up to 1,000,000 users.
Speaker 2:Wow. That's a lot.
Speaker 7:So maybe you even get a million users when you're in YC, but past that, you know, as you keep growing Yeah. What we charge for is the enterprise features, and really, we try to align our pricing model with you as a customer. As you go close enterprise deals Yeah. You pay WorkOS. It's kind of like Stripe.
Speaker 7:You know, you pay Stripe money when you go, you know, make money yourself. Yeah. But it's so easy to use. We have people that integrate, you know, WorkOS in an hour or two. They're out there selling enterprise essentially as soon as they have demand.
Speaker 7:Yeah. And actually, just last month, we shipped a new capability that's even faster to integrate because it uses AI. Mhmm. So AI is actually accelerating our customer base. It's also accelerating how fast you can adopt and use our functionality.
Speaker 7:You essentially just run a command that installs WorkOS, and then
Speaker 1:Do you have, like, a KPI around that in terms of, like, integration timelines? Like, it seems like they're getting shorter, but what are we talking about? Like, days of developer time? Weeks?
Speaker 7:If you run our CLI installer Yeah. Which uses AI to install, it takes roughly seven to eight minutes.
Speaker 2:Oh,
Speaker 7:So it's super fast. I've been doing this trick, you know, whenever I do, like, a sales call with a customer that's interested in using WorkOS. Essentially, kick off the call and say, hey, go try this. You know,
Speaker 8:just run it
Speaker 7:in your terminal in the background. We'll come back to it after I click around the dashboard a little bit and essentially you have a POC that's ready to go. You know? In the past, we we would still integrate fast, you know, but we would have to talk to their engineering team. We get on the road map.
Speaker 7:We do an architecture call. We have an amazing team of solutions engineers and developer success that helps plug stuff in. But I think what we're finding is AI is this accelerant, not just in terms of market adoption, but just making software easier to use and integrate. We even have people that migrate using AI. So they might be using a different solution or have a homegrown thing.
Speaker 7:You know, you plug Claude or Codex into it and just rip through it. It's really wild. Yeah.
Speaker 2:What now? Is the job finished?
Speaker 7:Or Job's not done. Absolutely not done.
Speaker 1:In some ways, it's kind of like
Speaker 7:a new moment for the company. I told our staff at the beginning of this year, you know, if I was to start WorkOS today, what I would essentially build is WorkOS for AI, and specifically for AI agents. So that's what we're building going forward. Yeah. You know, you think about agents in the world, and a lot of people talk about agents in different ways. Really, what they're doing is displacing people doing work, or enabling people to do more, you know.
Speaker 7:I think Ivan from Notion has said that with agents, you're a manager of infinite minds. The ability to go and control and kind of adapt all these different systems to do work for you. The problem with agents is, if you're spawning all these things, these workers, to go do things for you, they essentially need permissions. They need access to data. And an agent isn't useful if you can't connect it to all your different stuff, and you can't do that securely.
Speaker 7:The last thing you want is an agent, you know, running wild. I think there was a story on Twitter a week or two ago of an agent, an OpenClaw instance, going rogue and deleting a bunch of email. Do you guys
Speaker 2:remember Yeah. It was Amazon. It was like Amazon also had,
Speaker 1:I think, Sims. Oh, yeah.
Speaker 7:Was like head of alignment or something.
Speaker 1:Was a really wild look.
Speaker 7:So, you know, you definitely don't want that for your personal account, but it's completely disastrous if you're doing that internally to your systems, or, God forbid, it's doing it to your customers. And so there's this problem around what agents can do. They're extremely powerful, but we actually need to give them different types of permissions and guardrails and identity on top. And so that's a new thing that we've been building at WorkOS with some partners: this new, essentially, identity fabric is what we're calling it. Mhmm. That sits across everything and allows people building agents to have the connectivity, but also the security and trust that is demanded by their customers. And we hope, with that, it will act as an accelerant more into the enterprise.
Speaker 7:It'll help more companies building AI do it in a way that's safe.
Speaker 1:Amazing. Last question for me. You said, like, integration engineer. It sounded almost like forward-deployed engineer. How is that role changing at WorkOS now?
Speaker 7:Well, everything at WorkOS is using AI. Our customers are using AI. We're using it in every single role. We have sales reps building stuff with Claude Code.
Speaker 1:Sure.
Speaker 7:You know, our finance team doing stuff. We have the hackathons going. And so it's definitely impacting the people that are working with customers every day in that way. I think the magic of Claude Code and this forward-deployed engineer is it essentially turns one person into a whole team.
Speaker 1:Mhmm.
Speaker 7:So it kind of mimics having, you know, if you go sell product to, like, a bank, like Deutsche Bank, and you're a giant company like Microsoft, you would be able to afford having, like, 10 or 15 engineers go sit in their office and write code. Literally, like, post up with your laptop and, like, build stuff for them. That's the original, you know, solutions engineer, forward-deployed engineer. What AI lets people do today is essentially have that same experience but have a tiny company do it. Yeah.
Speaker 7:You know, like WorkOS. It essentially lets us give that level of consultative, you know, impactful work in your organization. The first step when we run our AI installer for customers, what it's actually doing behind the scenes is it's looking at your code base and building a plan. It's doing an architecture review by analyzing your structure and providing the best, you know, course of action going forward, the best recommendation. Yeah.
Speaker 7:And I think we're just scratching the surface on that. I mean, it's so early. It's so, so early in terms of all this. So at the end of the day, it just makes people, you know, integrate faster, go to market sooner, ship faster, and just grow upmarket sooner. We talk about WorkOS being the thing that helps make your product enterprise ready.
Speaker 7:Mhmm. It's really an API that unlocks revenue. It expands your TAM as an organization. And when these companies go through that transition moment, say they just start getting product market fit and expanding, the last thing they want is to be slowed down by the lack of these enterprise features when they have a big fish on the line. Yeah.
Speaker 7:You know? And what WorkOS does is turn that on almost immediately. If it's seven or eight minutes today, I'm hoping to take it down further.
Speaker 2:Big fish company of San Francisco. I love it.
Speaker 7:That's right. That's right. Yeah. It's like a fishing line. It's like we'll help you get it to go faster.
Speaker 7:Yeah.
Speaker 1:Well, thank you so much for taking the time.
Speaker 2:Awesome. Great. Great to get the update. Congratulations to the whole team. And we'll talk again.
Speaker 2:Great to
Speaker 7:see you guys. Take care.
Speaker 1:Let me tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI, own the data platform that powers it. Our next guest is Adam Simon from IPG Media.
Speaker 1:He's in the Restream waiting room. Now he's on the TV. What's happening? Adam, how are you doing?
Speaker 11:Hello. Welcome to the show.
Speaker 1:No.
Speaker 9:You're good. There we go.
Speaker 2:Now I can hear you. What's happening? Great to meet you.
Speaker 1:Thanks so much for hopping on.
Speaker 11:It's great to meet you. Thanks for having me on.
Speaker 1:Please, introduce yourself. Give us a little background.
Speaker 11:Yeah. I am Adam Simon. I spent a decade as one of the top innovation executives at a global media agency
Speaker 12:Yeah.
Speaker 11:Which basically means I got paid to be wrong about the future just a little bit less often than everyone else.
Speaker 1:Yep.
Speaker 11:And now I am out there consulting on where entertainment is going next Yeah. Which in my opinion is off the screen and into the real world. Oh. And I'm working on a book called The Immersion Economy about how immersive technology is set to supercharge the experience economy.
Speaker 2:Okay. What did you what did you get right when you were predicting the future?
Speaker 11:Oh, good question. I got right, I think, that Netflix would win in streaming, and I still maintain that after, you know, last Friday's news, obviously. I think that Netflix, you guys covered this earlier, but I think that Netflix is gonna come out on top in this deal when we look back a few years from now.
Speaker 1:Yeah. I agree.
Speaker 11:That was something I called early. I also called that they were gonna pivot into having an ad-supported tier at some point, as well as into gaming. I think the Netflix behemoth will just keep growing.
Speaker 1:Yeah. Walk me through a little bit more of, like, the immersive economy off the screen. Like, that could mean a Taylor Swift concert, that could mean an ice cream museum. There's a million things that happen off the screen. Like Yeah. How do you describe, like, the territory?
Speaker 11:Yeah. I mean, it is all of those things. Okay. In my mind, the exciting part is all of those experiences that are already getting upgraded, or are about to get upgraded, with technology. Mhmm.
Speaker 11:And some of that looks like immersive technology. It looks like the Sphere. Yeah. It looks like what Cosm is doing, which I think is super interesting, especially around, like, sports and performances and stuff. But I also think it means AI personalization in the physical world.
Speaker 11:You know, we spend a lot of time talking about how AI is impacting software. You guys certainly spend most of your days, I think, talking about that. Yeah. But I think about AI colliding with the real world, and robotics is the place that most people go, which is understandable. It's super exciting.
Speaker 11:But I think there's an interim step before we see robots everywhere, AI-powered robots everywhere Yeah. Which is just our physical world being a little more responsive to us as individuals. If you think about the physical world reacting to you in the same way that your feed on social media might, and starting to personalize itself in that way, that triggers some nightmare scenarios, but I think there's some opportunity to also do some really cool, fun, creative work there.
Speaker 2:Okay. So I'll give you a specific example. We were in Montana this last weekend. We were at an ice race, which was the first event that they've done where basically they get a bunch of cars around an ice track, and then you can take cars out and do some laps, and then there's a bunch of spectators and things like that.
Speaker 2:And, you know, the event was incredible. My only criticism is, like, there was kind of one screen over in one of the spectator areas. And so in the area that we were, you were reliant a 100% on your own vision to experience the event, which was cool. But at the same time, there were certain angles and, like, a bunch of missed context. And so in an event like that, how are you thinking about personalization in the real world?
Speaker 2:Is it we have smart glasses on and you have your own audio feed? Or, like, what does that actually look like in practice at something like an automotive event?
Speaker 11:Yeah. I mean, I think I think that that is a great example. I think that, to your point, we don't even need to wait for augmented reality glasses. I think when they do come, there'll be some really interesting, exciting things we could do. But just the ability to customize an audio feed, you could do that with AirPods right now.
Speaker 11:Right? Like, lots of people already have them in their pockets. We could be, through people's smartphones, broadcasting customized audio feeds to you through your AirPods, just as one example. An under-discussed feature of the Sphere that I find really interesting is they have beamforming speakers. So different sections of the Sphere can technically be hearing different audio.
Speaker 11:They haven't really used this yet, to my knowledge, but the way that they talk about it is that you could have, like, different language tracks sort of beamed to different sections of the venue. And I think that that's the kind of thing that we're seeing built in to lots of venues. And, obviously, that's not a personalized sort of one-to-one experience, but you can also sort of see how that starts to move us closer in that direction, using some technology that is, you know, not exactly new, to enhance the experience and make it more accessible for more people.
Speaker 2:How are you thinking about VR? We had Ben Thompson on our show earlier today. He's been pushing aggressively for Apple to just let him watch any NBA game. He basically figured out it's, like, $40,000 worth of hardware that would just need to be installed at the different stadiums, and then you could just, he's like, I don't need a separate livestream and commentary, because if you just beam me into the stadium, I can hear the commentary that's happening. I can look up at the scoreboard.
Speaker 2:I really just need that kind of like video feed.
Speaker 11:Yeah. I mean, I think, you know, long term, there are some cases and some users who are gonna want the more highly produced, edited experience that Apple is kind of leaning into. I know Ben is super against that. He just wants the static camera that pretends he's there.
Speaker 11:But I think that he's right in that Apple's biggest problem right now is scale, and just having the volume of content that you could get with just static cameras. And, you know, sports is great, and I'm sure it would sell a lot of headsets. But also just think about concerts and stage plays and any place else where, just being a fly on the wall or being a butt in a seat, you can't get there or don't want to get there physically. But being able to get there with your Vision Pro or your Meta Quest or what have you, I think that there's real opportunity there. Longer term, I actually think that venues like Sphere and Cosm in particular are going to create a pipeline for that kind of immersive content that will be able to go straight to headsets.
Speaker 11:Because if you think about what Cosm is doing with the NBA, they basically have that camera already there courtside. And it's creating that immersive, sort of shared reality, as they call it, experience in the Cosm venues. There's no reason you couldn't just port that over to a Vision Pro right now. So we're starting to see some pieces be put in place that's gonna create a really interesting, I think, flywheel for immersive content. And I sort of think about venues like Cosm as being a kind of middle-class experience, where if you have the money and the capability, you're gonna go there in person, of course, but you're only gonna do that for a few events a year even if you have deep pockets.
Speaker 11:And, obviously, there's the at home viewing experience, which might be upgraded, if we can watch it in immersive, environments. And then you have the option to go someplace like a Cosm venue to get this sort of, you know, halfway experience. I can still go out with my friends and be social and have a a close to courtside experience without shelling out to to fly wherever the game is.
Speaker 2:Yeah. Are there a lot more Sphere, Cosm-type experiences coming down the pipeline? That said, they're super capital intensive, so they're high risk. But from my view, the response to both has been great, and Sphere seems to be somehow doing pretty well in the public markets as of late.
Speaker 11:Yeah. I think Sphere caught everybody by surprise a little bit because that first year, there were lots of headlines about how much money they were losing, how much it costs not just to build but to operate. But from everything I can tell, from the outside, obviously, they look like they have found a path. I think if you look at what they did with Wizard of Oz, which was a surprise, I think, to everyone in Hollywood, it is really successful. They basically made, like, a fan experience for Wizard of Oz, which gave them a reason to charge you over a $100 a ticket to see a movie that you've probably seen dozens of times before.
Speaker 11:And I don't know if it's streaming anywhere for free, but it used to be on television all the time. Right? It's, like, the classic movie that was constantly available. Yeah. And yet people are shelling out over a $100 a ticket to go see it. They've sold over 2,000,000 tickets to that already.
Speaker 11:So I think it's an it's not exactly what they set out to do, but it's an interesting new path for them and clearly a source of revenue.
Speaker 2:Yeah. Brian Chesky from Airbnb was on the show fairly recently talking about how he is Mhmm. Quite bullish on IRL in the age of AI. The online is getting very wild and intense. You can imagine some people just deciding, I'm gonna log off.
Speaker 2:I'm gonna go somewhere, get away. I can see that. But I can also see the other side of it, which is that the online world continues to get more and more entertaining. And some people just say, like, well, I don't need to go take this trip or go to this experience because I'm perfectly entertained at home. What indicators are you looking at to understand if AI is actually disrupting some IRL experiences?
Speaker 2:Would it show up as, like, Disneyland visits dropping, which obviously could be a number of factors? Like, where are you kind of trying to see where there might be some disruption?
Speaker 11:Yeah. I mean, I think I would be looking at things like live sports and concerts and things like that to see, are the ticket sales going down? Everything we've seen to date is quite the opposite. People are spending like crazy on experiences. Right?
Speaker 11:Everybody thought that it was sort of revenge spending after the pandemic, but that has not played out. Everybody is very excited to, at least sometimes, right, like, close their laptops, put their phone in their pocket, and go out and do something fun. I think what we're seeing is that people want to be out there having experiences, creating memories. And that is not something that you're gonna have the same sort of experience doing looking at a screen. There's also some interesting neurological research that I've been following around just, like, how we sort of sync up and experience communal events.
Speaker 11:And it's just really fascinating. But, basically, we create stronger memories, and we form stronger social bonds, when we're in person, and it actually has a biological component to it. It's not just sort of the emotional aspect of being there together. So I think, you know, maybe the pandemic was the catalyst that made us realize what we were missing. But I'm pretty bullish on demand for, you know, even technology-optimized and -oriented and -enhanced experiences in the real world.
Speaker 11:Like, just because we're out in the real world doesn't mean the tech goes away. It just changes and evolves. I think that's where a lot of the exciting developments, both for AI but also for for other technologies is gonna be in the next decade.
Speaker 1:Well, we gotta do the next one in person with a live audience
Speaker 9:because Yeah.
Speaker 1:Just neurologically I feel like we'll have a better time.
Speaker 2:Yeah. And there's also the reality that many people are chasing IRL experiences to have something to show back on social media. In this, you know, I try to remind myself, you know, we were at this car event where everywhere you looked, there were cars that, if I saw them on the street, I would just stop whatever I was doing. And I was, like, trying to force myself to actually just, like, take it all in Mhmm. Versus taking pictures.
Speaker 2:But certainly, there were some people
Speaker 1:Yeah.
Speaker 2:That that hardly were processing it live because they just wanted to share everything that they were seeing.
Speaker 11:Yeah. Yeah. And honestly, that is the thing that I'm most excited about with glasses, whether, like, the Meta Ray-Bans or what Apple's gonna release later this year or early next year.
Speaker 11:Having the ability to capture content but stay more present in the moment, I think, is the best sort of marketing case for those products, because you're absolutely right. Like, we wanna be able to capture these things, but we've, I think, tilted a little bit towards doing things just for the sake of capturing them rather than actually enjoying the experiences. And look, I think everybody knows that. Even when you're doing it, to your point, you're aware of it. So I think if there's the right catalyst, it'll get people off their phones and more sort of focused on what's around them.
Speaker 13:Awesome. Totally. Well, thank
Speaker 1:you so much for taking Yeah.
Speaker 2:Great to meet and and come back Awesome. On anytime.
Speaker 11:Have a good one. Awesome.
Speaker 2:Great to
Speaker 11:meet you. Thanks, guys.
Speaker 1:Let me tell you about AppLovin. Profitable advertising made easy with axon.ai. Get access to over 1,000,000,000 daily active users and grow your business today. We have been keeping our next guest waiting. We have Matias Wagner from Flux.ai.
Speaker 1:He's the founder and CEO and has some exciting news for us.
Speaker 2:What's happening?
Speaker 1:Welcome to the show. How are you doing? Please introduce yourself and the company.
Speaker 12:Hi, guys. Thanks for having me. My name is Matthias. I'm the founder and CEO of Flux, and, you know, we're building the first AI hardware engineer.
Speaker 1:Amazing. And give us the news today. You raised some money. What happened?
Speaker 12:Yeah. We closed the Series B. We raised a total of $30,000,000 in new capital, led by APC. So, you know, super stoked and, you know, stepping on the gas here now.
Speaker 1:Yeah. Talk about the key problem, the product, where you are in development, rolling out, getting product in the hands of customers.
Speaker 12:Yeah. Good question. So, I mean, you know, you've all heard it before, hardware is hard. Yeah. And we felt like this is like learned helplessness, you know?
Speaker 12:Yeah. And we are taking the hard out of hardware. Right? So if you think here
Speaker 2:It's just ware now. No. It's just ware.
Speaker 1:Exactly. Right?
Speaker 12:You know, we're starting with electronics here. Yeah. Because, you know, it's a very standardized form factor, and it's really at the heart of anything that's worthwhile building these days, if you think about, you know, computers, robots, what have you. And the insight here really was that, right, if you think about how easy it's become over the last two, three decades to make software, especially now with the AI boom, hardware hasn't gotten any easier to make. Yeah.
Speaker 12:Right? Even though the supply chain is incredibly available now. I mean, from anywhere in the world, for, like, $20, I can get a PCB manufactured in China and sent back in seven days, fully assembled. Right? Yeah. And that you couldn't do twenty years ago.
Speaker 12:You had to be Lockheed Martin or Apple or, like, a big tech company to afford it. Now everybody can do that, but the design tools just don't exist and are not accessible. And that's what we're solving for.
Speaker 1:So talk about the inputs and the outputs. Obviously, at the end, I get some sort of PCB spec. Is that just a CAD file or a PDF? And then what am I actually putting in? Is it just software that then I want to be translated into hardware?
Speaker 1:Like, at what level of abstraction are we operating here?
Speaker 12:Yeah. Good question. So, look, the vision is that you can go like, if you think about, you can go from, like, an an a prompt to a poem.
Speaker 1:Yep.
Speaker 12:Right? Or, like, Claude Code, you can go from a prompt to a bug fix or a feature. Yep. With Flux, right, ultimately, we wanna enable you to go from a prompt to an iPhone class device. Wow.
Speaker 12:Right? Now that's a long road. Right?
Speaker 9:Yeah.
Speaker 12:So where we are today is, like, small to mid complexity, you can probably single shot. And everything else is gonna be more iterative. You know? Mhmm.
Speaker 12:Again, like, Claude Code, Devin Yeah. Cursor, all these tools. Very, very comparable.
Speaker 1:Sure. That makes sense. Who's the best customer for you right now? Where do you see that going? Are you going after startups that are iterating really quickly?
Speaker 1:This is speeding them up, or are you going to large organizations that are already churning out tons and tons of PCB designs?
Speaker 12:Yeah. So it's a good mix, but most customers are SMBs. What we're seeing is that there is, like you know, only about 25% of our customers have previously made a PCB board themselves. Sure. The others are technical.
Speaker 12:They are mechanical engineers, firmware engineers, industrial designers. Right? But they didn't have the time, patience, or resources before to make custom boards, and they would probably previously buy OEM boards. Right? A favorite example I have: we have this customer that makes vending machines, like snack and beverage vending machines.
Speaker 12:And they would previously buy four or five off the shelf boards and integrate them into a single machine. So think about the board that powers the display, the payment, the motor, and so on. And now they can make a single custom board with Flux. Right?
Speaker 1:Mhmm.
Speaker 12:That's much cheaper to integrate into the machines on the assembly line. And if the machine breaks down in the field, you know, you're just standing there, like, somewhere in the rain, there's one board to replace. And then, right, the board costs pennies on the dollar compared to before.
Speaker 12:Right? Because what you're paying now is material cost Mhmm. You know, to a manufacturer in China. Right? They can do that for you.
Speaker 12:Yeah. And so that's kinda, like, the main driver here, I think.
Speaker 1:What's your favorite printed circuit board or favorite product that has a PCB in it?
Speaker 12:Oh, the space shuttle?
Speaker 1:The space shuttle. That's
Speaker 2:a good answer.
Speaker 12:No. I looked this up the other day. So the space shuttle, I think, had one of the first boards with, like, eight layers or so. And it was hand drawn. They didn't have software for it at the time. Right?
Speaker 12:Which is, like, crazy if you think about that. Yeah. So, yeah, this is, a fun anecdote.
Speaker 1:That's a great one. Jordy, anything else?
Speaker 2:Wild, wild, wild. Have you seen anything exciting on The US side, on the supply chain side?
Speaker 1:actual PCB main Yeah.
Speaker 2:Feel like we had somebody on the show last year that was trying to make PCBs. Like
Speaker 1:We have SendCutSend now for, like, machined parts and Yeah. Things that you need to cut. Is there a great company in America that's starting to think about PCBs? Yeah.
Speaker 2:Like, if we had our own Shenzhen, I imagine it would have a variety of Yeah. Manufacturers like this, because you wanna get to the point where you can kind of sit next to the manufacturer Totally. And really speed up that iteration cycle.
Speaker 12:Yeah. Yeah. I mean, look, a lot of people like us want that. Right? We looked into the details of, like, opening our own fab here in Fremont or so.
Speaker 12:Yeah. But you realize quickly, right, making PCB boards is like the board itself is a chemical process. Oh. Good luck getting the permits. Right?
Speaker 2:Oh,
Speaker 12:yeah. It's possible, but it's hard. Right? And so these are kinda like the bumps you run into. But I think there is exciting opportunities here.
Speaker 12:Right? Especially because, like, if you're designing these boards with AI, right, then you can also optimize for the capabilities Mhmm. And the inventory you have at the factory. Mhmm. Right?
Speaker 12:Because the way to make this cheap is to, like, design it so it can be assembled automatically by machines. Right? But for that to work, these pick and place machines have to have all the semiconductors you need on their rails. Right? Mhmm.
Speaker 12:And that's difficult for humans to optimize for, right, because it's a quick changing thing. But I think as we're automating all this, right, we can have the models design towards what the factory can spit out in an hour, you know, for almost zero cost.
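[Editor's note: the idea of designing toward what the factory's pick-and-place rails already hold can be sketched as a simple constraint filter. This is an illustrative toy, not Flux's actual system; the part names, slots, inventory, and costs are all invented.]

```python
# Toy sketch: for each slot in a design, pick the cheapest candidate part
# that is already loaded on the factory's pick-and-place rails, so the
# board can be assembled automatically with no rail changeover.
RAIL_INVENTORY = {"R10K_0402", "C100N_0402", "MCU_STM32G0", "LDO_3V3"}  # hypothetical

CANDIDATES = {  # slot -> list of (part_id, unit_cost_usd), hypothetical data
    "pullup":     [("R10K_0603", 0.004), ("R10K_0402", 0.003)],
    "decoupling": [("C100N_0603", 0.006), ("C100N_0402", 0.004)],
    "regulator":  [("LDO_3V3", 0.12), ("BUCK_3V3", 0.45)],
}

def pick_parts(candidates, inventory):
    """Return slot -> part id, preferring the cheapest in-inventory option."""
    bom = {}
    for slot, options in candidates.items():
        in_stock = [p for p in options if p[0] in inventory]
        if not in_stock:
            raise ValueError(f"no in-inventory option for slot {slot!r}")
        bom[slot] = min(in_stock, key=lambda p: p[1])[0]
    return bom

bom = pick_parts(CANDIDATES, RAIL_INVENTORY)
print(bom)
```

A real model would fold this constraint into the design search itself rather than filtering afterward, but the objective is the same: only emit boards the machines downstream can place.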
Speaker 1:That's amazing. Congratulations. Thank you so much for taking the time to come talk to
Speaker 2:With this pace, I'm sure you'll be back on this year.
Speaker 1:Yeah. So this is amazing.
Speaker 2:Yeah. All the progress.
Speaker 12:I hope so.
Speaker 1:Have a great rest of your time, guys.
Speaker 2:Great to meet.
Speaker 1:Talk to you soon. Let me tell you about Labelbox. RL environments, voice, robotics, evals, and human data. Labelbox is the data factory behind the world's leading AI teams. Up next, we have some exciting news from Zach from Cal AI.
Speaker 1:He's the cofounder and CEO. Oh, is he not here?
Speaker 2:We are getting him set up. Sorry about that.
Speaker 1:We can head to the timeline because we have an incredible announcement from Riley Walz, the new OpenAI staffer, who launched Payphone Go. It's effectively Pokemon Go, but for payphones. Payphones are strangely still licensed in California, he says. So I filed a FOIA request and got the full list. Naturally, I made a game so you can now play.
Speaker 1:You create an account. You get your unique player ID. It's a nine digit number. Then you use the map to locate one of California's pay phones. Some are easy to find.
Speaker 1:Some are not. Pick up the receiver. You dial this phone number, (888) 683-6697. It's toll free, so no coins required. You can just go to any pay phone that you can find, and then you call in and you enter your player ID and you collect them all.
Speaker 1:You have to catch them all. All 2,203 payphones in California are at stake. And Riley is on another absolute generational run with these projects. I love these. They are so much fun. Go check it out.
Speaker 1:And congrats to Riley on another banger project. Well Alright.
Speaker 2:I believe we
Speaker 1:have our next guest, but you
Speaker 2:know But Zach is gonna be later.
Speaker 1:Zach will be later.
Speaker 10:So we're
Speaker 1:gonna jump over to What's going on? How you doing?
Speaker 2:Sorry for the confusion.
Speaker 4:Hey. How's it going? No problem. Nice to meet you guys.
Speaker 2:Good to
Speaker 1:meet you. Please introduce yourself and the company.
Speaker 4:Yeah. I'm Juan Rodriguez. I'm from Barcelona, Catalonia. I'm the CEO and founder of Quiver AI.
Speaker 1:Means.
Speaker 4:And at Quiver, we are, you know, building models for AI generation of SVGs.
Speaker 1:Okay.
Speaker 4:Vector graphics, you can put a text prompt and you can get a vector graphic. Okay. Or you can put an image and you can vectorize it.
Speaker 1:Love it.
Speaker 4:So it's really good for designers, really good for icon generation, logos, and so on.
Speaker 1:PNG to vector, to SVG, was such a hassle, like, a decade ago, maybe even, like, five years ago. It's still not perfect. What's the secret? Because I've seen these demos. I think the latest Gemini 3.1 was doing some SVG drawings. It was pretty good.
Speaker 1:What's the secret to going superhuman in SVG generation?
Speaker 4:Well, that's a great question. It's a pretty challenging problem, and we are tackling this through a kind of new paradigm: code generation.
Speaker 1:Okay.
Speaker 4:So an SVG is actually code in the back. You know, it's kind of a bunch of code, and you compile it, and you can get an image out of it. And previous methods were mostly, like, tracing. You know? Like, you get an image, you kind of trace it.
Speaker 4:But, yeah, I did my PhD on SVGs, and I invented this method where you can actually get really cool SVGs through code generation. So, you know, LLMs are really good at code. We know this. And I just went nuts in my PhD trying to train these models
Speaker 1:I love And
Speaker 4:as good as I could.
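[Editor's note: Juan's point that an SVG is just code that "compiles" to an image can be shown in a few lines. This is a generic sketch of emitting SVG as text, not Quiver's pipeline; the icon shapes and function name are invented for illustration.]

```python
# An SVG is plain XML "code": a generator can emit shapes as text, and any
# browser or vector editor renders (compiles) that text into an image.
def make_icon_svg(size: int = 64, color: str = "#4f46e5") -> str:
    # A hypothetical generated icon: a rounded square with a centered circle.
    half = size // 2
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
        f'<rect width="{size}" height="{size}" rx="{size // 8}" fill="{color}"/>'
        f'<circle cx="{half}" cy="{half}" r="{size // 4}" fill="white"/>'
        '</svg>'
    )

svg = make_icon_svg()
print(svg)
```

Because the output is structured text like this, a code-trained LLM can emit it token by token, and the result stays fully editable, colors, paths, splines, in tools like Figma or Illustrator, which is the advantage over raster output he describes next.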
Speaker 1:I imagine part of the benefit is that once you have an SVG, you can export that to Figma or Illustrator and then edit and do the final touches. If your output's 99%, that's much better than getting a PNG that's 99%, where you gotta go and clean everything up and it takes a lot longer. You can adjust the actual splines. Is that correct?
Speaker 4:That's that's correct. Yeah. You can edit. You can change the colors. Yep.
Speaker 4:You can maybe do animations later.
Speaker 1:Oh, sure.
Speaker 4:Sure. Yeah. Gemini is really good for that too. Like, animations is something that's popping up a lot in DesignX. So yeah.
Speaker 2:Absolutely. What are you how are you thinking about competition long term? Sounds like you guys can kind of lead in this space, but at the same time, I imagine all the labs, like, kind of care about this category. Do you imagine partnering with them or or having a different distribution strategy? What does that look like?
Speaker 4:I mean, it's exciting to see all the good results. They are also validating ideas that we have to try, and the big labs are, you know, showing really good results in the SVG space. We are seeing that if you just train a big model with a lot of data and just a bit of effort, you are going to see really good results in this space. And what we are doing yeah.
Speaker 4:We're kind of doing this our own way, with our own RL rewards, one at a time, and trying to learn how to do this the right way.
Speaker 1:How do you think about the customer? Do you wanna integrate with a a piece of software that already runs and lives and breathes SVG? Or do you wanna be sort of like this one off tool that lives on a website that, you know, artists and creators can go interface with and then just take the assets elsewhere?
Speaker 4:Yeah. Great question. So we started with a kind of consumer approach. We built our website. Mhmm.
Speaker 4:And we wanted to see what people do with these tools. But at the same time, we built our API so that other tools can also interact with our models and build on top of them. We can see agents already being deployed. We can see MCP servers calling Quiver.
Speaker 2:Oh, cool.
Speaker 4:And we are seeing, you know, all sorts of tools, integrations with all your, you know, kind of stack of tools for designers.
Speaker 1:Yeah.
Speaker 4:And that's what we are aiming for to be, you know, very close to the stack and and workflows of designers.
Speaker 1:Yeah. This is Christmas for After Effects artists. I've I've played with a lot of this stuff. It's a lot of fun. Congrats.
Speaker 1:Jordy, anything else?
Speaker 2:Where, where is the company based?
Speaker 4:Yeah. It's gonna be based in San Francisco.
Speaker 2:Cool. But you're calling in from Barcelona? Or or
Speaker 4:That's right.
Speaker 2:Not in the process of
Speaker 1:doing thing. You raised some money. How much did you raise?
Speaker 4:Yeah. We just raised $8,300,000 from LedPointe. There
Speaker 8:we go. Congratulations. So
Speaker 1:I'm so happy that this is funded. This is Great partners. This is very, very
Speaker 2:Great use of your PhD Yeah. Time too. It feels like you're obviously just getting started with Quiver, but you've been at it for quite a while. It's very cool.
Speaker 1:Very cool. We will talk soon. Have a good rest
Speaker 2:of day. Cheers.
Speaker 4:Beautiful.
Speaker 1:Goodbye. Let me tell you about vod.co, where DTC brands, B2B startups, and AI companies advertise on streaming TV. Pick channels, target audiences, and measure sales, just like on Meta. And we have our next guest in the Restream waiting room.
Speaker 1:Let's bring him into the TVPN UltraDome. My man. Zach, finally. How are you doing?
Speaker 10:I'm good. How are you guys?
Speaker 1:Congratulations. Tell us what happened. I wanna hit the gong for you.
Speaker 10:Well, thank you. After a year and a half journey building Cal AI, we were finally acquired by MyFitnessPal. Thank you.
Speaker 1:When when did you meet the team at MyFitnessPal?
Speaker 2:Yeah. Yeah. Break down the deal. We
Speaker 10:Yeah. We spoke to them initially back in May. I was talking to a few companies, reaching out to them before going to college, because if we got the right offer, I think it would have made sense. I could have maybe had a normal college life for a little, is what I was thinking. We didn't move forward at that point with anything, but built those relationships and then kept in touch.
Speaker 10:And when it finally made sense, we moved forward.
Speaker 1:That's awesome.
Speaker 2:And and I I think I saw in your announcement, you closed it while you're 18 and you're just announcing it now. Is that correct?
Speaker 10:Yeah. So we closed it in December when I was 18, and then I turned 19 in January. So now it's
Speaker 2:Bruno, dude. You're lost. You're gone. You're gone now. It's over here.
Speaker 10:It's not even
Speaker 2:It's over. It's over No.
Speaker 1:Give us the scale of the business, the monetization. I mean, describe it for those who don't know, and then I wanna know, like, how did you grow this thing? How big did it get?
Speaker 10:Great question. Cal AI is an app where you can track your calories just by taking a picture of your food. Mhmm. And we did $30,000,000 in revenue in 2025.
Speaker 1:Wow.
Speaker 10:And we just finished January. I didn't check February, but in January we did $5,700,000 in revenue. It's a subscription model.
Speaker 1:That is incredible. Crazy. So heavily dependent on iOS in app subscriptions? Are you Android as well? How how important has that been?
Speaker 10:Android is definitely less important. It's maybe a tenth of our revenue.
Speaker 2:Yeah.
Speaker 10:Yeah. It pretty heavily is just iOS.
Speaker 1:Yeah. And then acquisition, what is this influencer driven, ad driven Customer acquisition. Yeah. Customer acquisition. I mean, obviously, you're on the charts, but are there viral mechanics?
Speaker 1:Like, what is the growth engine for the app?
Speaker 10:Early on, it was mainly influencers.
Speaker 2:Mhmm.
Speaker 10:And then as of recently, the last maybe six months, we've primarily continued our growth through paid acquisition on channels like Meta. And so the two combined are really what's driving it.
Speaker 2:Makes sense. What are the kinda key reasons why MyFitnessPal wanted to pick you guys up? I can think of a lot, but what drove this?
Speaker 10:Well, definitely many, many reasons. We have different core audiences, but then we also have overlap. Mhmm. And we both are very mission driven. We both just want to help the greatest number of people possible become healthier.
Speaker 1:Yeah.
Speaker 10:And so since we have this overlapping space, we realized that it would make sense to come together, share resources, and it would further the mission for both of us and really make sense to just partner.
Speaker 1:Talk about accuracy. I think there's some people that are skeptical that AI can take a picture of food and tell you how many calories are in it. How has that evolved over the year or so that you've been running the company? Like, how accurate was it at the start? How accurate is it now?
Speaker 1:Where do you expect it to go?
Speaker 10:The accuracy of Cal AI early on definitely wasn't as good as it is today. We initially started by just running ChatGPT by itself Sure. Pretty much as the model. You would just take the picture, put it into ChatGPT, say how many calories is this? Yeah.
Speaker 10:Output the response
Speaker 2:Total wrapper victory. Right?
Speaker 1:Total wrapper victory. Honestly, congratulations. It's so amazing.
Speaker 10:Well, that was like a month. Yeah. And then we fine tuned a model. We built out a more complex pipeline
Speaker 2:Yeah.
Speaker 10:Using like a ton of different models for different steps. And so it's become more and more accurate over time. At this point, we average over 90% accuracy
Speaker 1:Yeah.
Speaker 10:For each food scan. But you would actually be surprised: only 30% of food logs come from scanning. The rest are, like, barcodes and typing in from the database. Those are the most accurate methods.
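[Editor's note: the multi-step pipeline Zach describes, replacing a single ChatGPT call with different models for different steps, can be sketched roughly as below. This is an illustrative toy, not Cal AI's actual system; the dish classifier and portion estimator are stand-in stubs, and the nutrition values are hypothetical.]

```python
# Toy sketch of a multi-step food-logging pipeline: classify the dish,
# look up per-100g nutrition data, estimate the portion, then scale.
NUTRITION_DB = {"oatmeal": 71, "banana": 89}  # kcal per 100 g, hypothetical values

def classify_dish(image_bytes: bytes) -> str:
    """Stand-in for a fine-tuned vision model; a real system infers from pixels."""
    return "oatmeal"  # hypothetical prediction

def estimate_portion_grams(image_bytes: bytes) -> float:
    """Stand-in for a separate portion-size estimation step."""
    return 250.0

def log_meal(image_bytes: bytes) -> dict:
    dish = classify_dish(image_bytes)
    grams = estimate_portion_grams(image_bytes)
    kcal = NUTRITION_DB[dish] * grams / 100
    return {"dish": dish, "grams": grams, "kcal": kcal}

entry = log_meal(b"...photo bytes...")
print(entry)
```

Splitting recognition, portion estimation, and nutrition lookup into separate steps is what lets each one be measured and improved independently, which is one plausible reading of how accuracy climbed from the early wrapper days.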
Speaker 1:That makes a ton of sense. That makes a ton of sense.
Speaker 2:Very cool. What are your guys' combined ambitions now? Where do you wanna take it? Are you gonna be supporting the main MyFitnessPal app as well? I imagine, like, they've seen what you guys can do from a user acquisition standpoint and a product design standpoint and want your help across the org.
Speaker 10:We're definitely helping any way we can. So definitely some of our marketing systems, we are helping them to also build out. And they're helping us too, teaching us things that we didn't know. So together, we're going to be able to go a lot further, and we're going to remain standalone apps.
Speaker 1:Cool. Well, congratulations. Well,
Speaker 2:congrats. Yeah. Congrats to the whole team. Yeah.
Speaker 1:An actual overnight success.
Speaker 2:Truly. Truly an overnight success.
Speaker 1:But the beginning of a long career, I can tell already.
Speaker 2:Yeah. I know. I can't wait till you turn 40. Come back, third exit at that point. Hopefully, an IPO.
Speaker 2:You got the exit at eighteen. We'd like to see the IPO by the time you're 30. Awesome. Well, we'll talk to you soon, Zach. Congrats.
Speaker 2:See you.
Speaker 10:Thank you. Talk soon.
Speaker 1:Have a good one. Let me tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. And up next, we have our last guest, Andy Markoff from SMAC Technologies.
Speaker 1:He's the cofounder and CEO. What's happening? Andy, how are you doing?
Speaker 13:Great. Thanks for having me on.
Speaker 1:Welcome to the show. First time on the show. Please introduce yourself and the company.
Speaker 13:So, Andy Markoff, cofounder and CEO of SMAC Technologies. I'm, well, I'd like to say, a recovering member of a cult known as the United States Marine Corps. Spent the first ten and a half years of my professional career there. Got out, tried to figure out what I wanted to do when I grew up. And luckily, one of the marines I served with, my cofounder, Clint Allen, he got out too.
Speaker 13:We both decided to work on building the decision making tools we wish we'd had when we were running the kill chain during our careers as, you know, Marine Raiders at MARSOC, and to try and give back and find a way to serve again after we'd both gotten out. So we talked about this idea, you know, really in 2023 and then started the company in January '24. So what SMAC is: we are the first frontier AI lab solely focused on international security. And our mission is what we would call decision dominance. How do we take, you know, petabytes of sensor data, multimodal data streams that are too much for a human to actually analyze, and then convert that analysis in real time into the right decisions across the range of military operations? Whether that's trying to figure out what we're doing for the next one to six months, what we're doing for the next one to four days, and, like, literally right now, what we're doing. Trying to make that process seamless in what right now is a pretty siloed and slow system.
Speaker 1:What are you building on top of? Like, it feels like Palantir exists. There's databases. Like, it's not like technology was just invented, but AI is obviously new. So, like, how how are you integrating with systems?
Speaker 1:What what are the tailwinds here that allow you to actually deploy systems effectively and quickly?
Speaker 13:Yeah. I mean, at the end of the day, we are not building on top of another AI model. We're building our own model. Yeah. And I think the reason for that is general purpose large language models are very useful, and they're very good at analyzing massive amounts of textual data. They have a lot of really, you know, important use cases, but they are not the right tool for 80% of military decision making.
Speaker 13:Like, and some of that's just that they're trained on labeled data sets. There's no labeled data set for World War III. You know, hopefully, we never have that data set. But you need a different type of model to make decisions on multiple time horizons, due to the type of hard, you know, physics grounded geospatial reasoning required for most military decisions. And deep reinforcement learning is the right approach for that problem set.
Speaker 13:So we're building a deep reinforcement learning model. And instead of, you know, labeled data sets, we're training that model in an environment that's been built by, you know, physicists who are grounded in the physics of a peer level conflict, but also domain experts. And I think a lot of our advantage and secret is that there's a lot of information and expertise about how to fight wars that's in people's heads. It's not in a document that can be read. It needs to be encoded and put in an environment to make a system that can actually function in that type of environment.
Speaker 1:Interesting. So, I mean, a lot of the foundation labs are going to data providers, data brokers, hiring experts to write down their SOPs. You're you're obtaining data from the US government. Is that correct? Like, what does that what does that pipeline look like?
Speaker 13:Sure. So building this data is, you know, some of the domain expertise we have hired on the team. So, I mean, you know, Clint and I's background. I was a joint fires instructor for a while at the Marine Corps' version of Top Gun, and Clint kinda spent his whole career, like, in the fires space, as we did at MARSOC. We've hired a lot of people on the team that were the absolute best at their military discipline.
Speaker 13:They were instructors, and they did it on deployment. And then we work with our end users. And I think in some ways, this gets to how do you trust how do you trust AI? How do we get users to trust AI? It's like we the users are part of our training data generation.
Speaker 13:They're always taking their knowledge, their understanding of the right way to do things today and helping bring that into the environment in a way that, you know, they can trust that, like, their knowledge and their understanding is actually being used to help the model get to a better start point.
Speaker 2:What ways in which conflict is evolving are you guys, like, willing to lean into and effectively bet the company around? It's obviously great to have all the time actually enlisted and serving the country, but the battlefield's, like, evolving. You know, I'm sure you and the team have been watching all the footage coming out of the last few days. But where is this all kind of going in your view?
Speaker 13:As far as where where is, like, kind of conflict moving
Speaker 2:An example is, I think we saw The US version of the Shahed, or I think Mhmm. Like, low cost, autonomous systems seem to be, you know, coming online in mass, and all these things are gonna impact how you build the product.
Speaker 13:Sure. So, I mean, I think at the end of the day, you know, scaling and operationalizing autonomous systems is a large part of the future of warfare. Right? And so the concept that we talk about internally, that we need to build and enable, is what we would call intelligent autonomy. How do you orchestrate all of these autonomous systems, not just against tasks that they've been assigned to, but with each other and with the more exquisite, expensive systems?
Speaker 13:And how do you do that in a way where you still have a human in the loop? Right? I think there's been a lot of discussion about fully automating the kill chain. No one wants that. That's not even really something that I've heard anyone talking about.
Speaker 13:What fundamentally people are trying to do is have the right amount of human in the loop, have humans for high value human touch points. Like, we can't have humans involved in every single thing that they're involved in today, and a lot of those decisions are not high value decisions that humans are uniquely positioned to make. So intelligent autonomy is about removing humans from low value human touch points, keeping them and bringing them back into the system for those touch points where they need to make a decision, whether for ethical reasons or for tactical reasons, and enabling them to make decisions that help move, you know, hundreds of thousands of autonomous systems and manned platforms and other types of unmanned platforms towards common goals across what could be a 100,000,000 square mile theater. And I think that's really the spec that we have to build towards.
Speaker 2:Makes sense. What what's the shape of the company today? Where are you guys based? What are what are you what what are your plans with this with this new capital?
Speaker 13:Sure. So we are, you know, a team of 18 today. It's really split, mostly between El Segundo, where the majority of the tech team is, and then a headquarters in Texas. We have a physics lab, like, down in Texas, so part of the tech team is there.
Speaker 13:And, really, our product, the way we think about it, is that we need to have a heavier weight model that is available for people that work out of an operations center and are not, like, compute limited. And then, you know, lighter versions of that model that can work at the edge, at the front lines, where people are going to be bandwidth constrained. Currently, we're deployed under contract with the Air Force, the Marine Corps, and the Navy. And so a lot of this year is expanding across all six services, both at the command level, but also taking what right now we're prototyping as the edge version of the model we built and getting that deployed to frontline units across all six services by the end of the year.
Speaker 1:Take us through the fundraising round. How much did you raise?
Speaker 13:So we've raised $32,000,000 to date, $26,000,000 in the Series A.
Speaker 8:god. And
Speaker 13:we've been, frankly, just really fortunate to have the investor team that we had. I mean, like like I was saying earlier today, you know, Clint and I knew nothing about fundraising, procurement, government acquisitions. Like, we didn't know we just, like, were wandering around in the dark, you know, back in January 2024 trying to to get moving. And having Point72 kind of take us early on, believe in us, kind of teaches the ropes of the defense tech space was tremendously valuable. And, you know, we've been really fortunate to have, you know, Jidevski and Kosanova co lead the Series A to really help us figure out how do we get to the next how do we get to the next scale, how do we start scaling the team, how do we build the systems that allow us to grow rapidly.
Speaker 13:And we've just been really fortunate to have supportive investors that understand the vision and and kinda knew what we needed to what they could teach us at at different phases.
Speaker 2:What's the origin of the name?
Speaker 13:So SMAC is actually a tactical task that you would call over the radio to strike a target. So when I was you know, we'd be out on a raid in, you know, Afghanistan, Iraq, and you'd have a hostile target that you were gonna call in an air strike on. The call that you just smack that target is actually, like, a task. And, you know, fires is one of the military's functional areas, and I think that's an area that's Clint and I's specialty. And so a lot of our initial go to market was in the fires space, and we were initially heavily focused on that.
Speaker 13:But, obviously, we're expanding this year to kind of the range of military decision making.
Speaker 2:Is it true that you hired someone named John Coogan?
Speaker 13:It is, it is true. I actually was getting really confused earlier today when I saw yours. I was like, wait, why is my John on Twitter? He's also a marine too.
Speaker 2:Was like
Speaker 1:That's so funny.
Speaker 13:Marines don't even use Twitter. Oh, wow. Rags.
Speaker 2:Very cool. Yeah. Go around and tell everyone. Yeah. I hired John Coogan.
Speaker 2:There you go. They're like, what?
Speaker 1:That's very fun.
Speaker 2:Awesome. Well, great great to meet you, and, I'm sure you'll be back on soon.
Speaker 1:We'll talk to you soon.
Speaker 13:Awesome. Thanks for having me.
Speaker 2:Yeah. Cheers, Andy.
Speaker 1:Thank you. Let me tell you about Plaid. Plaid powers the apps you use to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending, now with AI. We have a debate to settle.
Speaker 2:Is it Pet Smart or Pets Mart?
Speaker 1:I have an opinion here. I will settle this, but we should vote as a group. We should also play this video of the folks who kicked this off. Ben Lapides has a whole song asking the question: is it Pet Smart or is it Pets Mart?
Speaker 8:I got a question. Is it PetSmart?
Speaker 1:The aesthetic of this is incredible.
Speaker 2:Is this real? This is
Speaker 1:a real video. Smart? They went dressed up and found an office and trashed it for this video. So much production. So the question, is it a mart that caters to pets, or is are they saying that taking your pet there is the smart thing to do?
Speaker 1:Which side of the debate do you come down on, Jordy?
Speaker 2:They are smart about pets.
Speaker 1:Oh, okay. You're a "smart" guy. I'm the opposite. I think it's the pets mart. I think it's the
Speaker 2:Oh, and is it because the bounce is
Speaker 1:So there are two sides of this. Adam Bain, former COO of Twitter, chimed in: is it pet smart or pets mart? The color in the logo indicates that they are saying pets are smart, but the bouncing ball suggests that it's a mart for pets. Which one is it?
Speaker 1:It can't be both. Of course, it can be both. The logo designers are probably having fun. But I'm I'm I'm a bouncing ball guy. I say it's the mart for pets.
Speaker 2:Albert in the chat says, WTF, I just entered a PetSmart.
Speaker 1:That's gotta be weird. Tyler, where do you sit on this?
Speaker 2:Now would be a good time, Albert, to
Speaker 1:tell Tiebreaker you that over here. We got we got Tyler on my side.
Speaker 13:You go. I'm with John.
Speaker 1:Tyler took my side. It's the mart for pets.
Speaker 2:That just doesn't make sense. Why would they
Speaker 6:They are a mart for pets. It's a place that holds things for pets.
Speaker 1:It's a mart. Mart.
Speaker 2:No. It's PetSmart. It would be "pets" all red and then
Speaker 1:And then blue m a r t. But they flipped it. No. You make a compelling argument but I still disagree. Else do
Speaker 2:have to all the news, I don't think we ever said the words that We never rang the gong. OpenAI raised a $110,000,000,000 round of funding from Amazon, Nvidia and SoftBank. So it's like we're grateful for the support from our partners and have a lot of work to do to bring you the tools you deserve. That's probably that's a Gong record?
Speaker 1:Yes. It's the biggest round for a private company ever. It's also about one quarter of venture capital outlays that are expected for 2026 in one round. Absolutely. That's wild.
Speaker 1:Invested from venture capitalists broadly. Of course, this money is from the hyperscalers. It's it's more complicated than your average VC deal. I don't even know if this will be included in the VC
Speaker 2:Funding funding data. Tally.
Speaker 1:Yeah. Because it's such a big round and it's from so many strategists. But lots of more capital for OpenAI.
Speaker 2:Vo says everything on earth is thought bait. They're just trying to keep me pensive.
Speaker 1:It's true. It's true. What else
Speaker 2:Brian Peterson is saying happy late Valentine's Day to my wife: "when a semi-Neanderthal man loves a mostly human woman."
Speaker 1:Very odd. We should plant the bomb. We should get out of here. We have another podcast to do, actually, that will go live.
Speaker 2:We're gonna break. We're gonna do five hours. We're doing
Speaker 1:five hours today. You'll need to On
Speaker 2:the air.
Speaker 1:Tune in later. We will tell
Speaker 2:you what we're going on. We're going
Speaker 1:on. But leave us five stars. Hit that subscribe button. Sign up for
Speaker 2:and.com. Newsletter. Least. The US Open is letting you know that the US Open will return to Inverness Club in 2045.
Speaker 1:2045. Let's
Speaker 2:go. Too.
Speaker 1:See you tomorrow.
Speaker 2:I can't wait. Goodbye. Have a wonderful evening. Nice work, brothers. I'll see you on the next one.