Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PST, Monday through Friday. Available on X, Apple, Spotify, and YouTube.
Timeline was in turmoil over the weekend and yesterday. We covered a little bit of the Nucleus dust-up on the timeline. The biggest news in tech and AI is that the Ilya Sutskever episode of the Dwarkesh Patel podcast has dropped. The opening clip is iconic. It's very funny.
Speaker 1:It's a bit of a hot mic moment. Listen to that.
Speaker 2:All of this is real. Yeah? Meaning what? Don't you think so?
Speaker 3:Meaning what?
Speaker 2:Like, all this AI stuff. Yeah. That it's happening. Like, isn't it straight out of science fiction?
Speaker 4:Yeah. Another thing that's crazy is, like, how normal the slow takeoff feels. The idea that we'd be investing 1% of GDP in AI it never felt like a real thing.
Speaker 3:You know? Where right now it just feels like
Speaker 2:And we get used to things pretty fast turns out. Yeah. But also it's kinda like it's abstract, what does it mean? What it means that you see it in the news Yeah. That such and such company announced such and such dollar amount.
Speaker 1:Right.
Speaker 2:That's that's all you see. Right. It's not really felt in any other way so far.
Speaker 4:Yeah. Should we actually begin here? Think this is an interesting discussion. Sure.
Speaker 1:It's one of the greatest podcast intros of
Speaker 4:all time. Point of view. So good. So good.
Speaker 3:That's gonna be a new meta.
Speaker 1:Yes. Yes. You you can't you can't fake that. It's amazing. Also, it's just funny because, it's effectively getting caught on a hot mic.
Speaker 1:But I was joking. I was like, of all the things that you could say on the hot mic before you sit down, oh, okay, we're actually recording, this is just completely reaffirming everything we know about Ilya Sutskever. It's just completely the same. Like, okay. He is a true believer. It's not like he was sitting down and being like, Dwarkesh, we got to go on my private plane.
Speaker 1:I just sold so much secondary. It's crazy what's going on with this stuff. Like, if people really think this AI thing's gonna pan out, I'm making billions of dollars, and I'm I'm cashing out. I'm I I don't believe any of this stuff is real. No.
Speaker 1:He wasn't caught on a hot mic like that. He he his hot mic moment is like, wow. It's exactly like science fiction. Everything It's
Speaker 3:all real.
Speaker 1:It's all real. Yeah. Which is just iconic. Tyler, did you have any other takeaways from your speed run? You're listening to it at 5x.
Speaker 1:Right? Yeah. Does he pop the scaling bubble? Does he give a bearish take at any point? Is it over?
Speaker 5:So I wouldn't say he's, like, anti scaling, but he does kind of give this interesting take, where he basically says that, like, for AI companies, there's too few ideas Mhmm. For the amount of companies and for the scale that we're at. Mhmm. You can think of AI progress as being in these kind of distinct ages. Right?
Speaker 5:So he says, 2012 to 2020 was like the age of research Mhmm. Where you're trying all these, like, different ideas and the scale of things is very small. Right? Like, to train the original AlexNet was, like, two GPUs. To do the original transformer was, like, eight, maybe 64, but, you know, a very small number of GPUs.
Speaker 5:Once we kinda figured out that transformers work, we entered this age of scaling. Mhmm. And that's basically from 2020 to 2025. And now we're basically at this point where, like, yes, you can keep scaling and models will get better. But even if you scale 100x Mhmm.
Speaker 5:Like, are we really gonna get super intelligence? It'll get better on the benchmarks Yep. And they'll become more useful. But it's not like this he doesn't think that just raw scaling alone is basically what's gonna bring us there. I mean, this has been echoed by a lot of people.
Speaker 5:We still need a couple different kind of paradigms
Speaker 1:Yeah.
Speaker 5:For this to work. The reason that Opus 4.5 was better is not just because they scaled pre training.
Speaker 1:Yeah.
Speaker 5:It's scaling generally. The scaling has gone from pre training and now it's RL. Yeah. And so we basically we need to find another paradigm. And the way you do that is just doing like research.
Speaker 5:And and so he talks about SSI as basically being this, like
Speaker 1:Return to research. To research.
Speaker 5:Yeah. It's small kind of training runs. Even though, you know, they only raised $3,000,000,000, which is, like, small compared to other
Speaker 1:Sure.
Speaker 5:To other research Yeah. Institutions. The fact that they're basically putting it all on these kind of I mean, I don't know if they're moonshots, but they're these small training runs where they're doing experiments. Yep. And then they're gonna scale it up eventually.
Speaker 5:Yep. But they're not just basically trying to win the AI race by scaling up and doing the same thing as someone else.
Speaker 1:Yeah. Yeah. They're trying to find a way to actually bend the scaling curve, find a new scaling law, or find a new technology that they can scale against. I was thinking about Ilya's talk at NeurIPS last year. He pulls up this chart of the relationship between a mammal's body mass and its brain size, and it's a pretty linear graph.
Speaker 1:And so, like, the elephant is a lot bigger than the mouse, and so it has a proportionally larger brain for its body size. And it's this nearly perfect linear curve. I should try and figure out if I can maybe text it in. So basically, the mammals have this, like, very clear linear trend. But then the nonhuman primates are a little bit higher up on the chart, and they're just doing a little bit better.
Speaker 1:But then hominids, the actual humans, are on a very distinctly different curve. It was making me think maybe that's what we're supposed to look for. When we say straight lines on log graphs, when we say we are seeing scaling happen with the current architectures, which line are we scaling against? Are we actually scaling on the human curve? Or are we waiting for a divergence from the current scaling law?
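The "which line are we on" question can be made concrete with a quick log-log fit. The data points below are invented placeholders, not real allometric measurements; the sketch only shows how two groups can each sit on a clean straight line in log space while following visibly different scaling laws, which is the divergence the chart in the talk illustrates.

```python
import numpy as np

# Hypothetical (body mass kg, brain mass g) points for two groups.
# These are illustrative stand-ins, NOT real allometric data.
mammals = [(0.02, 0.4), (1.0, 7.0), (70.0, 200.0), (4000.0, 5000.0)]
hominids = [(40.0, 450.0), (60.0, 900.0), (70.0, 1350.0)]

def loglog_fit(points):
    """Fit brain ~ c * body^k, i.e. a straight line in log-log space.
    Returns (exponent k, intercept log10(c))."""
    x = np.log10([p[0] for p in points])
    y = np.log10([p[1] for p in points])
    k, logc = np.polyfit(x, y, 1)
    return k, logc

k_m, _ = loglog_fit(mammals)
k_h, _ = loglog_fit(hominids)
# If both groups obeyed one scaling law, the fits would agree;
# a different slope is the "distinctly different curve" in the talk.
print(f"mammals:  brain ~ body^{k_m:.2f}")
print(f"hominids: brain ~ body^{k_h:.2f}")
```

With these made-up points the hominid exponent comes out much steeper than the mammal one, which is the shape of the argument: being on a straight line tells you less than knowing which line it is.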
Speaker 5:Scaling has taken all the air out of the room. Mhmm. Right? Where, like, basically, we have more than enough compute to try these, like, different ideas Yeah. But they're just all going straight into training the next big model
Speaker 1:Yeah.
Speaker 5:Using the next paradigm. And maybe it's slightly different. Right? You have a different way of doing RL or whatever, but it is still fundamentally the same thing. Right?
Speaker 5:And he talks about maybe continual learning is really the better approach. Right? We've been in this era of, like, having a pre training thing for so long that we think of AI as, like, you train this thing and then you release it and it's, like, done. And RL is, like, a little bit different now because
Speaker 1:Yeah.
Speaker 5:There's this idea of post training and you can kind of integrate different
Speaker 1:things also. I thought the interesting thing was with pre training, you use the whole Internet, so you don't have to decide anything. You're just applying this algorithm to all the data, all the compute, and there's no decisions. But then with RL, you have to decide, okay, we're putting in these math equations and we're maybe not putting in something else, because we're actually creating the data, and it's not just this
Speaker 5:This is maybe why we see these kind of, like, models that do super well. Yep. They do super well in evals, but not so much
Speaker 1:Yeah. Some of it's overfitting and
Speaker 5:And the reason is because the data that we choose is not the correct data because researchers are basically being reward hacked maybe Yeah. Into like just solving for benchmarks.
Speaker 1:It's interesting to hear the conclusion is we need another breakthrough, and then simultaneously the consensus being, but we're definitely going to get that breakthrough in the next decade.
Speaker 3:I feel like it echoes a lot of even what Mike Knoop has been saying. We need new ideas.
Speaker 1:Yeah, He's
Speaker 3:been saying this for months.
Speaker 1:But it's way harder to predict the rate at which breakthroughs will arrive. Whereas you can actually chart out, okay, the formation of capital, the time it takes to build a data center, how long it takes to, you know, manufacture a bunch of GPUs, rack them, run the training run. Like, that's much more predictable than, like, human came up with new algorithm. That's sort of random.
Speaker 5:And he brings this up as the reason why you see companies doing this. Because if you're raising money, it's so much easier to justify the raise by saying, we're gonna buy this data center
Speaker 1:Totally.
Speaker 2:And do
Speaker 5:this training run. Totally. It's gonna cost exactly this much. It's very underwritable. Yeah.
Speaker 5:Then the model will be this good, then we can use it to monetize this way.
Speaker 1:Totally.
Speaker 5:Where if you're just saying, like, oh, yeah. We're gonna just pay a bunch of, like, really smart researchers Yeah. To do a bunch of research, and then they'll figure something out. Yep. It's like, you can't really Yeah.
Speaker 3:In some ways, it feels like SSI is set up for, like, somewhat of a mini AI winter Mhmm. Or, like, at least riding the hype cycle down. Yeah. Because it doesn't sound like he's sitting there being, we raised 3,000,000,000 and we're spending it in the next twelve months. It's like
Speaker 1:2.9 was that.
Speaker 3:No. Not not not
Speaker 1:No. No. No. No. That's the point.
Speaker 1:It's not. It's like equity. It's just sitting there. He can clearly pull back out.
Speaker 3:I'm gonna give each researcher, all these different teams, like, shots on goal. No. I love it. Gonna keep taking those shots until, obviously, he'd be able to raise, like, another $10,000,000,000 whenever he wants, especially if he has like a key breakthrough insight and they can be first to scale that. We're delighted by Google's success.
Speaker 3:They've made great advances in AI and we continue to supply to Google. NVIDIA is a generation ahead of the industry. It's the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions.
Speaker 1:That is a crazy thing to post.
Speaker 3:Crazy, crazy, crazy thing to post.
Speaker 1:Sometimes you get stuff
Speaker 3:I don't know, boys, but having the largest company in the world sending tweets to defend their main product is not very reassuring.
Speaker 1:I feel like this would be so much better delivered. I actually don't have that much of a problem with the actual text here. This should be delivered by Jensen with some nuance in a conversational setting. It just hits a lot different when this is posted at exactly 9 AM, clearly scheduled, clearly typed out in a document. It feels like a press release, which is just an odd thing when it should be an answer to a question.
Speaker 1:Someone, Bobby Cosmic in the chat was saying like, oh, the mainstream media is just now picking up on the Gemini three story. And there's articles in The Wall Street Journal and other places saying like, oh, maybe Google's back. Like, you know, buy Google. Like, it's very exciting. And so NVIDIA feels feels the need to respond to that.
Speaker 1:But it's a lot different when it's actually a response instead of just, like, we're putting out a press release. Like, who knows why? Like Yeah. As opposed to, like, Jensen saying, like, well, since you asked, to a talk show host or news anchor or podcast host, whoever he's talking to, Dwarkesh, whoever he's talking to, maybe us. We'd love to have him.
Speaker 1:I can ask him that question. He can defend this here.
Speaker 3:The timing seems important because they are coming under a huge amount of pressure right now. There was an article in Yep. Barron's this morning by Tae Kim. Yep. The headline is not what NVIDIA's comms team would have liked it to be.
Speaker 3:NVIDIA says it's not Enron in private memo refuting accounting questions.
Speaker 1:That's a crazy thing to say.
Speaker 2:Let me get let me get
Speaker 3:into the coverage. So Tae says, a series of prominent stock sales and allegations of accounting irregularities have put Nvidia in the middle of a debate about the value of artificial intelligence and its related stocks. Now Nvidia is pushing back. In a private seven page memo sent by NVIDIA's investor relations team to Wall Street analysts over the weekend, the chipmaker directly addressed a dozen claims made by skeptical investors. NVIDIA's memo, which includes fonts in the company's trademark green color, begins by addressing a social media post from Michael Burry last week, which criticized the company for stock based comp, dilution, and stock buybacks.
Speaker 3:Burry's bet against subprime mortgages before the 2008 financial crisis was depicted in the movie The Big Short, of course. Nvidia repurchased $91,000,000,000 in shares since 2018, not $112,000,000,000. Mr. Burry appears to have incorrectly included RSUs. RSU taxes and employee equity grants should not be conflated with the performance of the repurchase program.
Speaker 3:Nvidia said in the memo, employees benefiting from a rising share price does not indicate the original equity grants were excessive at the time of issuance. That makes sense. Barron's reviewed the memo, which initially appeared in social media posts over the weekend, and confirmed its authenticity. Burry told Barron's he disagrees with Nvidia's response and stands by his analysis. He said he would discuss the topic of the company's stock based comp in more detail.
Speaker 3:Burry is, of course, now over on Substack. He's charging $380 a year. And if you are a permabear, this is like Christmas coming early. Nvidia didn't respond to Barron's request for comment. But they also responded to claims that the current situation is analogous to historical accounting frauds, Enron, WorldCom, and Lucent, that featured vendor financing and SPVs.
Speaker 3:Unlike Enron, Nvidia does not use special purpose entities to hide debt or inflate revenue. NVIDIA also addressed allegations that its customers, large technology companies, aren't properly accounting for the economic value of NVIDIA hardware. Some of the companies use we've talked about this a six year depreciation schedule for GPUs. Burry said he believes the useful lives of the chips are shorter than six years, meaning NVIDIA's customers are inflating profits by spreading out depreciation costs over a long period. The TPUs equal bad for NVIDIA take is up there with the dumbest, maybe worse than DeepSeek, as it completely misses what actually happened in the last six weeks.
Speaker 3:And I will remember who is who in the zoo, my view. One, demand for AI is bananas. No one can meet demand. Everyone is spending more. Google said just yesterday they have to double capacity every six months to keep up.
Speaker 3:Two, scaling laws are intact. He's referencing Gemini three. The flywheel is about to speed up. Somehow the mid curve crew thinks this is zero sum competition. None of this suggests that.
Speaker 3:If you think the race is hot now, wait until you see what comes out of large coherent Blackwell clusters. All the magic from the quote god machines is pretty much still Hopper based. Lastly, a quick GPU versus TPU note: the cost and performance specs on the box aren't what you get in real life. And Google is going to get a fat margin too, doubled up. What matters is system level effective tokens to watts to dollars and TCO.
Speaker 3:NVIDIA GPUs have higher FMU because they're already embedded in workflows slash the ecosystem is massive. By the way, this is a good test. If you have an opinion on this topic but you have to look up FMU, then perhaps curate better sources. What?
Speaker 1:M f u. M f u.
Speaker 3:You said FMU. Anyway, the effective token per watt gap also likely widens with Rubin. Add in that Jensen can actually deliver volume in a tight market, plus future flexibility, multi cloud capable, programmable for paradigm shifts, and he'll sell every GPU he makes for years. Google will too, since everyone wants a second supplier and TPU is a fantastic chip. But this is as far from either or as it gets.
Speaker 3:The one benefit of this confusion is that it is likely to give Google a brief stint as the world heavyweight champion, the most valuable company. I would guess the midwits put the strap on them in less than two weeks.
Speaker 1:Put the strap on them? What does that mean? Just, like, pile in? It seems like he's predicting that people will overplay the Nvidia bear take and Overplay the Google opportunity. Opportunity, and that will result in Google becoming the most valuable company in the world.
Speaker 1:And he uses the phrase put the strap on them in multiple
Speaker 3:Yep.
Speaker 1:In less than two weeks.
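The "system level effective tokens to watts to dollars and TCO" framing from that thread can be sketched as back-of-envelope arithmetic. Every number below (peak FLOPs, MFU, power, price, FLOPs per token, and both chip names) is a hypothetical placeholder, not a real spec; the point is only that the spec on the box gets discounted by achieved utilization (MFU, model FLOPs utilization) before any per-watt or per-dollar comparison is meaningful.

```python
from dataclasses import dataclass

FLOPS_PER_TOKEN = 2e9  # assumed compute cost per generated token (made up)

@dataclass
class Accelerator:
    name: str
    peak_tflops: float  # paper spec, TFLOP/s (hypothetical)
    mfu: float          # fraction of peak FLOPs actually achieved
    watts: float        # system power draw (hypothetical)
    dollars: float      # amortized system cost for the period (hypothetical)

def effective_tokens_per_sec(a: Accelerator) -> float:
    # Discount the box spec by utilization before comparing anything.
    return a.peak_tflops * 1e12 * a.mfu / FLOPS_PER_TOKEN

# Chip B has the bigger number on the box; chip A wins after the MFU discount.
chip_a = Accelerator("chip A", peak_tflops=1000, mfu=0.45, watts=1000, dollars=30000)
chip_b = Accelerator("chip B", peak_tflops=1200, mfu=0.30, watts=900, dollars=25000)

for a in (chip_a, chip_b):
    tps = effective_tokens_per_sec(a)
    print(f"{a.name}: {tps:,.0f} tok/s, "
          f"{tps / a.watts:,.1f} tok/s per watt, "
          f"{tps / a.dollars:,.2f} tok/s per dollar")
```

The per-watt and per-dollar ratios are the "system level" numbers the thread argues actually matter; with these placeholder inputs the chip with the lower peak spec delivers more effective tokens.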
Speaker 3:According to today's Wall Street Journal, AI related investment accounts for half of GDP growth. A reversal would risk recession. We can't afford to go backwards. The article is How the US Economy Became Hooked on AI Spending. President Donald J.
Speaker 3:Trump unveils the Genesis Mission to Accelerate AI for Scientific Discovery. Today, Trump signed an executive order launching the Genesis Mission, a new national effort to use artificial intelligence to transform how scientific research is conducted and accelerate the speed of scientific discovery. The Genesis Mission charges the Secretary of Energy with leveraging our national laboratories to unite America's brightest minds, most powerful computers, and vast scientific data into one cooperative system for research. The order directs the Department of Energy to create a closed loop AI experimentation platform that integrates our nation's world class supercomputers and unique datasets to generate scientific foundation models and power robotic laboratories. The order instructs the assistant to the president for science and technology to coordinate the national initiative and integrate data and infrastructure from across the federal government.
Speaker 3:There's one more note here on strengthening America's AI dominance. Trump continues to prioritize America's global dominance in AI to usher in a new golden age of human flourishing, economic competitiveness and national security.
Speaker 1:Yeah. I'm very interested to hear how the public private partnership actually works here. There was a time when basically every cool technology was coming out of DARPA, coming out of the US government. The US government landed on the moon. And since then, you know, I think a lot of people in technology have lost faith in the US government overseeing the development of technology.
Speaker 1:Even academia. I mean, people think, like, you know, AGI will emerge from a private C corp. That's where people believe the best work will be done. Give Ilya Sutskever, give the best scientist, $3,000,000,000 and let him go cook. Like, that's the thesis currently. This feels like somewhat of a rejection of that in some ways.
Speaker 1:There's obviously lots of different places where having AI resources, having science and technology resources within the government makes a ton of sense. But it'll be interesting to see, like, where are the interfacing points between the two categories. By default, I think most people in our audience in technology would say, hey, let's leave the space travel and the AI research to the private sector. Should we run through the Astral Codex Ten piece on trait based embryo selection? This is from Scott Alexander in Astral Codex Ten.
Speaker 1:He said, suddenly, trait based embryo selection. In 2021, Genomic Prediction announced the first polygenically selected baby. When a couple uses IVF, they may get as many as 10 embryos. If they want one child, which one do they implant? In the early days, doctors would just eyeball them and choose whichever looked the healthiest.
Speaker 1:Later, they started testing for some of the most severe and easiest to detect genetic disorders, like Down syndrome and cystic fibrosis. The final step was polygenic selection: genotyping each embryo and implanting the one with the best genes overall. Best in what sense? Genomic Prediction claimed the ability to forecast health outcomes from diabetes to schizophrenia. For example, although the average person has a thirty percent chance of getting type two diabetes, if you genetically test five embryos and select the one with the lowest predicted risk, they'll only have a twenty percent chance. So you get a ten percentage point improvement there.
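The thirty-to-twenty-percent claim can be sanity-checked with a toy liability-threshold simulation. Everything here is an assumption chosen for illustration, not taken from the article: a predictor that correlates r = 0.3 with true disease liability, sibling embryo scores treated as independent (real siblings' genotypes are correlated, which would shrink the gain), and a threshold set so baseline prevalence is 30%.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000        # simulated couples
EMBRYOS = 5        # embryos available per couple
R = 0.3            # assumed predictor-liability correlation (made up)
THRESHOLD = 0.5244 # Phi^-1(0.70): ~30% of standard normal liabilities exceed this

# Predicted polygenic score for each embryo. Independence across embryos
# is a simplification; sibling correlation would reduce the benefit.
scores = rng.standard_normal((N, EMBRYOS))
noise = rng.standard_normal((N, EMBRYOS))
# Liability-threshold model: disease occurs if latent liability > threshold.
liability = R * scores + np.sqrt(1 - R**2) * noise

baseline_risk = (liability[:, 0] > THRESHOLD).mean()  # implant at random
picked = scores.argmin(axis=1)                        # lowest predicted risk
selected_risk = (liability[np.arange(N), picked] > THRESHOLD).mean()

print(f"baseline risk: {baseline_risk:.1%}")  # close to 30% by construction
print(f"selected risk: {selected_risk:.1%}")  # noticeably lower
```

Under these assumed parameters the selected-embryo risk lands roughly in the high teens, in the same ballpark as the 30-to-20 figure; a weaker predictor, or correlated sibling scores, would give a smaller drop.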
Speaker 1:That's nice. Since you're taking the healthiest of many embryos, you should expect a child conceived via this method to be significantly healthier than one born naturally. Polygenic selection straddles the line between disease prevention and human enhancement. In 2023, Orchid Health, founded by Noor, who we've had on the show, entered the field. Unlike Genomic Prediction, which tested only the most important genetic variants, Orchid offers whole genome sequencing, which can detect the de novo mutations involved in autism, developmental disorders, and certain other genetic diseases.
Speaker 1:Critics accused GP and Orchid of offering designer babies, but this is only true in the weakest sense. Customers couldn't design a baby for anything other than slightly lower risk of genetic disease. You're basically just selecting out of what you already got. They're not editing the genes. They're merely sequencing them and then allowing you to select.
Speaker 1:These companies refused to offer selection on traits, the industry term for the really controversial stuff, like height, IQ, or eye color. Still, these were trivial extensions of their technology, and everyone knew it was just a matter of time before someone took the plunge. Last month, a startup called Nucleus took the plunge. They had previously offered 23andMe style genetic tests for adults. Now they announced a partnership with Genomic Prediction focusing on embryos.
Speaker 1:Although GP would continue to only test for health outcomes, you could forward the raw data from GP to Nucleus, and Nucleus would predict extra traits, including height, BMI, eye color, hair color, ADHD, IQ, and even handedness.
Speaker 3:And it's worth noting that Nucleus is now being sued by genomic prediction.
Speaker 1:Even though they have this partnership.
Speaker 3:I'm assuming the partnership is no longer. We can ask. Yeah. But I'm assuming it's no longer, because one of GP's co founders left the company, Genomic Prediction, to
Speaker 1:Interesting. Join
Speaker 3:And allegedly turned off all the security cameras that the
Speaker 1:Is that a metaphor? Or is that actually
Speaker 4:The lawsuit alleges that the
Speaker 3:that he turned off all the security cameras on his
Speaker 1:That's not a metaphor for, like, you know, sharing a Google Drive of PDFs.
Speaker 3:It's his last day at work. Okay. And he was allegedly, like, Okay. Up
Speaker 1:So he turns off the cameras, allegedly. And the implication is that maybe he was rummaging around, like, literally taking documents or something like that. That's at
Speaker 3:least what the lawsuit is alleging. Okay. Wow. People at Nucleus were emailing the former co founder at his old email address with evidence of them violating the agreement that they had.
Speaker 3:Anyways, it's very, very, very messy. We can ask
Speaker 1:Yeah, there's like four or five companies involved in this.
Speaker 3:And all of them are controversial, because this is, I think, probably the most controversial category that you can be in.
Speaker 1:Yeah. It's certainly up there. And also, it's so easy to throw I mean, in the same way that people are throwing Enron at NVIDIA, it's so easy to throw Theranos at any biotech company that's accused of anything. And also with biotech, it's pretty hard to understand the underlying science. It's not as simple as, okay, does the website work?
Speaker 1:Does the business make money? You know, what's the cash flow like? It's way more complicated. And so it does attract even more attention. So one of the other companies in the space is Herasight, and Astral Codex Ten continues here.
Speaker 1:They entered the space with the most impressive disease risk scores yet, an IQ predictor worth six to nine extra points, and a series of challenges to competitors, whom they call out for insufficient scientific rigor. Their most scathing attack is on Nucleus itself, accusing its predictions of being misleading and unreliable. Let's start with the science and then move on to the companies to see if we can litigate their dispute. In theory, all of this should work. Polygenic embryo screening is a natural extension of two well validated technologies: genetic testing of embryos and polygenic prediction of traits in adults.
Speaker 1:So genetic screening of embryos has been done for decades, usually to detect chromosomal abnormalities like Down syndrome or single gene disorders like cystic fibrosis. It's challenging. You need to and we've talked about this before you need to take a very small number of cells, often only five to 10, from a tiny proto placenta that may not have many cells to spare, and extract a readable amount of genetic material from this limited sample. But there are known solutions that mostly work. And so the companies that we're talking about today aren't necessarily doing, like, the fundamental lab equipment development, building the machine, figuring out how to sequence data from the sample. It's more about the analysis that happens on top of the results.
Speaker 3:And the recommendations.
Speaker 1:And the recommendations.
Speaker 3:Which is probably, which I would say is the most controversial part of this.
Speaker 1:I don't know that any of them are recommending, hey, we think you should pick this baby. They're more just saying, like, we think that, according to the data, this baby might
Speaker 3:But if you're giving somebody risk factors, you're
Speaker 1:Yeah, but that's not a recommendation. If I tell you, this car is 700 horsepower and does zero to 60 in two seconds, and this one is 800 horsepower and does zero to 60 in two point four seconds, this one's faster in a straight line, this one's faster on the curves, and then you pick. Like, I didn't make a recommendation. I just told you the stats. If a company engages in malpractice, e.g., plagiarism, providing products they should know are bad to customers, etcetera, is it water under the bridge if they can clean up? That's obviously a reaction to my question, which was, you know, is there a redemption arc in his mind? Somebody says Volkswagen can answer this question really well. I think that's because of Dieselgate.
Speaker 1:I just feel like the next turn of discussion needs to be, okay, we tested the models. We tested the data. We tested the claims at a higher level of rigor, I guess.
Speaker 5:So someone is also accusing Kian of using a Chad filter?
Speaker 1:This happened before. So when Kian came on the show maybe six months ago, Growing Daniel accused him of using a Chad filter, and it went super viral. And I was kind of like, oh, that's I don't know. I don't know how to, you know, even respond to that. That's a very silly claim.
Speaker 1:I have no idea if this is real. I can't tell at this point, on a Zoom call at this resolution. What do you think? Do you think this is real? Are you guys just cracking up?
Speaker 1:Because does everyone think it's real? I don't
Speaker 3:I don't think that he used a filter. I don't think he used
Speaker 1:a filter either.
Speaker 5:I think he just grew a beard and
Speaker 1:I think he's just been mewing maybe.
Speaker 5:Maybe he's just photogenic.
Speaker 1:Yeah. It is possible that he just, you know, flexed his jaw muscles and, like, you know, has low body fat. I don't know. I feel like it would be extremely high risk to run a chin augmentation filter.
Speaker 3:This down for a second. I mean
Speaker 1:Because because you you know that's what happens. Right? When you're using, like, the Snapchat filter or, like, the TikTok filters, like, sometimes they pop in and out. And if they pop out, like, you're done.
Speaker 4:We're gonna get the Nucleus test, the gigachad test, and the results.
Speaker 1:Everyone's cracking up in the studio. We're having a wild time. Anyway
Speaker 3:This is actually insane. Apparently, according to X, I don't know if this is true, but the robbery that took place yesterday, in which an armed thief posed as a delivery driver and robbed somebody for $11,000,000 of Ethereum and Bitcoin, targeted Lachy Groom.
Speaker 1:Woah. What? An armed thief posing as a delivery guy finessed his way into the $4,400,000 Mission District home shared by investor Lachy Groom. Yes. Sam Altman's ex boyfriend and another tech investor named Joshua.
Speaker 3:Oh, okay. So it was not Lachy, but Joshua?
Speaker 1:Garry Tan posted the footage, panicked enough to delete it minutes later. Crypto security experts are now saying what everyone thinks. Self custody is great until someone shows up at your door with a fake UPS label and a Glock. San Francisco's tech leaders are about to hard pivot into vault custody, private security, zero public flexing, because this heist wasn't random. It was a warning shot.
Speaker 3:Very ChatGPT written. Mario Nawfal. But anyways, very sad.
Speaker 1:Yeah. We will be back on Friday
Speaker 3:Friday.
Speaker 1:For Black Friday. We have a fantastic lineup of a bunch of different entrepreneurs, ecommerce
Speaker 3:Some of
Speaker 1:founders, brand builders
Speaker 3:Some of the most savage
Speaker 1:Operators.
Speaker 3:Ecommerce operators in the world. Cannot wait.
Speaker 1:It's gonna be a great time.
Speaker 3:A lot of friends. Have a wonderful Thanksgiving. We are thankful for each and every one of you. Thank you for being a part of this. And we'll see you Friday.
Speaker 3:Goodbye. Cheers.