TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.
Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.
Sup, Ben? Good to see
Speaker 2:you again. To have you. Data centers in space, what you got?
Speaker 3:When are
Speaker 2:we going?
Speaker 3:Coming in hot.
Speaker 2:Me, you, the International Space Station. Let's break it down.
Speaker 1:You know, the space tourism industry is quite a fun one. Right?
Speaker 2:Yeah. Would you go? Would you do the Blue Origin thing where they blast you out past the Kármán line? Good enough for Katy Perry, but it's not good enough for you?
Speaker 2:What's
Speaker 1:going on? It's it's like, you know, like you're in free fall. You're not actually in space. Oh,
Speaker 3:doesn't count. Shout out to my Oh,
Speaker 1:Carmen line denier. I wanna be, like, going around for days. Oh, okay. Okay. I want my bone density to start to atrophy.
Speaker 2:It's like
Speaker 1:I I truly want to feel the negative effects of space. Yeah.
Speaker 2:Yeah. It's not enough just to go. I think I would do it. Right? Yeah.
Speaker 2:But it's better than being hanging out at all
Speaker 1:the cool stuff that our astronauts do. Right? Like, you know, put water and then like they're bubbling and then you like try and drink the water or
Speaker 3:like They'll be unplugging the GPU and plugging it back in.
Speaker 2:Oh, yeah. Yeah. Yeah. Yeah. Yeah.
Speaker 2:That's what you pay for your space tourism. You gotta go on a service trip.
Speaker 1:Ninety seconds of service. Yeah. On a SpaceX satellite.
Speaker 2:One ninety-second trip at a time. No. But people were wondering, you know, TPUs, Nvidia, going on the Starlink V5 or whatever, something gets up there. It feels like this will be something more like a Tesla silicon chip, an AI chip. Like, do you have any insight into what the process looks like? If you wind up figuring out how to dissipate the heat, if you wind up figuring out the costs, what might the chip look like?
Speaker 1:I think, you know, everyone freaks out, oh my god, putting stuff in space is expensive. Yeah. But if you look at, like, Starship launch costs, and, you know, they keep falling, you're like, fine. Right? Like, I think that by the end of the decade, the cost of space launch will be fine.
Speaker 1:The heat dissipation, I mean, it's a challenge where you just put up a massive, effectively, radiator. Yep. And it's fine. Right? By the end of the decade, you'll be good.
Speaker 1:I think the big challenge is that chips are just really unreliable. Right? And so how do you deal with, like, a couple things? Right? Satellites can only be so large before you start needing a lot of support structure so they don't tear themselves apart.
Speaker 2:Mhmm.
Speaker 1:So when you look at, like, the the launches, right, these these things are shooting out, like, tiny satellites. Yeah. And many of them. Okay. So you can't have like a big fully connected cluster of chips.
Speaker 1:And then, on top of that, right, how do you deal with any random error? Mhmm. On earth, you have techs running around the data center unplugging stuff, putting in spares, things like that. What do you do in space? On the ground, you RMA it to the factory, where they might unsolder it and resolder it and then test it, and it works and goes back out.
Speaker 1:Yeah. Sometimes it is just trash. Yeah. But, like, you know, that's the challenge to me.
Speaker 2:Is that I I feel like maybe the pattern we should be looking at is, like, how often do the Tesla self driving chips need to get serviced, because that's, like, the team that would probably be bridging the gap there. Like the Starlink satellites, sure, they go down, but, like, the service works. Like, you're just relying on some sort of, like, you know, 90% uptime, stuff's coming down. But most people that are in a Waymo, like, the chip keeps working. Right?
Speaker 2:Most people that are in a Tesla self driving, like, they're not like, you don't hear about Tesla owners being like, I love FSD, but I'm constantly in the shop getting my my custom silicon chips unseated and reseated. Right?
Speaker 1:Well, I mean, it's also a function of, like, the complexity of a chip.
Speaker 2:Right? Sure.
Speaker 1:You know, if a chip is twice as fast Yeah. And let's say the bit error rate, right? Like how often a bit flips Sure. Is the same, then it's erroring out twice as often. Yeah.
Speaker 1:But let's say the chip is 10 x as big. Right? And so when you look at like a Tesla FSD chip, very very good, very very efficient, very still like relatively Small. Inexpensive and cheap compared to Sure. You know, a big old GPU or TPU or whatever.
Speaker 1:Yep. Right? Those things are extremely large. Mhmm. And, you know, again, like if if the error rates are the same Yeah.
Speaker 1:Then it fails 10 x more. But in fact, the error rates are a bit higher, because they're pushing these things to the absolute limit. Yeah. Whereas, you know, Tesla does have some level of, like well, first of all, the Tesla car has two chips, sort of redundancy already So built in.
Speaker 2:Maybe you do that on the satellite, but then there's more power, more
Speaker 1:Yeah. Right. So the whole allure, right, of it is Yeah. Is, you know, effectively power is free. Right?
Speaker 1:And solar panels you'll get the cost curve of solar panels. You'll get the cost curve of Yeah. Satellite launches. You're you're like, this is this is free. This is great.
Speaker 1:Power is less than 10% of the cost of the cluster. Sure. Right? Yeah. So so, like, it's that 90% you're not saving anything on.
Speaker 2:Yeah.
Speaker 1:Yeah. And in so far as much as
Speaker 3:For potentially a 100 times the hassle.
Speaker 1:Yes. Yes. Yeah. There's this whole like, you know, like if you look at NVIDIA GPUs. Right?
Speaker 1:Yeah. When you first turn on the cluster, about 10 to 15% of them fail and get RMA'd in the first two
Speaker 2:weeks. Wow.
Speaker 1:That's And then that's fine. Like, you have to reseat them, whatever. Yeah. Like, the industry knows how to deal with this. Yeah.
Speaker 1:Right? And and over time, like, Hopper's now at 5%, but Blackwell's still 10 to 15%. Wow. Right? It actually started out higher than that.
Speaker 1:Sure. And when a new generation comes out, it's gonna be higher than 15%. It'll it'll have its curve gradually decline down. But Yeah. You know, who who's gonna are you gonna test it and burn it in on the ground?
Speaker 1:Or are you gonna say 5% of my chips, or 15% of my chips, are trashed? Yeah. Yeah. Because someone can't go up there and, like, do these things. Or am I saying, oh, I need robots who can do all this stuff in space, and now that's, like, an additional engineering problem, when sacks of meat are actually very cheap?
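A hypothetical sketch of the tradeoff being described: on the ground a failed chip gets reseated or RMA'd, while in orbit it is dead capacity for the satellite's lifetime. The ~15% early-failure rate echoes the figure quoted for a new GPU generation; the fleet sizes are invented for illustration:

```python
# Hypothetical write-off if early failures can't be serviced in orbit.
# Rates echo the ~10-15% early-RMA figure mentioned in the conversation;
# chips_per_satellite and n_satellites are made-up fleet assumptions.

early_failure_rate = 0.15   # fraction failing in the first weeks (new generation)
chips_per_satellite = 64    # assumption
n_satellites = 1000         # assumption

# On the ground, a failed chip is reseated or replaced cheaply.
# In orbit, a failed chip is permanently dead capacity.
dead_chips_in_orbit = early_failure_rate * chips_per_satellite * n_satellites
print(f"expected dead chips across fleet: {dead_chips_in_orbit:.0f}")
```

Burning chips in on the ground before launch trades extra test time for not launching that dead fraction at all, which is the choice being posed.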
Speaker 2:Yeah. Sacks of meat. Yeah. Speaking of which, we haven't talked since the Groq acquisition. What does that look like in the bull case?
Speaker 2:Like, if it's a good if the next version of Groq is a great chip, is it sitting next to the, you know, H200s, H100s in the rack, GB200? Like, how does it fit into the actual, like, what Nvidia deploys? Is it just a separate chip?
Speaker 1:I I think a big vibe shift from NVIDIA.
Speaker 2:Right?
Speaker 1:Before they were like, alright. Got this big GPU. Yeah. Yeah. Everyone's gonna use this GPU.
Speaker 1:Software ecosystem of the GPU is so good. Okay. It's one size fits all. Yeah. Everyone, you know, like, everyone's trying to make all these, like, specific point solutions.
Speaker 1:Yeah. But we've got the thing that's good good at everything. Okay. And then they had a vibe shift. Right?
Speaker 1:They launched this thing called CPX Okay. Which is a chip made for prefill Okay. With, you know, prompt processing, creating a KV cache Okay. And also good at, like, video generation and image generation. Yeah.
Speaker 1:And that's coming out later
Speaker 2:In this release, they were really talking about video generation as well.
Speaker 1:So yeah. You've got, like, CPX, you've got the standard GPU, now you've got the Groq chips, and they all fill a different niche.
Speaker 2:Okay.
Speaker 1:But really it screams, oh crap. We don't really know exactly where AI is going, which I don't think anyone does. Right? I mean, it's moving so fast. The software, the model architectures, etcetera.
Speaker 1:So we're just gonna like engineer solutions that are along multiple points of the Pareto optimal curve and then, you know, one of them will win. Yeah. Right? And I think I think it's like sort of like a big vibe shift from Nvidia. Mhmm.
Speaker 1:Also, they just knew OpenAI was gonna do the Cerebras deal, so they freaked out. But
Speaker 2:Got it. Okay. Yeah. Yeah. Yeah.
Speaker 2:Get me up to speed on what makes Cerberus important in the ecosystem right now.
Speaker 1:So, you know, you have people thinking, like, oh, latency matters in terms of where our data center is. It doesn't matter at all. What matters is, you know, we've moved from search, where the response is immediate, to chat applications, where, let's say, the response takes ten, twenty, thirty seconds. Now you've got agents, you know, I don't know, my Claude Codes are working in the background for a long time.
Speaker 1:Right? Yep. It doesn't matter where the data center is. But what does matter is that these streams of inference take, thirty minutes versus ten minutes Yeah. Versus five minutes.
Speaker 1:And for a lot of people, I'm fine to spend 10 x the price Sure. On something that completes 10 x faster. Yeah. And so Cerebras sort of just makes a ton of sense there.
Speaker 2:Yeah.
Speaker 1:So OpenAI, you know, they've got these like long horizon. There's there's like Codex 5.2 Yeah. Extra high thinking or whatever. Yeah. It's terrible.
Speaker 1:Can you guys teach them how to market? OpenAI, you have to sponsor this podcast. Yeah.
Speaker 2:Yeah. We had two months yesterday and I did actually ask him like, can I I I had the Codex app pulled up on my desktop and I was like, there are six different models and then there's a then there's another button that I can pick to it? Well, how many different products are
Speaker 3:called Codex now?
Speaker 2:There's a lot.
Speaker 1:Right? And now there's an app.
Speaker 3:Yeah. Yeah.
Speaker 1:Actually have
Speaker 2:another guy on just to do branding. Lexicon branding came on the show yesterday talking about the the all the naming
Speaker 3:Naming architecture.
Speaker 2:Naming architecture. It is complicated but hopefully
Speaker 3:You could tell his blood's boiling because, like, all the AI companies just have the most chaotic naming. Anthropic, Claude, Claude Code, but also you can use Claude Code for other stuff.
Speaker 2:Yeah. But yeah, I mean, with Cerebras, it seems like there is a value to it, but are they constrained on the supply side? Like, can they actually scale up to, you know, a Colossus-style data center that could actually speed up Codex, not just for, like, one user, but all the users?
Speaker 1:So I mean, Cerebras can speed up multiple users for sure. Yeah. Yeah. The question is sort of, like, where you use it, and that's where they have to, like, figure out where within Codex. Right?
Speaker 1:Yeah. Because there are times where Codex is running for like ten hours. Yep. And sometimes you don't mind. Right?
Speaker 1:Yeah. Screw it. I've put out this nice prompt. Gone. Work on it.
Speaker 1:Refactor my code. Do this thing. Do this task. Yeah. Other times, I want this iteration feedback loop.
Speaker 1:Yeah. So how do you expose it to the user without saying, hey, actually, there's another toggle. Yours is 18 times faster.
Speaker 2:Well, hopefully, like, a really robust model router. But it feels like that's been a process.
Speaker 1:Yeah. So so the OpenAI deal is like for seven fifty megawatts. It's Okay. It's not that much capacity Yeah. On the order of like what OpenAI has talked about.
Speaker 1:Yeah. You know, by the end of twenty twenty-eight, they'll be at like 16 gigawatts Sure. Of that
Speaker 2:So it's absolute cutting edge, the most price insensitive customers in that specific use case of this is the type of prompt that you need to return fast, then you'll get the speed up potentially.
Speaker 1:Right. Right. And and they've gotta figure out how to do
Speaker 2:it from a product
Speaker 1:exposing to the user, etcetera. But it it it's it's clearly something where there is demand. Right? Like, don't know. Like, Andrej Karpathy doesn't care if he's spending a thousand bucks per agent Sure.
Speaker 1:Per second or whatever. Right? Like, you know, whoever it is, these these like super cracked engineers don't care at all.
Speaker 2:Yeah.
Speaker 1:And then obviously, there's like a long tail of like actually cost does matter for most people. And so so all along that curve, they've gotta have solutions, right?
Speaker 3:Yeah. When did you first think that xAI might end up at another Elon company?
Speaker 1:I mean, it has been rumored for a long time. Right? Yeah. Like people are saying Tesla, Tesla, Tesla for the longest time. It's harder with a public company.
Speaker 1:Yeah. Yeah. And then a bit ago, people were like, oh, SpaceX and xAI. I'm like, wait. This makes no sense.
Speaker 1:Yeah.
Speaker 3:And then there was a very coordinated, like, narrative pump. Oh, yeah.
Speaker 1:Like And then the space data center.
Speaker 3:At the end of last year. No. Was perfect. It was like almost like perfectly telegraphed.
Speaker 1:Well, there's a bet, right, between basically the head of compute of xAI and the head of compute of Anthropic. Mhmm. And the bet is what percentage of worldwide data center capacity is in space by the end of '28. Okay. And the bar is 1%.
Speaker 1:Oh, wow. And so the xAI guy is, like, really bullish. The Anthropic guy is like A little far. Yeah. Yeah.
Speaker 1:So but it's it's a really interesting bet. I I take the under on 1% by '28 because that's a gigawatt in space. Yeah. But it's actually not that crazy. Right?
Speaker 1:Yeah. It's roughly a 150 Starship launches, at a 100 kilowatts a ton, to get to a gigawatt in space. Yeah.
Speaker 1:So, you know, Starship hasn't worked yet
Speaker 2:Yeah.
Speaker 1:Fully. I was looking at
Speaker 2:the energy draw of the current star Starlink fleet and I think they're at like, what is it, 200 kilowatts or something like that. So you you you get a thousand of those, 200 megawatts, and, like, you're starting to be be in the territory, something
Speaker 1:like that? Yeah. So the V two star satellites, I think, are the only ones they've launched. Maybe they've launched a few V threes, but the V threes are coming soon. And those are those are, a 100 x more bandwidth each.
Speaker 1:Right?
Speaker 2:Yeah. And more power. And just more power. And so I when I'm just thinking of like, can you scale this thing up at all? It's like, are they two orders of magnitude off?
Speaker 2:Are they three orders of magnitude? This feels like they're like one order of magnitude off I think they run something that looks like an h 100.
Speaker 1:I think the metric is, like, it's either 50 kilowatts Yeah. A ton, or something like this, per satellite for V3.
Speaker 2:Yeah.
Speaker 1:Let's say from V3 to whatever the compute thing is Yeah. They double it again, get to a 100. I think the V2s are, like, 25.
Speaker 2:Yeah.
Speaker 1:Yeah. So if you get to a 100 kilowatts per ton Yeah. For launch, it's it's only a 150 or so Starship launches. Yeah. I think it's so reasonable.
Speaker 1:Yeah. Maybe not '28. Maybe it takes '29. But, like, you know, it's so reasonable. The question is cost and reliability and, you know, what happens when the chip fails?
Speaker 1:How do you service it? That kind of stuff. How do you how do you deal with having clusters be much smaller instead of, like, you know, these big clusters? Even for inference, clusters are useful.
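The launch arithmetic in this exchange can be checked directly. The power density and payload figures below are the rough numbers floated in the conversation (about 100 kilowatts per ton, and roughly 100 tons of usable Starship payload, which is an assumption):

```python
# Back-of-envelope: Starship launches needed for a gigawatt of compute in orbit.
# kW-per-ton is the figure from the conversation; payload mass is an assumption.

kw_per_ton = 100        # assumed power density of a compute satellite
tons_per_launch = 100   # assumed usable Starship payload, metric tons
target_gw = 1.0         # the 1% bet works out to about a gigawatt

kw_per_launch = kw_per_ton * tons_per_launch   # 10,000 kW, i.e. 10 MW per launch
launches = target_gw * 1e6 / kw_per_launch
print(f"{launches:.0f} launches at {kw_per_launch / 1000:.0f} MW each")
```

The idealized figure comes out near 100 launches; the roughly 150 quoted in the conversation presumably carries margin for structure, radiators, and failed launches.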
Speaker 2:Yeah. Yeah. How do you think about Google's response to Groq? TPUs, obviously, very successful. But are they forking that project to eat more of the Pareto curve?
Speaker 1:Yeah. So so for the longest time, Google's had one main line of TPUs. Right? Made by Broadcom. And then sort of next year, they've diverged it.
Speaker 1:Right? Where Broadcom makes a TPU and MediaTek makes a TPU. These two TPUs are focused at different things.
Speaker 2:And they're fabbed at They're both
Speaker 1:fabbed at TSM. Everything at the end of the day goes to Rackus. Right?
Speaker 2:I wanna go there next, but everything goes through Arrakis.
Speaker 1:So fabbed by TSMC regardless, but both of these TPUs are focused on different things. Okay. And they've actually got a third project for another kind of TPU there. They also see this need to proliferate Okay. Along the curve of, like, hey, do I care a lot about super high amounts of flops, not that much memory?
Speaker 1:Yeah. Do I care a lot about super fast on chip memory only? Do I care about three d stacking memory? Do I care about, you know, the sort of general purpose middle ground AI chip, which is what, you know, an H100, a Blackwell Yeah. A TPU looks like today.
Speaker 1:You know, they're sort of like, oh, we need to hit the entire Pareto optimal curve. And it's like, okay, within this, there's training versus inference differences and like what numerics you want and all these other things. There's so much complexity there. Everyone everyone sort of is diverging their roadmaps once they're at a sufficient scale, I think.
Speaker 2:Yeah. Are are is Google still way ahead on cross data center training? Yes. And are the other labs like is that is that important to the other labs to catch up there? Or is it something that will just naturally happen because everything sort of commoditizes?
Speaker 2:Or do the other labs need to sort of marshal some Herculean effort to, like, crack the code on what it takes and what Google's doing?
Speaker 1:Yeah. So so it's a couple of things. Right? In 2023, everyone thought that scaling was pre training.
Speaker 3:Yeah.
Speaker 1:Right? You know, more parameters, more data. Oh, okay. And that's very difficult to split across data centers.
Speaker 2:And has Google been able to
Speaker 1:do that? And Google's been able to do that to an extent. Right? So what they've done is they've got, you know, they don't have the largest individual data center campus. But what they do is they do these like regions where it's like, hey, each data center is roughly 40 miles apart from each other.
Speaker 2:Sure.
Speaker 1:So in Nebraska and Iowa and then in Ohio, they've got like these complexes and now they're building one in Oklahoma, Texas. Got it. You know, complexes where there's all these data centers pretty close to each other.
Speaker 2:So it's not really cross data center across across the world. Right. It's just across like region.
Speaker 1:Yeah. And then that that makes a lot of the difficulties a lot easier. Flip side is we've also moved to RL. Right? Yeah.
Speaker 1:And the majority of the time of the chips is spent generating data. Right? Only doing forward passes through the model. Sure. And then you only send the final tokens that you verified back to the trainer to train on.
Speaker 1:Right? Yeah. So then you end up with, like, oh, instead of, in pre training scaling, you need to, like, synchronize all the weights every ten, twenty, whatever seconds. Mhmm. When you're doing these rollouts, and especially as things get more and more agentic in training, you might need to send not the entire weights, but just the tokens that are relevant.
Speaker 1:So way smaller amount of data and way less frequently. Mhmm. Right? Minutes at a time instead of seconds at a time. Yeah.
Speaker 1:And so you've got this like now now it's become like reasonable where, oh, actually multi data center training is completely reasonable. Yep. And people do this. People do multi data center multi chip training.
Speaker 2:Sure.
Speaker 1:Right? You know, you do your inference on one set of chips and you do your training on another set of chips. So like, Anthropic does this. I don't know if Google does this, but Google's kind of already, got the cards. Yeah.
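The bandwidth contrast being described (synchronous weight sync for pre-training-style scaling versus shipping back verified rollout tokens for RL) can be made concrete. Every size and interval below is an assumption for illustration, not a number from any real training run:

```python
# Rough bandwidth contrast between synchronous weight sync and
# RL-style rollout/token shipping. All sizes and intervals are assumptions.

params = 1e12            # a 1T-parameter model (assumed)
bytes_per_param = 2      # bf16
sync_interval_s = 20     # weight sync every ~20 s (pre-training style)

weight_sync_gb_s = params * bytes_per_param / sync_interval_s / 1e9
print(f"weight sync:  {weight_sync_gb_s:.0f} GB/s sustained")

tokens_per_batch = 1e8   # verified rollout tokens shipped back (assumed)
bytes_per_token = 4      # token id plus a little metadata (assumed)
rl_interval_s = 300      # shipped every ~5 minutes

rollout_gb_s = tokens_per_batch * bytes_per_token / rl_interval_s / 1e9
print(f"RL rollouts:  {rollout_gb_s:.3f} GB/s sustained")
```

Under these assumptions the sustained cross-site bandwidth drops by several orders of magnitude, which is why multi-data-center training becomes reasonable in the RL regime being described.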
Speaker 2:Okay. Got it. Let's go to Arrakis.
Speaker 3:Talk about Arrakis. Just
Speaker 2:there's this debate. TSMC risk, is that the bottleneck, or is energy? I was doing back of the envelope calculations. Feels like we're using maybe, like, 1% of global energy production, or Western energy production, specifically on AI workloads. And then we're using, like, 50% of leading edge fab capacity on AI workloads. And so that feels like, okay, well, even if we all agree and we say as a society, we're going all in on AI, we can only double the AI chip capacity before we need to build more fabs.
Speaker 2:That takes years. Whereas we could say, everyone turn off your air conditioning. We're sending the electricity to the data centers, right? Like we have the ability to generate. So we to create new
Speaker 3:Clapping for your grandma turning off
Speaker 1:the AC.
Speaker 2:Claude needs
Speaker 1:to eat. Keep strokes for all the grandmothers. Yes. Yes. I need my cat dancy videos.
Speaker 1:Need to
Speaker 2:feed Claude. Right? But but but seriously, like, there's this debate over, you know, is TSMC the main bottleneck or energy the bottleneck? How are you feeling about that?
Speaker 1:Yeah. Yeah. So so sidebar before I answer the question because I think it's fun. Yes. You know, in The US, it's insane to say turn off your AC for AI.
Speaker 1:Yes. Right? And the general public hates AI already.
Speaker 2:Of course.
Speaker 1:But in Taiwan, they've had droughts before, and they've turned off water to entire cities. They're like, oh, you get water three days of the
Speaker 2:week. Woah.
Speaker 1:And then the fabs still get supplied water. It's like, this is Wow. You know, you've gotta understand the mindset. We are not ready as weak Americans to do this. Yeah.
Speaker 1:That's right. No. But at the end of the day, right, like, water and power are certainly smaller constraints. Now, you've got to imagine, like, you know, the semiconductor industry is used to, hey, doubling the amount of transistors made every year or two. Yep.
Speaker 1:Part of that is Moore's Law, part of that is Moore's Moore capacity. Yep. Whereas the energy industry in America wasn't. And and so, like, initially people were, like, not creative. They're like, let's do let's do these kinds of gas plants.
Speaker 1:It's like, well, no. Now we've realized, you know, yes, there's three main main manufacturers of turbines and then you've got cycle, then you've got, like, IGTs. But you've also got, like, medium speed reciprocating engines. Right? Like, turns out Cummins can make, like, a million diesel engines a year, and, like, those can make electricity.
Speaker 1:Like, if I don't give a fuck and I put it in West Texas, easy. Yeah. So now it's more of like a regulation thing, a supply chain thing. Power is not a constraint in insofar, like, that much. Right?
Speaker 1:I think it certainly is a constraint still today. It was the biggest constraint in 2425, data center capacity power, because the industry was just not ready. People have woken up. They've like sort of been shocked to the system. Now you've got, you know, tens of gigawatts being deployed.
Speaker 1:You know, next year, 30 gigawatts are being added and we think the power is there for it. Wow. What was it for this year? It's it's or this year is like think it's like 18 ish, 10 ish. Okay.
Speaker 1:Fifteen fifteen to 18 ish.
Speaker 2:Sorry. So so almost a doubling.
Speaker 1:Yeah. Almost a doubling. Yeah.
Speaker 2:Yeah. Wow.
Speaker 1:And when you look at when you look at TSMC and the crew, right, there is not really, oh, this random, you know, there's 12 people making medium speed engines that you can now convert to make Yep. Power at some random data center. No. No. No.
Speaker 1:There's like there is Rackus.
Speaker 2:Yep.
Speaker 1:Right? There is one set of SPICE. Like, know, there's Yep. You know, that's it. Right?
Speaker 1:And so and then and then the flip side is like, okay, when you have 12 vendors, everyone's got a little bit of slack capacity. You know, there's more likelihood. You know, you can people are like, turbines you can't get. You can call a broker and you can get a turbine. Might be paying 50% more, two x more, but you can get a turbine.
Speaker 2:Yeah.
Speaker 1:Right? Like, it's You can't get
Speaker 2:a three nanometer fab.
Speaker 1:You cannot get a three nanometer fab. Exactly. And so when you talk about what's the you know, the the baton got passed from semiconductor shortages in '23 to power and and data centers That's right. In '24, '25. '26, we're still we're we're swinging the pendulum Mhmm.
Speaker 1:But it will fully beach semiconductors again in '27. Right? Mhmm. And so we see this across the entire space of the ecosystem. It's not just TSMC.
Speaker 1:It's also memory both. Mhmm. Because both of them have built at a certain pace. Now TSMC has been expanding at some rate. Mhmm.
Speaker 1:The memory makers, in fact, have just not expanded capacity. Basically, they've not built new fabs since 2022. Yeah. Because their their cycle their cycle is so undulating. Right?
Speaker 1:Yeah. And and so when you look at it, it's like, oh, even if they wanted to double capacity, they need to build the fabs. Yeah. Right? And building the fabs, it is the most complex building humans make.
Speaker 1:Right? It's it's it's it's the entire air of a clean room circulates itself every one point five seconds. What? And you don't even feel it when you're inside. Really?
Speaker 1:It's like that. And and it's like parts per billion Right? Of Like, it's it's actually insane how you could you could get coughed in the face by someone who has COVID and not get COVID.
Speaker 2:So It gets circulated so fast that it doesn't even hit you.
Speaker 1:It's like it's like
Speaker 2:It's that meme of, like, the spraying when someone's talking and then it's
Speaker 1:just it's circulated. So so one another another sidebar is everyone knows COVID like really popped off in Wuhan. Yeah. Right? Wuhan also is home to China's largest memory company, YMTC.
Speaker 1:Oh. And so when they were like welding people into their homes, the people who worked in the fab still went to work. Wow. It was because it's, you know, one, it's a national like it's a national importance. But two, like, these people are getting sick.
Speaker 1:This fab is like way too clean. Oh, that's crazy. Yeah. Jordy.
Speaker 3:I I wanna talk about Oracle. They put out a post this morning that said, our partners financing for the Dona Ana County, New Mexico, Shackleford County, Texas, and Port Washington, Wisconsin data centers are secured at market standard rates progressing through final syndication on schedule and consistent with investment grade deals. Obviously, they were fast following their posts from yesterday where they said the Nvidia OpenAI deal has zero impact on our financial relationship with OpenAI. We remain highly confident in OpenAI's ability to raise funds and meet its commitments. And obviously, everyone was looking at this being like, give me a cigarette.
Speaker 3:I like smoking. It's like bank run language. I haven't seen posts like this since like the FTX
Speaker 1:Is it just bad comms or is there something worrisome? It's it's terrible comms. Yeah. Like like I I told my Oracle context, like, who the hell is in charge of the Twitter? Like, what are you doing?
Speaker 1:NVIDIA did something similar last year when the whole TPU mania was
Speaker 3:Yeah. Going It was Yeah. It was it it was like we're we're thrilled with Google's progress with the TPU. That said, NVIDIA chips are the only, you know, there's Yeah. So It's like no one asked you to comment.
Speaker 3:Yeah. I mean like I'm sure a handful of people in your DMs and random but that doesn't mean
Speaker 2:Doesn't project confidence.
Speaker 1:It's it's sort of the lion shouldn't concern themselves with the sheep. Yes. And like, okay, Nvidia's the lion. Maybe maybe Oracle's a little bit more bumpy, but I think Oracle's like fine. Yeah.
Speaker 1:People are just freaking out because, you know, OpenAI is is peak know, people are peak negative on OpenAI right now because of how good Anthropic's been killing it.
Speaker 2:Sure. Sure. Sure.
Speaker 1:Yeah. I I think it's just like kinda silly. Like, need to they need to hire someone to do comms like a lulu or something. Right? Both Nvidia and Oracle because what are you doing?
Speaker 3:Yeah. How did you process yesterday in general? Jensen was clip farming. He was like, I don't know why he does these street interviews. Right?
Speaker 3:No other CEO
Speaker 2:does those where they just stick 25 microphones in your face and the paparazzi's flashing. It's a great vibe.
Speaker 1:It's it's, you know, Jensen's not been as famous as other CEOs for as long and yet he's so important now. Yeah. And if you've like if you know Jensen, how he's in meetings, there's I feel like there's two Jensen's. Right?
Speaker 2:Sure.
Speaker 1:There is like PR, like good at PR, just good at talking, good at like making people hyped up and believe what he's doing. He's great
Speaker 2:at standing on stage holding up the chip, delivering
Speaker 1:a sermon. And then there's the real Jensen which is like a business killer and like actually just knows about every like aspect of the supply chain. Right? All the way from like niche semiconductor, you know, design and manufacturing stuff all the way to like energy power data center. Yeah.
Speaker 1:And and and then doing the business deals too. Right? And so like you've got this whole Pareto like of the whole thing of whole range of things that he's good at and he's a killer in. Yeah. And clearly he's like he was in a meeting where he was being a killer and like negotiating like supply contracts or something.
Speaker 2:Oh and he walks outside.
Speaker 1:And then
Speaker 2:he walks outside. That's hilarious. Yeah. Yeah. My inference.
Speaker 2:But I like it. Yeah. That's awesome.
Speaker 1:And and that's why he was like, you know, like, he was like, still killer. Like, no. We never said we committed to a 100,000,000,000, you know, like, and it's like
Speaker 3:I don't know. Wait. Where do you even get the $100,000,000,000 number from? And it's like, well, you did go on CNBC and like a big deal out of it. Doggy pudding show.
Speaker 3:Assume that it was Yeah. But they did say in the press release Yeah. These are early talks. Yeah. But they just kinda jumped the gun.
Speaker 3:This was the height of the press release economy.
Speaker 1:Yeah. What's funny is Oracle stock peaked like just like a week after they announced the OpenAI deal. Yeah. And so like the press release of like, hey, OpenAI is gonna do this humongous deal. Stock peaks.
Speaker 1:Yeah. Same happened with a couple other vendors who announced deals with OpenAI or Nvidia. Like sort of a lot of these like like they all peaked to that and then it's sort of been like Nvidia OpenAI trade has been going poorly. Yeah. And sort of like the TPU, Anthropic, Google, Amazon complex has been doing well.
Speaker 1:Yeah. It's quite interesting that
Speaker 3:So this it's good good energy back at home with the roommates.
Speaker 1:What what's going
Speaker 2:on in wanted
Speaker 3:to Yeah. Got one more thing. So yes, over the weekend, it was sort of drowned out by all the justice department stuff.
Speaker 1:But Wait. Have you just talked about Elon saying you can smoke a cigar in the fab? No. Yeah. Yeah.
Speaker 1:Yeah. I was gonna say that's this is is this is part of the
Speaker 2:whole I didn't realize that was related. Yeah. That makes sense.
Speaker 3:Sense. Indoor heaters. Yeah. We have indoor heater technology. No one's taking advantage of Yeah.
Speaker 2:What what does the fab look like if you have no humans inside? Like, that's probably his long term thing is like, yeah, there there will be an Optimus.
Speaker 1:But no one no. Like, the number of people who work in a fab is, like, irrelevant.
Speaker 2:Like Yeah. But but but is it is it irrelevant because there's all these things you have to do when a human's in there because they sweat and they breathe? And if you don't have to do that because it's a robot walking down, even if it's puppeteered or teleoperated, you you might be able to have different considerations. I don't
Speaker 1:know if that actually affects Well, it's like a nesting of like it's a nesting of like cleanliness. Right? Yeah. For example, you've got this wafer you've put like down let's say you put down copper. Yeah.
Speaker 1:And now you're moving it from one area to another. Well, it needs to be stored in a vacuum or or an inert gas. Yeah. And that's like the thing that's being transported in.
Speaker 1:But then around that, you want it to be super clean as Yeah. You you you if you don't, then the copper starts getting oxidized. It affects our yields. All this sort of stuff happens. Yeah.
Speaker 1:And so like you kind of want it to be a nested layer of like, well, this thing inside the EUV tool is super clean and then the thing feeding it is super clean and then the thing it sits in is super clean because because that's how you get to like zero particles. Yeah. Well, because like, you know, in the in the FOUP and the transportation devices it's parts per trillion. And the FOUP, it's called f o u p, front opening unified pod.
Speaker 2:Yeah. Pod. But it's called
Speaker 1:a FOUP.
Speaker 2:It's like
Speaker 1:the thing that moves and it carries the wafers.
Speaker 2:Sure. Sure. Sure.
Speaker 1:And then and then the fab is like parts per billion and you know, sort of like, you know, you've you've got this nesting relationship so everything is super clean. Yeah. You know, I'm I'm bullish on robots, like super bullish on robots, but not for tasks like TSMC's Arizona fabs. Or okay. Let's say TSMC Sure. Tainan, which I think produces, like, you know, indirectly hundreds of billions of dollars of global GDP.
Speaker 1:Yeah. Even directly, it's, like, still tens of billions of dollars, has, like, five, ten thousand people in it. Yeah. Like, it's, like, irrelevant Yeah. In terms of the number of people who work there.
Speaker 3:In terms of the overall economic
Speaker 2:value that's created.
Speaker 1:Right. Like it's like Yeah. It's like how many people fold laundry or how many people wash dishes or how many people like do construction work. Those are way bigger markets.
Speaker 2:For robotics. Yeah. Yeah. Makes sense. Speaking of China, what what are you making of the the the Dario essay or he I I guess his comments at Davos about, you know, selling chips to China is equivalent to, you know, nuclear weapons these days.
Speaker 2:The the Ben Thompson line was something like he's okay selling chips because he wants dependency on the NVIDIA ecosystem, CUDA, but he would ban lithography tools from going to China. And I've been wrestling with this idea of like, I don't know if China would accept this, but wouldn't there be a different world where you want them dependent on American LLM APIs and you don't even send them the chips? And you say, yeah, you're you can have as much AI as you want as long as you're paying, you know, OpenAI and Anthropic API. Yeah.
Speaker 1:I think it's I think it's like a curve of like
Speaker 2:What they will accept.
Speaker 1:It's it's it's, you know, one, you you you push someone to the corner, they're gonna start swinging. Right? Which means this. And and I'm I'm like very concerned that China does this. Right?
Speaker 1:Do they do you do you push them too far into the corner? Do they say, screw this. We're gonna start being a lot more aggressive. We're gonna we're gonna, you know, do more military actions
Speaker 2:or Or even just invest twice as much in
Speaker 1:In global supply chains. In global take over Africa more than they already have. Like, LatAm, like etcetera. There's there's or or just take over Taiwan. Yeah. Right?
Speaker 1:Because if I can't have the chips, what value is there in Taiwan existing? Sure.
Speaker 2:Sure. Sure.
Speaker 1:In its current state. Right? So there's like there's this like game theory aspect. Yeah. At the same time, you don't want China to be able to like, you know, if you believe AI is gonna do what I think many, at least in San Francisco, think it's gonna do, which is like completely revolutionize humanity Yeah.
Speaker 1:And cause GDP growth to accelerate. Yeah. Do you wanna have China also own that technology? And and all, you know, their ability to integrate that into their military and all these other things much faster. Yep.
Speaker 1:You know, so there there is like these competing like, you know, interests.
Speaker 2:Yeah.
Speaker 1:Where where is the like right line? And some people think it's like, hey, yeah, sell them AI model access.
Speaker 2:Well, I think Dario would say, don't even sell them AI model access.
Speaker 2:Don't even sell them tokens.
Speaker 1:Yeah. I think so. I think I think Anthropic does not sell AI access to China. Yeah. They they loop it through and you can see this in the traffic data.
Speaker 1:They go through Korea
Speaker 2:and Yeah.
Speaker 1:Yeah. Places. But like And so they get it. Yeah. And then the other side is like sort of like I think like the Ben Thompson view which is like Sure.
Speaker 1:And I think I'm more sympathetic to that. Yeah. Although I think I'm not exactly in line with that which is like and we've been saying like don't sell them equipment. Don't sell them equipment. Don't sell them equipment.
Speaker 1:My view my argument is like more economic in the sense of like if you sell them like tens of billions of dollars of equipment, they can make hundreds of billions of dollars of AI Yeah. Value or chips with that equipment. Yeah. Whereas if you sell them AI model access and it costs them this much to get the economic value, you know, they're they're not able to
Speaker 2:And you're capturing more of the value
Speaker 1:Exactly. And so that's sort of the question that is is at foot here. Right? Do you want them to capture all this value of the of the supply chain
Speaker 2:Yeah.
Speaker 1:In equipment or by buying the chips Yeah. Or using the models, right, and services? And we've seen, you know, across many, you know, stacks. China refuses to accept, you know, using American ecosystem and they'll wait many years before they develop their own. Yeah.
Speaker 1:Whether it was like, hey, they didn't use Windows. They figured out a bootlegging economy. Yeah. Or they didn't use Visa and eventually they came out with like Alipay and WeChat Pay or whatever it's called. Yeah.
Speaker 1:And and like these things are way better than Visa in fact. Right? Lower transaction cost and higher volume.
Speaker 2:Never use Red Star Linux? It's North Korea's Linux machine. Wait. Really? Yeah.
Speaker 2:If you if you don't if you put it on a network, it'll immediately call home. So you have to put it on a on a firewalled network or else it just like steals everything immediately.
Speaker 1:I'm I'm a fan of TempleOS, you know?
Speaker 2:Yeah. Alright. Is Doug O'Laughlin suffering from a case of Claude Code psychosis?
Speaker 1:Okay. Yes. Yes. So I think I think everyone's like Claude Code is for coders. It's like, no.
Speaker 1:No. Claude Code is for people who don't code now. Yes. Right? And that's the big realization this year.
Speaker 1:Yeah. You know, we've got we've got a couple folks now in the firm who have psychosis. But Douglas O'Laughlin who is like semi analysis, number two he's president. Yeah. You know, he's my boy.
Speaker 1:He's the one in fact, he's the one who encouraged me to make a Substack.
Speaker 2:Oh, you did?
Speaker 1:A long time ago a long time ago.
Speaker 2:What were you doing before?
Speaker 1:I had a WordPress blog.
Speaker 2:Oh, okay.
Speaker 1:And I was consulting on the side, but I was like, okay, let me do a Substack now. Because I saw him making money off it, I was like, this is bullshit.
Speaker 3:Like, why are they paying you for this? Yeah.
Speaker 1:Like, there were multiple times where he wrote something and was like, I could do way better.
Speaker 2:I'll show you.
Speaker 1:And and and and like, obviously, like it was good because we both taught each other a lot of
Speaker 3:things and
Speaker 1:we've been great friends. Yeah. And eventually he joined semi analysis. Yeah. But like, you know, he he his background is he was a hedge fund analyst.
Speaker 1:Yeah. And then he decided to do a substack slash hike the Continental Divide Trail for like six months, walking from Mexico to you know, and then and then, you know, came back to doing substacking, tried to do a fund.
Speaker 2:Six months of touching grass and then he's like, ready to lock in Claude Code.
Speaker 1:Yeah. And and and so now he's, you know, like, he's he's never been a software developer. Yeah. Right? But he's been on a generational run.
Speaker 1:Like Mhmm. He's he's he's not coding anything. Right? He's just telling Claude to do stuff. Yeah.
Speaker 1:Yeah. Like, it's to the point where it's like, our our our like head of data, head of IT is like, oh, can you send me that? And he's like, how do I do that? And then he like Yeah. He zips the whole thing and sends it to him.
Speaker 1:He sent him a localhost link once. It's like, it's localhost. Like Yeah. It's like
Speaker 3:bro, that's not how this works. But
Speaker 2:but yeah. No. I I I've talked to some folks at VibeCode and they'll be like they'll be like, why'd you choose Node. Js? And they're like, what's Node.
Speaker 1:Js? And I'm
Speaker 2:like, that's a very specific advice. Someone? Someone? Yeah. Tyler.
Speaker 1:No. But it's it's it's we went on a we went on a little tour of a lot of our clients. Like, you know, roughly like half our business is or 40% of our business is like hedge funds. So we went to New York a week, two weeks ago and we went to all of our clients. Mhmm.
Speaker 1:And like part of it's like them asking me, is OpenAI fucked? Yeah. And I'm answering like, no, I think they're fine. Yeah. And then like some like actual ideas.
Speaker 1:And then like a lot of it is Doug just telling them Claude Code is like they they're like, you don't have to hire any junior hedge fund analysts anymore. They're like, the junior hedge fund analysts are like and then he's explaining, you know, what can you do? It's like, well like, you can just do like financial models and pro forma financial models and like everything in Claude Code Yeah. Without ever opening Excel. Yeah.
Speaker 1:And you can generate charts and like you don't need Yeah. To know how to code. Yeah. You just need to know how like how this stuff generally works and you can just do it.
Speaker 3:Are how how many hedge funds are just trying to copy trade situational awareness
Speaker 1:now? I mean, I think everyone who's I think I think a lot of hedge funds obviously believe in AI. I think there's a lot of them who don't believe in it, right, to be clear. But a lot of them that have done the best
Speaker 3:why are they selling software everywhere?
Speaker 2:Oh, you mean selling software stocks? Yeah. Oh, yeah. Why the sell off then?
Speaker 1:Yeah. I mean I mean, of course, it's like an incremental thing. Right? But anyway, so so these hedge funds like and then the question is like, okay, you believe in it, how do you manifest that trade? And and so when you look across the like ecosystem, I would say almost all my clients think our two-years-out numbers are too high.
Speaker 1:Mhmm. But then there's there's like Leopold who's like, your numbers are too low. And so it's like it's like that in general. Right? And I think I think like if you think about how much do you believe in AI and what's your access to information on AI Yeah.
Speaker 1:You know, there's not many hedge funds who live in San Francisco and like fully breathe and live and understand it. And then depending on how much you believe in AI, how do you manifest that trade. Right?
Speaker 2:Mhmm.
Speaker 3:Are you surprised that more hedge funds wouldn't like even just smaller shops wouldn't say like, hey, this AI thing seems like it's gonna be big. Maybe we should set up in San Francisco?
Speaker 2:Or hire.
Speaker 1:There's there's a number of people. Right? So we're we're, you know, we're we're getting an office together. Leopold, myself, Dwarkesh, and then a client of mine, another hedge fund.
Speaker 2:Cool.
Speaker 1:And they have one analyst here and it's like Mhmm. And there's like a number of other hedge funds that are like hiring analysts here. But, you know, being plugged into the AI ecosystem does not mean you're just in San Francisco, because you can just walk around and talk to like doofus Yeah. Startups and VCs and like not actually see what's coming down the pipeline. And you have to combine it with all sorts of information.
Speaker 1:Right? You have to be in good tune with what's going on in Asia supply chains. You have to be in good tune with what's going on in New York. You have to be in tune with what's going on in the financial markets. And then like what's going on in credit markets and what's going on in, you know, data center, energy, blah blah blah, all these different industries.
Speaker 1:And so it's it's it's actually not like so simple to like be in tune with what's going on in AI. Mhmm. You can easily get like head faked. Mhmm. Right?
Speaker 1:Mhmm. For the longest time, you know, people were thinking Adobe is an AI company. And like Yeah. And it's like for a bit it was like, oh, Adobe was going down on AI and then they like launched a few AI features and the stock skyrocketed. And then Mhmm.
Speaker 1:Now it's going back down again because people realize, oh, wait. No. Actually, it's not an AI company. Yeah. I think it's it's the manifestation and thought of like what is actually gonna the world gonna look like if Anthropic three x's its revenue again this year, OpenAI two x's its revenue again this year or, you know, by the end of the year do how many people even believe by the end of the year AI startup revenue is over a $100,000,000,000?
Speaker 1:I think that's an insane statement for a lot of people but that's what it's gonna be.
Speaker 2:Yeah.
Speaker 1:Right? And who believes that number? Right? It's like Yeah. Very few people.
Speaker 1:And then you you you draw the continuation. It's like and who believes you know, and when Anthropic says in their funding like, we're hey, gonna have $300,000,000,000 of revenue by the end of the decade. And it's like, actually, I think that number's too low. Because because the economic value of what they're gonna create is gonna be insane. Yeah.
Speaker 1:And and you tell people, oh, you know, OpenAI is gonna have 18 gigawatts or 16 gigawatts by '28 and they're gonna be able to pay for it. And they're like, well, that's $300,000,000,000 of spend. How are they gonna pay for it? It's like, you you sweet summer child.
Speaker 1:Don't worry. Sam can raise. They're gonna blow up on revenue. They're fine. Right?
Speaker 1:Like, it is like a bit of a vibe thing. It's a bit of like, you know, irrational exuberance almost. Right? Like, Leopold's in his, you know, mid twenties. Like, I'm 29.
Speaker 1:Like, we we are irrational.
Speaker 2:Yeah.
Speaker 1:Right? Because we have not lived through know, getthese.com. These PMs who like don't
Speaker 3:You've want never been you've never been that humble.
Speaker 1:I I don't know. Like, we almost My family almost went bankrupt in 2008. Like, you know, because we lived in a motel and we almost foreclosed and we actually did foreclose on one motel. It's pretty bad but I was still a kid. Right?
Speaker 1:Yeah. I've never been humble. No.
Speaker 3:It's good to live through that and understand how things can go wrong. That's interesting. What are you expecting out of Zuck and Meta this year? We've been big Zuck defenders, especially I mean, there's this pressure of like, oh, Meta is spending so much and yet they haven't created, you know, any any AI product that's super compelling or that's really working. And our stance has has generally been Meta's making more money from AI than almost any company in the world outside of Nvidia.
Speaker 3:So it's like, of course, Zuck should be justified in saying, hey, this is real. It's big. Like, I'm gonna like back the truck up and and go all in.
Speaker 1:Yeah. I mean, it's it's clear if you look at the most recent earnings. Mhmm. I think their CPM went up 9% when the consumer's weak. Mhmm.
Speaker 1:Which means, like, if you were to, like, try and strip out, like, what is consumer spending increasing the CPM of ads versus what is the effectiveness of their algorithms? Their algorithm got better by double digits in one quarter. Yeah. Right? It's, like, actually insane how good the algo is getting, right, at serving you the slop in the ads.
Speaker 1:Right? So so so in that sense, like,
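The back-of-envelope decomposition being described here (observed CPM growth splits into a macro component and an algorithm component) can be sketched with one hypothetical number: the ~9% CPM figure comes from the conversation, but the weak-consumer drag is purely an assumption for illustration.

```python
# Hypothetical decomposition of observed CPM growth into a macro
# (consumer spending) factor and an algorithm-effectiveness factor.
cpm_growth = 0.09           # observed CPM growth (from the conversation)
macro_spend_growth = -0.02  # assumed weak-consumer drag (hypothetical)

# Multiplicative decomposition: (1 + observed) = (1 + macro) * (1 + algo)
algo_improvement = (1 + cpm_growth) / (1 + macro_spend_growth) - 1
print(f"implied algorithm-driven lift: {algo_improvement:.1%}")  # prints 11.2%
```

Under any assumption of a flat or negative macro factor, the implied algorithm-driven lift meets or exceeds the observed 9%, which is the "better by double digits" point.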
Speaker 2:big sound of the trough. I love it. Slop the
Speaker 3:It's it's slop sound. We're going all in on
Speaker 2:that farm. Slop the fence.
Speaker 1:Slop the fence. Daughters.
Speaker 2:I love it.
Speaker 1:So so, you know, if you if you think about it, right, like, Metas, where are they gonna, like, win? Right? You know, I think if you have the Galaxy Brain take, it's like, well, they've got the best wearables coming down the pipeline. They're gonna put AI on it. Mhmm.
Speaker 1:Apple won't be able to put good AI on their wearables, so they'll cede it all to like Google or
Speaker 3:Well, other people have had this narrative: oh, as AI gets better, the the value of real world experiences will increase. Mhmm. And I think that's a cool theory. But if you actually play it out, AI getting better means more content that's more, like, effectively crafted for you, more personalized, a hundred times, a thousand, a million times more content. That would imply to me that people will just use digital products more, which means more time on-site, more time in the app for Meta. So I don't know.
Speaker 1:I I mean, I I'm I'm with you entirely. But I think I think like the Galaxy Brain take is that you're just gonna have a wearable and that's gonna have an AI assistant. OpenAI is trying to make wearables, you know. Mhmm. You know, there's there's you know, everyone's trying to make wearables.
Speaker 1:Google is, etcetera, etcetera. I think Meta will actually execute and then they'll have a good AI. And then you you stack on like a few things. Right? How do they get users?
Speaker 1:Well, we've seen at least if you look at the user metric charts, Google's use you know, OpenAI's users were growing, growing, growing. They were gonna hit a billion by the end of the year. They hit 800,000,000. Why did they not keep growing in the last quarter? It's because Nano Banana came out and they took all the incremental users. Right?
Speaker 1:And and likewise, if you go look at like you know, Gemini three didn't actually make Google grow that much. It was Nano Banana and then Pro or two or whatever it's called. Right? Those are the ones that made them really grow. Meta's licensed all of Midjourney's code, data, models.
Speaker 1:Right? One. Two, they're like actually just like focusing hardcore on
Speaker 3:was that a billion dollar plus deal?
Speaker 1:The number is undisclosed. Midjourney still exists as a company.
Speaker 2:Yeah. Mhmm.
Speaker 3:No. It felt it looked to me like effectively a massive exit, but the best case scenario where they can just keep kind of being artists.
Speaker 1:I I think I think if you had me guess, I would bet it's well over 1,000,000,000.
Speaker 2:Right? Every deal that Meta did was over 1,000,000,000. Basically, like, whether it's an employment contract, a licensing deal, an acquisition, everything had a B after it.
Speaker 1:Well, so so the interesting thing is
Speaker 3:So you're a zero again. Miss the zero again.
Speaker 2:Yeah. Every every discussion is how many billions are we spending on hiring this person or buying this company?
Speaker 1:Well, Meta interestingly has gone down market for compute because there's not enough compute in the big size deals. They've actually gone and like
Speaker 2:What does that mean?
Speaker 1:Bought like small clusters. Because it's like, well, I want more compute.
Speaker 2:From like long tail neo clouds?
Speaker 1:Yeah. Just like Yeah. From a longer tail.
Speaker 2:Okay.
Speaker 1:Because that's the only place they can get compute they need. Interesting. Because you know, they've already like went out and signed big deals with Google and Core Weave and so on and so forth.
Speaker 2:Is ClusterMAX three gonna be a smaller chart because of consolidation in the industry?
Speaker 1:No. There's more. It's gonna be bigger bigger bigger. But but you know, so so Meta
Speaker 3:That's the thunder. That's ominous. It's ominous.
Speaker 1:So so I think Meta will, you know, capture consumers through generative AI. Mhmm. If there's more content, people are just gonna go to the content marketplace. Right? Mhmm.
Speaker 1:The creator of the content captures less value as there are more content creators and more diversification of content. Right? So I think Meta just wins by being a platform. Right? Google does too and ByteDance does too.
Speaker 1:Right? But like those three win by having a platform. And then the real question is, can they get in the assistant productivity game? Right?
Speaker 3:Mhmm.
Speaker 1:And I think this is effectively
Speaker 3:search. Like if you're an assistant, it means that you can like there's some commerce happening.
Speaker 1:Well, they they went and poached a bunch of people from Google. Yeah. So this wasn't in the media much, but like they actually poached Google search people with similar sized deals as like these crazy Yeah. Researchers. Yeah.
Speaker 3:And I always I I you know, demoing demoing any of the the the wearables, you can imagine like Meta wants you to walk around in the world and see like, oh, what are those headphones? And like while we're talking, I just hit my little thing and buy it. Right? Yeah. And it's like you didn't even necessarily know that it happened.
Speaker 3:But like of course Meta's gonna wanna
Speaker 2:knows those are the Sony MDR XU two seven twos four six two
Speaker 1:Dude, I've been I've been screaming about them like doing some proper marketing.
Speaker 2:Branding so
Speaker 1:It's it's literally like their over-ear is like WH-1000XM5. It's crazy. And then their in-ear is like WF-1000XM5. It's like, dude, just call them like Bravia Buds and Bravia like Yeah. Headphones or some shit.
Speaker 3:Well, China just What?
Speaker 2:Bought Sony brand? Yeah. Yeah. Bravia brand is actually a Chinese company Sony sold their TV
Speaker 1:and PlayStation buds. Yeah. Yeah. Yeah. PlayStation.
Speaker 1:Walkman. Yeah. Walkman. Something.
Speaker 2:Something. For sure. Anyway, anything else, Jordy?
Speaker 3:For this weekend.
Speaker 1:Yeah. Yeah. Super excited. You guys
Speaker 3:What are some plays? We don't watch a lot of sports.
Speaker 1:What are some plays?
Speaker 2:You're a football guy. Right?
Speaker 1:Yeah. Yeah.
Speaker 3:You grew up in yeah.
Speaker 1:Rural Georgia, so I like football. High school football was the thing. College football was the thing. I think NFL is a little less soulful. Sure.
Speaker 1:But you know now college football has the NIL and so it's also soulless to some extent. It's fine. We enjoy it. Primal desire of seeing heads clash.
Speaker 2:Yes.
Speaker 1:And sometimes that manifests in like Twitter drama and sometimes that manifests in real football. Yeah. All I can say is fuck the Patriots. Okay.
Speaker 3:Woah. Okay. Okay. Since we're gonna be at the game, we're not gonna really get the great experience of seeing the ads. I'm gonna be like glued to my phone.
Speaker 3:I I wanna see all the AI, the different
Speaker 2:Well, don't worry. Got some more ads for you. Alright. Thank you so much for coming.
Speaker 1:Thank you so much. Alright. Great segue.