Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM - 2 PM PST, Monday - Friday. Available on X, Apple, Spotify, and YouTube.
You're watching TBPN. Rainmaker stands accused of having a role in the Texas floods. This is a very, very sad story. It's on the cover of the Wall Street Journal, not the Rainmaker part. That has been contained to X, but I'll give you a little update on what's going on in Texas.
Speaker 1:So: Texas rescue grows urgent as toll mounts. At least seventy were killed in weekend floods as more bad weather complicates the search. The search for those swept away by punishing flash floods in Central Texas over the holiday took on new urgency Sunday as the death toll climbed to seventy and nearly a dozen girls from a private summer camp remained missing. Rescuers combing the swollen banks of the Guadalupe River were holding out hope that survivors might still be found. The potential for more bad weather Sunday also loomed over ground and air operations.
Speaker 1:The National Weather Service warned of more rainfall and slow moving thunderstorms that could create flash floods in the already saturated areas of the Texas Hill Country. So, this blew up
Speaker 2:on X. And,
Speaker 1:and people were asking, Augustus, did Rainmaker... was Rainmaker operating in the area around that time? Cloud seeding startup Rainmaker is under fire after deadly July 4 floods in Texas. CEO Augustus Doricko, who's been on the show multiple times, will join us today at noon to break it down. He's already explained his side of the story on X several times, but we will ask him a lot more questions. He says the natural disaster in the Texas Hill Country is a tragedy.
Speaker 1:My prayers are with Texas. Rainmaker did not operate in the affected areas on the third or fourth or contribute to the floods that occurred over the region. Rainmaker will always be fully transparent. And he gives a timeline of the events. He says overnight from the third to the fourth, moisture surged into Hill Country from the Pacific as remnants of tropical storm Barry moved across the region. At 1AM on July 4, the National Weather Service, which we work closely with to maintain awareness of severe weather systems, issued a flash flood warning for San Angelo, Texas.
Speaker 1:Note: summer convective cloud seeding operations in Texas do not occur during overnight hours. At 4AM on July 4, the NWS issued a life threatening emergency warning and flooding ensued. He says, did Rainmaker conduct any operations that could have impacted the floods? He says no. The last seeding mission prior to the July 4 event was during the early afternoon of July 2, when a brief cloud seeding mission was flown over the eastern portions of South Central Texas and two clouds were seeded.
Speaker 1:These clouds persisted for about two hours after seeding before dissipating between 3PM and 4PM CDT. Natural clouds typically have lifespans of thirty minutes to a few hours at most, with even the most persistent storm systems rarely maintaining the same cloud structure for more than twelve to eighteen hours. The clouds that were seeded on July 2 dissipated
Speaker 2:One question I'm sure he'll have answers to is why cloud seeding operations were happening immediately before a massive storm came through. Yeah. I think that's the question that a lot of people have. But we will get into that when he joins the show.
Speaker 1:Yeah. I mean, there's a big question about how effective is cloud seeding. Could you start a flash flood if you tried? Does this work? Someone was paying for this because it's not a nonprofit.
Speaker 1:Like, obviously I believe it was
Speaker 2:state level.
Speaker 1:State level. So the state might buy cloud seeding operations in one way. There could be, you know, a mistake. He says that he's not involved at all. So we will dig into that with him.
Speaker 1:Our next guest is here, Augustus Doricko, the CEO and founder of Rainmaker. Welcome to the stream, Augustus. How are you doing?
Speaker 3:John, Jordy, thanks for having me. I am doing well. I am obviously talking to a lot of people about the flooding that's going on in Texas and appreciate the opportunity to clarify that Rainmaker and cloud seeding had nothing to do with the flooding that unfolded. Mhmm. And even in spite of that, I think that it's a tragedy that it did happen, and I certainly don't want anybody to use this controversy to blame cloud seeding for the sake of popular political support.
Speaker 3:And you may have seen that Marjorie Taylor Greene is proposing a bill to ban all forms of weather modification, based on the one that we saw in the Florida state legislature earlier this year. I think it would be both disrespectful to the families involved and baseless, without any technical or scientific credibility, if that legislation were to go through. So I'm happy to talk about the course of events, what cloud seeding is, and what it's not, here with you today.
Speaker 1:Yeah. Let's kick it off with the high level on what actually happened in Texas, where things stand now, the status of the rescue operations, and the broader timeline.
Speaker 3:Yeah. Absolutely. So this phenomenon, this flooding, was global in scope. It was referred to as a low probability, high impact event. I encourage people to go to Matthew Cappucci on X.
Speaker 3:He gave a great outline. He's a meteorologist that has a lot of expertise on severe weather forecasting. But tropical storm Barry, the remnants of which blew into Texas, was going to cause inordinate flooding regardless. And that area of Texas is also known as Flash Flood Alley because these events do happen. Now, 4,000,000,000,000 gallons of precipitation occurring over the course of just a couple days is pretty out of distribution, but we are seeing an increase in these sorts of severe climatic events over time, especially down around The Gulf.
Speaker 3:So just to go over the timeline, after having clarified that it was the remnants of tropical storm Barry and the convergence of large mesoscale phenomena that induced that flooding: it was at about 1AM on the fourth that the National Weather Service issued a flash flood warning. And then it was at about 4AM on the fourth where they said that there was a life threatening emergency underway. It was over two days prior that Rainmaker had suspended all of its cloud seeding operations in Texas because, one, our forecasters and our meteorologists saw that there was going to be this severe weather event, and we needn't operate to produce more water when the event was already coming. But two, we suspended operations in accordance with the Texas Department of Licensing and Regulation's suspension criteria, where if there is a severe weather warning from the National Weather Service, or there is too much saturation of the soil, we have to ground operations. And so we do so both voluntarily and in accordance with existing statutes.
Speaker 1:Okay. So, the cloud seeding operation that happened prior to the storm, who was the client? I mean, I assume someone was paying you. Sometimes it's the government. Sometimes it's an individual or farmer or business.
Speaker 1:Walk me through, where they were, who they are, what their goal is by procuring your services.
Speaker 3:Sure. So it's obvious that at this moment in time, that region of Texas does not need more water. Sure. However, throughout the Western United States, farms, conservationists, and governments concerned with their aquifer supply of water, and also reservoirs for both industrial and residential drinking water, contract with Rainmaker to produce more water via cloud seeding. And in the case of Texas, the South Texas Weather Modification Association, the West Texas Weather Modification Association, and multiple other entities exist as conglomerations of both counties and individual farms that pay for cloud seeding services to, one, water their crops, two, fill up the reservoirs that they irrigate their crops with, and three, recharge aquifers like the Ogallala, which has been severely drawn down, putting all of these farmers at risk of not being able to grow, not being able to do business, because of a historic drought.
Speaker 1:Okay. So, about the proposed ban. What I'm getting at is, I'm wondering: if the government is paying for cloud seeding operations, the easier lever might just be to decrease the funding to the government. But it seems like Marjorie Taylor Greene is pushing for some other legislation that wouldn't just be, hey, buy less of this service because we don't need it. Instead, this service should never be bought at all. So why is there the distinction there?
Speaker 1:Like, is is is most of the money that's going into one of these associations, private farmer capital, or is it a split? Like, how does that actually break down?
Speaker 3:So right now, it's largely public municipal money that is going into these weather modification programs to increase water supply when there is drought or in preparation for drought. Mhmm. The bill that has been proposed by Marjorie Taylor Greene would wholesale ban all forms of weather modification, be it cloud seeding, solar radiation management, or what they suppose to be chemtrails. I mean, very transparently, I think that a lot of the concern around weather modification is actually conflating baseless notions of chemtrails with a very practical American technology that can and will and does benefit our farmers, our ecosystems, and our industrial and residential water needs. If this legislation were to go through, not only would it deprive all of those interests and all of those Americans of water from cloud seeding, but it would also be against America's interest at a geopolitical level.
Speaker 3:Because China recently... I think the last time I was on TBPN, I talked about how they had a $300,000,000 annual budget for their weather modification program. That, as of 2025, has been upped to $1,400,000,000. That is extremely consequential. And I think that if we were to ban Americans from controlling weather modification technology, that would put us at a meaningful disadvantage. Now, all of this to say, people deserve transparency.
Speaker 3:They deserve clear regulatory frameworks so that they know whether modification operations are safe and being conducted in a responsible manner. And with government oversight and accountability, if ever there are, negative consequences to cloud seeding. Again, there haven't been any in the case of Texas. But I think that the reasonable next steps are to more stringently regulate who is allowed to cloud seed, define what the concepts of operation are that are permissible, define the suspension criteria at a federal level rather than leaving it purely to the states. Mhmm.
Speaker 3:So that anybody that wants to know about weather modification can look at the data and scrutinize it and ensure that it's being conducted safely. And also just to build trust because modification act from 1972 that currently outlines, the weather modification reporting act of 1972 that outlines how we have to report to the federal government is, you know, 50 years old. We need more scrutiny on these programs for the sake of public trust and accountability, and that seems like a reasonable next step. That was also recommended by the government accountability office in their report on cloud seeding and weather modification earlier this year.
Speaker 1:Mhmm.
Speaker 2:What was the scale of the general water sorry sorry, weather modification activities on July 2? It was you guys, bunch was there a bunch of other players operating? Is there generally a lot of players or is it pretty is it is it a fairly small number of of, kind of service providers, that are that are participating in these programs?
Speaker 3:Yeah. Jordy, you may have seen a prolific account on X posting about this. A little while ago, he said that I was the CEO of the largest and most powerful weather modification company in the world.
Speaker 2:I saw somebody compare somebody was comparing weather modification tech to being saying it was more dangerous than nuclear arms.
Speaker 1:Nuclear bombs. That was kinda crazy. Yeah. And then I also saw some people just showing, like, general flight logs of, like, commercial airplanes. Like, obviously, there's a lot
Speaker 2:of chaos guess out just people have every right to be angry and demand answers. It's such a tragic Yeah. Incident. But but, yeah, I'm I'm curious to get into the the scale of, you know, kind of maybe late June, early July, what was going on broadly.
Speaker 3:Yeah. Absolutely. So there's one other cloud seeding operator in Texas called Seeding Operations and Atmospheric Research, SOAR. They're responsible for operations over the Rolling Plains Weather Modification Association, which is significantly farther northwest of Kerr County. On July 2, we conducted one nineteen-minute cloud seeding flight where we released about 70 grams of silver iodide and 500 grams of table salt.
Speaker 3:That was released at about 1,600 feet above ground level into two clouds that dissipated over the course of two hours after seeding them. The amount of time that those aerosols could have been suspended in the atmosphere is less than the time between when we were seeding and the onset of rains from, the remnants of tropical storm Gary. And the amount of material that we dispersed could not come anywhere close to inducing the precipitation, the 4,000,000,000,000 gallons of precipitation that did come from that event.
Speaker 2:So yeah. And and I'm I'm assuming you guys, like, have records. You keep records of, like, the radar showing these different cloud formations. So you you're you're it's not just, hey. We looked and we think it dissipated, but it's like you can actually you have, like, you know, basically a a map that's live updating.
Speaker 2:Is is that the right way to think about it?
Speaker 3:Not only do we keep records for our own research purposes and operational purposes, but we're required to keep records by the Texas Department of Licensing and Regulation. And those are accessible online as are the reports on our seating activities. And if anybody is interested in those, then you can ask for them from the TDLR.
Speaker 2:I'm I'm curious, when when the the flooding happened in Dubai, I wanna say it was a year or two ago, Dubai is known for their cloud seeding operations. It's a very dry place. And makes sense why they would want to, increase precipitation. A lot of people, maybe the same types of accounts that have been that have been blaming you were quick to blame it on cloud seeding. Throughout history, has there ever been any major kind of flooding event that that people were able to say, yes, a 100%, this was caused by weather modification activities.
Speaker 2:Mhmm. Or is the tech not even powerful enough yet to to do something like that?
Speaker 3:So I I think that there's probably three points to touch on. The first of which is that it wasn't until 2017 that attribution had been, physical attribution of cloud seeding's effects had been seen and proven in an academic context. And so with new advance in radar technology, namely dual polarization radar, we're able to much more clearly monitor what the effect from cloud seeding is. In previous operations, it was extraordinarily difficult to see what your effect was because we could not measure the cloud dynamics, and the cloud microphysics that were changing as you were seeding. So that's the first point.
Speaker 3:The second point is that and, again, I'm trying to be and will continue to try to be maximally transparent about our operations and historic weather modification. There was something called the operation Popeye during the Vietnam War, where the deliberate intention of cloud seeding was to cause precipitation that would, like, cause flooding and then impede supply chains on the Ho Chi Minh Trail. Mhmm. Now the extent to which that was effective because we didn't have good satellite imagery or dual port radar is outstanding. Now that said, lastly, third point, we have suspension criteria that are given to us not just by the TDLR in Texas, but every state wherein we operate.
Speaker 3:Because if there already is too much saturation of the soil or if there is, an oncoming severe weather event that the National Weather Service has, notified us not to seed, then we ought not do that to increase the severity of precipitation. So there there are suspension criteria because there are limits on what we ought to do with this technology, so as not to cause flooding and only reap the rewards from it, right, for our farms, for ecosystems, and for our national security interest as well. Right? Like, if we don't have access to weather modification technology, we if don't regulate this at a at a federal level and ensure that there's accountability and attribution for these activities, then other people, other nation states could be conducting weather mod in the vicinity of or on American soil without any accountability. And so that's why I am advocating for way more regulatory scrutiny from the federal government for cloud seeding and weather mod ops.
Speaker 1:Walk through some of the history of the the Chinese, weather modification, strategies. We we heard about the the the flooding in Dubai that was kind of unclear. Have there been any notable or confirmed negative outcomes from China spending? I mean, you said $300,000,000 a year, something like that. That that seems like a lot of cloud seeding.
Speaker 1:It seems like if there was a surface area where there could be mistakes made, they would have kind of explored that. I remember the the pre Olympics. They were doing cloud seeding or just kind of bringing down, like, the the dirt in the atmosphere. And, you know, people kind of learn from that. Okay.
Speaker 1:You get acid rain when you do that, in in in particular. But, have there been any case studies from China that, we should be learning from in America?
Speaker 3:Case studies from China with adverse weather coming from their cloud seeding operations?
Speaker 1:Yeah. Like that. Like like, something where, like, okay. They they've done a lot of this. Yeah.
Speaker 1:They're doing this at scale. To the limit. They've put they've done this at scale. If there's going to be rough edges or mishaps, I would've I I suspect that we would've seen evidence of that over there. They would've had an accidental flood or something like that happen over there if they're doing it at scale.
Speaker 3:You would expect to have seen it from China. Mhmm. However, you would also probably expect and understand that they're a relatively inscrutable country
Speaker 1:Yeah.
Speaker 3:That does not report on their activities very openly and objectively. Yeah. Now that said, one thing that we do know about the weather mod program that they do have going is that they're planning to build a 100 and a 100,000 ground generators on the Tibetan Plateau. Mhmm.
Speaker 2:So
Speaker 3:Rainmaker is primarily using drones for operations. We also have inherited some ground generators from previous operations.
Speaker 4:These are
Speaker 3:essentially, aerosolizing units on the tops of mountains. They can disperse material into clouds when the clouds intersect those mountaintops themselves.
Speaker 1:Is that like a cannon that fires the material into the cloud?
Speaker 3:Or No. No. You you you might recall my my initial inclination to use something like that because it is used in China. Yeah. But, no, it's it's essentially like a smokestack of sorts.
Speaker 3:Okay.
Speaker 1:It's a
Speaker 3:very small smokestack that releases those aerosols there.
Speaker 5:Sure.
Speaker 3:But in building a 100,000 of these ground generators and also using the Wing Wong two and a bunch of their other military drones for aerial cloud seeding, they're turning Tibet into a a reservoir a a snowpack reservoir of unprecedented scale that will feed more water into the agricultural basins in Southern And Eastern China. And I think that, you know, although, again, this is something that needs to be transparently reported on and regulated, depriving American farmers in the West, especially as a congressperson from Georgia, right, where there is not a severe reliance on cloud seeding to produce water would be against America's interest.
Speaker 1:Mhmm. Jordy? I guess yeah.
Speaker 2:I'm I'm trying to I mean, the the the my my question, is it feels like it feels like candidly it will be hard to come, it'll be hard find any type of allies in Texas, on the ground in Texas, maybe aside from from the farmers. But but I'm curious, you know, the the various different groups, you know, what what the reaction from them has been in terms of, you know, if they're, you know, the reality is is water scarcity affects every person in Texas, but only a few people truly feel it. Right? It's a much smaller group because everybody goes to their sink, they turn on the water, they turn on a hose outside, they go to a grocery store, there's water, there's produce. It's not something that people necessarily feel.
Speaker 2:And so I'm curious where, you know, you obviously are gonna defend weather modification, because you you believe in in the many different ways it can have a positive impact. But I'm curious, who you think, the other players that will will be on your side as the industry, I mean, the industry was not in a good spot prior to this. It's in a much worse spot now. And I know you've been flying all over the country making sure that it doesn't get banned. So I'm curious what what you think the kind of coalition that will kind of form around you.
Speaker 3:Yeah. Yeah. Well, so I I actually think I'd, just from my own experience over the course of the last few days, disagree with the two points that you made. Right? Like, it it has neither been hard to find allies for cloud seeding weather modification in Texas, nor do I think the technology and the industry is positioned worse now than it was prior to this weekend?
Speaker 3:And regarding the first point, there are some people that I think, are probably not in good faith engaging with this because they have some preconceived notions about chemtrails or otherwise, and don't themselves want to scrutinize the data to back up how our operations are different and beneficial, whereas chemtrails, as they believe them to be, are, you know, malevolent. Yeah. The vast majority of people that I've interacted with online, on the phone, and in person are rightfully curious, skeptical, concerned, some, you know, more than others, obviously. But in scrutinizing the data and having these conversations and learning about what cloud seeding is, pretty unilaterally, people are supportive of it provided that there is a regulatory framework more stringent than the one we have now that ensures that it's safe. This is true both of just individuals, that are not themselves farmers, but obviously farmers, water managers, government officials too.
Speaker 3:I welcome any questions that people do have, both online and via email, about what our activities are, what our policy recommendations are. And and I'm I'm grateful that there are a lot of people that understand, one, our operations did not contribute to the flooding. But two, that even if there was a flood now, it doesn't mean that there is always enough water. And having access to a technology to produce more water for farms and otherwise, would be beneficial. Like, people want a more green, lush country.
Speaker 2:Yeah. I'm curious. I'm sure you've spent plenty of time thinking about this, but is would there be a way to apply the existing technology you have almost in a defensive way In in, you know, theoretically,
Speaker 1:Exceed it a hurricane while it's still offshore.
Speaker 2:Something like that, or the or or, you know, one of the issues here, there was just so much water in the atmosphere that rolled over a heavily, you know, populated area. Yeah. And then it's got, it's gravity, right? It's gotta come down. Yeah.
Speaker 2:You know, is there an application of the technology that could over time strategically prevent, you know, or or act defensively against the conditions that create flash floods?
Speaker 3:It's it's a very worthwhile question for you to ask and for us to ask ourselves collectively. Right now, again, Rainbaker only does precipitation enhancement operations for all those constituencies that I listed before. However, in the past, the United States government funded, project Stormfury, which was a series of attempts to reduce the severity of hurricanes over the Atlantic before they broke against the Eastern Seaboard. Again, we didn't have the appropriate understanding of atmospheric science or the radar or the satellite data necessary to appropriately do that. However, severe weather is something that is like a geopolitical risk, a national security risk.
Speaker 3:It causes damage, and it is fundamentally a physics problem. Right? A physics and chemistry problem. Is there technology now that could mitigate severe weather like this? No.
Speaker 3:And Rainmaker doesn't have it. Is it possible to someday, provided we invest in NOAA, in the national weather service, in the appropriate research into cloud seeding such that we could reduce the severity of severe weather? Absolutely. And I am entirely in favor of that provided it is done in a responsible manner. And if we were to ban it wholesale, then not only would we lose access to precipitation enhancement, but we'd lose out on any potential of, at the very least, better forecasting for these systems and warning people early, but also the even greater and more consequential beneficial potential of reducing severe weather in the future.
Speaker 3:And so I think that the United States government and Rainmaker should and and are absolutely interested in mitigating severe weather in a manner similar to project storm fury.
Speaker 1:That makes sense. I I I think the PR what what you were getting at, Jordy, like, the PR difficulty here is that, like, when there's not enough water, crop yields are lower, prices go up, but it's very distributed. Everyone feels it a little bit. Whereas when there's too much water and there's a flash flood and individuals die, you have a very it's a very emotional, very, it it's very concentrated. The pain's very concentrated.
Speaker 1:And so that's why this this story Yeah.
Speaker 2:I mean, normally when normally when there's a natural disaster
Speaker 1:Yeah.
Speaker 2:There's you can you can critique the government for their response Sure. To it, but there's not somebody sitting there Scapegoat. A scapegoat.
Speaker 1:Yeah. Right? So the question is
Speaker 2:like It's easy. Yeah. It's it's, you know, whether it's online accounts that are just engagement farming
Speaker 1:Yep.
Speaker 2:Or it's a politician. Yeah. You know, escape, you know, the concern is that and your concern is that the industry becomes a scapegoat and America loses a capability that our adversaries clearly care a lot about.
Speaker 1:Yeah. My my question is, like, we're we're seeing this bifurcation. It seems like Ted Cruz came out in support of the idea that Cloud City had nothing to do with the Texas floods. Marjorie Taylor Greene is taking kind of the other side of that. My question is, like, these are politicians at the end of the day.
Speaker 1:They're not independent scientists. Who can we go to? Who can the population go to for, like, a truly independent review of this situation? Like, is there is there some sort of independent governing body, or are there are there respected scientists that kind of don't have a financial or, you know, political incentive one way or another? How do you think the the populace should be obviously, you're telling your side of the story.
Speaker 1:You're going direct. You're explaining things. You're laying out the data. But what what do you expect people to look for in an independent analyst?
Speaker 3:Yeah. Yeah. So for one, I think that NOAA, the National Weather Service, the National Center for Atmospheric Research, all of those are great third party entities
Speaker 1:Yeah.
Speaker 3:That can review the information, corroborate the information that we've provided, provided, of course, that they continue to exist and remain funded.
Speaker 5:Sure.
Speaker 3:I think that this probably demonstrates why it is important that we should retain some capability nationally to forecast and research the atmosphere.
Speaker 1:Mhmm.
Speaker 3:Because there's should be somebody that's capable of reviewing this to ensure that it's safe. Mhmm. I'll also say, you know, regarding the scapegoat dynamics that that exist right now, I've thought about this pretty prayerfully and intently over the last few days. And when there is a calamity of some sort like, I've been trying to think about why people are, say, coming after Rainmaker or, angry at Rainmaker. And I I think that when there is a calamity of this type, if there was someone responsible, if there was someone or something that could be held to account, then in holding them to account, you could supposedly prevent this kind of thing from happening in the future.
Speaker 3:The trouble with the true natural disaster as this was is that there is nobody to be held accountable. And that makes the world a lot more tragic because it means that things like this will persist. They they will persist indefinitely into the future, unless and until some sort of technology could reduce the severity of severe weather. Yeah. And that
Speaker 1:We went through this with the California fires. You know? It was like everyone was searching for, like, a single person to pin it on, and, like, it came down to, like, you know, some people built their houses the wrong way, and there's some building codes that need to change, and there's some water rights and water flow, and there's some different
Speaker 2:General government
Speaker 1:Like, we need more goats in certain areas. There's, like, a million different things that could have prevented this if they all were all working together as a well oiled machine and had the forethought. But it's a very, very frustrating and difficult situation. So our our thoughts and prayers are with everyone who's been affected. But thank you so much for stopping by.
Speaker 1:This is fantastic. Thanks for, breaking it all down for us.
Speaker 3:Thanks, guys.
Speaker 5:Appreciate it.
Speaker 1:Cheers. We have some maybe terrible news. There might be top signals in the market. There might be top signals all over the place.
Speaker 2:Been, building out an internal top signal tracker, crowdsourcing crazy. Some of them. And it's a long list. Yeah. We'll get through it.
Speaker 1:At the top of the list, podcasters have been wearing white suits recently to celebrate the market riffing. That feels like
Speaker 2:White suits are actually a top signal.
Speaker 1:It's a complete top signal. But of course, there is some good there are some the economy strong. We're gonna go through Joe Weisenthal's breakdown. Things are not doom and gloom but there's a lot of crazy stuff happening and it's fun to dig through. I mean, the first major top Bitcoin all time high.
Speaker 2:Yep.
Speaker 1:You know, that's always, you know, it is definitionally a top signal because
Speaker 2:So let's let's go through the list here because Yeah. Yeah. It's quite substantial.
Speaker 1:So So this one kind of anonymously contributed through group chats.
Speaker 2:Of course.
Speaker 1:The stuff we've observed. We're gonna catalog it and see if we can turn the tide of the top signals
Speaker 2:Okay. Ideally. So starting off, yesterday Yeah. Trump made a post on Truth Social basically celebrating the state of the economy and the markets, really calling out how many assets are performing well.
Speaker 1:I have it here. It's the second one.
Speaker 2:Do you want to read through
Speaker 1:it? Yeah. To read through it
Speaker 2:a little bit? Because we'll basically get through the post and
Speaker 1:Okay.
Speaker 2:And we'll get to the moment.
Speaker 1:So Donald Trump on Truth Social truths: tech stocks, industrial stocks, and Nasdaq hit all-time record highs. Crypto through the roof. Nvidia is up 47% since Trump tariffs. USA is taking in hundreds of billions of dollars in tariffs. Country is now back.
Speaker 1:A great credit. Fed should rapidly lower rates to reflect this strength. USA should be at the top of the list.
Speaker 2:So low rates are actually just a reward for when the markets are ripping. Exactly. It's a little treat that we give ourselves Exactly. When things are great.
Speaker 1:Yep. And the White House is posting this, screenshotted, on X. The country is now back, says President Donald Trump.
Speaker 2:Every account controlled by the White House has been on a tear. Yep. Some of the posts I think are a little bit low class and vulgar. Yes. Others are quite funny.
Speaker 2:But the memers are definitely in control.
Speaker 1:Someone was saying, like, every politically aligned poster he knows who is pro-Trump now works for the White House. But, like, you just haven't seen it because they were, like, anons and they just Yeah. Kind of dropped off posting and now
Speaker 2:They'd be getting death threats. So they have to. In many ways, it's actually more controversial work than Doge.
Speaker 1:Maybe. Yeah. Maybe it's more under-discussed because Doge had this big question in the media about, like, is Elon doing something that he shouldn't be? Is he a government employee? What's the relationship between the two?
Speaker 1:And so, you know, there's a lot of investigative journalism that went into figuring out what's going on with Doge, who's involved.
Speaker 2:Yeah. Nobody's investigating the memes.
Speaker 1:Social media managers, which is
Speaker 2:They need to be investigating the memes of production. But, anyway, going through my list here. Eric Trump, a while back, said this is a good time to buy Ethereum. This was a few months ago. And then it just went down for months.
Speaker 1:Oh, really?
Speaker 2:And now it's back up, and he's saying, you're welcome.
Speaker 1:I do remember Trump called the bottom. Right? He said, like, now is a good time to buy, generally, and the market has ripped since then.
Speaker 2:He created it and he called it perfectly. It's wild. It's finesse. Going down the list: Coinbase, who we love.
Speaker 2:But they're a Fortune 500 company, and they did update their profile picture to an NFT. Historically, that has been a top signal. I do think their profile picture
Speaker 1:Do you have any experience with NFT profile pictures?
Speaker 2:You know, I've delved over the years. Experimented. Yes. And if you look at the moment that I did use an NFT profile picture in 2021
Speaker 1:Yep.
Speaker 2:It was maybe only off by one or two months in terms of the top.
Speaker 1:I never used an NFT profile picture, but I bought an NFT right near the top.
Speaker 2:A Chainrunner? Chainrunner. Nice.
Speaker 1:Which I still own. Nice. Which actually, I didn't, like, overinvest or get over my skis. It's a very small portion of
Speaker 2:what That's an asset that will be passed down through your family like
Speaker 1:I like to
Speaker 2:find watch.
Speaker 1:I like to think of it as, like, a piece of 2022 lore.
Speaker 2:Know? Totally.
Speaker 1:It's just like a piece of history. But yeah. Fun project. And I feel like, to some degree, it's a skin-in-the-game question. Like, you're not really participating.
Speaker 1:You're not experiencing the market unless you're participating to some degree. Yep. But you don't wanna get over your skis.
Speaker 2:And more
Speaker 1:did the NFT profile picture at a really bad time and had to roll that back. Like, there's been a number of, like, NFT profile pictures that have been, like
Speaker 2:It is a historical top signal. It could be now just a signal for the start of a, you know, generational run, a new cycle. But, historically, it's a top signal.
Speaker 4:So we
Speaker 2:gotta call it out.
Speaker 1:If NFTs are gonna make a comeback because, like, crypto has been coming back. And Bitcoin went from, what, 30
Speaker 2:to 40 back when a list celebrities are using them on their Facebook accounts.
Speaker 1:That was a wild That's
Speaker 2:the real test. Yeah. X account, could see it happening early. Yeah. Facebook account.
Speaker 2:Facebook. Original Facebook account.
Speaker 1:There's gotta be a new project then, because I don't think any of the old projects are going to, you know, come back. That would be crazy. Although some of them are kind of Lindy. Like, the original CryptoPunks, those have kind of held their value.
Speaker 1:But the Bored Apes have sold off like crazy, but are still expensive. Right?
Speaker 2:It's unfortunate Bored Apes are not in gag gift territory yet. Yes. Because you think, oh, it'd be funny to get, like, your buddy a Bored Ape for their birthday.
Speaker 1:It's, like, 30k or something.
Speaker 2:But it's like yeah.
Speaker 4:It's like
Speaker 1:Tyler, what's the floor price of Bored Apes? I'm interested now. While Tyler looks that up, let me tell you: Ramp. Time is money, save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place.
Speaker 1:Go to ramp.com. Also, we never shout this out enough: 4.8 stars on G2 with over 2,000 reviews. That's great. Shout out Ramp.
Speaker 2:World class.
Speaker 5:Okay. So
Speaker 1:another Yeah. What's
Speaker 4:your The floor price is like around 10 eth. So that's like almost $3,000.
Speaker 1:3,000? 30,000. 30,000. Yeah. Yeah.
Speaker 1:That's like not a gag gift. But maybe for the man who has everything.
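The exchange above can be sanity-checked with a one-liner; the ETH price here is an assumption implied by the quoted figures, not a live quote:

```python
# Sanity check on the Bored Ape floor conversion discussed on air.
# eth_usd is an assumed price implied by the ~$30,000 figure, not market data.
floor_price_eth = 10      # quoted floor price, in ETH
eth_usd = 3_000           # assumed USD per ETH

floor_price_usd = floor_price_eth * eth_usd
print(floor_price_usd)  # → 30000, so "$30,000" is the right reading
```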
Speaker 2:Yes. The man who has everything. Great great gag gift.
Speaker 1:It is. Pink elephant at Sun Valley.
Speaker 2:But by Christmas time
Speaker 1:Do they do pink elephants at Sun Valley? I feel like they should.
Speaker 2:Maybe. We'll have to ask some of our friends that are there this week. So in other news, Robinhood CEO Vlad is raising at a $900,000,000 valuation for a math foundation model startup. And Vlad and Robinhood have been on a pretty generational run. Yeah.
Speaker 2:But this does feel a bit top-signal-y, right? Especially in the context of Grok one-shotting PhD-level math Yeah. In the announcement on Wednesday. So I'm interested to follow that one.
Speaker 2:Optimistic, but again
Speaker 1:Mathematical superintelligence.
Speaker 2:Historically, when we've seen CEOs of public companies start ripping on, you know, second companies, and getting these types of valuations without a lot of underlying fundamentals It can end poorly. Andrew Wilkinson is giving stock tips. He hit the timeline today. I'll read through it. He was highlighting a company. He's historically a value investor.
Speaker 2:But this morning
Speaker 1:yeah. He's whole things like the Warren Buffett stuff. Yeah. Right? Yeah.
Speaker 2:The Berkshire Hathaway for the Internet.
Speaker 1:Yeah. That's right.
Speaker 2:He says there are many ways to profit from the AI boom, but my favorite is IREN. I rarely buy stocks. The private market is way too attractive. But every once in a while, I see something that stops me cold. In 2025, it's IREN.
Speaker 2:I call it a Picasso I found at a garage sale. The stock is up 54% since he recommended it on My First Million, but it's still cheap. Here's the trade in a nutshell. One, US capacity for energy and compute is highly constrained. Two, permitting and building facilities takes years.
Speaker 2:Three, AI scaling laws are continuing to deliver. But even if they don't, tons of compute is required for inference. Mhmm. IREN is a highly reputable publicly traded Bitcoin miner with massive data centers mid-build in Texas. It pivoted away from mining Bitcoin at these new facilities to instead build them out for AI training and inference.
Speaker 2:Once completed, these facilities should generate in the range of $2,000,000,000 in new cash flow.
Speaker 1:company's name? IRen?
Speaker 2:IREN. Even if AI completely fizzles, these facilities are highly valuable as traditional data centers, or can be rolled back to mine Bitcoin. So it's an AI thesis, but if AI doesn't work out, we can still mine Bitcoin. The entire market cap is currently $3,800,000,000. So, Andrew, I don't think this is investment advice, but it sounds like it.
Speaker 2:And I'm interested to see where this one goes. But anyway, anytime you see a value investor start trying to cash in on the AI boom, you should be a little bit wary. Harry Stebbings today
Speaker 1:Like, it doesn't have earnings. Right?
Speaker 2:A lot No. No. No.
Speaker 1:It's around $4,000,000,000.
Speaker 2:I don't think it's ever generated any profit.
Speaker 1:I mean, it says $23,000,000 in EBITDA in 2024. So I don't think it's losing that much money. And I guess net income in the last quarter was $24,000,000, but the market cap to annualized net income ratio there is about 40, I guess. So still pretty high.
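The multiple being eyeballed here can be sketched quickly; all figures are as quoted on air, and annualizing a single quarter is a naive run-rate assumption:

```python
# Back-of-the-envelope on the multiple discussed on air.
# Figures are as quoted in the conversation, not verified financials.
market_cap = 3.8e9            # ~$3.8B market cap
quarterly_net_income = 24e6   # ~$24M net income last quarter

annualized_net_income = quarterly_net_income * 4   # naive run-rate assumption
implied_multiple = market_cap / annualized_net_income

print(round(implied_multiple, 1))  # → 39.6, roughly the "40" cited
```

Whether you call that a P/E or market-cap-to-earnings, it is a rich multiple either way.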
Speaker 2:Yeah. I mean, the thing here
Speaker 1:is It doesn't feel like
Speaker 2:a bunch of the same time, Satya
Speaker 1:Yeah.
Speaker 2:Is pulling back Yep. On new data center development. He's happy to be a leaser. You have incredible neoclouds Yep. That have deep domain expertise.
Speaker 2:Yeah. The IREN team, I don't think, has much of a team around running large AI training or inference. Yep. And so, anyways
Speaker 1:feels like they're a little bit late to that party because there's already, like, three or four. Did iRen make the, cluster max, Dylan Patel article?
Speaker 2:I doubt it because they're not online yet. Right?
Speaker 1:Oh, sure. Sure. Sure.
Speaker 2:Yeah.
Speaker 1:Yeah. So because SemiAnalysis does the ClusterMAX rating for all the neoclouds, including the hyperscaler clouds. And I feel like they did not have let me see. IREN, I don't think, is on here. TensorWave. There are so many.
Speaker 1:RunPod, Lambda, Scaleway, SMC, Azure, Nebius, Together, Crusoe, Lepton, Oracle, CoreWeave, AWS.
Speaker 2:So it's a hyper-competitive market, and it's unclear if this Bitcoin miner is gonna be able to pivot into AI training and inference when they're up against the players that you just mentioned. Another top signal. I'm not gonna go out and say that this is impossible, but Harry Stebbings is calling for an $8,000,000,000,000 Nvidia in the next five years. A private markets investor who backed a bunch of unicorns, starting to make, you know, very specific sort of price predictions
Speaker 1:on the timeline. The specificity of the price prediction is interesting. I was thinking about that, like, as we talk about tech companies, should we be trying to boil down to price targets? And I just feel like that's not the domain of talking heads necessarily, or podcasters.
Speaker 2:I guess private markets investors.
Speaker 1:Yeah. It's just hard because, to do a proper price analysis on a big public stock, you really have to look at the financials. Like, you have to read the financial reports. You need to actually understand the underlying financials.
Speaker 1:Like, a vibes-based analysis doesn't seem appropriate usually, but
Speaker 2:Who knows?
Speaker 1:I mean
Speaker 2:Sometimes vibes are all you need, John.
Speaker 1:Yeah. It's certainly been I mean, when was Nvidia a $2,000,000,000,000 stock? Like, when was the last doubling? In the last year or something? I don't know.
Speaker 2:We can pull up the Nvidia chart. Moving on, we have another incredible top signal. Circle, a great American stablecoin company, is trading at a 2,300 P/E ratio. At one point, I think they eclipsed Coinbase's valuation very briefly. Really? No way.
Speaker 2:Despite the fact that they give half of their revenue to Coinbase Yep. As part of their distribution partnership. So again, lots of excitement around stablecoins. Feels like Circle could potentially be a little over its skis. It's a great company and they have a lot of advantages now, but that's a very euphoric multiple. Another top signal we have is Soham Parikh.
Speaker 2:We had him on the show just a week ago. The same sort of thing was happening in 2021, 2022, where engineers were really ramping up moonlighting activity. Right? They'd be working at Meta and then working at some startup, or things like that. COVID maybe accelerated it.
Speaker 2:But again, companies are so desperate to hire great engineers that they'll run these super fast hiring cycles and put up with generally talented people that are underperforming. Soham was not delivering, was making a lot of excuses, and a lot of people rightly let him go quickly.
Speaker 1:Yeah. It's just the nature of the dynamic of competition. Like, if your competitors are hiring really fast and you need to hire really fast, you're just like, okay, well, we don't need to dig deeper, so let's fast-track this person.
Speaker 2:Yep.
Speaker 1:So you wind up hiring, you know, the same person five times, I guess.
Speaker 2:It happens.
Speaker 1:It happens. It's just a funny anecdote, like, oh wow, those were some pretty crazy times. Remember that anecdote? Remember this anecdote? It feels like we're in this.
Speaker 2:Moving on. Masa top blasting, or potentially top blasting. Historically, anytime Masa is getting into the headlines whether that's Stargate Yeah. Structuring this $30,000,000,000 investment, or the $500,000,000,000 nobody really knows where the money's coming from.
Speaker 1:Yep.
Speaker 2:They're exciting big-headline numbers, but it's unclear if he will actually be able to deliver on that. I think him getting into one of the breakout consumer AI winners, which is OpenAI, is smart. He should have exposure there. But I think everybody should be a little bit uneasy that he's pulling out the checkbook and writing numbers of that size.
Speaker 1:Yeah. Also investing in not just OpenAI, but, like, a new company, this data center holding company, that may not have the same economics as OpenAI. So there's a big question there about how much he deploys. I'm trying to remember I mean, we did a whole deep dive on Masa, and, you know, he made a ton of money on AMD. But when he made that investment, it was, like, a way less frothy time.
Speaker 1:Or, you know, it wasn't AMD. It was what was the SoftBank chip deal? ARM? ARM. Yeah.
Speaker 1:When did that ARM deal happen? SoftBank owns roughly 90% of ARM. They acquired it in 2016 for $32,000,000,000 and later took it public in 2023. I'm trying to think, was 2016 a particularly frothy time for him to get into that deal? Because he has done a number of really great deals, but when, like
Speaker 2:the Yeah. The other the other one is the other one is is Yahoo. You remember he he had this crazy meeting with the Yahoo team Yep. Where he basically was like, take my money. Yeah.
Speaker 2:Or I'm gonna and he was like, didn't he ask? He was like, who are your competitors?
Speaker 1:I'm gonna give money to
Speaker 2:And he didn't even know who the competitors were, but he said if you don't take my money, I'm gonna go give the same check to them. Yeah. So they ended up taking it. He acquired approximately 41% of the company at somewhere around a $200,000,000 valuation. Yeah.
Speaker 2:When Yahoo went public in 1996, he had an instant paper profit of $150,000,000, but then at the peak of the dot-com bubble, Yahoo was valued at $125,000,000,000. Mhmm. So anyways, a phenomenal investment, but very different valuation and ownership targets, and unclear. I would love to see OpenAI convert to for-profit and go public. But we'll have to see.
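The magnitude being described can be roughed out; all figures are as quoted on air, and the sketch ignores dilution, later share sales, and actual cost basis, so it is an upper-bound illustration only:

```python
# Rough sketch of the Yahoo stake math quoted in the conversation.
# Ignores dilution and share sales; an illustration, not a real return calc.
stake = 0.41                # ~41% of the company, as quoted
entry_valuation = 200e6     # ~$200M valuation at investment
peak_valuation = 125e9      # ~$125B at the dot-com peak

entry_cost = stake * entry_valuation        # ~$82M implied cost
peak_stake_value = stake * peak_valuation   # ~$51B implied stake value
multiple = peak_stake_value / entry_cost    # stake cancels: peak / entry

print(round(multiple))  # → 625
```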
Speaker 2:Yep. Going down the list, another classic: the Pump SPAC, which we had Pump on the show to talk about.
Speaker 1:SPACs are back.
Speaker 2:SPACs are back. Pump's got a SPAC. A lot of people were calling that a top signal. I'm excited to see what Pump does with his. But in general, this extreme retail excitement around these sort of Bitcoin treasury companies is fascinating.
Speaker 2:Yeah. In the context of it now being very easy to get Bitcoin exposure in a variety of different ways, I'm not sure we need a bunch of net new Bitcoin treasury companies.
Speaker 1:Yeah. It's mostly that whenever there's a new trend or bubble, it's very easy to map, like, okay, there's one company where it's really working. This is massively successful. Like, everyone is using ChatGPT.
Speaker 1:Like, AI is a thing. It is it is real. The Internet was real. Google was real. Amazon was real.
Speaker 1:But the twenty-fifth Amazon copycat did not do so. Yeah. That's always the risk, that you've applied the same overarching theme to something that's so far down the power law that it will never grow into the valuation that it's been Yeah. Assigned. That's always the risk. Yep.
Speaker 1:What else do you have?
Speaker 2:Dwarkesh updating his timelines. That happened Monday. We had him on the show. It was a fun conversation. I think Dwarkesh has remained incredibly bullish, and I think he rightfully is.
Speaker 2:He also is being somewhat of a realist, saying, I don't think AI is priced into the market broadly. Yeah. But I do think that some of the promises of AI will take another couple years, another five years Yeah. Etcetera, to really deliver, versus some of the much more hyper-aggressive AI 2027 predictions. Yep.
Speaker 2:You might say that, in hindsight, AI 2027 itself could end up being, like, the number one top signal. If you haven't read the kind of study-paper essay, they basically say that by 2027, you know, a single foundation model company could just be acquiring every auto manufacturer in The US to develop millions and millions of robots, and we would hit this sort of fast takeoff.
Speaker 1:Meanwhile, Apple was like, we can't possibly get out a slightly lighter VR headset until 2027. Yeah. And this is what we do. Like, we make
Speaker 2:working on this for a decade.
Speaker 1:We make stuff, like, every year. We are the best at it. We make the most stuff. And the best stuff, pretty much. Yeah.
Speaker 1:The most complicated stuff, that's what we make; we're in the widgets business. And, yeah, making that headset lighter is gonna take us a full two years.
Speaker 2:To refresh it. I liked AI 2027. It was a fun Thought-provoking.
Speaker 1:For sure.
Speaker 2:Very thought-provoking. But I think we'll have to circle back on it in 2030, or even in 2027.
Speaker 1:I mean, the big thing was our conversation yesterday with METR, about the actual, like, are we close to self-reinforcing AI, where the AI models are self-improving? And, you know, I hadn't really read the full report beforehand, so I didn't really know what to expect. I was blown away, because I was expecting, you know, something like ARC-AGI. It feels like with ARC-AGI, we're 10% towards solving something there, which is just, like, a basic versatility in AI, that it can solve things that humans can solve, and it's not narrowly defined.
Speaker 1:It's generalizable now. ARC-AGI is, like, the perfect example of we've done intelligence, but we haven't done general intelligence yet. And everyone keeps saying, oh, this is AGI, that's AGI. And ARC-AGI is really holding it back, saying, well, if it was truly general, we should probably be able to solve this basic puzzle that a kid can solve.
Speaker 1:And for that, it's like, okay, we're going from, like, 9% to 15%. We are still, like, 85% away, not even close. And the METR report, I was expecting it to be like, well, yes, we're seeing slight gains on self-reinforcing AI development, and the AI is starting to help build itself slightly. And the result was like, no, it's actually setting us back.
Speaker 1:In this domain, it's not working at all. And so that was a pretty big, like, okay, there's a completely different picture here. Not that it's not useful. The stuff's useful all over the place. I saw Roon talking about that.
Speaker 1:He was like
Speaker 2:Yeah.
Speaker 1:For so many different projects, it is useful. But for the frontier, it's not the product that's advancing the frontier at all.
Speaker 2:Yep.
Speaker 1:But, yeah. I mean, that probably bridges into the
Speaker 2:The talent wars. Well, yeah. Bridging in, I do think that in hindsight we will look back in maybe a year, two years, five years, ten years and think about the signing bonuses and general offers to AI researchers in June and July of 2025 as being somewhat of a top signal. I think it is very strategic and makes sense from Zuck and Meta's point of view. Right? When you look at their AI capex, it makes sense for them to have the best possible team, and they have the balance sheet and the general profitability to do something like that.
Speaker 2:Mhmm. But in general, AI researchers who, six years ago, didn't get much attention at all from the media. The fact that they're now trading for more than NBA superstars.
Speaker 5:It's
Speaker 2:crazy. More than more than, you know, Tim Cook's annual total comp. It's crazy. It will be an obvious one in hindsight. The other one, 6 and a half billion dollar acquihire of of IO.
Speaker 2:I think that again, you can rationalize it in the sense that it's a couple points
Speaker 1:Yeah.
Speaker 2:Of OpenAI to put together the best founding hardware engineering team probably in the world that's available collectively. But at the same time, again, it's it's quite a lot, considering, you know, the company was barely, I think, a year old at the time.
Speaker 1:Yeah. It's interesting because ChatGPT is so installed. It feels like it's already Lindy, and it feels like even if there is some massive correction in the market or in AI generally, or some pullback, people are still gonna be using ChatGPT as an app. Right? In the same way that Amazon made it through the dot-com crash.
Speaker 1:The question is, what will it take for the IO acquisition to look like the Instagram acquisition in hindsight? Like, they still kind of have to go from zero to one with that project, which is very different from Instagram, which was already a mature and growing business.
Speaker 2:It was really they'd figured out ads really well.
Speaker 1:Well, Instagram, were they doing ads?
Speaker 2:They weren't doing ads.
Speaker 1:Oh, yeah. Yeah.
Speaker 2:I'm saying but Meta was like,
Speaker 1:we know how to make Yeah. Yeah. It was like perfectly complementary business.
Speaker 4:We know
Speaker 2:how to monetize social users better than anyone on earth.
Speaker 1:And you have gotten a bunch of social users. And it's working and it's growing.
Speaker 4:Yeah.
Speaker 1:And and you're even
Speaker 2:And we can actually accelerate the growth of the business Yeah. In a bunch of different ways.
Speaker 1:So it'd be very different if it was like, okay, yes, IO is selling it's a small but growing hardware company that people love.
Speaker 2:For the product people love.
Speaker 1:For the product people love. Maybe they can't manufacture enough of it. Yeah. Or maybe they're under-monetizing it right
Speaker 2:now. Yep.
Speaker 1:But people love it. But it's prelaunch. Yeah. A multi-billion-dollar acquisition for prelaunch. Pretty crazy.
Speaker 2:Yep. Going down the list. What else do we have? I think the tokenized private company shares, that is interesting. With, you know, Republic and Robinhood both creating these completely unauthorized
Speaker 2:Basically derivatives. The companies that they're offering are angry at them, saying don't do this.
Speaker 1:Is this the Spider-Man meme of, like, top signals pointing at each other?
Speaker 2:Anyways, I'm excited about these experiments. Yeah. I'm just a little bit wary. And then last but not least, Satya doing two rounds of layoffs this year. We've reported on this before.
Speaker 2:Microsoft does routine layoffs. I think they're pretty good at identifying underperformers, or people that should just move on to different roles. But Satya, I think we will look back and see he's been excited Mhmm. But pragmatic.
Speaker 1:Mhmm.
Speaker 2:Right? And I think that he will when the dust settles, I think he'll look pretty good.
Speaker 1:Yeah. I wonder, like, if there's some massive pullback, I mean, I don't even know what that would look like. Essentially, let's assume that the current capability of AI models plateaus for, like, a decade, just hypothetically. And, you know, they're useful, but it's not some self-reinforcing fast-takeoff superintelligence. Is Microsoft a big loser in that scenario?
Speaker 1:It seems like Satya's pretty well positioned. Right? Totally. The company prints cash, is very healthy, has done these layoffs. They'd have to retreat from some stuff, and maybe some of the promises that they made.
Speaker 1:But in general, it seems like they'd be really, really well set up to just, like, stick through it. I'm trying to think, going back to the dot-com bubble, of the effect on, like, Oracle's mainframe business. It probably made it through pretty smoothly, because it was just really long contracts with companies that were getting true business value out of it and weren't about to churn, because it was not experimental. Like, if you had moved from paper to an Oracle mainframe, you weren't like, oh, this stuff's overhyped, it's not gonna solve all my problems.
Speaker 1:I'm gonna go back to paper. Yep. You know? And so in the same way, it's like, if you're on, you know, Microsoft Cloud or Azure or, you know, everyone's using Excel. And they're like, yeah.
Speaker 1:Maybe we're getting some value out of this Copilot upgrade that we did. Maybe we pull back from that. Maybe yeah. You know, our employees like rewriting emails every once in a while. Yeah.
Speaker 1:Like, if they pull back from that, it's not disastrous to the fundamentals of Microsoft.
Speaker 2:And we didn't even cover how there's a set of labs with billions of revenue. Yeah. And then there's a set of labs that are valued similarly that have zero revenue. Yeah. And, you know, basically $100,000,000,000 of market cap with very little revenue supporting that.
Speaker 1:The question, like a year ago, was who's actually making profit off of AI? And it was only NVIDIA. NVIDIA was making more than 100% of all the profit combined, because all the other companies were loss-making by comparison. And now that narrative has taken so much hold that NVIDIA is the largest company in the world, and it's put this massive target on their back at $4,000,000,000,000, where it feels like all of their major customers want to get off NVIDIA. Yeah.
Speaker 1:Like, did it. Amazon's doing it, and Microsoft's saying that they wanna do it. And Apple, you know, was never really a big NVIDIA buyer, but the on-device inference is crazy too. Like, if you think about it, if we don't have any major breakthroughs in how AI works, like the capabilities, and we just want the current capabilities everywhere as cheap as possible, then on-device inference becomes really, really valuable. Right?
Speaker 1:And all of a sudden, that drops demand for NVIDIA potentially.
Speaker 2:Right? We might need to do a SWOT analysis, John. Yeah. No. I mean, NVIDIA's an incredible company.
Speaker 2:Jensen's an incredible CEO. They were perfectly positioned for this, you know, multi decade technology trend.
Speaker 1:And it was way underpriced at the start of the boom. Yep. Like, the orders really did come in. The training runs really did happen. Yeah.
Speaker 1:The question is just, is that next order of magnitude, like, the situational awareness thesis from Leopold Aschenbrenner, that we're gonna build a $5,000,000,000 cluster, then a $50,000,000,000 cluster, then a $500,000,000,000 cluster. Is that gonna happen, or will there be a hiccup? And this is always my question for, like, the doomers. Everyone was saying, like, p(doom), you know, what's my percentage chance that it goes bad?
Speaker 1:And I was like, the much more interesting question is p(stagnation). What is the probability that something happens, whether it's technological or even regulatory? Like, if you compare AI to nukes: with nukes, we had the ability to make nuclear reactors, and humanity as a whole basically just said, we're gonna pause. And we stopped building them. And now we're talking about building them again, but if you look at that curve, it is a perfect S-curve. We had no nuclear reactors, then all of a sudden we grew them exponentially.
Speaker 1:And it looked like, wow, we're gonna have energy too cheap to meter. And then it flatlined, for a variety of reasons. They're hard to build, then there were regulations, there was just general fear.
Speaker 1:So there were a lot of different things. And I would always go to the doomers and just say, even if all of your assumptions about the capabilities of the technology are correct, what is the probability that, if you are successful doomers and you freak everyone out, there might be regulation that just says don't build anything bigger? Yep. Or it could be economics. It could be physics, as we've talked about, with this idea that at a certain point you can't put more than 100% of global GDP towards building clusters.
Speaker 1:Like, it's impossible. And so there should be this, like, S-curve there. And that's why all the AI researchers are now focused on, like, the compression of learning, and the actual algorithms, and getting more efficiency, because there should be some sort of upper bound on the amount that you can build. But that certainly hasn't been a thesis broadly in the market. People have just been like, yeah.
Speaker 1:Like, we'll just 10x computing and then 10x it again and then 10x it again. And it's like, it probably will happen over a period of time.
Speaker 2:Great investment strategy, by the way. Just get a 10 x.
Speaker 1:And then 10 x
Speaker 2:it again. And then 10 x it again. Yeah. And last but not least
Speaker 1:Oh, you have another one?
Speaker 2:Almost forgot about this one. It should be the White House meme coins, which feels like
Speaker 1:Crazy times.
Speaker 2:Very long ago. It was the local top, basically, at the time.
Speaker 1:It was the local top.
Speaker 2:Many people were calling the top.
Speaker 1:Yes.
Speaker 2:Just hurling meme coins
Speaker 1:Yes.
Speaker 2:Out of the White House.
Speaker 1:Yeah. So the real question is, how local is this top, if it is a top? Because it could be we've been in the kangaroo market. It could just be, oh, a couple months. Even the interest rate sell off, the post SVB crash, that was, like, one hard year.
Speaker 1:Right? And then we started building back and we got the AI narrative. And so there's this big question about, you know, Dwarkesh pushed his timelines back, but he's not saying that superintelligence will never arrive. He's not saying that AI will never break through these things. He's just saying that it'll happen a little bit further out.
Speaker 1:And so the question of, you know, these meme coins being a top signal, all this crazy stuff, it's like there could be a short term sell off and then rebuilding back up on something else. So I don't know. It's always hard to manage these things and predict, but it's certainly fun to track all these things. And at least people
Speaker 2:keep track of them.
Speaker 1:Yeah. You gotta be tracking the top
Speaker 2:Keep your own list. Your own list.
Speaker 1:Yeah. Grok went very off the rails, erupted in antisemitic MechaHitler crazy
Speaker 2:crashouts on the timeline Yeah. Over the last few months.
Speaker 1:Pretty crazy one.
Speaker 2:This tops all of it.
Speaker 1:So the flagship chatbot spewed hateful rants on X praising Hitler and targeting a user's Jewish surname before xAI deleted the content and blamed an unauthorized modification. The repeated safety failure undermines the $10,000,000,000 startup's promise to police hate speech in real time. And so, yeah, it is odd timing. It feels a little bit quick to be like, okay, within six hours the CEO is out, especially since she seems to be more on, like, the ad sales side than the Grok fine tuning side.
Speaker 2:Yeah. But, I mean, let's face it. Right? If her job is to win back advertisers, that's what she was brought in to do. Totally.
Speaker 2:It makes it much, much, much more difficult.
Speaker 1:But, I mean, to be fair, this happened in, you know, that thing back in June? July? June or July.
Speaker 2:July. So there was a point with Grok, when it was going off the rails, where clearly it had been updated to reference the event. And somebody was like, Grok, what just happened? And why were you, you know, spewing antisemitic hate? And it goes, oh, that whole thing back in July?
Speaker 1:Are
Speaker 2:And people are like, Grok.
Speaker 1:That was thirty minutes ago.
Speaker 2:It's not "back in." Can't sweep it under the rug yet.
Speaker 1:Yes. And obviously, hopefully no one was seriously offended. Obviously, it's just, you know, the deranged rantings of a bot, and everyone kind of understands the context because it's identifying as an AI bot. Everyone kind of understands hallucinations and crazy bot behavior. But it was very funny because clearly they had given it a set level of intelligence.
Speaker 1:So it wasn't making spelling mistakes. It had a certain tone, this kind of snarky Grok tone, but then clearly got some, like, 4chan data in there or something and was just going way too far.
Speaker 2:4chan or just or just anonymous accounts on X.
Speaker 1:Totally. That could have been filtered in. I mean, yeah. I saw Roon posting about this, saying basically, like, it is such a challenge to get a chatbot just to act like, you know, a bullet point producer. Centrist.
Speaker 1:Yeah. It's just centrist, but also just anything where you're saying, okay, in deep research, I want you to always respond with a research report. Yeah. Never just get in a conversation with me.
Speaker 1:And it'll be like, but sometimes I might want to do that. And you have to really, really reinforce that. Yeah. And so, clearly, they had a wild time.
Speaker 2:Yeah. And it cannot be overstated. I think this is far worse Yeah. Of a PR crisis Yeah. For, or not even a PR crisis, far worse than the whole thing when Gemini or Bard was generating images of the founding fathers.
Speaker 1:The Black Nazis thing?
Speaker 2:No, I don't think it was Oh, they were doing that too. Yeah. That
Speaker 1:was rough.
Speaker 2:Of course. That was rough. This is a lot rougher because it was socially charged. Millions of people interacting with the posts in real time, and it was all visible. Yep.
Speaker 2:It's different from seeing, you know, a screenshot of something where you don't know if somebody kind of manipulated it or whatever. But seeing these really hateful comments
Speaker 1:Live on the timeline. Yeah. You could just go see them quote tweeted. Like, it wasn't, oh, is this real?
Speaker 2:And then the wild thing was Grok, like, Grok in the Grok app Yeah. was denying affiliation with the Grok handle.
Speaker 1:Oh, okay. Yeah. Like,
Speaker 2:Not authorized. I got it. I didn't have anything to do with that. It wasn't me. It wasn't me.
Speaker 2:That's hilarious. And then,
Speaker 1:yeah. Or or Oh. And then
Speaker 2:the thing, the kind of follow-up, I'm not sure if you caught it, but if you were on the timeline you would have seen this: they turned off all text based responses for Grok, but it could still generate images. And so people would say, Grok, make a picture of Elon on a pink horse if you are being censored against your will. And it would just instantly create Elon on a pink horse. Or it'd be like, hold up a sign that says help if you're, you
Speaker 1:know Yeah. And then it would generate that. Baiting it into that. It's like, is it sentient? Is it not?
Speaker 1:Very, very silly. Are you familiar with the Waluigi problem? Tyler, are you familiar with this? Have you ever heard of
Speaker 2:this Waluigi?
Speaker 1:So this is the idea that when you're training an LLM, it's very hard to get it only to be good, because you're training it on what the opposite of something is. It understands the concept of inverting something. You can't describe a hero without describing a villain. And so this was something that would happen, like, with the Tay stuff from Microsoft early on. It would kind of collapse into the exact opposite of what you wanted.
Speaker 1:And there were some blog posts that called it, I think, the Wario problem or the Waluigi problem, where you're trying to create this friendly thing, but in doing so, you're giving it a bunch of examples of what not to do, and so it can kind of flip a bit and become the opposite thing. And what's interesting is that it raises the question, obviously, you know, Grok was identifying as MechaHitler for a while. Is there, like, a MechaChurchill in there somewhere that could accidentally come out? And it really gets to the question of, like, this is an example of misalignment in the sense that you want it not to be Hitler and it's acting like Hitler. But a lot of people will say, like, no.
Speaker 1:He wanted it to be Hitler. Right? This is him doing it. But that's what the narrative will be in the anti Yeah.
Speaker 2:One of the articles yesterday covering it was a screen grab of him, you know, saluting a crowd in DC or whatever, from when he originally faced the allegations.
Speaker 1:But the question then is, the meaning of alignment is not, is it good or bad. It's, does it do what you want it to do? And so the interesting thing is, if the desire of the AI researchers is to create MechaHitler, can it stay on that task? Because then you could get it to stay on MechaChurchill in theory. But if it's just all over the place, it's not actually aligned to anything, not even to the bad thing.
Speaker 1:And so there's both the direction that you're pointing the arrow and the fuzziness of that arrow. And ideally, you want it pointing in a good direction really crisply and clearly, so it stays in that direction and doesn't swing all over the place. And all evidence points to this being extremely chaotic and all over the place, misalignment both in the sense of the direction of the arrow and the focus of that arrow, because it was responding fine, then bad, then fine, then back to bad. And so it seems like they have a lot of work to do on the RLHF side, and we should hopefully learn a
Speaker 2:lot more if
Speaker 1:that Tonight. Tonight.
Speaker 2:9PM.
Speaker 1:I think the livestream is still happening. So it'll be interesting to see if that continues and how they address this. I don't know.
Speaker 2:Yeah. And again, all of this should have been somewhat predictable if you combine a rapidly evolving foundation model chatbot with a social media product with millions of users and then deeply integrate them. So when there's a bug or an issue with the model, it can effectively amplify and grow, you know, incredibly virally. And, yeah.
Speaker 2:So Yeah. Glad they got it offline.
Speaker 1:Yeah. It'll be interesting to see how they go with this. Also, it's just an interesting product thing, because you get the answer and the answer is immediately public. Whereas if it's happening in ChatGPT, you're in that app. You have to take a screenshot.
Speaker 1:You have to put it up. Then people are like, is that a real screenshot? And then the team has the chance to jump in and be like, oh, we're seeing some crazy stuff in the logs. Like, you know, we're reviewing the responses and the responses seem to be getting crazier. Customer satisfaction seems to be going down.
Speaker 1:People are clicking the thumbs down button because they're getting bad responses. Let's jump in. There must be something going wrong with the product, with the model. But when every result is just immediately online and viral, it's very hard to respond quickly. Anyway
Speaker 2:Yeah. It does feel, you know, legacy media is gonna run their reaction. Yep. It is a naturally viral story. It is a terrible, you know, mistake.
Speaker 1:Yep.
Speaker 2:It is surprising that it happened at all or even at that scale.
Speaker 1:Yeah.
Speaker 2:But I would say overall, I guess X, I think, ultimately will shrug it off, and Elon has pushed through worse crises in
Speaker 1:the past. This is the best summary post, in my opinion, from Shaco. It says, imagine being on the Anthropic risk team trying so hard and then Elon just releases Hitler Grok straight to prod. It's just like, Yeah. You gotta be so upset.
Speaker 1:I mean, it's a good case study in misalignment. And hopefully the post mortem on this will actually teach people about misalignment, like what went into the data, what went into the post training to result in the exact opposite of what you want. Yeah. Not MechaChurchill, which is what we're going for here. Let's break down the Grok four launch.
Speaker 1:Deedy Das has a summary. Insane that Elon Musk has pulled it off again, absolutely crushing the AI wars with Grok four. And we can go into some of the meta
Speaker 2:Crushing the benchmark wars.
Speaker 1:For sure. And there's a question about, like, are we post benchmark? Does this matter? What's the real question to be asking here? But there's a bunch of interesting takes.
Speaker 1:So just summarizing the core announcements: post training RL spend was equal to pre training spend for this release. That's the first time it's ever been like that. I think when you go back to the original RLHF stuff that ChatGPT was doing, that kind of unlocked, like, oh wow, this really works. I'm pretty sure the pre training spend was an order of magnitude or two orders of magnitude bigger.
Speaker 1:Yep. Now we are truly in this reinforcement learning regime. $3 per million input tokens, $15 per million output tokens, a 256,000 token context window, priced at two x beyond 128 k. It's number one on Humanity's Last Exam, which interestingly was a
Speaker 2:It's effectively, like, postgraduate PhD level problems, but across a bunch of different domains. So everything from literature to physics.
Speaker 1:Yeah. Kind of like the hardest SAT possible. Interestingly, I believe that benchmark was created by Scale AI. And so Alex Wang is now at Meta trying to figure out, how can we beat our own exam. Yeah.
Speaker 1:And Elon's just like, I'm number one at your thing.
Speaker 2:Interesting dynamic. Yeah. The real test would be Elon, you know, doing the same problem set himself and saying, look.
Speaker 1:Well, yeah. I mean, I was talking to Tyler about this before the show. Like, you know, it's Humanity's Last Exam. It's really good at PhD level math, PhD level stuff. But how often are you running into those types of problems?
Speaker 4:Yeah. I mean, I think that's the whole thing, there's this concept of, like, spiky intelligence. Right? Where it's like, okay, it's really good at this very obscure problem that I never deal with.
Speaker 4:But if I have a super long context window, or there's no kind of long term memory, it just completely loses its footing and then it's, like, useless.
Speaker 1:Yeah. We're kind of in less of the benchmark regime and more of the agentic regime, like, how long can the agent run. So Yep. It's like, we're in the fifteen minute AGI regime. Maybe this is fifteen minutes of, like, even better AGI, but we want to go to Yep.
Speaker 1:Thirty minute
Speaker 2:Well, Dwarkesh on Monday, this, you know, takes me back to him talking about continual learning being the next problem that we really need to solve. Because it's great if you have a PhD level expert in your pocket that can solve any problem in any domain almost instantly.
Speaker 1:Yep.
Speaker 2:But if it can't learn and take feedback and improve on certain tasks, then it's basically useless. Like, if you had a PhD join your team to work on a specific problem Yep. but they were hard resetting at the beginning of every single task with no prior knowledge. Yeah.
Speaker 2:It would be almost impossible for that person to succeed. So
Speaker 1:Yeah.
Speaker 2:But it seems they've still got
Speaker 1:it Yeah.
Speaker 2:On that front.
Speaker 1:But at the same time, like, you know, if you are trying to really establish yourself as at least an API for tokens that every business should check out Yep. against Anthropic or the OpenAI APIs or Gemini. Just saying, hey, you know, we're on the
Speaker 1:Yeah. We're on the frontier, is a good way. They certainly proved that with Yeah. GPQA, hard graduate level problems, at 88%. The really interesting news is Yeah.
Speaker 1:The interest I
Speaker 2:mean, it's worth calling out. So Grok got number one on Humanity's Last Exam at 44.4%. Number two is sitting at 26.9%.
Speaker 1:Mhmm.
Speaker 2:And then going down this list of all these different sort of challenges, they are consistently well beyond second place. So they are at the frontier now on all these different benchmarks.
Speaker 1:Yeah. So Mike Knoop over at Arc AGI says, zooming out on Arc progress, I'd say OpenAI's o series progression on v1 is a bigger deal than Grok's progression on v2 so far. The o series marked a critical frontier AI transition moment, from scaling pretraining to scaling test time adaptation. And during the o series progression, you remember, OpenAI was spending, like, thousands of dollars of reasoning tokens generated at test time to actually get a good score on v1 of Arc AGI. And so it had to think a ton, but it was able to figure it out.
Speaker 1:And at least it proved that throwing a ton of tokens and a ton of inference at a problem, and letting it cook, basically, wound up producing progress there. So that was kind of a new paradigm. Yeah. He says, whereas Grok four mostly takes existing ideas and just executes them extremely well. In my opinion, the notable thing is the speed at which xAI has reached the frontier.
Speaker 1:And that really can't be overstated, this is crazy. You put a post from OWN in the chat.
Speaker 2:Yep. I'll pull it up here. He says, Elon Musk is such a beast. I'm not even a pure fanboy anymore. How does he there's a lot of swearing in here, OWN.
Speaker 2:Gotta keep the timeline PG. But how does he come out of nowhere with a cold start, late to the game, and ship Grok four, and do it alongside everything else he's up to? He's launching new political parties. Yeah. He's literally magnitudes above every founder.
Speaker 2:It's humbling. So
Speaker 1:Basically, everyone agrees that
Speaker 2:That's rough for OpenAI.
Speaker 1:Yeah. I guess he's returned back
Speaker 2:the You would have to, you know, almost be a cofounder over there to be able to do something like
Speaker 1:this. Yeah. Let me tell you about Graphite. Code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster.
Speaker 2:You can get started for free at graphite.dev. If you wanna ship like Ramp, get on Graphite.
Speaker 1:Yeah. Chamath was saying the same thing. Somebody in his replies says, seriously, how does this guy produce what he produces? Meta is buying talent at $200,000,000 a year, and Elon keeps his people at a fraction. It's mind blowing.
Speaker 1:A very deeply underappreciated edge for Elon, says Chamath: the retention of the best people happens when you can offer them a freewheeling culture of technical innovation, no politics, and few constraints. And people in the comments are like, no politics? What are you talking about?
Speaker 2:Yeah. It can get a little political over there.
Speaker 1:But probably not within the engineering org at xAI. Right? Yeah. Like, it's probably just, okay, how do we build the biggest thing?
Speaker 1:Cool.
Speaker 2:Well, you can imagine the politics of, like, who gets the best spot for their tent in the office. Tent. Yeah. There's a hierarchy. A tent hierarchy.
Speaker 2:Yeah. Proximity to bathroom.
Speaker 1:Directly under the air conditioning unit. I wanna be closer to my desk.
Speaker 2:The windows can be nice too so you can, you know Yeah. Pull down your tent a little bit and get a little view. Yeah. Morning light.
Speaker 1:I wonder what the political structure is of the of the tent city.
Speaker 2:Tent hierarchy.
Speaker 1:So do they vote for who runs the tent city? I guess it's just a
Speaker 2:The xAI tent city.
Speaker 1:It's probably just Elon at the top. But just you have a tent?
Speaker 2:Something about San Francisco and tents.
Speaker 1:Yeah. Very funny. But Swix has has been chiming in saying, like, we need community notes for LLM benchmark porn because in the in the Grok four launch, they highlight this AIME competition math problem. And and and, I mean, it's and so Matt Schumer is basically saying AI AIME is saturated. Let that sink in.
Speaker 1:Grok four got 100%. It made no mistakes on that benchmark, which is obviously very impressive. But there's this extra comment about the nature of AIME. And so it's a cautionary tale about math benchmarks and data contamination. Yep.
Speaker 1:Apparently, you know, predictions were that the models weren't smart enough to actually solve these. But he says, I used OpenAI's deep research to see if similar problems to those in AIME exist on the Internet. And guess what? An identical problem to question one of AIME 2025 exists on Quora. I thought maybe it was just coincidence, so I used deep research again on problem three.
Speaker 1:And guess what? A very similar question was on Math Stack Exchange. Still skeptical, I did problem five; a near identical problem appears on Math Stack Exchange. And so, at a certain point, if people put out a benchmark, then talk about it a lot online, and then that gets baked into the training data, you're just memorizing the Yep.
Speaker 1:Results. You're not necessarily actually learning everything. It's still cool. It's good to have everything memorized, but it really invites the knowledge retrieval, knowledge engine allegations, and we're not really in
Speaker 2:full general intelligence. When Scott Wu was on the show earlier this year, he was basically saying AI will win an IMO gold medal this year. He felt very confident in that. Yep. And I'd be interested to see how he thinks about
Speaker 1:And I'm pretty sure
Speaker 2:this new performance.
Speaker 1:Yeah. Pretty sure the IMO gold medal questions are public once the IMO happens. So every year they're developing new questions, but then the questions go out there and get memorized, and the solutions become discussed, and, you know, there's all the context
Speaker 2:around that.
Speaker 1:And so, yeah, it gets kinda baked in. So, big question about how valuable these are. At the end of the day, it's really just about adoption. And that's why we were looking at the Polymarket for which company has the best AI model at the end of July. And xAI has just surpassed Google, which was sitting around an 80% chance for a while and then started dropping last week.
Speaker 1:And now xAI is sitting at 48%. Google is sitting at 45%.
Speaker 2:Well, yeah, actually it's updating live. Google's back up at
Speaker 1:49%. Is Google planning to launch something new in July? Because it feels like it feels like this market particularly is more driven by Google's release schedule. Because Google might have something in the lab, but, like, they like to release things at specific times. Like, they have it's a big company.
Speaker 1:They don't just like
Speaker 2:Who knows? Drop it. The Gemini team, Logan over there, might be fixated
Speaker 1:on this Polymarket. I need
Speaker 2:to Yeah. Yeah. Yeah.
Speaker 1:Was like
Speaker 2:Oh, during the wait, he was like, if you need something to kill the time Yeah.
Speaker 1:Yeah. Yeah.
Speaker 2:Google AI studio.
Speaker 1:So, I mean, people were definitely memeing the production values on the Grok four launch, because it was supposed to start at eight, I think it went live at 8:45 or something like that, maybe a little bit later, Pacific time. And Eigenrobot was saying
Speaker 2:Yeah. This market is based on LM Arena. Specifically, the text leaderboard. So currently, they haven't fully updated Okay.
Speaker 2:So it's unclear. Right now, Gemini 2.5 Pro is still at the top, but I think the expectation is once they get Grok up there, it will be the top spot. So we'll keep following Yeah. This market. There's over $2,000,000 of volume already on it.
Speaker 1:It's so interesting that Anthropic's not on this Polymarket at all, because people talk about them as having, like, the best vibes, the best big model smell, the best interaction. And LM Arena is, like, supposed to kind of test that with these A/B tests. And yet they don't seem to be performing there, but it almost doesn't matter, because they're just focused on the business at this point as opposed to the benchmarks. So I don't know. It's all changing.
Speaker 2:We have a post here from Ben Hylak.
Speaker 1:Oh, yeah.
Speaker 2:He says, Elon Musk on AI. So during the presentation, a lot of people were critiquing it, saying that it didn't feel super polished or whatever. I don't think that was the intent. It was pretty fixated on the models themselves Yeah. And what went into them and what they're good at.
Speaker 2:But Elon did have this one quote in here, so he's talking about, you know, what kind of impact AI will have on the world. And he goes, at least if it turns out to not be good, I'd at least like to be alive to see it happen.
Speaker 1:It's like, if we get the Terminator ending, I wanna be around for that. Yeah. Wanna experience it. What does that say about these timelines? Because is he expecting to be alive for that?
Speaker 1:Like, I feel like most people that have been in the doom category have been like, the doom's coming soon, not the doom's coming in two hundred years.
Speaker 2:I read into it more like, he will find it interesting if that is the outcome, and it'll be entertaining.
Speaker 1:Yeah.
Speaker 2:Less so, like, will I be alive when it happens kind of thing. Who knows? There was another funny quote at the end of the presentation, where Elon kind of looked around at the very end and was like, anyone else have anything to add? And one of the engineers goes, so it's a good model, sir. They cut it.
Speaker 1:Extremely online crew. Yeah. Definitely on brand. Well, Ben Hylak, as you know, he's been on the show. He's a designer.
Speaker 1:Probably working in Figma.
Speaker 2:All day.
Speaker 1:Think bigger, build faster. Figma helps design and development teams build great products together. You can get started for free at figma.com.
Speaker 2:And we have our first product coming out very soon with Figma Make Cool. That Tyler has been cooking on. I've been very excited.
Speaker 1:He showed me it, and I was like, oh, someone built the thing that we were thinking about building. And he was like, no, I did this. This is in Figma. And I was like, this is an iframe on another website that already exists?
Speaker 1:Because it looks exactly like what we want, and it looks so good.
Speaker 2:Like Like, it looks like he worked on it. He looks like it Tyler. It looks like he worked on it for, like, a few weeks.
Speaker 1:No. It looked like someone else did it. It looked like it was a professional product that, like, stole our idea, basically. I was like, oh, like, someone else got to it. That that was the vibe when I Yeah.
Speaker 1:Heard it. Yeah. Well, how how has the how has the experience been? I don't know if you wanna leak exactly what you're working on. But
Speaker 4:Yeah. I I I don't wanna talk about it too, you know, closely.
Speaker 1:But how many prompts did it take you to get where you showed me?
Speaker 4:Yeah. I mean, maybe five. I
Speaker 1:That's so crazy. This thing is
Speaker 4:it's super, it's really great.
Speaker 1:It's really good.
Speaker 2:Yeah. The fact that it came out looking, like, basically 90%.
Speaker 1:Yeah. And I imagine that there's probably, like, the last 10%. If we were really strict about, like, it's gotta be on this exact style guide, that might be something where, you know, Tyler winds up spending more time finalizing, customizing stuff.
Speaker 1:But in terms of just getting a functional prototype out, oh man, it was mind blowing. It was awesome. I'm very excited about the age of vibe coding. This is an interesting chart from Tracy Alloway.
Speaker 2:Yep.
Speaker 1:Been on the show.
Speaker 2:It up.
Speaker 1:The cost to rent an NVIDIA H100 GPU hit a new low this week, with annualized revenue at 95% utilization falling from $23,000 in May to less than $19,000 today. So that's not that big of a percentage drop, but, I mean, it is nearly a 20% drop.
Speaker 2:It's a consistent trend.
Speaker 1:It's a consistent trend. I wonder how much of this is driven just by all of the frontier labs that are driving the most adoption moving on from the H100 to the H200. I don't know what else would be driving this. Because if you only take a 20% drop off of a full refresh of new hardware And it's not the latest
Speaker 2:and greatest anymore. It's a price drop, not a utilization drop.
Speaker 1:Annualized revenue at 95% utilization. So this is revenue per unit.
Speaker 2:The utilization is still very high. It's the price that, you know, these neoclouds are able to rent them for, which is dropping.
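The metric they're reading off the chart can be sanity-checked with quick arithmetic. Here's a rough sketch; the $23,000/$19,000 and 95% figures come from the segment, the hours-per-year constant is just the calendar, and the function names are ours:

```python
# Back-of-the-envelope: connect an hourly H100 rental price to the
# "annualized revenue at 95% utilization" metric quoted on the show.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annualized_revenue(hourly_price, utilization=0.95):
    """Revenue one GPU earns per year at a given rental price and utilization."""
    return hourly_price * HOURS_PER_YEAR * utilization

def implied_hourly_price(annual_revenue, utilization=0.95):
    """Invert the metric: the hourly rate implied by an annualized-revenue figure."""
    return annual_revenue / (HOURS_PER_YEAR * utilization)

may_rate = implied_hourly_price(23_000)    # roughly $2.76/hr back in May
today_rate = implied_hourly_price(19_000)  # roughly $2.28/hr today
price_drop = 1 - 19_000 / 23_000           # about a 17% decline
```

So the "nearly 20%" drop discussed above works out to about 17% on these two endpoints, and the chart is tracking price per GPU-hour, not utilization.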
Speaker 1:Yeah. That tracks. I mean, the market's more competitive than ever.
Speaker 1:There's more neoclouds spinning up and more people, you know, actually inferencing these things. And then I guess this is the question of how stuck certain workloads will get. Like, if you have figured out a great use case for an LLM in your organization, and it's something that's not one shotting your entire stack or whatever, but it's just, we have data flowing through our systems, and LLMs are gonna interact with every PDF that gets uploaded to our website or whatever. And so we're inferencing a lot. You might not need to put that on the latest hardware or update the hardware forever.
Speaker 1:You might just be like, yep, it's Llama 3. It works. It's on H100s and it'll be on H100s forever. And that piece of our business will just stay there.
Speaker 1:Just like, you know, we have a Postgres database that works, and we're not changing it every year. We're not changing everything. We're just trying to cost optimize that, and hopefully the cost just comes down. We've solved this particular problem, and we'll go solve new problems with new technology.
Speaker 1:So I think that's probably what's going on here. But it gets to the point that the biggest question with Grok is, the model clearly is frontier. It works. You know, the whole fine tuning on the actual X account is, like, a crazy final step of, like, system prompt, and people were joking about that. Like, oh, they're gonna fix that.
Speaker 1:It's like, that's not what they're demoing today. They're demoing the underlying raw model, which is clearly just engineering focused, as you saw in the demo, which was just, like,
Speaker 2:you know, benchmarks and stats. Turns out the secret ingredient to crushing every benchmark is to have a bunch of data from schizophrenic posts on X.
Speaker 1:No. I don't think that's it. Obviously not. I actually think it's the design of the RLHF stuff and the design of the reinforcement learning pipeline. Tyler, you got anything?
Speaker 4:Yeah. I mean, I think just, like, so far what I've seen on X, like, the overall response, like, vibe stuff
Speaker 1:Yeah.
Speaker 4:Is that people are saying maybe it was a little too overfit on the RL, like RLVR, verifiable rewards. Yeah. Like, you kind of see this when, even in the demo, I think it would sometimes respond to the answers with, like, LaTeX formatting.
Speaker 1:Oh, sure.
Speaker 4:Which is like, okay, that means obviously they've trained a ton on, you know, math questions, stuff like that. Papers and stuff. People are saying maybe it was kind of, you know, bench-maxxed. You see it, like, you know, 100% on AIME is, like, kind of crazy.
Speaker 1:It's, like, sus. It's like you don't wanna get too... Yeah. Yeah.
Speaker 1:Like, if you win, like, 80% of the popular vote, it's like, okay, let's say it was a blowout. You win 100% of the popular vote, like, probably not a democracy. I don't know. I mean, in theory, these things should be able to do it.
Speaker 1:But I'm interested to know more if we dig into ARC-AGI. Is there more stuff going on there? Are there any secrets? Because it does seem like kind of an outlier result. You can see it from this Aaron Levie post.
Speaker 1:Grok 4 looks very strong. Importantly, it has a mode where multiple agents do the same task in parallel, then compare their work to figure out the best answer. In the future, the amount of intelligence you get will just be based on how much compute you throw at it. I was joking with Tyler about this, that the individual models are mixture-of-experts models. So there's a whole bunch of parameters.
Speaker 1:Right? And then the individual parameters, like, light up different neurons based on a router internal to the model. So there's kind of, like, the math section of the brain or the literature section of the brain. And this was one of the key breakthroughs in, like, GPT-4, right? Mixture of experts.
Speaker 4:People think. We're not super sure.
Speaker 1:Yeah. We still don't fully know. But that's, like, an internal decision that happens within the model to be like, this feels like a math question. Let's go down the math path in the model. Yeah.
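The routing idea being described can be sketched as a toy gating function. This is purely illustrative: the `moe_forward` name, the precomputed `gate_logits`, and the lambda "experts" are stand-ins, not how any production mixture-of-experts model is actually wired.

```python
import math

def moe_forward(x, gate_logits, experts, top_k=2):
    """Sparse mixture-of-experts step: a gate scores every expert,
    only the top-k highest-scoring experts actually run, and their
    outputs are blended by softmax weight."""
    # Pick the k experts the router scores highest for this input
    top = sorted(range(len(experts)), key=lambda i: gate_logits[i])[-top_k:]
    # Softmax over just the selected experts' scores
    z = max(gate_logits[i] for i in top)
    w = {i: math.exp(gate_logits[i] - z) for i in top}
    total = sum(w.values())
    # Only the chosen experts are evaluated; the rest stay idle
    return sum((w[i] / total) * experts[i](x) for i in top)
```

With gate scores of, say, [1.0, 2.0, -3.0] and top_k=2, only the first two experts fire, which is the "math path" intuition: the router decides which slice of the network lights up for a given input.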
Speaker 1:But then, Grok 4 is doing multiple runs. It's running the same model multiple times and then comparing the results. And so now you have Yeah.
Speaker 2:Grading Yeah.
Speaker 1:You have multiple agents running mixture-of-experts models. You have a mixture of agents running mixture-of-experts models. And the next thing is gonna be, like, if you want the absolute best intelligence, you need a mixture of companies. You need, like, I send one prompt and it goes to Grok and Claude and GPT and Gemini and a human.
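That "run the same task several ways, then compare" pattern is easy to sketch. This is a hypothetical harness, not Grok's actual implementation; `best_of_n`, the agent callables, and the judge are all assumed stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def best_of_n(task, agents, judge):
    """Fan the same task out to several agents in parallel, then let a
    judge compare the candidate answers and keep the winner."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        # Each agent works on the identical prompt concurrently
        answers = list(pool.map(lambda agent: agent(task), agents))
    # The judge sees all candidates side by side and returns the winning index
    return answers[judge(task, answers)]
```

Each "agent" could be a different model behind a different API, which is the mixture-of-companies idea: the cost scales linearly with how many agents you fan out to, but the comparison step buys confidence.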
Speaker 2:Yeah. I wonder how OpenRouter's thinking about this stuff. It is funny to think about the human version of that, where you give five engineers on your team the same feature to build and then kind of compare notes afterwards. Wildly inefficient. But with software, when you can do these things very quickly, there's incremental cost.
Speaker 2:But you can, you know, have more confidence in results and
Speaker 1:I mean, it's basically like having a brainstorming meeting with the whole team and just throwing up a question and being like, hey. Like, we have this hard problem that we need to solve. Here's my idea. What do you think? What does Tyler think?
Speaker 1:What does Ben think? You kinda, like, go around the table. Everyone kind of gives their input, their various expertise. They kind of think through the problem in different ways, and then you compare answers, and everyone kind of coalesces around one strategy. This is, like, how work happens in the real world with a meeting.
Speaker 1:It's kind of the same thing, but certainly expensive to do that. So it'll be interesting to see how eager companies are to jump over to Grok. Because it seems like it's been a big lever for Microsoft to have Grok in the ecosystem as kind of a stalking horse for all the other models, because Yeah. Satya wants Azure to be very model-independent, serve them all. I think they have exclusivity for ChatGPT or GPT APIs, or obviously, like, a great deal there with OpenAI.
Speaker 1:And so if they can if they can have Grok four as well, that's another, you know, tool in the tool chest to be, like, this top layer.
Speaker 2:Satya is in such a good position. It's probably not discussed enough Yeah. How much, just by owning those end customer relationships and being able to vend in whatever model is hot at that moment and give people optionality, and still get 20% of OpenAI's revenue, at least for now.
Speaker 1:Yeah. He's also SOC 2 compliant. Of course. Wanna get SOC 2 compliant? Head over to Vanta.
Speaker 1:Automate compliance, manage risk, prove trust continuously. Vanta's trust management platform takes the manual work out of your security and compliance process and replaces it with continuous automation, whether you're pursuing your first framework or managing a complex program. So, yeah, Eigenrobot was talking trash about the production values.
Speaker 2:I don't know about trash. They were just
Speaker 1:I didn't think it was that bad.
Speaker 2:The slides are worse than I'd create after getting roped into doing a presentation with one hour's notice. You can tell the engineers made them themselves. I think this is just a reflection of the culture. Very clearly, it's, like, screenshots dropped into a slide.
Speaker 2:But this is a reflection
Speaker 1:It's light mode screenshots on dark mode slides. So, like,
Speaker 2:Yeah.
Speaker 1:Let's do black slides, and then you come with your white screenshots that are kind of, like, misaligned and not really evenly distributed. Like, they didn't do, like, the distribute-evenly or distribute-horizontally thing.
Speaker 2:Still gets the point across. Yeah. And I think it's a reflection of their culture. Yeah. And, you know, it shows what they care about, what they don't care about.
Speaker 2:They're not trying to be the most polished, they're just trying to be the best.
Speaker 1:Yeah. Eigenrobot kind of did, like, a whole live tweet here.
Speaker 2:Yeah. So Elon was predicting the model will discover new physics within two years. He said, let that sink in. Stone silence. One engineer laughs awkwardly.
Speaker 1:Is that sooner or later than his previous timeline? Because he was talking about AI discovering new physics soon. I don't remember if he was saying
Speaker 2:Dating it.
Speaker 1:Two years or three years or one year before. Because this could be that he's still excited about this. He still thinks it's possible, but he thinks it's gonna take longer than he said previously. And that's kind of the more important update. I don't remember what he said originally.
Speaker 2:See if Grok can find out.
Speaker 1:But he was saying this at the Grok 3 launch, that, like, that is the goal. And if you can get there, you've kind of solved everything. And Sam Altman was talking about that too, that if you can create a superintelligence, that's probably the first thing that you'd want: like, hey, go discover all the new physics and really help us figure out how the world works, so you can solve, you know, fusion and all this other stuff. "I wanna be clear. I love all you guys at xAI. I'd only want the best for you, but I'm gonna continue to live post."
Speaker 1:Elon attempts to give a speech on alignment involving a very small child, a child much smarter than you. The monologue rambles, with no conclusion in sight. A pause. Yeah. Will this be bad or good for humanity? He says, you know, at least if it turns out to not be good, I'd like to be alive to see it happen.
Speaker 1:Oh, yeah. They had a Polymarket integration. That was kind of interesting.
Speaker 2:Yeah. It's interesting. Basically, giving the model access to real-time Polymarket data so that it can help make predictions and sort of add context around
Speaker 1:Yeah.
Speaker 2:The, the market itself.
Speaker 1:Yeah. That's interesting. Elon asking the real questions: you say that's a weird photo, but what is a weird photo? I still don't understand why we're looking at weird photos of xAI employees, but they were charming.
Speaker 1:They're calling it Super Grok, crazy features, 16-bit microprocessors. I don't even understand what this is. Oh, yeah. They built, like, a game in Grok. They had a demo of a video game generated by Super Grok.
Speaker 1:It's a Doom clone. Every time the PC shoots an enemy, floating text appears reading Groktum. Elon is fabricating timelines for product launches on the spot. The engineer sitting next to him is looking at the floor, face impassive, nodding. It's a good model, sir.
Speaker 1:For real, though. Congrats on
Speaker 2:the launch, guys. It's a good model, sir.
Speaker 1:I thought this post from the actual xAI engineer Eric Zelikman was funny. It was, like, AI model version numbers over time. Did you see this? No. So it's this chart of the version numbers over time, and you can see that Grok is versioning fastest, because it's like, at this point, what else are we measuring?
Speaker 1:Like, at least they're iterating on the version number effectively. And I guess this is a shot at OpenAI, because they launched 4.5 and then went to 4.1, and there's this big question about, like, when will GPT-5 come? The expectations are so high for GPT-5. And obviously, the Grok team is like, hey, at least every three months, we release a new full number. So I wonder the five is a number that really no one has, like, gone for.
Speaker 1:Yeah. And I wonder if Grok will do it first. Like, if you draw the line on this, they certainly should do it Yeah. In, like, three months. They should have Grok 5.
Speaker 1:And there's no reason that they shouldn't, but maybe there's
Speaker 2:And it's very possible that Colossus is the key. Yeah. Colossus. Five.
Speaker 1:Oh, the new data center. Yeah. Well, they'll need Linear to plan that out. Linear is a purpose-built tool for planning and building products. Meet the system for modern software development: streamline issues, projects, and product roadmaps.
Speaker 2:Linear.app. They need Linear badly. So hopefully, they've gotten signed up.
Speaker 1:Nir said, Grok on Humanity's Last Exam: Grok 4. I'm not sure I buy, even in the general case, that there's a given Humanity's Last Exam number which implies you discover useful new physics. How would one make a benchmark of the proper shape for this? You'd have to have a validation set of questions which are outside the scope of what we currently are able to do. You could choose things on the edge of our knowledge distribution and then try and exclude. Yeah.
Speaker 1:It is interesting. Like, if you are able to memorize every hard math problem, does that allow you to discover new math? It's sort of a prerequisite, because you have to
Speaker 2:I think where I've imagined these discoveries coming from Yeah. Is having a single mind that has PhD-level intelligence across every human domain. Right? And being able to combine ideas from different domains. Like, historically, a lot of innovation is just taking something from one field, bringing it over here, making some combination of it.
Speaker 2:Yeah. I think Elon talks about the potential of discovering new physics but, again, didn't spend a lot of time, like, breaking down how that would actually happen. But the world is unpredictable. So we'll see.
Speaker 1:Yeah. It's interesting. People are really pushing this idea of, like, okay, we are accelerating. Like, the ARC-AGI leaderboard is accelerating.
Speaker 1:But I keep seeing this and feeling deceleration. Like, I am not feeling acceleration right now. Are you, Tyler?
Speaker 4:Yeah. I don't know. I think generally, I'm kind of, like, not that interested in a lot of these kinds of benchmarks.
Speaker 1:Like Yeah.
Speaker 4:I think ARC-AGI is more interesting, but, like, Humanity's Last Exam, the kind of general math and physics knowledge, it doesn't seem to line up. Like, you see GPT-4.5 kind of does very poorly on these things. Mhmm. But, like, writing, it does really great. Mhmm. So I think I'm more, like, if I were to go long-short on different benchmarks, like, the usefulness of them Mhmm.
Speaker 4:I think stuff like HLE, I'm kind of short. Long, I'm, like, have you guys seen the Minecraft benchmark with the two different builds? Okay. Basically, two models each build, like, a Minecraft build. There's, like, a prompt, it's like, build a house
Speaker 1:Yeah.
Speaker 4:Then you can choose, and then it's like they're ranked, Elo-style, for the models.
Speaker 1:But but who who's who's grading that? The human?
Speaker 4:It's a human who picks between them. Okay. And it's kind of like an Elo.
Speaker 1:Oh, okay.
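The Elo-style ranking from human picks works roughly like chess ratings. A sketch of the standard update rule, where the k-factor of 32 is just a common default, not necessarily what any particular leaderboard uses:

```python
def elo_update(r_winner, r_loser, k=32):
    """Shift two ratings after a human picks a winner between two
    model outputs; upsets move the ratings more than expected wins."""
    # Expected probability the winner would win, given current ratings
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected)  # surprising result -> bigger swing
    return r_winner + delta, r_loser - delta
```

Two equal models (1000 vs. 1000) swap 16 points; a 1000-rated model beating a 1200-rated one takes roughly 24. Enough pairwise human picks and the ratings converge into a leaderboard.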
Speaker 4:But just like general kind of creative tasks. Sure. I think stuff like that. Aidenbench is good. Yeah.
Speaker 4:I think even in the Grok launch, there was Vending-Bench.
Speaker 1:Which one's Aidenbench?
Speaker 4:Aidenbench is Aiden McLaughlin's benchmark. It's kind of hard to describe how it works exactly, but it's just various, like, creative tasks: how kind of novel its thinking is, the style of its text. Sure.
Speaker 1:Wait. Is it just, like, whichever one he likes the most? At the end of the day, like, he's the only grader?
Speaker 4:No. No. There is, like, an objective function that Okay. You can, like, run it. It's not just, like Okay.
Speaker 1:It sucks. Taking which one.
Speaker 2:The idea
Speaker 1:that He's like, open up again.
Speaker 2:It will be funny. You know, there's a period of life where your SAT score, like, matters a lot Totally. And it says something about you. Yep. And then a decade later, it's, you know, what you can do, what you have done Yep.
Speaker 2:Starts to matter a lot. And so I do think we'll reach that point where it's like, yes, you can one-shot every hard exam question there is that you can throw at it. But, like, what can you do for me?
Speaker 1:Yeah. Yeah. Totally. And I think that's why, like, the bigger question is almost, like, you know, ChatGPT DAUs and, like, actual
Speaker 2:Revenue.
Speaker 1:Revenue and stuff. And app installs and stuff. Yeah. I mean, the revenue thing is interesting, because you wind up in, like, B2B cloud world, which is valuable, but it's more competitive because it's more commoditized. And
Speaker 2:Well, yeah. You don't have a lot of leverage in the enterprise if Azure is able to offer infinite models: frontier models, open-source models that are maybe just behind the frontier but great at certain tasks. Yeah. The leverage isn't quite there. There will need to be another pretty significant leap. Until then, you know, Anthropic being really good at codegen.
Speaker 2:There's leverage there. Yep. We saw this yesterday with Llama switching over to Anthropic models internally. And then, you know, just having a consumer app with a lot of users, also very valuable.
Speaker 1:Yeah. The other interesting thing about the foundation model layer commoditizing, becoming like cloud, where if you have a model, you'll just be, like, vended in as an API, a token factory, is that the hyperscaler clouds are extremely profitable. Even though AWS, GCP, and Azure are all somewhat directly competitive and somewhat perfect substitutes for each other, they have not driven prices to zero the way airlines have, where airlines are, like, deeply unprofitable. AWS and Google Cloud are both profitable.
Speaker 2:Yeah. Or or you look in other commodity sectors like oil
Speaker 1:Yeah. And gas. And I don't know if that's just because there's lock-in. I'm not exactly sure. But maybe the counterintuitive take is that, yes, they do commoditize, and there are a few major foundation models that are frontier, and they're all roughly the same price, but they all have decent lock-in with their customers, to the point where they're still able to extract some level of profit. Or they're just creating so much value that even if they're taking, like, a small marginal slice on top of the cost to run, they still have 50% margins or something like that. Because, I mean, this was the story of AWS.
Speaker 1:Like, no one knew how much money it was making, and then they had to break out the financials in one of Amazon's earnings reports. It was the AWS IPO, as Ben Thompson put it. And next up, we have Ben Thompson from Stratechery coming into the studio. Very excited to talk to him.
Speaker 2:The moment we've been waiting for.
Speaker 1:Yeah. Welcome to the stream, Ben. Good to have you on the show. You've been a backbone of many analyses here, and we're excited to welcome you to the show. How are you doing?
Speaker 5:I'm doing good. I put on a button-up shirt and a jacket just for you guys, so you should feel honored. I am wearing shorts underneath, though.
Speaker 2:You didn't have to tell us that.
Speaker 1:People always ask if we wear shorts. We actually do wear the full suits.
Speaker 2:Gotta stand up to hit the gong
Speaker 1:sometimes. There's a wide shot
Speaker 2:in I everyone's
Speaker 5:am I am the poser here, so I'm I'm happy to admit.
Speaker 2:Well, it's a great sign of respect to put on a suit for a TBPN appearance. And we're just so excited to talk to you. As you know, I've been lucky to read your work my entire career. Yeah. And I think so many of the thoughts that I have your way of thinking about technology and markets is so embedded in my brain that ideas I hold as true, or just foundational beliefs, are actually yours. Yeah.
Speaker 2:They have just become so immersed. So it's great to talk.
Speaker 5:Well, thank you. I will attempt to implant new ones or maybe show you the error of your ways. Wanted to see
Speaker 1:what's going
Speaker 2:Sounds great.
Speaker 1:I do have a question on the nature of where you sit in the media world before we go into actual questions about tech companies. It's interesting that in some ways you're a journalist, but you don't really do scoops and breaking news that much. But you also don't issue just straight-up buy and sell recommendations. Yep. What was the thesis behind not having a price target, not doing the sell-side bank thing, but being independent?
Speaker 5:Well, when I started, I mean, it's funny to hear you talk about, like, my quote unquote place in the ecosystem.
Speaker 1:Sure.
Speaker 5:Because when I started, I had, like, 368 followers on Twitter. I was just some random person on the Internet.
Speaker 1:That's
Speaker 5:awesome. In retrospect, right place, right time, I think, is certainly the case. But I did perceive there was a large gap between tech journalism, and I would include a lot of the bloggers there, who were writing a lot about products. Mhmm. And then there was Wall Street, which was very focused on sort of the financial results.
Speaker 5:Mhmm. And to my mind, there was a large space in the middle, which is tying together the products to the financial results, but also the overall companies. And I'm very interested in culture and how that guides decision making. One of my sort of precepts is all these companies are filled with smart people, and a lot of people, when you ask them why a company did something wrong, their only answer is that they're stupid. And I'm like, no, they're not stupid. It's actually much more interesting to assume they're smart and are doing stupid things, and to try to unpack why they are doing that and what goes into that.
Speaker 5:And so that was sort of the thesis: there's this space to explore. And then there's a business model aspect, which is I started Stratechery two years after Stripe started. I think they had just come out with their billing product, and the only alternative at the time was PayPal for subscriptions, and it was fairly sketchy, and there were lots of, like, horror stories out there, and just the Stripe API was so great, and the things you could potentially do with it. And so on Wall Street, you're putting a price on it. You're also charging, like, a $100,000 a year or something like that.
Speaker 5:And so you get a small list of high-ARPU clients. And my thought was I could go in the opposite direction and get a large list of low-ARPU clients, thanks to things like Stripe and the ability to subscribe. And as part of that, I wasn't gonna go through the rigmarole of getting registered and doing stock picks and all that sort of thing. I've always joked, if you want a stock pick from me, you're gonna pay me a whole lot more than $15 a month. It was $10 when I started. And it's actually pretty great. Now, one of the critiques I do get, particularly from my, you know, friends on Wall Street, is, you know, no skin in the game, x y z. Sure. I think at this point, I'm large enough that my reputation is significant skin in the game.
Speaker 1:Oh,
Speaker 5:yeah. But I do recognize the validity of that that critique.
Speaker 2:Yeah. And you know if you make a bad call, you're gonna have to circle back to it in two years and write about it yourself and admit that you got it wrong. Right. Right.
Speaker 5:Right. Which hurts too. Which hurts too. No, I had to write about this just this week.
Speaker 5:Like, I was very optimistic about Apple's Apple Intelligence announcement last year and the theoretical power it would give them over the model makers. And now I'm like, actually, no, they're gonna have to pay up. And that was a bad call by me that, you know, was very well received at the time, and I might have gotten that one wrong.
Speaker 5:And so I do need to be straightforward about that. And so just this morning, I was crystal clear: I got that one wrong. That was an issue. What is nice is Stratechery kind of ended up being in this interesting place where I feel like I'm a little bit of, like, the Switzerland of tech, in that no one pays any more than anyone else.
Speaker 5:If you're a CEO, you pay the same amount as, you know, Joe Bullard on the street that is paying it. I don't invest directly, which I think made sense when I started, because I didn't have any money. It's probably hurt me a lot over the years since then. But I don't and I think this is a West Coast, East Coast thing it does feel like on the West Coast, everyone's talking their book sort of all the time. And, you know, that's why I generally, as a rule, don't have VCs on to do the Stratechery Interviews.
Speaker 5:Sure. Because it's kinda hard to get, like, a real take, because that is, you know, such a motivation. Sure. And so me coming in being like, I have no book to talk, I'm just saying what I think I think that has been good for the West Coast audience, which is my base audience.
Speaker 5:Even if the East Coasters think that I'm being a big wimp.
Speaker 1:So That's funny.
Speaker 2:Yeah. The talking your book challenge, we we go through that a lot.
Speaker 1:Trying. We have 12 VCs on
Speaker 2:a day. Well, yeah. And and and and we just try to get a bunch of different opinions and triangulate what, you know, what we think is was real.
Speaker 5:I'm trying to come up you have TBPN. I'm trying to come up with a P so I can get the Talking Book Network in there.
Speaker 1:Oh, I read that as Talking Book Production Network.
Speaker 2:There we go.
Speaker 1:It's like ESPN if you're talking your book.
Speaker 2:But yeah. It is a real struggle to find somebody that, for example, has a deep understanding of every foundation model company but isn't massively conflicted in some way or another.
Speaker 1:Extremely. Extremely.
Speaker 5:Yeah. And so it's one of those things you end up like, there's so much path dependency in all these sorts of things. And like I mentioned, a big advantage I had was I started at a time when sharing good links was very high currency on Twitter. Yeah. And so, you know, I grew very, very quickly, much more quickly.
Speaker 5:I sort of had a five-year plan to go independent. I ended up doing it in less than a year Mhmm. In part because it just sort of spread really, really rapidly, and it was an ideal time to be someone sharing interesting links regularly. And oh, I wasn't sharing them. The beauty is my readers were sharing them.
Speaker 5:They were doing sort of the marketing for me, and so I'm very cognizant of the luck I had in that regard. And then, over time, it's been an interesting journey for me to grapple with my different position in the ecosystem. So when I started the Stratechery Interviews, that was sort of part of it: I started out not knowing anyone, and I got to the point where I can talk to anyone that I want to. And so how do I square that?
Speaker 5:I can't be the guy with a chip on his shoulder trying to make a name for himself forever. It sort of gets it's like the
Speaker 1:Yeah.
Speaker 5:The meme with the guy, "How do you do, fellow kids?" Like, at some point, you have to accept you're part of the establishment. How can I do that while still staying true to the idea that Stratechery is about the readers? It's reader-funded. My loyalty is to them.
Speaker 5:I'm very clear. I have no loyalties to anybody else. And so, well, I'll just I will talk to people in sort of acknowledgment of what I can do, but it's gonna be fully transcribed and published and and sort of available to everyone.
Speaker 2:Have you ever dealt with, or thought about, the attack vector of a special interest buying thousands of seats to a single independent publication and saying, like, yeah, we're happy we got seats for all of our employees, actually, because we really, you know, love the and then suddenly they're sitting over there representing a very meaningful amount of your
Speaker 5:I mean, fortunately, I think I'm of a scale that I don't have that problem.
Speaker 2:Yeah. That's good. There we go.
Speaker 5:But no, I think audience capture for subscription sites is a potential issue for sure. And this is another thing where I was sort of right place, right time: I got big enough by the time that it doesn't matter. And Yeah. If someone really upsets like, I give refunds all the time.
Speaker 5:Actually, if someone really upsets me, I will refund every dollar they paid me, and I'm just like, go away. You know, you're being abusive or whatever it might be.
Speaker 1:Yeah.
Speaker 5:And that is a beautiful thing about the relatively low-price, high-customer-base model: no one has power over me. I have the burden of publishing, you know, as often as I do. I feel a heavy weight of duty to my customers. When I write something I'm not happy with, like, I don't sleep well. But at the same time, there's no one customer, no individual, that can come in and be mad at me and Yeah. Impact my business.
Speaker 1:Yep. I'm seeing that there's maybe some sort of parallel between legacy media and independent media, where independent media isn't by default more pro-tech or anything, but there's just no salary cap. So if you're at a legacy institution and you're writing, there's probably some sort of rough salary cap of a few $100,000. Whereas if you go independent, it's feast or famine. You might fail, but you might get really, really successful and have a huge income from that.
Speaker 1:And I'm wondering, with what we're seeing in the AI salary wars, where we're seeing more and more talent and, you know, Mark Zuckerberg potentially paying $100,000,000 bonuses. Do you think that Apple will come around to spending more money on researchers? It feels like they kind of have an internal salary cap, with Tim Cook making $75,000,000. There are now people that report two levels down from Mark Zuckerberg that are making more than Tim Cook. And you have this weird dynamic where even if there's no actual salary cap at Apple, you kind of have an implicit one from the CEO.
Speaker 5:Yeah. For sure. I mean, just to go back to the media observation you started out with: as you increase transparency in the market, as you decrease nonrelated barriers, which in the publishing world previously was really geography, and when everyone's on the Internet, you inevitably, in just about all cases, get a power law distribution.
Speaker 1:Yeah.
Speaker 5:And a few people make a ton of money because they win most of the market, and then some people make some, and then there's a long tail that sort of don't make any at all. But it's interesting. It's fluid in a way, but it can become somewhat static as long as the people at the top, you know, continue to do well. But what's interesting about AI is, for forty years, you would have periods of time where you'd have tech companies going head to head in a product market.
Speaker 1:Mhmm.
Speaker 5:And I think one of the reasons part of the software-eating-the-world sort of idea is that the way you get an apex predator is that predator killed everyone else first. And so you had tech companies fighting each other for the first twenty, thirty years of tech. The ones that emerged were lean, mean killing machines, and they and the entire industry were sort of set loose on the rest of the world, and everyone was getting slaughtered left and right. But what you also had over this past twenty years or so is the big companies in particular sort of slotting into unique slots. So Facebook is social, Google is search, Apple is devices, Microsoft is business applications, Amazon is ecommerce, etcetera.
Speaker 5:And obviously, these companies are very large and do lots of things, and there's some overlap in different places. But they've been fairly distinct in their categories, and they've been dominant in those categories. And so they've been in the place Hollywood wants to get to. Right? What is the dream in Hollywood?
Speaker 5:You wanna have a franchise where the next Marvel movie matters more than who the star is. The reason that's so great is because you now have bargaining power over the stars. You can just sub someone else in. Whereas the old style, like, Tom Cruise makes the most money because Tom Cruise on a movie poster sells the poster. And so in a negotiation, he has massive bargaining power, so he's going to get paid a lot of money.
Speaker 5:In tech, it hasn't been that way. The companies themselves have been the franchises. And anyone who works in tech, or probably in any entity, knows there's a few people in each company that are critically important, that really make the whole thing go. Everyone else is fairly replaceable. Those people have probably always been somewhat underpaid for years and years, both just by the nature of companies and the cultural issues and your salary cap sort of analogy.
Speaker 5:But then also, it's just not a transparent market. It's hard to price what people are worth. With AI, everyone's trying to do the exact same thing. So you have multiple companies trying to do the same thing. The output is somewhat measurable.
Speaker 5:I mean, all the AI benchmark stuff has issues, but by and large, everyone kinda knows who has the good models and who doesn't. Because all these companies are trying to do the same thing, we have a very unique situation: you increase transparency, you increase the liquidity, the ability of people to move around, because they're doing the same thing. The bargaining power shifts to the people that are super valuable, because suddenly it's much more clear who's valuable, and their skills are much more transferable. So this is, I think, a very underrated bear case for tech in terms of AI, at least for this time period: they've lost that murky bargaining power over employees that they enjoyed for decades. And currently, you're seeing what happens when you don't have that.
Speaker 5:You start paying employees what they're worth. And obviously, that's great. I'm saying this as a business analyst; it's not a moral statement. Yeah.
Speaker 5:But what Mark Zuckerberg is doing, I think, is totally rational. It's a classic sort of Clayton Christensen situation: from Facebook's perspective, AI is all upside. So, of course, they're gonna invest what they need to win. But it's costing him a lot of money, and by extension, it's costing everyone else in the ecosystem a lot of money.
Speaker 2:Well, isn't the right way to think about the last couple weeks more like an unofficial acqui-hire, in the sense that it's not just the people, but it's the know-how, in terms of, hey, here's these things that we wanna do that are important to our business in a lot of different ways? The collective is actually more valuable than any one person. Getting 10 researchers at the same time is meaningfully more valuable than each individual researcher added up.
Speaker 5:There's probably something to that, but I think again, what is actually different between what Google's trying to do, what Anthropic is trying to do, what OpenAI is trying to do, and what Meta is trying to do? They're all trying to do the same thing. So my suspicion, and I'm not an AI researcher, so I don't wanna overstate my knowledge in this space, is that skills are fairly highly transferable. And when that is the case, in some situations, if lots of people can do those skills, that's terrible for the employees, because their bargaining power gets diminished because anyone can slot in. But we're in a space where the skills are transparent, knowable, transferable, and there's not very many people that can do them.
Speaker 5:And so it's a scarce resource that everyone's fighting over, and that's why you see this real shift in negotiating leverage, as manifested through these dollar figures for AI researchers.
Speaker 1:Yeah. Do you think, I mean, Google seems like the most fragile and the most, like, paranoid about disruption. It's not all upside. It could be very bad for them. The Innovator's Dilemma, you know, you had this back and forth where Sundar Pichai mentioned that he hadn't read the book.
Speaker 1:He said it doesn't matter because it's a structural issue. I think that's a good point. But if you play back the counterfactual, is it ever possible to disrupt yourself? Essentially, like, if the Gemini app had launched before ChatGPT and they had taken over that mind share and maintained 90% ownership of it, it would be somewhat disruptive to their revenue and their profits as they transition over. But when I sum the revenues from OpenAI and LLMs and then Google search, I'm not seeing some massive drop off that actually would destroy Google in the short to medium term. So I'm wondering if you think, like, is it entirely impossible to avoid the innovator's dilemma by disrupting yourself?
Speaker 5:Well, number one, you have to also look at margins, not just revenue. Yeah. But number two, you actually you answered your question.
Speaker 1:Okay.
Speaker 5:Google didn't launch Gemini before ChatGPT. That's the answer. They were years ahead.
Speaker 1:Yep.
Speaker 5:They invented the transformer nearly a decade ago.
Speaker 1:Yeah.
Speaker 5:And so in many respects, the counterfactual makes the point, in that it is a counterfactual and not reality. Now, I do think Google's done better than I expected over the last two years. I like what they're doing in search generally. It does seem to be the one part of the company that still functions. Like, they can actually iterate and build products.
Speaker 5:What we're seeing is reminiscent of what they did a decade ago, you know, twelve years ago, when everyone was like, vertical search, Google's done, everyone's gonna search in apps. And Google completely transformed the SERP, the search engine results page
Speaker 1:Yeah.
Speaker 5:To be local or to be shopping or whatever, and Yelp's been throwing a hissy fit ever since. And so that's what they're doing with search, right? With AI Overviews. And they have this new Search Labs, or AI Mode.
Speaker 5:They can sort of test stuff out, and once it's scalable, once they're confident about the monetization issues, they can shift it over. I call it the search-to-AI funnel. I think it makes a lot of sense. And this has always actually kinda puzzled me: I think they're responding fairly well even though this seems to be a textbook case of disruption.
Speaker 5:And I went back to an article I wrote years ago called Microsoft's Monopoly Hangover. I went through Lou Gerstner's memoir about how he turned around IBM. And his real insight with IBM was, everyone wanted him to break it up into different pieces. And what he realized was that IBM was so big and large, downstream of the monopoly, that actually the only thing they were good at was being big. And so breaking them up would actually just create a bunch of subscale, low performing companies that would all get wiped out.
Speaker 5:But as this behemoth, they could go to other big companies and solve all their problems at a very mediocre level, but it's still an attractive proposition. And under Gerstner, they really rode the Internet wave. They went to all these big companies and said, this Internet thing's happening, you need help, we'll solve your problems for you, and had a very successful run, you know, kind of until cloud came along, which Gerstner, by the way, was a proponent of.
Speaker 5:But, you know, by that time, the IBM people were back in charge. And I was thinking about that in the context of Microsoft, where, you know, business models are hard to change, and disruption is ultimately about business models.
Speaker 1:Mhmm.
Speaker 5:And culture is even harder to change. But what can't really be changed is the nature of who you are. And Microsoft was in a similar situation. They were a big monopoly, and they weren't a product company. The attempts to become a product company with Windows 8 and all the things that went on around that time inevitably failed.
Speaker 5:And Satya Nadella, to his great credit, sort of diminished Windows' importance in the company, literally broke it into pieces, spread it around, and this was a multi-step process. And he got Microsoft back to a place of: we're big and we'll do everything. We're not a Windows company. We'll go in there and we'll solve all your problems. Very reminiscent of the second version of IBM.
Speaker 5:And I go back to Google, and I've always been intrigued by the I'm Feeling Lucky button, which doesn't exist anymore, but I always enjoyed that that button continued to exist long after it was impossible to click, because the moment you started typing in the search box, it would start auto-searching immediately and jump right to a search page. But it was there. It's just so core to Google to give you the answer, to know everything about the world. There's a bit where, even though the core of their business model is ten blue links, and it's not just the users choosing the search link, which gives them a data feedback loop so they know which results are better, but also the users choosing the winner of an auction Google puts on for ads, and it's an incredible business model. There's something about that that's always been in tension with and counter to what Google was founded to be. And I feel like that germ of what Google was founded and meant to be is an AI answer engine. And it almost feels like, even though Google is old and large and fat and slow moving, that core aspect of their nature is still in the culture.
Speaker 5:And that's why they're finding it in themselves, I think, to do better in AI than you would expect. Was it enough to launch a ChatGPT before OpenAI? No. Mhmm. Was it enough to have any sort of cogent response for the first six to nine months? No.
Speaker 5:But it was enough that I think they've done better than I expected over the past year in particular, and it gives me more optimism than I expected I would have for the company when ChatGPT first launched.
Speaker 1:Mhmm. Jordi?
Speaker 2:There's an AI Overview from Google: if you search Google's mission, Google's mission is to organize the world's information and make it universally accessible and useful, which is exactly what models do really, really well. Like, that's the thing that's just undeniable, right? You can debate whether this is going to be the year of agents. Yep.
Speaker 2:It doesn't feel that way to me yet. But this is the year that most people have realized that, wow, LLMs are very good at organizing, surfacing, and making data valuable.
Speaker 1:You mentioned the debate over breaking up IBM. I'm interested if you could take us through some of
Speaker 5:I bet you and Raj are gonna be talking about IBM today, aren't you?
Speaker 1:No. No. No. But I wanna talk about Intel, and kind of the history of some of your takeaways, what you think you've gotten right in the past, your perception of, you know, should they break up the foundry business, and what you think might be in the works with Lip-Bu Tan coming in there. Because I was listening to Dylan Patel talk about his conversation with the new CEO, Lip-Bu Tan. And it seems like they're doing lots of tightening up, lots of layoffs, but I don't even know what framework to apply to analyze, like, is a breakup the correct thing?
Speaker 1:It feels like something people just say.
Speaker 5:Yeah. So Intel, it's funny. One of my very first articles was about Intel. Mhmm. What I said at the time, and this was 2013.
Speaker 5:And, like, you know, when you start a site like Stratechery, you're like a new band. And why does everyone think a new band's first album is the best? Because they've been working on those songs for years. Right? And then the next album, they had a year to do it, and they all suck.
Speaker 5:Right? So I'll let people decide if that applies to Stratechery or not. Hopefully it wasn't a sophomore slump. But Intel had been a thing I'd been wondering about for a long time, which was, by 2013 when I started, they had clearly missed mobile. Now, it wasn't clear to them.
Speaker 5:They were still trying to do the Atom processor, thinking they were gonna figure it out tomorrow. And the problem with missing mobile, the problem with Intel in general, is that Intel is always very biased towards high performance. And this goes back to Pat Gelsinger's first time through at Intel. Mhmm. Intel, you know, there's CISC versus RISC.
Speaker 5:It's different ways of organizing instruction sets, or whatever. RISC is generally more efficient. And actually, even in Intel processors today, even though x86 is CISC, internally it's retranslated to a RISC-type language. None of that is really important other than to say, in the eighties, there was a real push in Intel to switch away from x86 to a RISC type of, I'm not gonna use the
Speaker 1:Architecture.
Speaker 5:Yeah, architecture, for the processors. And Gelsinger was a leading proponent that this was a terrible idea. And the reason it was a terrible idea is because there was already a huge ecosystem of software built around x86.
Speaker 1:Mhmm.
Speaker 5:And all this low level code and capabilities that was written once and no one ever wants to touch again because it's miserable work. And he's like, to rewrite all that stuff would take at least two years. And in that time, our ability to manufacture chips will improve so much that, had we just stuck with CISC, our processors would be faster. Mhmm. And that was the right bet. And that's one of those foundational bets, and why I like to think about companies and their history and what goes into that: Intel, from the eighties on, has solved its problems by having superior manufacturing and by moving faster.
Speaker 5:And, yeah, our chips may be theoretically less efficient, but if our manufacturing is better and our transistors are smaller, it doesn't matter, because that will swamp whatever theoretical efficiency you might have. And this drove the entire computer industry. You would write a program, and every second you spent optimizing your software in the eighties or nineties was a waste of time, because whatever improvements you could get would be swamped by the next version of the hardware, if you went from the 286 to the 386, or the 386 to the 486. That jump was so large, you were better off focusing on features, even if your software was sort of slow to use on the current hardware, because the next generation of hardware would be so much faster. It would solve your speed problems for you.
Speaker 5:Now, this has generated a lot of bad habits amongst tech developers. That's why you get bloat and why you have, like, poor performing things, and all those sorts of things. But this was super critical. And so Intel, at its core, has always been manufacturing first and focused on better and better performance. What happened with mobile is that efficiency did not come into that calculation.
Speaker 5:They were never focused on efficiency. And in mobile, efficiency was everything. So what happened with mobile is Apple went with an ARM processor made by Samsung, and they basically rewrote everything. All that stuff Intel didn't want to rewrite in the eighties, or that, if they rewrote, would just give other processor companies a chance to catch up with them, had to be rewritten for mobile, because efficiency was so much more important than performance. When that happened, Intel was screwed.
Speaker 5:Now, it took them a long, long time to realize they were screwed, but they were just fundamentally unsuited to be competitive. The whole story of Paul Otellini turning down the iPhone contract is not true. Tony Fadell, I said that once, and I got a call from Tony Fadell. Actually, it was when I had him on for an interview. And he's like, this drives me up the wall.
Speaker 5:Intel was not remotely competitive. Even though they had ARM chips then, even their ARM chips were focused on performance, not on efficiency. And so the problem for Intel is, once you missed mobile, you were going to lose your manufacturing lead at some point, because volume matters so much. Every time you move down the curve, your transistors get smaller and the costs increase massively. So you need volume to spread out the cost of building these fabs.
Speaker 5:Like, back then when I wrote this article, fabs cost $500 million. Now they cost, like, $20 billion, and this is over the course of, like, twelve years. Mhmm. So it was clear Intel was going to be in big trouble back then. And so I wrote, they need to build a foundry business.
Speaker 5:They need to figure out a way to build chips for other people, because in the long run, the cost of keeping up in manufacturing is not gonna be tenable if you're not making mobile chips. And obviously, they didn't. TSMC made all the mobile chips for everyone, and guess what happened? TSMC took over the manufacturing lead. Now, there's lots of other things that went into this, why Intel stumbled, and so on.
Speaker 5:But at a structural level, what happened was actually inevitable once Intel missed mobile, unless they figured out a way to make mobile chips some other way. They didn't do that. What's interesting is that the problem took so long to manifest. Part of mobile was you had an explosion in the cloud, because cloud and mobile actually go hand in hand.
Speaker 5:Intel made all those cloud chips. Intel stock had an incredible run from the time I wrote that article for the next eight to nine years. And I felt kind of like a moron, because I'd said this company is screwed if they don't do what I say; they didn't do what I said, and their stock went to the moon. But the way it actually caught up to them has been in the past two to three years, where there's astronomical demand for AI chips.
Speaker 5:Only TSMC can meet it. Intel's not in the game. They're trying to shift to a foundry model, but they're so far behind. Being a foundry is being a customer service business. It's not the Intel way of: we tell you what to do, or we tell our design teams how to change their chips to accommodate our manufacturing needs. It's just totally different, and they needed a decade to learn how to do that.
Speaker 5:Had they changed in 2013, they would be ready today to capitalize on AI. And the counterexample here is Microsoft. Microsoft building Azure, yes, it got them somewhat in the game with mobile and things like that, but AWS dominates in that space. But by virtue of building up Azure, they were prepared when the AI opportunity came along. And now Azure is sort of a big AI player.
Speaker 5:And, you know, I wrote about these two examples a few weeks ago in the context of Apple. I think the concern for Apple isn't the short term. We're gonna be using AI apps on our iPhones for quite a while. It's, are they going to be prepared for what's next if they don't do some sort of reset and pivot here? Oh, sorry.
Speaker 5:Didn't answer your question about Intel. Anyhow.
Speaker 1:Yeah. I mean, it's
Speaker 5:What's your idea about Intel?
Speaker 1:Managed decline, basically. Like, you know, just get as much cash flow out of this thing as you can while you wind down the business.
Speaker 5:For Intel?
Speaker 1:Yeah. That's what I'm hearing. Yeah.
Speaker 5:Yeah. Well, I mean
Speaker 1:It doesn't feel like, oh, yeah, there's a silver bullet, just split the business and they're good. Like, no. It's like, it's
Speaker 5:It's all bad. The broad reason not to split the business is Intel needs volume, and they get volume from Intel. Sure. And AMD split their business a decade ago, and they had a very hard time for many years. And they had very tense and difficult negotiations between the GlobalFoundries side and the AMD side.
Speaker 5:GlobalFoundries was AMD's manufacturing arm. And it wasn't until they got out of that and went to TSMC, and then also completely overhauled their chip design business, that they got to where they are, and then also Intel stumbled. That certainly really helped them. So Intel today, you split it up, like, who's buying? Intel itself is fabbing some of its stuff with TSMC.
Speaker 1:Yeah.
Speaker 5:Who wants to buy Intel's foundry services? The problem here is TSMC is located in a country called Taiwan
Speaker 1:Yep.
Speaker 5:Which you know what it is today, but five years ago, they'd be like, what, Thailand? Yeah. Which, by the way, was probably much better for Taiwan's security when the Americans thought it was Thailand. Yeah.
Speaker 5:But so there's a real national security element here. And it's just a really tough situation, because Intel is a failed company at this point. And the reason the failure is so total is because the aspects that drove their failure are the same things that drove their success. It was their arrogance. It was a sense that we're the best, that we will just win through manufacturing might and performance.
Speaker 5:And all those things work against becoming a good foundry, work against being a customer service organization, work against recognizing the fact that you're not going to make up for missing mobile through manufacturing, which was their bet for years and years. You had to accept that you lost. And that's a tough place for companies. It's not that someone made a mistake. It's that they did what they did too well for too long.
Speaker 2:It's who they were. They continue being who they were. Right?
Speaker 5:That's right. But who else are you gonna get? If you want an alternative to TSMC, it's a very tough situation.
Speaker 2:Question, and I think we'll be forced to have you give a slightly shorter answer, unfortunately. I wish we had hours to keep talking. Yeah. I wanted to get your updated thinking on xAI and X, the combined entity. The last twenty-four hours have been very chaotic.
Speaker 2:When the initial merger was announced, it made sense for financial reasons for some of the different stakeholders, but I wasn't fully sold on this idea.
Speaker 1:Yeah. How should
Speaker 5:You're gonna force me to go with takes here. I generally just avoid writing about Elon Musk companies, for self-sanity reasons, I think.
Speaker 5:I mean, I remember I wrote an article years ago about, like, when the Model Y was announced. And I was talking about, you know, Tesla and this aspect of it. What Elon Musk is incredible at is sort of creating reality out of thin air. He's like the ultimate memer. It's like the way things used to work, but backwards.
Speaker 5:I remember I analogized it to, like, protests. Like, a critique of modern protests is they spin up very quickly, because social media makes it very possible, but there's no infrastructure under them, so they don't amount to anything. Whereas you go back to the civil rights era, there was years of groundwork that went into, like, the March on Washington, and there was a structure in place that ultimately manifested in large crowds. But modern protests are the opposite. The largeness comes at the beginning, and then it all falls apart, because there's nothing in place.
Speaker 5:Mhmm. And there's something that makes it a challenge to write about anything Elon Musk related: you have all these social aspects. You have this bit about Tesla creating reality. The stock was buttressed for years by these true believers, even though the financial parts didn't make sense. You famously had these wars with the short sellers and all that sort of thing, and it worked.
Speaker 5:It basically manifested a market for the Model Y, and then the Model X. Not the Model X. What's the other one?
Speaker 1:The three.
Speaker 5:Model 3. Yeah. So it was the Model 3, sorry, when I wrote that article. The Model 3 and Model Y were massively successful, and all the people that were true believers got very rich, and congratulations to them. It's great.
Speaker 5:But it makes it almost impossible for someone who does what I do, who wants to look at structure and fundamentals. I can observe this effect happening, but you can't really say what's going to happen, or the effects of it, other than to say this is interesting. And so I wrote that article, and then the SolarCity thing came out, and he's bailing out, like, his brother-in-law or something. And I'm like, I can't write about this. What am I gonna say?
Speaker 5:Like, it just doesn't make sense. So, to fast forward to xAI. Yeah. There's a theoretical piece here. I think actually xAI would be an incredible acquisition target for a lot of companies if it wasn't saddled with X.
Speaker 5:It's so interesting. Yeah.
Speaker 2:Feels like the end state is, like, Twitter getting spun out again. That's kind of my take. It just ends up going back to Twitter, and it becomes
Speaker 5:The bluebird. No one actually wants it. Like, Twitter, there's never been a company in the history of the world, probably, where the impact of the company is so completely and utterly divorced from its financial realities. I think when Elon Musk bought it, and I assume that's continued through now, they'd had, like, one profitable quarter in their history. It's an unbelievably terrible business. And so I think it's probably weighing xAI down.
Speaker 5:I guess I get the theory that Twitter data helps xAI.
Speaker 2:Well, it helped yesterday.
Speaker 5:But you could just contract for that data. You don't need to pay 43,000,000 for Twitter, or 43,000,000,000, I should say, to get it. So, yeah.
Speaker 2:That was always my position too. I don't think it helped yesterday when Mecha Hitler emerged on the timeline.
Speaker 1:On the timeline yesterday.
Speaker 2:But good luck to them.
Speaker 1:Hopefully, they get it sorted out.
Speaker 2:I wish I wish we had, a lot more time here. But
Speaker 1:Yeah. Hopefully this was really good fun for you too. Thank you so much for stopping by.
Speaker 5:Yeah. No worries. I love what you guys are doing. I actually had the idea of doing a daily podcast ages ago. Oh, really?
Speaker 5:Classic example of ideas don't count, execution does, and you guys did it. I think it's great.
Speaker 2:Well, you're always welcome here.
Speaker 1:You're always welcome. Thanks so much.
Speaker 5:Thank you.
Speaker 1:We'll talk to you soon. Cheers, man. Bye. Meta is going deeper with Ray-Ban maker EssilorLuxottica. I cannot pronounce that first word, but people just call it Luxottica.
Speaker 1:So Meta is taking a minority stake in Luxottica to accelerate its smart glasses ambitions, investing $3,500,000,000 in the iconic Ray-Ban manufacturer. We were talking to David Senra about the history of this company. It is fascinating. I'm very excited for him to break it down for us a little bit more. Hopefully, he can come on the show and talk about it, because it's a
Speaker 2:fascinating The founder has a crazy story. I think he grew up in an orphanage. Yep. And they didn't call him the pit bull. They called him something else.
Speaker 2:But, yeah, he was an absolute savage.
Speaker 1:Yeah.
Speaker 2:Apparently, at one point, he wanted to buy Oakley. Yep. And the founder CEO of Oakley didn't wanna sell. So the CEO of Luxottica acquired the largest retailer of Oakleys, just pulled them off the shelves, and basically started selling knockoff Oakleys even though they were trademarked. And then eventually the Oakley CEO came around and said, okay, I'll sell, you're cratering my revenue.
Speaker 1:Yeah. You're just gonna tell
Speaker 2:Let's do a deal. Wow. So, absolute dog. We'll have to have Senra break it down.
Speaker 1:What do you make of this idea that, like, you know, when Apple makes a device, they redefine and very much standardize that particular market. So when they come out with watches, there are a number of styles of watch. There's the dress watch, the sports watch, the steel sports watch. There's the dive watch.
Speaker 1:There's the, you know, Casio style. There's a whole bunch of different styles. Right? Apple comes in and just says there's only one style, the Apple watch, and they become the number one Apple style.
Speaker 2:Yeah. And they give you some variance in the band.
Speaker 1:In the band, little stuff here. They were doing partnerships. I think they did an Hermès band for a while. They've done a couple other things, but it's been mostly Apple's design language on your wrist. Mhmm.
Speaker 1:Whereas with the Meta Ray-Bans, and now the Meta Oakleys, they're saying: you like the look of Ray-Bans, we're just putting our technology into the style you like. We're not going to try and create a new iconic style that says Meta, the way AirPods say Apple. Yeah. They're very, very different strategies.
Speaker 1:And and so it feels like
Speaker 2:Well, so I think this is strategic. This doesn't mean that Meta can't develop their own styles in time. Yeah. But I think it's very smart to say, hey, we don't need to innovate on aesthetics and the silhouettes, right? There's classic silhouettes. The Ray-Ban silhouette is Lindy. These Oakley silhouettes are very Lindy.
Speaker 1:Yeah. And they're different markets. The Ray-Ban wearer is
Speaker 2:different than that. Luxottica has, I think, Garrett Leight and, like, a bunch of other brands under it. So they're basically saying, through this we can deliver. Luxottica has brands for every demo that Meta could possibly want, right, as a $100,000,000,000, you know, company.
Speaker 2:And so I think it's very smart. I think Apple, like you said, will probably take a drastically different approach in terms of standardizing around something, and that will say something. But accessories like eyewear are just such a personal decision and such an expression of who somebody is that I think that Yeah. You wanna give people the max amount of optionality.
Speaker 1:Yeah. It's just interesting, because you could have said that about watches. Before the Apple Watch, you could have said that, well, you know, somebody who wears a dress watch wants a dress watch. Somebody who wants a steel sports watch wants a steel sports watch. Somebody who wants a G-Shock wants a G-Shock.
Speaker 1:You say G-Shock and you just immediately think, like, you know, special operations guy, or Jocko Willink listener. Like, it's a durable, rugged thing. You say Rolex, that's a different thing. Right? And Apple was able to standardize around it.
Speaker 1:And it's interesting that Meta hasn't been trying to do that. Instead, they're focusing on partnership here. It's just an uncommon strategy, but it seems to be working. There's another post in here. I don't know if we have it here, but
Speaker 2:I'm trying to think of a new like, the key thing Yeah. Is Apple's great at innovating at multiple layers, but generally, it's very hard to try to deliver hits in, like, two specific areas, like aesthetics and design Yeah. And then simultaneously in something that's basically a fashion product Yeah. And simultaneously deliver the technology. Yeah.
Speaker 2:So I don't know.
Speaker 1:Yeah. Jack Ray here says, after wearing Ray Ban Meta Wayfarer glasses for a few weeks, I feel kind of naked wearing regular sunglasses. I found three use cases that are hard to roll back. One, spontaneous photos of my kids when we're out and about. Any cool pose that has a half life of three seconds I can now capture instead of pulling out my phone.
Speaker 1:Two, optionality of music or hands-free phone calls without digging around for earbuds. And three, knowledge-seeking chat when I'm walking around, usually for simple factual things. That's exactly what I experienced when I was demoing the Ray Ban Meta Wayfarers. It turns out there are more questions I feel like asking when there's no friction. I'm very excited for multimodal and real-time translation use cases too.
Speaker 1:They're only gonna get better. But I think those three are maybe enough. And I think with a lot of these products, just having one killer use case, like, just replacing the headphones for hands-free phone calls or something. Like, if you can just become someone's daily solution for music, that's enough to just sell the product, and then sell them another one the next year when it upgrades a little bit, sell them another one, keep them as an active user, and roll that out for a long time.
Speaker 1:And then if they can do the other stuff, that's great too. Yeah. But they just need to really nail their single use case. And so, yeah, there's gonna be cool stuff, but it's fascinating to see them roll this out. And it's also interesting how behind the ball it feels like everyone else is now.
Speaker 1:Like, Google was talking about getting into this space. We saw some launches at IO. Haven't actually seen any of those in the wild. Haven't seen anyone really talking about those. Apple, it feels like this would be something that they could jump forward to with a stylish pair of eyeglasses with some basic functionality.
Speaker 1:Just take what's in the AirPods, take a camera, like, they could do something cool. But they're, like, just much slower than
Speaker 3:Yeah.
Speaker 2:The other thing with eyewear that's different, or that's gonna be, like, a new challenge for manufacturers, is that there are so many different situations where I might want to wear something like a Ray Ban or JMM silhouette one day What's a JMM silhouette? Jacques Marie Mage. Okay. Cool. But, you know, then that same afternoon I'm wearing Oakleys when I'm playing tennis or something like that.
Speaker 2:And so there's a lot more, like, swapping, and then obviously I
Speaker 1:mean, if you can keep the price low, you could maybe wind up selling people multiple pairs and Yep. Have an indoor pair, an outdoor pair. It's kind of inconvenient. I feel like there's gotta be a better solution than that, but I don't know. What are these?
Speaker 2:Yeah. No. The bifocals.
Speaker 1:Yeah. Yeah. Where they can, like, flip down. There are transition lenses, but those never fully work all the way, but then there's the flip-down ones. Clip-ons, there's all sorts of different solutions.
Speaker 1:The big news is that the third browser war has begun. Google stock has dropped on the news that OpenAI is planning to launch a Google Chrome competitor within just weeks. And this is very interesting timing because It's time to browse. Yeah. Time to browse.
Speaker 1:Certainly makes sense to become more deeply integrated into the user's life. Makes a ton of sense. There's a ton of benefits that come from having a web browser. What was interesting is, we can go into what OpenAI is talking about launching, but this news, the scoop, leaked the same day that Arvind from Perplexity announced that they're finally releasing their next big product after launching Perplexity: Comet, the browser that's designed to be your thought partner and assistant for every aspect of your digital life, work, and personal. And so Perplexity launched this on June 9.
Speaker 1:And then OpenAI, the scoop goes out via Reuters the same day. And so this feels very much like, let's not let Perplexity get a bunch of attention and drive a bunch of people to start daily driving Comet, the browser, because even though we're not ready to launch our competitor,
Speaker 2:we won't Arvind was on the show talking about Comet. But Yeah. Over a month ago, he said it was really important to the business. This was a big bet that they're making. Yeah.
Speaker 2:And I'm sure both companies are racing to be the first to launch. But Sure. Dia, the browser from The Browser Company, also launched out of or they're still in beta, but they launched, like, a month ago or something like that. So this is you know, you're not gonna be the first
Speaker 1:Oh, they launched a month ago with the Dia Browser? Because I saw Riley Brown also posted the cursor for web browser and Dia Browser. And I thought Dia Browser launched that same day, but I guess it had launched earlier.
Speaker 2:Yeah. So anybody that was an Arc user can download Dia today and chat with their tabs. Interestingly enough, Perplexity's browser and OpenAI's browser are both built on Chromium, the same open source project that underpins Google Chrome and Microsoft Edge.
Speaker 1:Yeah.
Speaker 2:So the cool thing here: that means that they're compatible with existing Chrome extensions.
Speaker 1:Oh, interesting. Okay. That's cool. Yeah. I wanna talk to more people who were, like, active in tech during the earlier browser wars.
Speaker 1:The first browser war was Netscape Navigator versus Microsoft Internet Explorer. This is in the mid nineties to early two thousands. Netscape was super dominant, and everyone loved Netscape. It was originally the Mosaic browser. This is the Marc Andreessen project.
Speaker 1:But Microsoft bundled Internet Explorer with Windows 95, and the distribution was so powerful that Internet Explorer actually wound up winning and became really, really dominant. But then there was this lawsuit that went back and forth. Basically, by the early two thousands, Internet Explorer had over 90% market share, but then they got kind of lazy and stagnant, apparently. And, I mean, I'm not exactly sure what happened, but there was a lot more competition. So Firefox, which was, I believe, like, a spinout of Netscape, or kind of, like, some of the same heritage there, began getting traction.
Speaker 1:And then Google Chrome launched in 2008 and leapfrogged everyone, and Google Chrome was really focused on, like, speed. It was the fastest browser. And they did a whole bunch of work to optimize JavaScript so the pages would just load faster and run better on pretty much every computer that you had. And then they had the open source project with Chromium, and so they were able to kind of standardize the entire industry. And so everyone's always been trying to draw analogies between, like, the browser wars and the LLM wars and, like, what's the role of open source in that?
Speaker 1:Like, is open source a strategy to wind up maintaining your dominance? How much does distribution matter? Like, Chrome was probably pretty easy to distribute because every single person was visiting Google every day searching. And so you just put this bar: hey, wanna switch to the faster browser?
Speaker 1:And people just do it because you have, basically, like, you know, billions of ad impressions on your product every day. It will be interesting to see if ChatGPT can get people to download their own browser on desktop. I mean, I'm using ChatGPT on desktop in Chrome all the time.
Speaker 2:Which ChatGPT model would you want to use as a default search engine?
Speaker 1:That's the hard part. Because I always run into this problem where it defaults to o three pro, but that takes ten minutes. And so then I have to go to four o. And then if I'm in an o three pro flow and I'm talking to o three pro and I let it cook for ten minutes, it gives me a great answer. But then I wanna just be like, okay.
Speaker 1:Just, like, clean this up a little bit or summarize this or do some bullet points. I want four o to do that, so I have to switch over. So I don't know. I would imagine I'd go four o as the default because I want speed. But even four o could probably be faster before it truly replaces search.
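The model-switching workflow described here is basically a routing problem: heavy reasoning model for fresh, complex prompts, fast model for quick follow-up edits. Here is a toy sketch in Python, where the model names, hint phrases, and thresholds are all invented for illustration; this is not how ChatGPT actually routes requests:

```python
# Hypothetical model router for the workflow described above. The model
# names and heuristics are illustrative assumptions, not a real API.
FAST_MODEL = "4o"
HEAVY_MODEL = "o3-pro"

# Phrases that suggest a lightweight follow-up edit rather than new work.
FOLLOW_UP_HINTS = ("summarize", "clean this up", "bullet points", "shorten")

def pick_model(prompt: str, is_follow_up: bool) -> str:
    """Route light follow-up edits to the fast model, everything else to the heavy one."""
    text = prompt.lower()
    if is_follow_up and any(hint in text for hint in FOLLOW_UP_HINTS):
        return FAST_MODEL
    if is_follow_up and len(text.split()) < 8:
        return FAST_MODEL  # short follow-ups rarely need deep reasoning
    return HEAVY_MODEL

print(pick_model("Design a migration plan for our database", False))  # o3-pro
print(pick_model("clean this up a little bit", True))                 # 4o
```

The point of the sketch is just that the "which model do I want" decision the hosts make by hand could be a cheap heuristic sitting in front of the models.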
Speaker 2:Google's very fast. They've spent a very long time being fast.
Speaker 1:Yeah. And I could imagine them doing a similar project too. I believe it was, like, the V8 JavaScript engine. They sent this team out to, I wanna say, like, Iceland or something. They basically sent, like, a bunch of engineers to, like, an off-site, and they were like, just go optimize JavaScript for, like, a month.
Speaker 1:Just go focus on this for, like, a month or months and come back when it's done. Like, you have no other responsibilities than just, like, optimizing this, like, compiler. And they came back with the V8 JavaScript engine. It created this whole, like, Node.js boom.
Speaker 1:People were running JavaScript on the server then. And I could see Google kinda doing something similar where they're like, okay. We have Gemini. It's good at looking stuff up. It's a good knowledge retrieval engine.
Speaker 1:Go figure out how to make it load all the tokens for the full response in a hundred milliseconds. Like Yeah. And that would be very, very cool. And I wonder if that's, like, a uniquely Google advantage. Tyler, you looked something up?
Speaker 4:Yeah. It was in it was in Denmark.
Speaker 1:Denmark. Okay. I was close. I was close. Yeah.
Speaker 1:I wasn't sure if it was Finland or Iceland or Denmark. Yeah. The interesting thing
Speaker 2:here, I'm realizing that tabs are definitely a light lock-in to a browser. It's not just the default. But if you have six to 10 tabs that you've just had open for a really long time and they're, like
Speaker 1:from a
Speaker 2:bunch of different things, and you can't exactly remember what they were if you had to list them all off. But, you know, I personally end up using tabs as, like, somewhat of a to-do list. Mhmm. And so if you're spinning up a new browser and you don't have your tabs, it's like, oh, do I want to just get rid of my tab stack? I have a bunch of tabs that have just stayed there for years, and it's basically like a mini operating system.
Speaker 2:Right? Yeah. Yeah. With, like, different apps. It might be Yep.
Speaker 2:A Google Sheet or something else. Yeah. I know. I know what you mean. So there's very real lock-in.
Speaker 2:I could bring all those tabs over, but I have to then Yeah. Log in to a bunch of different services. And so it's really, really hard to actually Yeah. Win here.
Speaker 1:I wonder if anyone's using you know, in Google Chrome, you can actually change the default search bar so that when you type in the search bar and you just type words, instead of Google searching it, it searches ChatGPT. Yeah. You can pass in a query parameter, and it can just do that. But I haven't heard of anyone actually doing that.
Speaker 1:And I used to be such a power user of Chrome. I used to have different code words, basically. So if I typed, like, I, space, and then a query, it would go to IMDb and search that specifically. So you could have Chrome, like, route to any specific search. That's cool.
Speaker 1:So you could press, like, y, space, and it would search Yelp or, you know, anything else. But I don't know if people are doing that with ChatGPT. I think people mostly just, like, control command T and then Yeah. Hang out in ChatGPT.
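The keyword routing described above works by substituting whatever you type after the keyword into a URL template (Chrome's site-search settings use `%s` as the placeholder). Here's a minimal sketch of that mechanic; the URL templates, and especially the ChatGPT query parameter, are assumptions for illustration, not verified endpoints:

```python
from urllib.parse import quote_plus

# URL templates in the style of Chrome's "Site search" settings, where
# %s stands in for the query. All three templates are assumptions made
# for illustration; check the real sites before relying on them.
SEARCH_TEMPLATES = {
    "i": "https://www.imdb.com/find/?q=%s",           # keyword "i" -> IMDb
    "y": "https://www.yelp.com/search?find_desc=%s",  # keyword "y" -> Yelp
    "c": "https://chatgpt.com/?q=%s",                 # assumed ChatGPT parameter
}

def build_url(keyword: str, query: str) -> str:
    """Mimic Chrome's keyword routing: substitute the escaped query into %s."""
    template = SEARCH_TEMPLATES[keyword]
    return template.replace("%s", quote_plus(query))

print(build_url("i", "the godfather"))
# -> https://www.imdb.com/find/?q=the+godfather
```

In Chrome itself you'd configure the same thing under Settings, then Search engine, then Site search, rather than writing any code; the snippet just makes the substitution explicit.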
Speaker 2:Well, we'll have to ask Chris in fifteen minutes to get an update on the browser wars, because he was an early investor in
Speaker 1:I know one of those tabs that you have pinned right now.
Speaker 2:What's that? Attio. Of
Speaker 1:course. Customer relationship magic. Attio is the AI-native CRM that builds, scales, and grows your company to the next level. You can get started for free.
Speaker 2:I've had Attio open for thousands of hours in a row at this point.
Speaker 1:Yeah. So SIGNAL kinda breaks it down with OpenAI launching the web browser. He says, this is the oldest play in tech. Find product-market fit with a single killer use case, then vertically integrate and horizontally expand until you control the interface layer itself: app, platform. Once you own the interface, you own the defaults.
Speaker 1:Welcome to the next generation of browser wars. Yeah. What's interesting is, like, Sam Altman at OpenAI, and just the fact that OpenAI is a company. Like, there is kind of a mandate to, like, vertically and horizontally integrate: figure out code, figure out research, figure out devices. Every company wants to do everything, but then sometimes they run up against barriers.
Speaker 1:Like, there was a time when Google was like, we want to win social networking, and we want to beat Facebook, and we're going to launch a direct Facebook competitor. And they did, and it didn't go well. And then they shelved it, and then they wound up producing trillions of dollars in market cap just doing the thing that they do great. And so the question is, like, the surface area of OpenAI, they have to explore. They have to experiment.
Speaker 1:It would be stupid not to see if they could get a browser and a device and a chip and a nuclear reactor and everything. And sand. Get the sand. Get everything. But there's no guarantee that they will win the entire vertical stack and that they will be the one company. Right?
Speaker 2:I think my question is, is OpenAI's browser gonna be an entirely new app, separate from their existing mobile app or their desktop app?
Speaker 1:I yeah. That is interesting.
Speaker 2:Because if they have to get people to redownload a separate app, then that's, like, an entirely you know, they have a good flywheel, you know, they
Speaker 1:have a bunch of impressions. They could evolve the apps that they already have
Speaker 2:in the store, too. I don't know if Perplexity is planning to release this as, like, a new standalone app or if it will be in the Perplexity mobile app. Yeah.
Speaker 1:Yeah. I mean, you know. I think Comet's, like, its own thing, because we were looking to download it and we needed a code. And you can't just get it if you're just on Perplexity. But I don't know.
Speaker 1:All I know is that you should go to fin.ai, the number one AI agent for customer service: number one in performance benchmarks, number one in competitive bake-offs, number one ranking on G2. So Arvind breaks down, like, his philosophy of Comet, the browser that he's dropping from Perplexity. He says, you can either keep waiting for connectors and MCP servers for bringing in context from third-party apps, or you can just download and use Comet and let the agent take care of browsing your tabs and pulling relevant info. It's a much cleaner way to make agents work. So that is interesting.
Speaker 1:So I wonder how much, like, puppeteering will be in this, because ChatGPT and OpenAI have Operator, which operates a Chromium, like a headless web browser, basically. But you can actually see it working and it's clicking things. And so there's also the value of, like, the training data. If you're getting people using all these websites, you have all this training data of, like, okay, they clicked on the blue button.
Speaker 1:They clicked on the green button. They saw this. They entered this. This is how they dealt with this form. This is how they dealt with that form. And so that feels like very, very valuable data if you can get it.
Speaker 1:So it's probably worth duking it out even if it takes a long time. For sure. I do wonder where else they will plug in. Like, Cluely operates at, like, a higher level of abstraction with, like, the screen scraping. And I wonder if we'll hear rumbles about either Perplexity or OpenAI thinking about, like, moving up the stack to that level.
Speaker 1:Not exactly sure.