Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 12 - 3 PM PST Monday - Friday. Available on X, Apple, Spotify, and YouTube.
You're watching TBPN. It is Friday, 03/28/2025. We are live from the Temple Of Technology, the Fortress Of Finance, the capital of capital. This show starts now. We got a great show for you today.
Speaker 1:We have a whole bunch of guests. Dwarkesh is joining. We got his book here. He sent me the PDF. I printed it out.
Speaker 2:Out of respect.
Speaker 1:Out of respect. But if he doesn't, it doesn't come
Speaker 2:on the show. Manuscript.
Speaker 1:If he bails on us, you heard it here first. I'm gonna tweet out the link to the PDF. Piracy is back, folks. Napster's being sold for 200. And it's a threat.
Speaker 2:Honestly, you could just upload it to ChatGPT. It's easier.
Speaker 3:No. People can just carry it.
Speaker 1:It's a fantastic book. I highly recommend going and picking it up. Stripe Press never misses. The folks over at Stripe are fantastic. We have another Stripe Press author coming on the show.
Speaker 1:We have a bunch of incredible thinkers and founders and investors lined up. A little bit of an AI-focused day, a little bit of focus on geoengineering today. Casey Handmer and Augustus Doricko will be coming in talking about solar and also cloud seeding.
Speaker 2:Augustus has been accused of being a part of the deep state.
Speaker 1:He's breaking his silence.
Speaker 2:And he's breaking his silence. Yes. With us. On TBPN.
Speaker 1:So we should have some fun with that, obviously. But it is kind of getting to a serious point with his drama where, like, I think that the tone needs to shift, and hopefully, we can be part of that. But first, let's start with a great post by Delian advocating that Trump should unpardon Trevor Milton because of how bad his video is. He says,
Speaker 2:Nothing else. It's just the video. Yeah. Which is so bad.
Speaker 1:Yeah. And so we're gonna go through it. Trevor Milton is the founder of Nikola, an electric vehicle company making zero-emissions trucks, who went to prison over a fraud case, and we'll break it all down. Delian says, my god. Can you be more narcissistic? Maybe take some responsibility for your fraudulent behavior.
Speaker 2:Yeah. It's not a good signal if Trump pardons you but doesn't post about it. Oh, yeah. Because if he was going to get positive attention from doing it, you'd think he would Totally. You know.
Speaker 2:Even if he knew he was gonna get some negative attention. Yep. Prison is a sad thing. Yep. But it's hard for me to see, sort of, whatever the argument is around freeing Trevor, given what seems to be
Speaker 1:Pretty clear cut. Pretty clear cut. Totally. Yeah. But let's read the Wall Street Journal article on it and get some facts, and then we'll give you our take.
Speaker 1:So Trump pardoned Nikola founder Trevor Milton. Milton was convicted of fraud in 2022 for statements about zero-emission trucks. Milton said in a video posted to social media Thursday that he received a call from Trump, who spoke about how much of an injustice this all was, done by the same offices that harassed and prosecuted him. The greatest comeback story in America is about to happen, added Milton. The White House on Friday confirmed that the pardon had been granted.
Speaker 1:A federal jury in Manhattan convicted Milton in 2022 on one count of securities fraud and two counts of wire fraud, which is kind of always the framework for these, anything that goes wrong in business. It's all either securities fraud or wire fraud, because you're making statements that lead to someone buying or selling a security that are false. During the trial, prosecutors portrayed Milton as a con man who duped investors, including in podcasts and on social media, about the company's sales and the capabilities of its vehicles. In one instance, prosecutors said he created a video of what appeared to be a truck driving normally
Speaker 2:This is the video.
Speaker 1:Famous video of the truck going down the hill. But, really, it was an inoperable prototype rolling down a hill. And that video always got me because it just feels like such a skill issue. Like, can't you just build one prototype truck? Can't you retrofit a truck?
Speaker 2:Even if it was a flat road.
Speaker 1:I mean, there are kids that are, like, engine-swapping Miatas right now, and you can't, like, engine-swap a truck and put an electric powertrain in there? Like, it doesn't seem that hard to get something functional. The hard part is actually the scaled manufacturing. Like, that's what Tesla Yeah. Is good at, and that's why Tesla's so impressive.
Speaker 1:Like, the first Tesla was impressive too. But even with the first Tesla, it was built on the Lotus Elise, and they just chopped it up and extended it and stuff. The criticism was very much like, that's just the Lotus Elise with an electric powertrain. And it was impressive, and it's great for all the investors that saw that, but it was never misrepresented. Elon was never like, oh, yeah.
Speaker 1:This design, like, you might think it looks like a Lotus Elise, but we designed it. No. He just said, like, yeah. We used the body from a Lotus Elise. We paid them.
Speaker 1:They're happy. We're happy. And that's why the Tesla Roadster, the first one, looks like a Lotus Elise. Yeah. It's fine.
Speaker 1:And that's not fraud. That's just actually doing the important parts of your business and outsourcing the ones that aren't important.
Speaker 2:Calder in the chat says, put him back in jail and don't let him out until he's built a truck.
Speaker 1:Yes. This is our thesis. Yeah. Yes. I wanna see breakthrough biology writing from Elizabeth Holmes.
Speaker 1:I wanna see a truly trustless ZK proof network hacked together by SBF in prison. He's moving prisons, by the way. Not a full story, but just an interesting fact. His morning routine now includes getting woken up at 3AM
Speaker 4:Yeah. The real
Speaker 1:By the guards.
Speaker 2:The real sort of, you know, crypto protocol is one that you could build and launch from within jail. Right? It's just pure code.
Speaker 1:You sort
Speaker 2:of get it onto one of the library computers, and then it's just out in the world.
Speaker 1:I mean, SBF was telling that story. Like, I'm the Jane Street guy. I'm this genius. I'm this, like, amazing, like, you know, hacker. And it's like, okay.
Speaker 1:Show us. Show us. Build the chain. Build
Speaker 5:something that chain.
Speaker 1:Build a chain. Write some code. What's the language they all use? Rust or something? I forget. I don't know.
Speaker 1:It's a hard programming language. Like, programming for crypto is not easy. And so if you can drop some amazing code, some decentralized network or something, he would win a lot of fans back instead of it just being like, ah, you know, it's all talk.
Speaker 2:Yeah. And here's something. Campaign finance records show that Milton and his wife donated more than $1,800,000 to a Trump fundraising committee in October.
Speaker 1:Wait. Like from prison? I guess. You can do anything from prison. That's what we've learned.
Speaker 1:Going to prison, you can do podcasts. You can tweet, apparently. You can make political donations. Like, what can't you do in prison?
Speaker 2:Yeah. You would think that from within prison
Speaker 1:Yeah. There
Speaker 2:should be some guardrails around sort of this type of thing. But
Speaker 1:Yeah. Oh, I mean, also, Milton's lawyer is Brad Bondi, who's the brother of attorney general Pam Bondi. And so he clearly cozied up to the Trump administration as things got worse for him. Unclear if he was connected to the Trump administration before all of this happened. But if you're running a fraud, it probably pays to, you know, start making some friends in Washington.
Speaker 1:You know? Also, if you're not running a fraud, you probably need a lobbying group anyway.
Speaker 2:The lesson is have friends in Washington.
Speaker 1:Yeah. I guess no matter what. Yeah. And so Milton maintained his innocence, and he said he acted in good faith, accusing prosecutors of cherry-picking his public statements to build their case. And of course, there is an argument that this company could have been successful and was just kind of faking it till you make it.
Speaker 1:But there's always this question of, like, when is it too fake? When are you faking it too much? And there's always this fine line. I think a decent amount of faking is actually acceptable. Like, it's fine to put up a CGI render of what you want to build and just say, hey, this is my goal.
Speaker 1:I'd say Not here today.
Speaker 2:But it's so different for it to be CGI or even a hype video versus a demo. Totally. Faked demos are wrong.
Speaker 5:But even
Speaker 1:Google has gotten in trouble
Speaker 2:for fake demos. And this is why, any time you see these
Speaker 4:Yeah.
Speaker 2:Hard tech companies that won't be named on X, creating what sort of blurs the line between hype video and demo.
Speaker 1:Yep.
Speaker 2:And I believe that that is wrong. Right? Totally. You gotta kind of, like, draw the line around: is this advertising Yep.
Speaker 2:For a future state or is this the current state? Yep. And the reason for that is if you're putting something out that you're sort of positioning as a demo, but it's not real, and then you're using that to raise money, that's wrong.
Speaker 1:Yeah. Yeah. It is interesting. Like, Google had an AI demo where you could just take a user camera to kind of communicate with Gemini and use voice. And it was a very cool demo, but people dug into it and found out that part of it was sped up.
Speaker 1:And so they had done some light editing, so it wasn't a completely unedited video. They hadn't disclosed that, but they also hadn't said that it wasn't edited. And so it was more just, like, a bad day in PR world, but you could imagine that if the stock had sold off a lot, there could be a securities fraud case. But we'd still be, like, 25 steps away from the founder CEO being in jail. The Nikola case is much more, you know, like, just way more red flags. And so, Milton is 42 years old.
Speaker 1:He founded Nikola in his basement in 2015. He took it public in 2020 at a valuation of $3,300,000,000. He resigned from the company later that year after a short seller's report alleged he made misrepresentations about the status of the company's vehicles and the production of hydrogen fuel needed to run them. So it wasn't even an electric vehicle. It was hydrogen, as I remember.
Speaker 1:Nikola, whose market value briefly eclipsed that of automaker Ford before the fraud case against Milton, filed for bankruptcy.
Speaker 2:Everybody has put Ford squarely in their targets. Right? Adcock clearly wanted to raise at a bigger valuation than Ford. Otherwise, why would you price it
Speaker 1:right above the market.
Speaker 2:A million dollars above it or whatever.
Speaker 1:But Ford is lindy. I think we're gonna be driving Ford GTs in the year March. They're just not going away. Yeah. It's too great of a company.
Speaker 1:Henry low key goated. Yeah. Hundred percent. Was about to say.
Speaker 2:Many people said, why don't you just attach a motor to the horse? Yeah. Robotic. Why don't you make robotic horses?
Speaker 1:Yeah. Yeah. Teleoperated horses.
Speaker 2:He said we have to wait a while till, you know, robotics can get there, but for now, we're gonna make cars.
Speaker 1:But hopefully, the horse makes a comeback. Hopefully, you know, all this humanoid buzz with with Figure and Tesla Optimus. We know we want the robotic horse first.
Speaker 2:Yeah.
Speaker 1:I wanna be galloping to work on the self-driving horse. The horses are self-driving. You can literally train a horse: hey, go home. And you can just, like, kick him on the butt, and he'll just take you home. Yep.
Speaker 1:Who needs a Waymo? Yeah. Who needs a Waymo?
Speaker 2:That's
Speaker 1:right. Let's bring horses back. Milton had sold roughly $400,000,000 of stock in Nikola. Wow. That's a lot of secondary.
Speaker 1:He delisted his shares from the Nasdaq a few days ago. Two weeks ago, federal prosecutors asked the judge for Milton's criminal case to order him to pay back nearly $661,000,000 to shareholders. The SEC sued Milton in federal court in July of twenty twenty one, alleging he committed civil securities fraud. That case, which was on hold during the criminal proceedings, remains active, court records show. The SEC declined comment.
Speaker 1:So this is interesting. Like, he got 400,000,000. They're going to ask him to pay back 661,000,000. But in the meantime, he's spending that money and making campaign donations. What happens?
Speaker 1:I mean, he will just be personally bankrupt. Right? So, basically, he has 400,000,000 that he can just spend, and he can just give away. And if there's no way to claw it back like, if he buys a house and then he goes into bankruptcy, they'll force him to sell the house, and then the house, or the proceeds from the house, will go to the shareholders. But if he donates
Speaker 2:it He's out now. It just makes sense. He's probably got another thirty years of his career in hard tech. I could easily see him running something back.
Speaker 1:Come back. You know? I mean, I'd love to see it. We've always maintained that it's always sad when people go to jail, and we always believe in restorative justice, and we certainly hope that he gets back on the mechanical horse.
Speaker 2:Yeah. The video, to Delian's point, it just makes it feel like, you know, maybe he doesn't actually feel bad at all. Yeah. You know?
Speaker 1:It's odd. Odd. Anyway, Trump has issued a raft of pardons since taking office and has pledged to crack down on what he has described as the weaponization of the justice system. Trump was twice indicted by the justice department after his first term and separately convicted of falsifying records in the state court of New York. Both federal cases have been dismissed, and Trump is appealing the state conviction.
Speaker 1:Earlier this week, Trump also pardoned Devon Archer, the former Hunter Biden business partner who gave congressional testimony about their business dealings. Milton, Archer, and Silk Road founder Ross Ulbricht, whom Trump also pardoned, were all prosecuted by the US attorney's office in Manhattan. The justice department has also asked a judge to drop a bribery case against New York City mayor Eric Adams. So oddly, like, you know, a couple of Democrats getting pardoned, a couple of Republicans getting pardoned, a couple of, seems like, apolitical people who have maybe just, you know, asked Yeah. Politely getting pardoned.
Speaker 1:I think the future, if stuff gets really, really political, will just be: when you get sentenced, you're in prison when your party's out of the office. And then you get pardoned as soon as your guy gets into the White House, and you get unpardoned on day one of the new administration.
Speaker 2:And you're just in and out of prison for
Speaker 1:four years. Exactly. Yeah. I think that's the future.
Speaker 2:Yeah. And you're just donating to your team while you're in, you know, while Exactly. In prison to try to make sure, like, your team wins. Otherwise, it could be an eight year stint.
Speaker 1:Schrodinger's prisoner, both in and out of the clink.
Speaker 2:Dark.
Speaker 1:But, you know, a lot of people made money. A lot of people lost money on Nikola. If you were buying puts, you probably did pretty well.
Speaker 2:Yeah. Those short sellers.
Speaker 1:Yeah. Yeah. They probably did pretty well.
Speaker 2:We should actually have them on.
Speaker 1:I wonder do you think it was Hindenburg that did it? I wonder who did the actual Nikola short seller report. But, anyway, if you're interested in going long, going short, you gotta do it on Public. Investing for those who take it seriously. Multi-asset investing, industry-leading yields. They're trusted by millions.
Speaker 1:Folks, go to public.com. Sign up.
Speaker 2:The best in the business.
Speaker 1:There's gonna be a new stock that you can trade on public.
Speaker 2:Wait. Before that. Nikola in 2020
Speaker 4:Yeah.
Speaker 2:Had some coverage from Hindenburg Research, the firm that you want to get coverage from.
Speaker 4:Nope.
Speaker 2:How to parlay an ocean of lies into a partnership with the largest auto OEM in America. That doesn't sound good. Sounds a little rough. So Hindenburg's kind of retired now. Yeah. Anyways
Speaker 1:Out of the game.
Speaker 2:Out of the game.
Speaker 1:But maybe there'll be a new viral short seller that emerges soon. I think it's Shkreli. Yeah. He's doing it. Yeah.
Speaker 1:He's been doing
Speaker 6:pretty well.
Speaker 2:Doing that.
Speaker 4:Yeah.
Speaker 2:He's like, he actually realized you don't need to do a report.
Speaker 3:Yeah. You just need you just
Speaker 2:need to post.
Speaker 1:Yeah. Just post.
Speaker 2:Just post.
Speaker 1:But, yeah, he's been good. Anyway, new stock coming to the market. Everyone's talking about CoreWeave's IPO, and we're gonna go through The Information's report on the winners in the CoreWeave IPO and why VCs missed out. We had Tane on the show, and he had a great quote. He said, CoreWeave is the web three to AI pivot done right.
Speaker 1:It was originally a crypto mining company, and now it's an AI
Speaker 5:The best
Speaker 2:data center. Potentially the best to ever do it.
Speaker 1:The best to ever do it. Truly. Truly. And it looks like it's gonna be a fantastic outcome. It looks like it's gonna price right around $40, up from 5¢ a share in 2019.
Speaker 1:Not too bad. And so we'll take you through who invested, when they invested, and how much money they made. Let's kick it off with The Information, reporting from Cory Weinberg. He says CoreWeave's IPO, which started trading today, had a rocky path to the market but delivered a windfall to Magnetar Capital.
Speaker 1:The investor bet big on the AI data center startup, but presciently protected itself from the risk of big losses. If you wanna pull up what CoreWeave is doing on Public, I'd love to know, like, how is it trading right now? Because this is the IPO day.
Speaker 3:Today is the IPO
Speaker 1:day. The IPO.
Speaker 2:I didn't even
Speaker 1:Boom. Wow. We got IPO news, folks.
Speaker 2:I'm pulling it up right now.
Speaker 1:And I'm gonna need some soundboard from you. I'm gonna need some public.com data from you. I'm gonna need a lot.
Speaker 2:Crazy. I didn't actually realize
Speaker 1:Yeah. Well, there's always this trickle of, like, the s one comes out, and then there's gossip articles. Will they go out? Will they not go out? Will they scrap it?
Speaker 1:Well, they went out. It started trading today. It had a rocky path to market, but delivered a windfall to Magnetar Capital. The investor bet big on the AI data center startup. But they also protected themselves from the risk of big losses.
Speaker 1:It seems like they sold at a certain point in the secondary market. So Magnetar invested in CoreWeave when others wouldn't touch it, and that helped it get favorable terms that turned $850,000,000 in equity into $4,300,000,000, folks. Let's hear it based on the $40 per share IPO price. That's a good return. And they weren't exposed to significant risk.
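The multiples being tossed around here pencil out as simple arithmetic; a quick back-of-envelope sketch using the rounded figures from the article:

```python
# Back-of-envelope math on the returns mentioned above (figures rounded).
ipo_price = 40.00   # ~$40 per share at IPO
seed_price = 0.05   # 5 cents a share back in 2019

print(ipo_price / seed_price)  # 800.0 -> an 800x for the earliest shares

magnetar_in = 850_000_000      # Magnetar's equity invested
magnetar_out = 4_300_000_000   # value at the $40-per-share IPO price

print(round(magnetar_out / magnetar_in, 2))  # 5.06 -> ~5x on the equity
```

So the headline "$850 million into $4.3 billion" is roughly a 5x, while the earliest 5¢ shares are the 800x story; the structured terms mattered as much as the raw multiple, since the downside was protected.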
Speaker 1:Magnetar is the biggest of a handful of winners in the offering, which priced Thursday night at a lower price than expected. CoreWeave has long been controversial. It borrowed billions to build data centers that serve the huge demands of artificial intelligence. That debt really scared off Silicon Valley's big venture firms, and none of them will be celebrating one of the biggest tech listings in recent years. A lot of VCs have a blind spot, said Nic Carter, former guest of the show, a venture capitalist focused on crypto startups.
Speaker 1:And he made an early investment personally in CoreWeave. We had him on to talk about that. We'll have to reshare that clip today. Yeah. The conventional wisdom as a VC is you don't want to invest in capital-intensive businesses.
Speaker 1:This is something that we've seen again and again. They're harder to underwrite.
Speaker 2:They did quite the party round. Oh, yeah. If you look at some of the friends and family Yep. There's a Russian poker player
Speaker 1:Yep.
Speaker 2:An Italian music producer
Speaker 1:I love it.
Speaker 2:A New York based plastic surgeon and a Sesame Street voice actor.
Speaker 1:That's exactly what
Speaker 2:you wanna Which is kind of what you wanna optimize for in the early stages. You never know. One day you might need some as a founder Yeah. Founder might need some plastic surgery. The next day you need a voice actor.
Speaker 1:Yep. People talk about it's diversity of thought. Exactly. You want diversity of thought around the table.
Speaker 2:Yeah. The next day you might say, you might be down to your last hundred grand. You need to take it to Vegas.
Speaker 1:Yep.
Speaker 2:Multiply it. Get your Russian
Speaker 1:poker player. He'll help you there.
Speaker 2:If you can get them into the country, that's helpful.
Speaker 1:You need to make a jingle.
Speaker 2:Yeah. The jingle. The Italian music producer. The big one that's rising
Speaker 1:A CoreWeave jingle in an Italian voice. What's their tagline? CoreWeave. Big data centers. Highly leveraged.
Speaker 2:Yeah. I don't know if this is accurate, but their debt to equity ratio right now is 800 over 811 x.
Speaker 1:Brannin McBee. Then on their business plan: the first business of CoreWeave, Ethereum mining, became worthless after Ethereum holders essentially cut miners out of the process in the fall of twenty twenty two. That, of course, was the Merge, when Ethereum went to proof of stake as opposed to proof of work. I had mentally written off the investment, Carter said. I thought they were going out of business.
Speaker 1:Instead, the founders repurposed chips and computing equipment they used to mine Ethereum into other uses like graphics rendering and AI. And, of course, Crusoe Energy also did something similar. They were using peaker plants and stranded energy from oil and gas extraction, just the extra energy that's sitting stranded. You can't use it to power homes. You drop a server farm right where the energy is super cheap or maybe even free, and you start mining crypto.
Speaker 1:And then over time, you start, training AI models. So let's go into Magnatar Capital.
Speaker 2:Magnetar, which turned a $50,000,000 loan to CoreWeave four years ago Wow. Into a multibillion-dollar stake, is set to be the largest beneficiary from the IPO. So it sounds like they were
Speaker 1:They know what they're doing.
Speaker 2:Yeah. They knew. I mean, absolutely insane. Former Citadel traders founded Magnetar. It is CoreWeave's largest investor and has continued to back the company through its most recent private fundraising. Yeah.
Speaker 2:Its name is mentioned 57 times in the company's IPO prospectus. Wow. More than twice as often as the CEO.
Speaker 1:I mean, you look at this chart. This is a convertible note in 2020 at $2 a share. They did the series b at $5.58 a share. Then they also came in on the series c at $38.95 a share, and then they did a tender offer at $47 a share. And so they have, again and again and again, been involved in this business.
Speaker 2:So this was cool. The managing partner of Magnetar, David Snyderman, says, sometimes the stars just align. I think we were the first firm to get comfortable lending against that asset called high performance compute.
Speaker 1:Mhmm.
Speaker 2:He's referring to CoreWeave's inventory of high-end chips produced by NVIDIA to power AI products.
Speaker 1:Some of Magnetar's investments came in the form of loans that turned into equity. This is the convertible note facility.
Speaker 2:Yeah. If it was a convertible note, it's sort of confusing to just say this was a loan.
Speaker 1:I think they did all of the above. Yeah. I think they did do just
Speaker 2:straight up Yes. They're getting warrants.
Speaker 1:With warrants on top, and they converted those because it went well. It also vacuumed up shares from executives selling privately in secondary offerings, but it forged an agreement preventing founders from selling more than 20% of their shares in the year following the company's IPO. Remember, Tane was telling us that there was a lot of secondary action by the founding team. I think they had sold hundreds of millions of dollars. Yep.
Speaker 1:And so they've created this agreement that, hey, the founders won't sell more than 20% of their stakes. But because the company's so big, it's still, like, a huge amount of money and a huge amount of liquidity in, you know, a frothy AI market. Like, this is an important IPO, because if it does well and it becomes a really strong business, it's just incredibly bullish for the entire industry. But at the same time, like, there is a lot of debt.
Speaker 1:It's an early-stage company. Like, they've only been in business for a couple years.
Speaker 2:Yep. So just hope that the angel investor that's the Sesame Street voice actor was able to get some liquidity.
Speaker 1:And I think they probably did fantastically. I mean, Nic Carter did great. So Yeah. Why would the Sesame Street actor not do well? Yeah.
Speaker 1:I wonder
Speaker 2:wonder if they were able to prevent a lot of dilution just by being able to fund a lot of this growth through the debt offerings.
Speaker 1:Yeah. I mean, it seems like it was kind of fifty-fifty, because they did take a lot of dilution from the penny warrants. Right? Yeah. That's why Magnetar is mentioned so many times in the IPO prospectus.
Speaker 1:But probably less than if they were just trying to say, hey, we need $50,000,000 to buy a bunch of chips and build a data center, and we have no revenue, and so it's a $10,000,000 pre-money and we're gonna give up 80% of the business or something. Like, what is the alternative if you're not doing debt at this scale? As CoreWeave grew, Magnetar lent another $730,000,000
Speaker 1:collateralized against CoreWeave's contracts to sell computing power to Microsoft and NVIDIA. CoreWeave has paid $66,000,000 in interest so far on those loans, so less than 10%, and these mature by the end of the decade.
Speaker 1:Magnetar was the second biggest investor in CoreWeave's fundraising last year, putting in another 350,000,000, documents show. Magnetar and tech investment firm Coatue were able to hedge their bets. They got a put, which allows them to sell back their stakes to CoreWeave if the stock falls in the two years after the IPO. That could become an important investment term, because investors in the round paid $38.95 a share, meaning they stand to lose money on the deal if CoreWeave stock goes down much after the $40 a share IPO. Of course, they might not exercise that immediately.
Speaker 1:They might wait it out and see where the stock is in a year or two. But they do have that put option for a while. If investors were to exercise the right, it would be very costly for CoreWeave, the company said in its filing. Spokespeople for Coatue and Magnetar declined to comment. They're not talking because they're in a quiet period, of course.
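To make the mechanics concrete, here's a minimal sketch of how a put like that protects the round's investors. The filing's exact strike isn't quoted here, so we assume it lets them sell back at their $38.95 entry price:

```python
# Sketch of the downside protection from a put like the one described.
# Assumption (not from the filing): the put lets investors sell shares
# back to CoreWeave at their $38.95 entry price if the stock falls in
# the two years after the IPO.

ENTRY = 38.95  # price paid per share in last year's private round

def exit_value(market_price: float, has_put: bool) -> float:
    """Per-share value an investor can realize at exit."""
    if has_put and market_price < ENTRY:
        return ENTRY       # put the shares back to the company instead
    return market_price    # otherwise just sell at the market

# If the stock drops to $25, a plain holder eats the loss...
print(round(exit_value(25.0, has_put=False) - ENTRY, 2))  # -13.95
# ...while the put holder is made whole, and CoreWeave foots the bill.
print(round(exit_value(25.0, has_put=True) - ENTRY, 2))   # 0.0
```

The point being: the round's worst case is roughly break-even while the put lasts, and the cost of exercise lands on CoreWeave's balance sheet, which is why the filing flags it as very costly.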
Speaker 1:Several funds are underwater on their investments in CoreWeave at the IPO price. Investment firms Jane Street, Fidelity, Macquarie, BlackRock, Neuberger Berman, and others bought $650,000,000 in stock last November from early investors. Moment of silence for
Speaker 5:this.
Speaker 2:Already brought to you by public.com.
Speaker 1:They bought at $47 a share, 15% above the IPO price. Magnetar, best known for winning bets on dodgy mortgage loans before the financial crisis. These are former Citadel guys, after all. They know what they're doing. It is now deep in the AI investing frenzy.
Speaker 1:It's expected to write one of the largest checks to OpenAI, also a CoreWeave customer, in OpenAI's ten billion dollar funding round this month, Bloomberg reported. Magnetar has launched a VC arm to leverage its relationship with CoreWeave by investing in new AI startups in exchange for access to the NVIDIA chips CoreWeave owns. And CoreWeave put 50,000,000 in that. And we've talked about these, like, VC funds attached to startups before. You said, you know, maybe it doesn't make sense for Perplexity.
Speaker 1:But in CoreWeave's case, like, they have the underlying data center, you know, they have something that's, like, very complementary to an AI company, an AI app, an application-layer company. And so Yeah. Maybe it makes a little bit more sense in this case. And if you're building an AI application company and training or inference is one of your largest expense line items, maybe it's great to actually have that relationship really tight and take some equity.
Speaker 2:I think it makes sense, you know, once you're maybe past the $10,000,000,000 stage.
Speaker 1:Exactly. It's a little
Speaker 2:bit. At least a billion dollars of revenue, then do your VC fund.
Speaker 1:Yeah. Like, it seems like CoreWeave has product market fit.
Speaker 2:Right? Yeah.
Speaker 1:Yeah. It's a pretty basic business.
Speaker 2:It's Yep.
Speaker 1:They build a data center, and then they sell access to that data center. Yeah. And they're doing great. And interestingly, Dylan Patel just put out on SemiAnalysis the GPU cloud ClusterMAX rating system, how to rent GPUs in The US. I'm gonna try and refresh this.
Speaker 1:But I believe
Speaker 2:The ClusterMAX.
Speaker 1:I believe he gave CoreWeave a very strong rating. He actually rated a few. Let me see if I can go to SemiAnalysis. No. You want to keep reading?
Speaker 2:The article's mostly over. I can cover a little bit on NVIDIA.
Speaker 1:Yeah. So SemiAnalysis rated every single neocloud: CoreWeave, Crusoe, Together.ai, Oracle, Azure, AWS, Lambda, Google Cloud. And interestingly, CoreWeave got the highest ranking. They're the only one that won the SemiAnalysis platinum tier. In the gold tier was Crusoe, Together.ai, Nebius, Lepton.ai, Oracle, and Azure.
Speaker 1:But AWS got silver, and Google Cloud got bronze, which is something that most people wouldn't expect from a hyperscaler. But, you know, we've seen with Google Cloud, they just had to shell out $32,000,000,000 for Wiz, upping their security. And, you know, even though they are known as the greatest, you know, hyperscaler in many ways, there's a little bit of, like, maybe they were behind the ball on specifically running AI workloads in the cloud for other companies. Whereas CoreWeave built their cloud specifically with AI in mind for the last few years, kind of had a fresh start, maybe a greenfield project, and did very well, earning high marks from Dylan Patel over at SemiAnalysis. Anyway, there's another interesting line in here about Les Wexner.
Speaker 1:He's in here. He owns 3% of CoreWeave. He invested $1,000,000 through a trust, and that's worth something like $800,000,000 now. Of course, Les Wexner
Speaker 2:is Couldn't happen to a nicer guy.
Speaker 1:A controversial figure. He's been linked to Jeffrey Epstein, and, also in here, he used to own Victoria's Secret. And so again, not your traditional AI data center investor, but someone who got on the cap table early and rode that stake. Yeah. Think it was roughly a
Speaker 2:700 x. So
Speaker 1:Banger. Banger.
Speaker 2:Never thought we'd be hitting the size gong
Speaker 1:for Size is size.
Speaker 2:Size is size, though.
Speaker 1:Size is size. NVIDIA is also in the deal. Probably Silicon Valley's biggest winner from
Speaker 2:the offering. Get a win, to
Speaker 1:be honest. Finally. Finally. Yeah. Jensen needed it.
Speaker 1:He's They've
Speaker 2:been sitting on the sidelines.
Speaker 1:Rough, yeah. They really did need, like, an AI narrative, something to really pump the stock. So they're, of course, the dominant chipmaker. They have played a crucial supporting role as one of CoreWeave's largest investors.
Speaker 1:It owns about 3% of the company fully diluted after having bought about a hundred million dollars worth of shares in the series B in early twenty twenty three, which was when that ChatGPT moment was just taking off and scaling laws were just kind of becoming popular.
Speaker 2:Strange that Nvidia and Les Wexner have the same ownership.
Speaker 1:That is crazy. CoreWeave.
Speaker 2:Yes. Narrative violation. So Nvidia owns about
Speaker 1:$700,000,000 at the IPO. It's a
Speaker 2:pretty secret Jeffrey Epstein guy. Yeah. And then Has the same ownership levels
Speaker 1:As Nvidia. As one of the greatest companies of all time. And CoreWeave.
Speaker 2:One of the highest profile
Speaker 1:I mean, it really is just bizarre. Like, they're doing amazing stuff. You can see from Dylan Patel's analysis, like, the product that they've created is clearly top notch. Yeah. I believe that Dylan Patel's very objective in his analysis, and he's not, because of the nature of his business, he, you know
Speaker 2:There are some lists in this industry that you can more or less pay to get on.
Speaker 1:Yes. But I believe that
Speaker 2:Dylan Patel's is.
Speaker 1:Exactly. And so
Speaker 2:Well Yeah. It's 11:30.
Speaker 1:It is.
Speaker 2:And we have a special guest.
Speaker 1:We do.
Speaker 2:And I believe they're in the waiting room.
Speaker 1:Let's bring him in.
Speaker 3:What's up, guys?
Speaker 1:Hey. How you doing?
Speaker 2:There he is. What's going on?
Speaker 3:Good to see you, guys.
Speaker 2:It's great to have
Speaker 1:you. Good to see you. I heard you've been doing spaced repetition to practice for Uh-huh. Interviews. So I've been doing it.
Speaker 1:It says your name is Dwarkesh Patel. You have a podcast and you interviewed somebody named Mark who owns a website called facebook.com. Can you tell me about that?
Speaker 3:Yeah. In fact, I made an entire Anki deck just for you guys covering
Speaker 2:Really? No
Speaker 1:No way.
Speaker 3:We're we're honored.
Speaker 1:That's good. How's the book launch been going? How's the press tour? How are you doing today?
Speaker 3:It's been going good. So I have a book called The Scaling Era Mhmm. And it compiles the interviews I've been doing about AI. And, you know, people like, as you mentioned, Mark and Demis and Dario, the heads of the AI labs, but also researchers and engineers and philosophers and economists. And it's been really interesting because AI is one of those topics where there's so many fractal questions you could ask: what is its impact gonna be, how are we gonna train it, how do we even think about a superintelligence. So I've been dealing with a lot of different kinds of questions, which has made it interesting.
Speaker 1:It's
Speaker 2:great. How contentious was the was the sort of book process? Did you know you wanted to go with with Stripe? I imagine you could have had your pick of the litter in terms of like legacy publishers that promised you all sorts of things. But you happen to own, you know, sort of your own distribution.
Speaker 2:So maybe it was just about picking the right sort of underlying partner for it.
Speaker 3:It honestly was never a matter of picking a publisher. The main question was whether I should do a book in the first place. And so some folks at Stripe Press reached out, and if I was gonna do a book, it would be with them, because, as you know, their reputation precedes them. And so then it was just, like, deciding, do I want to do the book or not? And I think, in retrospect, it was the right call.
Speaker 3:I think I'm really delighted with how it turned out.
Speaker 1:I wanna talk about acceleration. Are you feeling the acceleration? Like, mathematically, we are not accelerating GDP yet. Although, technically, we are today. I think GDP ticked up just a little bit, which means we're technically accelerating.
Speaker 3:But It must be your podcast.
Speaker 1:Yes. Hopefully. Yeah. And, hopefully, we're responsible. But energy use is not accelerating.
Speaker 1:Even some of the benchmarks are kind of saturating. We're not seeing acceleration curves. We're seeing solid growth. But at the same time, it feels like we're on the precipice of acceleration. But Yep.
Speaker 1:What does it mean to feel the acceleration for you?
Speaker 3:I I I think it's a really good question because we have these models which we think are smart.
Speaker 4:Mhmm.
Speaker 3:And, as you say, we haven't seen them even automate the things. Like, when we're having a conversation, we'll be like, oh, the call center workers, they should be really worried. And Yeah. They still got their jobs.
Speaker 3:Right?
Speaker 2:So what's going
Speaker 3:on? As you know, people have been talking about what we need to make the models cheaper. When DeepSeek came out, they were like, oh, it's gonna be Jevons paradox,
Speaker 5:and Yep.
Speaker 3:We'll be using them way more now that they're cheaper. I think the real bottleneck is just we gotta make them smarter. Mhmm. Like, they're already so cheap. It's like 2¢ per million tokens or something.
Speaker 3:Ridiculous. I think the real bottleneck for me using them more is not their price, but them being more useful, being able to take over more parts of the economy.
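As an editorial aside, the cheapness point above can be sanity checked with back-of-envelope arithmetic. This sketch assumes the 2¢-per-million-tokens figure quoted in the conversation; real prices vary widely by model and provider.

```python
# Back-of-envelope token cost, using the rate quoted in the conversation
# ($0.02 per million tokens -- the speaker's figure, not a published price list).
PRICE_PER_MILLION_TOKENS = 0.02  # dollars

def cost(tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens at the quoted rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Even a heavy 100,000-token day costs fractions of a cent:
daily = cost(100_000)   # $0.002
yearly = daily * 365    # roughly $0.73 a year
print(f"daily: ${daily:.4f}, yearly: ${yearly:.2f}")
```

Which is the speaker's point: at anything near that rate, price stops being the binding constraint long before usefulness does.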
Speaker 1:Yeah. Do you think that intelligence is all we need? Andrej Karpathy was talking about the importance of agency. Yeah. Talked to other people about maybe it's all, like, what makes humans effective?
Speaker 1:It's not just intelligence. It's also agency. It's also coordination, friendliness, networking. Tyler Cowen's talked about, like, do we even need to map the different parts of, like, the skill tree that humans have, like charisma, wisdom? Like, underrated is, like, these like, the AIs are getting more intelligent, but they're already, like, maxed out on wisdom.
Speaker 1:Right? But do we need to think about a different taxonomy here, or do we just need to max out intelligence and everything else will come?
Speaker 3:No. I think you're absolutely right. I think you need a lot more skills. There has been this trend in AI where, whenever there's a big breakthrough, we think we've automated a large part of what intelligence is. And in fact, in retrospect, it's clear that it was only the beginning.
Speaker 3:So the big example here is when Deep Blue came out and beat Garry Kasparov at chess. Yep. People thought that this was, like, a big breakthrough in intelligence in general, because we thought that what chess required was general intelligence. And Yeah.
Speaker 5:You might have
Speaker 3:heard this concept of AI-complete problems, where if you solve this problem, then you've solved intelligence. So people said that about self driving. The Turing test was supposed to be AI-complete. Yep. We've gone through all of these subcomponents of intelligence, and afterwards we realized there's actually still more left to it.
Speaker 3:The thing that's sort of underrated is not even agency per se, although that's a part of it. I think the thing that's underrated is we humans have this global hive mind where the reason we can make iPhones and we can make buildings and whatever is not just intelligence and also not just agency. It's the fact that there's so much specialization. There's so much capital deepening. People are just, like, doing things, trying different ideas.
Speaker 3:AIs need to be smarter in order to do that. They need to have more agency in order to do that. But once they can, if you have millions of AIs running around trying different things, that's when we get the real acceleration, and you'll feel it in your blood.
Speaker 1:When you say, like, millions of AIs... you've said that you think there will be billions of AIs running around. Like, what does that actually mean? Like, how can we quantize it? Are we just talking, like, ChatGPT DAUs? Are we talking about individual threads? Because you can inference multiple threads on a single
Speaker 5:chip. Right.
Speaker 1:Like, how are you thinking about it? It sounds more concrete when you say there will be billions of AIs, but is that really just, like, there will be a hive mind that is equivalent to billions of people? Or are we talking about, like, maybe each model is, like, one entity, but then there's subthreads? How do you think about that concept of, like, billions of AIs?
Speaker 3:Honestly, I don't think anybody knows. I think it'll just depend on how the tech tree shakes out. Sure. I have heard, like, these wild ideas in some of these interviews, where one person, Ajeya Cotra, mentioned this idea of the blob.
Speaker 3:And the blob is, right now, you know, if you have an institution or organization or company, it's really hard for the person at the center to have that much awareness of what's happening in the company, to control it to any great extent. Xi Jinping has the same 10 to the 15 flops in his brain as any other Chinese person or any other person in general. Yeah. And in the future, you can imagine that, look, the thing at the center just has way more compute, and it's not clear whether to think about it as, like, more copies of AI Xi Jinping or AI Sundar Pichai or something.
Speaker 3:Yeah. But you could just have this, like, huge blob that's constantly learning more things. It's, like, writing every single press release the company releases. It's reading every single pull request. It's answering every single customer response.
Speaker 3:I don't know if that at all helped answer the question.
Speaker 1:Yeah. It's just a very weird question of, like, how this will actually play out. Like, do we need to recreate the concept of, like, an individual brain and then copy paste it a billion times? Or do we just need one really big brain?
Speaker 1:It's unclear to me. I don't know. Jordy, what do you got?
Speaker 2:How did you feel on Wednesday? This sort of, like, Ghibli moment. A lot of people were saying, oh, I can't even go on X. It's just all slop. And my takeaway was: this is not slop.
Speaker 2:This is beautiful. This is, like, actually the most beautiful the timeline has ever looked. And the most powerful thing about that moment: it was the first time that I felt that the entire world could get consistently perfect outputs without any sort of, like, prompt engineering, basically just one-shotting these outputs in this sort of very scaled way. So to me, I get very excited about it, because, like, having any human be able to create beautiful images out of text is fantastic. Yes.
Speaker 2:And I think we should be excited about that. But how did you kind of react to some people saying like, you know, oh, this is bad or like, I'm gonna log off forever. Right? You saw people that were just like, okay, like, I'm gonna delete my account and just leave x now. It's over.
Speaker 3:How many times have those people said that? Yeah. Exactly. I think that's a very zero sum... some people were saying things like, oh, you're, like, eating up this, you know, fossil fuel, which is, like, our affection for Ghibli, somehow, by making these things. I think it's just a very zero sum view of the world, where there can be a limited amount of beauty.
Speaker 3:There can be a limited amount of joy. Mhmm. I just don't think that. And can I be honest? Like, the thing that I was really feeling when all these Ghibli images were coming out: I became more convinced of our glorious transhumanist future, where, like, look, you're getting a glimpse, just from these, like, early images, of how cool and beautiful the things AI makes, or helps us make, will be.
Speaker 3:Just imagine this scaled up like a hundred x, a thousand x integrated into all our senses, maybe even into our minds, integrated into the way we relate with the people we care about and so forth. Yeah. I'm just like, the future could be really beautiful. I Yeah.
Speaker 4:I'm wearing like
Speaker 1:VR goggles, and it's making everyone beautiful. It's really rose colored glasses, right?
Speaker 3:Yeah. Yeah. Yeah. I know you To begin with. Yeah.
Speaker 2:Yeah. How do you feel... so Manus AI is apparently doing, like, a road show in The United States right now, sort of raising from American VCs potentially. I saw some people pushing back on the timeline saying, like, you know, bad luck to any American VC that does that. And the criticism would be, like, you know, we're in this sort of AI cold war; American venture capital dollars shouldn't be funding companies that are potentially competing with US AI labs or application layer companies. What's your broad take on this sort of cross border investment in AI?
Speaker 3:Yeah. I was in China a few months earlier, and it was really striking to me how dismayed the venture capital system there felt, and the tech ecosystem generally. Because after the 2021 crackdowns, people just, like, really pulled back. And it sounds like after the DeepSeek moment, that sort of changed, at least in AI, because now the state, you know, the city funds and whatever, are more willing to pitch in. Yeah.
Speaker 3:And how should people react to this? I'm of two minds, because, one, I do believe that there could be an intelligence explosion. You really wanna be ahead of that, and you don't wanna help them get ahead on that. So I think, like, the export controls and whatever are wise. As for Manus, it seems like it's in the middle ground here, where I wouldn't want to just generally try to harm China by tariffing batteries or cars or something.
Speaker 3:This is an application of AI, and it's complementary to American AI foundation labs, because they're using the Claude model. Right?
Speaker 5:So Yeah.
Speaker 3:I honestly, I don't have a strong thing. But what do you guys think?
Speaker 2:I'm exactly in the same position as you. I don't have a... you know, I did find it a little bit weird that some American venture capitalists were just, like, cheering on DeepSeek, just, like, blanket statements like, you know, open source is good, this is good, when it felt like the way the launch was rolled out and announced was done in a way to potentially harm Yeah. American financial markets. But
Speaker 1:I mean, my take has been that, just on a pure investment finance level, investing in a Chinese company can be difficult, because it's hard to get your money back at a certain point; the money just gets kind of stranded there. And then, depending on who's in charge of America at the time, it could be very onerous to bring that money back. But wait, I wanna stay on China real quick. I thought... I mean, fantastic piece.
Speaker 1:I don't even know what you call it, a video essay? But one of the things that really stuck out to me, as a creator like yourself, was the lack of a Chinese Joe Rogan, basically.
Speaker 2:Yeah.
Speaker 1:And I was wondering, have you thought about that more? Have you unpacked that more? Like, you would almost expect that, even if all the crazy censorship is true, why is there no power law winner, no Joe Rogan who just spouts propaganda constantly? Like, what is going on that's driving the lack of these, like, tail, like, you know, country renowned, country famous people?
Speaker 3:Yeah. I feel like you guys might actually have a good perspective on this, because somebody might have said, why doesn't tech have their, like, Joe Rogan equivalent? And you guys started your podcast network. And I think somebody could have said, before you guys started your podcast network, why doesn't something like this exist already? Mhmm.
Speaker 3:Honestly, I don't speak Chinese, and this is just secondhand stuff I heard in China. I feel, like, reluctant to make grand conclusions about Chinese culture based on, like, why don't they have a Joe Rogan, and then it's because, like, Chinese culture is like this or something. Yeah. The sense I got was that whatever people wanna consume, whether it's young people or not, is often more focused on practical matters. And if you listen to Joe Rogan, it's very much, like, let's just shoot the shit about whatever.
Speaker 3:I get the sense that that is just not that interesting, at least to the people I met.
Speaker 1:Yeah. Yeah. It's less practical stuff. I wonder if there's also an effect where some of the first social networks there accelerated more quickly to these highly diffuse, algorithm-driven TikTok feeds that allow for smaller micro celebrities, essentially. Whereas in America, we've been building up this celebrity culture for so long that we have more of a power law dynamic.
Speaker 1:I don't know. I don't really have a thesis on it that's super built out, but it is fascinating.
Speaker 3:Yeah. That's that's interesting. That's interesting. Have have you been there?
Speaker 1:I was there once on a layover, but Jordy did live there for a while.
Speaker 3:I Oh, really?
Speaker 2:Yeah. In 2016, I worked at a... I went to Fudan, which
Speaker 3:is It it was a little hedge fund called High Flyer. Yeah.
Speaker 2:No big deal. No. I just studied abroad there. So I was there for a semester, and I worked out of China Accelerator, which is, like, a China startup accelerator. So a very, very interesting experience that I don't
Speaker 1:I mean, the scale is crazy. I was in Guangzhou even just for a day, and the scale of the buildings there, it really is remarkable. Like, you need to see it in person, because the pictures kind of compress everything. You don't really understand until you're there.
Speaker 2:The one thing that I found weirdly fascinating: I don't know if it was the iPhone camera at that time Mhmm. But I found it very difficult to get high quality images Mhmm. Because it was so polluted that, like, something about, like, your own eyes would be able to kind of, like, you know, I don't use the contour app, but I would take a picture and be like, that's not what it looks like. And I realized over time that certain areas were so heavily polluted that Interesting. The iPhone camera would just kind of bug out. But It's like your nightmare.
Speaker 2:I had a question for you. You know, something from this week that I thought was funny: we were all kind of holding our breath for the next, like, ChatGPT moment, and then it was also, you know, ChatGPT, with their image generation product. Do you think we should be holding our breath for another company to experience a moment like that, where it's just full, you know, complete takeover of the mind share? Because mind share is just so important right now.
Speaker 2:The benchmarks come out and everybody in our corner of the internet is, like, hyper fixated on it. But the average consumer... one thing I thought was fascinating is, you know, John and I had a couple of Ghibli posts each that sort of broke containment, and a lot of people were quoting it and saying, all right, like, tell me what this app is, everybody, like, what's the joke? Like, they just still didn't know, like, ChatGPT or anything like that. But Yeah. I think it would be amazing for the industry if another company could have Yeah.
Speaker 2:You know, a moment back that big. Do you see something like that happening, you know, this year? Or have we sort of reached Yeah,
Speaker 1:even, like, the agent stuff, like flight booking. You could see that there's some application that would go real broad. Or is there any, like, next milestone that you're waiting for?
Speaker 3:If somebody did get a reliable agent to work, I think that would have a similar break-the-Internet Mhmm. In a way that you personally could use. And you could just, like, log in. And I think it'll probably be one of the foundational lab companies. Mhmm. People have been trying for years to build agents, and they just haven't worked.
Speaker 3:And it makes me think that that's a fundamental limitation of the current models. And so it'll just be the company that is building a future model that is geared towards computer use and so forth. That is what I'd expect it to be. I mean, I was thinking the other day, when Sam got fired and people were posting on Twitter, oh, there's something... what did Ilya see? Q Star.
Speaker 1:Q Star. Yeah.
Speaker 3:Yeah. Yeah. I I don't think that's why he got fired. Yep. But it is notable that in retrospect, if you were following the Q Star rumors, you actually would have been in a good position to anticipate
Speaker 5:Totally. That, you know, that there would
Speaker 3:be this, like, reasoning breakthrough. That's kinda what they were talking about at the time. Yep. Similarly, GPT-4.5 and Grok 3 being not that much better: if you were following Twitter six months ago, you would have seen, oh, pre-training might plateau, and we'll have to go with inference scaling or something.
Speaker 1:Yep.
Speaker 3:So maybe my update has been that you can sort of know what's gonna happen. I mean, I remember at the time I was just like, these idiots on Twitter don't know what they're talking about. It's just a rumor mill. And in retrospect, I'm like, yeah, kinda. I mean, you know, you take it with a grain of salt, but they kind of had the big picture.
Speaker 3:There's, like, no secrets.
Speaker 1:Yeah. There was always this funny dynamic where people were criticizing Sam for launching ChatGPT without telling the board. But then at the same time, people were criticizing Sam for, like, being nontechnical and not driving the product forward. And I was like, no matter what you think of Sam, like, those two things cannot be true simultaneously. Like, you have to pick a side here.
Speaker 1:You can't criticize them for both, like, not innovating and also innovating too fast. Yeah. Yeah. But, anyway, I wanna talk about, I think, like, a flip side of the p(doom) argument that I've been kicking around. Basically, like, we've seen these, like, accelerating trends before: nuclear energy, energy too cheap to meter. And we have hit stagnation.
Speaker 1:Like, it is possible that something can just break in our society, and all these different economic and political forces can align to just say, hey, you know what? Nuclear energy is not going to double every couple years. And it didn't. And I'm wondering, like, what is your p(stagnation)? Like, just your probability that something happens, maybe people freak out, maybe there's just, you know, one world government or something, but we actually see AI stall for a significant amount of time, like fifty years.
Speaker 1:There's no intelligence explosion purely for stagnation reasons. Do you think that's a possibility?
Speaker 3:Like, 20%.
Speaker 4:Oh.
Speaker 3:I think there is a dynamic I talked about earlier where, in the past, we have underestimated how much it takes to make a coherent intelligence that has agency and so forth. Right? That could be part of it. Another is that there is no intelligence explosion. So, sorry.
Speaker 3:I mean, the most important thing here is, look. We can keep increasing the compute that we're putting into these systems for maybe the next five, ten years, because compute is just growing at this ridiculous rate, where in three years we're gonna have 10 x the amount of global AI compute that we have right now. But at some point... right now, we're spending 2% of GDP on compute and data centers and stuff like that. You can't just keep, like, 10 x ing that Yep. Forever. Yep.
Speaker 3:So if somehow this whole deep learning paradigm is wrong and we just, like, totally missed the boat somehow, then I could see it happening. That's why I give it 20%. Otherwise, if we do get AGI, I'm of the opinion that it would just be so hard to contain it. Like, it's an incredibly powerful technology. Even if there's no intelligence explosion, even if it doesn't help you make an ASI or something, just AGI alone Sure. Would just make the economy explode and Yeah.
Speaker 3:All kinds of crazy shit.
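As an editorial aside, the "you can't keep 10x-ing forever" argument above compounds fast. This sketch uses only the speaker's numbers (2% of GDP today, 10x more compute spend every three years), plus an assumed ordinary GDP growth rate of 3% per year, purely for illustration.

```python
# Illustrative only: compounding the figures from the conversation.
# Assumptions: compute spend starts at 2% of GDP (speaker's figure),
# spend grows 10x every 3 years (speaker's figure), GDP grows ~3%/year.
share_of_gdp = 0.02
gdp = 1.0                      # normalize world GDP to 1
spend = share_of_gdp * gdp
years = 0
while spend / gdp < 1.0:       # until compute spend would exceed all of GDP
    years += 3
    spend *= 10                # one 10x step in compute spend
    gdp *= 1.03 ** 3           # three years of ordinary GDP growth
print(f"compute spend passes total GDP after ~{years} years")
```

Under these assumptions the wall arrives after just two 10x steps, about six years, which is the force of the point: the current scaling rate is physically sustainable only for a handful of doublings.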
Speaker 1:I mean, on the... like, there's a little bit of a force of deceleration, like the GDP question. But also, I've had this idea that, no matter how intelligent you are, you can't break the laws of physics. At a certain point, you need to, like, get the sand out of the ground and turn it into silicon. And at a certain point, just moving the sand around fast enough, even at light speed, you're not 10 x ing every two years. And so it feels like there could be a slowing down even as we're having the robots do basically everything. It's like the robots are still maxed out by physics.
Speaker 1:I don't know.
Speaker 3:I was thinking about this this morning, actually. And the intuition I was thinking about is: since the seventeen fifties, we've had 2% economic growth in the world. Before that, it was, like, a tenth of that. Right? Point 2%.
Speaker 3:If you were around in the fifteen hundreds, or the year 1000, and somebody said there'd be, like, 2% growth, I think you might, given your reference class, have been like, look. It just takes a long time to learn how to artificially select crops and how to build, like, new structures and aqueducts and whatever. That is a process that takes a while. So why do you think you're just gonna be, like Yeah. Growing at, like, 2% a year?
Speaker 3:And in retrospect, it is, like, really weird. You look at the last hundred years of history. Yeah. We're discovering all these new things in physics and chemistry and so forth. The last fifty years, we started with a transistor, and now we're talking on this magical screen.
Speaker 3:Yep. And physics didn't bottleneck that. I think, like, you get another 10 x, and I don't see any in-principle reason why, for the next 10 x, physics just would not allow the robots to move fast enough.
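As an editorial aside, the growth-rate contrast above can be made concrete with doubling times. This is just rule-of-thumb math on the 2% and 0.2% figures mentioned in the conversation, nothing more.

```python
import math

# Doubling time implied by a steady annual growth rate:
# t = ln(2) / ln(1 + r)
def doubling_time(rate: float) -> float:
    """Years for an economy growing at `rate` per year to double in size."""
    return math.log(2) / math.log(1 + rate)

modern = doubling_time(0.02)      # ~35 years at 2%/year (post-1750s rate)
premodern = doubling_time(0.002)  # ~347 years at 0.2%/year (pre-modern rate)
print(f"2%/yr doubles in ~{modern:.0f} years; 0.2%/yr in ~{premodern:.0f} years")
```

So the post-1750s regime doubles output roughly ten times faster than the pre-modern one, which is why, in hindsight, another regime change looks less strange than it would have from inside the old one.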
Speaker 1:So, yeah, I mean, certainly on the GDP question. I think the energy question's, like, maybe a little bit murkier, but then there's probably other ways to optimize and still get those GDP lifts even with energy growing at, like, a more reasonable, less explosive rate. So I think I agree with you there.
Speaker 2:I'm sure you've talked about this in other interviews, and with some of the individuals that are leading initiatives at these companies. But what's your broad take on Apple's position and how they've been approaching everything in AI? It's sort of like, I've seen they've led with, like, Genmoji almost as much as they've led with, like, everything, all the potential of what you would want out of, sort of, an AI assistant. But how do you think these companies like Apple and Google figure out product development and, like, proper distribution of these products? Because I feel like that's been the big critique. Yeah.
Speaker 2:Just like they have every every possible advantage, such talented, you know, team members, and it has to be so frustrating And
Speaker 1:it should be a sustaining advantage or sustaining innovation. If you're looking at the innovator's dilemma framework, and yet it feels like it might wind up being disruptive. I don't know.
Speaker 3:Yeah. They're not AGI-pilled enough. You know? Like, if you treat it like another feature... well, I mean, even if you treat it as another feature, it's, like, mysterious why Siri doesn't work on my phone. But, yeah, it's, like, more people, basically.
Speaker 3:And if you take that seriously, you're not just gonna be like, oh, and the twenty fifth department in our complex is about making Siri better at speaking or something. No. It's like, this is the future.
Speaker 1:Yeah.
Speaker 2:And is that then the gigabull case for Safe Superintelligence? Meaning, like, none of the features or, like, consumer applications really matter at all today, and you shouldn't even release them, and you should just sort of accelerate towards the end goal that enables all the other goals.
Speaker 3:That's an interesting point. I think maybe somewhere in between, where, like, if you didn't release ChatGPT, you wouldn't have been able to just, like, know that this is a feature people really wanted and that they would get a lot of use out of, as compared to the other things people were using GPT-3.5 to do. And I wonder if other features of AGI will be similar, where if you don't deploy it to a bunch of engineers on Cursor, you just, like, won't know what would actually make something a good coding bot. The counterargument to this, and I think what the SSI people would say, is that they actually are deploying, but they're deploying towards the one thing they care about, which is accelerating AI research. And they don't need to do that externally.
Speaker 3:They can just do that internally. And so the basic question is, can you get this, like, closed loop where you build the AIs which are helping you accelerate AI research all the way to superintelligence? I'm, like, fifty-fifty on that question. But that other 50% is, like, a big deal.
Speaker 1:Yeah. You mentioned AGI-pilled. Is there a difference between AGI-pilled and ASI-pilled? And why do OpenAI cofounders seem incapable of starting anything but a foundation model company? Yeah.
Speaker 1:I always wondered, like, I just want one of them to be like, yeah, actually, I'm starting a travel company. But seems to
Speaker 4:be like
Speaker 3:traveling a lot after their jobs are gone.
Speaker 1:They only know one thing. Like, they're all one-trick ponies. I mean, I love them all, but it's just funny that none of them started anything else.
Speaker 2:It's the only thing that matters, maybe.
Speaker 4:Yeah.
Speaker 3:I mean, I think you're right. I think some people... I wouldn't even put it as ASI, because even if you don't believe in this, like, god-like intelligence that's gonna control the world. I'm not sure I believe it either.
Speaker 7:Mhmm.
Speaker 3:I think there's AGI pill, then there's, like, transformative AI pill where you say, look. Even if they're just like humans, if they have the advantages that AIs will just intrinsically have because of the fact that they're digital, which is the fact that they can be copied with all of their brain all of their knowledge. Right? So think of the most skilled engineer in your company, like Jeff Dean, Aurelius Eskiver. You can, like, copy that person with all their tasks and knowledge and everything.
Speaker 3:You can merge different copies. You can scale and distill AGIs. Those advantages alone, and the fact that there will be billions of copies as we increase the amount of compute in the world, that alone is enough for transformation in the sense of going from, you know, like, what we were like before the industrial revolution to the industrial revolution pace of growth. And so I think somebody can be AGI pilled in a sense and say, like, yeah, I expect, like, human level intelligence to emerge in the next ten years, but they still don't take that seriously, as in, like, okay, well, what does that imply about what is happening through the economy? Does that just mean, like, oh, you've got a smart personal assistant, or does it mean, no, we're in a very different growth regime?
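For a sense of what "a very different growth regime" means quantitatively, here is a rough sketch; the growth rates are assumed round figures for illustration, not numbers from the conversation:

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years for an economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Assumed round figures: ~0.05%/yr before the industrial revolution,
# ~2.5%/yr after it, ~25%/yr in a hypothetical AI-driven regime.
regimes = [
    ("pre-industrial", 0.0005),
    ("industrial", 0.025),
    ("hypothetical AI-driven", 0.25),
]
for label, rate in regimes:
    print(f"{label}: economy doubles every ~{doubling_time_years(rate):,.0f} years")
```

At these illustrative rates, doubling times fall from over a millennium to decades to a few years, which is the sense in which the regime change, not the personal assistant, is the claim.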
Speaker 1:Last question.
Speaker 2:Two last questions.
Speaker 5:Go for it.
Speaker 2:One question. So are there any people in the book that you feel like, in the fullness of time, are sort of very underhyped or not getting enough attention? People that are unsung heroes that maybe don't post a lot on X today. But when we look back, you know, fifteen years from now, we'll be like Good question. You know, those were the people that were doing it. Because there's this weird phenomenon right now where it's like if you're just loud on the internet, like you just, like, you know Yeah.
Speaker 2:You suck up mind share and attention. And maybe there's somebody, like, a building over that's doing more impactful work, or really at the forefront, that isn't posting at all because they're actually onto something.
Speaker 3:That's a really good question. I think a lot of the people I've interviewed have subsequently or, you know, at the same time, like, are well known. Right? So even if you're not a lab CEO, if you're, like, a Leopold or you're a Sholto or Trenton, people know who you are on Twitter as well. Mhmm.
Speaker 3:The person who I think might be underrated still is an interview I did that we only released in the book. So we have two interviews that we kept for the book. One of them is Ajeya Cotra, and she is somebody who has been doing, since the twenty tens, these really interesting analyses of how much compute did evolution spend in total over the billions of years evolution has been going, like, how do we model that as a computational, like, pathfinding exercise, and using that as an upper bound on how long it will take to build AGI? And then, like, how much compute does a human brain use? How much, like, time do we spend learning as kids?
Speaker 3:And how much compute is that in total compared to how long it takes to train these models? What does that teach us about how much better these models could get given the overhang? That has informed, I think, a lot of oh, shit. Sorry. This is the wrong answer.
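The style of estimate described here is order-of-magnitude arithmetic. A minimal sketch, with all the anchor numbers assumed round figures for illustration (they are not taken from the book or the interview):

```python
import math

SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumed: a human brain does on the order of 1e15 operations per second.
BRAIN_OPS_PER_SEC = 1e15

# "Lifetime" anchor: compute spent learning over ~30 years of growing up.
lifetime_compute = BRAIN_OPS_PER_SEC * 30 * SECONDS_PER_YEAR

# "Evolution" anchor (upper bound): assumed ~1e9 years of animal life with
# an aggregate ~1e16 ops/sec of neural compute running the whole time.
evolution_compute = 1e16 * 1e9 * SECONDS_PER_YEAR

print(f"lifetime anchor:  ~10^{math.log10(lifetime_compute):.0f} ops")
print(f"evolution anchor: ~10^{math.log10(evolution_compute):.0f} ops")
```

Comparing anchors like these to the compute used to train frontier models is what yields the "overhang" estimates mentioned above; the point is the spread of orders of magnitude, not the particular assumed constants.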
Speaker 3:Although Ajeya is excellent, and she's also underrated. The one who is, like, also super underrated is Carl Shulman.
Speaker 1:Okay.
Speaker 3:And I don't know if his name rings a bell to you. This man is like you would not believe the amount of ideas that are out there in the AI ecosystem from, like, the software-only singularity, basically, intelligence explosion kind of stuff, to these, like, transformative AI takes and modeling out the economics of this new growth regime, to, like, so much more. It's like this one guy, it all came from him. He doesn't like to write that much, so he, like, tells other people his ideas. I had him on my podcast, and we put his stuff in there.
Speaker 3:And he just has all these galaxy brain takes. Like, one of them was he looked at the research on what changed between chimpanzee brains and human brains. And he's like, oh, there's a bunch of structural similarities. It's just that the human brain is bigger. So this lends credence to the scaling hypothesis.
Speaker 3:All kinds of galaxy brain stuff.
Speaker 1:That's amazing.
Speaker 2:I have one more question and we'll let you go. By the way, X is down. I think you broke it or we broke it together. We broke it. We need to yeah, we're still live on this.
Speaker 1:The Scaling Era.
Speaker 2:Yeah. But how do you think about, more on, like, the business side because I just think everybody's been fascinated with your journey. When you started your podcast, everybody would have said, like, there's enough podcasts. Like, we don't need, you know, we just don't need more of them. That clearly is not true.
Speaker 2:There's plenty of white space and your growth is proof of that. But how do you think about value capture and what you're doing? Because I'm sure you've had people come to you and say, like, hey, look, just come keep doing the show, but we're gonna give you $50,000,000 of, sort of, like, shares, and you don't
Speaker 3:nobody's coming to me with that.
Speaker 2:Well, well, they should I think. Yeah. But yeah, like how how do you think about kind of I think there's this general fear in the tech community right now where it's like, this is the last two to three years where you can like, you know, accumulate wealth and then like Yeah. It's over. And I don't believe that that's true.
Speaker 2:But how do you sort of balance like, you know, you're doing something you love all day long, which is just like talking to interesting people and like thinking about the future and you you know, humanity's potential and technology and all this stuff. But you know, how do you balance, how do you balance all that? That fear and and you know, wanting to capture value from your work. But also wanting to not be conflicted, right? And being able to just be sort of this independent
Speaker 5:actor. Yep.
Speaker 3:I actually am very curious about the answer for you guys, because even though the sort of network you're starting now, it just started with a much bigger bang than my podcast started out with. So I assume you actually got a bunch of these kinds of offers, right? As you're like, oh, this is like new and exciting.
Speaker 2:Oh yeah, we care a lot about, you know, being like Switzerland, but specifically from, like, the investor side, right? Like, long term, we wanna have any sort of investor be able to come on the show and talk about what they're doing. That's right. And I would imagine the same thing for you. Like, you don't wanna be, you know, so tied in to one, you know, foundation model company that Yep.
Speaker 2:You can't talk about the incredible things that someone else is doing. Right?
Speaker 3:Yep. Yep. I have had, like, different podcast networks or whatever reach out to me in the past, and I've seriously considered them. In one case, I was, like, close to saying yes. And in retrospect it was like this was many, many years ago, before, like, the podcast had grown that much at all.
Speaker 3:And it was like, we'll edit the show for you and we'll produce it for you. And all we ask is 50% of the revenue you earn into the future.
Speaker 1:That's a rough trade.
Speaker 2:50% of your lifetime earnings. 50%. Yeah. But
Speaker 3:I had a couple of friends who were like, dude, it's working. Just do it yourself. And I'm glad. Because also another thing that might have influenced your decision as well is talent is so key. Yeah.
Speaker 3:Talent in the sense of, I think it really matters to have one, or in your case two, people who are like, we care. This is our vision, and we are going to instill it, rather than this is an institution
Speaker 1:Yep.
Speaker 3:Where I'm the face of it. But secondly, what I was going for is, like, talent as in your editors, the other people on your team. For me, I've just been super delighted with the people I get to work with. And the care and attention to detail they have just would not be replicated with, here's a team of editors at this podcast network. As opposed to the people I've sought out Yeah.
Speaker 3:And I love working with them, and I give them detailed feedback and they give me detailed feedback, and, you know, yeah. That's also what makes it special.
Speaker 2:Yeah. On to the specifics around do you think the general techno, you know, techno-capitalist fear of, you know, basically, like I think a lot of, you know, 22 year olds right now are coming into their sort of careers and they're saying, like, well, everybody's gonna be paper clipped in a few years. Like, you've got to sort of, like, create value and capture it now so that you're okay in this sort of, like, AGI future. I do feel like that's maybe a sort of common fear throughout history. Right?
Speaker 2:Where, like, people have this sort of impending sense of doom sometimes. But do you think that's, like what would you say to somebody that had that sense?
Speaker 3:I think the way to model out the next few years from a career trajectory is you'll just have a hundred x extra leverage, but you wanna be in a position where you can use that leverage. There's a common thing, and I'm sure you experience this now as well, where early in your career, you're like, I've got a bunch of time, but I don't know what to work on. And when you're further along, you're like, I have no time, but there's, like, a thousand different ideas I have for things that would be super valuable or I
Speaker 2:think would go really well
Speaker 3:or something. So I think what you should do is just get to a point where, in whatever you think is interesting or care about, you're at the frontier and can see what the problem space actually looks like. If you care about AI, I would really recommend moving to And then just start working on problems and, you know, use the leverage that AI gives you. And if we end up getting paper clipped, it's like, look. What's the point of you, personally, not doing anything and just worrying about that?
Speaker 3:In the 80% of worlds or 90% of worlds where we don't get paper clipped, you'll get to say you worked on something really cool at a time that was really important in the history of humanity.
Speaker 1:That's great.
Speaker 2:Well said.
Speaker 1:Thanks for coming on. This was fantastic.
Speaker 3:This is fun, guys.
Speaker 1:We gotta have you back. This is really, really awesome.
Speaker 4:Let's make it a
Speaker 2:regular thing.
Speaker 3:I enjoyed this. And super ambitious of you guys. I mean, like Thank you so much. Well, your previous thing was a couple months in, but this is like a couple weeks out and you're already killing it. That's awesome.
Speaker 1:Thank you so much. We really appreciate you coming on.
Speaker 2:And for for for the record we're not building a network.
Speaker 1:Yeah. It's actually a head fake.
Speaker 2:It's a head fake.
Speaker 1:It's just a show. No.
Speaker 2:It's just a show forever. That's right. The show will never come to you and say, give us half for editing, you know.
Speaker 1:You can't.
Speaker 3:You want at least 75%.
Speaker 2:Yeah. Exactly. Great having you on. Great having you
Speaker 1:on, Dwarkesh.
Speaker 2:Awesome. Excited to get, you know, everybody to get access to
Speaker 1:the book. Yeah. Yeah. Go get the book. I I printed it out.
Speaker 1:But you should buy the actual thing.
Speaker 2:Yeah. So
Speaker 3:Thank you.
Speaker 1:Go get it from Stripe Press. The Scaling Era is here.
Speaker 3:Cheers. Thank you guys so much for having me on.
Speaker 2:Bye. See you. Bye.
Speaker 1:And we got Casey Handmer coming in. We kept him waiting. Sorry about that, Casey. Hopefully, you're still here with us, because we want to hear about Terraform. Have you seen his office?
Speaker 1:He builds from a castle in Burbank. It's like Amazing. Some guy built these crazy castles. It's a fascinating company. Boom.
Speaker 1:Hey, Casey. How are you doing?
Speaker 7:Hello. Very well. Can you hear me?
Speaker 1:Yeah. We can hear you great. Are you in the castle today, or are you somewhere else? Where are
Speaker 4:you?
Speaker 7:I'm in the castle. The castle. I'm behind a fancy background of some synthetic natural gas samples we made last year. But Amazing. I mean, the castle's windowless wall is right by me here.
Speaker 8:Very cool.
Speaker 1:Can you give me just a little bit of an intro explanation of the company? And then I mean, I do wanna hear the story of, like, how the castle got built again, because it's a funny story.
Speaker 7:Sure. Well, let's start with Terraform. So three and a half years ago, I rage quit my job at NASA JPL and incorporated Terraform Industries to build cheap synthetic natural gas from sunlight and air.
Speaker 4:Mhmm.
Speaker 7:And we've been at that for, like, three and a half years now. We've made a lot of progress. It's super exciting. The castle itself was built in the eighties by the Bandy family, who built many of the industrial buildings in Burbank. And they worked with a general contractor who was down to do crazy stuff.
Speaker 7:And it turns out if you're building from cinder blocks, you can make them any shape you like. So Yeah. That's the beautiful thing. We got a castle.
Speaker 1:Yeah. Yeah. It's amazing. Can we can we start with this this quote tweet that went mega viral this week? You said, I think a general misunderstanding in either direction about how hard this is is a major contributor to the outside context problem occurring in politics right now.
Speaker 1:Don't at me. It's a video of Elon Musk landing a rocket, and I think Elon quote tweeted you and said, just read the instructions. But can you give us some more context on, like, unpack that? Like, what exactly is the outside context problem? What's going on?
Speaker 7:Yeah. It's a bit of a geeky deep cut. So the outside context problem is a concept popularized in Iain M. Banks' Culture novel series, in particular the That's right. The novel Excession, which primarily deals with the Culture coming in contact with kind of a previously unknown alien.
Speaker 7:I mean, the whole book's full of aliens, but this is like an alien intelligence whose modality is very reactive. And so the usual ways of kind of reaching out and probing and attacking and so on are just being reflected with overwhelming force. And then Just Read The Instructions is the name of one of the Culture ships in Banks' novels. So in this novel, there's millions and millions of fully sentient, very, very large spacecraft that fly around, basically looking after humanity. The deeper point is, I think Elon's a very interesting person.
Speaker 7:I think he's obviously very underestimated, has been throughout his entire career for whatever reason. And
Speaker 2:But is he
Speaker 7:And I think, you know, convention sorry.
Speaker 2:Sorry to interrupt, but is he underestimated in our corner of the Internet or or just sort of broadly? Do you think he's still underestimated by the average Elon, you know, fanboy?
Speaker 7:I think even then. Yeah. And I think, you know, conventional wisdom a year ago was like, oh, Elon is getting involved in politics. He's gonna, you know, shoot his feet off. And I think, yeah, he genuinely took a big risk backing Donald Trump for the election.
Speaker 7:You know, if Kamala had won, I doubt it would have gone very well for him. But, you know, now yeah. Here's a person who a year ago, we were hearing all these op eds like, oh, he doesn't understand politics. You know? We're still hearing it.
Speaker 7:He doesn't understand politics. He's taking a huge risk. He should just focus on Mars. What is he talking about, etcetera. He's now in the White House, like, literally running the IT modernization process of the entire federal government, which is code for he has root access to the entire government.
Speaker 7:And I think, ultimately, history will show this is gonna be a massive net positive for the United States. You know, obviously, there's gonna be some mistakes and some struggles and pain along the way. That's always been clear from the outset. I wrote a blog post about this last year saying, you know, why do we need a department of government efficiency? But I think, overall, we should probably regard ourselves as extremely lucky that someone of Elon's caliber has taken an interest in fixing the processes that affect all of us, not just, you know, finding ways to serve the interests of his particular companies or, like, you know, skirt around certain regulatory issues, which is the normal way that you deal with these things.
Speaker 7:But, yeah, I think the conversation needs to be had here. And my general complaint is that the criticisms that are being leveled at the Doge process are not constructive, because they're not engaging with the reality of what's going on. And I think, you know, this is a process that would be improved by constructive and high quality criticism and suggestions and better ideas flying into the system. I think that's always the case. But there's an outside context problem, because most of the people who are jumping up and down about the various alleged outrages that Doge is committing do not understand what's going on.
Speaker 7:It's beyond their context, which is not a huge surprise, but, you know, it is a major problem.
Speaker 1:It's interesting.
Speaker 7:Particularly for their interests. You know? And I've written a post about this as well in the context of California. Like, California, I live there. I love it.
Speaker 7:It's great. It's a great state, but it's not the state that anyone wants to be like right now. And that's a real problem for the future of the progressive movement. You know? Like Yeah.
Speaker 7:San Francisco needs to be a shining city on a hill.
Speaker 2:Yeah. I agree. I have a sort of a random question, but I think you're gonna have an interesting take on it. Do you have any sort of, like, broad advice for investors that are fancying themselves as deep tech or hard tech investors after years of SaaS and sort of Web three, and now it's sort of the hot thing? I mean, you started your company in 2021.
Speaker 2:Even then, you know, there's sort of the meme, right, which is, you know, deep tech would have loved zero interest rates. Right? But, like Mhmm. There weren't a lot of, like, companies like yours actually being you know, there were plenty, but there were not as many as there are now. So do you have advice to venture capitalists broadly about how you're evaluating if a, you know, bright entrepreneur comes to you and says they want to do something that, like, many people say is impossible, like, how would you advise them to kind of, like, evaluate that?
Speaker 2:Because, like, it just seems like there's so many exciting companies right now, and some are very clearly, like, fake, and some are very real, and maybe they won't succeed, but, like, they're doing very real work and, you know, very likely will have a dramatic impact.
Speaker 7:Yeah. Thanks for that. I mean, one can theorize about these issues and how one might go about, you know, acquiring the necessary expertise to make, you know, better-than-random judgments when it comes to potential deep tech startups. But, you know, the reality is it's extremely capital intensive. It operates on a different set of both physical and, you know, legal laws than SaaS, and that actually makes it, you know, significantly less efficient, I think Mhmm.
Speaker 7:In terms of the health and functionality of a typical VC ecosystem, in this case. But instead, you can look at, like, well, what are the contemporary and historical cases where major innovations were successfully, you know, brought at scale into the market? And in almost all cases, it was actually led within an organization. So you have an existing organization that has existing, you know, capital relationships and projects, and in particular, a large team of very skilled people who are extremely aggressive about delivering value. And then you take that team and you throw it at a new kind of problem, and so on. And the most salient example in the recent past is the Elon industrial complex, where, you know, Elon has this family of companies that are doing things that everyone else, you know, really struggles to even kind of wrap their heads around in most cases.
Speaker 7:But, you know, history is replete with examples. And I recently have kind of gone very deep on Henry Kaiser, who founded more than a hundred companies, built the Hoover Dam, built 1,500 ships in World War two, you know, stuff that we regard as impossible today. Like, oh, how can we fix American shipbuilding? He just stood up a shipyard from scratch in, like, less than two months in 1941. Like, they didn't have computers back then.
Speaker 7:It's just insane stuff. That's crazy. So, yeah, there are historical examples of this happening. And I think one way of thinking about it is that capital allocation in hard tech has a stronger emphasis on human capital than on just liquid money. And probably the optimal place to put the capital allocation layer is within already successful organizations that have already, you know, proved their worth.
Speaker 2:Yeah. Speaking of organizations, you said you rage quit NASA. Is NASA's best work behind them? Or, you know, now that Elon has root access, you know, could we see the organization sort of revitalized at some point?
Speaker 7:I very much hope that the Doge team will devote some efforts to helping NASA recover its historical capabilities, which, we have to remember, within living memory were, and remain, the envy of the world. But, unfortunately, my experience it's personal experience, I didn't see all of it, obviously but my experience at NASA was that the organization is, you know, still stuffed to the gills with brilliant people, but most of those people spend most of their time not being allowed to do really extraordinary work, like actively being blocked and hindered and punished for going above and beyond, which is not how we should be running a space program competing with China. Like, China's program managers in their space program fear the consequences of failure, and NASA's program managers do not. And that is the key difference.
Speaker 1:Can you talk a little bit about stagnation? You know, what happened in 1971? There's a whole bunch of different theories. Is it a culture issue? Do we need to just adopt, like, a you-can-just-do-things mindset?
Speaker 1:I've always been tracking energy. That's the point where energy growth kind of broke from 2.7% per year to 2%. It feels like AI might be enough of a moment to kick us back into gear. Maybe it's this Doge stuff that's going on. Will it be political?
Speaker 1:But how important is unstagnating on the energy side to kick everything into gear? And what are the stack-ranked priorities for actually seeing increased human flourishing, economic growth, energy, all the proxy metrics that are important?
Speaker 7:Well, you hit on the most important aspect there, which is we can measure this. Right? At the end of the day, we either have total factor productivity growth or we don't. And if we do, then our children will have a better life than us no matter what. And if we don't, then they won't no matter what.
Speaker 7:It doesn't matter how innovative we get about social programs and redistributive spending and various sneaky forms of communism. At the end of the day, we're either growing the economy or we're not. And I think that what the hell happened in 1971 is overdetermined in some sense. Mhmm. You know, Henry Kaiser died in 1967. You know, various OPEC issues and oil shocks and so on occurred.
Speaker 7:We had the passage of NEPA and CEQA in the early seventies as well. You know, all of these things were, I think, well intentioned or kind of contingent at the time. But since then I think if we had not seen the emergence of Moore's Law and improved computing capacity, we would have been in much more dire straits as far as economic stagnation goes. Yeah. But I'm also extremely optimistic that we're gonna turn this around, first of all because we seem to have gained the ability to talk about stagnation, like, to measure it and to worry about it and be conscious of the fact that this is probably actively impeding, for example, the fertility rate.
Speaker 7:And the second thing is that, you know, obviously, AI will help. But we've also got this incredible technology in solar power, which is allowing us in a sense, we're going back to the land to convert sunlight into energy about a hundred times more efficiently than plants do, and electricity is quite a bit more useful than coal. So I'm just very optimistic that we're going to solve the energy problem, you know, this decade. We may also take a good crack at the permitting problem, at least in the West, this decade as well. And I think those two things combined, plus, you know, some intelligent AIs, will really help us get back on the Henry Adams curve.
Speaker 1:Can you talk more about solar? Why are you so bullish on solar and not maybe less bullish on nuclear? There's a lot of chatter
Speaker 2:in both.
Speaker 1:Everyone loves both right now. But why is solar what you've chosen to kinda make your life's work here?
Speaker 7:Yeah. At least for now. I mean, to be clear, I'm not bigoted against nuclear power. I'm not worried about nuclear radiation. I taught nuclear physics at Caltech for a while.
Speaker 7:Mhmm. It's a fabulous technology. But, again, like, we can theorize about it or we can look at history. And the history shows us that nuclear reactors have been enthusiastically adopted by various navies for operating, you know, clandestine underwater vessels that need air-independent power supplies, or nuclear aircraft carriers that need the ability to outrun said submarines. But other than that, you know, even the Navy moved away from using nuclear reactors to power their surface fleet.
Speaker 7:And I think if you, you know, want to honestly understand this, you need to kind of dive into why that is the case. Mhmm. At the end of the day, nuclear reactors are steam engines, and steam engines operate on what's called the Rankine cycle, and there are certain irreducible costs associated with, you know, steam turbines and so on that just drive the cost up. And that's even if the NRC didn't exist tomorrow, right, even if you could buy enriched uranium on Amazon.com, it would still be the case that just the steam engine component of it is going to cost no less than coal. Solar, on the other hand, is fascinating because sunlight rains down on the earth every day for free.
Speaker 7:There's a fusion reactor up in the sky. Due to some, like, weird quantum effects and, again, silicon trickery, it is possible to convert that into high grade energy in the form of electricity with, you know, a layer of silicon that's considerably thinner than a sheet of paper. The silicon is enormously abundant on the earth. We've gotten really quite good at making it. There are factories worldwide now churning out more than a terawatt of solar per year with no signs of slowing down.
Speaker 7:If anything, production is increasing 30 to 40% per year, which is just bananas. It's the sort of growth rate you'd like to see for any technology. And the reason this is occurring is because there's a positive feedback loop that's already been kicked off. Right? So we don't have to theorize about, well, what is it gonna take to get to, you know, first of a kind, second of a kind, hundredth of a kind, you know, start getting those economies of scale down?
Speaker 7:In solar, that's already happening. It's a commoditized product. It has no moving parts. You don't need any special, you know, skills or labor to install and operate it. It's even easier to do than planting corn.
Speaker 7:You know? Just put it on the ground, and it spits out power, which in our discussion about 1971, you know, electrical power is basically wealth. It's a free money printer you can put on the ground. And this is one of the reasons that I'm slightly frustrated about the various tariffs on solar panels. I think if China, our geopolitical adversary, is attempting to harm us by giving us solar panels subsidized at their taxpayers' expense, we should do the textbook thing that you do when people are trying to do predatory dumping, which is buy as much of it as you possibly can to hurt them even worse.
Speaker 7:I don't know, pave Nevada with them or something. We could figure out something useful to do with them down the track. But, yeah, basically, that's the key. You know, some batteries on the side and the problem is solved.
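The 30 to 40% annual production growth quoted above compounds quickly. A quick sketch; the ~1 TW/yr starting point is from the conversation, and the 35% midpoint is an assumption:

```python
import math

production_tw_per_year = 1.0   # ~1 TW of panels produced per year, as quoted
growth = 1.35                  # assumed midpoint of the quoted 30-40%/yr range

# Years until annual production grows 10x at that compounding rate
years_to_10x = math.log(10) / math.log(growth)

# Annual production after a decade of compounding at 35%/yr
in_ten_years = production_tw_per_year * growth ** 10

print(f"~{years_to_10x:.1f} years for annual production to grow 10x")
print(f"~{in_ten_years:.0f} TW/yr after ten years at 35% growth")
```

At this assumed rate, annual output grows an order of magnitude in under a decade, which is why the feedback loop matters more than any single factory.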
Speaker 1:Yeah. Speaking of, like, paving Nevada, what does the future of Earth look like in twenty, fifty years? Is it solar panels all over in space? Is it solar panels all over the Earth? Is it going to feel cyberpunky, or will there still be trees around at all?
Speaker 1:Am I just in a Matrix pod? But if my total factor productivity is going up, I'm happy. What does the long term future look like for you, or in your mind?
Speaker 7:I think it's a bit hard to say. I'm quite optimistic. Mhmm. You'd have to be to start a hardware company. And I think really what the future looks like comes down to people like you and I and what we decide to build.
Speaker 7:Yeah. Right? Like, you want a great future, come build it. Mhmm. In terms of energy, we can give every man, woman, and child on earth the amount of energy we enjoy here in the United States, which is about 20 barrels of oil per person per year, with something like 6% of earth's surface under solar, which is much, much less than we currently use for grazing or for row crops or for forestry.
Speaker 1:Wow.
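The land-area claim above is easy to sanity check with round numbers. In the sketch below, the 20 barrels per person per year is from the conversation; every other constant is an assumed illustrative figure:

```python
# Back-of-envelope: US-level energy for everyone, supplied by solar farms.
BARREL_KWH = 1700            # assumed: ~1.7 MWh of primary energy per barrel of oil
HOURS_PER_YEAR = 8760
people = 8e9                 # assumed world population

per_capita_kw = 20 * BARREL_KWH / HOURS_PER_YEAR     # ~3.9 kW average per person
world_tw = per_capita_kw * people / 1e9              # ~31 TW total demand

AVG_SOLAR_W_PER_M2 = 10      # assumed: time-averaged areal yield of a solar farm
area_km2 = world_tw * 1e12 / AVG_SOLAR_W_PER_M2 / 1e6
land_km2 = 149e6             # Earth's land area

print(f"~{world_tw:.0f} TW total, ~{100 * area_km2 / land_km2:.1f}% of land area")
```

At these assumptions the answer lands in the low single digits of percent of land area, the same ballpark as the single-digit figure quoted; the exact number moves with the assumed areal yield.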
Speaker 7:It's a little more than we currently have covered in, like, densely populated cities. But it's quite a bit less than agriculture. That's enough for everyone. Like, the scenario you were talking to Dwarkesh about on the previous call, like, getting paper clipped. We're not gonna get paper clipped.
Speaker 7:What's gonna happen is the net present value of land for agriculture is about $500 per acre per season. But for solar, it's about, you know, $100,000 to $200,000 per year. Now if that's powering an AI, an artificial superintelligence data center, which is a thousand times more economically productive than your terribly, poorly evolved human brain, which has to sleep eight hours a day and, you know, browse Twitter, then obviously, economically speaking, our farmland is ultimately going to get paved over with solar, and we're going to starve to death. And, like, that's the true AI doom scenario. Mhmm.
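The per-acre comparison quoted here, as bare arithmetic; both dollar figures are the speaker's, and taking the midpoint of the solar range is an assumption:

```python
# Per-acre land economics as quoted: ~$500/acre/season for agriculture versus
# roughly $100k-200k/acre/year for solar.
agriculture_usd_per_acre_year = 500    # one growing season per year assumed
solar_usd_per_acre_year = 150_000      # assumed midpoint of the quoted range

ratio = solar_usd_per_acre_year / agriculture_usd_per_acre_year
print(f"solar is worth ~{ratio:.0f}x more per acre at these figures")  # ~300x
```

A gap of a couple of hundred times is what drives the "farmland gets paved over" scenario in the absence of some countervailing rule.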
Speaker 7:So, again, that's something we should be aware of and something we should figure out how to forestall. We'll obviously have a lot of solar in space, but I think mostly it'll be for powering applications in space. I don't think people are gonna be beaming power down from space to the earth anytime soon.
Speaker 1:Is that just because the economics don't work out? I've seen a couple of these companies. They're using mirrors or lasers.
Speaker 7:Yeah. The mirror one is interesting, actually, because it kind of exploits the fact that if you're a solar array owner, you have no other way of getting more power Mhmm. Other than paying someone to reflect some more down to you, which is super cool. Don't get me wrong. As far as solar cost goes, the Starlink constellation is solar powered.
Speaker 7:It's in orbit. It generates considerably more power than the space station does, collectively, obviously, maybe a hundred times more power than the space station. And most of that power is used to amplify radio signals and transmit them down to Earth, where they're received by our antennas. And the revenue per watt of electricity used to transmit microwaves through Earth's atmosphere to the consumer, because it's, you know, chopped up and turned into Internet data as opposed to just raw power, is about a billion times higher. Like, the revenue per watt is a billion times higher with it being Internet at a few hundred watts or whatever transmission power than it being, you know, attempting to transmit gigawatts of power down for power plants where we pay, you know, 10¢ a kilowatt hour or something like that.
Speaker 7:Now Starlink is actually fabulously profitable, and I'm super proud of the team there. I just think it's absolutely incredible what they've done. But there's no way in hell that their profit margin is 10,000,000%. Right? So, like, it's just not possible.
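The argument can be made concrete with the speaker's own round numbers. The billion-fold revenue multiple is the speaker's claim, taken at face value here; only the 10¢/kWh grid price and hours per year are standard figures:

```python
# Revenue from 1 W of raw power vs. 1 W spent transmitting internet data,
# using the speaker's round numbers.
GRID_PRICE_USD_PER_KWH = 0.10
HOURS_PER_YEAR = 8_760

# Selling one watt of raw power continuously for a year:
raw_usd_per_watt_year = GRID_PRICE_USD_PER_KWH * HOURS_PER_YEAR / 1_000  # well under $1

# The speaker's claim: a watt spent on internet transmission earns ~1e9x more.
internet_usd_per_watt_year = raw_usd_per_watt_year * 1e9

print(f"Raw power: ~${raw_usd_per_watt_year:.2f} per watt-year")
print(f"Internet (claimed multiple): ~${internet_usd_per_watt_year:,.0f} per watt-year")
```

Since no real profit margin is within orders of magnitude of that gap, the watt's value is in the bits it transmits, not the joules it delivers, which is the speaker's case against beaming raw power down.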
Speaker 7:So I don't think that space based solar power is likely to be a thing.
Speaker 2:Do you think that the average seed-stage deep tech founder doesn't take economics seriously enough? Because in almost every question we've asked you, you've had sort of an economic rationale behind your answer. I think we've seen some companies emerge recently where there's this sad scenario where they achieve the impossible, or do the really hard thing, and then what's waiting for them is potentially a product that's not economically viable, in the sense that you made the product, but you can't actually capture any value.
Speaker 1:Yeah. I've heard about nuclear companies where they literally just had one number wrong in their spreadsheet model, and they did the thing, but it just wasn't competitive on the grid at all.
Speaker 7:Yeah. That's a nightmare scenario. Yeah. No. I mean, like, at the end of the day, if you're super passionate about a particular technology and you think it has a chance of success and you have investors who agree with you, then go give it a shot.
Speaker 7:Right? Mhmm. You don't know. I don't know. Like, customers and consumers, they like weird things.
Speaker 7:We just don't know. Like, that's the fabulous thing about capitalism. It just firehoses options and potentials at the market to see what sticks. And if you think back to when we were children, there's no way we would have known that the stable, attractive form factor for cell phones would be something like this. That's not Yeah.
Speaker 7:That's not how they worked back then. So for the younger listeners, just, like, quiver in fear. We didn't have cell phones when we were young. So you should give it a go. On the other hand, if you are trying to do a hard hardware tech thing at massive scale Mhmm.
Speaker 7:You absolutely have to have capitalism behind you. You cannot fight capitalism
Speaker 5:Yep.
Speaker 7:You know, to to get to massive scale. Forever.
Speaker 1:You have to grab massive value. The gravity
Speaker 7:It's not a mystery.
Speaker 1:Economics.
Speaker 7:Yeah. Exactly. Would you rather have them on your team or against you? And I just see this mistake time and time again, which is, like, well, this technology will work if we can bring about some massive behavioral change in the entire market or something. Like, probably that's not gonna happen.
Speaker 7:But if you can produce a product like an iPhone where, when the new one comes out, hundreds of millions of people worldwide feel actively burdened by a thousand dollars of cash in their wallet, and they just, like, shut up and take my money, then you've done the right thing. Right? You're thrilling people. You're giving them something that they'll happily part with their hard-earned cash to receive. And yeah. I mean, it's not like this is a mystery.
Speaker 7:Make what people want.
Speaker 2:I have one too. Okay.
Speaker 1:Look. We'll go fast. Yeah. The Scrolls project, is it just about inspiration, or is there a practical application to that project?
Speaker 7:I think there's a certain kind of person who really likes to curate and organize information. And I think it will ultimately tell us a lot more about our historical origins and past, and I think that's worthwhile.
Speaker 2:You mentioned briefly the sort of AI doomsday scenario where the AI just realizes that it should blanket the Earth with solar panels to, you know, feed itself. What else scares you in the world? To end on a high note?
Speaker 7:I think that when we figure out how to drastically increase human lifespans, it will turn out that we could have been building these drugs since the nineteen thirties. And so we've literally allowed billions of people to die painful, lonely deaths in old age when, if we'd just had enough insight and enough effort, we could have figured out how to stop that a long time ago.
Speaker 1:Wow. Interesting. Well, hopefully, we go build them soon because the best day to plant a tree
Speaker 2:Well, unfortunately, we used all the GPUs to make Ghibli art this week. So maybe next week, we can put
Speaker 1:the AI to work on saving lives. Thank you so much for joining.
Speaker 2:This is actually an awesome I
Speaker 1:I really enjoyed this. Yeah. Yeah. What was that?
Speaker 7:I was gonna say one last thing. Like Yeah. One of the major problems with DOGE is that, like, we have these entitlement costs of Social Security, Medicare, Medicaid, and so on. 1% of the federal budget is spent on dialysis. Right?
Speaker 7:These are all diseases of old age. Mhmm. Right? If we can just increase human lifespan by 10%, we cut that whole tranche, trillions of dollars of annual spending, by 10%, because ten percent fewer people die every year. Yep.
Speaker 7:Like, and they also get to not die. Yeah. It's amazing. Seems blindingly obvious. And yet, we spend less than 0.1% of our national health budget on anti-aging research.
Speaker 7:That's wild. 0.1%.
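The entitlement arithmetic can be sketched in a few lines. The spending figure is an illustrative stand-in for the "trillions" mentioned, not a verified number. One refinement: in steady state, deaths per year scale as population divided by lifespan, so a 10% longer lifespan cuts annual deaths by about 9%, slightly less than the quoted 10%:

```python
# The speaker's back-of-envelope on old-age entitlement spending.
# The spending figure is an illustrative stand-in, not verified data.
OLD_AGE_SPEND_USD = 2e12     # illustrative "trillions" of annual spending
LIFESPAN_INCREASE = 0.10     # 10% longer lifespans

# Steady state: deaths/year ~ population / lifespan, so the reduction is
# 1 - 1/(1 + 10%) ~ 9%, slightly less than the quoted 10%.
death_reduction = 1 - 1 / (1 + LIFESPAN_INCREASE)
annual_savings = OLD_AGE_SPEND_USD * death_reduction

print(f"Deaths fall ~{death_reduction:.0%}; implied savings ~${annual_savings/1e9:.0f}B/yr")
```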
Speaker 1:Wow. Well, we'll have to dig into that more. Thank you so much for stopping by.
Speaker 2:We would love to have you back on. Maybe we'll come over and we'll we'll do one in the in the castle at some Yeah.
Speaker 1:I'd love to.
Speaker 2:That'd be great.
Speaker 7:Thank you
Speaker 1:so much. This is this is a fantastic conversation.
Speaker 2:Really enjoyed it.
Speaker 1:Have a great rest of your day. Have a great weekend. Talk to you soon.
Speaker 2:What a brilliant what a brilliant mind.
Speaker 1:Yes. Scrolls enjoyer. You always get good stuff with the scrolls enjoyers.
Speaker 2:Scroll guys.
Speaker 1:The scroll guys. We gotta have the whole the whole maybe we should do a scrolls day.
Speaker 2:Scrolls day.
Speaker 1:But moving away from scrolls, we're going back to books. We've got Nadia Asparouhova coming into the Temple of Technology. She has just announced a new book called Antimemetics. She's a former Stripe Press author, in the hall of fame of authors, in my opinion. If you publish on Stripe Press, you're
Speaker 2:If you can publish there, you can publish anywhere. If you
Speaker 1:can publish there, you can
Speaker 2:That's what people have been saying.
Speaker 1:Yeah. It's really it's really the NFL of publishing. Yeah. Yeah. Like, it's it's anyway, let's
Speaker 3:bring it
Speaker 2:Super excited for this.
Speaker 1:Let's Thanks so much for joining. We're having some fun. Yeah. Thanks for joining. Can you give us I'm surprised that
Speaker 2:Delian didn't join, by the way. Yeah. Because Pavel was about to have this amazing, you
Speaker 4:know, this
Speaker 1:first Do not send Delian the link. He will just bomb I think he
Speaker 4:has the
Speaker 1:interview. He does have the link. Don't let him know that he could use the link because he will
Speaker 4:just bombard
Speaker 1:He's a great hype man. Anyway, yeah, would you mind giving a brief introduction and a little high-level overview of the book?
Speaker 4:Sure. I'm Nadia, a writer and researcher. I just published this new book called Antimemetics: Why Ideas Resist Spreading. And it's about this idea of antimemes, which are self-censoring ideas. So ideas that we find really interesting and compelling in the moment, but, for whatever reason, they kind of resist being shared or remembered.
Speaker 4:Mhmm. So I think we usually assume that if an idea is good or interesting or compelling, it's gonna find a way to be shared or spread on its own. But there's this whole other class of ideas. So things like taboos, forbidden ideas, stuff we don't really talk about in public, cognitive biases, blind spots about ourselves that we don't really realize. All of this forms this class of ideas that are antimemetic and don't spread super easily even though they're interesting and important ideas.
Speaker 4:So that's what I wanted to write about.
Speaker 1:How much of memetics is actually just compressing the idea down? I was thinking about how Lulu's going-direct manifesto has this really pithy phrase, going direct. Founder mode, the network state, same thing: two words that really hit. Versus Leopold Aschenbrenner, who wrote a fantastic essay about AI and called it Situational Awareness. We don't see people using situational awareness as a meme to talk about acceleration and where the future of AI is going. Is it just a packaging problem, or is there something about the idea itself that creates an antimemetic property?
Speaker 4:Yeah. I think some of it is a packaging problem. Like, ideas themselves have these innate qualities that are memetic or antimemetic. Some of it is just about how the person perceiving or receiving it interacts with that idea. So you can think of certain ideas that spread really, really easily in certain networks, but not in others.
Speaker 4:Mhmm. Conspiracy theories are a good example of this. So, yeah, I think it's a mix of both. It's partly about the idea, and partly about the person that is sharing it or receiving it.
Speaker 1:Sure. Jordy?
Speaker 2:Well, one, I wanna make this segment, like, the most beautiful ad for your book. So I'd like to go through a number of the ideas. But first, a book had been going viral, and I got a copy of it. I wanted to get kind of your high-level take on it. It's funny.
Speaker 2:This was maybe released in 2021: There Is No Antimemetics Division. It sort of went viral a little bit last year, but then it's no longer, like, for sale, or I guess you can buy it, but the paperbacks are trading. Is that the
Speaker 4:Hundreds of dollars on eBay right now.
Speaker 2:Yeah. So I have one. I should probably, you know, just sell it and buy like 20,000
Speaker 1:Keep holding.
Speaker 2:Holding. No. No. No. That's worth a million.
Speaker 2:But I just thought of the great irony that, like, you know, they were maybe going after something similar to what you've been researching, but then it's not even available anymore. So it's, like, not
Speaker 1:They're dogfooding antimemetics.
Speaker 5:Do you
Speaker 2:have any sort of, like, take on that? I'm curious. Yeah, and then more broadly, in researching this, what's the full breadth of what you were looking at? Because I imagine you were researching and finding, like, forbidden PDFs and books in a bunch of different areas, right, of this sort of esoteric knowledge that humanity ignores for some
Speaker 8:reason. Yeah.
Speaker 4:I'll give a big plug for qntm, who wrote There Is No Antimemetics Division. He actually coined the term antimeme, like, back in the day, and it originates from this fictional, like, horror sci-fi kind of online wiki. And so, yeah, he published that book, I think, in 2021. It's a horror science fiction book, and it's about antimemes as these sort of, like, anthropomorphized creatures that are kinda like black holes. They destroy everything they touch.
Speaker 4:They consume everything that comes in contact with them, but, for whatever reason, we can't seem to remember that we've interacted with them. And so there's this intelligence unit that is trying to fight antimemes but, like, can't even remember that they're doing it. Hence the title, There Is No Antimemetics Division. Super cool book. Highly recommend it, and all of his writing, honestly.
Speaker 4:And that was the first book I read back in 2021 that kinda kicked off my interest in the topic. I had never heard of the concept of antimemes before this. And, yeah, I was reading that book in, kinda, the depths of COVID, yeah, 2021, sad, dark times, but, like, also
Speaker 2:Weird time
Speaker 4:kind of a time. Everyone was doing stuff in, like, group chats. Like, Clubhouse was a thing, and I'd been working at Substack. And so I was thinking a lot about, like, how does this concept apply to the real world, and where do antimemes show up in the rest of our lives? So it was definitely a direct inspiration for this book.
Speaker 4:But, yeah, I wanted just kinda like a a nonfiction treatment of of the concepts.
Speaker 2:Totally. It was not practical in any way. It was just sort of, like you said, a sci fi horror exploration of it.
Speaker 4:I feel like it Can you
Speaker 8:talk about
Speaker 4:the best that I've ever read. Yeah. Inspired.
Speaker 2:Can you talk about like, feel like we're in such a weird time. Like, our generation was sort of like not born on the internet, but we sort of like gained consciousness on the Internet. And the Internet is something that just like circulates ideas both like fact and fiction like extremely rapidly. And so in many ways, like conspiracy theories and these like sort of forbidden potentially truths or maybe it's fiction, you know, you don't know. They sort of like, I feel like I've grown up in a world where I don't, you know, like Yeah.
Speaker 2:The only conspiracy theory I believe is that there are just many many many many conspiracies going on and all sorts of things at all times. Right? So can you talk about like, yeah, I think like what what can people take away maybe from your book in terms of how to just like process information online when you don't know if something is completely false or an idea that or something that's factual or maybe it's an idea that's sort of just anti mimetic and it doesn't want to be known or it doesn't wanna be identified as as truth.
Speaker 4:Yeah. I thought about that a lot while I was writing this, just because there's a lot of doom and gloom about kinda, like, where the Internet is right now, or, like, social media didn't turn out to be what we expected. And I really didn't wanna write that kind of a book. Because it is partly about this transition from, you know, once upon a time, you could post whatever you wanted on Twitter, and you weren't worried about, like, getting, you know, trolled, canceled, attacked, whatever. And I think part of why we kinda withdrew to group chats and these more private spaces is because we just, you know, wanted to tune out all of that.
Speaker 4:But I really wanted to show that it's not that one is replacing the other. It's not like, oh, you know, Twitter or whatever is over and we're just gonna all, like, go back into our little caves and only talk to our friends. But, like, both of these things actually feed off of each other and, you know, all the things that get workshopped and the group chats kinda make their way back out into public channels and vice versa. And I think, like, there's there are things about that that are are good and bad. And maybe to your point just about, yeah, not being able to even tell what's real anymore, but, like, group chats aren't really, like, a safe haven from whatever is happening in in public channels because they end up kinda, like, becoming these super incubators for crazy ideas, and so ideas can get even crazier and even weirder when they're being workshopped by, like, a small group of people.
Speaker 4:And so in some ways, they're kind of, like, mutating and making ideas even even crazier. But yeah, not to be overly defeatist about it, I think it is also kind of like your reality is what you make of it. And where we decide to direct our attention, what we decide to focus on, that is what your reality becomes. And so maybe that's a little bit scary or destabilizing to people to feel like they can't tell what's real anymore. But I also think if you just can find ways to harness your attention and focus on the things you actually care about, then life is not really so bad.
Speaker 4:So
Speaker 2:Yeah. And, like, you you compared, like, what's happening in sort of the the town square, like something like X, and then what's happening in group chats. Right? Like oftentimes, like we're in this world where there's sort of this like fiction appearing on the timeline, and then the truth is like racing through group chats. And on a long enough time horizon, they intersect.
Speaker 2:Right? Because like they almost like an idea can be like put out there and maybe people agree with it in the moment, but then eventually sort of what's happening in the sort of like private group chats like actually does start spreading and sort of like breaking containment. And in in many ways, it feels like that's happening faster than ever. Right?
Speaker 1:Yeah. I wanted to ask a question about practicality here. I think a lot of founders and business folks think about whether they can align their company or their mission or their vision of the future with a meme, or condense it down. Obviously, the going-direct thing has been a great encapsulation of what Lulu does, and it's helped her business, in addition to helping people who have adopted that strategy. Is it possible to do the opposite, to take an attack on you and somehow twist it into an antimeme, so that the idea that is hurtful to you or your business or whatever becomes harder to discuss?
Speaker 4:Interesting. Yes. Definitely. Yeah. I do talk in the book about this idea of obscurantism, which Nick Bostrom coined.
Speaker 4:The idea of, can you make things boring, or can you kinda hide them, as a way of suppressing them? So if you think about suppressing ideas, people often think about putting a hard lock on it: make it forbidden, make it password protected, whatever. But that often makes people more excited to figure out what is actually going on under there. Yeah. But if you make it really, really boring and really uninteresting, or just, like, really difficult to parse, people kinda just lose interest and wander away. And that would be an antimemetic information warfare tactic, I guess.
Speaker 1:I've seen this where I've seen people chirping at each other on X, and I've noticed that whoever puts up the longer piece of content usually ends the conversation. So you post, you know, I like SpaceX, and then I quote tweet you, and you're like, SpaceX is terrible. But then if you quote tweet back with something that's just, like, an essay, I'll just be like, ugh. I'm tired. I'm out.
Speaker 1:And you kind of win by
Speaker 4:Filibustering.
Speaker 1:And I've also seen these attempts to take, you know, a legitimate attack vector and wrap it in political extremism on the left or the right. And so if you're being attacked by someone, you can say, oh, well, like, that's an idea from 4chan, or that's an idea from the Communist Manifesto, so we should disregard it and put it in the forbidden category, even if it's a legitimate criticism of whatever's going on. And so, I don't know, it's a fascinating topic.
Speaker 2:Can you talk about some of the differences between antimemetic ideas and anticommercial ideas? And by anticommercial, I would put something like intermittent fasting in that bucket, where nobody makes money if people eat less. Right? Like, there's whole conspiracies around intermittent fasting, which is, like, Big Breakfast came in and said, you gotta eat a big breakfast, and there's a bunch of people that benefit from, like, the big breakfast, you know Yeah. You know, complex.
Speaker 2:And then if you start going around and saying like, hey, you know, you can just have coffee or water and like just eat lunch. Like it's hard to monetize that. Right? It's not like I can be like, hey, don't eat food and also, like, you know, pay me to, like, teach you how to not eat food. And and some people have done that.
Speaker 2:But, like, I feel like there's maybe a line between ideas that are anti memetic and, like, just don't spread for one reason and then stuff that's maybe, like, slightly more obvious, but it's just, like, not spreading because there's no, like, sort of economic or commercial interest that's, like, trying to spread a certain idea.
Speaker 4:Yeah. That's super interesting. I hadn't actually thought about that yet. Yeah. I think, like, there's still plenty of ideas that can spread memetically even without sort of, like, a commercial engine behind them.
Speaker 4:So, yeah, folk wisdom, aphorisms, things like that. But you're right, especially, I think, in the realm of health-related things. I'm just thinking about a lot of chronic health issues and stuff like that, where people kind of have to bumble around and find their own answers to things. Even if they're chronic health issues that tons of people are facing and dealing with, there's no clear answer to them. And I think sometimes, when the answer is something that can't be monetized or sold, then, if we think of commerce as just another engine for driving the spread of information and that engine is missing, something might kinda languish and not reach everyone that it should. So, yeah, that's super interesting.
Speaker 2:How do you think about Elon's positioning with xAI around the sort of, like, truth-seeking AI? Like, is it possible that a, you know, properly trained LLM could be more inclined to spread ideas that are antimemetic among humans? Like, you ask the machine a question, and it's, like, well, obviously, this is what you're dealing with, whereas a human would just tell you, you know, some other answer, even if the machine's was, like, more correct in some ways.
Speaker 4:I think it could be really useful for, like, ideas that are antimemetic on an individual level. I know some people will use AIs for this, where it's like, based on the conversations we've been having, tell me something about myself that I don't know or don't realize. And maybe just having someone that can tell you straight up what the answer is, versus your friends, who might not be super honest, that could be useful. I still think it's, like, really hard to spread things that are taboos or forbidden ideas through a network. So even if one person really believes it and, yeah, is totally bought into an idea, like, how do you get it to actually spread from person to person?
Speaker 4:And, yeah, it's a little bit harder to think about how that takes place.
Speaker 1:Mhmm. Well, I'm super excited for the book. Congratulations. Yeah.
Speaker 2:We should, once we get a full copy here Yeah. Once we've actually ordered it, we'll do a deep dive on it.
Speaker 1:Yeah. And we'd love to have you back.
Speaker 2:Try to yeah. We'll have you back on, and we'll help the audience figure out how to make money.
Speaker 1:How to
Speaker 3:make money,
Speaker 1:man. It's even better.
Speaker 2:Because that's
Speaker 1:That's the theme of the show. That's really gonna get
Speaker 2:the book.
Speaker 1:Yeah. How do you oh, my competitor? No one's ever heard of them now all of a sudden. No one can remember them. Yeah.
Speaker 1:They just disappeared from the
Speaker 4:end of the because
Speaker 1:we put antimemetics to work in capitalism. Fantastic having you. Thanks so much for coming on the show. Yeah. And congratulations.
Speaker 1:Two time author now. Fantastic.
Speaker 3:Incredible.
Speaker 1:We're going the opposite direction. We're going straight into the AI SDR scandal that's going on. We're going back, revisiting 11x. You might have heard about it earlier this week. With 11x, there was a TechCrunch article that said that maybe the ARR figures were inflated.
Speaker 1:Maybe they were using logos from customers who had churned, and they hadn't been removed from the Yeah. From the website quickly enough. We have the founder of Rox on, who is here to tell us all about SDRs, AI SDR technology, agents, and a whole
Speaker 2:lot more.
Speaker 1:But I'll let him introduce himself. Welcome to the Temple of Technology. Great to have you here. How are you doing today?
Speaker 8:I'm good. I'm good. Thanks for having me on, John. Jordy, nice to meet you. Sam connected us, so excited to join your Ramp sponsor segment.
Speaker 1:Yes. This is the Ramp sponsor segment. I mean Sam. Seriously, like, I went to Ramp, and I was like, okay. There's a whole bunch of noise in this whole industry, but there's probably something that's working.
Speaker 1:Like, just tell me what you guys use. And they were like, well, we use Rox, and it's great. And so they introduced us, and I'm glad to have you here. And I wanna learn more about the product, genuinely, because it's not something that we're using yet. But in the future, we imagine that, you know, millions of people are getting phone calls and emails from us every single day, encouraging them to listen to the show, encouraging them to, you know, download and rate us five stars on Apple Podcasts.
Speaker 1:We want a really intense, million-strong AI sales force, and we're hoping you can help us
Speaker 2:with that. Wanna swarm.
Speaker 1:We wanna swarm. You you wanna swarm.
Speaker 8:The board. Amazing.
Speaker 1:But, anyway, what what do you actually do? Yeah.
Speaker 8:Absolutely. So we're still pretty early. Most of it is word of mouth as of yet. There's a big launch coming in a couple of months, and we'll actually have a sales and marketing team. So we built the first enterprise-ready agentic CRM.
Speaker 4:Mhmm.
Speaker 8:So it's a new generation of software where the CRM works for you.
Speaker 1:Mhmm.
Speaker 8:So Rox is the system like, at Ramp, it gets installed. Mhmm. It unifies all your customer data in one place, keeps it in Ramp's environment
Speaker 1:Yep.
Speaker 8:And then it feeds a swarm of agents. And our agents are designed to supercharge the Max Freemans of the world. Like, how do you supercharge the highest-compensated frontline winners or killers in go-to-market?
Speaker 7:I love it.
Speaker 8:And the way we do that is we double or triple their productivity by having these agents basically do a lot of the back-office work. And that's kind of our vision: the winning companies of tomorrow, like Ramp, are gonna have supercharged builders with Cursor and Cognition, and supercharged sellers with, hopefully, Rox. So that's what we do.
Speaker 1:I love it. I have a ton of follow-up questions. I mean, one thing that we were kicking around was this idea that maybe it's too early to have an agent at the front line actually writing copy and hitting send, and maybe we're more in the centaur era, where a human with an AI is more powerful than either a human or an AI alone. Is that how you're thinking about it right now, even if in the long term you're going full the AI will actually send
Speaker 2:the emails? And kind of going off of that specifically, I think everyone can see why back back office work is getting sort of automated.
Speaker 1:Perfect for LLMs.
Speaker 2:Cursor can have an amazing NPS even if it has, like, a huge error rate. Yet with front-office work, when you're interacting with customers, everybody's experienced this at some point: either the individual themselves or an employee of the individual sort of messes something up with a customer. And it's this frustrating experience, because it's actually lost revenue, versus, like, a lost five minutes correcting a bug. So, yeah, the bigger challenge is probably the front-office work, where the error rates just have to be much lower.
Speaker 8:Yep. Absolutely. Lulu kind of gave you the best advice, like, we're building batsuits, not butlers. Mhmm. And the core idea comes from I've carried a bag all my life, and I ran go-to-market for a public company before.
Speaker 8:We're not at a point where you can completely go FSD for your largest customers.
Speaker 1:Yeah.
Speaker 8:The core idea is, how do you supercharge Sam or Max or Eric with AI, where they are the ones orchestrating or QBing the system, where they review and send. Because the cost of really messing up in the short term, it feels like a bad email, but you degrade trust in the brand, in the person. And that's what we focus on: how do you build an agentic system that you could use today? It's not vaporware. It's things you can use today. But how do we supercharge the frontline bag carriers?
Speaker 8:I think over time, like, there's gonna be dramatic seat compression, or efficiencies to be had, where the people who support those bag carriers, which is kinda 80% of the employee base, will have to evolve to thrive or risk basically not being relevant.
Speaker 1:Can you talk about the innovator's dilemma, sustaining innovation in agentic systems versus disruptive innovation? Logan Bartlett on his show said that, you know, if you go back to mobile, the Salesforce of mobile was just Salesforce, and his thesis was that maybe the Salesforce of AI is just Salesforce. And they're obviously doing stuff in AI. But at the same time, we're seeing Apple drop the ball on product development. Google's dropping the ball on product development.
Speaker 1:It feels like there's more fertile ground than ever. So how have you reasoned through that, and why are you taking this bet even though there's some people that are saying, hey. It's only a matter of time till the big guys get it together.
Speaker 8:A hundred percent. Like, Benioff's the GOAT. Like, he's the GOAT of all GOATs.
Speaker 2:That's good.
Speaker 1:Let's hear it for Benioff, folks. You're a legend.
Speaker 8:Yeah. Like, I grew up not in the Bay Area. Like, you read up on them, and they're the big inspiration. So if you look at ServiceNow and Salesforce, they have a massive advantage because they have distribution, but also
Speaker 1:Totally.
Speaker 8:Data. Yeah. Our firm belief is that the alpha is in working with the intelligence providers and earning the right to be the custodian for enterprise data, and bringing in public data. So our position, what we're arbitraging, is what somebody who holds a bag at a public company knows: traditional systems have lost usage. The users don't wanna use them.
Speaker 8:They use new tools, but the old systems have also lost data. Data is now in the warehouse. So 40% of the data in a data warehouse like Snowflake is actually go-to-market data.
Speaker 1:Interesting.
Speaker 8:When we think about building these new systems, a $3,000,000,000 incumbent has a massive advantage. But what we're focused on is how can we, a, earn the eyeballs of the Sams and Max Freemans of the world, but do it in a way where we're doing a rug pull: we build a system of record which indexes the data that's not already in the system of record. Right? So Rox gets installed, indexes everything in the warehouse, and you can connect all the other sources. And I think the winning companies will be the folks who bundle data and end users in these platforms, and hopefully earn the right to be the next default platforms.
Speaker 2:Talk about the pressure that you feel, and that maybe other companies feel, specifically in San Francisco. You see these companies coming out. You see these revenue charts. Right? Billboards on the 101.
Speaker 2:And seriously, in 2021, I remember investors would say, you know, sort of best-in-class companies are getting to 1,000,000 in ARR in nine months. Yep. And now it's basically like, okay, best-in-class companies are getting to 10 mil in three months. Yep.
Speaker 2:So the sort of potentially toxic thing about this pressure is that it's making some people feel like, well, if I wanna win as a company, we need to put up these ridiculous numbers. And anybody that's built a company knows that revenue ramp is not necessarily tied to customer satisfaction or product quality or anything like that. So I'm curious: do you think that pressure to grow extremely quickly is forcing people to make short-term decisions, or riskier decisions, or in some cases even misstate facts?
Speaker 8:Yes. I think it's ultimately the founder's psyche. Like, I've been around the block, but I have to say, two things this time around make me the most insecure. Like, all of us are very insecure. Right?
Speaker 8:My wife's listening in. Obviously, we're all insecure monkeys in some ways. So, two things. One is this new generation talking about these crazy revenue curves. The way I internalize it is, ultimately, the secret to building enduring, high-quality revenue is to make customers happy, and the right customers happy.
Speaker 8:So we focus on strategic enterprise customers, like the Siemens of the world, and customers like Redis and MongoDB and Ramp. How do we land there? How do we earn the right to expand and be essential in these businesses, which will generate enduring revenue over time as long as we're helping them secure and grow their own revenue? So that's how I internalize that. I think the game is gonna be about quality, about enduring revenue, and there are businesses like Wiz and Datadog to be built now.
Speaker 8:It's just that Datadog and Wiz, although they were rocket ships, always focused on essential solutions for really, really large businesses, and that's what we're focused on. The second part of the psyche, which I think is not spoken about enough, is the fact that end consumer expectations are on this invisible asymptote where they're expecting singularity. Like, consumers, because they've been sold singularity and are using Perplexity and ChatGPT, expect all work to be done by software. So that is actually the one that keeps me up at night. Like, how do you deliver agentic experiences, which are inherently probabilistic, that meet these insane consumer expectations?
Speaker 8:And that's where I think the winning products are gonna come from. So the revenue stuff really, really hits, but I process it out. The consumers and their expectations, that's the stuff that keeps me up twenty-four seven.
Speaker 2:Internally, are you thinking with the team, like, how do we achieve this sort of ChatGPT moment for the enterprise? Right? Like, you could imagine there is some place you guys could hit in terms of product quality Yeah. that would be so magical that you would get that.
Speaker 1:But Cognition kind of had that when they launched Devin, and it was this viral sensation
Speaker 2:Yeah.
Speaker 1:Even though it was an enterprise agent for coding. But it's a great question.
Speaker 2:Yeah. Like, do you think that's... maybe it's the wrong way to look at it, because Yeah. Maybe. businesses will realize, hey, we have something magic here.
Speaker 2:We're not gonna talk about it. We're just gonna, like, keep it, you know, anti-memetic. You know, we just
Speaker 8:Right. Yeah. You're you're spot on. So we're in the boring space of ERP and CRM, the most essential, but also the most lucrative software market. Like, we've chosen the path to be the daily driver for all customer facing knowledge workers.
Speaker 8:So we focus on building something that lands and expands. Land with the Maxes and the Sams of the world, but grow to 200-plus active users at Ramp. That's the core motion. I think the agentic applications that win are the ones everybody's using day in, day out. And so we optimize for internal, post-land virality.
Speaker 8:So there are no seats. You land Rox, everybody uses it. And that's at least my thesis: become the daily driver.
Speaker 1:Can you talk about the actual instantiation of the product? Obviously, it sounds like you're plugging into the data warehouse, the Snowflakes of the world. It sounds like you're maybe not plugging into the Salesforce installations. Is there a Rox app? Like, how does a salesperson actually interact with Rox day to day?
Speaker 8:Absolutely. We're a pure, Ramp-like, kinda cracked engineering shop. We build product. We're pragmatic product builders, and we work with the platforms. So we have a web app, an iOS app.
Speaker 1:Yep.
Speaker 8:The guy who built the Ramp and Robinhood apps is here. We have a Slack app. We have an email app. So we wanna be the front application, where it is a swarm of agents working for you. It's powered by our own warehouse-native CRM.
Speaker 8:It's running in your warehouse, and it's a two way sync to Salesforce or HubSpot. Right? So that's kind of our core idea.
Speaker 7:Got it.
Speaker 8:In large, sophisticated organizations, we're already winning, like the Mongos and these folks. Like, we would definitely integrate our API into their internal tools. For most of the rest, we wanna build an application that humans actually want to use, and then hopefully get them two to three x more productive this year and put them on a path to being ten times more productive.
Speaker 1:Love it. Jordy, you got anything else?
Speaker 2:That's all I got. This is great.
Speaker 5:Well, this
Speaker 1:is fantastic. We'll have to have you back.
Speaker 2:We need to get your big news.
Speaker 1:I hear there's big news coming. I want him back when he announces it. I wanna hear it first.
Speaker 8:Absolutely. Well, one short story. Sorry, I'm a sports nerd. I grew up in India getting up in the morning to watch SportsCenter and see, like, Jordan on the Wizards.
Speaker 8:That's when I saw Jordan and all the success.
Speaker 2:Hey, we talked about this this morning: there's gonna be a one-name entrepreneur
Speaker 1:Yes.
Speaker 2:For AI B2B SaaS, and you could be that. The Kobe of B2B SaaS. We're rooting for you.
Speaker 1:Wilt. That's good. Well, thanks for coming by. This is fantastic.
Speaker 1:Alright.
Speaker 5:Thank you. Well,
Speaker 1:we got another one-name entrepreneur coming into the Temple of Technology any minute now. Augustus. You don't even need to know his last name. You might know him simply as Augustus, former
Speaker 2:Roman rainmaker. Roman emperor.
Speaker 1:Former Roman emperor.
Speaker 2:The yeah. Yeah. Accused member of the deep state.
Speaker 1:Accused member of the deep state. We're here to confront him. Put on the tinfoil hat. Break it down. I promised him we wouldn't go too crazy with the jokes, because we know the Internet is watching and this will be clipped.
Speaker 2:I won't go too crazy, John.
Speaker 1:Augustus, I really appreciate you taking a shot and taking a chance on this crazy show.
Speaker 2:He has no fear. He faces his vocal opponents daily.
Speaker 1:Yeah. He really is good at duking it out on the timeline. Never lets them get to the last quote tweet. He's having fun with it too, but it must be a high stress situation because the numbers are getting crazy. We got Augustus here to break it down for us.
Speaker 2:There he is.
Speaker 1:Welcome to the temple of technology, Augustus.
Speaker 2:Wow. I feel like your hair grew an inch. You must be drinking a lot of, like, milk or whatever to stimulate that hair growth.
Speaker 6:I'm drinking Justin's bone broth, bro.
Speaker 1:There we go.
Speaker 2:There we go.
Speaker 1:Kettle & Fire, shout-out.
Speaker 2:Shout-out to you. Former Brother of the Week. Former brother of the
Speaker 1:week, Justin Mares. Can you set the table for us, for those who haven't been following the drama? What's going on? How'd you wind up in this situation? Is it frustrating?
Speaker 1:Is it hilarious? A little bit of both?
Speaker 2:And it's great. It's the same effect as the sort of historical attempted hit pieces, where your attackers are like, he is the most powerful man in Silicon Valley. He is controlling the weather.
Speaker 1:Yeah. The hit pieces have gone direct now. Like, you're getting the puff pieces in the mainstream media, and then the decentralized media, the citizen journalists, are coming for you. So break it down.
Speaker 6:Yeah. Yeah. No, dude. My favorite so far has been the guy that was like, this is clearly a CIA psyop to make weather modification cool. He's backed by Peter Thiel and the same clique of VCs as all these things.
Speaker 6:The state of the union is that 29 states this year have proposed legislation to ban, carte blanche, all weather modification and atmospheric engineering. And these bills are essentially coming from a place of people being concerned about chemtrails
Speaker 4:Mhmm.
Speaker 6:And people being concerned about solar radiation management. Chemtrails is the suspicion that what you see as contrails, the long streaks in the sky, is actually the government spraying poison.
Speaker 1:Mhmm.
Speaker 6:Probably not. I haven't seen evidence of that yet, but open to being convinced. And then solar radiation management is an attempt to dim the amount of sun that reaches the Earth to cool the planet down. So it's this climate intervention. And then cloud seeding kinda just gets lumped in.
Speaker 6:Cloud seeding is what Rainmaker does, and it's nothing like the other two things that people are describing. But because nobody knows the difference, they're trying to ban it, because they think that people are either getting poisoned or that we're some sort of Agenda 2040 anti-human globalist initiative. Mhmm. And so, surreal as it's been, I'm both in the trenches on Twitter and in many state capitals throughout the union. A lot of them I haven't testified at, because I realized that actually drums up a lot more craziness than maybe is good.
Speaker 6:But we're in this public knife fight with the government of Florida in Tallahassee. You know that iconic tweet where Trump says, I just wanna stop the world from killing itself? Mhmm. Like, I just wanna bring people water that need it. And I think that if you ban cloud seeding, you're basically banning, like, rocketry or fission. And so that's what we're trying to stop right now.
Speaker 2:Can you talk about water scarcity? I think it's hard for people to process, because they turn on the tap or they turn on the shower, and water just comes out, and it's not that expensive.
Speaker 1:Yeah. Water comes
Speaker 2:from the faucet.
Speaker 1:Gasoline comes from
Speaker 2:the faucet. We're here in Los Angeles, which is, you know, one of the most arid parts of the United States.
Speaker 1:It's only possible because of some corrupt bargain that happened to, like, reroute the river, like, fifty years ago. Right?
Speaker 6:Yeah. Yeah. Well, there's the Owens Valley situation. LA County basically had to send out agents posing as private buyers to buy up all the land in the Owens River Valley and then redirect the water.
Speaker 6:And then there's a couple water barons that, like, you can't even say the name of, really, who disappeared in California. But the California state water supply strategy, from the Department of Natural Resources, says that half a million acres of farmland in California have to turn into desert by 2030 in order to maintain water supply for the cities. Phoenix, Arizona, is banning new housing development because there's not enough water. In Salt Lake City, people are getting respiratory problems because the Great Salt Lake is drying up and there's not enough water. And then, you know, the wildfires that go on all over the country, that's because there's not enough water on the ground.
Speaker 6:And in the case of Florida, a lot of people have told me, like, you know what? Listen. I'm convinced that cloud seeding is useful and beneficial for a state like California that is in a drought and that is a desert. But Florida, we get plenty of rain. Like, why do we need to bother modifying the weather here?
Speaker 6:Well, 14,000,000 acres of farmland in Florida is currently in a drought, and 30,000 acres of Miami-Dade County just burnt to the ground because there wasn't enough water, because there was a drought. So if Mayor Suarez is listening, please save us. We just wanna bring people water, and drought is much more ubiquitous and consequential than people realize. Like, if you wanna make America healthy again, we have to grow food domestically. And if we don't have enough water to do that, then it's gonna be some kooky GMO, like, Chinese stuff that gets shipped over the ocean, or, like, Soylent Green product.
Speaker 1:Yeah. No. I hear you.
Speaker 2:Deep cut.
Speaker 6:Deep cut. Yeah. Yeah. That's that's where we're at right now with respect to drought.
Speaker 1:No. No. Yeah. A lot of distinctions.
Speaker 2:Is there anybody besides you fighting to preserve the ability of private and public groups to do cloud seeding? Like, are you the last man standing, or do you have a team?
Speaker 6:Well, I've got a great team at Rainmaker, and I'm really grateful for that. And we've got a bunch of farmers that are beneficiaries of our program going to bat for us, because they previously had to tear up their pistachio orchards because they didn't have enough water for them, and now they're better able to farm because of the water we're producing. But, you know, that famous Stanford speech where Thiel says you should go after a really small market, because if there's lots of people involved, it's already too late? Well, I'm not, like, the last man standing. I'm kinda like the first man standing.
Speaker 6:So other than the farmers that we're helping and the other great team members I have at Rainmaker running around the country, it's Augustus Doricko versus a couple states. Thirty states.
Speaker 2:Yeah. And how do you even balance all of this? Simultaneously, you're a company, you generate revenue, you have expenses, you're planning timelines, and you're also potentially fighting legal battles, and you're also dealing with technical risk in the business. Right? Like, what you're doing is very hard.
Speaker 2:It can be done, but that's also a challenge. As an entrepreneur, how do you balance all those things? And it feels to me like the legal risk is the most asymmetric risk. You can always create a new prototype, take another flight, etcetera.
Speaker 2:But these sort of blanket bans: is that taking up 60% of your time now, just making sure that you're gonna be able to do this in five years?
Speaker 6:Well, not quite. I mean, it's definitely a large double-digit percentage. The interesting thing with respect to the political dynamic, though, is this: it's not really a right-left issue so much as an East-West issue, because the West doesn't have water. So Democrats and Republicans alike in the Western half of the US are like, oh, this is sick.
Speaker 6:I would love to have water. Then the people that have as much water as they have conventionally needed and aren't as subject to drought, they're the ones more wary of it. So I'm not worried about Rainmaker domestically at a state level just because there's a lot of states that we haven't gotten into yet, like Nevada or Arizona or New Mexico. I do think that the domino theory thing holds true to some extent. So, yes, we do have to focus on it.
Speaker 6:The super funny, crazy outcome is that if Rainmaker were totally banned from the US, then we just double down on Riyadh, and we camp out there for five years and come back weathered and, you know, wearing desert garb.
Speaker 1:My Iraqis. Lawrence of Arabia mode.
Speaker 2:My Iraqis. My
Speaker 1:Dune. My Dune.
Speaker 4:Dude, they're
Speaker 1:gonna send you to Dune.
Speaker 6:I watched I watched Dune two in theaters five times, bro. A religious zealot comes to turn the desert planet green.
Speaker 1:I love it. That was great.
Speaker 6:Yeah. So, no, I trust my team a lot. I have really good
Speaker 1:Can you talk a little bit about just, like, the general fear around cloud seeding? I think that right now, with health and MAHA, there's just this general idea that anything new is risky.
Speaker 2:Yeah. The Luddites are having a heyday.
Speaker 1:Yeah. Bull market in Luddites.
Speaker 2:For sure.
Speaker 1:But, I mean, at the same time, like, you know, you eat too much salt, your doctor will tell you that's gonna hurt your heart. And there's a million different things where even if it's safe at a certain level, you take it up by a hundred x and it gets bad. And there's all these balancing acts. And then also, we do the mice studies, but does that really transfer? And, you know, what happens when someone's, like, vaping pink liquid for fifty years?
Speaker 1:Like, we don't really know, and we're kinda figuring it out right now. What gives you confidence about the low impact of this?
Speaker 6:Mhmm. So, you know, I really sympathize with people that are worried about this, because on its face it sounds crazy to be modifying the weather and dispersing chemicals. But, to your point about vaping stuff for fifty years and not having longitudinal data on the health outcomes: even though Rainmaker is very innovative, even though we're implementing a lot of new things from academia from the last five years, cloud seeding is 80 years old, dude. It was invented in the United States in 1945. GE has the first patent, from 1946.
Speaker 6:People have done eighty-, thirty-, or fifty-year studies, depending on the watershed, on the concentration of silver iodide that ends up in the water and the soil. And after decades of operation, you only see parts per trillion of this stuff. Like, you need super sophisticated instrumentation to even detect it in the first place. And so there have been negligible to no health impacts found for people, agriculture, or the environment. And those studies, they've been done.
Speaker 6:And, like, should Rainmaker replicate those? Should we continue to do it just to prove it out further? Totally agree. But, like, it's totally safe. And if you think about the LD50, like, the lethal dose, of silver iodide, it's less than salt.
Speaker 6:It's less than table salt. It's milligrams per kilogram with salt, and then it's two thousand eight hundred for silver iodide. So this is, like, a resoundingly safe material to be using, and the data from decades shows that.
Speaker 1:Can you talk about a little of the history? I feel like it's easy to look at the chemtrails thing and be like, okay, the most aggressive conspiracy theory there is that it's, like, literally mind control. And it's like, we're so divided in America. I don't know. The mind-control drugs clearly aren't working, because what are they mind-controlling us to do?
Speaker 1:But there is a legitimate criticism that when you fly a bunch of planes around, those contrails are emitting pollution, and your skies get dirtier. And I remember after 9/11, when all the planes were grounded, it was the clearest day in the skies ever. And there is some sort of, like, low-key harm, and there's always these knock-on effects. I'm sure with solar radiation management, same thing.
Speaker 1:We have been experiencing global warming, but we've also been experiencing global dimming, and those have been kind of counteracting each other in some way. Can you just talk about some of the history of these things, and how you think about the second- and third-order effects of cloud seeding?
Speaker 6:Yeah. Yeah. Totally. So, with respect to weather modification and climate engineering, like, the the thing that I think everybody needs to realize is that we've been doing it unintentionally for hundreds of years. Yeah.
Speaker 6:Like, build a city, then you have a heat island that affects cloud formation and precipitation patterns. Sure. If you have a coal plant or a steel plant or a nat gas plant, the steam and the aerosols you emit reliably create more clouds and precipitation tens of miles downwind as well. The emissions from our cars, right?
Speaker 6:Like, those have pollutants in them, apart from just, like, CO2 considerations. So we do modify the weather unintentionally all the time.
Speaker 1:Yeah.
Speaker 6:Rainmaker's thesis is that, like, we should be modifying it intentionally to either, like, unfuck the earth as it stands or make it more lush and abundant once we've done the former. When it comes to knock on effects, the common question is, like, you know, if you're making it rain more here, is that reducing precipitation downwind there?
Speaker 1:Yep.
Speaker 6:Totally reasonable question. Like, I think on its face, that that logic makes sense. Unfortunately, the system's pretty complicated. It turns out that only 9% of all of the water that traverses the atmosphere in The United States precipitates over it. The vast majority is either recycled by the oceans, precipitates over the oceans, evaporates away, and never condenses again over The US.
Speaker 6:So if you just increase the utilized water in the atmosphere, then it's purely positive-sum, and with the appropriate radar and probes you can select, to some extent, for clouds that aren't naturally going to precipitate. The evidence of downwind drought is totally uncompelling. Nobody's provided reliable data there. But are people concerned about it? Are they rightfully concerned about it?
Speaker 6:Sure. Totally. And then the last thing I'll say with respect to contrails: there's this funny thing where, if you fly a bunch of planes early in the morning and they develop contrails, you'll actually cool that area of the planet locally to some extent, because the contrails reflect sunlight before the light of day warms the planet up. And if you fly a bunch of planes at sunset, that will retain the heat and kind of act like an insulating layer and keep the earth warmer. So just plane flights, you know, American Airlines, Delta, whoever, they are doing weather modification unintentionally, apart from any chemtrail stuff.
Speaker 6:Sure. I'm just hoping to bring a little bit more sanity to the ways in which we're modifying the weather.
Speaker 1:Earlier this week, there was a video that went viral of a very cinematic desert in China that has been reforested. Do you think you'd have an easier time in the public eye if you were just planting trees, or is any modification of our terrain a hot-button issue regardless? And do they have to do cloud seeding in addition to that, or are they just using traditional irrigation? I wanted to talk to you about that Chinese video that went viral, because it was so striking. For me, it was like, I want that here, for sure. But I don't know how other people took it.
Speaker 6:Yeah. It looks like you get, like, a thatched roof in the middle of the Gobi.
Speaker 1:Totally. Yeah. It felt like this idyllic, like, little preserve. You put a ranch house on that. The golden retriever's running outside.
Speaker 1:You're taking the horse down to the local saloon. Golden retriever mode.
Speaker 1:And it just seems like it's the West. Right? Again, you have new land that people can go live in, and that's just a land of opportunity. And I think that'd be so cool.
Speaker 1:But, yeah, what's your take on it?
Speaker 6:Dude, I mean, that's what the Central Valley was. The Central Valley used to be deserts and swamps, and now it is the most productive agricultural region on the planet. It produces, like, 30% of all of the fruits, and the majority of, like, the red fruits, in the United States. We used to terraform all the time. Like, the Hoover Dam was an attempt to terraform the West. Yeah.
Speaker 6:I think that, like, maybe it's a last man problem. Maybe it's just, like, general... well, no, I won't say that. But, like, we've just lost the desire to be great.
Speaker 6:I think in the United States, we've lost the desire to, like, build the future and see the future through. If I was just planting trees, yeah, for sure, it would be easier. You need the water, though. So unless you're setting up enormous conveyance systems, which China largely is, and cloud seeding in addition to that. Mhmm.
Speaker 6:It's a nonstarter. There are a couple technologies, you know, I mentioned last time, like soil amendments that can get soil to retain more water. That would be good. But, you know, trees emit dust that creates, like, a majority of our clouds in some regions and induces precipitation. Cloud seeding mimics that natural process.
Speaker 6:It's just with a material that works a little bit better. So, yeah, man. Like, I plan to plant trees in the future. I plan to make as many things green as possible. I hope that people get on board with the vision to make deserts green and great, but it's gonna take a minute.
Speaker 2:I had a funny experience at Hereticon. There was a guy talking about terraforming. I'm blanking on his name, but he put out this like amazing
Speaker 1:Tomas Pueyo.
Speaker 2:Tomas Pueyo made this amazing presentation. It was, like, forty minutes long, showing all the different places around the world that we could terraform and how amazing it would be. And one of the places that he highlighted was the Salton Sea, and how you could theoretically divert the Colorado River.
Speaker 4:I've been
Speaker 1:there with Augustus. We went out there. And Ben.
Speaker 2:And then I went up to him afterwards, and I was like, this is an amazing project. I would love to make the Salton Sea like Dubai.
Speaker 1:Immediately the question comes up: how do we make money off this?
Speaker 2:No, but I asked him, how do we actually do it? I was thinking the way to do it is through a hyper-commercial project: all this land is worthless right now because it's toxic, surrounding this sort of evaporating sea. If somebody were to come in and buy up all the land around the Salton Sea that's now very depressed, and then start lobbying to actually make this stuff happen... I asked him how this gets done, and he was like, I have no idea. Somebody should try it, but I don't know how they would actually get it done. Is it really just one person caring that makes these kinds of projects possible, right?
Speaker 2:Like in the case right now, there's a good chance that weather modification, cloud seeding, etcetera, would just get banned if you weren't flying to Florida frequently.
Speaker 1:Great theory of history.
Speaker 2:Great man theory of history. Yeah, and it feels like, in this sort of DOGE era, if we do enter a period of deregulation in some areas, some of these projects, you know, might be possible.
Speaker 2:But, yeah, how do you think about the Salton Sea opportunity, and then just, like, whether we're even capable of doing projects at that scale anymore?
Speaker 6:So a related thing, interestingly enough, is that I think there should be way more alternative financing that startups employ. I promise this is related. I'm gonna wait until Q1 or Q2 of '26 to do this, but we're just gonna stand up a land fund, and we're just gonna start buying up the land that currently isn't arable, primarily in Arizona and California. Because the same acre of land that's worth, like, $6 in either of those places has the same soil quality as Central Valley land that would sell for, like, 70 to 500,000. We'll set up a subsidiary that we sell some equity in to get LPs involved, so that we can terraform that land and then flip it, or keep it as an agricultural asset that is only made viable because of the water that we bring.
Speaker 6:It has to be a commercial interest. Maybe there's this idyllic previous America that existed, I'm not sure, where people did things just for the public benefit, or because it was cool. Now, yes, it has to be cool, but moreover, there has to be commercial interest. And, yeah, I'm going to do exactly that.
Speaker 6:And I hope that somebody beats me to it, for the sake of all the benefit that'll come from it. But if they don't, I'll make a lot more money because of it.
Speaker 1:This is hilarious, because you're gonna be duking it out when you're buying that land. Casey Handmer, we had him on the show earlier, wants to put solar panels all over it and turn it into a parking lot. And you're gonna be like, no, I'll pay a dollar more per square foot.
Speaker 2:We can have the auction. We'll have the we'll do the auction here.
Speaker 1:Yeah. Yeah. Auction. Hey. $5.
Speaker 1:5 dollars. I have $6. I $6. I I do have a question about the the cloud sitting stuff, it feels like I mean, all of this stuff is somewhat zero sum. Like, we're not really creating new water.
Speaker 1:We're kind of moving it around. But desalination seems awesome. I was looking at it pretty intensely a year ago, and it seemed really difficult, honestly. But desalination, like, what is your take there? And it just feels like we have you know?
Speaker 1:I would expect a Gundo company to be doing this. Like, we have people working on nuclear now. We have people working on solar. You're working on cloud seeding. Like, desalination, if I heard, like, oh, yeah.
Speaker 1:There's some hot guy, hot startup who's working on, on desalination, I wouldn't be like, oh, this is breaking my mind right now. I'd be like, of course. That's the next thing that the guys will go after in the El Segundo because, like, they've kinda checked the box on everything else. What's the state of desalination, and what's your take?
Speaker 6:Yeah. Desal is great. I'm not anti desal. I don't think that we step on each other's toes. It is a modification of the water cycle.
Speaker 6:Right? Like, whereas they are taking saltwater out of the ocean and making it fresh and usable, we're taking cloud water out of the sky and bringing it down to make it fresh and usable. It's relatively efficient right now. I think there's a lot of promise in catalytic desalination. I'm not enough of an electrical engineer to come up with something sufficiently innovative in the space, but I think that probably shows more promise than RO, reverse osmosis.
Speaker 6:It's relatively efficient as it is. The problem with desalination for gigascale American projects, for a lot of the world that is currently undersupplied with respect to water, is conveyance. We can set up a huge desal project in LA. Sweet. No dice for Colorado.
Speaker 6:Doesn't matter for Nevada. Barely even matters for the Central Valley, because you have to set up hundreds of miles of huge pipes and then maintain those pipes for as long as the project exists. And it's doable. Like, the California State Water Project is evidence of conveyance working, but it is crazy infrastructure and crazy eminent domain that goes into shipping that water. So should California be more reliant on it? Totally.
Speaker 6:Could we, like, with tariffs or something, strong-arm Mexico into giving us, like, the Sea of Cortez to desalinate for Arizona? Maybe. But if you're in the interior of the United States, the only way that you can make more water is if you bring it down from the sky, and that's who we're primarily trying to serve.
Speaker 1:This was Blake Masters's point that I really enjoyed. He said, you know, how do we get water into the interior states, Nevada, Arizona, and New Mexico? Well, we're gonna reroute all the water from California into those states, and then the future of water in California is nuclear-powered desalination. And he said that, like, ten years ago. Haven't heard anyone try and build it.
Speaker 1:Maybe he should have stuck with venture capital and backed that company, because I want it to happen. But maybe it'll happen soon. That'd be great.
Speaker 5:Okay. We're we're Or
Speaker 4:Yeah.
Speaker 6:Not desal, not cloud seeding: boiling the ocean. Nuclear boilers on the surface of the ocean to make big enough clouds to fly inland.
Speaker 1:Oh, interesting. Okay. So you just
Speaker 2:make the
Speaker 1:clouds and then float them inland. Man-made atmospheric rivers, right?
Speaker 2:Many people have wanted to boil the ocean but not literally. So we need more literal ocean boilers.
Speaker 1:For sure.
Speaker 2:I have a more personal question. How are you doing with all this? I care about, you know, I'm lucky to be an investor in Rainmaker, but I care about you, you know, even more on a personal level. I think your prefrontal cortex is like, you know, about to maybe fully develop. Right?
Speaker 2:But, like, you're under a lot of pressure for being, like, you know, in your mid twenties. Like, you know, anybody who's had a post go viral or sort of break containment knows what happens. Like, your message requests on every platform are probably, like, you know, very dark. And, you know, you seem like you're handling it incredibly well, but does it ever get to you?
Speaker 6:That's really that's really caring. Thanks for asking, man. You know, like, I I view Rainmaker as the best means by which I can serve God in my lifetime. And I owe everything in my life to Jesus Christ because of all that he's done for me in this life and then hopefully in the next. So this this situation that I'm going through, no big deal at all.
Speaker 6:Like, if it ends up on the other side with, you know, ideally, me having gotten to educate and convince some people and bring them more water in places where they need it, I am happy to take the punches and the death threats in the interim. And even if it doesn't work out, you know, I'm gonna give it my all. And I was a high school debate kid, so I live for this kind of dorky stuff.
Speaker 1:That's weird.
Speaker 2:Do not mess with the high school debate kids. They will go to war with you on the timeline.
Speaker 1:Yeah. I mean, we kinda predicted that, like, the rise was so quick, the mood would shift. But I didn't expect that this would be the instantiation of that mood shift around the vibe of the company. Right?
Speaker 2:Yeah. I just have to say, like, there's so many people rooting for you, right? Yeah. Like, your critics call you an industry plant, but in many ways, you know, you are a product of our industry, and there's just a lot of people, you know, rooting for your success. So proud of you for not letting them
Speaker 1:Never back down.
Speaker 2:Never back down.
Speaker 1:Always claim victory.
Speaker 2:Always claim victory. Don't let the haters get to you. And I would say, you know, like, have a good rest of your Friday, but I'm sure today is basically a Monday for the Rainmaker team. Job's not finished.
Speaker 1:Yeah. I'm so sorry that the squat rack had to go. How are you getting fitness in these days then?
Speaker 6:That was tragic, dude. I am walking to and from the office because I live four blocks away in Glendale. And otherwise, I'm going hard on skinny priest mode. I'm big on skinny priest.
Speaker 1:Skinny priest mode. You've been the muscular warrior. I look forward to when you exit this and become a portly merchant. No, it's gonna happen.
Speaker 1:You're gonna IPO the company, sell it all, and become a private equity guy, roll up the rest of the farmland. Yeah. The portly merchant is the future for sure.
Speaker 2:Now, I'm excited to have you back on when you're doing some of the land stuff, because that does feel like, you know, verticalizing and capturing the full value that you guys can create over time. Like, very obviously, you should own the land that you're raining water down on and, you know, gain the full economic benefit of the work. It's fantastic having you on.
Speaker 1:Yeah. This is great.
Speaker 2:As always.
Speaker 6:Thanks, guys.
Speaker 2:Our weatherman. Our weatherman. We needed a weatherman. We got one.
Speaker 6:We're forecasting for TPPN.
Speaker 1:Yeah. Yeah. You should.
Speaker 2:You can't, though. Forecasting would just be too easy for you, because, like, yeah, there's gonna be rain, like, in that square mile over there.
Speaker 1:Yeah. Next time you're on, we need the Zoom background, like, a green screen, a fake Zoom background with the map of LA, and you can tell us where it's gonna rain this weekend. I'll have you on Friday. Hang out.
Speaker 4:It's gonna
Speaker 1:be fun. Anyways, great hanging. We'll talk to
Speaker 8:you soon.
Speaker 1:That's great. And so, next up, we have Mike Knoop. He cofounded Zapier, and he also founded ARC Prize, which we've talked about on the show before. I highly recommend everyone go and check it out and try the puzzles. So ARC Prize is an AGI eval, an evaluation that is designed to be hard for AI but trivial for human beings.
Speaker 1:And so they look like puzzles: there's a grid of squares that are different colors, and your goal is to recreate or understand the pattern. It gives you a whole bunch of reference examples, and then you are tasked with creating the solution to the puzzle. And it's been very, very difficult for AI. AI has struggled with this even though it looks like something that should be able to be one-shot.
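The puzzle format described above is public: each ARC task ships as a small JSON object with "train" input/output grid pairs and held-out "test" inputs, where grids are lists of integer rows with values 0 to 9 standing for colors. Here is a minimal sketch of that structure with a toy solver; the mirror task and the function name are invented for illustration, not a real ARC task.

```python
# A minimal sketch of the public ARC task format (JSON with "train"/"test"
# pairs of integer grids, values 0-9) and a toy candidate solver.
# The rule in this invented example is "mirror the grid left-right".

task = {
    "train": [
        {"input": [[1, 2], [3, 4]], "output": [[2, 1], [4, 3]]},
        {"input": [[5, 0, 7]], "output": [[7, 0, 5]]},
    ],
    "test": [
        {"input": [[9, 8, 0], [1, 0, 2]]},
    ],
}

def mirror_lr(grid):
    """Candidate program: reverse each row (left-right mirror)."""
    return [row[::-1] for row in grid]

# A candidate rule is only plausible if it reproduces every training
# output exactly...
assert all(mirror_lr(pair["input"]) == pair["output"] for pair in task["train"])

# ...and is then applied to the held-out test input.
prediction = mirror_lr(task["test"][0]["input"])
print(prediction)  # [[0, 8, 9], [2, 0, 1]]
```

The point of the benchmark is that the rule differs per task and is never seen in training data, so a solver has to infer it from two or three examples, which is what makes the tasks trivial for people and hard for models.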
Speaker 1:And Mike is in the studio now. Here we go. Mike. How are you doing?
Speaker 5:Hi, guys. Thanks for having me. I'm doing good. How's your Friday going?
Speaker 1:Oh, it's great. How's yours?
Speaker 5:Quite excellent. Spent a lot of time with the research team.
Speaker 1:Oh, awesome. Can you give us just a brief introduction, your background, and then how you wound up working on ARC Prize and kind of what you're excited to announce most recently?
Speaker 5:Yeah. So I'm, I guess, probably best known for cofounding Zapier about fifteen years ago, an automation company. This is what I've been working on the last thirteen-plus years. And, you know, call me AI curious. I went sort of all in on AI in early January 2022 when the chain-of-thought paper first came
Speaker 1:out. Mhmm.
Speaker 5:All these, like, LLM reasoning benchmarks were sort of, like, spiking up in performance, and they got me really curious. Like, are we on path for AGI or not? And this ended up leading to Zapier deploying, I think, a lot of AI products really early into the market. Mhmm. And there was this trend I had seen and heard.
Speaker 5:I spent, you know, probably hundreds of hours talking with customers around deploying AI agents. Zapier's been deploying them for a couple years now. And the feedback was always consistent. Like, hey, I get the hype, but they're just not reliable enough yet.
Speaker 5:You know? They don't work two out of 10 times, and that just doesn't work for these unsupervised automation products. And this was sort of in contrast to all this, like, AGI hype, right, that I see on Twitter all day long.
Speaker 1:Yep.
Speaker 5:Same thing. I'm sure you guys saw, you know, in the 2023, 2024 era. And so I set out to figure out, how do I explain this? Because, like, I have two sets of lived experience that sort of don't match. And this is how I rediscovered François Chollet's ARC benchmark that was originally published in 2019.
Speaker 5:And I sort of expected it to have gotten beaten by that point, you know, first looking into it, again, probably 2023, 2024. Surprised that it basically hadn't. And not only hadn't it been beaten, there had basically been no progress, which I thought was really fascinating given the fact that we had, like, scaled up these language model systems by almost, like, 50,000 times over the, you know, three, four years prior. And so that was kind of the genesis of me meeting François and kind of pitching him on this idea of, like, hey, I think this is literally the most important unbeaten AI benchmark in the world.
Speaker 5:I think it makes a really important statement that, like, pre-training scaling alone is not sufficient to get to AGI, and more people should know about this fact. And so we launched ARC Prize together to try to sort of raise awareness of the benchmark and, just honestly, inspire more people to work on new ideas again towards AGI.
Speaker 1:And can you talk a little bit about the prize money, and then what happened with ARC Prize 1 and what's happening now with the second iteration?
Speaker 5:Yeah. So when we launched last year, you know, like I said, there'd been very little progress. I think GPT-4o was scoring 4% on the benchmark. Yeah. That's four years in, five years in.
Speaker 1:And, again, like, an eight-year-old should be able to do these. Yeah.
Speaker 5:Especially ARC-AGI-1. Yeah. It turned out to be, in retrospect, quite simple. It's a very binary sort of fluid intelligence test.
Speaker 1:Yeah.
Speaker 5:And, yeah, like, you know, all the knowledge you need to solve this benchmark you acquire very early on in childhood development. You can take the tasks yourself, in fact. If you go to arcprize.org, we have a big play button. You know, I think what's special about ARC is that the sort of design concept of it targets something that's quite straightforward and easy and simple for humans, but hard for AI. And this is in contrast to every other AI benchmark that exists today, where you're trying to, like, challenge this, like, PhD-plus-plus frontier, you know, like Frontier Math or, you know, Humanity's Last Exam.
Speaker 5:You know, these are sort of AI benchmarks that you really do need to be, like, PhD level plus to be able to even solve as a human. And in contrast, ARC makes a very different claim: hey, there's still a lot of, like, very straightforward human capability that these frontier AI systems don't have. And that's why we launched ARC Prize. During the contest, I think one of the things ARC Prize 2024 will be noted for is probably the introduction of this, like, test-time adaptation method.
Speaker 5:That was some of the papers that came out from the contest. We had a big million-dollar prize in order to beat the benchmark, to get to 85%. No one did. You also have to do it with a very high degree of efficiency. And we kind of expected that was gonna be the case.
Speaker 5:We also had this paper track where people could submit new ideas and push the conceptual frontier forward, which is where most of the coolest progress came from last year. And then at the end of the year, obviously, last fall, we started seeing things like o1 and o3. And this is a very big update, I think, because systems like o1, particularly o1 Pro, and things like o3 are not just purely scaled-up language models. These are not language models anymore. These are really AI systems.
Speaker 5:They have a big model as a component of the system, but they have some sort of synthesis engine on top that allows them to, like, reuse their existing pre-training knowledge at test time, to recombine the knowledge. And that allows them to make significant progress towards ARC, and we saw it with o3 in December, you know, at a pretty high degree of efficiency, solve ARC-AGI-1 at 75%. So not quite at 85%. And then, you know, OpenAI tested an even higher compute version of it that was probably a couple million dollars to test. They got 87%, I think.
Speaker 5:It's crazy.
Speaker 3:And
Speaker 5:So that was kinda where we wrapped up last year. We've been hard at work over the couple years, actually.
Speaker 1:Quickly, while you were doing that fantastic table setting, I went to arcprize.org/play. I did the daily puzzle.
Speaker 1:I still got it. I could still get a job. You can do it at home if you want: arcprize.org/play.
Speaker 2:Do you think, with all the billions of dollars flowing around the ecosystem, do you think ARC Prize is capital constrained in any way? Like, would we have seen, you know, better results to date? Or is it this sort of social status of just, like, you know, achieving
Speaker 1:Kind of like scaling. Are we compute bound here, or is it algorithm bound?
Speaker 5:Right now, I think ARC-AGI-2, which we just introduced this week on Monday Mhmm. Is pretty direct evidence that we still need new ideas Mhmm. To get to AGI. We shared I don't know if we can throw it up, or we can put it in show notes or something. Sure.
Speaker 5:There's a chart I shared on Twitter that shows the scaling performance of even frontier AI systems like o1 Pro and o3 against version one of the dataset that we've been using for the last five years, and then version two that we just introduced this week. And, basically, it resets the baseline back to 0% for language models. Pure LLM systems are scoring, like, 0% now on ARC-AGI-2. Single-CoT systems like R1 and o1 score, like, 1%. And the sort of estimates we have right now for the frontier systems that are actually adding pretty sophisticated, you know, synthesis engines on top, like o1 Pro and o3: single-digit percentages on their efficient compute settings.
Speaker 5:So I think there's a very interesting point here. Like, you know, we kinda moved from this regime where people were claiming, oh, we're just gonna scale up language models. Right? More data, more parameters, we're gonna
Speaker 4:get to
Speaker 5:AGI, and people realized that's not quite the story now. And then there's a new story that's emerged over the last, like, five months, which is, oh, we're gonna scale up this test-time compute, and that's gonna get us to AGI. And I think what v2 shows is that that's not quite it either. We still need some structural ideas in order to, like, get to a regime where we can scale up to actually reach human performance on this stuff.
Speaker 2:Do you think, with how much human capital is sort of concentrated in San Francisco, like, do you think that sort of geographic concentration of so many of the brightest minds in AI is, like, potentially even holding back new, random ideas? And that's why ARC can potentially if we need new ideas, like, maybe they don't come from the sort of center of the AI universe. Maybe they come from some random kid on the internet who just, like, you know, has the time to think completely independently and Yeah.
Speaker 1:This feels kinda scrolls adjacent.
Speaker 2:Yeah. Exactly.
Speaker 1:You put this idea out, a bunch of people can take wild swings at it, and you get just new idea generation.
Speaker 5:I gotta say thank you to Nat Friedman. You talked about the Vesuvius Challenge. Of course. Literally, I'm very inspired by Sure. What they were able to accomplish, and that was one of the motivations for actually running the prize in the way we did.
Speaker 5:I think, yes, part of the goal of ARC Prize was specifically to try and reach independent researchers and teams and, like, give them a problem to work on that they could actually make meaningful frontier progress on. Right? Especially in academia in the last maybe five years, there's been this disheartening belief of, like, hey, can we really advance the frontier? Because, like, you have to have, you know, billions of dollars in funding to scale up these language models.
Speaker 5:And I think ARC shows that, yeah, there actually are frontier problems that are unsolved that individual people and individual teams can actually make a difference on today. You know, part of our whole goal of launching ARC Prize was to reinspire a lot of independent researchers to work on things like this and bring new ideas to the fold.
Speaker 1:I remember when ChatGPT got a computer and it was able to write some Python code and then execute it. Are there restrictions on kind of custom systems that are LLM or reasoning-model driven but fine-tuned to help with the execution of ARC puzzles? Is that breaking the rules, or is that something that is actually encouraged and fine? I guess it all boils down to, like, overfitting on this problem, but it seems like even with the incentive to overfit, it hasn't really happened yet.
Speaker 5:Yeah. This is one of the reasons ARC v1 lasted five years: it had very good original design built into it, with a public test set and a private test set.
Speaker 1:Sure.
Speaker 8:And that
Speaker 5:private test set really prevented folks from being able to sort of overfit on it. Mhmm. And there's actually two tracks for the ARC Prize Foundation. We have the contest track. This is on Kaggle.
Speaker 5:This is the, like, big grand prize. Yep. All the prize money is attached to this. Kaggle graciously donates a lot of compute to allow us to host there. And the grand prize is basically to get to that 85% within Kaggle's efficiency limits.
Speaker 5:Mhmm. So you get about $50 worth of compute per submission that you send in. And this is, like, a pretty high bar for efficiency. And in fact, this is not an arbitrary bar. We do think that efficiency is actually a really important aspect of intelligence.
Speaker 5:Totally. You know, you can brute force your way up to intelligence, but we really do wanna be shooting for, like, human targets on efficiency for this stuff. Yep. That's the contest. When we launched last year, there was a lot of demand from the community on, like, hey.
Speaker 5:Okay, I get that, like, I can't run my big language model on this, you know, in Kaggle, but I really wanna know how the frontier AI systems do. And so we launched a second track that's hosted on arcprize.org, where we benchmark and test all the, like, frontier commercial systems to showcase what they're, you know, capable of doing. Basically, these are existence proofs that I think will eventually, you know, filter down to the higher-efficiency solutions and open source solutions that can be run on something like Kaggle. And even there, like, within the sort of same efficiency Yeah.
Speaker 5:You know, accuracy specs, we still haven't seen any frontier AI system beat ARC-AGI-2, or even ARC-AGI-1. Yeah. So there's still a long way to go.
Speaker 1:What does that track actually look like? Does that mean that, like, the team from OpenAI is writing the specific prompt? Are they doing prompt engineering, or is your team just taking the API and giving everyone kind of the same goal? Is there any standardization there?
Speaker 5:There is. So ARC Prize does most of the testing today ourselves. Like, we basically wait for API access, either get early access or, you know, wait for the public access. And we have a GitHub repo where we just have a standard, very simple prompt that we use to baseline across all the sort of frontier systems that we test. The OpenAI situation was a little different because they had reached out to us and said, hey.
Speaker 5:We think we've got a really impressive result on the public eval set, and we'd like your help to verify it on a semi-private set, which is what we created that dataset to be able to do.
Speaker 1:Mhmm.
Speaker 5:And in that case, there was really, you know, very little prompt engineering at all. It was mostly just a verification of what we'd seen from them. So, yeah, in all these cases, the amount of prompting that goes into these is extremely minimal. Basically, just giving it the sort of grid with numbers and asking it to solve it directly.
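The minimal baselining Mike describes, just giving the model the grid with numbers, can be sketched as below. This is a hypothetical illustration of what such a harness might look like; the prompt wording and function names are assumptions, not ARC Prize's actual repository code.

```python
# Hypothetical sketch of a minimal ARC baselining prompt: serialize the
# grids as plain rows of digits, with no prompt engineering beyond task
# framing. Exact wording here is an assumption for illustration.

def grid_to_text(grid):
    """Render a grid (list of int rows) as space-separated digit rows."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def build_prompt(train_pairs, test_input):
    """Assemble one plain-text prompt from training pairs and a test input."""
    parts = ["Find the rule mapping each input grid to its output grid."]
    for i, pair in enumerate(train_pairs, 1):
        parts.append(f"Example {i} input:\n{grid_to_text(pair['input'])}")
        parts.append(f"Example {i} output:\n{grid_to_text(pair['output'])}")
    parts.append(f"Test input:\n{grid_to_text(test_input)}")
    parts.append("Test output:")
    return "\n\n".join(parts)

prompt = build_prompt(
    [{"input": [[1, 2]], "output": [[2, 1]]}],
    [[3, 4]],
)
print(prompt)
```

The resulting string would then be sent unchanged to each frontier model's API, which is what makes the cross-system comparison apples to apples.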
Speaker 1:Very cool.
Speaker 2:Can you talk about consumer AI agents? We've talked about them on this show. Like, it feels like, you know, once a week someone comes out and says they're building the consumer AI agent that's gonna, like, book you a hotel or book you a flight, or do these things that sound really simple. And then, like, nobody's delivered, to our knowledge, a truly magic agentic consumer experience to book a flight. You know, you could use, like, OpenAI's Operator to do this. But even when you're sort of watching Operator work, it doesn't feel truly magic yet. And just, you know, given, in many ways, like, Zapier is just, like, such a foundational company when it comes to automating anything on Mhmm.
Speaker 2:The internet. I remember that was magic to me as a kid. Well, as a kid, I'd say, like, in college, right? It's like, wait, data came in here, and then it just went and did this other thing?
Speaker 5:Yeah, feels like a superpower, right? Like, oh, totally.
Speaker 2:And so in many ways, it's, like, criminally underrated in terms of getting to that sort of, like, you know, agentic internet. But, like, why haven't people been able to I even saw, like, Perplexity saying last week, like, we're building, you know, consumer agents that are gonna, like, book your flight. And Apple's promised this, and everybody's promised it, but we haven't seen that magic experience yet. And I'm curious why you think that is.
Speaker 5:We've been trying to deploy agents for two years at Zapier. And this was kind of what led me to originally getting interested in ARC. You know, it's this reliability problem. Language models are fundamentally stochastic in terms of how they work. And so you're never gonna get that 100% accuracy directly from the model without adding a lot of extra guardrails or post-processing on top.
Speaker 5:We've done some of that for Zapier. The way that I kinda think about it, you've got these concentric rings of use cases that kinda get unlocked as reliability and consistency go up. And today, and this is true at the frontier, we're pretty much still in the kind of concentric ring of, like, personal productivity and team-based automation, where the risk of failure isn't very high. Right?
Speaker 5:Like, if you're gonna start deploying one of these, like, completely unsupervised agents into the wild, you're gonna have a pretty high bar, right, if there's a sort of high degree of risk if this thing goes wrong. You know, loss of customer trust or brand trust or something like that. So that's what we've been kind of seeing: companies and users sort of slowly expanding those concentric rings as the technology is improving. I actually think we're gonna start to see agents start to work this year, specifically because of progress on ARC.
Speaker 5:I think this is an underappreciated fact. Like, people always ask, like, hey, doesn't ARC look like puzzles? Like, what's the economic utility of this stuff? One of the most important things that ARC tests for is your ability to adapt to problems you haven't seen before.
Speaker 5:That's really the spirit and the essence of what it is. It's saying, hey, can you solve problems you didn't just memorize? Can you actually use knowledge you did memorize to, like, solve new things that you've never seen before? This is fundamentally the same thing we're testing for in terms of generalization and reliability with agents. So I think because we're starting to see AI reasoning systems that are able to make progress against ARC-AGI-1, albeit relatively inefficiently, the capability sort of now exists.
Speaker 5:There are AI systems out there that exhibit some degree of fluid intelligence. That's gonna start increasing their reliability, and more and more of those concentric rings are gonna get unlocked, I think, starting this year.
Speaker 1:I wanna talk about just, like, AI, AGI, ASI metrics. I feel like ARC Prize is a fantastically important benchmark. I agree with you on that. Ray Kurzweil has been benchmarking against just, like, flops to human flops, basically. He's put the singularity at 2045.
Speaker 1:He kinda nailed the Turing test date, I believe. But I've been kicking around this idea that maybe the real test of, like, AI hitting some sort of tipping point is just, like, how much economic value is being created by AI. And once that hits some sort of, like, 10% of total global GDP, or 50%, then we've reached the AI singularity or something. Are there any metrics that you're looking at for this kind of intelligence explosion, ASI, AGI, outside of an individual moment or, you know, application use case benchmark? Something more global and, like, human relevant?
Speaker 5:You know, pragmatically, I think the definition that I use for AGI these days is basically: when we stop being able to easily find tasks that humans can do that AI can't do Mhmm. We've got AGI. Mhmm. And, you know, I think that's, like I said, a bit of a pragmatic answer. One of the things that surprised me about ARC-AGI-2 was the relative degree of ease with which we were able to create it.
Speaker 5:Oh, yeah. Given this, like, big moment in December of, like, o3 beating ARC-AGI-1, you know, it was actually not too hard for us to come up with tasks that we were able to sort of verify. We actually did a controlled human study to make sure every task in the v2 dataset was solvable by at least two humans under the same rule set. Mhmm. And we were able to find those relatively easily.
Speaker 5:So I think that shows we still have a ways to go. I think once we hit that bar, though, when the gap between easy for humans and hard for AI is zero, I think it'd be extremely difficult for anyone to, like, argue the opposite side, that we don't have AGI. I don't find the economic measurements quite as useful for understanding the capabilities of frontier AI. I think it's a good measure of impact, and I have no problem using it for that. But I think if you wanna use, like, an economic measure to make a statement about capabilities, the challenge is that it obscures them, and there's multiple ways to get to that outcome.
Speaker 5:And this is one of the challenges with, like, language models. Language models generally work in, like, a memorization-style regime where they're learning from lots of data. They're able to apply that to very similar types of patterns that they've seen before, but not novel patterns. That's what ARC-AGI shows. And, like, those have economic utility.
Speaker 5:It turns out that, like, intelligence that is just memorizing actually does interesting things for us. Zapier is making money with the current regime of AI today. Mhmm. And so I think if you really wanna, like, understand capabilities of frontier AI systems to make predictions based on those capabilities, you need more precise, narrow, like, wedges on that capability. And that's really what ARC-AGI tries to be: this wedge on, you know, do these AI systems have the ability to actually adapt to problems that they haven't seen before and acquire skills, you know, rapidly and efficiently to apply to new things that haven't been done before.
Speaker 2:A little more fun question. What was your reaction to this week's sort of Ghibli moment? We talked about this on the show earlier. The funny thing for me is, like, we had a couple of posts each that sort of, like, broke containment out of, like, traditional, like, tpot or whatever.
Speaker 2:Yeah. And you know, were there was a lot of people quoting
Speaker 1:Like normies.
Speaker 2:People quoting our posts being like, all right, the jig is up. Like, what filter app did you use for this? So, like, there's still people out there that are just, like, completely unaware of, like, progress. But, like, for those that sort of use these tools every day, it felt like this magic moment where people are just one-shotting these sort of image generation models and getting these incredible outputs. And, yeah, it just felt
Speaker 5:And one thing we're excited about. Like, I guess my question to you guys is, you know, do you think that the degree by which we saw, like, the, you know, Ghibli-ization of, like, Twitter and X over this week, do you think that was the degree of baseline interest in that, like, series and movie prior to this week? Or do you think, for most of the people that were posting that stuff, this was the first time they've ever really gotten exposed to that kind of content?
Speaker 1:Yeah. I kind of ran a test on this inadvertently, because I posted both a scene from Oppenheimer, Ghiblified. It got, like, 30,000 likes. And then I posted an actual picture from Spirited Away, a real still from the actual movie: 24 likes. And so there's
Speaker 4:a little
Speaker 5:bit of, I love the fact you're running a test. That's fantastic.
Speaker 1:Yeah. And so there's a little bit of, like, okay. It's not just the filter. It's this recontextualization. I thought a lot of it was, like, there's a reason we're using Studio Ghibli specifically, because there are other art styles that you've been able to recreate with style transfer for a long time.
Speaker 1:But Yep. Studio Ghibli fits this, like, perfectly. It doesn't quite go into the uncanny valley. And then if you say, hey, I want you to recreate a picture of me, but use a Hollywood VFX pipeline to recreate the 3D model and then do exactly what they do in the Avengers and take it to the top level, it'll just look like a photo, because VFX has now become photoreal.
Speaker 1:And so you you can't go full photoreal or else it just doesn't look like a filter at all. And if you go to stick figure, it's unimpressive. And so you need this, like, perfect art style that looks clearly different but still recognizable. And the level of alteration that happens in Studio Ghibli's animation style is that perfect intersection where it feels like impossible to just create with a vanilla style transfer or, like, edge detection and just pixel manipulation. You're not just recoloring.
Speaker 1:It's not a filter, but at the same time, it's not just, oh, so it's so photoreal that it's just like, oh, yeah. You took a picture of me like this and then it looks like this and it's like, that's impressive, but it just looks like you took the photo like this. I don't know. That that was my take. Yeah.
Speaker 1:Joey, what what
Speaker 5:I guess, like, you know, my own answer to that is, like, I think it significantly increased the awareness of, like, this art style, this media. Totally. I don't know by how much. But, clearly, like, people are doing it who've never even, like, seen Spirited Away or heard of the director or, like, anything like that. And so I do think there's a really important, you know, thing around the ease of use of the tools, you know, actually getting people, like, I think, exposed and empowered to actually, like, do this stuff on a very mass scale and very rapidly.
Speaker 5:I think we saw this week that ease of use actually really matters for mass adoption on this stuff. Yeah. Yeah. So I'm very excited to see it.
Speaker 1:What about, like, the human creativity? Like, we had basic, like, AI image generation for a while, and then we got the Harry Potter Balenciaga video. And that was clearly, like, AI generated, but the idea to combine Harry Potter and Balenciaga, which are these two very orthogonal concepts, like, that creates the humor, and that came clearly from a human. And so I'm excited to see, like, where this goes, to where I will see a Ghibli or Ghibli-inspired video and be, like, oh, wow.
Speaker 1:This is, like, actually entertaining to me, not just a tech demo. Jordy, what was your
Speaker 2:So I'm curious to get your take. It was a bad week to be running an image generation model, because unless your users can prompt and get that quality of output that consistently. And there's just sort of, like, a distribution effect too, where, like, you know, I'm sure, you know, even just, like, the user base increased, you know, dramatically just off of this one sort of meme cycle. Like, if you were running one of these sort of image generation models, where would you go? And do you think that some of these players should be thinking in a more weird way, like, kind of trying to generate these new ideas versus making, like, sort of derivative foundation model efforts?
Speaker 5:There's an interesting tie-in to things like ARC that we were talking about before and trying to assess capabilities and pushing new ideas forward. Right? Like, this is steering the conversation a little differently from how you sort of framed it there. But, like, you know, one of the, you know, risks of, like, a sort of frontier of AI that is sort of dogmatic in its view of scaling up is that it's going to produce systems that end users are using that all look the same and have the exact same capability. And, you know, ARC shows us, like, hey.
Speaker 5:There are still frontiers that are unsolved. There are interesting things. Here's a benchmark that you can kinda go measure against and actually, you know, direct progress and inspire folks towards, that potentially could have, like, some sort of large step function change in the capability of what these systems are actually able to do. You want, like, people, I think, pushing and exploring that frontier, and, you know, measures and benchmarks, I think, are actually good ways to, you know, help direct and guide research attention there.
Speaker 5:And I think I don't know the full details on how the new image generation from 4o works, but, like, at a high level, it does seem quite structurally different from the diffusion approach that even DALL-E 3 and a lot of other, you know, image generators are using. They're doing some sort of, like, tokenization Yep. like, decoding system. You know, they, like, clearly have tried something new that didn't exist before this week that's allowing them to get these, like, frontier results. I think we wanna see basically that mindset continue to get pushed across, like, the entire field, both on, you know, media, obviously, but also on the sort of reasoning and AI side.
Speaker 5:I think it's gonna unlock a ton of use cases that people have sort of written off AI for today. They're like, oh, okay. I think I know what AI can do and can't do. And, you know, now I'm comfortable in my sort of, you know, what my job or, you know, the things I'm gonna ask it to do. And, like, I think there's still a lot that we want it to do that it can't yet.
Speaker 5:And things like ARC are hopefully useful measures towards those tasks.
Speaker 2:Yeah. My reaction was just generally, like, if you have a hundred million dollars in the bank and you were trying to do what OpenAI is now doing, like, remarkably better than you are doing, then, like, maybe you need to focus on, like, you know, true innovation and trying to create these sort of, like, novel approaches to do said thing. Otherwise, maybe different
Speaker 1:Or maybe some unique distribution or something.
Speaker 2:We talked about the enterprise.
Speaker 4:Like, last maybe
Speaker 5:I had this hot take, like, a year ago. It's like, I think anyone who's, like, just doing model training at this point is, like, lighting money on fire. If you really wanna make a unique difference, especially if you're a small startup, like, a founder, you gotta go take an orthogonal approach. You gotta try something different than what everyone else is doing. That's the only way you're gonna be able to potentially, like, I think, capture attention and provide a lot of new value to the world.
Speaker 2:Yeah. Wait. Talk I know we're probably over time, but, like, there's no capital constraints in early stage AI right now. Right? And, like, the Zapier like, I think one of the number one reasons that I would, like, see you guys in headlines is, like, another story of, like, oh, Zapier raised x, you know, single digit millions or whatever.
Speaker 5:You wanna hear the funny quick quip on that? It was about a million bucks back in 2012. We never spent the money. By the time we actually got the round closed, figured out who we wanted to hire, got the payroll started, like, revenue had caught up. And so, literally, I think you could trace every dollar we raised all the
Speaker 2:way through. Yeah. And so there's sort of, like, it's potentially a problem right now that people aren't forced to be hyper creative, because they just are, like, oh, well, we have $30,000,000. Like, we might as well, like, we're supposed to spend it.
Speaker 1:Mhmm.
Speaker 2:And, like, spending money to be innovative is different than, like, you know, like, being innovative because you are constrained on the capital side.
Speaker 1:Yeah. I have a follow-up here. How should we be thinking about Gary Marcus these days? He wrote "Deep Learning Alone Isn't Getting Us to Human-Like AI." He's been an advocate for symbol manipulation.
Speaker 1:He's been kind of on the outs in inner AI circles. Rich Sutton wrote the, you know, scale-is-all-you-need piece, "The Bitter Lesson," of course. But we're in this weird scenario where, like, we've done the scaling. We're still bullish on scaling. We are gonna build bigger data centers and do bigger training runs, but then we might also need new algorithms.
Speaker 1:So he's, like, kind of right. How do you interpret, maybe not just his legacy, but just the the puzzle of how scaling fits into all of this.
Speaker 5:I generally think he's been more right than wrong. I think if you, like, just take a limited five-year view on this, from 2020 up until the end of 2024, you know, I think he was generally right. Like, he was making the right arguments. You know, I think one of the reasons I personally find things like ARC more useful is they provide a direct way to do sense finding on this stuff. I don't have to, like, rely on my trust of another individual or another human in order to make, like, statements or build my confidence or intuition or my personal model of the world based on, like, trusting someone else's analysis. I can just look at reality.
Speaker 5:ARC is a contact-with-reality mode. We can have AI systems, I could go try to do this stuff, have humans try to do it, and measure the difference and actually look. And, like, I've always found that going straight to the truth is, like, a much faster way to get to the frontier of knowledge. If you really wanna know what's true or not, you kinda just have to look for things like that that measure it, as opposed to, like, you know, relying on proxies like other folks' analysis.
Speaker 5:But, yeah, I would say Marcus has generally been more right than wrong. And for what it's worth, I think, actually, the bitter lesson is somewhat, like, I think it often gets misinterpreted. You know, it makes a statement about search and learning as these general-purpose methods for scaling. But Sutton also makes the key point in the paper that, hey, like, the thing that we are actually applying search and learning on top of is an architecture that was invented by a human in the first place.
Speaker 5:Mhmm. The core idea of the thing that we are scaling originally came from a person, from a human. And that's still true today. And I think that is very inspiring even in the current regime we find ourselves in here in March 2025, where, yeah, I think we actually do need some idea changes. I think we need some structural changes in terms of how the architecture, how the algorithms work here in order to, you know, certainly beat something like ARC-AGI-2 with a high degree of efficiency.
Speaker 5:And, yeah, there's gonna be a scaling component to it. But, like, don't miss that, like, ah, yes. There's actually an idea component too that often gets, like, kinda brushed over.
Speaker 2:Final question. Who do you want to highlight that you feel like is doing important research in AI, maybe underhyped and not super online, but, you know, whether they're attacking ARC or just generally doing work? Because there's a lot of people that love, you know, the sort of hyper commercial approaches to AI, you know, going and working at the labs, but it's totally valid if you just wanna, like, do research somewhere and just, like, focus on that.
Speaker 5:Probably a few individuals that I think are sort of doing really interesting work in and around program synthesis, which is sort of a parallel AI paradigm to deep learning. Actually, I don't think either is sufficient. I think some merger of the two is what's necessary to get to AGI. That's a story for another day. But there's quite a few people that are sort of working this alternate paradigm that are doing some interesting work.
Speaker 5:On Twitter now, this person's extremely online, but Victor Taelin's a really good follow on X and Twitter. He's been working on this sort of alternative, like, system called HVM that does this, like, crazy, like, enumeration of program synthesis really quickly. There's a couple academics that I respect a lot. Josh at MIT and Melanie Mitchell at the Santa Fe Institute are two folks who've been really deep in the ARC world and the program synthesis world for many years and have sort of cultivated and steered some of the community over the last five years towards working towards new ideas. So I really do respect and appreciate those folks for that.
Speaker 1:I have one last question, then we'll let you go. Would you recommend against learning to code? And do you think it's possible to build an ARC prize for programmers, a task that a novice or reasonable programmer could do that no AI could solve?
Speaker 5:Yes. In fact, let me answer the second one first. ARC is that challenge itself. So Yeah. One thing that is somewhat maybe confusing is, like, we present the puzzles very visually for humans to take.
Speaker 5:Right? They look like grids. You, like, draw on the pixels. Mhmm. When these challenges are presented to computers, it's not an actual image processing challenge at all.
Speaker 5:It's just a two-d list of numbers. It's a matrix of numbers that represents the input grid. Each number, zero to nine, represents a color, and then you get an output matrix for the output. And your job as a programmer is to write a program that maps the input grid to the output grid. And that is absolutely a challenge that every programmer today could do, because they'd be able to look at this, you know, look at the pattern and say, okay.
Speaker 5:I'll just write the program that transforms this because I and figure out the rules because I'm, a smart intelligent human. I have, like, the capability of adapting to problems that I haven't seen before. And this is actually a, like, program synthesis challenge. You literally are asking a human to write a program to solve it, and that's the same thing we're asking the AI to do as well as to produce a program that can solve it. And things like o three are kinda unique because they're, like, language programs.
Speaker 5:Right? They have this chain of thought. You know, you can think of a chain of thought as, you know, a program in English that transforms the input to the output. But, yeah, literally, ARC is the second thing that you asked for there. On the first question, should you learn to code?
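[Editor's note: to make the task format he's describing concrete, here's a toy Python sketch. The grids really are just 2D lists of integers 0 to 9, as he says; the specific mini-task and the "mirror each row" rule below are invented for illustration and are not drawn from the actual ARC-AGI dataset.]

```python
# A toy ARC-style task. Each grid is a 2D list of ints 0-9, where each
# number represents a color. Real tasks give a few train input/output
# pairs and withhold the test output. This example task (invented for
# illustration) is "reflect the grid left-to-right."
train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 5, 0], [0, 4, 0]], [[0, 5, 5], [0, 4, 0]]),
]
test_input = [[7, 0, 0], [0, 8, 9]]

# The solver's job, human or AI, is to produce a program that maps any
# input grid to its output grid.
def candidate_program(grid):
    """Hypothesized rule: mirror each row horizontally."""
    return [list(reversed(row)) for row in grid]

# Verify the hypothesized rule against every training pair before
# trusting it on the held-out test input.
assert all(candidate_program(x) == y for x, y in train_pairs)
print(candidate_program(test_input))  # [[0, 0, 7], [9, 8, 0]]
```

The point of the sketch is that nothing here is image processing: the whole task is discovering, from a couple of examples, which transformation program to write.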
Speaker 5:I, like, I haven't, like, I guess, thought super deeply about this. My hot take is, like, I guess, yes, you should still learn to code. Like, primarily because it gives you, like, leverage over technology today. And, yeah, like, I don't see that leverage over technology going away anytime soon, particularly if you wanna work on, like, the frontier of this stuff.
Speaker 4:Mhmm.
Speaker 5:And so, you know, I sort of think the arc of humanity's history has generally been to, like, produce tools that give individual humans more leverage over the universe around us. And I still think code is that thing today. You know, I think AI will probably eclipse it, but, like, I don't think code is gonna go away in that future.
Speaker 1:Got it. Well, thanks so much for joining. This was a fantastic conversation. We'll have you back.
Speaker 1:I know that there's gonna be more developments with ARC Prize too. I wanna stay abreast of them, so please come back when there's big news.
Speaker 5:Thanks, guys. arcprize.org if people wanna enter the contest.
Speaker 1:Yeah. Go check it out.
Speaker 5:Cheers. Have a great day.
Speaker 2:Thanks for coming on.
Speaker 1:Bye. Bye. I mean, I love ARC Prize. I think that they should take out, like, a huge billboard for it. Go to adquick.com.
Speaker 1:They should be doing out of home advertising. Especially with Adquick they could make it easy and measurable. They could say goodbye to the headaches of out of home technology.
Speaker 2:Wait, I have an idea. Yes. Let's donate them enough money just to run a billboard with Adquick.
Speaker 1:That'd be fantastic. Now we're talking. We want everyone working on ARC Prize. It's a fantastic project. It's very, very fascinating.
Speaker 1:We'll have to have François Chollet, the creator of the actual test, on the show at some point as well. And maybe we can get him to pick up a watch on Bezel, because they have over 22,500 luxury watches fully authenticated in-house by Bezel's team of experts.
Speaker 2:You know, getbezel.com. Any company running on Ramp
Speaker 1:Yeah.
Speaker 2:Will save time and money. And then they can sort of actually, you know, potentially do more distributions. And then the shareholders would actually be able to spend more money on Bezel.
Speaker 5:Oh,
Speaker 2:yeah. Which is one of the many reasons to use Ramp.
Speaker 1:That's great. And and and, I mean, if they're doing all that, like, where should they stay when they're on vacation?
Speaker 2:I would find your happy place. Find your happy place.
Speaker 1:Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and twenty-four seven concierge service, folks. It's a vacation home, but better. Do you wanna do any of these timeline posts before we get out of here? It's already past two. We've done a great job this week.
Speaker 1:I think we've streamed a lot. I'm excited to get home and just Start prepping
Speaker 4:Monday too.
Speaker 1:Sleep on my Eight Sleep because there
Speaker 2:That's right.
Speaker 1:Eight Sleep brings you nights that fuel your best days. You can turn any bed into the ultimate sleeping experience. Go to 8Sleep.com/TBPN.
Speaker 2:I love the I love the head fake. Oh, should we get into some content? Oh, exactly.
Speaker 1:Another ad, baby.
Speaker 2:Let's do it. And I mean,
Speaker 1:there's there's some other news we should cover. Crusoe raised $225,000,000 in debt to buy more NVIDIA GPUs, and they're using current NVIDIA GPUs as collateral. And people are saying this is very circular, but it makes sense. I mean, NVIDIA GPUs have, have value. And, of course, they can be priced and
Speaker 2:Lever up. Do it again 10 times. You know? Just one hand washes the other.
Speaker 1:I love it.
Speaker 2:You know?
Speaker 1:We also have a post that we should shout out from Rob Schulz over at Snagged. They're officially launching a domain sales and brokerage service, starting with a list of 50-plus premium domains like tri.com, tuscany.com, geeks.com, and
Speaker 2:bevvy.com. Tuscany.com, Rob. Yeah. No. But here's the real story. Rob got us tbpn.com, and he did it in a shockingly small amount of time.
Speaker 2:It was like
Speaker 1:A lot
Speaker 4:of people
Speaker 1:have been like, oh, TBPN, kind of a mouthful. It's a lot of letters. But I'm like, okay. Think of another 4letter.com that we could get very quickly. And we have the four letter handle on x.
Speaker 1:And and so I think that there's it's underrated to find one of these one of these short single word domains, single acronym domains, and then build your brand world around it. And I think, even though it it's been it's it's probably harder to get people to remember TBPN the first time they hear it. After they hear it about 10 times, it has the chattypty effect where you're like, oh, yeah. TBPN. It's a weird thing, but it just it stands for what it stands for.
Speaker 1:It's great. Anyway, also, Xi Jinping advocates for stable trade at Beijing business meetings. This is a bigger story I'm sure we'll be digging into later. But the Chinese president, Xi Jinping, engaged in a series of meetings with global business leaders and foreign dignitaries in Beijing highlighted by his discussions with representatives of the international business community. Xi emphasized China's continued openness to foreign investment.
Speaker 1:He's saying, let's open the floodgates. Let's get some capital in here. Let's build some tech. We'll see how it goes. We'll close
Speaker 2:on the show.
Speaker 1:With an Elon Musk post that you were alluding to earlier. Kitsi, he's he's quote tweeting Kitsi who says finishing touches, we need more compute, to cure cancer. Right? And, you can see that the image is loading a, looks like a Simpsons character. And so it's very funny we're in this weird thing.
Speaker 1:Do you
Speaker 2:think that loading effect is is, just to make it more addictive?
Speaker 1:It's not. I think it actually does relate to the change in the algorithm or technology, because we're no longer using pure diffusion, where it's all random noise and then you're denoising it iteratively at the whole-image level, which is what you see on Midjourney when they're sending you those updates. Now it's going more line by line, because it's a token-based system. So there's some diffusion that's going in, and then there's also some tokenization. And I think for OpenAI, you know, the Ghiblification kinda drowned out the technical discussion that should follow. Ben Thompson broke it down a little bit.
Speaker 1:I'm sure there will be other deep dives. And I'm sure there'll be open source versions of this. Like, every time a new you know, Manus came out, there was OpenManus. And Yeah. And people always love to dig in.
Speaker 1:Even though the papers aren't released, you know, this is not an open source technology. I'm sure we will see an open source version of this technology within six months to a year, probably two weeks if we're being honest. Because as soon as people see this tech, they wanna build it. They wanna figure it out. And so that's the story.
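[Editor's note: a purely schematic sketch of the loading-preview difference described above. This is not OpenAI's or Midjourney's actual implementation; the toy "images" here are just matrices of noise levels, invented to show why one preview looks like a blurry full frame sharpening up and the other fills in from the top.]

```python
H, W, STEPS = 4, 4, 3  # toy image size and denoising step count

# Diffusion-style: the WHOLE image exists at every step and gets
# progressively less noisy, so each preview is a full (blurry) frame.
def diffusion_preview():
    frames = []
    for step in range(1, STEPS + 1):
        noise_level = 1.0 - step / STEPS  # 0.0 means fully denoised
        frames.append([[noise_level] * W for _ in range(H)])
    return frames

# Autoregressive/token-style: patches are emitted one at a time in
# raster order, so previews fill in line by line from the top.
def autoregressive_preview():
    frames, done = [], set()
    for i in range(H * W):
        done.add(i)
        frames.append([[1.0 if r * W + c in done else None
                        for c in range(W)] for r in range(H)])
    return frames

# After the final step, diffusion has zero noise everywhere...
assert diffusion_preview()[-1] == [[0.0] * W for _ in range(H)]
# ...while halfway through, the autoregressive decoder has finished the
# top rows but hasn't touched the bottom-right patch yet.
half = autoregressive_preview()[H * W // 2 - 1]
assert half[0][0] is not None and half[-1][-1] is None
```

The contrast is the whole point: one process refines a complete image globally at every step, the other commits pixels (tokens) in order, which matches the top-down loading effect in the post.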
Speaker 1:Anyway, thanks for listening. Please go leave us five stars on Apple Podcasts and Spotify, and stay tuned. We have a bunch of stuff lined up for next week already. Gonna be doing more special days. We had a really fun time with Defense Day.
Speaker 1:Today kinda turned into AI author day. There was also some terraforming day in there. But we're very excited
Speaker 2:for it. It's great. Never know what you're gonna get.
Speaker 1:You never know what you're gonna get. So stay tuned. Follow us.
Speaker 2:We are excited for Monday. Alright.
Speaker 1:And have a great weekend, folks. We'll talk to you soon.
Speaker 2:Talk soon. Have
Speaker 1:a good one.